SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models

1Boston University, 2University of Washington, 3Allen AI, 4Microsoft Research (MSR), 5New York University
*equal advising, † joint second author

SAT is a dynamic spatial aptitude training dataset that requires reasoning about egocentric and object motions, going beyond the simple static relationships covered by existing datasets.

[Figure: SAT teaser]

Abstract

Reasoning about motion and space is a fundamental cognitive capability required by many real-world applications. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they focus only on static spatial relationships, not on dynamic awareness of motion and space, i.e., reasoning about the effect of egocentric and object motions on spatial relationships. Manually annotating such object and camera movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude training dataset comprising both static and dynamic spatial reasoning across 175K question-answer (QA) pairs and 20K scenes. Complementing this, we also construct a small (150 image-QAs) yet challenging dynamic spatial test set using real-world images. Leveraging our SAT datasets and 6 existing static spatial benchmarks, we systematically investigate what improves both static and dynamic spatial awareness. Our results reveal that simulations are surprisingly effective at imparting spatial aptitude to MLMs that transfers to real images. We show that perfect annotations in simulation are more effective than existing approaches of pseudo-annotating real images. For instance, SAT training improves a LLaVA-13B model by an average of 11% and a LLaVA-Video-7B model by an average of 8% on multiple spatial benchmarks, including our real-image dynamic test set and spatial reasoning on long videos -- even outperforming some large proprietary models. While reasoning over static relationships improves with synthetic training data, there is still considerable room for improvement on dynamic reasoning questions.

Approach

[Figure: SAT approach] We take actions in a 3D simulator and check the 3D locations of assets. We use natural language descriptions of the assets and construct QA pairs based on how the 3D structure of the scene changes with the actions taken.
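To make this concrete, below is a minimal sketch of how one dynamic QA pair could be derived from simulator state. The geometry helper, the coordinate convention (y-up, z-forward, as in common game-engine simulators), and all names are illustrative assumptions, not the actual SAT generation code.

import numpy as np

def left_or_right(cam_pos, cam_yaw_deg, obj_pos):
    """Return whether the object lies 'left' or 'right' of the camera's heading."""
    yaw = np.deg2rad(cam_yaw_deg)
    forward = np.array([np.sin(yaw), 0.0, np.cos(yaw)])  # heading in the xz-plane
    to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(cam_pos, dtype=float)
    # Sign of the vertical component of forward x to_obj distinguishes left/right.
    cross_y = forward[2] * to_obj[0] - forward[0] * to_obj[2]
    return "right" if cross_y > 0 else "left"

def make_dynamic_qa(cam_pos, cam_yaw_deg, obj_name, obj_pos, turn_deg):
    """Build one QA pair about how an egocentric rotation changes a relation."""
    before = left_or_right(cam_pos, cam_yaw_deg, obj_pos)
    after = left_or_right(cam_pos, cam_yaw_deg + turn_deg, obj_pos)
    question = (f"If the camera turns {abs(turn_deg)} degrees "
                f"{'right' if turn_deg > 0 else 'left'}, is the {obj_name} "
                f"to its left or right?")
    return {"question": question, "answer": after, "relation_before": before}

# Example: a red mug ahead and to the right flips to the left after a 90-degree turn.
qa = make_dynamic_qa(cam_pos=(0, 0, 0), cam_yaw_deg=0,
                     obj_name="red mug", obj_pos=(1.0, 0.0, 2.0), turn_deg=90)
print(qa)

Because the answer is computed directly from ground-truth 3D coordinates rather than human labels, QA pairs like this can be generated at scale with no annotation noise.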

Results


Open-source MLMs, as well as large proprietary models, struggle on our dynamic spatial reasoning tasks despite their stronger static spatial performance.


Fine-tuning on SAT improves spatial performance on existing static benchmarks.
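For illustration, here is a rough sketch of how SAT-style QA pairs might be packaged for such fine-tuning, assuming the conversation JSON format used by LLaVA's public training code; the example record and file paths are placeholders, not the released SAT data.

import json

def sat_qa_to_llava(example, idx):
    """Convert one SAT-style multiple-choice QA into a LLaVA training record."""
    options = "\n".join(f"({chr(65 + i)}) {opt}"
                        for i, opt in enumerate(example["choices"]))
    prompt = (f"<image>\n{example['question']}\n{options}\n"
              f"Answer with the option letter.")
    letter = chr(65 + example["choices"].index(example["answer"]))
    return {
        "id": f"sat_{idx}",
        "image": example["image_path"],
        "conversations": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": f"({letter})"},
        ],
    }

sat = [{"image_path": "scenes/0001.png",
        "question": "If the camera moves forward, which object does it get closer to?",
        "choices": ["red mug", "blue chair"],
        "answer": "red mug"}]

with open("sat_llava_train.json", "w") as f:
    json.dump([sat_qa_to_llava(ex, i) for i, ex in enumerate(sat)], f, indent=2)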

Fine-tuning on SAT improves performance on a video spatial benchmark, VSI-Bench (Yang et al., 2024).

BibTeX

@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models}, 
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755}, 
}