Overview
This project developed computational models of how infants learn language through patterns of visual exploration. By studying the intersection of vision and language development, we created AI systems that mirror infant learning processes and reproduce behavioral patterns reported in developmental psychology studies.
Objectives
- Model infant language acquisition through visual attention mechanisms
- Understand the role of visual exploration in early language development
- Create computational models matching developmental psychology findings
- Bridge artificial intelligence and developmental cognitive science
Methodology
We employed attention-based neural networks inspired by infant gaze patterns and combined them with language models. The approach used reinforcement learning to model exploratory behavior and cross-modal learning to link visual and linguistic representations. Eye-tracking data from developmental studies informed the design of the model architecture. A simplified sketch of the cross-modal learning loop follows.
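As a rough illustration of how attention-driven exploration and cross-modal association can interact, the sketch below pairs a softmax attention policy (with a small novelty bonus standing in for the reinforcement-learned exploratory drive) with a Hebbian-style word-object association update. Everything here, including the object features, word inventory, and learning rate, is an invented toy for illustration, not the project's actual architecture.

```python
# Toy sketch of cross-situational word-object learning with an
# attention policy. Illustrative only: all quantities are invented.
import numpy as np

rng = np.random.default_rng(0)

N_OBJECTS = 5          # distinct object types in the toy world
N_WORDS = 5            # one word per object type (mapping unknown to learner)
FEAT_DIM = 8           # dimensionality of visual object features

# Fixed random visual feature vector for each object type.
object_feats = rng.normal(size=(N_OBJECTS, FEAT_DIM))

# Learned word -> visual-feature association matrix (starts empty).
assoc = np.zeros((N_WORDS, FEAT_DIM))

def attention_weights(word, present, counts, temperature=0.5, novelty_bonus=0.1):
    """Softmax attention over the objects present in a scene.

    Familiar word-object pairs attract gaze; a small novelty bonus
    (a stand-in for an exploratory drive) favors rarely fixated objects.
    """
    scores = object_feats[present] @ assoc[word]          # learned relevance
    scores = scores + novelty_bonus / np.sqrt(1.0 + counts[present])
    scores = scores / temperature
    scores -= scores.max()                                # numerical stability
    w = np.exp(scores)
    return w / w.sum()

attend_counts = np.zeros(N_OBJECTS)

for step in range(2000):
    # A scene contains 3 of the 5 objects; the "caregiver" names one.
    present = rng.choice(N_OBJECTS, size=3, replace=False)
    target = rng.choice(present)
    word = target                                         # word i names object i

    w = attention_weights(word, present, attend_counts)
    gazed = present[rng.choice(len(present), p=w)]        # sample a fixation
    attend_counts[gazed] += 1

    # Hebbian-style cross-modal update: bind the heard word to the
    # features of whatever object was actually fixated.
    assoc[word] += 0.05 * (object_feats[gazed] - assoc[word])

# Check: does each word now best match its referent's features?
pred = (assoc @ object_feats.T).argmax(axis=1)
print("word -> object mapping:", pred)                    # ideally [0 1 2 3 4]
```

Because the named object is present in every scene while distractors vary, the correct word-object pairing accumulates the most associative weight over time, which is the basic logic of cross-situational learning.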
Results
Our models successfully replicated key findings from developmental psychology, including the vocabulary spurt phenomenon and word-object association patterns, and achieved a 95% correlation with infant gaze patterns in controlled experiments. The research provided new insights into the mechanisms underlying early language acquisition.
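For concreteness, one way such a gaze correlation could be computed is as the Pearson correlation between per-object looking-time proportions from the model and from infant eye-tracking. The sketch below uses invented placeholder numbers, not the study's data or its actual evaluation pipeline.

```python
# Hypothetical illustration of a gaze-correlation evaluation: compare
# per-object looking-time proportions from the model against infant
# eye-tracking data. Both arrays are invented placeholders.
import numpy as np

# Proportion of total looking time on each of 10 objects in a trial.
infant_looking = np.array([.21, .05, .12, .18, .03, .09, .11, .07, .06, .08])
model_looking  = np.array([.19, .06, .13, .17, .04, .10, .10, .08, .05, .08])

r = np.corrcoef(infant_looking, model_looking)[0, 1]   # Pearson correlation
print(f"model-infant gaze correlation: r = {r:.2f}")
```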
Impact
This interdisciplinary research contributes to both artificial intelligence and developmental psychology, offering new perspectives on the mechanisms of learning. Applications include more naturalistic language-learning systems and educational technologies that adapt to human developmental patterns.
Funding
- Cognitive Science Research Grant — BND 55,000 (2019-2022)
Collaborators
- Dr. Linda Smith (Indiana University)
- Dr. Chen Yu (University of Texas at Austin)
Publications
- Computational Models of Infant Language-Vision Learning — A.A. Bhat, L. Smith, C. Yu (2022)
- Visual Attention and Word Learning in Artificial Agents — A.A. Bhat, C. Yu (2021)