Abstract
WOLVES (Word-Object Learning via Visual Exploration in Space) integrates dynamic neural fields for vision and language to model cross-situational word learning. Activation peaks represent sustained attention to objects and words, while memory fields accumulate co-occurrence statistics. The model reproduces human looking behavior and learning curves across 12 experiments.
Methodology
Eight coupled field equations evolve under local excitation and lateral inhibition. Word inputs excite word-object binding fields, and gaze dynamics follow activation peaks. Co-occurrence counts update associative memory maps across trials, and simulations match children's preferential-looking data.
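The dynamics above can be sketched with a standard Amari-style field equation: each field site is driven by external input plus a "Mexican-hat" interaction (narrow excitation, broader inhibition) applied to the field's sigmoidal output, and a Hebbian trace records word-object co-activation. This is a minimal illustrative sketch, not WOLVES's fitted model; the parameter values, function names, and the single-field setup are assumptions for demonstration.

```python
import numpy as np

def interaction_kernel(size, c_exc=1.5, sigma_exc=3.0, c_inh=0.8, sigma_inh=10.0):
    """Mexican-hat kernel: local excitation minus broader lateral inhibition.
    (Illustrative parameter values, not the model's fitted ones.)"""
    x = np.arange(size) - size // 2
    return (c_exc * np.exp(-x**2 / (2 * sigma_exc**2))
            - c_inh * np.exp(-x**2 / (2 * sigma_inh**2)))

def step(u, stimulus, kernel, h=-5.0, tau=10.0, beta=1.0):
    """One Euler step of an Amari field: tau*du = -u + h + input + lateral."""
    f_u = 1.0 / (1.0 + np.exp(-beta * u))            # sigmoidal output
    lateral = np.convolve(f_u, kernel, mode="same")  # recurrent interaction
    return u + (-u + h + stimulus + lateral) / tau

def update_memory(mem, u_word, u_obj, lam=0.05):
    """Hebbian co-occurrence trace: strengthen cells where word and
    object peaks coincide (hypothetical, simplified update rule)."""
    active = np.outer(u_word > 0, u_obj > 0).astype(float)
    return mem + lam * active * (1.0 - mem)

# Localized input at field site 40; field settles into a self-sustained peak there.
size = 101
u = np.full(size, -5.0)                              # resting level
stim = 8.0 * np.exp(-(np.arange(size) - 40)**2 / (2 * 3.0**2))
kernel = interaction_kernel(size)
for _ in range(200):
    u = step(u, stim, kernel)

# Accumulate a co-occurrence trace (here the same field stands in for both
# the word and object fields, purely for illustration).
mem = update_memory(np.zeros((size, size)), u, u)
print(int(np.argmax(u)))   # peak sits near the stimulated site
```

Local excitation keeps the peak sharp and self-sustaining once the stimulus has driven it above threshold, while the broader inhibition suppresses competing sites — the mechanism by which peaks stand in for sustained attention in the prose above.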