Authors: Bhat A. A., Spencer J. P., & Samuelson L. K.
Venue: Proceedings of CogSci 2018
Tags: word-learning, attention, memory, neural, DFT
Link: URL
WOLVES (Word-Object Learning via Visual Exploration in Space) integrates dynamic neural fields for vision and language into a process model of cross-situational word learning. Activation peaks represent sustained attention to objects and words, while memory fields accumulate word-object co-occurrence statistics. The model reproduces human looking behavior and learning curves across 12 experiments.
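To make the peak-and-memory mechanism concrete, here is a minimal one-dimensional dynamic neural field with a slow memory trace. This is a generic DFT sketch in Python, not the WOLVES implementation; the kernel widths, time constants, and gains are illustrative assumptions.

```python
# Minimal 1-D dynamic neural field (Amari-style) with a memory trace.
# A generic DFT sketch, not the WOLVES implementation; the kernel
# widths, time constants, and gains below are illustrative assumptions.
import numpy as np

N, dt, tau = 101, 1.0, 20.0        # field size, time step, field time constant
tau_mem, h = 500.0, -5.0           # slow memory build rate, resting level
x = np.arange(N)

def gaussian_kernel(width, amp):
    """Distance-dependent interaction profile over field positions."""
    d = np.abs(x[:, None] - x[None, :])
    return amp * np.exp(-d**2 / (2 * width**2))

# Local excitation minus broader lateral inhibition ("Mexican hat").
W = gaussian_kernel(width=3.0, amp=1.0) - gaussian_kernel(width=10.0, amp=0.5)

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(N, h)                                    # field activation
mem = np.zeros(N)                                    # memory trace
stim = 6.0 * np.exp(-(x - 50)**2 / (2 * 4.0**2))     # localized input ("object")

for _ in range(500):
    f = sigmoid(u)
    # tau * du/dt = -u + h + input + lateral interaction + memory feedback
    u += dt / tau * (-u + h + stim + W @ f + mem)
    # Memory builds slowly only where the field is suprathreshold,
    # accumulating a record of where attention peaks have formed.
    active = f > 0.5
    mem[active] += dt / tau_mem * (f[active] - mem[active])

print("peak at", int(np.argmax(u)), "with activation", round(float(u.max()), 2))
```

The suprathreshold peak is the field-level correlate of sustained attention; the slow trace it leaves behind is the kind of quantity that accumulates across trials in the model's memory fields.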
Eight coupled field equations evolve under local excitation and lateral inhibition. Word input excites word-object binding fields, and gaze shifts follow activation peaks. Co-occurrence counts update associative maps across trials, and the simulations match children's preferential-looking data; the sketch below illustrates the trial-level logic.
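The trial-level statistics can be sketched as a word-by-object association matrix that grows in proportion to looking time, with gaze biased toward objects already associated with the heard word. This is an illustrative stand-in for the model's field-based associative maps; the trial structure, learning rate, and softmax gaze rule are assumptions, not the paper's equations.

```python
# Hebbian-style associative map accumulating word-object co-occurrences
# across cross-situational trials. An illustrative stand-in for the
# field-based maps in WOLVES; trial structure, learning rate, and the
# softmax gaze rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_words = n_objects = 6
assoc = np.zeros((n_words, n_objects))   # word x object association strengths
true_ref = np.arange(n_objects)          # word i truly names object i
lrate = 0.1

for trial in range(300):
    # Each trial presents two word-object pairs with ambiguous referents.
    words = rng.choice(n_words, size=2, replace=False)
    objects = true_ref[words]
    for w in words:
        # Gaze: looking time is biased toward the object with the strongest
        # existing association to the heard word (top-down), plus noise
        # standing in for bottom-up salience.
        support = assoc[w, objects] + rng.normal(0, 0.05, size=2)
        looks = np.exp(support) / np.exp(support).sum()
        # Associations accumulate in proportion to looking time.
        assoc[w, objects] += lrate * looks

# Test: proportion of looking to the correct referent (two-object display).
correct = np.mean([
    assoc[w, true_ref[w]] / (assoc[w, true_ref[w]] + assoc[w, true_ref[(w + 1) % n_words]])
    for w in range(n_words)
])
print(f"mean looking proportion to target: {correct:.2f}")
```

Because a word's target is always present when that word is heard while the foils vary across trials, the target association outgrows the competitors and looking proportions rise above chance, the same cross-situational logic behind the learning curves the model fits.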