Robot Language Acquisition Modelling via Cross-Situational Learning with Little Data
Abstract
How do children bootstrap language through noisy supervision? Most prior work has focused on tracking co-occurrences between individual words and referents. We model cross-situational learning (CSL) at the sentence level with few (1000) training examples. We compare two recurrent neural network architectures often used as cognitive models, reservoir computing (RC) and LSTMs, on three datasets including complex robotic commands. Surprisingly, reservoirs demonstrate robust generalization as vocabulary size increases: the error grows slowly compared to an LSTM of fixed size. This suggests that the random projections used in RC help to bootstrap generalization quickly. Understanding how robots acquire the basics of language, as in child-caregiver (human-human) interactions, could give hints on how to link animal vocalisations to behaviour in ambiguous contexts. Cross-situational statistics between sequences of vocalisations and varying contexts could probably be learnt in a few trials by such a reservoir architecture.
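The contrast drawn here between a fully trained recurrent network (LSTM) and fixed random projections (RC) can be made concrete with a minimal echo state network sketch: only the linear readout is trained, while the input and recurrent weights stay random. This is not the paper's implementation; the dimensions, spectral radius, leak rate, and ridge penalty below are illustrative assumptions.

```python
import numpy as np

# Minimal echo state network (reservoir computing) sketch.
# Not the authors' setup: all sizes and hyperparameters are assumed values.
rng = np.random.default_rng(0)

n_in, n_res, n_out = 50, 300, 20                 # assumed dimensions
Win = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed random input projection
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9
leak = 0.3                                       # leaky-integrator rate

def run_reservoir(inputs):
    """Collect reservoir states for one input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(Win @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(sequences, targets, ridge=1e-6):
    """Train only the linear readout with ridge regression; W and Win stay fixed."""
    X = np.vstack([run_reservoir(s) for s in sequences])
    Y = np.vstack(targets)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
```

In this sketch, growing the vocabulary only changes the input dimension and the readout; the random reservoir itself needs no retraining, which is one way to picture why such architectures can generalize from little data.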