Jim Magnuson seminar: cancelled

Date: Friday, May 19, 2023 - 11:00
End date: Friday, May 19, 2023 - 13:00
Location: Campus St Charles, salle des Voûtes

Jim Magnuson's seminar is cancelled.

Jim Magnuson
(U. Connecticut & BCBL, Basque Center on Cognition, Brain & Language)
Basque Center on Cognition, Brain and Language
Ikerbasque
University of Connecticut

Beyond transitional probabilities in models of statistical learning

Biological statistical learning (SL) – the ability of an organism to implicitly develop sensitivity to coherent covariation in its environment – is presumed to be a core basis for adaptive perception, cognition, and action. I will review two aspects of SL where potentially simple explanations have, in my opinion, been ruled out prematurely. The first is the potential for SL based on transitional probability (TP) detection to provide a robust basis for bootstrapping word learning by helping infants properly segment speech into words. While equating SL with TP detection is a flawed premise, I attempted to replicate simulations by Yang and colleagues that they claimed demonstrated that TPs could not provide a basis for bootstrapping segmentation. I find that their TP-based strategy works much better than they reported (when applied to a much larger corpus of child-directed speech), and that performance improves substantially when I simplify their TP-based algorithm. This result challenges their rejection of TPs (and SL more generally) as a basis for bootstrapping lexical segmentation. The second is the ability of simple recurrent networks (SRNs) to account for core demonstrations of human SL. I find that in some cases where SRNs were tested and appeared to fail to account for the data, they succeed with minor modifications to model or task parameters. I also find that SRNs succeed in some cases where they were not tested because researchers assumed they would fail (including one case where the PARSER model fails to simulate human performance). That assumption appears to follow from the common but incorrect belief that SRNs can only learn TPs. Given that SRNs can learn complex (multiscale) and nonadjacent contingencies in sequences – that is, patterns far more complex than TPs alone – recurrent networks more generally offer a promising explanation of many aspects of SL, including a powerful potential basis for bootstrapping word learning.
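
For readers unfamiliar with TP-based segmentation, the sketch below is a minimal, generic illustration of the idea, not the speaker's simulations or Yang and colleagues' actual algorithm: estimate TP(a → b) = count(a, b) / count(a) over syllable bigrams, then posit a word boundary wherever the TP between two syllables dips below the TPs on either side. The toy corpus, function names, and boundary rule are all hypothetical, chosen only to make the example self-contained.

```python
from collections import Counter
from typing import Dict, List, Tuple

def transitional_probabilities(utterances: List[List[str]]) -> Dict[Tuple[str, str], float]:
    """Estimate TP(a -> b) = count(a, b) / count(a) from syllabified utterances."""
    unigrams, bigrams = Counter(), Counter()
    for syllables in utterances:
        unigrams.update(syllables)
        bigrams.update(zip(syllables, syllables[1:]))
    return {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

def segment(syllables: List[str], tp: Dict[Tuple[str, str], float]) -> List[List[str]]:
    """Posit a word boundary wherever the TP between two syllables is a local minimum."""
    tps = [tp.get((a, b), 0.0) for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i in range(len(tps)):
        left = tps[i - 1] if i > 0 else float("inf")
        right = tps[i + 1] if i + 1 < len(tps) else float("inf")
        if tps[i] < left and tps[i] < right:  # TP dip -> word boundary
            words.append(current)
            current = []
        current.append(syllables[i + 1])
    words.append(current)
    return words

# Hypothetical toy "language" with three words (pabiku, tibudo, golatu)
# presented in varying orders, in the style of artificial-language SL studies.
corpus = [
    "pa bi ku ti bu do go la tu".split(),
    "ti bu do go la tu pa bi ku".split(),
    "go la tu pa bi ku ti bu do".split(),
    "pa bi ku go la tu ti bu do".split(),
]
tp = transitional_probabilities(corpus)
print(segment(corpus[0], tp))  # -> [['pa', 'bi', 'ku'], ['ti', 'bu', 'do'], ['go', 'la', 'tu']]
```

On this toy corpus the TP dips fall exactly at the word boundaries, so all three words are recovered; on real child-directed speech, performance depends heavily on corpus size and on the exact boundary rule, which is the kind of issue the talk was to examine.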