Authors
R Harald Baayen, Cyrus Shaoul, Jon Willits, Michael Ramscar
Publication date
2016/1/2
Journal
Language, Cognition and Neuroscience
Volume
31
Issue
1
Pages
106-128
Publisher
Routledge
Description
Current theories of auditory comprehension assume that the segmentation of speech into word forms is an essential prerequisite to understanding. We present a computational model that does not seek to learn word forms, but instead decodes the experiences discriminated by the speech input. At the heart of this model is a discrimination learning network trained on full utterances. This network constitutes an atemporal long-term memory system. A fixed-width short-term memory buffer projects a constantly updated moving window over the incoming speech onto the network's input layer. In response, the memory generates temporal activation functions for each of the output units. We show that this new discriminative perspective on auditory comprehension is consistent with young infants' sensitivity to the statistical structure of the input. Simulation studies, both with artificial language and with English child-directed …
Total citations
Cited by year, 2015–2024 (per-year citation counts were rendered as a chart and are not recoverable from this export)
Scholar articles
RH Baayen, C Shaoul, J Willits, M Ramscar - Language, cognition and neuroscience, 2016
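Illustrative note: the description above outlines a discriminative architecture in which an atemporal long-term memory (a discrimination learning network trained on full utterances) is probed by a fixed-width short-term buffer that projects a moving window of the incoming signal onto the input layer, yielding a temporal activation function for each output unit. The following is a minimal sketch of that idea only, not the authors' implementation: it assumes a Rescorla-Wagner-style cue-to-outcome weight matrix as the long-term memory and uses orthographic bigrams within a small window as stand-in cues. The window size, learning rate, toy utterances, and outcome labels are hypothetical choices for the sketch; the published simulations use artificial-language and English child-directed input.

from collections import defaultdict

def rescorla_wagner_update(weights, cues, outcomes, all_outcomes,
                           alpha_beta=0.01, lam=1.0):
    """One learning event: move each present cue's weight toward lam for
    outcomes that are present and toward 0 for outcomes that are absent."""
    for outcome in all_outcomes:
        target = lam if outcome in outcomes else 0.0
        activation = sum(weights[(c, outcome)] for c in cues)
        delta = alpha_beta * (target - activation)
        for c in cues:
            weights[(c, outcome)] += delta

def window_cues(signal, window_size=3):
    """Fixed-width short-term buffer: successive windows over the signal,
    each coded as a set of sub-units (here, character bigrams)."""
    for t in range(len(signal) - window_size + 1):
        chunk = signal[t:t + window_size]
        yield t, {chunk[i:i + 2] for i in range(len(chunk) - 1)}

# Toy demonstration with orthographic stand-ins for the speech signal.
weights = defaultdict(float)
all_outcomes = {"DOG", "RUN"}

# Training pairs whole, unsegmented "utterances" with the experiences they
# discriminate; no word forms are ever identified.
training = [("thedogruns", {"DOG", "RUN"}),
            ("adogbarks", {"DOG"}),
            ("werunfast", {"RUN"})]
for epoch in range(200):
    for utterance, outcomes in training:
        for _, cues in window_cues(utterance):
            rescorla_wagner_update(weights, cues, outcomes, all_outcomes)

# Comprehension: slide the window over new input and read out, at each step,
# the activation of every output unit, i.e. a temporal activation function.
for t, cues in window_cues("thedogruns"):
    acts = {o: sum(weights[(c, o)] for c in cues) for o in all_outcomes}
    print(t, {o: round(a, 2) for o, a in acts.items()})

In this sketch the weight matrix plays the role of the atemporal memory: time enters only through the buffer's successive projections, which is the design choice the abstract emphasizes.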