Rules versus statistics in reading aloud: New evidence on an old debate

authors

  • Perry, Conrad
  • Ziegler, Johannes C.
  • Braun, Mario
  • Zorzi, Marco

document type

Article

abstract

Nonword reading performance, that is, the ability to generate plausible pronunciations for novel items, has probably been the hardest test case for computational models of reading aloud. This is an area where rule-based models, such as the Dual-Route Cascaded (DRC) model, have typically outperformed connectionist learning models. However, what is the evidence that people apply rules when reading nonwords? We investigated this question in German. Nonwords were created that allowed us to test whether people apply an abstract rule to determine vowel length or whether they are instead sensitive to the statistical distribution of vowel length in the mental lexicon. The human data showed a great deal of variability in nonword pronunciations. In simulations of these nonwords, the DRC was contrasted with a fully implemented and freely available German version of the connectionist dual process model (German_CDP+), a model that learns the statistical mapping between spelling and sound; CDP+ provided a better account of the data than the DRC. These results support the view that rule-based models may simply approximate patterns of language use rather than provide an accurate description of the underlying cognitive machinery.
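To make the contrast concrete, the following minimal sketch (assuming a hypothetical toy lexicon and a crude context coding, neither taken from the article's materials) tabulates how often a vowel is pronounced long versus short in a given spelling context. This is the kind of "statistical distribution of vowel length in the mental lexicon" that a learning model is sensitive to, whereas a rule predicts a single outcome per context.

    from collections import Counter, defaultdict

    # Hypothetical orthographic bodies paired with vowel length.
    # (Examples in comments are common German words; the coding is simplified.)
    toy_lexicon = [
        ("al", "long"),    # as in "Tal"
        ("all", "short"),  # as in "Ball"
        ("an", "long"),    # as in "Plan"
        ("ann", "short"),  # as in "dann"
        ("ot", "long"),    # as in "rot"
        ("ott", "short"),  # as in "Gott"
    ]

    # Count vowel-length outcomes by a simple context feature:
    # is the consonant after the vowel doubled?
    counts = defaultdict(Counter)
    for body, length in toy_lexicon:
        consonants = body[1:]
        doubled = len(consonants) >= 2 and consonants[0] == consonants[1]
        context = "doubled consonant" if doubled else "single consonant"
        counts[context][length] += 1

    # Print the distribution of vowel length for each context.
    for context, dist in counts.items():
        total = sum(dist.values())
        print(context, {length: f"{n}/{total}" for length, n in dist.items()})

In a toy set this small, the counts come out all-or-none and so look rule-like; the abstract's point is that real lexicons yield graded distributions, which a statistical learning model such as CDP+ can exploit and a strict rule cannot.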
