1. Learnability of embedded syntactic structures depends on prosodic cues
- Author
- Angela D. Friederici, Jörg Bahlmann, and Jutta L. Mueller
- Subjects
- Grammar, Artificial grammar learning, Learnability, Computer science, Cognitive Neuroscience, Experimental and Cognitive Psychology, Syntax, Implicit learning, Artificial Intelligence, Syllable, Prosody, Natural language processing
- Abstract
The ability to process center-embedded structures has been claimed to represent a core function of the language faculty. Recently, several studies have investigated the learning of center-embedded dependencies in artificial grammar settings. Yet some of the results seem to question the learnability of these structures in artificial grammar tasks. Here, we tested under which exposure conditions learning of center-embedded structures in an artificial grammar is possible. We used naturally spoken syllable sequences and varied the presence of prosodic cues. The results suggest that mere distributional information does not suffice for successful learning. Prosodic cues marking the boundaries of the major relevant units, however, can lead to learning success. Thus, our data are consistent with the hypothesis that center-embedded syntactic structures can be learned in artificial grammar tasks if language-like acoustic cues are provided.
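To make the notion of center-embedded dependencies concrete, here is an illustrative sketch (not the paper's actual stimulus set; syllable inventories and pairings are assumed for the example). In such a grammar, each category-A syllable must be matched by its paired category-B syllable in reverse, nested order, e.g. A1 A2 B2 B1:

```python
# Hypothetical syllable inventories; the i-th A-syllable is paired with
# the i-th B-syllable. These are examples, not the paper's materials.
A_SYLLABLES = ["be", "gi", "fo"]
B_SYLLABLES = ["ku", "mo", "pa"]

def center_embedded(pair_indices):
    """Build a sequence with nested A...B dependencies.

    pair_indices selects which A/B pair fills each embedding level,
    e.g. [0, 2] -> A0 A2 B2 B0.
    """
    a_part = [A_SYLLABLES[i] for i in pair_indices]
    # B-syllables appear in reverse order: innermost A is closed first.
    b_part = [B_SYLLABLES[i] for i in reversed(pair_indices)]
    return a_part + b_part

def is_grammatical(seq):
    """Check that each A-syllable is mirrored by its paired B-syllable."""
    if len(seq) % 2 != 0:
        return False
    n = len(seq) // 2
    for depth in range(n):
        a, b = seq[depth], seq[-(depth + 1)]
        if a not in A_SYLLABLES or B_SYLLABLES[A_SYLLABLES.index(a)] != b:
            return False
    return True
```

For instance, `center_embedded([0, 2])` yields the grammatical sequence `["be", "fo", "pa", "ku"]`, whereas swapping the two B-syllables (a crossed rather than nested dependency) makes `is_grammatical` return `False` — this is the kind of violation learners must detect.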
- Published
- 2011