10 results for "*GENERATIVE grammar"
Search Results
2. The growth of language: Universal Grammar, experience, and principles of computation.
- Author
- Yang, Charles, Crain, Stephen, Berwick, Robert C., Chomsky, Noam, and Bolhuis, Johan J.
- Subjects
- *COMPARATIVE grammar, *EXPERIENCE, *SPEECH, *ASYMMETRY (Linguistics), *LANGUAGE acquisition, *ONTOGENY
- Abstract
Human infants develop language remarkably rapidly and without overt instruction. We argue that the distinctive ontogenesis of child language arises from the interplay of three factors: domain-specific principles of language (Universal Grammar), external experience, and properties of non-linguistic domains of cognition including general learning mechanisms and principles of efficient computation. We review developmental evidence that children make use of hierarchically composed structures (‘Merge’) from the earliest stages and at all levels of linguistic organization. At the same time, longitudinal trajectories of development show sensitivity to the quantity of specific patterns in the input, which suggests the use of probabilistic processes as well as inductive learning mechanisms that are suitable for the psychological constraints on language acquisition. By considering the place of language in human biology and evolution, we propose an approach that integrates principles from Universal Grammar and constraints from other domains of cognition. We outline some initial results of this approach as well as challenges for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
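The hierarchically composed structures the abstract attributes to Merge can be illustrated with a toy sketch (an illustration only, not the authors' formalism): Merge combines two syntactic objects into one binary-branching unit, and repeated application yields nested structure rather than a flat string of words.

```python
def merge(x, y):
    """Combine two syntactic objects into one binary-branching unit."""
    return (x, y)

def depth(obj):
    """Depth of the hierarchical structure built by repeated Merge."""
    if isinstance(obj, tuple):
        return 1 + max(depth(part) for part in obj)
    return 0

# "the cat slept": two applications of Merge give nested structure,
# with ("the", "cat") forming a unit below the clause level.
clause = merge(merge("the", "cat"), "slept")
```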
3. Directed Random Generation of Sentences.
- Author
- Friedman, Joyce and Bobrow, D.
- Subjects
- *COMPUTATIONAL linguistics, *GENERATIVE grammar, *COMPARATIVE grammar, *COMPUTER software, *FORTRAN
- Abstract
The problem of producing sentences of a transformational grammar by using a random generator to create phrase structure trees for input to the lexical insertion and transformational phases is discussed. A purely random generator will produce base trees which will be blocked by the transformations, and which are frequently too long to be of practical interest. A solution is offered in the form of a computer program which allows the user to constrain and direct the generation by the simple but powerful device of restricted subtrees. The program is a directed random generator which accepts as input a subtree with restrictions and produces around it a tree which satisfies the restrictions and is ready for the next phase of the grammar. The underlying linguistic model is that of Noam Chomsky, as presented in Aspects of the Theory of Syntax. The program is written in FORTRAN IV for the IBM 360/67 and is part of a unified computer system for transformational grammar. It is currently being used with several partial grammars of English. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
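The core idea of directing a random generator so that derivations stay short and usable can be sketched in a few lines. This is a toy Python analogue, not the original FORTRAN IV system: the restricted-subtree mechanism the abstract describes is richer than the simple depth bound assumed here, and the toy grammar is invented for illustration.

```python
import random

def generate(grammar, symbol, max_depth=6):
    """Randomly expand `symbol` using `grammar` (a dict mapping each
    nonterminal to a list of right-hand sides). Near the depth bound,
    only the shortest rule is allowed, so derivations stay short; this
    assumes every nonterminal bottoms out in short, non-recursive rules."""
    if symbol not in grammar:                 # terminal symbol
        return [symbol]
    options = grammar[symbol]
    if max_depth <= 1:
        options = [min(options, key=len)]     # force the shortest rule
    rhs = random.choice(options)
    words = []
    for s in rhs:
        words.extend(generate(grammar, s, max_depth - 1))
    return words

# A hypothetical toy grammar for demonstration.
toy = {
    'S':  [['NP', 'VP']],
    'NP': [['the', 'N'], ['the', 'A', 'N']],
    'A':  [['big']],
    'N':  [['cat'], ['dog']],
    'VP': [['sleeps'], ['sees', 'NP']],
}
```

An undirected generator would happily recurse through `VP -> sees NP` indefinitely; the depth bound plays the role that restricted subtrees play in the original system, pruning derivations that would be blocked or impractically long.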
4. The Storage Requirement in Precedence Parsing.
- Author
- Bertsch, Eberhard
- Subjects
- *PARSING (Computer grammar), *COMPUTATIONAL linguistics, *GENERATIVE grammar, *COMPARATIVE grammar, *MATHEMATICS, *COMPUTER storage devices
- Abstract
This article focuses on precedence tables and linear precedence functions, which have been used and extensively studied as aids for syntax-directed compiling. If a precedence table exists but cannot be reshaped into a pair of precedence functions, several techniques for eliminating blank entries may be used; in that case, however, all non-blank entries have to be kept. Thus there is a considerable gap between the storage requirements of precedence functions and precedence tables. It is well known that the use of precedence functions may entail a delay in error detection compared with precedence tables.
- Published
- 1977
- Full Text
- View/download PDF
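The storage gap at issue can be made concrete: a precedence table needs an entry for every symbol pair (quadratic), while precedence functions need only two integers per symbol. A standard graph construction computes such functions when they exist; the sketch below is an illustration of that classical method, not code from the article.

```python
def precedence_functions(symbols, table):
    """Compute linear precedence functions f, g from a precedence table.
    `table` maps (a, b) to '<', '=', or '>' meaning f(a) < g(b), etc.
    Returns (f, g) as dicts, or None if no such functions exist."""
    # Union-find: '=' entries force f(a) and g(b) into the same group.
    parent = {('f', s): ('f', s) for s in symbols}
    parent.update({('g', s): ('g', s) for s in symbols})
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), rel in table.items():
        if rel == '=':
            parent[find(('f', a))] = find(('g', b))
    # Directed edges point from the larger group to the smaller one.
    edges = {}
    for (a, b), rel in table.items():
        if rel == '>':
            edges.setdefault(find(('f', a)), set()).add(find(('g', b)))
        elif rel == '<':
            edges.setdefault(find(('g', b)), set()).add(find(('f', a)))
    # Longest-path length from each group; a cycle means no solution.
    value, on_path = {}, set()
    def longest(u):
        if u in on_path:
            raise ValueError("cycle: no precedence functions exist")
        if u in value:
            return value[u]
        on_path.add(u)
        value[u] = 1 + max((longest(v) for v in edges.get(u, ())), default=0)
        on_path.discard(u)
        return value[u]
    try:
        f = {s: longest(find(('f', s))) for s in symbols}
        g = {s: longest(find(('g', s))) for s in symbols}
    except ValueError:
        return None
    return f, g
```

For n symbols the table holds up to n² entries while f and g hold 2n integers, which is exactly the gap the abstract describes; the price is the delayed error detection it notes, since f(a) versus g(b) yields an answer even for pairs whose table entry is blank.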
5. Dependency Parsing of Modern Standard Arabic with Lexical and Inflectional Features.
- Author
- Marton, Yuval, Habash, Nizar, and Rambow, Owen
- Subjects
- *LEXICOLOGY, *PARSING (Computer grammar), *COMPUTATIONAL linguistics, *COMPARATIVE grammar, *FORMAL languages, *GENERATIVE grammar, *ARABIC language
- Abstract
We explore the contribution of lexical and inflectional morphology features to dependency parsing of Arabic, a morphologically rich language with complex agreement patterns. Using controlled experiments, we contrast the contribution of different part-of-speech (POS) tag sets and morphological features in two input conditions: machine-predicted condition (in which POS tags and morphological feature values are automatically assigned), and gold condition (in which their true values are known). We find that more informative (fine-grained) tag sets are useful in the gold condition, but may be detrimental in the predicted condition, where they are outperformed by simpler but more accurately predicted tag sets. We identify a set of features (definiteness, person, number, gender, and undiacritized lemma) that improve parsing quality in the predicted condition, whereas other features are more useful in gold. We are the first to show that functional features for gender and number (e.g., "broken plurals"), and optionally the related rationality ("humanness") feature, are more helpful for parsing than form-based gender and number. We finally show that parsing quality in the predicted condition can dramatically improve by training in a combined gold+predicted condition. We experimented with two transition-based parsers, MaltParser and Easy-First Parser. Our findings are robust across parsers, models, and input conditions. This suggests that the contribution of the linguistic knowledge in the tag sets and features we identified goes beyond particular experimental settings, and may be informative for other parsers and morphologically rich languages. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
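MaltParser-style transition-based parsing, as used in the article, builds a dependency tree through a sequence of shift/reduce actions. In the real system a classifier conditioned on POS tags and morphological features (the article's subject) chooses each action; the sketch below is only an illustration of the transition mechanics, driven by a hand-written action sequence rather than a trained model, and its LEFT/RIGHT semantics follow one common arc-standard convention.

```python
def arc_standard(n_tokens, actions):
    """Run a sequence of arc-standard transitions over tokens 1..n_tokens.
    SHIFT moves the next buffer token onto the stack; LEFT makes the
    second stack item a dependent of the top; RIGHT makes the top a
    dependent of the item below it (or of the root, index 0, if none
    remains). Returns a dict mapping each token index to its head."""
    stack, buffer = [], list(range(1, n_tokens + 1))
    heads = {}
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT":
            dep = stack.pop(-2)
            heads[dep] = stack[-1]
        elif act == "RIGHT":
            dep = stack.pop()
            heads[dep] = stack[-1] if stack else 0
    return heads

# "the cat sleeps": the <- cat <- sleeps <- root
heads = arc_standard(3, ["SHIFT", "SHIFT", "LEFT", "SHIFT", "LEFT", "RIGHT"])
```

The experimental question in the article is precisely which tag sets and morphological features let the action classifier predict sequences like this one reliably.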
6. Expressive power of LL() Boolean grammars
- Author
- Okhotin, Alexander
- Subjects
- *PARSING (Computer grammar), *COMPUTATIONAL linguistics, *FORMAL languages, *LINGUISTIC context, *PROGRAMMING languages, *COMPARATIVE grammar, *GENERATIVE grammar
- Abstract
The paper studies the family of Boolean LL languages, generated by Boolean grammars and usable with recursive descent parsing. It is demonstrated that over a one-letter alphabet, these languages are always regular, while Boolean LL subsets of obey a certain periodicity property, which, in particular, makes the language non-representable. It is also shown that linear conjunctive LL grammars cannot generate any language of the form , with non-regular, and that no languages of the form , with non-regular , can be generated by any linear Boolean LL grammars. These results are used to establish a detailed hierarchy and closure properties of these and related families of formal languages. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
7. Passive verb morphology: The effect of phonotactics on passive comprehension in typically developing and Grammatical-SLI children
- Author
- Marshall, Chloe, Marinis, Theodoros, and van der Lely, Heather
- Subjects
- *CHILDREN'S language, *APPLIED linguistics, *LANGUAGE disorders in children, *SPECIFIC language impairment in children, *COMMUNICATIVE disorders in children, *SENTENCES (Grammar), *COMPUTATIONAL linguistics, *COMPREHENSION, *COMPARATIVE grammar, *GENERATIVE grammar, *GRAMMAR
- Abstract
In this study we explore the impact of a morphological deficit on syntactic comprehension. A self-paced listening task was designed to investigate passive sentence processing in typically developing (TD) children and children with Grammatical-Specific Language Impairment (G-SLI). Participants had to judge whether the sentence they heard matched a picture they were shown. Working within the framework of the Computational Grammatical Complexity Hypothesis, which stresses how different components of the grammar interact, we tested whether children were able to use phonotactic cues to parse reversible passive sentences of the form the X was verbed by Y. We predicted that TD children would be able to use phonotactics to parse a form like touched or hugged as a participle, and hence interpret passive sentences correctly. This cue is predicted not to be used by G-SLI children, because they have difficulty building complex morphological representations. We demonstrate that indeed TD, but not G-SLI, children are able to use phonotactic cues in parsing passive sentences. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
8. Machine Learning Based Approach to S-clause Segmentation.
- Author
- KIM, MI-YOUNG and LEE, JONG-HYEOK
- Subjects
- *ARTIFICIAL intelligence, *PARSING (Computer grammar), *COMPUTATIONAL linguistics, *FORMAL languages, *GENERATIVE grammar, *COMPARATIVE grammar
- Abstract
When a dependency parser analyzes long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, we define an "S(ubject)-clause" as a group of words containing several predicates and their common subject. This paper proposes a method to segment S-clauses and perform syntactic analysis of long sentences using S-clause segmentation. To segment S-clauses, various machine learning methods were applied. We found that the Logitboost method produced the best performance. In our experimental evaluation, S-clause information turned out to be effective in determining the governor of a subject and that of a predicate in dependency parsing. Further syntactic analysis using S-clauses improved precision by 5.25 percent. [ABSTRACT FROM AUTHOR]
- Published
- 2004
9. INSTANCE-SPECIFIC SOLUTIONS FOR ACCELERATING THE CKY PARSING OF LARGE CONTEXT-FREE GRAMMARS.
- Author
- Bordim, Jacir L., Barra, Oscar H., Ito, Yasuaki, and Nakano, Koji
- Subjects
- *PARSING (Computer grammar), *VERILOG (Computer hardware description language), *COMPUTATIONAL linguistics, *FORMAL languages, *COMPARATIVE grammar, *GENERATIVE grammar
- Abstract
The main contribution of this paper is an FPGA-based implementation of an instance-specific hardware which accelerates the CKY (Cocke-Kasami-Younger) parsing of context-free grammars. Given a context-free grammar G and a string x, the CKY parsing determines whether G derives x. We developed a hardware generator that creates a Verilog HDL source to perform the CKY parsing for any fixed context-free grammar G. The generated source is embedded in an FPGA using the design software provided by the FPGA vendor. The results show that our instance-specific hardware solution attains an astonishing speed-up factor of up to 3,700 over traditional software solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
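The CKY procedure being accelerated fills a triangular chart bottom-up over a grammar in Chomsky normal form. A plain software version, of the kind such hardware is benchmarked against, can be sketched as follows (an illustrative recognizer, not the paper's implementation; the sample grammar is invented):

```python
def cky_recognize(rules, start, tokens):
    """CKY recognition for a grammar in Chomsky normal form.
    `rules` is a list of (A, rhs) pairs where rhs is either two
    nonterminals (B, C) or a single terminal (w,).
    Returns True iff `start` derives the token sequence."""
    n = len(tokens)
    if n == 0:
        return False
    chart = {}  # chart[(i, l)]: nonterminals deriving tokens[i : i + l]
    for i, w in enumerate(tokens):
        chart[(i, 1)] = {a for a, rhs in rules if rhs == (w,)}
    for l in range(2, n + 1):            # span length
        for i in range(n - l + 1):       # span start
            cell = set()
            for k in range(1, l):        # split point within the span
                left, right = chart[(i, k)], chart[(i + k, l - k)]
                for a, rhs in rules:
                    if len(rhs) == 2 and rhs[0] in left and rhs[1] in right:
                        cell.add(a)
            chart[(i, l)] = cell
    return start in chart[(0, n)]

# Toy CNF grammar for { a^n b^n : n >= 1 }
cnf = [('S', ('A', 'X')), ('S', ('A', 'B')),
       ('X', ('S', 'B')), ('A', ('a',)), ('B', ('b',))]
```

The triple loop over spans, starts, and split points gives O(n³) time scaled by the grammar size, which is why large grammars invite the instance-specific hardware the abstract describes.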
10. WHAT THE PARSER NEEDS TO KNOW IN ORDER TO ATTACH A PHRASE.
- Author
- Lasser, Ingeborg
- Subjects
- *PARSING (Grammar), *COMPARATIVE grammar, *COMPUTATIONAL linguistics, *FORMAL languages, *GENERATIVE grammar, *LANGUAGE & languages, *LINGUISTICS
- Abstract
The enterprise in parsing research is to find out how the mind creates syntactic representations. A true theory of parsing must both form part of a well-motivated mental model of language and language processing and accommodate the experimental evidence. The goal of this paper is to compare two sorts of parsing models discussed in the literature against these two criteria: the Licensing Parser and the Immediate Attachment Parser. In section one, the author defines these models; in the rest of the paper, the two kinds of models are evaluated.
- Published
- 1994
- Full Text
- View/download PDF