Emergence of analogy from relation learning
- Source :
- Proceedings of the National Academy of Sciences of the United States of America, vol. 116, iss. 10
- Publication Year :
- 2019
- Publisher :
- eScholarship, University of California, 2019.
Abstract
- Significance: The ability to learn and make inferences based on relations is central to intelligence, underlying the distinctively human ability to reason by analogy across dissimilar situations. We have developed a computational model demonstrating that abstract relations, such as synonymy and antonymy, can be learned efficiently from semantic feature vectors for individual words and can be used to solve simple verbal analogy problems with close to human-level accuracy. The approach illustrates the potential synergy between deep learning from “big data” and supervised learning from “small data.” Core properties of high-level intelligence can emerge from relatively simple computations coupled with rich semantics. The model illustrates how operations on nonrelational inputs can give rise to protosymbolic relational representations.
- By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym).
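The rank-remapping step described in the abstract can be sketched roughly as follows. This is a minimal illustration of the idea (reorder a word pair's features by the rank of their differences so that the most discriminating features land in fixed positions), not the authors' released implementation; the function name and toy vectors are assumptions.

```python
import numpy as np

def augmented_pair(vec_a, vec_b):
    """Build a rank-remapped ("augmented") representation of a word pair.

    Features are reordered by the rank of the signed difference
    (vec_a - vec_b), so the features that most distinguish the two words
    occupy fixed positions regardless of which raw dimensions they came
    from. A sketch of the paper's idea, not its exact implementation.
    """
    diff = vec_a - vec_b
    order = np.argsort(diff)[::-1]  # largest positive difference first
    # Concatenate the remapped feature vectors of both words.
    return np.concatenate([vec_a[order], vec_b[order]])

# Toy 6-dimensional "semantic" vectors (purely illustrative values).
love = np.array([0.9, 0.1, 0.5, 0.2, 0.8, 0.3])
hate = np.array([0.1, 0.9, 0.5, 0.2, 0.2, 0.4])

aug = augmented_pair(love, hate)
```

With this remapping, an antonym pair like "love-hate" and a different antonym pair like "rich-poor" end up with their most contrastive features aligned at the same positions, which is what lets one set of learned weights apply across pairs.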
The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning.
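The modularity point above, that each acquired relation has its own learned weights and that a converse relation comes for free by swapping word order, can be sketched as follows. The logistic scoring rule, weight values, and word vectors here are illustrative assumptions standing in for the paper's learned weight distributions, not its actual code.

```python
import numpy as np

def relation_prob(weights, vec_a, vec_b):
    """Estimate the probability that the ordered pair (a, b) instantiates a
    learned relation, as a logistic function of weights applied to the
    pair's concatenated features (a simplified stand-in for the paper's
    augmented representations and weight distributions)."""
    features = np.concatenate([vec_a, vec_b])
    return 1.0 / (1.0 + np.exp(-weights @ features))

def converse_prob(weights, vec_a, vec_b):
    """Score the converse relation by swapping the roles of the two words.
    No additional training is needed: the same weights are reused."""
    return relation_prob(weights, vec_b, vec_a)

# Toy learned weights and word vectors (illustrative values only).
w = np.array([1.0, -0.5, 0.3, 0.2, -0.1, 0.4,
              -1.0, 0.5, -0.3, -0.2, 0.1, -0.4])
rich = np.array([0.8, 0.2, 0.6, 0.1, 0.5, 0.3])
poor = np.array([0.1, 0.7, 0.2, 0.6, 0.2, 0.5])

p_forward = relation_prob(w, rich, poor)
p_converse = converse_prob(w, rich, poor)
```

Because each relation is a separate weight vector, a verbal analogy A:B :: C:D can then be scored by comparing how similarly the pairs (A, B) and (C, D) activate the set of learned relations.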
- Subjects :
- word embeddings, semantic relations, analogy, generalization, learning, Deep learning, Supervised learning, Big data, Small data, Artificial intelligence, Natural language processing, Computer science, Modular design, Converse, Neurosciences, Psychological and Cognitive Sciences, Social Sciences, Behavioral and Social Science, Basic Behavioral and Social Science, Clinical Research, Pediatric, Multidisciplinary
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the National Academy of Sciences of the United States of America, vol. 116, iss. 10
- Accession number :
- edsair.doi.dedup.....aa6a5d24cd787034d05bca00cb7225e6