Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

Boring Footnote: Semantic Analysis

Semantic parsing is different from the generative approach that Chomsky pioneered. Syntactic parsing wants to find out which word is the noun, which is the verb and so on, i.e. it wants to build a tree that represents the grammatical structure of the sentence. Semantic parsing instead wants to turn a sentence into a logical representation, for example a formula of first-order predicate logic. The advantage of this approach is that the logical representation lends itself to logical reasoning, i.e. to automated processing by the computer. In 1970 Richard Montague at UCLA, a former philosophy student of Alfred Tarski, developed a formal method for mapping natural language into first-order predicate logic. Mark Steedman at the University of Edinburgh introduced "combinatory categorial grammar", which treats verbs as functions ("Combinatory Grammars and Parasitic Gaps", 1987). Technically speaking, both employed a compositional semantics based on the lambda calculus invented in the 1930s by Alonzo Church at Princeton University. Semantic parsing was applied to database queries by John Zelle and Raymond Mooney at the University of Texas, who designed the system CHILL (Constructive Heuristics Induction for Language Learning), based on the learning methods of inductive logic programming ("Learning Semantic Grammars with Constructive Inductive Logic Programming", 1993). In 2005 Luke Zettlemoyer at MIT started developing a Steedman-style learning semantic parser ("Learning to Map Sentences to Logical Form", 2005). These approaches turn an utterance directly into a logical representation.
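To make the idea of compositional, lambda-calculus-style semantics concrete, here is a minimal sketch (the lexicon, predicate names and grammar are invented for illustration, not Montague's or Steedman's actual formalisms): each word denotes a function, and function application assembles the logical form of the whole sentence from the meanings of its parts.

```python
# Toy compositional semantics in the lambda-calculus style (illustrative only;
# not the actual systems cited above). Each word's meaning is a function;
# applying functions to arguments builds the logical form of the sentence.

LEXICON = {
    "john":    lambda: "john",                       # proper names denote constants
    "mary":    lambda: "mary",
    "sleeps":  lambda x: f"sleeps({x})",             # intransitive verb: one-place function
    "loves":   lambda y: lambda x: f"loves({x}, {y})",  # transitive verb: curried function
    "student": lambda x: f"student({x})",            # common noun: one-place predicate
    "every":   lambda noun: lambda vp: f"forall x. {noun('x')} -> {vp('x')}",
}

# "John sleeps" -> sleeps(john)
print(LEXICON["sleeps"](LEXICON["john"]()))

# "John loves Mary" -> loves(john, mary)
print(LEXICON["loves"](LEXICON["mary"]())(LEXICON["john"]()))

# "Every student sleeps" -> forall x. student(x) -> sleeps(x)
print(LEXICON["every"](LEXICON["student"])(LEXICON["sleeps"]))
```

The point of treating verbs and quantifiers as functions is that the logical form of any sentence, however novel, falls out mechanically from the meanings of its words and the way they combine.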

Probabilistic logic has been used to represent the meaning of natural language by Lise Getoor's student Matthias Broecheler at the University of Maryland ("Probabilistic Similarity Logic", 2010); by Raymond Mooney's team at the University of Texas, which merged Montague and Markov via Pedro Domingos' Markov logic networks ("Montague Meets Markov", 2013); and by Tom Mitchell for parsing conversations, i.e. not just one sentence at a time but an entire discourse ("Parsing Natural Language Conversations using Contextual Cues", 2017).
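A minimal sketch of the idea behind Markov logic networks may help (the rules, weights and domain below are a standard toy example, not taken from the cited papers): each first-order formula carries a weight, and a possible world becomes more probable the more weighted ground formulas it satisfies, so that violating a rule is penalized rather than forbidden.

```python
import itertools, math

# Toy Markov logic network over a two-person domain (hypothetical example).
# Weighted first-order rules:
#   1.5 : Smokes(x) -> Cancer(x)
#   1.1 : Friends(x,y) -> (Smokes(x) <-> Smokes(y))
PEOPLE = ["anna", "bob"]

def score(smokes, cancer, friends):
    """Sum the weights of all satisfied ground formulas in a possible world."""
    s = 0.0
    for x in PEOPLE:
        if (not smokes[x]) or cancer[x]:                    # Smokes(x) -> Cancer(x)
            s += 1.5
    for x, y in itertools.product(PEOPLE, repeat=2):
        if (not friends[(x, y)]) or (smokes[x] == smokes[y]):
            s += 1.1
    return s

def all_worlds():
    """Enumerate every truth assignment to the 8 ground atoms (2^8 worlds)."""
    for bits in itertools.product([False, True], repeat=8):
        smokes = dict(zip(PEOPLE, bits[0:2]))
        cancer = dict(zip(PEOPLE, bits[2:4]))
        friends = dict(zip(itertools.product(PEOPLE, repeat=2), bits[4:8]))
        yield smokes, cancer, friends

# P(world) = exp(score(world)) / Z: exact enumeration only works at toy scale;
# real systems use approximate inference.
Z = sum(math.exp(score(*w)) for w in all_worlds())
p_cancer_anna = sum(math.exp(score(*w)) for w in all_worlds() if w[1]["anna"]) / Z
print(f"P(Cancer(anna)) = {p_cancer_anna:.3f}")
```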

The parser is supposed to learn how to map natural language sentences into logical representations of their meaning. The training data may consist of sentences paired with lambda-calculus meaning representations, and the parser is expected to generalize from them so that it can generate the logical representation of future, unseen sentences. Mooney's students built systems such as KRISP and WASP (2006) that use statistical machine learning to learn grammars. Mark Steedman's student Tom Kwiatkowski at the University of Edinburgh introduced an intermediate representation to learn language-independent grammars ("Inducing Probabilistic CCG Grammars from Logical Form with Higher-order Unification", 2010).
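As a hedged illustration of what such supervised training data looks like, and of the kind of word-to-predicate alignment that grammar induction must discover (the sentences and logical forms below are made-up, Geoquery-style examples; the co-occurrence count is a crude stand-in for the statistical alignment models of systems like WASP or CCG induction):

```python
from collections import Counter
import re

# Supervised training pairs: each sentence is annotated with a lambda-calculus
# meaning representation (hypothetical examples in the style of Geoquery).
TRAIN = [
    ("what states border texas",
     "lambda x. state(x) & borders(x, texas)"),
    ("what rivers cross ohio",
     "lambda x. river(x) & crosses(x, ohio)"),
]

# A (very) naive first step toward lexicon induction: count how often each
# word co-occurs with each logical predicate across the training pairs.
cooc = Counter()
for sentence, logical_form in TRAIN:
    predicates = re.findall(r"[a-z_]+(?=\()", logical_form)  # e.g. ['state', 'borders']
    for word in sentence.split():
        for pred in predicates:
            cooc[(word, pred)] += 1

print(cooc.most_common(5))
```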

The success stories of the 2010s were mostly in distributional semantics. "Frame semantics" was instead used by chatbots designed to answer simple questions (the Alexa kind of chatbot). In frame semantics the system is trained to identify the action, the object and the modifiers (such as place and time) of a sentence. Ask a chatbot to "find a flight to Beijing on Friday" and the chatbot will break down the sentence into the action ("find"), the object ("flight") and the modifiers ("Beijing" and "Friday"). The traditional, rule-based "model-theoretic semantics" was largely abandoned because its rules, full of exceptions, are just too difficult to encode; hence we cannot come up with good models of the language to drive the semantic analysis.
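A minimal sketch of frame-style slot filling on that very example (the pattern and slot names are invented for illustration; commercial assistants learn intent classifiers and slot taggers from annotated utterances rather than using a hand-written pattern):

```python
import re

# Toy frame for flight queries: one action slot, one object slot, and two
# optional modifier slots for destination and date (hypothetical slot names).
FRAME_PATTERN = re.compile(
    r"(?P<action>find|book|show)\s+a\s+(?P<object>flight)"
    r"(?:\s+to\s+(?P<destination>\w+))?"
    r"(?:\s+on\s+(?P<date>\w+))?",
    re.IGNORECASE,
)

def parse_frame(utterance):
    """Fill the frame's slots from an utterance, or return None on no match."""
    match = FRAME_PATTERN.search(utterance)
    return match.groupdict() if match else None

print(parse_frame("find a flight to Beijing on friday"))
# {'action': 'find', 'object': 'flight', 'destination': 'Beijing', 'date': 'friday'}
```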

There were also the first attempts at "grounded semantics": the idea that a system should learn language by simply "listening" to people talking and by engaging in conversations, i.e. via "indirect" supervision rather than annotated examples. The pioneering methods by Dan Roth's student James Clarke at the University of Illinois ("Driving Semantic Parsing from the World's Response", 2010) and Michael Jordan's student Percy Liang at UC Berkeley ("Learning Dependency-Based Compositional Semantics", 2011) were limited to question answering. Tom Mitchell's student Jayant Krishnamurthy at Carnegie Mellon University ("Weakly Supervised Training of Semantic Parsers", 2012) and Mark Steedman's student Siva Reddy ("Large-scale Semantic Parsing Without Question-answer Pairs", 2014) used combinatory categorial grammar to learn a semantic parser. Unsupervised semantic analysis had been studied by Pedro Domingos' student Hoifung Poon at the University of Washington ("Unsupervised Semantic Parsing", 2009), who then moved to Microsoft, and a further step towards grounded semantics was taken by Ankur Parikh, working with Hoifung Poon at Microsoft ("Grounded Semantic Parsing for Complex Knowledge Extraction", 2015).
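A toy sketch of the "learning from the world's response" idea behind Clarke's and Liang's systems (the database and candidate meaning representations below are invented): the only supervision is the correct answer to a question, so the learner proposes candidate logical forms, executes them against the database, and keeps as training signal the ones that return the right answer.

```python
# Toy "learning from the world's response": supervision is the answer, not the
# logical form. (Database contents and candidates are invented; real systems
# generate candidates with a grammar and train a statistical model on them.)
BORDERS = {"texas": {"oklahoma", "new mexico", "louisiana", "arkansas"}}
CAPITAL = {"texas": "austin"}

def execute(logical_form):
    """Evaluate a tiny (predicate, argument) logical form against the database."""
    predicate, argument = logical_form
    if predicate == "borders":
        return BORDERS.get(argument, set())
    if predicate == "capital":
        return {CAPITAL[argument]} if argument in CAPITAL else set()
    return set()

question = "what borders texas"
answer = {"oklahoma", "new mexico", "louisiana", "arkansas"}  # the only supervision

# Candidate meaning representations proposed by some (here trivial) parser:
candidates = [("borders", "texas"), ("capital", "texas")]
positives = [lf for lf in candidates if execute(lf) == answer]
print(positives)  # [('borders', 'texas')] -- kept as a positive training example
```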

None of these experiments has been particularly successful. Either our natural language is fundamentally not logical or we still haven't figured out its logic.
