Piero Scaruffi
Language: Minds Speak
(These are excerpts from, or extensions to, the material published in my book "The Nature of Consciousness")
The Hidden Metaphysics of Language
Language is obviously one of the most sophisticated cognitive skills that humans possess, and one of the most apparent differences between the human species and other animal species. No surprise, then, that language is often considered the main clue to the secrets of the mind. After all, it is through language that our mind expresses itself. It is with language that we can study the mind.
In writings published in the 1950s, the USA linguist Benjamin Lee Whorf extended the view of his teacher, the German-born Edward Sapir, that language is even more than a tool to speak: it is "thought" itself. Or, better, that language and thought influence each other. Language is used to express thought, but, in turn, language shapes thought. In particular, the structure of a language influences the way its speakers understand the environment. Language influences thought because it contains what Sapir had called "a hidden metaphysics", i.e. a view of the world, a culture, a conceptual system. Language contains an implicit classification of experience. Whorf restated Sapir's view in his principle of "linguistic determinism": grammatical and categorial patterns of language embody cultural models. Every language is a culturally-determined system of patterns that creates the categories by which individuals not only communicate but also think.
The USA psychologist Katherine Nelson, whose studies focused on the stages of cognitive development in the child, discovered that language is crucial to the formation of the adult mind: language acts as the medium through which the mind becomes part of a culture, through which the shared meanings of society take over the individualistic meanings of the child's mind. Society takes over the individual mind, and it does so through language. From the moment we are born, the ultimate goal of our mind, through our studying, working, making friends, writing books, etc., is to be social.
A Tool To Shape Minds
This view was inspired by Lev Vygotsky's theory of the mediating role of language: language provides a semiotic mediation of knowledge and therefore guides the child's cognitive growth. Cognitive faculties are internalized versions of social processes. This implies that cognition develops in different ways in different cultures. Your mind depends on the cultural conditions of the community that raised you.
The individual is the result of a dialectical cooperation between nature and history, between the biological sphere and the social sphere. An individual is a product of culture (nurture) as well as a product of nature. Children develop under the influence of both biology and society.
Vygotsky insisted on the concept of "zone of proximal development": the difference between the unguided (independent) problem solving skills and the guided (coached) problem solving skills.
Language is a way to organize (internally) the world. But language is also a way to transmit mind to less "mentally-able" individuals and across generations: the by-products of this process of "coaching" are the arts and sciences.
The acquisition of language itself is such a process of transmission of mind: teaching a child to speak is a way of coaching the mind of the child.
Humans solve problems by speaking as well as by using their body and tools.
Vygotsky also realized that the process of "learning" from a coach is mostly unconscious (just like the child is not conscious that s/he is learning to speak). He thought it was a general phenomenon: we become conscious of a function only after we have mastered it by practicing it unconsciously.
Human Language And Animal Language
Language is actually quite widespread in nature in its primitive form of communication (all animals communicate, and even plants have some rudimentary form of interaction), although it is certainly unique to humans in its human form (but then chirping is unique to birds in its "birdy" form).
Language is very much a mirror image of the cognitive capabilities of the animal. Is human language really so much more sophisticated than other animals' languages?
Birds and monkeys employ a sophisticated system of sounds to alert one another to intruders. The loudness and the frequency are proportional to the distance and probably to the size of the intruder. Human language doesn't have such a sophisticated way of describing an intruder. Is it possible that human language evolved in a different way simply because we became more interested in other things than in describing the size and distance of an intruder?
There are three levels at which human language operates: the "what", the "where", the "why". "What are you doing?" is about the present. "Where are you going?" is about the future. "Why are you going there?" is about the relationship between past and future. These are three different steps of communication. Organisms could communicate simply in the present, by telling each other what they are doing. This is what most machines do all the time when they get connected. Living organisms also move. Bees dance to other bees in order to communicate the location of food ("where?"). Humans are also interested in motives ("why?") all the time. Without a motive a description often sounds incomplete. It is common in rural Southeast Asia to greet people by asking "what are you doing?" The other person will reply "I am rowing the boat". The next question will be "where are you going?" And the last question will be "why are you going there?" With these three simple questions the situation has been fully analyzed, as far as human cognition goes.
This does not mean that there could not be a fourth level of communication that we humans simply do not exhibit because it is beyond our cognitive capabilities.
There are other features that are truly unique to humans: clothes, artifacts, and, first and foremost, fire. Have you ever seen a lion wear the fur of another animal? Light a fire to warm up? Build a utensil to scratch its back? Why do humans do all of these things? Are they a consequence of our cognitive life, or is our cognitive life a consequence of these skills? One wonders if the Sapir-Whorf principle applies only to language or, ultimately, to all behavior.
Language Changes Minds
Language is a form of communication. The linguistic tradition focused on the mental processes of understanding language, thereby taking a "one-brain" view of communication. But communication, by definition, involves (at least) two participants, i.e. two brains. Communication (and language in particular) is a process between two brains. There is a neural process going on in one of the two brains and language is a means for that neural process to affect the neural process occurring in the other brain. Ultimately, "communication" is about one brain trying to replicate some kind of neural pattern into another brain. Language uses sounds (or written symbols) to induce such a mental replication. Those sounds (symbols) are structured in such a way as to interact with the neural process of the other brain and cause it to create a specific neural pattern (that’s what we call "understanding"). This is an error-prone process that requires a lot of interaction, due to the fact that each brain is slightly different. But the goal is to eventually transmit a neural pattern from one brain to another. That pattern could be a scene or a story, if we are "narrating" something, or it could be a belief if we are trying to "convince" of something, or a concept if we are trying to explain something. It is a pattern that already exists in our brain and we want to recreate it in the brain of our interlocutor.
Needless to say, this implies that brains are capable of changing their neural patterns based on sounds/symbols. This is true of all species: bee brains must be capable of changing their neural patterns based on the dances of other bees. And must be willing to.
Naturally, once the pattern (a scene, a story, a concept) has been copied in the other brain, it takes on a life of its own because it interacts with the neural pattern that already inhabited that brain.
This complex interplay of brains must provide some significant evolutionary advantage, given that it appeared and became widespread across so many species.
As the USA psychologist James Mark Baldwin noted, species capable of learning are better at evolving. If language is such an efficient tool for learning that it shapes an entire system of thought in a few years, then it is probably useful to survival and evolution.
Ultimately, language creates minds. We not only speak, but also listen. The listening is no less important than the speaking: the speaking expresses our mind, but the listening shapes our mind.
Communication and Ecology
Communication is two beings engaged in changing each other's brains. That is actually the most natural phenomenon if one views life "top-down" and not "bottom-up". When we think bottom-up, we conceive life as many small beings making up societies and larger and larger entities (ecosystems) and eventually making up the Earth. It actually works the other way around: the Earth existed before life as we know it, and the Earth, at any point in time, is made of living components such as ecosystems, which are made of societies, which are made of individual beings. It is no surprise that all those ecosystems, societies and individuals are capable of communicating: they are merely "parts" of one giant organism, the Earth. Communicating is their natural state.
Communication (and therefore language) is one of the most basic modes of living beings. When a bird sings in the woods, it is most likely telling other birds about the environment. The slightest disturbance will cause the tune to change. The bird singing in the woods is, therefore, reacting to sounds and smells and sights. The sounds the bird is making are "caused" by the environment and are in harmony with the environment. Those "sounds" communicate to other birds information about the environment. Indirectly it is the environment "talking" to the other birds, i.e. to itself.
Language is more than just sound. Language is sound (or vision, when you are reading) with a structure, and therefore packs more information than just sound. Language carries meaning. This was a crucial invention: that you can use sound as a vehicle to carry more information than the sound itself. Again, the tip probably came from Nature itself: Nature speaks to us all the time. The noise of a river or the noise of an avalanche creates concepts in our minds, besides the representation of those sounds. Brain connections are modified at two levels: first to reflect the stimuli of the noise, and then to reflect what we can infer from the noise. Our brain can learn at two levels: there is a noise in that direction, and it is a river (meaning, for example, water to drink). Stimuli modify connections both at the level of perception and at the level of concepts. Language exploits this simple fact.
(The same is true of cinema, but our bodies are not equipped with an organ to make images the way we are equipped with an organ to make sounds, and the invention of writing required a lot less technological knowledge than television or cinema. However, in the future we may end up carrying our portable image-maker so that we can show what happened in images instead of telling it in words).
Sound is not the only way to communicate. Movement can also communicate. Sound is a particular case of movement.
The environment is a symphony of sounds, smells, sights and movement. Language is but one of the instruments in this symphony.
Anomalies of Language
The structure of any natural language is so complex that no machine has been able to fully master one yet. It is hard to believe that a child can learn a language at all: its complexity would seem to make the task impossible at the outset.
If we analyze the way language works, we can draw two opposite conclusions: on one hand, the power of language looks overwhelming, on the other, its clumsiness is frustrating.
On one hand, we know that western languages are, on average, about 50% redundant: roughly half of a message can be dropped and the message can still be reconstructed. We can guess the meaning of most sentences from a fragment of them. We also know that professional translators are able to translate a speech with minimal or no knowledge of the topic the speech is about.
On the other hand, we tend to believe that humans have developed amazing capabilities for communicating: language, writing, even television. However, in reality human communication is rather inefficient: two computers can simply exchange in a split second an image or a text, pixel by pixel or character by character, without any loss of information, whereas a human must describe to another human the image in a lengthy way and will certainly miss some details. Two computers could even exchange entire dictionaries, in the event they do not speak the same language. They could exchange in a few seconds their entire knowledge. Humans can only communicate part of the information they have, and even that takes a long time and is prone to misunderstandings.
Furthermore, why is it that we can accurately describe a situation, but not our inner life of emotions? Language is so rich when it comes to the external world, but so poor and inefficient when it comes to our inner life.
The Polish-born philosopher Alfred Korzybski noted that the ability to manufacture symbols gives humans a tremendous advantage (the ability to generalize experiences and pass them on to other humans, so that they do not need to repeat our mistakes or rediscover what we already discovered), but also a disadvantage, one that accounts for many of our social and personal problems: there are fewer words (and concepts) than experiences. This means that we use the same word to describe different situations, objects, or feelings. No two apples are the same, but we use the word "apple" for all of them. Worse: we use the word "apple" even for the drawing of an apple, for the dream of an apple and for the string of characters "a-p-p-l-e", which are completely different objects. We tend to equate situations, objects and feelings that are actually different. We tend to define situations more often by "intension" (the "kind" they belong to) than by "extension" (the unique facts of a situation).
Many people speak English. So we know that the English language exists and there must be a way to learn it and speak it. However, there is no definition of what the English language is, or of any other natural language. If you want to find out whether a word is English or not, you have to check a dictionary and hope that the author of that dictionary did not miss any word (in fact, almost all of them do miss some words, as new words are created all the time). If you want to find out whether a sentence is English, the single words are not enough. A foreign word can actually show up in an English sentence. For example, "bambino is not an English word" is a perfectly valid English sentence that everybody understands. Even words that are not words in any language can figure in an English sentence: "xgewut is not a meaningful word" is an English sentence. What makes a sentence English?
At the beginning of the 20th century, the Swiss linguist Ferdinand de Saussure was asking precisely this kind of question. He distinguished the "parole" (an actual utterance in a language) from the "langue" (the entire body of the language).
Building on those foundations, in 1957 the USA linguist Noam Chomsky started a conceptual revolution. He was reacting to "structural" linguists, who were content with describing and classifying languages, and to behaviorists, who thought that language was learned by conditioning.
At the time, all scientific disciplines were being influenced by a new propensity towards formal thinking that had its roots as much in Computer Science as in David Hilbert's program of "formal systems" in Logic. Chomsky, basically, extended the idea of formal systems to Linguistics: he realized that the logical formalism could be employed to express the grammar of a language; and that the grammar of a language "was" the specification for the entire language. Chomsky's idea was therefore to concentrate on the study of grammar, and specifically syntax, i.e. on the rules that account for all valid sentences of a language.
His assumption was that the number of sentences in a language is potentially infinite, but there is a finite system of rules that defines which sentences can potentially be built and determines their meaning, and that system of rules is what identifies a language and differentiates it from other languages. That system of rules is the grammar of the language.
One of Chomsky's goals was to explain the difference between "performance" (all sentences that an individual will ever use) and "competence" (all sentences that an individual can utter, but will not necessarily utter). We are capable of saying far more than we will ever say in our entire lifetime. And we understand sentences that we have never heard before. You have probably never seen any of the sentences contained in this book but, hopefully, you understand them all. We can tell right away whether a sentence is correct or not, even when we do not understand its meaning. We do not learn a language by memorizing all possible sentences of it. We learn, and subsequently use, an abstraction that allows us to deal with any sentence in that language. That abstraction is the grammar of the language.
Behaviorists thought that language is learned via a process of conditioning: one learns the meaning of a sentence by being exposed to it and to its meaning. But Chomsky pointed out that virtually no sentence is similar to other sentences we heard before. You have never read a sentence with these exact words before but, hopefully, you understand the meaning of what I just wrote.
Chomsky therefore argued for a "deductive" approach to language: how to derive all possible sentences of a language (whether they have been used or not) from an abstract structure (its "generative" grammar).
Chomsky also argued for the independence of syntax from semantics: the notion of a "well-formed" sentence in the language is distinct from the notion of a "meaningful" sentence. A sentence can make perfect sense from a grammatical point of view, while being absolutely meaningless (such as "the table eats cloudy books").
Language is a set of sentences. Each sentence is a finite string of words from a lexicon. And a grammar is the set of rules that determine whether a sentence belongs to that grammar's language. And, dually, the rules of the grammar are capable of generating (by recursive application) all the valid sentences in that language: the language is "recursively enumerable".
When analyzing a sentence (or "parsing" it), the sequence of rules applied to the sentence builds up a "parse tree". This type of grammar, the so-called "phrase-structure grammar", turns out to be equivalent to a Turing machine, and therefore lends itself to direct implementation on the computer.
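To make this concrete, here is a minimal sketch of a phrase-structure grammar in Python; the rules and the tiny lexicon are invented for illustration and are not taken from Chomsky. A handful of rewrite rules, applied recursively, generates an unbounded set of sentences:

```python
import random

# A toy phrase-structure grammar: a finite set of rewrite rules.
# Uppercase symbols are nonterminals; everything else is a word of the lexicon.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["red"], ["cloudy"]],
    "N":   [["table"], ["book"], ["apple"]],
    "V":   [["eats"], ["writes"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of words."""
    if symbol not in GRAMMAR:              # terminal: a word of the lexicon
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])  # pick one rewrite rule
    return [word for part in rule for word in generate(part)]

print(" ".join(generate()))  # e.g. "the table eats a cloudy book"
```

Note that the generator happily produces well-formed nonsense such as "the table eats a cloudy book", which is precisely the independence of syntax from semantics mentioned above.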
The phrase-structure approach to language is based on "immediate constituent analysis": a phrase structure is defined by the constituents of the sentence (noun phrase, verb phrase, etc).
Initially, Chomsky thought that a grammar needs to have a tripartite structure: a sequence of rules to generate phrase structure, a sequence of morpho-phonemic rules to convert strings of morphemes into strings of phonemes, and a sequence of transformational rules that transform strings with phrase structure into new strings to which the morpho-phonemic rules can apply.
Whatever the set of rules, the point was that analyzing language was transformed into a mechanical process of generating more and more formal strings, just like when trying to prove a mathematical theorem. The underlying principle was that all the sentences of the language (which are potentially infinite) could be generated by a finite (and relatively small) number of rules, through the recursive application of such rules. And this fit perfectly well with the Logic-based approach of Artificial Intelligence to simulating the mind.
Chomsky thought that two levels of language were needed: an underlying "deep structure", which accounts for the fundamental syntactic relationships among language components, and a "surface structure", which accounts for the sentences that are actually uttered. The latter gets generated by transformations of elements in the deep structure. For example, "I wrote this book" and "This book was written by me" use the same constituents ("I", "to write", "book") and such constituents are in the same relationship: but one is an active form and the other is a passive form. One gets transformed into the other. Their deep structure is the same, even if their surface structures are different. Many different sentences may exhibit the same deep structure.
The phrase structure produces the "deep structure" of a sentence. That needs to be supplemented by a transformational component and a morpho-phonemic component, which together transform the deep structure into the surface structure of the sentence (e.g. active or passive form).
Technically, the deep structure of a sentence is a tree (the "phrase marker"), that contains all the words that will appear in its surface structure. Understanding language, basically, consists in transforming surface structures into deep structures.
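As a toy illustration (the record format and the two helper functions below are our own encoding, not Chomsky's formalism), one deep structure can be mapped by two different transformations onto an active and a passive surface string:

```python
# One deep structure: the constituents and their syntactic relationships.
deep = {
    "agent":   "Piero",
    "verb":    ("write", "wrote", "written"),  # base, past, past participle
    "patient": "this book",
}

def to_active(d):
    """Transformation: realize the agent as the subject."""
    _, past, _ = d["verb"]
    return f"{d['agent']} {past} {d['patient']}"

def to_passive(d):
    """Transformation: realize the patient as the subject."""
    _, _, participle = d["verb"]
    return f"{d['patient']} was {participle} by {d['agent']}"

print(to_active(deep))   # Piero wrote this book
print(to_passive(deep))  # this book was written by Piero
```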
In Chomsky's "standard theory" a grammar is made of a syntactic component (phrase structure rules, lexicon and transformational component), a semantic component (that assigns a meaning to the sentence) and a phonologic component (which transforms it into sounds).
In the end, every sentence of the language is represented by a quadruple structure: the D-structure (the one generated by phrase-structure rules), the S-structure (obtained from the D-structure by applying transformational rules), the P-structure (a phonetic structure) and a "logical form". The logical form of a sentence is the semantic component of its representation, usually in the guise of a translation into first-order Predicate Logic of the "meaning" of the sentence. These four structures define everything there is to know about the sentence: which grammar rules it satisfies, which transformational rules yield its external aspect, which rules yield the sounds actually uttered by the speaker, and finally the meaning of what is said.
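For instance (the formula is our illustration, not Chomsky's own example), the logical form of "Piero wrote a book" might be rendered in first-order Predicate Logic as:

```latex
\exists x \,\big(\mathit{book}(x) \land \mathit{wrote}(\mathit{Piero}, x)\big)
```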
Chomsky's computational approach had its flaws. To start with, each Chomsky grammar is equivalent to a Turing machine, and, because of the undecidability of the halting problem, the processing of a Turing machine may never come to an end. Therefore, a grammar may never find the meaning of a valid sentence; but we have no evidence that our brain can fail, in this way, to ever find the meaning of a valid sentence in our language. Therefore, some conclude that Chomsky's grammars are not what our brain uses. Also, Chomsky had to explain how we can learn the grammar of our own language: if the grammar is computational in nature, as Chomsky thought, then it can be proved mathematically that no amount of correct examples of sentences is enough to learn a language. It would be mathematically impossible for a child to have learned the language she speaks!
An important assumption lies at the core of Chomsky's theory. Chomsky claimed that we have some innate knowledge of what a grammar is and how it works. Then experience determines which specific language (i.e., grammar) we will learn. When we are taught a language, we do not memorize each sentence word by word: eventually, we learn the grammar of that language, and the grammar enables us to both understand and utter more sentences than we have ever heard. Somehow our brains refuse to learn a language by memorizing all possible sentences: our brains tend to infer a grammar from all those sentences. Chomsky concluded that our brains are pre-wired to deal with grammars, that there exists some kind of universal linguistic knowledge.
The linguistic ability is as innate as arms: we do not learn to have arms, we just have them. Experience simply shapes them.
In the 1960s Chomsky introduced the concept of a "universal grammar" to defend his thesis that language is innate: we are born with a brain that is pre-wired to learn language; which language we learn depends on what sentences we are exposed to. Our brain is born with a "universal grammar", a set of universal rules that enable it to deal with language (as an abstract skill). Chomsky's point was that languages are impossibly difficult to learn: however, children routinely learn their home language in a few years. Therefore, their brain must be "ready" to acquire language in a way that no computer is. In other words, what the brain has to learn is not the whole concept of "language", but something smaller and simpler. If the brain contains a "universal grammar", then what we have to learn is not the whole concept of "language" but only the specifics of our home language.
Formally stated, Chomsky decomposes a user's knowledge of language into two components: a universal component (the "universal grammar"), which is the knowledge of language possessed by every human, and a set of parameter values and a lexicon, which together constitute the knowledge of a particular language. The ability to understand and utter language is due to the universal grammar that is somehow encoded in the human genome. A specific grammar is learned not in stages, as Jean Piaget thought, but simply by gradually fulfilling a blueprint that is already in the mind.
Children do not really "learn" language, for they make no deliberate effort. Language "happens" to a child. The child is almost unaware of the language acquisition process. Learning to speak is not different from growing, maturing and all the other biological processes that occur in a child. A child is genetically programmed to learn a language, and experience will simply determine which one. The way a child is programmed is such that all children will learn language the same way.
Language acquisition is not only possible: it is virtually inevitable. A child would learn to express herself even if nobody taught her a language.
Chomsky's belief in innate linguistic knowledge is supported by a mathematical theorem proved in 1967 by Mark Gold ("Language Identification In The Limit"): a language cannot be learned from positive examples only. A grammar could never be induced from a set of the sentences it is supposed to generate. But the grammar can correctly be induced (learned) if there is a (finite) set of available grammars to choose from. In that case the problem is to identify the one grammar that is consistent with the positive examples (with the known sentences), and then the set of sentences can be relatively small (the grammar can be learned quickly).
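A minimal sketch of that second scenario, with invented candidate grammars (each reduced, for simplicity, to the finite set of sentences it generates): the learner simply discards every candidate that fails to contain an observed sentence.

```python
# Three hypothetical candidate "grammars", each represented by its language.
CANDIDATES = {
    "G1": {"a", "ab", "abb"},
    "G2": {"a", "ab"},
    "G3": {"b", "ba"},
}

def learn(positive_examples):
    """Keep only the candidates consistent with every example seen so far."""
    viable = dict(CANDIDATES)
    for sentence in positive_examples:
        viable = {name: lang for name, lang in viable.items() if sentence in lang}
    return viable

print(learn(["a", "ab", "abb"]))  # only G1 survives
```

Note that after hearing only "a" and "ab", both G1 and G2 survive: from positive examples alone the learner can never rule out having landed on a subset of the target language, which is the crux of Gold's negative result.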
Later (in 1981) Chomsky (inspired by the USA linguist John Ross) advanced the theory of "government and binding" that would reduce differences between languages to a set of constraints, each one limiting the number of possible variants. Grammar develops just like any other organ in the body: an innate program is started at birth but it is conditioned by experience; still, it is constrained in how much it can be influenced by experience. An arm will be an arm regardless of what happens during growth, but frequent exercise will make its muscles stronger. Ditto for grammar. Growth is deterministic to some extent: its outcome can fluctuate, but within limits.
Wedding Biology and Linguistics
In the following years, a number of psychologists, linguists and philosophers corroborated the overall picture of Chomsky’s vision.
The USA linguist Ray Jackendoff thinks that the human brain contains innate linguistic knowledge and that the same argument can be extended to all facets of human experience: all experience is constructed by unconscious, genetically determined principles that operate in the brain. These same conclusions can be applied to thought itself, i.e. to the task of building concepts. Concepts are constructed by using some innate, genetically determined, machinery, a sort of "universal grammar of concepts". Language is but one aspect of a broader characteristic of the human brain.
According to the German-born linguist Eric Lenneberg, language should be studied as an aspect of our biological nature, in the same manner as anatomy. Chomsky's universal grammar is to be viewed as an underlying biological framework for the growth of language. Genetic predisposition, growth and development apply to language faculties just as they do to any other organ of the body. Behavior in general is an integral part of an organism's constitution.
Another implication of the standard theory (and particularly of its transformational component) concerns the structure of the mind. The transformations can be seen as corresponding to mental processes, performed by mental modules (as in Jerry Fodor's computational theory of the mind), each independent of the others and each guided by elementary principles.
The Canadian psychologist Steven Pinker believes that children are "wired" to pay attention to certain patterns and to perform some operations with words. All languages share common features, suggesting that natural selection favored certain syntactic structures. Pinker identified fifteen modules inside the human mind, organs that account for instincts that all humans share.
Our genetic program specifies the existence and growth of the "language organs", and those organs include at least an idea of what a language is. These organs are roughly the same for all humans, just like hands and eyes are roughly the same. This is why two people can understand each other even if they are using sentences that the other has never heard before.
In biological terms, the universal grammar is the linguistic genotype. Its principles are invariant for all languages. The values of some parameters can be "selected" by the environment out of all valid values. This pseudo-Darwinian process is similar to what happens with other growth processes. The model used by Gerald Edelman both in his study of the immune system (antigens select the appropriate antibodies out of those available) and in his study of the brain (experience selects the useful neural connections out of those available at birth) is quite similar.
A disturbing consequence of this theory is that our mental organs determine what we are capable of communicating, just like our arms or legs determine what movements we are capable of. Just like there are movements that our body cannot possibly make, there are concepts that our language can never possibly communicate.
Languages, not only language
A consequence of Chomsky’s hypothesis of the universal grammar is that all human languages (or at least their grammars) must be relatively similar, if they all sprung up from the same universal grammar that is genetically transmitted from brain to brain.
There must exist a relatively simple mechanism by which the brain constructs a specific grammar (say, English) out of the universal grammar. The USA linguist Mark Baker tried to identify the similarities among languages and what "parameters" determine whether you speak, say, English or French. Imagine a car that you could personalize so much that it could eventually look completely different from the car of your neighbor, while still being manufactured at the same plant. The "personalization" would consist in selecting "options" such as paint color, body shape, number of doors, etc. At each step of the assembly line, machines would obey one of the options and add a different touch to your car. Something similar occurs in the brain with language, according to Baker.
Baker imagines a "tree" (a hierarchy) of such linguistic parameters. The parameters are arranged according to their power to affect one another. At each junction in the hierarchy, a parameter (or more) determines a way to structure sentences. Below that junction, that parameter is fixed and other parameters are taken into account. Baker's hierarchy of linguistic parameters looks like the periodic table of elements or Carl Linnaeus' classification of animals and plants, but it is different in that it specifies how to "generate" such a classification.
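A sketch of the idea in Python (the two parameters and their effects are a drastic simplification, not Baker's actual hierarchy): a couple of binary switches on the same "assembly line" yield different basic word orders.

```python
# Two illustrative binary parameters, in the spirit of a parameter hierarchy.
def order_sentence(subject, verb, obj, head_first=True, subject_first=True):
    """Compose a sentence according to two word-order parameters."""
    vp = [verb, obj] if head_first else [obj, verb]  # verb-object order
    words = [subject] + vp if subject_first else vp + [subject]
    return " ".join(words)

print(order_sentence("Piero", "wrote", "this book"))                    # SVO, English-like
print(order_sentence("Piero", "wrote", "this book", head_first=False))  # SOV, Japanese-like
```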
Alas, Baker does not account for the evolution of languages. We do know that Italian evolved from Latin. Italian has wildly different grammatical rules from Latin. How did it happen that children born with a pre-wired brain and later brainwashed to acquire the parameters of the Latin language ended up changing those parameters? In fact, the one thing that seems to change a lot is the grammar itself. Words are relatively similar among languages of the same geographic area, but the grammar can be quite different (as any Italian who studied Latin in high school painfully remembers). Those changes occurred gradually over many generations.
It is possible that all languages descend from the same original language; and that children learn a language in the amount of time that it takes to learn a language, not any faster. These may be clues to another scenario for the origin of language.
No question the brain is born with a structure that facilitates survival on this planet. It is all too easy to claim that so many skills must be "innate": after all, we would not be alive if that were not the case. However, claiming that our brains are "pre-wired" to discover General Relativity would be an exaggeration. It took Einstein to discover General Relativity. Decades from now, every student on the planet may master General Relativity; that would not mean that General Relativity was inevitable for all human brains. There may have been an Einstein of communicating, who invented language. And we all might be simply re-learning, generation after generation, that marvelous invention.
The Language Instinct
Pinker talks of a "language instinct" rather than of a "universal grammar". He notices how inherently and naturally complex language is. It is virtually impossible for a community of human beings to develop a "simple" language. As Pinker puts it: "there are stone-age societies, but there is no such thing as a stone-age language". All languages, even in the most primitive societies, are complex, convoluted, redundant, full of exceptions, synonyms and ambiguities. His explanation is that humans are equipped with a "language instinct" that makes them reinvent language generation after generation.
The way children learn a language seems to be largely independent of what their parents teach them. For example, children use grammatical forms that they never heard from their parents. The ability of children to learn a language depends on their brains being biased towards some kinds of phrases (e.g., noun phrases) and being equipped with some kinds of procedures.
Pinker thinks that language is merely a (very limited) medium to pack our thoughts and broadcast them to our fellow humans. In the process, we lose most of the reasoning that went on in our brains. Pinker thinks that the mental representation of what we are saying is way more complex and subtle, and just cannot be expressed in words. Words are an effective way to deliver the essential part of the meaning in a reasonable amount of time. Our thoughts are not verbal: they are mental representations of the kind of Jerry Fodor's "mentalese". And this mentalese is a genetic fact: we inherit it when we inherit human genes. It is universal. We all reason the same way. We think in mentalese, not in English or Chinese. It is only when we have to pack information for another human being that we use the language of our community (e.g., English or Chinese), and in doing so we have to limit our message to what can be said in that language.
Pinker therefore sides with the school of the "physical symbolic processor": a mind is a purely syntactic processor of symbols, and the "intelligence" of that mind arises from processing the symbols, just like the Turing machine is capable of solving (almost) every problem by moving symbols around, without actually "knowing" what problem it is solving (or even that it is solving a problem).
Pinker's point is that mental representation is a powerful invention. That was Turing's fundamental discovery. Once we equip something with the ability to represent the world through symbols that can be processed in some logical way, that something is suddenly endowed with the power to solve even the most complex of problems. Mental representation sounds indeed miraculous. The symbols are not intelligent. The algorithm to process them is not intelligent. But the result is intelligent. In fact, the result "is" intelligence itself.
Language and thought have little in common, according to Pinker: "knowing a language is knowing how to translate mentalese into strings of words, and vice versa". People without a language are still thinking in mentalese.
Pinker notes that mentalese might actually be simpler than the languages we speak, because it doesn't have to deal with the oddities of spoken language (such as pronouns and indexicals) or with pronunciation. Presumably, Pinker thinks that the complexity of spoken languages evolved because it helps communicate the essential very quickly.
Phonetic perception itself is part of the "language instinct". Sentences are made of words, words of morphemes, and morphemes of phonemes. And phonemes are fundamentally different from the other components of language, because they do not combine according to a grammar: they represent the analogic-to-digital interface.
At some point in evolution, the mouth and the ear developed additional functions: to help utter sounds and to help "understand" sounds. Pinker claims that we can hear "words" where there are just sounds because phonetic perception is a sixth sense, another piece in the puzzle of the language instinct. Our brains are hardwired to recognize "meaningful" words out of a stream of "meaningless" sounds (there are actually no meaningless sounds, and, according to Pinker's own theory, words are not really meaningful, but Pinker uses "meaningful" as in "useful for the purpose of reacting to a sentence"). Pinker shows how we use an assembly of organs to create the sounds of sentences. Recognizing a phoneme is much more difficult than dealing with grammar, so much so that no machine has yet been built that can recognize speech the way the human brain does. First of all, there is hardly any separation between words when we speak: there is a continuous flow of sounds. Secondly, different speakers pronounce the same words in different ways. Thirdly, the same speaker can pronounce the same word in different ways (depending on whether she is sleepy or not, angry or not, in a hurry or not). Machines that try to recognize speech have to be "trained" to the voice of a particular speaker, and can generally recognize only a small subset of the vocabulary (usually only a dozen words, instead of the tens of thousands that the human brain recognizes effortlessly). Clearly, the wiring of the human brain is the secret to recognizing speech.
There are other features of the human brain that are difficult to replicate with a machine, and they all have to do with "analogic" versus "digital" reasoning. Uttering or listening to speech is a function very similar to grasping an object, another difficult act for a machine. The brain does not simply calculate distance and angle to program the movement of the arm. The movement of the arm, and of many other organs, from the joints of the fingers to the muscles of the eyes, is continuously refined as the movement itself takes place. A complex process of feedback makes sure that the movement achieves its goal. Pinker shows that something similar happens when we speak: a number of organs cooperate in making the sounds of words, and the sound wave is refined as it is being uttered; and vice versa when the sound wave is being heard. Thus speech belongs to the general class of "motor control". Just like with other kinds of motor control, the "expectation" contributes to the success of the operation. If one "expects" to hear something, then she is more likely to hear it. Since we expect a speaker to make sounds that are words in our language, we are more likely to detect them, even though the words are not coded in the exact same sound wave each and every time. Speech recognition is greatly facilitated if the hearer continuously guesses what the speaker is trying to say. The secret of speech is not in its "digital" workings (how the brain dissects and processes its constituents) but in its "analogic" workings (how the brain continuously readjusts the process to sculpt an output that matches the input).
If all humans are equipped with the same universal grammar, a legitimate question is why there are so many languages instead of just one. Pinker's answer is similar to the answer to the question of why there are so many species of animals if all animals are equipped with the same genetic code: it's the way evolution works, namely variation is an inherent element of evolution. Linguistic variation boosts cultural evolution the same way that genetic variation boosts biological evolution. New languages are born the same way that new species are born, through a process of variation, heredity and isolation. (This similarity had originally been pointed out by Darwin himself). Pinker does not elaborate on the linguistic equivalent of "natural selection", i.e. the role played by the "environment" (which, in the case of language, is the society of other speakers), but language too is subject to environmental pressure. If a child utters a meaningless sentence that brings no benefit (or is even harmful in achieving the goal), that sentence will die out. On the other hand, novel sentences or grammatical constructs or idiomatic expressions that turn out to be very effective are inherited by other speakers and spread throughout the population of speakers. This is the equivalent of what natural selection does to the organs of bodies.
Pinker calls it the "language instinct". Like all instincts, it must be implemented somewhere in the brain, and that implementation must be dictated by some genes of the human genome (and neurologists have shown that language is implemented mostly in the left hemisphere). That is why Pinker thinks that animals cannot speak language, "real" language: they lack the genes, and therefore they lack the brains. They can certainly be trained to recognize and react to certain sounds (although never with the same dexterity as a child) but they lack the "discrete combinatorial system" that would enable them to understand "other" sentences besides the ones they have been trained to react to. Children do not simply repeat the sentences that they have been taught: children come up with their own sentences. What children have learned is "language", not just a few words or a few sentences. That is what animals cannot learn. Animals can learn to react to the utterances "Kiss me" and "Dog", but they cannot understand the sentence "Kiss the dog". Even less likely is it that they can understand the sentence "Kiss her". And they are even less likely to reply to a simple question such as "Why?" Children, instead, rapidly learn to deal with sentences such as "Why aren't you nice to her?" even if they never heard those words before in that specific sequence, as long as they have learned what "nice" means, and what pronouns stand for, and what is expected by a "why".
Neurally speaking, Pinker thinks that human language is ultimately controlled by the neocortex, whereas animal "language" is controlled by the evolutionarily older structures in the brain stem and in the limbic system. Humans too have this primitive form of language controlled by the same ancient brain structures, but those are the kinds of sounds that cannot be combined combinatorially, for example a scream of terror or a burst of laughter. Human language is not a combination of these primitive sounds, but a different process altogether, of syntax, morphology and phonology, that takes place in an altogether different region of the brain.
Either because they did not agree with his vision of the human mind, or because they considered the limits of his grammars unnatural, or because they devised mathematically more efficient models, other thinkers rejected or modified Chomsky's theory.
One powerful idea that influenced many thinkers is that the deep structure of language is closer to the essence of concepts than to the syntax of a specific language. For example, "case-frame grammar" (developed in the late 1960s by the USA linguist Charles Fillmore) assumes that each sentence explicitly represents the relationships between concepts and actions. Fillmore shifted the emphasis towards "cases". Traditional cases (such as the ones used by German or Latin) are purely artificial, because the same sentence can be rephrased while altering the cases of its constituents: "Piero" in "Piero wrote this book" and "this book was written by Piero" appears in two different cases. But its role is always the same, regardless of how we write the sentence. That role is the real "case", in Fillmore's lingo. These cases are universal. They are not language-specific. My relationship towards the book I wrote is the same in every language of the world, regardless of how a specific syntax allows me to express that relationship. Fillmore concluded that a universal underlying set of case-like relations plays a fundamental role in determining syntactic and semantic relations in all languages.
In Fillmore's grammar, therefore, a sentence is represented by identifying such cases. Sentences that deliver the same meaning with different words, describing essentially the same scene, get represented in the same way, because they exhibit the same items in the same cases.
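A minimal sketch (the helper and the frame layout are ours, not Fillmore's notation): two surface forms of the same scene reduce to one and the same case frame.

```python
def case_frame(sentence):
    """Reduce 'X wrote Y' or 'Y was written by X' to a single case frame."""
    words = sentence.rstrip(".").split()
    if "by" in words:  # passive: "Y was written by X"
        return {"predicate": "write",
                "agent": " ".join(words[words.index("by") + 1:]),
                "object": " ".join(words[:words.index("was")])}
    cut = words.index("wrote")  # active: "X wrote Y"
    return {"predicate": "write",
            "agent": " ".join(words[:cut]),
            "object": " ".join(words[cut + 1:])}

a = case_frame("Piero wrote this book")
b = case_frame("this book was written by Piero")
assert a == b  # different surface forms, identical cases
```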
Fillmore's approach started a whole new school of thought.
Drawing from the Aristotelian classification of states, activities and eventualities, the USA linguist David Dowty proposed that the modal operators "do", "become" and "cause" be used as the foundations for building the meaning of every other verb. Within a sentence, the various words have "roles" relative to the verb. A thematic role is a set of properties shared by all the individual roles that fall under it. A thematic role can then be seen as the relationship that ties a term to an event or a state. And this allows one to build a mathematical calculus (a variant of the lambda calculus) on thematic roles.
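As a sketch (the nested-tuple encoding is ours, not Dowty's calculus), the verb "to open" might be decomposed with his operators as "x causes y to become open":

```python
# Dowty's three operators, encoded as nested tuples for illustration.
def do(agent, activity):  return ("DO", agent, activity)
def become(state):        return ("BECOME", state)
def cause(agent, event):  return ("CAUSE", agent, event)

def opens(x, y):
    """'x opens y' decomposed as: x CAUSEs y to BECOME open."""
    return cause(x, become(("open", y)))

print(opens("Piero", "the door"))
# ('CAUSE', 'Piero', ('BECOME', ('open', 'the door')))
```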
Likewise, Ray Jackendoff proposed that the meanings of all verbs be reduced to a few space-time primitives, such as "motion" and "location".
The culmination of this school was, in the 1970s, Roger Schank's "conceptual dependency" theory, whose tenet is that two sentences whose meaning is equivalent must have the same representation. This goal can be achieved by decomposing verbs into elementary concepts (or semantic primitives).
Sentences describe things that happen, i.e. actions. While every language provides for many different actions (usually expressed as verbs), it turns out that most of them can be defined in terms of simpler ones. For example, "to deliver" is a combination of "to move" and other simpler actions. In other words, a number of primitive actions can be used to form all complex actions. In analyzing language, one can therefore focus on those primitive actions. Or, equivalently, a verb can be decomposed in terms of more primitive concepts.
Each action entails roles which are common to all languages. For example, "to move" requires an agent who causes the move to happen, an object to be moved, an old location and a new location, possibly a timeframe, etc. Sometimes the roles are not explicitly stated in a sentence, but they can be derived from the context. Whether they are stated in the sentence or implicit in the context, those roles always exist. And they exist for every language that has that concept. "To move" may be translated with different words in different languages, but it always requires a mover, an object, etc. An important corollary is that any two sentences that share the same meaning will have exactly the same representation in conceptual dependency, regardless of how much is left implicit by each one, regardless of how each is structured. If they refer to the same mover and the same object and the same location and the same timeframe, two different sentences on "moving" will have identical representations.
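A simplified sketch in Python: PTRANS (change of location) and ATRANS (change of possession) are among Schank's actual primitive acts, but the dataclass and the decomposition of "to deliver" below are our illustration, not his notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Act:
    primitive: str   # e.g. "PTRANS" (change of location), "ATRANS" (change of possession)
    actor: str
    obj: str
    source: str
    destination: str

def deliver(actor, obj, source, destination):
    """Decompose 'to deliver' into primitive acts with explicit roles."""
    return (
        Act("PTRANS", actor, obj, source, destination),  # the object is moved
        Act("ATRANS", actor, obj, actor, destination),   # possession changes hands
    )

# "The courier delivered the parcel to Mary" and "Mary was delivered the
# parcel by the courier" would both be analyzed into the same pair of acts:
print(deliver("courier", "parcel", "depot", "Mary"))
```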
Conceptual dependency reveals things that are not explicit in the surface form of the utterance: additional roles and additional relations. They are filled in through one’s knowledge of lexical semantics and domain heuristics. These two components help infer what is true in the domain.
Conceptual dependency represented a major departure from "Chomskyan" analysis, which always remained relatively faithful to the way a sentence is structured. Schank's analysis treats the way words have been assembled as negligible and shifts the emphasis to what is being described. If nothing else, this is presumably closer to the way our memory remembers sentences.
In Chomsky's linguistic world, the "meaning" of a sentence was its logical form. At the end of the process of parsing a sentence, a logical translation would be produced which allowed for mathematical processing, and that logical form was considered to be the meaning of the sentence.
Unfortunately, syntax is ambiguous.
Sentences like "Prostitutes appeal to Pope" or "Soviet virgin lands short of goal again" (actual newspaper headlines reported by Keith Devlin) are "ambiguous". Language does that. In every language one can build a sentence that is perfectly valid but not clear at all. Solving ambiguities is often very easy. If the second sentence is encountered in the context of the development of Siberia, one may not even notice the ambiguity. The context usually solves the ambiguity.
A related problem is that of "anaphora". The sentence "He went to bed" is ambiguous in a different way but still ambiguous: technically speaking, "he" could be any of the 3 billion males who live on this planet. In practice, all we have to do is read the previous sentences to find out who "he" is. The context, again, helps us figure out the meaning of a sentence.
Not to mention expressions such as "Today is an important day" or "Here it is cold": when and where are these sentences occurring?
Because of linguistic phenomena like ambiguity and anaphora, understanding a discourse requires more than just figuring out the syntactic constituents of each sentence. In fact, even identifying the syntactic constituents may require more than syntax: is the "lies" in "Reagan wins on budget but more lies ahead" a noun (the plural of "lie") or a verb (the third person of "to lie")?
The scope of semantics lies beyond the single word and the way the words relate to each other.
In the 1960s the USA linguist Jerrold Katz provided one of the most extensive studies on semantics. His basic tenet is that two components are necessary for a theory of semantics. The first one is a dictionary, which provides for every lexical item (i.e., for every word) a phonological description, a syntactic classification ("grammatical marker", e.g. noun or verb) and a specification of its possible distinct senses ("semantic marker", e.g. "light" as in color and "light" as the opposite of heavy). The second one is a set of "projection rules" that determine how the meaning of a sentence can be composed from the meaning of its constituents. Projection rules, therefore, produce all valid interpretations of a sentence.
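A minimal sketch of the two components (the entries, the feature names and the single projection rule below are invented for illustration): the dictionary lists the senses of each word, and the projection rule composes only the senses whose restrictions match, thereby producing all and only the valid interpretations.

```python
# A toy Katz-style dictionary: each word maps to its possible senses.
LEXICON = {
    "light": [
        {"sense": "pale in color", "applies_to": "colored"},
        {"sense": "low in weight", "applies_to": "physical"},
    ],
    "feather": [{"sense": "feather", "features": {"physical"}}],
    "paint":   [{"sense": "paint",   "features": {"physical", "colored"}}],
}

def project(adjective, noun):
    """Projection rule: combine only senses whose restrictions match."""
    return [f"{a['sense']} {n['sense']}"
            for a in LEXICON[adjective]
            for n in LEXICON[noun]
            if a["applies_to"] in n["features"]]

print(project("light", "feather"))  # ['low in weight feather']: one reading
print(project("light", "paint"))    # two readings: the phrase is ambiguous
```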
The Math of Grammars
First-order Predicate Logic, the most commonly used logic, may be too limited to handle the subtleties of language.
The "generalized phrase-structure grammar" pioneered by the British linguist Gerald Gazdar, for example, makes use of "Intensional Logic", which is a variant of the "lambda calculus". Gazdar abandoned the transformational component and the deep structure of Chomsky's model and focused on rules that analyze syntactic trees rather than generate them. The rules translate natural-language sentences into an intensional-logic format. This way the semantic interpretation of a sentence can be derived directly from its syntactic representation. Gazdar defined 43 rules of grammar, each one providing a phrase-structure rule and a semantic-translation rule that shows how to build an intensional-logic expression from the intensional-logic expressions of the constituents of the phrase-structure rule. Gazdar's system was fundamentally a revision of Katz's system from Predicate Logic to Intensional Logic.
The USA logician Richard Montague developed the most sophisticated of the intensional-logic approaches to language. His intensional-logic system employed all sorts of logical tools: a type hierarchy, higher-order quantification, lambda abstraction for all types, tenses and modal operators; and its model theory was based on coordinate semantics.
In this version of Intensional Logic the sense of an expression determines its reference. The intensional-logic formula makes explicit the mechanism by which this can happen.
Reality consists of two truth values, a set of entities, a set of possible worlds and a set of points in time. A function space is constructed inductively from these elementary objects.
Montague’s logic determines the possible sorts of functions from possible "indices" (sets of worlds, times, speakers, etc.) to their "denotations" (or extensions). These functions represent the sense of the expression. In other words, sentences denote extensions in the real world. A name denotes the infinite set of properties of its reference. Common nouns, adjectives and intransitive verbs denote sets of individual concepts, and their intensions are the properties necessarily shared by all those individuals.
Through a rigorously mechanical process, a sentence of natural language can be translated into an expression of intensional logic. The model-theoretic interpretation of this expression serves as the interpretation of the sentence.
Rather than defining a semantic interpretation directly on syntactic structures, Montague provides the semantic interpretation of a sentence by showing how to translate it into formulas of Intensional Logic and how to interpret semantically all formulas of that logic.
Montague assigns a set of basic expressions to each category and then defines 17 syntactic rules to combine them to form complex phrases. The translation from natural language to Intensional Logic is then performed by employing a set of 17 translation rules that correspond to the syntactic rules. Syntactic structure determines semantic interpretation.
Montague's work was based on the idea of "categorial grammars" pioneered by the Israeli logician Yehoshua Bar-Hillel ("A Quasi-arithmetical Notation for Syntactic Description", 1953), the man who had organized the first conference on Machine Translation in 1952. Categorial grammar is built up from a few primitive categories, such as the sentence and the noun phrase, from which all other categories are derived. A sentence is composed of a noun phrase (Piero, the red apple, the president of the USA) and a verb phrase (wrote this book, is rotting, has canceled his trip). The categories can be related in an arithmetic way by using the same rules as fractions: a verb phrase VP is the sentence S divided by the noun phrase NP, so that a VP combined with an NP yields an S, just as a fraction cancels. Categorial grammars provide a unity of syntactic and semantic analyses.
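A sketch of the fraction-like cancellation (the category assignments are invented, and the directionality of the "division" is ignored for simplicity):

```python
# A category is either a primitive ("S", "NP") or a pair (result, argument),
# read as the "fraction" result/argument.
CATEGORIES = {
    "Piero":     "NP",
    "this book": "NP",
    "sleeps":    ("S", "NP"),          # S/NP: awaiting one noun phrase
    "wrote":     (("S", "NP"), "NP"),  # (S/NP)/NP: a transitive verb
}

def combine(functor, argument):
    """Cancel the argument, as in (S/NP) * NP = S."""
    result, expected = functor
    if argument != expected:
        raise ValueError("categories do not cancel")
    return result

vp = combine(CATEGORIES["wrote"], CATEGORIES["this book"])  # (S/NP)/NP * NP = S/NP
s  = combine(vp, CATEGORIES["Piero"])                       # S/NP * NP = S
print(s)  # "S": a well-formed sentence
```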
Montague's semantics is truth-conditional (to know the meaning of a sentence is to know what the world must be for the sentence to be true, or the meaning of a sentence is the set of its truth conditions), model-theoretic and uses possible worlds (the meaning of a sentence depends not just on the world as it is but on the world as it might be, i.e. on other possible worlds).
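As a toy illustration of truth-conditional, model-theoretic interpretation (the one-world model and all the denotations below are invented; Montague's actual system adds possible worlds, times and intensions):

```python
# A tiny model: a domain of entities and the extension of one predicate.
ENTITIES = {"piero", "book"}
SLEEPERS = {"piero"}

meanings = {
    "Piero":  "piero",                  # a name denotes an entity
    "sleeps": lambda x: x in SLEEPERS,  # a verb denotes a set, via its membership test
    "thing":  lambda x: True,
    "every":  lambda restr: lambda scope: all(scope(e) for e in ENTITIES if restr(e)),
}

# Composition is function application, driven by the syntax:
print(meanings["sleeps"](meanings["Piero"]))                     # True:  "Piero sleeps"
print(meanings["every"](meanings["thing"])(meanings["sleeps"]))  # False: "everything sleeps"
```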
The Polish-born linguist Anna Wierzbicka was the originator of "natural semantics" (the approach that developed into the Natural Semantic Metalanguage).
She regarded language as a tool to communicate meaning, and semantics as the study of meaning encoded in language. To her syntax is a piece of semantics.
Corresponding to the three types of tools employed by language to convey meaning (words, grammatical constructions and "illocutionary" devices), Linguistics can be divided into lexical semantics, grammatical semantics and illocutionary semantics. To her the division into syntax, semantics and pragmatics makes no sense because every element and aspect of language carries meaning. Meaning is an individual's interpretation of the world. It is subjective and depends on the social and cultural context. Therefore, semantics encompasses lexicon, grammar and illocutionary structure.
Wierzbicka’s project was to build semantics from elementary concepts. There exists a broad variety of semantic differences among languages (even emotions seem to be cultural artifacts), but she identified a few semantic primitives shared by all languages. Such universal semantic primitives make up a semantic meta-language that can be used to explicate all other concepts in all languages.
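A minimal sketch of the program: a stock of universal primes and "explications" built out of them. The list of primes is heavily abbreviated, and the explications are invented paraphrases, not Wierzbicka's published ones:

    # A (heavily abbreviated) stock of universal semantic primes.
    primes = {"I", "you", "someone", "something", "good", "bad",
              "want", "feel", "think", "know", "happen", "not"}

    # Explications paraphrase a complex concept using only primes.
    # These two are invented for illustration.
    explications = {
        "happy": ["I", "feel", "something", "good"],
        "sad":   ["I", "feel", "something", "bad"],
    }

    def is_well_formed(word):
        """In this toy version, an explication may use only primes
        (the full theory also allows previously explicated words)."""
        return all(token in primes for token in explications[word])

    print(is_well_formed("happy"))   # True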
The USA linguist Ronald Langacker, one of the originators of Cognitive Linguistics (whose first conference was organized in 1989), reacted against the prevailing view that language is a self-contained system that can be studied in isolation. He opposed the view that grammar was distinct from lexicon and semantics, and that the meaning of a sentence could be expressed in mathematical logic. Langacker believed that language cannot be separated from cognition, that semantics is about concepts, and that semantic analysis is conceptual analysis. Ultimately, he believed that language is psychology and neurology as much as it is linguistics.
Noting that grammar is simply a way to refer symbolically to concepts, i.e. that grammar is a symbolic element connecting phonology (the sounds of speech) and concepts, Langacker recast grammar as an extension of the lexicon. Grammar is an "inventory of symbolic resources". Grammatical units have a meaning, just like the items of a lexicon have a meaning. This "meaning" cannot be merely a truth condition or a combination thereof, because it is related to the whole cognitive process of understanding/speaking language, i.e. to a cognitive domain.
For example, the class of nouns refers to a kind of cognitive processing, and that is its meaning, whereas the class of verbs refers to a different kind of cognitive processing, and that is its meaning. And different classes of nouns (e.g., count nouns as opposed to mass nouns) refer to different kinds of "noun" cognition.
Any item in the lexicon (any word) refers to a kind of cognitive processing, which is its meaning.
Langacker admits only three kinds of units: semantic (the concepts), symbolic (grammar, lexicon, morphology) and phonological (the sounds). The symbolic units connect units of the other two kinds.
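A minimal sketch of this architecture, assuming that a symbolic unit can be represented as nothing more than a pairing of a phonological pole with a semantic pole; all the entries below are invented labels, not Langacker's notation:

    from dataclasses import dataclass

    @dataclass
    class SymbolicUnit:
        phonological: str   # the sound side
        semantic: str       # the conceptual side (here, just a label)

    # Lexicon, morphology and grammar all live in one inventory:
    inventory = [
        SymbolicUnit("apple", "APPLE-CONCEPT"),           # a lexical item
        SymbolicUnit("-s", "PLURAL-CONSTRUAL"),           # a morpheme
        SymbolicUnit("NP VP", "EVENT-WITH-PARTICIPANT"),  # a construction
    ]

    # A grammatical pattern is "meaningful" in exactly the same way a
    # word is: both are symbolic links between sound and concept.
    for unit in inventory:
        print(f"{unit.phonological!r} symbolizes {unit.semantic}")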
At the same time, the form used to construct the concept is itself "meaningful". One can express the same content using many different forms of language. Langacker used the term "imagery" to refer to how content is structured. By definition, a grammar forces constraints on the "images" that content can assume: each grammar limits the universe of imagery that is available to the language user.
Rather than on sentences and grammatical rules, Langacker’s grammar is built on image schemas, which are schemas of visual scenes.
Again, only a semantic and a phonological component are necessary, mediated by a symbolic component. This approach directly reflects the semiological function of language: to build symbols for concepts (semantics) by means of sounds (phonology). Grammar reduces to these symbolic relationships between semantic structures and phonological structures.
A speaker's linguistic knowledge is contained in a set of cognitive units, which originate through the reinforcement of recurring features (or "schematization"), i.e., in neural terms, through recurring patterns of neural activity. These units are therefore grounded in daily experience and are employed by speakers in automatic fashion: a unit is a whole that does not need to be broken down into constituents in order to be used. Phonological units, for example, range from the basic sounds of a language (such as the "t" of English or the "r" of French) to familiar phrases and proverbs.
Units form a hierarchy, a schema being instantiated in sub-schemas. A linguistic category may be represented by a network of quite dissimilar schemas, clustered around a prototype. A grammar is but an inventory of such units.
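A minimal sketch of such a category, assuming (as in prototype theory generally, on which this picture draws) that membership is a matter of similarity to a prototype rather than of necessary and sufficient conditions; the features and examples are invented:

    # A category as a cluster around a prototype, not a checklist.
    prototype_bird = {"flies": 1, "feathers": 1, "sings": 1}

    def similarity(instance, prototype):
        shared = sum(1 for k, v in prototype.items() if instance.get(k) == v)
        return shared / len(prototype)

    robin   = {"flies": 1, "feathers": 1, "sings": 1}
    penguin = {"flies": 0, "feathers": 1, "sings": 0}

    print(similarity(robin, prototype_bird))     # 1.0   -- a central member
    print(similarity(penguin, prototype_bird))   # ~0.33 -- a peripheral member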
Nouns and verbs are central to grammatical structure because of the archetypal status of a cognitive model whose elements are space, time, matter and energy: a world in which discrete physical objects move around in space thanks to some form of energy, in particular energy acquired through interactions with other objects. Matter spreads over space and energetic interactions occur over time. Objects and interactions are instantiated, respectively, in space and time, and they are held to be the prototypes, respectively, for the grammatical categories of noun and verb. These categories differ primarily in the way they construe a situation, i.e. their primary semantic value is "imagic": it has to do with the capability to construe situations.
Langacker took issue with the "Chomskyan" view that language is an infinite set of well-formed sentences, or any other algorithm-generated set. To him, a language is a psychological phenomenon that ultimately resides in neural activity. Chomsky's generative grammar is merely a Platonic ideal.
A similar change in perspective was advocated by the French linguist Gilles Fauconnier.
Fauconnier's focus was on the interaction between grammar and cognition, i.e. on the interaction between syntax/semantics and "mental spaces". The mind is capable of making connections between domains, and Fauconnier investigated the kinds of cognitive connections that are possible: pragmatic functions (such as the one between an author and her book), metonymy, metaphor, analogy, etc. Some domains are cognitively accessible from others, and meaning is to be found in these interactions.
A basic tenet of Fauconnier's theory is that linguistic structure reflects not the structure of the world but the structure of our cognitive life.
The idea is that, as the speaker utters one sentence after the other, she is in fact constructing mental spaces and the links among them, resulting in a network of mental spaces. Language builds the same kind of mental spaces from the most basic level of meaning construction all the way up to discourse and reasoning. While logic-based semantics (whether Chomsky’s or Montague's) assumed that language provides a meaning that can be used for reasoning, Fauconnier maintained that mental spaces facilitate reasoning.
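A minimal sketch of the mechanism: spaces as simple collections of elements, with connectors licensing access from one space to another. The author/book connector (as in reading "Piero is on the top shelf" to mean the book, not the person) is an invented example of Fauconnier's pragmatic functions:

    # Two mental spaces built up during discourse.
    spaces = {
        "authors": {"piero": "a person"},
        "books":   {"this_book": "a physical volume"},
    }

    # A pragmatic-function connector links an author to his book, so a
    # description of one can be used to identify the other (metonymy).
    connectors = {("authors", "piero"): ("books", "this_book")}

    def access(space, element):
        """Follow a connector into another space, if one exists."""
        return connectors.get((space, element), (space, element))

    # "Piero is on the top shelf": 'Piero' here identifies the book.
    print(access("authors", "piero"))   # ('books', 'this_book')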
Furthermore, mental spaces allow for alternative views of the world. Fauconnier thinks that the mind needs to create multiple cognitive spaces in order to engage in creative thought.
The USA linguist George Lakoff is critical of Chomsky's theory on philosophical grounds: Chomsky's theory belongs to the old logical-analytical tradition, because Chomsky embraced logical formalism and several of Descartes' assumptions while neglecting how thinking and language rest on bodily experience. Lakoff believes neither in an innate, universal grammar nor in a structure of language that is independent of meaning.
Lakoff's "cognitive linguistics" rests on the opposite assumption that language (like anything else in mental life) is grounded in our bodily experience. Language is embodied, which means that its structure reflects our bodily experience. Syntax is a consequence (not a prerequisite) of concepts. Our bodily experience creates concepts that are then abstracted into syntactic categories. Syntax is a direct consequence of our bodily experience, not an innate property. It is shared (to some degree) by all humans for the simple reason that we all share roughly the same bodily experience.
In 1986 Noam Chomsky, aware of the shortcomings of his generative theory of language, introduced a new theory of language, the theory of "Principles and Parameters", which he later (1995) developed into the "Minimalist Program".
Chomsky recognized that there might be no universal grammar, just a circuit in the brain that is more or less plastic: change the connections (the "parameters") and you get one or the other language. Instead of innate knowledge of language, Chomsky proposed that the brain comes equipped with a virtually infinite set of concepts. There are no "rules" of grammar as such, but there are associations between sounds and concepts: we learn a concept when we make the connection with a sound. Basically, we "rediscover" concepts that we have always unconsciously known (they have been in our mind since, presumably, prehistoric times).
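A minimal sketch of the "plastic circuit" idea: one mechanism, a handful of switches. The two parameters below (head direction and pro-drop) are standard textbook examples, but the mini-generator itself is an invented illustration, not Chomsky's formalism:

    # One generator; flipping parameters yields different "languages".
    def generate(subject, verb, obj, head_initial=True, pro_drop=False):
        if pro_drop:
            subject = ""            # the subject may go unexpressed
        phrase = [verb, obj] if head_initial else [obj, verb]
        return " ".join(w for w in [subject] + phrase if w)

    print(generate("I", "eat", "apples"))                      # English-like:  'I eat apples'
    print(generate("I", "eat", "apples", head_initial=False))  # Japanese-like: 'I apples eat'
    print(generate("I", "eat", "apples", pro_drop=True))       # Italian-like:  'eat apples'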
Chomsky basically claimed that the "rules" of grammar are only a consequence, a side-effect, of the way language works. One could come up with a set of rules for how a muscle or a stomach works, but that does not mean the brain stores rules on how to run the muscle or the stomach: the rules are merely a way to describe what actually happens. The key is to discover the mechanism that generates those apparent "rules" of behavior (and their countless exceptions).
Most linguists simply neglect history and the fact that we are a species capable of learning and of transmitting knowledge. Were we a species that did not change over the centuries, Chomsky’s original theory of language might have worked just fine. Alas, we keep changing our culture and our behavior, and we instruct our children to maintain our changes. Whatever human phenomenon we observe, we are bound to be confused by our own tampering with it over the millennia. There might indeed be simple mechanisms that explain language, but those mechanisms are probably perturbed by the fact that humans continuously change their own culture, including their own language. Thus, at every point in time, one can find countless exceptions to every rule. Those exceptions are probably a sign that language is a work in progress, changing as we observe it. Imagine having to study the behavior of a machine while the machine is being dismantled and rebuilt: that is what we do when we study any human phenomenon.
A study of the history of language might show that there are many more regularities than one supposes. Irregular verbs probably have a reason to be what they are (they may have been regular in the past, according to a long-forgotten rule). Words may be derived from very simple sounds. Idiomatic expressions may be based on bodily features. And so forth. If one studies the history of a language, there might be simple explanations for every "odd" feature of it.
Language as a By-product
The USA developmental psychologist Elizabeth Bates pointed out that the development of language occurs while many other cognitive faculties are developing. She believes that language is not "one" isolated phenomenon but the result of a number of cognitive developments, each of which affects more than one cognitive faculty and the sum of which accounts for the development of all cognitive faculties, including language. In other words, there is no program for learning to speak, but there are several programs for learning several skills, which, together, enable "also" language. For example, we learn to play chess, but that does not mean that a program to play chess is present in our genetic information. Playing chess requires a number of skills, shared with many other tasks, that are enabled by our genetic information.
According to Bates, there is no "universal grammar" à la Chomsky. There is a global development of interconnected cognitive skills.
The USA linguist Donald Loritz has argued that rhythm is the "central organizing mechanism" of language. He showed that the sequence in which a child learns both phonology and morphology is based on the development of rhythms, and that this phenomenon has a neural basis.
Loritz notes that children learn to walk before they learn to talk, and that their talking improves dramatically after they have learned to walk. He argues that walking introduces a "rhythmic dipole" into the child's brain, which has the effect of organizing the child's sounds: "babbling gets rhythm and becomes speech".
Rejecting Chomsky's generative grammar, Loritz proposed an "adaptive grammar", which is built around the "topic". Neurally speaking, the topic is the most active resonance. Loritz claims that discourse and sentences are built around the topic: "syntax is ordered by topicality". This is also how grammar is grounded in reality: it is built around a reality that is active in the brain in the form of a resonance.
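A minimal sketch of "syntax ordered by topicality", assuming that each constituent carries an activation level (its "resonance") and that the most active one is promoted to the front of the sentence; the activation values are invented:

    def order_by_topicality(constituents):
        """Place the most active constituent (the topic) first,
        keeping the rest in their given order."""
        topic = max(constituents, key=lambda c: c[1])
        rest = [c for c in constituents if c is not topic]
        return [topic[0]] + [c[0] for c in rest]

    # After a stretch of discourse about the apple, "the apple" is the
    # hottest resonance and gets topicalized:
    constituents = [("Piero", 0.2), ("ate", 0.1), ("the apple", 0.9)]
    print(order_by_topicality(constituents))  # ['the apple', 'Piero', 'ate']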
Loritz argues that this rhythm-driven, topic-centered organization may recapitulate how language originated in the first place.
Allen, James: NATURAL LANGUAGE UNDERSTANDING (Benjamin Cummings, 1995)
Bach, Emmon: CATEGORIAL GRAMMARS (Reidel, 1988)
Baker, Mark: THE ATOMS OF LANGUAGE: THE MIND'S HIDDEN RULES OF GRAMMAR (Basic Books, 2001)
Bar-Hillel, Yehoshua: LANGUAGE AND INFORMATION (Addison Wesley, 1964)
Bates, Elizabeth: THE EMERGENCE OF SYMBOLS (Academic Press, 1979)
Bresnan, Joan: MENTAL REPRESENTATIONS OF GRAMMATICAL RELATIONS (MIT Press, 1982)
Chierchia, Gennaro: MEANING AND GRAMMAR (MIT, 1990)
Chierchia, Gennaro: DYNAMICS OF MEANING (Univ of Chicago Press, 1995)
Chomsky, Noam: SYNTACTIC STRUCTURES (Mouton, 1957)
Chomsky, Noam: ASPECTS OF THE THEORY OF SYNTAX (MIT Press, 1965)
Chomsky, Noam: REFLECTIONS ON LANGUAGE (Pantheon, 1975)
Chomsky, Noam: THE LOGICAL STRUCTURE OF LINGUISTIC THEORY (University of Chicago Press, 1975)
Chomsky, Noam: RULES AND REPRESENTATIONS (Columbia Univ Press, 1980)
Chomsky, Noam: LECTURES ON GOVERNMENT AND BINDING (MIT Press, 1981)
Chomsky, Noam: KNOWLEDGE OF LANGUAGE (Greenwood, 1986)
Devlin, Keith: GOODBYE DESCARTES (John Wiley, 1997)
Dowty, David: WORD MEANING AND MONTAGUE GRAMMAR (Reidel, 1979)
Dowty, David: INTRODUCTION TO MONTAGUE SEMANTICS (Reidel, 1981)
Fauconnier, Gilles: MENTAL SPACES (MIT Press, 1994)
Fauconnier, Gilles & Eve Sweetser: SPACES, WORLDS, AND GRAMMAR (Univ of Chicago Press, 1996)
Gazdar, Gerald: GENERALIZED PHRASE STRUCTURE GRAMMAR (MIT Press, 1985)
Goddard, Cliff & Wierzbicka, Anna: SEMANTIC AND LEXICAL UNIVERSALS (Benjamins, 1994)
Jackendoff, Ray: SEMANTICS AND COGNITION (MIT Press, 1983)
Jackendoff, Ray: SEMANTIC STRUCTURES (MIT Press, 1990)
Jackendoff, Ray: LANGUAGES OF THE MIND (MIT Press, 1992)
Katz, Jerrold: AN INTEGRATED THEORY OF LINGUISTIC DESCRIPTIONS (MIT Press, 1964)
Katz, Jerrold: SEMANTIC THEORY (Harper & Row, 1972)
Katz, Jerrold: THE METAPHYSICS OF MEANING (MIT Press, 1990)
Korzybski, Alfred: SCIENCE AND SANITY - AN INTRODUCTION TO NON-ARISTOTELIAN SYSTEMS AND GENERAL SEMANTICS (1933)
Lakoff, George: PHILOSOPHY IN THE FLESH (Basic, 1998)
Langacker, Ronald: FOUNDATIONS OF COGNITIVE GRAMMAR (Stanford Univ Press, 1986)
Lehnert, Wendy: STRATEGIES FOR NATURAL LANGUAGE PROCESSING (Lawrence Erlbaum, 1982)
Lenneberg, Eric: BIOLOGICAL FOUNDATIONS OF LANGUAGE (Wiley, 1967)
LePore, Ernest: NEW DIRECTIONS IN SEMANTICS (Academic Press, 1987)
Levinson, Stephen: PRAGMATICS (Cambridge Univ Press, 1983)
Loritz, Donald: HOW THE BRAIN EVOLVED LANGUAGE (Oxford Univ Press, 1999)
Lycan, William: LOGICAL FORM IN NATURAL LANGUAGE (MIT Press, 1984)
Montague, Richard: FORMAL PHILOSOPHY (Yale University Press, 1974)
Nelson, Katherine: LANGUAGE IN COGNITIVE DEVELOPMENT (Cambridge University Press, 1996)
Pinker, Steven: THE LANGUAGE INSTINCT (William Morrow, 1994)
Pinker, Steven: HOW THE MIND WORKS (Norton, 1997)
Sapir, Edward: LANGUAGE (1921)
Schank, Roger: CONCEPTUAL INFORMATION PROCESSING (North Holland, 1975)
Searle, John: SPEECH ACTS (Cambridge Univ Press, 1969)
Searle, John: EXPRESSION AND MEANING (Cambridge Univ Press, 1979)
Van Benthem, Johan: A MANUAL OF INTENSIONAL LOGIC (Univ Of Chicago Press, 1988)
Van Benthem, Johan: LANGUAGE IN ACTION (MIT Press, 1995)
Vygotsky, Lev: THOUGHT AND LANGUAGE (MIT Press, 1964)
Whorf, Benjamin Lee: LANGUAGE, THOUGHT AND REALITY (MIT Press, 1956)
Wierzbicka, Anna: SEMANTIC PRIMITIVES (1972)
Wierzbicka, Anna: THE SEMANTICS OF GRAMMAR (Benjamins, 1988)
Wierzbicka, Anna: SEMANTICS, CULTURE, AND COGNITION (Oxford University Press, 1992)