It is commonly accepted that language has both a natural and a cultural dimension. However, the main controversies in current linguistic theory concern the relative weight given to nature and nurture in the characterisation of human language. Here I suggest an alternative, conciliatory strategy for delimiting the natural and cultural dimensions of language.
Such an alternative is based on the intuition that there could be a correlation between, on the one hand, the natural and cultural factors in human language and, on the other, the various components that have been recognized in the study of human languages.
I will sketch a model of the relationship between lexicon and syntax according to which syntax is universal (invariable in time and space) and, therefore, a solid candidate for representing the natural conditioning of language, while the lexicon reflects the historical and cultural dimension of human languages. To the extent that this model is empirically correct, it can be considered a way of clarifying old and noxious controversies in linguistic theory.
Denying a natural, biological dimension to human language would amount to repeating Descartes' dualistic error. In other words, it makes no sense to deny that language is an attribute of the human brain, just like memory, vision or emotion. As Antonio Damasio pointed out, the human brain, as an organ of the body, is a subtle mixture of innate dispositions and development through experience:
“At birth, the human brain comes to development endowed with drives and instincts that include not just a physiological kit to regulate metabolism but, in addition, basic devices to cope with social cognition and behavior. […] Yet there is another role for these innate circuits which I must emphasize because it usually is ignored in the conceptualization of the neural structures supporting mind and behavior: Innate circuits intervene not just in bodily regulation but also in the development and adult activity of the evolutionarily modern structures of the brain” (Damasio 1994: 126, 110).
Modern neuroscience has shown that memory, vision and emotion are capacities that cannot be explained without an innate bias in the development of the brain tissues that make them possible. In the context of modern cognitive science, assuming that language is an exception might well be considered surprising, and indeed suspicious. However, the traditional claim that language is a social institution (Saussure 1916), rekindled in recent decades by the insistence on the essentially external, cultural nature of languages, seems to point in that direction. Thus, for example, Christiansen & Chater, in a paper entitled The language faculty that wasn’t, state:
“It is time to return to viewing language as a cultural, and not a biological, phenomenon”.
However, the faculty of language, whose initial state (prior to experience) is called Universal Grammar (UG) in the Chomskyan generative tradition, exists by definition. As Chomsky has pointed out:
“To say that ‘language is not innate’ is to say that there is no difference between my granddaughter, a rock and a rabbit. In other words, if you take a rock, a rabbit and my granddaughter and put them in a community where people are talking English, they’ll all learn English” (Chomsky 2000: 50).
Note that such a statement, despite appearances, is neither a simplification nor a provocation. It simply insists on a crucial idea that has been misinterpreted: that the capacity for language is innate does not imply that there must be language genes or linguistic neurons, but rather that human beings have a unique ability to acquire the languages of their environment. Since immersion in a linguistic environment is not enough for language to develop in nonhuman (natural or artificial) organisms, there must be something in human children that differentiates them from other organisms. The relevant question, then, is not whether UG exists, but what its properties are and where they come from.
Those who reject the existence of UG argue that the development and use of language can be explained by general principles of human cognition that are not specific to language. Thus, for example, Tomasello (2009) rejects the existence of a human faculty of language and claims that the restrictions that the human brain imposes on the structure and nature of languages are general, not specifically linguistic:
“For sure, all of the world’s languages have things in common. But these commonalities come not from any universal grammar, but rather from universal aspects of human cognition, social interaction, and information processing – most of which were in existence in humans before anything like modern languages arose” (Tomasello 2009: 471).
But here we are faced with a false problem. Note that Tomasello invokes “universal aspects of human cognition”, that is, principles of cognition that are common to all human beings and specific to them. But, since human beings are the only organisms that develop knowledge of language, then it is very difficult to differentiate those “universal principles of human cognition” from the notion of UG, since it is defined as “a characterization of the child’s pre-linguistic initial state” (Chomsky 1981: 7), that is, as a part of human nature.
What is truly relevant here is the unquestionable fact that some sort of biological conditioning determines the course of language development and the subsequent structure of the knowledge systems that we call human languages. Of course, it is debatable whether the principles that form UG are language-specific or if they are the same ones that underlie other human cognitive systems (and, of course, it is debatable whether the term universal grammar is adequate), but this discussion soon becomes sterile in the absence of a detailed specification of what those principles are, and also in the absence of a definition of what language is, and what languages are. And it is precisely here, in the conception of what a language is, that we really find a discrepancy, more than as to whether or not there is a biological conditioning for language.
A diagnosis and a proposal
According to the Chomskyan point of view, a specific language is a particular state of the faculty of language (FL), that is, it is a system of knowledge (usually known as internal language, I-language); for the opposite point of view, a language is an external system, a cultural object that the brain is able to assimilate and represent internally. This difference is at the root of the different assessment that both traditions make of the relative weight that nature and culture have in explaining the structure of languages.
In my opinion, the error of the externalist approach consists in artificially separating the universal aspects of human cognition from the faculty of language, that is, assuming that there can be no universal linguistic aspects of human cognition. But why should there not be universal linguistic aspects of human cognition, since only humans (and all humans) can learn languages?
I think that the rejection of this possibility is related (at least in part) to the fact that the externalist approach operates with an inductive conception of language. In such a conception, language is not a cognitive capacity, but a theoretical construct derived from the comparison of languages and the establishment of generalizations, in a Greenbergian manner. If language is induced from specific languages, then, paradoxically, the species-specific ability to learn languages has to be considered non-linguistic by definition.
Note that in the previous quote from Tomasello this is clearly expressed: languages, he says, emerged after the existence of these general cognitive conditioning factors, that is, languages are considered as independent cultural phenomena that (more or less) adapt to the format required by human brains and their general cognitive principles. Yet from a cognitive (internalist) perspective such a view is unsustainable. It would be tantamount to saying that the skin of an animal, for example an elephant, is an external object that adapts to the shape of the elephant’s body. Obviously it is true that the shape of the skin depends on the shape of the elephant’s body, but this does not permit us to ignore the fact that the elephant’s skin is part of its body, and not an external object that has adhered and adapted to it.
The following table summarizes the main discrepancies regarding the nature of language presented by the two models (identified here as internalist versus externalist views):
| Language  | Internalist view           | Externalist view          |
|-----------|----------------------------|---------------------------|
| Origin    | Natural / biological       | Cultural                  |
| Location  | Internal / Individual      | Interiorized / Collective |
| Variation | Superficial / Universality | Deep / Relativism         |
I have suggested that the source of this discrepancy is an incomplete conception of what language is from the externalist point of view. The vision is incomplete because the possibility of the existence of ‘general linguistic principles of human cognition’ is excluded. The idea that I want to introduce now is that this biased view of language could actually be due to a misunderstanding about what the term language (in both its mass and its count senses) refers to. In my opinion, the externalist tradition inappropriately identifies a language with what is actually a part of a language, more specifically, with the “lexicon” of a language. Of course, for this statement to make sense, the term lexicon has to be defined more precisely (see below). But before doing so, it is worth noting that if this diagnosis is reasonable, then the same table that illustrates deep discrepancies about the nature of language can also show differences between the two main components of languages, as can be seen in the following version, in which only the column titles have been changed:
| Component | Syntax                     | Lexicon                   |
|-----------|----------------------------|---------------------------|
| Origin    | Natural / biological       | Cultural                  |
| Location  | Internal / Individual      | Interiorized / Collective |
| Variation | Superficial / Universality | Deep / Relativism         |
In what follows I will suggest a model of the architecture of the faculty of language consistent with what is reflected in the second table, that is, a model according to which the division of labour between the natural/biological and the cultural factors in human language matches two different components of the language faculty: syntax and the lexicon. The former, syntax, will be considered mainly natural, innate, internal to the mind/brain, shared across languages and invariable (universal), while the latter, the lexicon, will be considered mainly cultural, learned from the environment, internalised but collective, and variable across linguistic communities.
The lexicon as an interface for language externalization
According to the influential model proposed by Hauser et al. (2002), the human faculty of language (FL) can be conceived of as a complex system minimally composed of three independent components: a conceptual-intentional (CI) system, related to meaning and interpretation; a sensorimotor (SM) system, related to the perception and production of linguistic signals; and a computational system (CS), syntax in the narrow sense, responsible for the creation of the recursive and productive hierarchical syntactic structures that underlie linguistic expressions.
The model leaves some interesting open questions, such as how the various components of FL are related to each other, and which component or components reflect language diversity. In later work, Chomsky (2007) has suggested that the relationship between the computational system and the two other components (CI and SM systems) is asymmetric, in the sense that the computational system would have evolved adapting itself to the CI system, forming a kind of internal language of thought (ILoT) aimed essentially at the creation of thought: “the earliest stage of language would have been just that: a language of thought, used internally” (Chomsky 2007: 13). This ILoT, common in essence to the species, would have subsequently been connected to the SM system for externalization and, therefore, for communication. According to this vision, externalisation would be ancillary and secondary, that is, a process exposed to fluctuations in the environment and, therefore, susceptible to historical change and diversification. This implies that for the internalist view language is primarily a system of thought, and secondarily an instrument of communication. Aristotle said that language is sound with meaning. Chomsky asserts that language is meaning with sound.
What this scenario implies, then, is that the FL must also include a component derived from the environment (that is, internalised) whose mission would be to systematically connect the derivations generated by the ILoT (resulting from the interaction between the conceptual and the computational systems) with sensorimotor systems. The crucial idea is that this component is what really differentiates languages from each other and constitutes the genuinely cultural component of every human language. For expository convenience I call this component the lexical interface. The use of the expression lexical interface is based on the traditional idea that the lexicon of a language is the component where sounds and meanings are systematically matched. However, the reading in which the lexicon is the set of words or morphemes that the syntax combines to create sentences must be avoided. In the use of the term that interests me now, the lexical interface should be interpreted as an area of long-term memory that provides a stable connection between, on the one hand, the syntactic-semantic derivations produced by the computational system in interaction with the conceptual-intentional system and, on the other hand, the sensorimotor systems that process and produce the linguistic signals (sounds or visual signs) that human beings perceive and produce when they use language for communication. This asymmetry is shown in the scheme in Figure 1.
Like any organic system, the FL of each person (their I-language) is conditioned by two types of factors: internal (derived from biology and other natural factors) and external (derived from environmental information). According to the scheme, any I-language, insofar as it is a person’s FL, is formed by the four components. Three of these (the conceptual-intentional system, the computational system and the sensorimotor system) are essentially universal because they are organism-internal and are naturally conditioned, while the fourth, the lexical interface (highlighted in a darker tone), is culturally variable since it is the result of internalisation from environmental stimuli.
The acquisition of language does not imply the internalisation of the entire system of knowledge (the I-language), but only of one of its components (the lexical interface). The development of language in the individual can be glossed, then, as the process of learning to externalise the ILoT in the same way as the members of our community do.
Note that in each component I have indicated the scope of traditional grammar with which it would be centrally associated. In this way, the CI system is related to semantic and pragmatic interpretation. I do not mean to imply that there is no linguistic and cultural variation in this respect, but that there is an underlying uniformity. Note that we can, for example, ask someone in what language they speak, in what language they think, or in what language they dream, but it is strange to ask in what language they mean. The very fact that we can consider whether two linguistic expressions (of the same or of different languages) have the same meaning shows that there is a layer of meaning deeper than the linguistic forms that externalise it.
I have also assumed that syntax is uniform, but in this case I refer to the basic computational mechanisms and the formal principles governing the syntactic derivation (merge, binarity, endocentricity, etc.) and not to the fact that the apparent syntax of languages is diverse (as with basic word order or argument marking patterns). In fact, the hypothesis of much of modern formal linguistics is precisely that these differences in the “visible syntax” (morphosyntax) are the result of differences in the repertoire of linguistic formants and constructions that each language uses to externalise the homogeneous syntactic derivations produced by the internal computational system. In traditional terms, the underlying hypothesis is that any difference between the structure of languages is of a morphological and phonological nature.
This model of the internal structure of an I-language allows us a better understanding of the disparity of opinions on the balance of forces between nature and nurture in language design that we observe in current linguistics. The central (simple) idea is that not all linguists use the word language in the same sense. More specifically, I want to suggest that a crucial difference in this use comes from the fact that the externalist tradition identifies a language with one of its parts, the lexical interface. For this reason, generativist authors tend to reject many of the statements about languages that externalist authors make, such as the claim that languages are external to the mind, that they can vary profoundly, that they are learned using general mechanisms of statistical learning, or that they owe their structure to the historical processes of change and not to a faculty of language. But note that these same claims would be more acceptable if they were interpreted as referring to the lexical interface rather than to the whole I-language.
Thus, I conclude that the notorious divergences regarding the nature of language and languages currently seen in theoretical linguistics are largely a consequence of an incomplete vision of what a natural human language really is, a vision based on the misidentification of languages with their learned and historically modified component, that is, the lexical interface that each group of speakers uses to externalise uniform syntactic-semantic derivations.
From a generativist point of view it is reasonable to think that to a large extent the lexical interface is something external and cultural; however, the assertion that it is a language is not admissible. Of course, the opposite might be argued (by Tomasello, for example), that this component is the object of study if one wants to study language and not cognition in general. But this, in my opinion, is a crucial error. It is an error which derives from an externalist conception of language and an empiricist conception of the mind. In fact, any I-language is also part of general cognition (if this expression makes sense). To affirm otherwise would be to pretend (to return to our former analogy) that the study of the shape of the elephant’s skin is independent of the study of the elephant’s body. Yet the skin is part of the elephant’s body, just as a language is the whole thing (the whole body) not just its most superficial part (the skin). Contrary to what Tomasello suggests in the above quotation, a language is not a cultural system represented in the brain. If anything, this is the definition of the lexical interface, that is, of the complex cultural component of languages that connects their most internal parts with the sensorimotor system of externalization. A language, rather, is a system of knowledge that includes a variable cultural component, but also universal linguistic aspects of human cognition.
Internalist and externalist research programmes are more complementary than they are incompatible, despite the fact that most practitioners appear to ignore this.