The Goldsmith Paper

Prof. John Goldsmith, University of Chicago, and Dr. Gero Jenner, author of "Principles of Language" (Peter Lang 1993; revised 2018 as "Principles revised"), criticize Chomsky's Universal Grammar

When it comes to Universal and Generative Grammar – undoubtedly a central topic of the modern science of language – the prevailing attitude of linguists – even that of its American representatives – is best described as hagiographic prostration vis-à-vis its prominent author: an attitude stifling to the critical mind, and one that furthermore stigmatizes as heretics all those who dare to proffer their "ceterum censeo". How memorable, therefore, the courage of a distinguished Chicago professor of linguistics! Very much against his original will and intent, he was led to admit that Chomsky's universal grammar forfeits its claim to universality because it is based on "sloppy and non-scientific concepts". Prof. Goldsmith even came very near to a second insight of no less import, namely that GG (being based on distributional analysis) is not generative at all but simply presents a case of tautology. The same sound and honest argument that had led Prof. Goldsmith to refute the claim of universality could have led him to this conclusion as well: it would have meant a definite farewell to hagiography. But at this stage courage failed him; he relapsed into silence.

I think Prof. Goldsmith would be much too modest to agree to the publication of our debate, but not to make it known to critical and inquisitive minds like his would be a mistake. Our discussion may serve as proof that people honestly bent on the exploration of truth will, through initial uncertainty, finally be rewarded with a somewhat deeper understanding: in this case a deeper understanding of universal as well as of generative grammar.

The starting point of our debate were the ten main theses that I proposed; they were commented on critically in turn by Prof. Goldsmith and by myself. Misunderstandings mostly arose from Prof. Goldsmith attributing to my theses a much more sophisticated meaning than they are meant to express: they should always be taken quite literally. In order to facilitate orientation, I have highlighted (in yellow) the turning-points of our discussion. (Go = Goldsmith and Je = Jenner refer to the first round of comments, Go2 and Je2 to the second.) Our discussion started with an exchange of emails.

Language is a means of transmitting mental content to others through signs. Such a transmission is based on the following ten preconditions:

Go: Here Chomsky would disagree; he believes language’s function for communication is peripheral rather than central. I agree with you, however, though I don’t know how to rank the importance of communication and thinking as actions that language makes possible for humans.

Je: I don’t see why any language of signs would ever be invented if it were not for the purpose of making a listener understand what the speaker has in his mind.

Go2: Well, let’s not worry about this, since I agree with you. Still, I have colleagues who could give interesting reasons for a different point of view, according to which each child is motivated to develop a meaning-bearing system (a language!) even if the other people around him do not use it or understand it. The cases of this sort that have been studied are those involving deaf children who are not surrounded by deaf signers. These kids tend to develop languages that they use to sign with their mother and siblings even though the mother and siblings never learn the subtleties that the deaf child uses (which the scientist who is analyzing the video can see).

Je2: Well, you are right that we need not worry about this point, but I think that we may agree or disagree with the colleagues you have in mind. Children certainly develop a meaning-bearing system (the deep structure of meaning). Even animals like dogs and cats do so, because they recognize individual faces and actions and behave accordingly, even though they do so without using signs. When, on the other hand, children tend to develop signs that others don't understand, it is – I would say – precisely because they want to make manifest (put into signs) what they experience.

1) Discrete mental content must exist in both the speaker and the listener, bound to identical signs.

Go: "Discrete" here is a red flag for me. There are many ways of representing knowledge that I find interesting and attractive in which (some or all) concepts are viewed as operations (a metaphor taken over from physics, at least by some people, myself included), and the objects that an operator applies to need not be discrete.

Je: In whatever manner we may understand the units of mental content (for instance as operations), they must be distinct in the brain of the speaker in order to evoke their distinct counterparts in the brain of some listener. Thus, the concept "house" must be distinct from the concept "moon" in both the emitting and the receiving brain – and the same applies to all those ten thousand or so concepts engraved in our brains. But distinct they are only in view of their differences compared to each other; they may be quite fluid in themselves. My notion of a house may be tinged with all the recollections of the different types of houses I got to know in India, Japan and so on. Your concept may be tinged with the igloos in Greenland or the huts of Indian natives. And as to the moon, the fairytales we read in our youth certainly impress on each of us a host of associations modifying the basic concept. That is to say that no single thought I transmit to you will convey exactly the same meaning in your brain as in mine. But the basic meaning remains, nevertheless, the same – otherwise communication would make no sense, because what the listener receives would bear no resemblance to what the speaker emits. All documents from all languages of the past prove that a sentence like "If there are no living beings on the moon we mustn't expect to find houses" is understood everywhere. (If basic meaning were not identical we would not be able – as we in fact are – to translate documents of all times into our present-day languages.)

On the other hand, the distinctness of mental or other contents must give rise to the distinctness of their formal representation. China currently wants one billion people, whose faces are all distinct (no two randomly chosen faces being exactly identical), to be registered by cameras. This is feasible only if, on the level of form, the total of one billion contents – namely, the faces – can be mapped onto distinct formal signs as well, for instance in the shape of printed pixels on a sheet of paper or as binary expressions on a hard disk. No two configurations of bits representing different faces may be identical – otherwise the system would not fulfill its task (see the short sketch at the end of this reply).

But different signs may, of course, be chosen in different systems – and in this sense they are arbitrary. Depending on the operating system or the program that analyses the visual images obtained by the cameras, the binary representation of some distinct content (a specific face) may be encoded in quite a different manner. In the same way, the mental content "house" appears as 'Maison' in French but as 'Jia' in Chinese. The basic postulate that discrete mental content must be expressed by discrete signs in order to make communication possible in the first place is, therefore, valid only within a speaking community relying on the same "operating system" (French, German, Chinese etc.).
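
A back-of-the-envelope sketch may make the camera example concrete. The figures are only those used in the text, and the snippet is purely illustrative; it does not pretend to model a real registration system:

```python
# A back-of-the-envelope check on the camera example above. The figures are only
# those used in the text; nothing here pretends to model a real registration system.

import math

faces = 1_000_000_000                  # one billion distinct contents (the faces)
bits = math.ceil(math.log2(faces))     # shortest bit pattern that keeps them all apart
print(bits, 2 ** bits)                 # 30 1073741824 -> 30-bit codes suffice

# Which pattern stands for which face is arbitrary: any one-to-one assignment will do,
# just as "house" appears as 'Maison' in one system and as 'Jia' in another.
```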

Go2: Well, yes, I do agree with you, and maybe this principle deserves to be called “Saussure’s principle”– if you can specify all the discrete objects that you think play a role in your language, at every different level (sounds, morphemes, words,…) then you have done a large part of the work of describing the language. I would add to that that the next step we would have to do (once we had satisfied ourselves with the sets of discrete but contrasting objects we had identified) would be to understand how you can put two or more things together to make a third. This notion of part vs. whole is found absolutely everywhere in language, and there are hardly two places where the same sense of “part vs. whole” is employed. From my own point of view, this is the question which Husserl devoted the most thought to.

Je2: I don’t know Husserl’s text so I may be wrong in trying to understand what you mean. But in my perspective, discrete concepts obtained by the process of mental “analysis” described below must be and are indeed reunited by the reverse process of “synthesis”. While there are thousands of concepts (in theory their number is without limits), only a definite and quite restricted number of “wholes” or “syntheses” is to be found in natural languages. I tried to describe these “types of synthesis” in “Principles revised”.

2) The smallest discrete units, which we call concepts, only come into being after a mental analysis of reality. In sensual perception, "house", "man", "run", "big", "here", "later" are not primary objects that can be grasped as such; rather, they arise as constants of experience after the comparison of many sensory impressions – a complex process that to the present day has never been completely elucidated. It is, however, beyond doubt that this mental analysis basically (if not in detail) occurs in the same way in all human beings. That is why all languages know concepts like "house", "people", "run", "big", "here", "later". In other words, they all arrive at the same fundamental categories to which these individual concepts belong (substance, action, quality, spatial and temporal determinants; agent/patient; question/answer, topic/comment etc.). Without the existence of such universal categories we would not be able to translate any human language A into any other language B.

Go: At this point, I can read your statement in two different ways. One of them is empiricist, and does not appeal to me, while the other is pragmatic, in a particular sense. The empiricist sense is that the mental content of one of these words is the result of combining the meanings encountered in a certain number of past experiences, and in so combining the learner figures out what is really part of the meaning and what is irrelevant. The pragmatic (I'm using that word as an adjective for "pragmatism") approach says that the learner builds a picture for himself of what the speaker was trying to do when he used that word, and in so doing, the learner could be right even after hearing the word only once.

“It is, however, beyond doubt that this mental analysis basically (if not in all details) occurs in the same way in all human beings. That is why all languages know terms like house, people, run, big, here, later.”

Go: Well, this is the whole game. From my perspective, it’s not beyond doubt that mental analysis is the same in all human beings, though perhaps Chomsky would have no difficulty in agreeing with you on this. At worst he would be agnostic. I believe that we humans arrive at the concepts that we have mostly by learning them, though with some basic abilities of the sort that Kant described. Learning to speak our first language teaches us a great deal, and we continue to learn more (again, think math or music) of things that cannot be easily expressed in and through language.

“In other words, they all know the fundamental categories to which these individual concepts belong (substance, action, quality, spatial and temporal determinants, agent/patient; question/answer, topic/comment etc.). Without the existence of such universal categories we would not be able to translate any human language A into any other language B.”

Go: This sounds to me like a statement of credo: this I believe. As for myself, I think I can easily argue that many, many sentences cannot be adequately translated from language 1 to language 2. I suppose if we tried to have a real argument about this, in the end we would conclude that we disagreed as to what we meant by two sentences being adequate translations of each other. I think it would be easy to find sentences in American Sign Language (which is the only non-Indo-European language I can converse in) that cannot be adequately translated into English. Even in two very similar languages, English and French, there are serious problems in translation, in my experience (I've just been working on how to translate a sentence with the English word "chafe", used metaphorically). Perhaps you would say I am interpreting your comment in too strict or stringent a way.

Je: Indeed, your interpretation is much too stringent. In "Principles of Language", I distinguish the different attitudes of poets and logicians when dealing with language. The poets tend to think that in the last analysis no translation is possible at all: arguing in a logically consistent way, they would have to conclude that even the transmission of my thoughts to any other individual is out of the question, since he and I never mean exactly the same thing. "Semantic tingeing" of concepts, as I call it, is indeed an all-pervading feature of concept formation, so that in a strict sense the basic contents of meaning (like "house", "moon" etc.) enshrined in the brains of different individuals are unique. So far the poets are perfectly right.

But the basic meaning remains, nevertheless, the same – here I agree with the logicians. For otherwise communication would make no sense, since what the listener receives would bear no resemblance to what the speaker emits. As stated above, concepts may be fluid in themselves, but they are distinct in comparison to each other. All documents from languages of the past prove that a sentence like "If there are no living beings on the moon we mustn't expect to find houses" is understood everywhere.

For this reason, the logician has no qualms about letting the poets insist on the nearly infinite variety of the total associative content of any given concrete concept, but he on his part insists that the structures (the syntheses, see below) built by means of these basic components are the same and indeed very few in natural languages. Up to now, at least, no language has been found that does not use them. I would, therefore, conclude that the logician is not the victim of a mere credo but has all available evidence on his side.

Go2: I'm going to have to try to express my view slightly differently. Bear in mind that I am taking your position seriously, and I am not interpreting you to mean what you say only approximately, or that you mean it only some of the time. I think the only way to approach this question is through some simple examples. I'm working with my co-author, who is French, to translate our book into French. I used the word "chafe" – "he chafed under the conditions in which he had to work" – and the translation of "chafe" was "s'irriter". That didn't seem right at all to me (way too general, not at all specific) and I searched as best I could for a better translation. I could find none. So I talked with Laks, and explained, in French: imagine an ox which has been wearing a yoke for 10 years. He knows he will never get rid of it, and he doesn't like it – and Laks said, "pesant". And I said yes, that's right. It's not a translation – "chafe" really refers to how your skin is turning reddish, it's very concrete, but when you use it for an ox, or an unpaid lecturer, it is not concrete, and the metaphor and word to use in French is "pesant". So it's a good translation for the book, but the two languages are really different in their semantic analysis of the world.

Je2: With your stress on the semantic variety of concepts we are back to the difference between poets and logicians. Poets are perfectly right with regard to concepts, logicians are right with regard to the higher level of synthesis. Just as any specific concept in my mind will never be completely identical with the corresponding concept in yours even if we both speak the same language, so concepts are never identical between languages even if we are able to give a rough translation. This is certainly a very interesting field, but it is not the one I am dealing with. For it is anything but trivial that on a higher level – that of the synthesis of concepts in a few definite types – these differences completely disappear so that we arrive at a structure of meaning (syntheses, see below) that is basically the same for all languages.

Go2: In American Sign Language, there are prosodic morphemes (on the signer’s face) that mean “do something in a slapdash manner”. That is how I would translate it into English at least. But would every language have a way to say “in a slapdash manner”? Or to put the point slightly differently, do I think one of my children, when they were 10 years old, even had the concept of “a slapdash manner”? I would say no; everything they did then they did in a slapdash manner; they did not understand the concept of “carefully”. These are concepts that one learns by being an observant and responsible member of society. These are not atoms of a universal semantic system. They are learned as one becomes a member of a society.

Je2: I agree. While even a very young child genetically knows the basic types of synthesis, the potentially infinite field of concepts needs time to expand.

3) While the concrete analysis of reality allows the creation of a potentially infinite number of concepts (the only limits being those of human memory), the number of syntheses is strictly limited to about half a dozen, and it is the same in all languages. These constitute what I call the logical structure of meaning – let me just mention the action-synthesis (Peter comes right now), the quality-synthesis (the tree is very high) and the identity-synthesis (cows are mammals) – each with or without spatial and temporal determinants. The informational structure of meaning comprises free versus bound syntheses (men eat rice / men eating rice (are usually well-fed)), topic/comment, statement/question etc. So we are really speaking of a two-tiered mental process, the first tier consisting in dissecting sensual reality into single concepts (analysis), the second reversing this process by uniting them in a very small number of basic syntheses.
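
One possible way of writing the two tiers down is as a small data structure. The class name, the field names and the particular synthesis types listed below are illustrative shorthand only, not a formalism taken from the "Principles":

```python
# Illustrative shorthand for the two-tiered process described in thesis 3.

from dataclasses import dataclass
from typing import Optional, Tuple

# Tier 1: analysis yields individual concepts -- their number is open-ended.
concepts = {"Peter": "substance", "come": "action", "tree": "substance",
            "high": "quality", "cow": "substance", "mammal": "substance"}

# Tier 2: synthesis reunites concepts according to one of a handful of fixed patterns.
@dataclass
class Synthesis:
    kind: str                        # "action", "quality", "identity", ...
    parts: Tuple[str, ...]           # the concepts united in this synthesis
    time: Optional[str] = None       # optional temporal determinant
    place: Optional[str] = None      # optional spatial determinant

examples = [
    Synthesis("action", ("Peter", "come"), time="right now"),   # Peter comes right now
    Synthesis("quality", ("tree", "high")),                     # the tree is very high
    Synthesis("identity", ("cow", "mammal")),                   # cows are mammals
]
print(len(concepts), "concepts, but only", len({s.kind for s in examples}), "types of synthesis here")
```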

Go2: It is certainly a goal of long standing to find a special (perfect in some sense) language whose elements can be used to build up all the words of any existing human language. I believe it is safe to say that no one has done it, nor gotten close to doing it. Take a difficult case, like the notion of agentivity. Different languages have different ways in which they allow speakers to view objects as agentive. In English, we can say that I shut the door, or the door shut (with a slam). In French, if the door shuts, it uses a reflexive. But reflexives in French allow a vastly greater set of things to be subjects; that is, in English if we can say, "People say that frequently," we cannot say "That says frequently," while in French we can say "Le monde dit ça," and also "Ça se dit." Other languages will either forbid or put severe restrictions on what non-human things can be subjects of a transitive sentence. Each language consists of a way of looking at responsibility and how it can be charged to people and to things in the world.

Je2: I am afraid you miss the point at the core of my argument. I do not try to "find a special (perfect in some sense) language whose elements can be used to build up all the words of any existing human language." The fundamental division between the deep structure of meaning on the one hand and the surface structure of form (sequences of acoustic signs, represented as words on a sheet of paper) on the other would render such an undertaking quite impossible, since the laws governing the structure of meaning have nothing to do with the laws governing formal realization (they are independent of each other). It is true that my aim is to find basic building blocks. As far as the structure of meaning is concerned, these are what I call the types of synthesis (hardly more than half a dozen, see below), to be found in all languages. In "Principles of Language" I have tried to do (whether successfully or not, others will have to decide) what – as you boldly assert – nobody has tried before.

As to concrete examples like the ones you just mentioned ("c'est ce que dit le monde" or "ça se dit"), these are differences dealt with at some length in "Principles revised".

4) The purpose of mental analysis lies in its subsequent reversal through the process of synthesis, without which meaningful communication would not come into being. In themselves, individual concepts like "Hans" or "come" do not form messages. Only through their connection in higher-level units – that is, in a synthesis exactly reversing the preceding process of analysis – does a message originate, for instance by way of the "action-synthesis": "Hans arrives now" (composed of a substance, an action and a temporal determinant). Psychology – and, nowadays, even neurology – have as their proper object the investigation of these mental processes. Linguistics, however, need not be concerned with what actually happens in our brains; it only needs to accept universal building blocks such as substance, action, quality etc., and the types of synthesis resulting from them, as the given building blocks of a universal semantic deep structure. The latter seems to be innate – a genetic heritage of Homo sapiens.

Go: Yes, I agree with this, but I also think that the process of "synthesis" — or binding of objects to a predicate — is also something we learn about as we develop our young minds.

Je: I do not wish to say that a synthesis "binds objects to a predicate". It binds a substance (living or not) to an action or a quality, or establishes an identity between them (identity synthesis), etc. And that's all – as far as the logical structure of meaning is concerned. The term predicate implies a transition to informational meaning. Just think of the identity synthesis "Cows are ruminant mammals". If it represents the answer to a child's question about what cows are like, then "cows" is the topic while "they are ruminant mammals" constitutes the comment. But suppose that an alien who doesn't know cows finds this entry in a dictionary of zoology. Then to him the whole sentence is a topic. The traditional distinction of predicate and objects blurs the difference between the logical and the informational structure of meaning. Both the logical and the informational structure of meaning seem to be innate.

Go2: I'm not sure I understand, but I'm not sure it matters in the broader context. The context that I have in mind is that at the end of the 19th century, it became fashionable to think of the relation between a verb and its subject and object(s) as similar to the relation of a mathematical function and its variables. Frege was famous for promoting this idea. It effectively displaced Kant's very traditional view, going back to the classics, that the principal relationship was subject-predicate, though it is possible that the classical view conflated subject-predicate and topic-comment. So if we look at the recent history of philosophy, it would be putting the cart before the horse to say that predicate-and-object analysis blurs a distinction; it's really the other way around: predicate/argument analysis is the new kid on the block (as of 150 years ago).

Je2: The point is important in my exposition of the informational (as different from the logical) part of the structure of meaning. Traditional terms like subject and predicate just do not live up to the complexity of the problem, but this is, I agree, of secondary importance in our present discussion. You may see what I have to say on the subject in "Principles revised" (http://www.gerojenner.com/mfilesm/Principles_revised_2017.pdf).

5) These elementary mental building blocks stand outside time; this applies to concepts such as "house" or "stone" etc., which in themselves do not yet form a message, and to the synthesis as the smallest unit of communication, e.g. "Hans arrives now". On the mental level such an action-synthesis has neither a temporal nor a spatial dimension: in the brain it constitutes an extra-temporal unity. However, it necessarily receives a temporal (and, through writing, a spatial) dimension at the level of signs, because acoustic signals must follow one another (or, as signs on a sheet of paper, they must be spatially arranged). The transformation of mental contents into a language of signs (what I call "formal realization") thus necessarily adds a temporal dimension to a non-temporal semantic substrate (henceforth I will omit the spatial dimension, which only occurs in writing).

Go: Yes, I agree with this, which I take as an anti-positivist statement.

6) The temporal element, that is, the sequence of units (usually referred to as words), offers itself as an emergent means of expressing mental contents (such as the opposition of statement and question, agent and patient, etc.). In the binary artificial language of computers only two units are used (plus and minus); thus, all contents are expressed exclusively by the temporal or spatial sequence of two signs, i.e. by position. In natural languages, position plays a quantitatively minor role, but it is of primary importance for the surface structure (formal syntax) of specific languages such as English or Chinese. Apart from the fact that the concepts belonging to any single synthesis appear in immediate or mediated neighborhood, position need not have any meaning at all, as for example in Latin (and many other languages), because the formal units (words) are related to each other by designation (affixes of the most diverse kind) (see Latin 'homin-es ama-nt equ-um'; in this case, all 3 × 2 × 1 = 6 permutations of position are possible). In English and Chinese, however, the contrast between agent and patient is not formally realized by designation but by position ('people like the horse'; here, only a single sequence is acceptable).
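
A toy sketch may make the contrast tangible. The mini-lexicon, the role labels and the simplifications below are purely illustrative assumptions, not a description of either language:

```python
# Toy contrast: role marked by affix (Latin-like) vs. role read off position (English-like).

from itertools import permutations

# Latin-like: the affix carries the role, so every order conveys the same synthesis.
affix_roles = {"-es": "agent", "-nt": "action", "-um": "patient"}   # toy values only
words = ["homin-es", "ama-nt", "equ-um"]

def role(word):
    """Read the role off the affix, wherever the word happens to stand."""
    return affix_roles["-" + word.split("-")[1]]

for order in permutations(words):                   # all 3 x 2 x 1 = 6 permutations
    print(" ".join(order), "->", [role(w) for w in order])

# English-like: the same concepts, but the role is read off the position alone.
def english_roles(sentence):
    agent, action, patient = sentence.split()       # only one sequence is interpretable
    return {agent: "agent", action: "action", patient: "patient"}

print(english_roles("people like horses"))
```

The Latin-like half recovers the same synthesis from every one of the six orders because the role travels with the affix; the English-like half must read the role off the position, so only a single order is interpretable.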

Go: Fine.

7) The transposition (formal realization) of mental contents into a sequence of acoustic signals presupposes a tertium comparationis, which allows the mental structure to be represented by a formal one. Both the mental contents and their formal representations (acoustic signals) must consist of discrete, countable units. The measuring rod for this tertium comparationis I call the "differentiation value". A category such as substance has a potentially infinite differentiation value (several thousand entries in the usual dictionaries); this applies, albeit to a lesser extent, to actions (run, give, hold, etc.), qualities (soft, long, large, etc.), and in general to most categories of the "logical structure of meaning". On the side of form, only discrete formal units (acoustic signals, or the words on a sheet of paper) may be considered adequate means of realization in natural languages, because they have an equally high differentiation value. The situation is different for polar categories such as agent/patient, which only have a differentiation value of two, and sometimes three (he gives Mary the book). It is therefore possible – as happens in English and Chinese – to realize the semantic difference (Dif-Val = 2 or 3) formally by position. Such formal realization by position equally applies to other polar categories in the informational structure of meaning, such as question/answer, topic/comment and others.

“The transformation (formal realization) of mental contents into a sequence of acoustic signals presupposes a tertium comparationis, which allows the mental structure to be represented in a formal one.”

Go: I’m not sure what lies behind this statement. It could be taken to be a statement about how we use words when we discuss communication. But it sounds like you mean it as a statement about the world, and in that way I’m not sure what you are committing yourself to in saying this.

“I call this tertium comparationis the 'differentiation value'. A category such as substance has a potentially infinite differentiation value (several thousand in the usual dictionaries); this applies, albeit to a lesser extent, to actions (run, give, hold, etc.), qualities (soft, long, large, etc.), and in general to most categories of the 'logical structure of meaning'.”

Go: I am reading you as if you were writing about the choice a person makes in speaking, in choosing one particular word rather than another one in some given context. Is that right? I am very much uncertain.

“That explains why, in natural languages, only discrete formal units (words) can be considered as means of realization, because they have an equally high differentiation value.”

Go: Though in most languages, there are many places where one would say that the speaker's choice is active at the level of morphemes, which are smaller than words: did you say "ami" or "amie"? I want to know if you were telling me about the gender of your friend.

“The situation is different for polar categories such as agent/patient, which only have a differentiation value of two. As in English and Chinese, the semantic difference (Dif-Val = 2) may be realized by position. This equally applies to polar categories in the informational structure of meaning such as question/answer, topic/comment and others.”

Je: Again, my proposition is far simpler. Suppose some primitive society just starts inventing signs in order to express (formally realize) basic concepts like "bear", "deer", "enemy", "wife" etc. At the beginning they will have, say, ten different signs at their disposal – the Differentiation Value is, in this case, equal to ten. In other words, it is totally inadequate for expressing the thousand or more concepts (Dif-Val = 1000) actually used by these people. If they want to establish a sign system representing the totality of their concepts and the structure of meaning, they must rely on formal means having at least the same Differentiation Value. It is important to note that natural languages cannot proceed in the same way as artificial ones. Computer language may use only two different signs (plus and minus), using position (with an unlimited Differentiation Value) to express a potentially infinite number of contents. Natural languages use position mostly to distinguish the sequence of two elements (before and after), that is, with a Differentiation Value of 2.
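
In numbers, the argument looks like this (a purely illustrative sketch; the figures are the hypothetical ones used in the reply above):

```python
# Toy illustration of the Differentiation Value argument; the figures are hypothetical.

concepts = 1000        # Dif-Val of the contents the society wants to express
signs = 10             # Dif-Val of the sign inventory it starts out with

# A sign system can keep contents apart only if it offers at least as many
# distinct formal expressions as there are distinct contents.
print(signs >= concepts)          # False: ten signs cannot carry a thousand concepts

# An artificial binary code escapes the shortage through position: with only two
# signs, n positions already yield 2**n distinct expressions.
n = 10
print(2 ** n >= concepts)         # True: 1024 position-based codes suffice
```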

Go2: Then there are no objections to your statement.

8) The formal realization of mental contents on the level of surface structure produces syntactic slots (for instance, three slots in the case of 'Hans arrives now'). These slots may, in any natural language, be filled with many, but by no means with arbitrary, mental contents. If all nouns in all languages were only filled with substances, all verbs only with actions, all adjectives only with qualities etc., then these formal slots would give rise to universal formal categories like noun, verb, adjective etc. But this is evidently not the case. The concepts actually grouped together in such formal slots of different languages do indeed overlap: nouns always contain substances, verbs always contain actions, adjectives always qualities – otherwise we would not even think of applying the identical names noun or verb to Chinese, English or Russian. But, depending on the language under review, they may take up other semantic categories as well (the English noun, for instance, may contain both actions (withdrawal) and qualities (greatness), and even spatial or temporal determinants). For this empirical reason, formal categories like noun, verb etc. belonging to the surface structure of languages cannot be universals (their specific semantic content varies from one language to the next). It would, however, be pointless to describe the different semantic content for each language (for instance the English verb, the German verb etc.) – this would be a nearly endless task and entail no substantial gain of knowledge. On the other hand, it is a very important question indeed to explain why languages resort to such re-ordering of concepts within different formal classes (why does English allow an action like "withdraw" and a quality like "great" to be formally classed not only as English verb or English adjective respectively but as English noun as well?). I try to show in the "Principles" that in this way they substantially enlarge their power of expressing more subtle mental contents. This formal device, therefore, belongs to the field of language evolution.

“The formal realization of mental contents on the level of surface structure produces syntactic slots (for instance, three slots in the case of ‘Hans arrives now’).”

Go: Well, here I may not be in agreement with most of my colleagues. While there is something right about thinking about grammar as being similar to functions which have arguments, I think this is an example of a place where linguists have liked a concept that was developed in another field (philosophy of mathematics) and decided it must be true for everybody. Today we speak of functions with three arguments, etc., but that is very, very recent (less than 200 years old).

Je: I did not think of functions and arguments but simply of formal classes as described by linguistic Distributionalism (Harris). When comparing English sentences like ‘Hans arrives now’, ‘Peter comes earlier’, ‘Greatness came later’, ‘Withdrawal happened at once’ etc., we find the three slots of English noun, English verb and English adverb arranged in a specific temporal sequence. In this way, the same formal pattern serves to express totally different semantic structures since the first two instances represent the action-synthesis, which is not the case in the two following ones.

Go2: Well, this is not so simple, for me. By the way, since you are German, you can easily read the place where this was first developed, and it was long before Zellig Harris. It is in the second volume of Husserl’s Logische Untersuchungen. 

“These slots may in any natural language be filled with many, but not with arbitrary mental contents. If all nouns in all languages were only filled with substances, all verbs only with actions, all adjectives only with qualities etc., then these formal slots would give rise to universal formal categories like nouns, verbs, adjectives etc. But this is evidently not the case. The emerging formal classes overlap: nouns always contain substances, verbs always actions, adjectives always qualities, but depending on the language under review they may take up other semantic categories as well (in English the noun may contain both actions (withdrawal) and qualities (greatness) and even spatial or temporal determinants). For this empirical reason, formal categories like noun, verb etc. belonging to the surface structure of languages cannot be universals.”

Go: That's not a sound argument, in my opinion, though it may depend on what you think must be shown to establish that something is universal. If each language has a surface category of which 60% of the members are names of perduring physical objects, then I would agree that "noun" is a universal category. And I suspect that this condition is in fact met.

Je: Again, you read more into my statement than it is actually meant to say. I only want to affirm that the formal slots in the surface structure of different languages are not filled with the same concepts. In this case, no positive statement about universals is implied but only a negative one: the English noun, verb etc. cannot be identical with the Japanese noun, verb etc., because the concepts they group within these formal slots are not identical. To be sure, they overlap (verbs always comprise actions), and it is precisely because of such partial identity that we are led to speak of Chinese or Japanese verbs in the first place. But, again, the total content of concepts allowed to appear in these slots is different for every language we know.

Go2: I think we are coming to the heart of the matter, by which I mean the validity of a science of language which proposes to use the term noun across languages, even if the semantic range (or logical range) of what is a noun in one language differs from that in another. You believe that is not valid; we should understand that if we say both languages have “nouns” we are being sloppy and not scientific.

Je2: Quod erat demonstrandum!

That's exactly what I want to say – but only with regard to a universal grammar, which is bound on principle to use categories that may be applied to any and every natural language. No universal grammar can live up to its claim if it uses "sloppy and unscientific" concepts that change content with each language. It is for this reason (and certainly not because of any preference for neologisms) that in the "Principles" I had to create a whole new set of concepts for both the "general structure of meaning" and the "surface structure of form". My objection to the method chosen by Chomsky's allegedly universal grammar is thus evident: it is based on the fact that he chooses "sloppy and not scientific" concepts as his basic categories.

There is, however, no harm in using "sloppy and not scientific" concepts like noun and verb when describing any particular language without claim to generalization. Seen in this light (that is, without any claim to universality), Chomsky's procedure, like that of any traditional grammar, is indeed quite legitimate, as otherwise we would have to invent a new set of descriptive concepts for each language under consideration. If only we keep in mind that the semantic contents of formal slots (like Russian, Chinese or English verbs or nouns) merely overlap without ever being identical, our procedure is totally correct and may certainly not be criticized as either sloppy or unscientific. The same holds true for the use of such categories in translation machines.

“It would, however, be pointless to describe these formal classes (which I call 'paratactic' in order to distinguish them from formal syntax) for each language separately – this would entail no gain of knowledge.”

Go: I don’t agree /this applies to the passage in italics, not the preceding paragraph “Je2”/

Je: You are right in the sense that such knowledge is a prerequisite for the basic insight that verbs, nouns etc. cannot be considered universals as there are only English, Japanese etc. verbs and nouns. But a concrete description of the semantic content of these slots in every language (though perfectly well understood and practiced by any native speaker) would be a nearly infinite grammatical task, which, I am afraid, would not really widen our horizon.

“But it is a very important question indeed to explain the reason why languages resort to such re-ordering of concepts within different formal classes. It can be shown that in this way they substantially enlarge their power of expressing more subtle mental contents. So this formal device belongs to the field of language evolution.”

Go: Maybe what you are saying here corresponds to what I said above. I am not certain.

Je: The question I want to answer is why languages do not all just group substances in the slot of nouns and actions in that of verbs. What kind of progress do they make if, like English, they allow nouns to express actions (the withdrawal) or qualities (the greatness)?

Go2: Yes, I agree with you that this is a fascinating question, and it is not one which generative grammar pays much attention to. 

9) Languages differ substantially in that they may extend messages by obligatory semantic contents (such as tense, number, social rank, gender and many more). In a statement like "Mr. Zhang departed to America yesterday," Chinese realizes the temporal determinant exclusively through the formal expression "yesterday". In English, on the other hand, time is expressed redundantly, both by "yesterday" and by the suffix (depart-)ed. As a rule, obligatory contents of meaning are formally realized by obligatory affixes. The formal difference between agglutinating and isolating languages has its deeper roots on the level of the structure of meaning. We may say that a philosophy of what reality is like first originates in giving special emphasis to certain semantic contents and subsequently gets expressed by obligatory formal means.

Go: This is an old view in linguistics, to be sure.

10) The actual achievement of a semo-formal universal grammar – one which strictly separates the two areas of semantic deep structure and formal surface structure – consists in its ability to expound both fields to a maximum extent and independently of each other, so that it is finally capable of explaining the diversity of languages on the semantic as well as on the formal level. It recognizes that generativeness – the fact that mankind was and still is able to invent thousands of languages, all mutually intelligible – is due to both the deep and the surface structure. By exploring the constraints imposed on form, such a dual generative grammar is able to stake out the possibilities as well as the insurmountable limits of possible formal variation.

“The actual achievement of a semo-formal universal grammar – which strictly separates the two areas of semantic deep versus formal surface structure – consists in that it is able to expound both fields to a maximum extent and independently of each other so that it is capable of finally explaining the diversity of languages on the semantic as well as on the formal level.”

Go: That is a desirable goal, and I think most linguists would agree on that…

“It recognizes that generativeness – the fact that mankind was and is still able to invent thousands of languages all mutually intelligible”

Go: Is that what you mean by generativeness? I think Chomsky means by it that linguists are able to give a formal grammar that predicts what word sequences will be judged grammatical or not by speakers, in each and every language.

“is due to both the deep and to the surface structures. By exploring the constraints imposed on form such a dual generative grammar is able to stake out the possibilities as well as the unsurmountable limits of possible formal variation.”

Go: I have been picky in my comments, because I have read through, sentence by sentence, what you wrote and asked myself whether I agree with each sentence. But if we take a little bit of distance from exactly what you wrote and look at the big picture, I think a lot of linguists would agree with you about the over-all goal.

Je: The above-mentioned preconditions of what I call a universal and truly generative grammar contain, up to 9), no explicit reference to Chomsky. In fact, while working as a lecturer in Sendai, Japan, I came across Sapir's book on Amerindian languages and was so impressed that I felt challenged to ask myself what constitutes the common ground of these languages compared to those I know to a certain extent, namely Sanskrit, Latin, Japanese, Chinese, Russian, English, French and Italian. Subsequently, I got acquainted with Chomsky's work but immediately felt that I could not agree with his basic concepts. The difference he establishes between deep and surface structure is, to my knowledge, not really understood by anybody. But it is perfectly clear from my point of view. Acoustic marks and their structure (replicated as sentences on a sheet of paper) represent the surface structure, while mental contents (the structure of meaning) represent the deep structure (most likely neuronally engraved in our brains). This distinction is clear and unequivocal.

Go2: But you know that this is not at all what Chomsky meant.

Je2: Of course, I know. As stated above, a truly universal and generative grammar should be free of “sloppy and not scientific” concepts that nip universality, so to speak, in the bud.

“But I do not blame Chomsky for his different choice. In descriptions of any particular language like Sanskrit or English, it is indeed much more convenient to use traditional terms. Despite all its apparent sophistication, Chomsky's work is nothing but a continuation of traditional grammar.”

Go2: Well, yes, definitely. It is certainly a continuation of Zellig Harris’s project, and Sapir saw Harris as the person who (if anyone) would carry his project on further. Chomsky is (also) a child of the neogrammarians. I’m sure Chomsky would object to those statements, but there you are. 

Je: I therefore insist that machine translation, which does not know meaning but only transfers the formal surface structure of some language A into the formal structure of some language B, is well served by Chomsky's traditional procedure. But just as the formal surface structures used by a machine cannot be universal (they always refer to some definite language), so Chomsky's formal trees consisting of nouns, verbs etc. wrongly claim universality. The latter is only found on the level of meaning, where a handful of basic syntheses probably belong to the genetically fixed operations of our mind.

Now to generativeness. I agree with your statement that "Chomsky means by it that linguists are able to give a formal grammar that predicts what word sequences will be judged grammatical or not by speakers, in each and every language." According to the point of view expounded above, what Chomsky really offers is a formal grammar of, let's say, English that predicts what word sequences will be judged grammatical or not by speakers of English. Or a different formal grammar for Japanese that predicts what word sequences will be judged grammatical or not by speakers of Japanese. But that is, of course, what all traditional grammars tried to achieve.

Go2: Well, no, not really. The traditions that I am aware of do not do this. Take two of my favorite books: Grevisse (Le bon usage) and Jespersen’s Modern English Grammar. Both explore the range of what people have written, and show how and why they do it. In pedagogical grammars, the effort is to show you things that you are likely to do but which you should not do. Husserl was among the very first to try to use the notion of ungrammaticality (LU, cited above) in our modern way.

Je2: Again, you resort to the more sophisticated position (that of the poet) rather than the more elementary one (that of the logician) that I want to stress. Before indulging in "le bon usage" we must, I think, first deal with grammatically correct usage. You will agree that the correct and grammatical is the indispensable foundation of all excellence we may create at a higher stage.

Je: In translations, a traditional grammar – including that of Chomsky – tells an Englishman learning Japanese: you must put words in such and such an order. In teaching their own children, it tells parents to correct them when they make mistakes, saying 'go-ed' instead of 'went' or 'fli-ed' instead of 'flew'. In other words, in both instances such a traditional grammar, which merely imposes the right pattern, is not generative at all; generativeness rather shows itself where we make mistakes.

Go2: I repeat: that is not the same thing as what a generative grammar tries to do. Traditional grammars either glory in the ways the language can be used (Grevisse, Jespersen) or they aim to help the adult moving from one language to another. Neither of those are the task of the generative grammarian. The generative grammarian is trying to use the metaphor of logical proof to solve the problem of grammaticality, which was not a problem before. The GG creates a new problem, and then solves it.

Je2: I agree that this is what Chomsky's generative grammar claims to achieve. But I contest that it ever substantiated this claim. For my part, I tried to show in the "Principles" that distributionalism, the method used by Chomsky, never arrives at any deduction at all: it cannot get out of its deductive trees more than it had previously (or surreptitiously) put into them. Let me use a Chomskyan tree in its most elementary shape to illustrate what I mean. In the first, uppermost line we may find an abstract expression like "noun phrase". In the second line beneath it we may find a more concrete expression like "noun verb"; the third line then provides some concrete example, for instance "(the) girl smiles". Now, no English-speaking person would accept "(the) diddle doddles" in the third line, while a speaker of a Bantu language without any knowledge of English might well accept it. In other words, without a perfect previous knowledge of English we would not be able to exclude the last example. In quite the same way, an English speaker would not accept "(the) girl smile", as it violates an elementary rule of English grammar. Finally, an example like "(the) stone smiles" would be rejected even by a Bantu or an Apache speaker, because all human beings make use of an innate structure of meaning which tells them that stones are incapable of smiling (other than in a metaphorical sense). In all three cases a perfect previous knowledge of either the semantic deep structure or of the valid formal realization of English is a necessary prerequisite if we want to arrive at proper examples. This is to say that the Chomskyan tree only creates examples which we must know beforehand – or, put in an alternative way, it generates nothing but merely distinguishes admissible sentences from those which are not – and that is exactly what any traditional grammar does. However, by giving the procedure of traditional grammar the shape of a tree, Chomsky imparts the misleading impression that he is able to deduce the concrete from the general ("the girl smiles" from "noun phrase") – which is simply not true. In my view, GG did not create a new problem, and it certainly did not succeed in solving it. What it actually generated was something very much resembling an overpowering illusion.
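
The tree argument can be made concrete with a toy rewriting grammar. The rules and the one-word-per-category lexicon below are purely illustrative choices and do not reproduce any of Chomsky's actual formalisms:

```python
# A deliberately minimal "tree" of the kind discussed above, written as a toy rewriting
# grammar. The rules and the one-word lexicon per category are illustrative choices only.

import random

grammar = {
    "S":   [["NP", "VP"]],      # the "noun phrase" level of the example
    "NP":  [["Det", "N"]],
    "VP":  [["V"]],
    "Det": [["the"]],
    "N":   [["girl"]],          # the lexicon has to be supplied in advance ...
    "V":   [["smiles"]],        # ... the tree itself contributes no new knowledge
}

def generate(symbol):
    """Expand a symbol top-down until only terminal words remain."""
    if symbol not in grammar:                    # terminal word
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate("S")))   # "the girl smiles" -- and nothing that was not put in:
                                 # "the diddle doddles" is absent only because "diddle" was
                                 # never entered, and "the stone smiles" is admitted or
                                 # excluded only by knowledge that lives outside the tree
```

Whatever such a toy produces was put into its rule table beforehand; it sorts strings into admissible and inadmissible ones but adds no knowledge its author did not already possess, which is the point made in the reply above.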

Je: /Here I continue the preceding "Je"./ "Mistakes" of formal realization made the French of the invading Normans develop into modern English, and in the course of time "mistakes" produced what we now call the "sound shifts" in German. Similarly, the German language is now being transformed by the "mistakes" of immigrants. While the rules of concept formation and the patterns of synthesis seem to be innate and universal, so that the structure of meaning (as distinguished from specific concepts) is basically the same, this cannot, of course, be said of the rules governing its formal realization in different languages (that is, the rules established by traditional grammar – including that of Chomsky). These rules cannot be innate, since children learn different rules of formal realization according to the linguistic community to which they belong.

It seems to be obvious – you may hate this word – that automatic translation machines lack generativeness as a matter of principle, as they only apply rules they have previously learned through programming. They learn to substitute German 'Hütte' for English 'hut' and to treat 'dog' in the English sentence 'He hit the dog' as an object, which in German requires an article in the accusative: 'den Hund'. The fact that a thousand different concepts may fill the two formal slots 'he' and 'dog', fewer the slot 'hit', and very few the slot 'the', only proves the generativeness of concepts in the general structure of meaning, not that of the formal surface.
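
A rule-based translation step of the kind described here can be sketched in a few lines. The tiny dictionary and the single case rule below are toy assumptions covering only this one sentence; as the following reply notes, this is not how machine translation actually works today:

```python
# A toy rule-based translation step of the kind described above: word substitution plus
# one pre-programmed case rule. The tiny dictionary covers only this example sentence.

lexicon = {"he": "er", "hit": "schlug", "the": "der", "dog": "Hund", "hut": "Hütte"}

def translate(sentence):
    words = sentence.lower().split()
    out = []
    for i, w in enumerate(words):
        de = lexicon[w]
        # Rule learned through programming: the article before the object of "hit"
        # must stand in the accusative ("den"), not in the nominative ("der").
        if w == "the" and i > 0 and words[i - 1] == "hit":
            de = "den"
        out.append(de)
    out[0] = out[0].capitalize()     # sentence-initial capital only
    return " ".join(out)

print(translate("He hit the dog"))   # -> "Er schlug den Hund"
```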

Go2: You are right that MT (machine translation) is not trying to solve the generativist’s problem; MT wants to work whether the input sentence is grammatical or not. What you describe is not the way that MT currently works, but that’s beside the point. 

Je: But when understood in a different sense the idea of generativeness is of utmost importance, so I immediately appropriated it for my own use. Each of us may generate new concepts: the entire evolution of languages consists primarily in this process. This is first of all what generativeness means on the level of meaning or deep structure.

But man is generative again when it comes to formal realization: on top of semantic generativeness he creates the formal diversity of languages. He does so by creating quite diverse syntactic surface structures on the one hand and quite diverse "paratactic classes" on the other, because nouns, verbs etc. have different semantic contents in different languages. Let me further illustrate what I mean by the following hypothetical experiment. Take a number of three-year-olds from different linguistic communities (Japanese, Russian, Chinese, English etc.) who have already been raised in their respective communities (so that they are no longer in danger of becoming speechless idiots) and unite them in one group that from now on is strictly separated from all linguistic communities except their own. All these children are equipped with man's general capacity to analyze and re-synthesize reality in the manner described above – this capacity is innate to all of them – but they are accustomed to formal realizations conforming to their previously acquired knowledge of Japanese, Russian etc. As human beings they will, however, feel urged to understand each other, which means that they will gradually develop a common language that is not identical with any of the ones they had learned. Not only will single words be chosen among them as new conventional signs, but in the course of time a totally new syntax and paratax will naturally evolve and finally be adopted. This is what I understand by formal generativeness. It is not innate but follows tradition or, in this experiment, the urge to develop a common idiom.

Go2: Yes, I agree with you, but that’s part of the marvelousness of Chomsky’s ability to market his idea: he tried to use a formal definition but he attached it to a word that was redolent of all sorts of wonderful connotations. 

And we end up where we were, agreeing that the most fundamental and interesting question is that of creativity, and how it is possible and what its limits are –

Je2: Perhaps not exactly where we were at the beginning. This discussion proves that even quite similar concepts in my mind and yours are far from being identical, but still we achieved some progress. At least, we seem to pretty much agree on eight out of ten points: no small achievement. If we don't reach total agreement, this is partly due to gaps in my knowledge and partly due to your intellectual career. For instance, it never came to my mind to compare any sentence of a natural language with a function and its arguments. Logic and mathematics may be said to be subsystems or borderline cases of natural language, but I certainly would not accept the reverse. As to your perspective – you always wanted to push me back from more general considerations, drawing my attention instead towards particular examples that are surely interesting enough but, in my view, should be discussed only after the general problem of universals has been successfully approached. As to creativity, I surely have no difficulty in being fascinated by it. "Creative Reason", a book dealing with human freedom and very much inspired by William James, was the outcome of such fascination. But in my work on language I took creativity for granted, concentrating instead on what is lawful (or arbitrary) on the levels of both meaning and form.

I guess that this insistence on what is most general and common to all languages appeared to you somewhat trivial – which it certainly is not! How does it come about that concepts may vary infinitely while the types of synthesis are very few and the same in all natural languages?

I think that the most important outcome of our discussion is the following. Any theory of grammar based on the use of "sloppy and not scientific concepts" cannot be said to be universal. And furthermore: any grammar based on a false claim of deducing the particular from the general cannot be generative. I am sure you will not like this conclusion, as you do not want to declare yourself a partisan of an iconoclastic German like myself. On the other hand, you know quite well that this is what science is all about: ringing the death knell for paradigms that do not live up to what they claim to achieve.