Limited humans, limitless technology?

Applications like ChatGPT do not have our limits, and that’s ok

by Francesca Alloatti


The images in this article were generated using Leonardo.AI by Nicola Gubernale

AI · 17 October 2023

As a project manager in a company that deals daily with the latest technological innovations, I hear about AI a lot. And by a lot I mean that every day there’s a new product, some new feature, and Sam Altman has just “confessed” something that blew the minds of a third of my LinkedIn connections. I confess it got to the point where it became boring. All the noise, all the frenzied talk about AI and its marvelous new capabilities doesn’t surprise me anymore and, more importantly, it doesn’t matter to me. It doesn’t arouse in me any sense of interest or novelty. How am I even supposed to discern what is meaningful from what is not in this cacophony of news, opinions and (sometimes) facts?

And then, something happened. I was in Carpi, a small town near Modena, Italy. A beautiful day: sunny, even hot given that it was the end of September. I was sitting in the main square of Carpi together with a hundred other people, all attentively listening to the words of philosophers, writers and political scientists. A weird setting, you must be thinking. Well, that is what the Festival of Philosophy looks like. For three days, swarms of people can attend conferences and workshops by some of the finest intellectuals the world has to offer. And they can do so for free, since the goal of the Festival is to allow anybody — from students to people just passing by — to participate.

Back to us: while in Carpi for the Festival of Philosophy, I listened to a lecture by Professor Andrea Moro. Prof. Moro teaches general linguistics at the IUSS University School of Advanced Studies in Pavia, where he founded the research center in Neurocognition, Epistemology and Theoretical Syntax (NEtS) and directed it for six years. He studies the syntax of human languages and the relationship between language and the brain, using both neuroimaging and electrophysiological techniques.

Because of his multidisciplinary approach to how our brain learns and interprets language, he was able to cast some light on a peculiar aspect of LLMs, a.k.a. Large Language Models (that is, the new AIs that have gained popularity in recent months; ChatGPT — to name one — is based on an LLM named GPT). Prof. Moro noticed that LLMs are able to parse sentences beyond the syntactic constructs that, for us humans, are essential to make sense of what we are reading. In other words, our own brain constrains us within certain limits so that we can learn a language faster as children. LLMs don’t have that kind of constraint: they will happily accept as input something that for us is ungrammatical. As Prof. Moro himself put it,

Machines don’t have limits or, if they have any, they are not our limits. And, after all, we are our limits.

This kind of consideration immediately sparks follow-up questions: does that mean that machines can now be considered definitively smarter than us? Or do these limits actually empower us in a way that will be forever out of reach for machines?

Let’s take a step back and understand what brought Prof. Moro to state that “we are our limits”. One of the most important discoveries of contemporary linguistics, whose roots go back to the original work of Noam Chomsky and Joseph Greenberg, is that all languages behave according to certain (grammatical) rules. It is not true that each language may adopt just any set of rules: there are some rules that can never be accepted by any language. One of the clearest examples comes from syntax. Syntax is the part of language that allows us to produce potentially infinite meanings by combining a finite set of elements (words). That combination, however, can never be based on the linear order of the words: no human language accepts as a rule that the meaning of a sentence is strictly determined by the order in which the words appear.


Take, for instance, the minimal elements of a sentence: a noun (e.g. Sarah) and a verb (e.g. sing). In all languages — each with its own differences — there will be agreement between the noun and the verb of the sentence: so, “Sarah sings”. The noun of the sentence, Sarah, can be boxed into a larger structure and still behave as a noun (such structures are called noun phrases): “the friends of Sarah” is still a noun phrase and behaves as such. But if we connect “the friends of Sarah” with the verb “sing”, we obtain “the friends of Sarah sing”, not “*the friends of Sarah sings”, even though “Sarah” and “sing” sit right next to each other in the sentence. This is because no language accepts this “flat rule” as a valid one in the construction of meaning: in other words, languages are hierarchical, and they do not rely on linear word order to build their semantics.
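To see the difference between the hierarchical rule and the “flat rule” in action, here is a minimal, purely illustrative Python sketch (a toy, not a real parser; the NounPhrase structure and the naive number guess are assumptions made just for this example):

```python
from dataclasses import dataclass

@dataclass
class NounPhrase:
    text: str     # full surface string, e.g. "the friends of Sarah"
    head: str     # the noun that governs agreement, e.g. "friends"
    plural: bool  # grammatical number of the head

def agree_hierarchical(verb: str, subject: NounPhrase) -> str:
    # The rule human languages actually use: agreement is driven by the
    # head of the noun phrase, wherever it sits in the string.
    return verb if subject.plural else verb + "s"

def agree_flat(verb: str, subject: NounPhrase) -> str:
    # The "flat rule" no language accepts: look only at the word that is
    # physically adjacent to the verb and guess its number naively.
    adjacent = subject.text.split()[-1]            # "Sarah" in both cases
    looks_plural = adjacent.lower().endswith("s")  # crude on purpose
    return verb if looks_plural else verb + "s"

sarah = NounPhrase("Sarah", head="Sarah", plural=False)
friends = NounPhrase("the friends of Sarah", head="friends", plural=True)

print(sarah.text, agree_hierarchical("sing", sarah))      # Sarah sings
print(friends.text, agree_hierarchical("sing", friends))  # the friends of Sarah sing
print(friends.text, agree_flat("sing", friends))          # *the friends of Sarah sings
```

The flat rule gets “Sarah sings” right only by accident, and it fails as soon as the head noun and the verb stop being adjacent.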

There are other syntactic principles that immediately tell us whether a certain sentence is correct (or rather, acceptable). Take a sentence like “you think I should evaluate a nurse before meeting the doctor”. This sentence contains two noun phrases: a nurse and the doctor. You could turn it into a question about the nurse: “which nurse do you think I should evaluate before meeting the doctor?”. Since the doctor is also a noun phrase, technically you could transform the original sentence into a question about the doctor too. However, if you try, you get something like “*which doctor do you think I should evaluate a nurse before meeting?”, which immediately sounds wrong.

Neuroscientific studies have shown that the distinction between an acceptable input and an unacceptable one is grounded in our brain. The difference between a possible language and an impossible one is not a cultural matter: no matter the cultural background or the language spoken, when presented with impossible sentences, all human brains activate neural pathways that are not related to language. The brain tries to interpret what it is presented with as another kind of problem, just not a linguistic one.

The constraint posed by our brain on what is acceptable and what is not is actually beneficial to our evolution. By admitting only a subset of the infinite possibilities of language, it allows us humans to learn to communicate quickly.

What about LLMs? As you probably guessed, machines don’t care about the syntactic acceptability of the input text. If you ask ChatGPT “which doctor do you think I should evaluate a nurse before meeting?”, it will answer without flinching. That is not necessarily a bad thing: it all depends on what we think these tools — like ChatGPT — should be and what our expectations of them are.
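If you want to reproduce this yourself, here is a minimal sketch using the OpenAI Python client (the v1 interface); the model name is just an example, and the exact wording of the reply will of course vary from run to run:

```python
# Feed an "impossible" question to an LLM and watch it answer anyway.
# Assumes the openai package (v1+) is installed and that OPENAI_API_KEY
# is set in the environment; "gpt-3.5-turbo" is an example model name.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

impossible_question = (
    "Which doctor do you think I should evaluate a nurse before meeting?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": impossible_question}],
)

# A human reader stumbles over the question; the model simply replies.
print(response.choices[0].message.content)
```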

On one hand, LLMs just can’t reach our competence: they are not able to think language without uttering it (i.e. they do not have an inner speech), and they do not have any internal representation of the structure of language (i.e. they do not cognize syntax, since they will happily accept an impossible input). On the other hand, though, that does not make them lesser than us humans: in fact, they outperform us at computing semantics. In a sense, they lack the very limits that make us human. It has often been said (and written) that the difference between humans and machines is that the latter do not have self-consciousness and can’t feel emotions the way we do.

Thanks to the work of Prof. Moro and his colleagues, we can now say that a new paradigm is dawning: the real difference between machines and humans might just be that the former do not have our limits, which means that humans are their limits.

I want to thank Professor Moro for his speech at Carpi and for sharing with me his latest articles on this topic. Some of the examples of impossible languages are the same ones Prof. Moro used in his speech and papers.