(Italian version here)
Back in March 2023 — when the world was just starting to marvel at the impressive capabilities of ChatGPT, the first generative AI tool to reach a broad audience, and headlines everywhere were predicting that AI would soon replace most workers — I wrote: “In other words, if you do not already know the correct answer, what it tells you is likely to be of no help at all.” Two weeks later, I added: “since they express themselves in a form that is meaningful to us, we project onto their outputs the meaning that is within us.”
The reason lay, quite simply, in how these systems (technically known as Large Language Models, or LLMs) work: they rely on a probabilistic model of language, highly sophisticated and trained on an immense corpus of texts, which encodes statistics about the most plausible continuations of word sequences and sentences. This enables them to display strikingly fluent linguistic performance—the kind that, when produced by humans, we readily interpret as intelligent—though in reality it is not. In other words, showing competence with the words that describe the world is not the same as having competence about the world itself. Yet, because the human brain is wired to perceive meaning even in entirely arbitrary patterns, we project onto these systems the intelligence that actually resides within us.
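To make this concrete, here is a minimal, deliberately toy sketch (mine, not any real system's code) of the statistical principle at work: a bigram model that counts which word tends to follow which, then generates text by sampling plausible continuations. Real LLMs use deep neural networks trained on vast corpora rather than raw counts, but the underlying idea, continuation by statistical plausibility rather than by understanding, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (bigram statistics).
continuations = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    continuations[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        counts = continuations[words[-1]]
        if not counts:
            break
        # Sample in proportion to observed frequency: the result can sound
        # fluent, yet nothing here models cats, dogs, or mats themselves.
        words.append(random.choices(list(counts), weights=counts.values())[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

The output can look grammatical, even natural, while the program manipulates nothing but frequencies; any meaning a reader finds in it is supplied by the reader.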
Since even back then the economic stakes around generative AI were enormous (and hundreds of billions of dollars have continued to pour into the field since), the same story kept being told: that artificial intelligence would soon deliver human-level performance across all domains. As usual, while the majority kept following the pied piper, experts far more authoritative than I urged caution. I wrote about this several times, here and here in relation to school education, here and here regarding social relationships, and here about the term itself, pointing out that these systems produce expressions that seem meaningful to us only because we project onto them the understanding that actually resides within us. Consequently, we could never truly rely on them to replace people. Gradually this is starting to surface, even if still cautiously, in the words of the CEOs and chief scientists of leading companies in the field, as well as of tech investors. And it really seems that 2025 will be the year when this bubble begins to deflate.
Earlier this month, Andriy Burkov (a researcher specializing in machine learning, the very technique underlying LLMs, and author of best-selling books on the subject) responded on X to a series of posts highlighting the poor performance of LLMs in mathematical reasoning with these words: “Only if you already know the right answer or you know enough to recognize a wrong one (e.g., you are an expert in the field) can you use this reasoning for something.” Essentially the very same words I had used in the post mentioned at the beginning.
A few days later, Brad DeLong (professor of economics at the University of California, Berkeley) wrote on his blog: “If your LLM reminds you of a brain, it’s because you're projecting — not because it's thinking.” Once again, these were essentially the same words I had written two years ago.
It's obviously gratifying to have one's hunches confirmed, but more importantly, it's interesting to see that, as I put it in the title, awareness that this new emperor's clothes are not so beautiful after all is slowly spreading around the world.
Now, this doesn't mean that generative AI tools are useless. On the contrary, they are extremely useful if you use them as brute-force labor in a field you know well, while being fully aware of their inability to truly understand. They are amplifiers of our logical-rational cognitive abilities, just as industrial machines are amplifiers of our physical capacities. But just like with those machines, if you don't know how to use them, you risk causing disasters. Getting into the cockpit of an airplane without any training won't make you fly over the seas like a bird; more likely, it will lead to a bad end. Using generative AI in a field you don't know exposes you to the same risks. If you master the subject, however, you can in many cases — but not all! — work faster, as long as you continue to pay close attention to what it suggests.
A recent survey on the future of AI research, conducted by the Association for the Advancement of Artificial Intelligence (AAAI) among experts in the field, found that 76% believe it is "unlikely" or "very unlikely" that LLMs will lead to what is known as "artificial general intelligence." Getting there will almost certainly require different methods: not just the statistical approach of current LLMs, but also a "symbolic" approach (the one that dominated the field of artificial intelligence before the explosion of machine learning techniques), integrating the two.
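What such an integration might look like is easiest to show with a small, purely illustrative sketch (hypothetical, not drawn from any of the surveyed proposals): a statistical component proposes answers, and a symbolic component verifies them against exact rules, accepting only what checks out.

```python
import random

def statistical_guess(a: int, b: int) -> int:
    """Stand-in for a learned model: proposes an answer that is
    plausible but not guaranteed to be correct."""
    return a + b + random.choice([-1, 0, 0, 0, 1])

def symbolic_check(a: int, b: int, proposed: int) -> bool:
    """Stand-in for a symbolic reasoner: verifies the proposal
    against exact rules instead of statistics."""
    return proposed == a + b

def hybrid_add(a: int, b: int, max_tries: int = 10) -> int:
    """Propose statistically, verify symbolically, retry on failure."""
    for _ in range(max_tries):
        guess = statistical_guess(a, b)
        if symbolic_check(a, b, guess):
            return guess
    raise RuntimeError("no verified answer found")

print(hybrid_add(17, 25))  # prints 42; wrong guesses are filtered by the checker
```

The point of the pattern is that the statistical part supplies fluency and breadth, while the symbolic part supplies the guarantees that statistics alone cannot.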
In short, the future of this technology is certainly interesting, provided we remain aware that, now as thousands of years ago, we must pursue the aurea mediocritas, the middle path.
--The original version (in Italian) was published by "StartMAG" on 19 April 2025.