(Italian version here)
Continuing the discussion on cognitive machines begun in the previous post, we observe a tendency to treat the data processed by cognitive machines as something objective and absolute, based on the etymology of "data" (from the Latin datum, "that which has been given"), when in reality, as a representation of a phenomenon, a dataset is only one model among many possible ones. In the very act of choosing data there is already an interpretative idea, which is subjective and guides the way whoever reads the data will later make sense of it. The example everyone knows is that of the bottle, which can be half empty or half full. Even in more scientific language, one can describe the same bottle as a one-liter container holding half a liter of water (half full) or half a liter of air (half empty): the phenomenon is the same, but the reader is steered toward two different interpretations. The selection of a certain set of data is therefore the fundamental act upon which all subsequent interpretation rests. This mechanism is well known both to information professionals, who often use it to guide readers toward a particular reading, and to those whose job is to reconstruct what really happened on a given occasion, who often face conflicting eyewitness accounts of the same event.
A second relevant observation is that, given the enormous quantity of digital data now available and the sophistication of machine learning techniques, it may seem there is no longer any need for theories, that is, coherent interpretative frameworks for phenomena, because the work of inventing them can be delegated to cognitive machines that, effortlessly processing terabytes upon terabytes of data with sophisticated statistical analyses, will discover all the necessary theories for us. This opinion, launched in 2008 by Chris Anderson, then editor-in-chief of Wired (one of the first magazines to address the impact of digital technology on society), argued precisely that with the deluge of available data there would no longer be any need for theory. The hypothesis received a sharp scientific refutation in 2016 from Cristian Calude and Giuseppe Longo, who proved mathematically that as the quantity of data grows, so does the number of correlations that can be found in it. Since this holds even when the data are generated at random, it follows that a correlation found merely by applying statistical techniques, without the guidance of an interpretative model (that is, of a theory), has no intrinsic meaning. Cognitive machines, with their enormous capacity for data analysis, can therefore certainly enrich the scientific method, but they can never replace it.
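The phenomenon Calude and Longo describe can be illustrated with a small simulation (a sketch of the statistical effect, not a reproduction of their proof): among purely random number series, the count of pairs showing a "strong" correlation grows rapidly as the number of series increases, even though by construction no real relationship exists between any of them. The function names and thresholds below are illustrative choices, not anything from the original article.

```python
import random
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spurious_pairs(n_series, length, threshold=0.5, seed=42):
    """Count pairs of purely random series whose |correlation| exceeds threshold."""
    rng = random.Random(seed)
    series = [[rng.random() for _ in range(length)] for _ in range(n_series)]
    return sum(1 for a, b in combinations(series, 2)
               if abs(pearson(a, b)) >= threshold)

# Random data only: the count of "strong" correlations grows with the data.
for n in (10, 50, 200):
    print(n, "series ->", spurious_pairs(n, length=20), "correlated pairs")
```

Every correlation counted here is spurious by construction, which is exactly why a correlation discovered without an interpretative model carries no intrinsic meaning.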
Cognitive machines are certainly useful for the progress of human society and, given the speed of technological advancement, it's reasonable to expect that on a purely rational cognitive level their analytical-deductive capabilities will soon be unsurpassed. This, however, doesn't mean that the so-called "technological singularity" will soon be reached. The term refers to the moment when a cognitive machine becomes more intelligent than a human being, foreshadowing the subjugation of our species. This is an ancestral fear, that of the machine rebelling against its creator, present in literature since the medieval Jewish myth of the golem, passing through Karel Čapek's play R.U.R., which gave rise to the modern use of the word "robot", up to science fiction and modern accounts in the mass media, the latter fueled by very well-known figures in the technology field such as Ray Kurzweil and Elon Musk.
The reality is quite different. Machine intelligence and human intelligence are rather different things, even if they overlap in places. The problem is that the term intelligence, which throughout human history has always meant human intelligence, coupled with the adjective artificial, evokes the idea of human intelligence artificially realized through automata. In other words, the term "artificial intelligence" leads us to believe it describes more than it actually does. As mentioned, it covers only the purely rational analytical-deductive capabilities, that is, the possibility of computing new data logically implied by the data under examination. I articulated this reflection in an article in which I suggested (somewhat provocatively, because I don't think that, at this point, we can really change the way we speak) using the expression "mechanical intelligence" instead of "artificial intelligence", to focus attention on the fact that we're still talking about mechanical capabilities, however sophisticated. In this field, as board games have already shown, cognitive machines surpass human capabilities, just as industrial machines have surpassed humans in physical capabilities.
We'll discuss in the next post some characteristics of human intelligence that appear difficult to achieve through machines.
[[The posts in this series are based on the Author's book (in Italian) La rivoluzione informatica: conoscenza, consapevolezza e potere nella società digitale, (= The Informatics Revolution: Knowledge, Awareness and Power in the Digital Society) to which readers are referred for further reading]].
--The original version (in Italian) was published by "Osservatorio sullo Stato digitale" (= Observatory on the Digital State) of IRPA - Istituto di Ricerche sulla Pubblica Amministrazione (= Institute for Research on Public Administration) on 12 February 2025.