(Italian version here)
The debate over the use of ChatGPT has been raging in recent days, especially after the news that the Italian Data Protection Authority had decided to block it, a decision that has elicited opposite reactions: some applaud it, others consider it a liberticidal and authoritarian measure. I therefore believe it is necessary, first of all, to make clear what is at stake, even to those who know nothing about what lies "under the hood" of digital technologies.
For this purpose, I propose a thought experiment which may seem distant from the topic of this article but which, in my opinion, illustrates the essence of the situation very well. Imagine that someone comes along with a machine the size of an apartment gas boiler and tells you: "Here is a mini nuclear reactor for domestic use; its initial charge of fissile material can give you hot water for heating and for all the needs of the house for the next 10 years, and it costs only a few thousand euros!"
Certainly, with this solution we could solve many problems and all live better. However, it could happen that, every now and then, such a device would overheat and produce a mini nuclear explosion...
Not a very attractive prospect, is it? Even though society as a whole would not be destroyed or damaged, it would seem obvious to everyone that the game is not worth the candle. Equally obvious is that we would not allow such devices to be freely marketed, even if the manufacturers insisted that they would soon find solutions capable of preventing explosions altogether. Likewise, the observation that this is a strategic technology for our country, one that others would develop if we did not, would not be a valid reason to accept its uncontrolled use.
In the case of generative Artificial Intelligence (AI) systems, of which ChatGPT is the most famous example, we are in a similar situation: the enormous power that nuclear technology unleashes at the physical level is comparable to what AI-based systems release at the cognitive level. In my recent book “La rivoluzione informatica. Conoscenza, consapevolezza e potere nella società digitale” (= The Informatics Revolution. Knowledge, awareness and power in the digital society), I introduced the term “cognitive machines” to denote the fact that every digital system, not only AI-based ones, acts at this level: through its purely logical-rational capabilities it can concatenate and infer data, emulating what previously only human beings were able to do. In a previous article I briefly described the dangers these machines pose to the cognitive development of children.
The possible evolution of this scenario is even more harmful if we consider that, while the mini-reactors would have to be built, shipped and installed one by one, on timescales that can hardly be compressed, these other systems (also called "chatbots") can be replicated at will with no effort and made available anywhere in the blink of an eye.
The media have therefore given great visibility to the appeal launched on March 29 by prominent figures in the AI field, including scientists such as Yoshua Bengio and Stuart Russell and entrepreneurs such as Elon Musk and Steve Wozniak, calling for a six-month halt to the development of the next generation of chatbot technology.
The problem is real, and I can understand why so many of my colleagues have supported this appeal with their signatures.
Further comments by equally relevant scientists, however, have pointed out the risk that such an appeal actually feeds the frenzy that has reached soaring levels in recent months, diverting attention from the real problems. Emily Bender, the researcher who published, together with Timnit Gebru (the scientist Google later fired for this very reason), the first paper warning of the potential negative effects of this technology, noted among others that the letter points to some false problems (e.g., that the realization of a "digital mind" is imminent or that a system with "general artificial intelligence" is now possible) while neglecting many real ones: the absolute lack of transparency about how these systems have been developed and how they work, the lack of clarity about the safety tests conducted, the risk that making them accessible to everyone is already spreading misinformation that can be very harmful (I gave a few examples in my previous article), and the significant consumption of natural resources their development entails.
As I have discussed on other occasions, I do not think it makes sense to block research and development in this area; but, as I hope the thought experiment at the beginning of this article has shown, some form of regulation must be found to balance the indispensable precautionary principle with the importance of using innovation to improve society.
That is why I believe the decision taken by the Italian Data Protection Authority is appropriate, even though not decisive. The way forward is the one indicated by the Center for AI and Digital Policy in the U.S., which filed a complaint with the Federal Trade Commission (the independent agency of the U.S. government dedicated to consumer protection and competition oversight) calling on it to intervene, since chatbots engage in behavior that is deceptive to consumers and dangerous from the standpoint of information accuracy and user safety.
A similar request was made by the European Consumers' Organization, which asked national and European authorities to open an investigation into these systems.
Better focused on the real issues at stake, however, is the open letter published by the University of Leuven in Belgium. It highlights the risk of manipulation people face when interacting with chatbots: some individuals build a bond with what they perceive as another human being, a bond that can lead to harmful situations.
In fact, the main threat chatbots pose to humans is that they exhibit human-like competence at the syntactic level while remaining light years away from our semantic competence. They have no real understanding of the meaning of what they produce; but, and this is a major problem at the social level, since they express themselves in a form that is meaningful to us, we project onto their output the meaning that is within us.
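To make this point concrete for readers who have never looked under the hood, here is a deliberately toy sketch in Python: a bigram Markov chain that produces fluent-looking word sequences. This is of course not how ChatGPT is built (modern chatbots rely on huge neural networks trained on vast text corpora), but it isolates the same underlying principle: the next word is chosen purely from statistics over preceding text, and no representation of meaning appears anywhere in the program.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a bigram Markov chain.
# A deliberately minimal illustration, NOT ChatGPT's architecture,
# but it shares the key trait: the next word is picked from
# statistics over word sequences, with no notion of meaning.

corpus = (
    "the reactor heats the water . "
    "the water heats the house . "
    "the house needs the water ."
).split()

# For each word, record which words follow it in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Emit a fluent-looking word sequence by pure statistics."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # frequency, not semantics
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the water heats the house needs the water ."
# Syntactically plausible, semantically empty: the program has no
# idea what a reactor, water or a house is.
```

Any impression of understanding in such output is supplied entirely by the reader, which is precisely the projection mechanism described above, operating at an enormously larger scale in real chatbots.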
In a nutshell, this second appeal proposes: awareness-raising campaigns for the general public; investment in research on the impact of AI on fundamental rights; a broad public debate on the role AI should be given in society; a legal framework with strong guarantees for users; and, in the meantime, all the measures necessary to protect citizens under existing legislation.
We are today facing extremely powerful systems which, as Evgeny Morozov recently reminded us in The Guardian, are neither intelligent, in the sense that we humans give to this term, nor artificial, since, as demonstrated ad abundantiam by Antonio Casilli in his book "Schiavi del Click" (= Slaves of the Click), among others, they rest on an enormous amount of undeclared and poorly paid human labor carried out in Third World countries, as well as on our (in)voluntary contribution consisting of all the "digital traces" we relentlessly leave behind during our activity on the Web.
The potential benefits are enormous, but so are the risks. The future is in our hands: we must figure out together, democratically, what form we want it to take.
-- The Italian version was published by "StartMAG" on 3 April 2023.