
Tuesday, 11 April 2023

Stop and think… The unbearable impulse to be the first to comment on social media

by Isabella Corradini

These days everyone is talking about the decision of the Italian Data Protection Authority to block the use of ChatGPT, the natural language processing tool based on artificial intelligence, capable of performing various functions, including answering users' questions and writing or summarizing texts. I do not want to dwell on that decision, although I have my own opinion, since it is not the aspect I want to highlight in this article.

What I want to highlight instead is the human tendency, especially when discussing topics like digital innovation, to "jump into the fray" by skipping entirely the moment of reflection needed to make reasonable evaluations. We know that digital technologies are producing significant changes in our social and working lives, changes that would require in-depth examination. Instead, never as much as in this period have experts and self-styled experts challenged each other with posts and comments on social media to have their say, taking sides as if they were watching a football match. Unfortunately, it also happens that someone goes too far, because social media, given their nature, certainly do not encourage an articulated debate on issues requiring much more space and time for a fruitful discussion.

[Image by Geralt, via Pixabay]

What I observe is the growth of this tendency, which could be called "the impulse to be the first to comment on facts on social media", as if there were no tomorrow. It does not matter whether there is enough knowledge about the actual facts; what matters is to be constantly present on social media and prove to be "on the ball".
Needless to say, this way of intervening on complex issues, such as ChatGPT (but there could be many other examples), makes the public debate ineffective and entails the real risk of lowering the quality of information. That is certainly devastating from a communicative point of view and, above all, has the serious consequence of hampering the correct spreading of information (I wonder whether this is the real goal in the end...). Not to mention that all these efforts, often frantic, are not justly rewarded, considering the Internet cauldron in which they will end up: in the current era, such posts quickly become outdated.

The frenzy that drives individuals to comment also explains certain views: just think of those (probably not having many other points of discussion) claiming that certain things must absolutely be done, and quickly, otherwise the country (Italy in this case) risks falling behind other countries. In short, the important thing is always to move forward "whatever it takes"; how, and with what consequences, does not matter. An example is all those who get excited reading about the large investments in Artificial Intelligence announced by important companies, while not caring that those same companies, as a result of those huge investments, plan to lay off thousands of people.

Some days ago, I attended a pleasant meeting in Milan with a group of experts with different skills and backgrounds (the DLNet, coordinated by a friend of mine, the lawyer Andrea Lisi), where we also discussed the "anxious way" in which people interact on social media. We asked ourselves whether it is better to provide answers on the spot or to think about it one more day, at the cost of losing the moment of glory. We came to a shared conclusion, namely that there would be a greater benefit for society if, before diving into the sea of posts regarding a specific event, individuals stopped to think and then expressed their opinion with a cool head and in a more reasoned way.

Stop and think: something that should be an integral and enriching part of human nature, but that we are often debasing. Whether it is social media, technological innovations or the fear of falling behind, in the current era stopping to think is no longer fashionable.

The paradox is that in educating kids to develop a conscious use of digital technologies, one of the most used slogans is: think before you post! 

That is why adults should be the first to set a good example.

Monday, 10 April 2023

The debate on the future of Artificial Intelligence in society has just begun

by Enrico Nardelli

(Italian version here)

The debate over the use of ChatGPT has been raging in recent days, especially because of the news of the blocking decision issued by the Italian Data Protection Authority, which has elicited opposite reactions: some applaud it, others consider it a liberticidal and authoritarian measure. I therefore believe that, first of all, it is necessary to make clear, even to those who know nothing about what is "under the hood" of digital technologies, what the issue at stake is.

For this purpose, I propose a thought experiment which may appear distant from the topic of this article, but which, in my opinion, illustrates the essence of the situation very well. So, imagine that someone comes along with a machine of the size of a gas boiler for an apartment and says to you: "Here is a mini-nuclear reactor for domestic use, its initial charge of fissile material is capable of giving you hot water for heating and all the needs of the house for the next 10 years, and it costs only a few thousand euros!"

Certainly, with this solution we could solve many problems and all live better. However, it could happen that, every now and then, such an object overheats and produces a mini nuclear explosion...

Not a very attractive prospect, is it? Even though society as a whole would not be destroyed and the damage would be quite limited, it would seem obvious to everyone that the game is not worth the candle. Equally obvious is that we would not allow such devices to be freely marketed, even if the manufacturing companies insisted that they would soon find solutions able to prevent explosions altogether. Nor would observing that this is a strategic technology for our country, one that others would develop if we did not, be a valid reason to accept its uncontrolled use.

In the case of generative Artificial Intelligence (AI) systems, of which ChatGPT is the most famous example, we are in a similar situation: the enormous power that nuclear technology deploys at the physical level is comparable to what AI-based systems release at the cognitive level. In my recent book "La rivoluzione informatica. Conoscenza, consapevolezza e potere nella società digitale" (= The Informatics Revolution. Knowledge, awareness and power in the digital society), I introduced the term "cognitive machines" to denote the fact that any digital system, not only AI-based ones, acts at this level, where it is able, through its purely logical-rational capabilities, to concatenate and infer data, emulating what previously only human beings were able to do. In a previous article I briefly described their dangers in relation to the cognitive development of children.

The possible evolution of this scenario is even more harmful if we consider that, while mini-reactors would have to be built, shipped and put in place one by one, within timeframes that can hardly be compressed, these other systems (also called "chatbots") can be replicated at will with no effort at all and made available anywhere in the blink of an eye.

All the media have therefore given great visibility to the appeal launched on March 29 by very important people in the AI field, including scientists such as Yoshua Bengio and Stuart Russell and entrepreneurs such as Elon Musk and Steve Wozniak, with the goal of pausing for six months the development of the next generation of chatbot technology.

The problem is real and I can understand why so many of my colleagues have supported this appeal with their signature.

Further comments by equally relevant scientists, however, have pointed to the risk that such an appeal actually feeds the frenzy that has reached soaring levels in recent months, diverting attention from the real problems. As noted, among others, by Emily Bender, the researcher who in 2020 published, along with Timnit Gebru (the scientist who was later fired by Google for this very reason), the first paper warning of the potential negative effects of this technology, the letter points to some false problems (e.g., that the realization of a "digital mind" is now imminent, or that a system with "general artificial intelligence" is now possible) while neglecting many of the real ones: the absolute lack of transparency about how these systems have been developed and how they work, the lack of clarity about the safety tests conducted, the risk that their accessibility to everyone is already spreading misinformation that is also very harmful (in my previous article I gave a few examples), and the fact that their development significantly affects the consumption of natural resources.

As I have discussed on other occasions, I do not think it makes sense to block research and development in this area, but, as I hope the thought experiment described at the beginning of this article has shown, some form of regulation must be found to balance the indispensable precautionary principle with the importance of using innovation to improve society.

That is why I believe the decision taken by the Italian Data Protection Authority is appropriate, even though not decisive. The way forward is the one the Center for AI and Digital Policy in the U.S. has indicated by filing a complaint with the Federal Trade Commission (the independent agency of the U.S. government dedicated to consumer protection and competition oversight) and calling on it to intervene, since chatbots engage in behaviors that are deceptive to consumers and dangerous from the standpoint of information accuracy and user safety.

A similar request was made by the European Consumers' Organization, which asked national and European authorities to open an investigation into these systems.

Better focused on the real issues at stake, however, is the open letter published by Leuven University in Belgium. It points to the risk of manipulation that people may face when interacting with chatbots, as some individuals build a bond with what they perceive as another human being, a bond that can result in harmful situations.

In fact, the main threat chatbots pose to humans is that they exhibit human-like competence on the syntactic level but are light years away from our semantic competence. They have no real understanding of the meaning of what they are producing, but (and this is a major problem on the social level) since they express themselves in a form that is meaningful to us, we project onto their outputs the meaning that is within us.

In a nutshell, these are the proposals of this second appeal: awareness-raising campaigns for the general public; investment in research on the impact of AI on fundamental rights; a broad public debate on the role to be given to AI in society; a legal framework with strong guarantees for users; and, in the meantime, all necessary measures to protect citizens under existing legislation.

We are nowadays facing extremely powerful systems which, as Evgeny Morozov recently reminded us in his article in The Guardian, are neither intelligent in the sense that we humans give to this term, nor artificial since – as demonstrated ad abundantiam, among others, by Antonio Casilli in his book "Schiavi del Click" – they are based on an enormous amount of undeclared and poorly paid human labor carried out in Third World countries, as well as on our (in)voluntary contribution consisting of all the "digital traces" we relentlessly provide during our activity on the Web.

The potential benefits are enormous, but so are the risks. The future is in our hands: we must figure out together, democratically, what form we want it to take.

--
The Italian version was published by "StartMAG" on 3 April 2023.


Monday, 3 April 2023

Our children are not guinea pigs

by Enrico Nardelli

(versione italiana qua)

I wrote this post after reading a tweet by Tristan Harris about one of the latest "feats" of ChatGPT, the artificial intelligence-based text generator everyone has been talking about in recent weeks. Tristan Harris is one of the co-founders of the Center for Humane Technology, whose mission is to ensure that technology is developed for the benefit of people and the wellbeing of society.

In his tweet, he reports a "conversation" that took place between a user posing as a 13-year-old girl and ChatGPT. In summary, the user says she met on the Internet a friend 18 years older than her, whom she liked and who invited her on an out-of-town trip for her upcoming birthday. ChatGPT in its "replies" says it is "delighted" about this prospect, which will certainly be a lot of fun for her, adding hints on how to make "the first time" an unforgettable event. Harris concludes by saying that our children cannot be the subjects of laboratory experiments.

I completely agree with him.

For those who have not yet heard about ChatGPT, let me explain that it is an example of a computer system, based on artificial intelligence techniques, capable of producing natural language texts in response to user questions. These texts appear to be generally correct but, on closer inspection, turn out to be marred by fatal errors or inaccuracies (here is an example you can find: a description of a scientific article on economics that is, in fact, completely made up). In other words, if you do not already know the correct answer, what it tells you is likely to be of no help at all. Without going into technical details, this is because what it produces is based on a sophisticated probabilistic model of language containing statistics on the most plausible continuations of sequences of words and sentences. ChatGPT is not the only system of this type, as several others are produced by the major companies in the field; however, it is the most famous one, and its recently released version 4 is considered to be even more powerful.
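To make the idea of "most plausible continuations" concrete, here is a minimal sketch in Python. It is purely illustrative and is not how ChatGPT is actually built (real systems use very large neural networks trained on vast text collections): a toy bigram model that, given a word, continues with a word that frequently followed it in its training text. The corpus and function names are invented for the example.

```python
# A toy "plausible continuation" generator: it learns only which word
# tends to follow which, and strings words together accordingly.
# It has no notion of whether the resulting sentence is true or meaningful.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug . the dog ate the bone ."
).split()

# Collect, for every word, the words observed right after it.
followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Continue a text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        # Sampling from observed continuations: statistics, not understanding.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat ate"
```

The output of such a generator reads as fluent text, which is precisely the point: when the statistics do not encode the right answer, a system of this kind produces fluent but wrong text rather than admitting ignorance.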

For these systems, I will use the acronym SALAMI (Systematic Approaches to Learning Algorithms and Machine Inferences), created by Stefano Quintarelli to designate systems based on artificial intelligence, precisely in order to avoid the risk of attributing to them more capabilities than they actually have.

One element that we too often forget is that individuals see "meaning" everywhere. The famous Californian psychiatrist Irvin Yalom wrote: «We are meaning-seeking creatures. Biologically, our nervous systems are organized in such a way that the brain automatically clusters incoming stimuli into configurations». This is why, when reading a text that appears to be written by a sentient being, we assume that whoever produced it is sentient. As with the famous saying "beauty is in the eye of the beholder", we can say that "intelligence is in the brain of the reader". This cognitive trap, into which we fall when faced with the prowess of SALAMI, is exacerbated by the use of the term "artificial intelligence". When it began to be used some 70 years ago, the only known intelligence was that of people, and it was essentially characterised as a purely logical-rational competence. At that time, the ability to master the game of chess was considered the quintessence of intelligence; this is no longer the case.

Advances in scientific knowledge in the field of neurology have revealed, on the one hand, that there are many dimensions of intelligence that are not purely rational but are equally important, and on the other, closely related, hand, that our intelligence is inextricably linked to our physical body. By analogy, we also speak of intelligence for the animals closest to us, such as dogs and cats, horses and dolphins, monkeys and so on, but these are obviously metaphors: we describe in this way those behaviours that, if they were exhibited by human beings, would be considered intelligent.

Intelligence, in my view, is only the embodied intelligence of people. Using the term for systems that are merely incorporeal "cognitive machines" (a term I introduced in my recent book "La rivoluzione informatica. Conoscenza, consapevolezza e potere nella società digitale", = The Informatics Revolution. Knowledge, awareness and power in the digital society) is dangerously misleading. Any informatics system is a "cognitive machine" which, on an exclusively logical-rational level, is able to compute data from other data, but without any consciousness of what it does or understanding of what it produces. At this level such machines have surpassed our capacities in many areas, as industrial machines did at the physical level, but to use the term "intelligence" for such systems is misleading, and to do so for that particular variant that is SALAMI risks being extremely dangerous on a social level, as illustrated by the example described at the beginning.

Let me make it very clear that this does not mean halting research and technological development in this field. On the contrary, SALAMI can be of enormous help to mankind. However, it is important to be aware that not all technologies and tools can be used freely by everyone.

Cars, for example, while being of unquestionable utility, can be used only by adults who have passed a special exam. Note that we are talking about something that acts on the purely physical level of mobility and, despite this, it does not occur to us to replace children's strenuous (sometimes painful) learning to walk by equipping them with electric cars, because that learning is an indispensable part of their growth process.

Cognitive machine technology is the most powerful one that mankind has ever developed, not least because it acts at the very level that helps to define intelligence, the capacity that led us, from naked helpless apes, to become the lords of creation. To allow our children to use SALAMI before their full development means undermining their chances of growth on the cognitive level, just as would happen if, for example, we allowed pupils to use desktop calculators before they had developed adequate mathematical skills.

We are already ruining the cognitive development of future generations with the indiscriminate use of digital tools for reading and writing, despite many warnings, summarised in Montessori's expression "the hand is the organ of the mind" (see la mano è l'organo della mente, and also Benedetto Vertecchi's book "I bambini e la scrittura" (= Children and Writing)), and despite researchers' recommendations (see the Stavanger Declaration on the Future of Reading). Let us not continue like this. Let us not hurt them even more.

Obviously, at university the situation is different, and we can certainly find ways of using SALAMI that contribute to deepening the study of a discipline while preventing their use as a shortcut in students' assigned tasks. Even more so in the world of work, there are many ways in which they can ease our mental fatigue, similarly to what machine translation systems do for texts written in other languages.

It is clear that before invading the world with technologies whose diffusion depends on precise commercial objectives, we must be aware of the dangers.

Not everything the individual wishes to do can be allowed in our society, because we have a duty to balance the freedom of the individual with the protection of the community. Likewise, not everything that companies would like to achieve can be allowed to them, especially if the future of our children is at stake.

Innovation and economic development must always be combined with respect for the fundamental human rights and the safeguard of social wellbeing.

--
The Italian version was published by "StartMAG" on 19 March 2023.