In recent months, the world of education, particularly among teachers, has been in a state of upheaval. The cause is generative artificial intelligence tools (which I'll refer to as GenAI for short), the most famous of which is ChatGPT. Why is this happening? To begin with, GenAI tools are objectively able to produce content — whether text, images, or many other cognitive products typically created by humans — that is often indistinguishable from what a person would make. These tools, which Stefano Quintarelli aptly dubs SALAMI (Systematic Approaches to Learning Algorithms and Machine Inferences), are mistakenly perceived by the general public as truly intelligent entities, owing to the typical human tendency to project meaning onto things and see significance everywhere.
On top of this, enormous commercial interests are at stake. Billions of dollars are being invested annually in a race to capture shares of a market projected to be worth at least a thousand times as much. This relentless drive pushes the media to constantly describe scenarios that swing from the fantastic (we'll solve all of humanity's problems) to the apocalyptic (we'll all lose our jobs). Whatever their flavor, these narratives aim to create a sense of inevitability: there is no alternative! And since the most effective way to convince someone to use a product is to get them accustomed to it from an early age, the education sector has become the primary target of this pressure.
Moreover, GenAI tools are now fully integrated into the technological appendage we all carry around — our smartphones — and so, whether we like it or not, everyone ends up using them. "It's inevitable!" is the constant refrain. The label "innovative" is then used as a catch-all to win over non-experts, while the potential long-term consequences are ignored. We need only recall how, over the past two decades, we let even very young children use digital devices without any real oversight. We have since realized that this created serious problems, as documented in the June 2021 final report of the Italian Senate's Public Education and Cultural Heritage Commission, issued at the conclusion of its fact-finding investigation into "The Impact of Digital Technology on Students, with a Particular Focus on Learning Processes".
Now, if we focus on text-based GenAI tools — those that produce written responses to a request (e.g., "What were the key events of the Punic Wars?", "Write a 20-line summary of War and Peace", or "Describe the process of plant pollination") — the immediate consequence for schools is that one of the most traditional methods of assessing skills, homework, becomes completely ineffective. It's a no-brainer that students will use GenAI to do their assignments, unless schools impose a climate of prohibition, which in any case would only work for a short while.
The solution, however, is not to invent improbable ways for students to use these tools anyway. Later on, I will discuss five key problems with such an approach. The real answer is to place greater value on oral interaction and in-person relationships. This means less homework and more in-class work, which in turn requires smaller class sizes and more teachers. Of course, this costs money, but a country's future hinges on its education system. This kind of approach also restores the value of the relational aspect of the teacher-student bond. We know — at least since Plato's analysis in his Dialogues — that the emotional component of the educational relationship between a didàskalos and a mathetés, a master and a student, is a fundamental aspect of paideia, the ethical and spiritual growth of the disciple. While technology can enrich this component if used appropriately, it can never, ever replace it. To do so would impoverish and destroy our humanity.
Still, some people argue: "Let's turn the availability of GenAI into an opportunity for students to learn better. We should have them experiment with these technologies, since they'll have to use them as adults anyway." This approach, however, has several serious weaknesses that are worth examining.
1) These tools are still under development and often produce responses that seem correct but are not. Only someone already well-versed in the subject on which GenAI has generated a text can spot what is wrong. Since the acquisition of knowledge is a fundamental part of education, especially for younger students, it makes no sense to risk students learning incorrect information (or worse, information distorted by prejudice or stereotypes).
2) These tools are controlled by the usual Big Tech companies, and there is a complete lack of oversight over how they were developed, how they work, what data was used for their training, and what safety tests have been conducted. For every other potentially harmful technology, societies have introduced regulations. In this case, the European Union is attempting to do so with the so-called "AI Act", for which a political agreement was recently announced, although the details remain unknown. In the meantime, however, the use of these tools is being encouraged as much as possible, regardless of the potential risks. A major problem erupted in the last days of 2023, with the news that The New York Times, one of the world's best-known and most reputable newspapers, had sued OpenAI and Microsoft for allegedly training their GenAI tools on its articles without authorization. Other lawsuits are expected soon, particularly concerning visual GenAI tools, those that produce images and videos, since several users on X (formerly Twitter) have pointed out that such tools can produce copyrighted material even when the user's prompt is phrased in a completely generic way.
3) Using tools that are still in development means we are essentially working for free for the companies that are building them, and that might one day sell them back to us. In other words, we are repeating the mistake we made when we mindlessly embraced intrusive and abusive social platforms that collected massive amounts of data about us and now use it for commercial gain. And what did we get in return? When I hear people say that we need to train teachers in how to phrase questions to GenAI systems so they can get the help they need (a practice known as "prompt engineering"), I shudder. Do we really want to turn teachers into free labor for Big Tech?
4) Even if GenAI tools always provided correct answers (which they do not), using them in the lower grades risks more than our children acquiring false knowledge. It could also cripple their cognitive development, because they would no longer be practicing essential skills. The ability to summarize a text, argue a position, and present a point of view are fundamental skills for any citizen; if students do not practice them in school, they will never acquire them. It is as if we only ever traveled by car and never walked or biked: our physical abilities would weaken. Furthermore, while studying from books trains us to consider a variety of viewpoints and ways of presenting a topic, relying on very few sources of knowledge creates a strong risk of social indoctrination, especially in the humanities.
5) I find it astonishing how sharp the contrast is between the constant encouragement to use these tools and the complete lack of a serious ethical evaluation of involving minors in what are still experimental technologies. For any other experiment involving children, ethical approval is — rightfully — required; in this case, there is nothing. For a recent research project in elementary schools that compared two methods of teaching a fundamental informatics concept, my colleagues and I had to obtain approval from an ethics committee. Yet now I see people urging minors to use GenAI without any consideration of these issues, or of privacy concerns. Can you imagine if the Wright brothers had started flying passengers in the early 1900s, while their airplanes were still under development?
Some argue: "There's no alternative; GenAI is here, and it's part of our lives." While it is true that it is already part of our lives, there is no absolute need to use it in the classroom. This does not mean we should not talk about it, or should pretend it does not exist. Collective use in the classroom, under a teacher's guidance and control, along with a critical discussion (at a level appropriate for the children's age), provides useful information about a technology they are bound to encounter. But at the current stage of the technology, students do not truly need to use these tools regularly. For teachers, some uses are possible, as I described in a recent article, but extreme caution is necessary, given the experimental nature of these tools and the risks associated with the handling of minors' personal data. In this regard, I would be interested to know whether the Italian Data Protection Authority has assessed whether encouraging teachers to use these tools to personalize their teaching for individual students might jeopardize those students' privacy.
To be even clearer: in a context where we teach the scientific foundations of informatics in schools, there is certainly a place for teaching the principles of artificial intelligence, which is itself a very important field of informatics. However, explaining machine learning, the core technique behind GenAI, to someone who does not know what an automaton or an algorithm is, is like trying to explain trigonometry to someone who knows only the four basic arithmetic operations. You can, of course, explain that trigonometry lets you measure the distance between two trees on opposite sides of a river without crossing it, but without a solid mathematical foundation that amounts to nothing more than popular science. Useful, certainly, for a citizen who has little time to study but still needs to stay informed about scientific and technological advances. In school, however, we should provide the scientific foundations needed to understand the world around us, both the natural one and the one we have artificially built, which is increasingly digital.
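(For readers curious about the trigonometry behind that river example, here is a minimal sketch; the setup and symbols are my own illustration, not part of the original argument. Standing at two points A and B on the same bank, a baseline $b$ apart, you measure the angles $\alpha$ and $\beta$ between the baseline and the lines of sight to the tree T on the opposite bank. The law of sines then gives the perpendicular distance across the river:

$$d = b\,\frac{\sin\alpha\,\sin\beta}{\sin(\alpha+\beta)}$$

With $b = 50$ m and $\alpha = \beta = 60^\circ$, for instance, $d = 50 \cdot (0.866 \cdot 0.866)/0.866 \approx 43.3$ m, and nobody has to cross the river. The point stands: the formula is easy to state, but understanding why it works requires the mathematical foundations underneath it.)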
Rethinking the entire school curriculum in light of the transition from an industrial to a digital society is a necessary step, and it is far, far more important than chasing technological fads by training students and teachers to use GenAI tools.
-- The Italian version was first published by "StartMAG" on 4 January 2024.