(Italian version here)
We are seeing a frantic race to adopt Artificial Intelligence (AI) tools in schools, even though they still have critical issues that should be considered with extreme care and studied over a longer timeframe, especially in a sector where interventions can have long-term effects. I think it is important to avoid a repeat of what happened with the widespread introduction of digital devices in education over the last two decades: only recently have we started paying attention to the problems that their excessive use has created. See, for example, the final report, published in June 2021, of the inquiry “Sull’impatto del digitale sugli studenti, con particolare riguardo ai processi di apprendimento” (On the Impact of Digital Technology on Students, with Particular Attention to Learning Processes) conducted by the Senate’s Committee “Istruzione Pubblica e Beni Culturali” (Public Education and Cultural Heritage).
Here are a few points to help foster a more thoughtful reflection on this topic.
Before getting started, it’s important to recall that different levels of education—primary, lower secondary, upper secondary, university, and vocational training—have different needs and require different approaches. Therefore, in this article, I will focus on the school sector. I've discussed the university world elsewhere. For instance, master's degree students, who already have foundational knowledge in a field and are working to deepen and refine it, can be asked to critically evaluate papers produced by AI tools. This serves as an important exercise to test their level of preparation. This approach, however, seems far less relevant in schools, where students are still building their basic skills. To be able to criticize, you must first have both knowledge and know-how.
It is also essential to distinguish between the different roles involved in using AI in schools: students, teachers, and administrative staff. The latter two groups can use these tools to help with repetitive tasks (while paying close attention to data privacy). For students, however, the repeated exercise of cognitive functions cannot be delegated to such tools without inevitably undermining the development of their skills. This point is too often overlooked, even though centuries of educational experience remind us that both physical and intellectual abilities require constant and repeated practice to be developed.
Additionally, it is crucial to distinguish between using a technology and being educated in the scientific principles behind it, a distinction that is too often overlooked. AI tools are enormously useful for rational cognitive work, that is, the kind of reasoning that starts from objective elements and produces new data logically consistent with and derived from them (something that, more generally, all computer systems do: the “cognitive machines” I discussed in my book La rivoluzione informatica). Yet they operate through fairly complex mechanisms, and truly understanding how they work requires mathematical and scientific prerequisites that schools typically do not provide.
In fact, Italy is still lagging when it comes to the basic scientific skills needed to successfully navigate the digital transition. For more on this, I suggest looking at the proceedings from the conference on teaching informatics in schools, held at the Accademia dei Lincei on October 19th. This is precisely the gap highlighted by the European Commission's recommendation from April 2023 (COM(2023) 206 final), which urged all member states to include high-quality informatics education from the beginning of compulsory schooling. The general public still doesn't truly grasp what informatics is—the scientific discipline that underpins the digital world. AI is a highly specialized field within informatics. And while its tools, like those from many other areas of informatics, can be used by the average person, a real understanding of how they function must be built on a foundation of basic informatics knowledge that most people currently lack. It's like trying to explain differential and integral calculus to someone who's never taken a math class.
Lastly, and no less important, we need to evaluate the risks and benefits of introducing AI tools into schools. They can certainly bring advantages for certain types of users by reducing the effort associated with repetitive cognitive-rational tasks. However, these potential benefits must be balanced against possible downsides. These include loss of privacy, the spread of misinformation, the generation of inaccurate texts based solely on statistics (so-called "hallucinations," which I've discussed elsewhere), and the significant energy consumption and environmental impact. For instance, a single query to an AI-based tool can be dozens of times more costly than a query to a standard search engine. We must also consider the potential dehumanization of the teacher-student relationship, in which the relational component is an essential aspect.
AI tools can boost teacher productivity by reducing the time spent on repetitive tasks, allowing educators to dedicate more attention to students who need it most. These routine tasks include, for example: generating exercise and exam questions, creating presentations from texts (summarization), generating texts from detailed outlines, and providing explanations for simple requests for clarification. Of course, it is crucial to remain aware of their margins of error, which makes it necessary to carefully review whatever they produce (and therefore to have a solid grasp of the subject matter). Ultimately, responsibility always lies with the human, not the tool.
In general, however, unsupervised use of these tools by students for schoolwork should be avoided. Some, for example, have suggested they could use them at home to get a first evaluation of their homework. A rather odd idea—since at that point they might as well use them to write the assignments, too! More importantly, given the risk of hallucinations in these systems, the very students who would benefit most from such feedback are also the least equipped to spot the mistakes the tools might introduce. Of course, students will still have access to them through their smartphones, so rather than relying on prohibition, the better approach is to provide information and raise awareness, for example, through collective use and critical discussion in the classroom. On the other hand, and this is probably good news, the fact that anyone can produce a high-quality written text with these tools could encourage a renewed emphasis on direct human interaction and oral dialogue.
For some schools, particularly technical institutes for Informatics, curricula will need to be adapted (and supported by suitable professional development for teachers) to include adequate scientific and technological training in this field, which will significantly influence the world of work in the near future. Since those who choose this technical track are generally aiming to enter the job market directly after graduation, it's only right that preparation in this area becomes part of their education.
In conclusion, it will be essential for AI tools used in schools to be monitored and regulated by the public sector. This is to avoid the risks associated with purely commercial products that lack independent oversight regarding their development, the data they were trained on, the security checks they have undergone, how they use and manipulate the data provided to them, and any potential gender- or diversity-related biases. These are all aspects with a high social impact and, for this reason, they require the highest level of attention. It should also be remembered that this technology is still in an experimental phase, and every one of our interactions with it contributes to its development completely free of charge. This would only be acceptable for a tool owned by the community, where any improvements would ultimately benefit everyone.
Approximately twenty years after video game consoles first appeared in Italy and fifteen years after smartphones became widespread, the Parliamentary Commission mentioned at the beginning concluded its investigation with these words: «There are physical harms… and there are psychological harms… But the most worrying thing is the progressive loss of essential mental faculties, the faculties that for millennia have represented what we broadly call intelligence: the ability to concentrate, memory, critical thinking, adaptability, dialectical skills... These are the effects that the use, which in most cases can only degenerate into abuse, of smartphones and video games produces on the youngest.» I would not want a new Parliamentary Commission, twenty years from now, to reach the same (or perhaps even more) alarming conclusions regarding the use of AI tools.
The future of our children is too important to be sacrificed on the altar of productivity and technological progress.
--The Italian version was published by "StartMAG" on 19 November 2023.