If you were to walk around campus and interview students about their experience with artificial intelligence, it probably would not take long to find someone who has used AI tools for coursework before. As AI usage spreads rapidly through university classrooms, so do the controversies surrounding it.
In June 2025, a group of academics from diverse fields, including computing science, sociology, law, philosophy, cognitive science, and artificial intelligence, published an open letter urging all Dutch universities and universities of applied sciences to “stop the uncritical adoption of AI technologies in academia” (Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia, n.d.). By now, more than 1,600 professionals, lecturers, and students from around the world have signed the letter, which has also been translated into other languages. This reflects broad agreement, across different positions in academia and society, that AI technologies, as currently used, harm the quality and reputation of work done by staff and students in higher education institutions. Some of the authors elaborate on their stance in an opinion paper that explains why AI technologies should not be used and glorified without appropriate reflection in (higher) education (Guest et al., 2025). Essentially, using certain kinds of AI systems in higher education contradicts many of academia’s values, such as diversity, openness, critical reflection, and sustainability. In their paper, the authors examine the challenges arising from the vague use of the term ‘artificial intelligence’. They argue that, as a buzzword, ‘AI’ often lacks a precise definition, which generates hype and makes it difficult for users to evaluate these tools objectively. To address this, the authors explain the most common AI terms and add links for readers to investigate further on their own. The abundance of references, which include scientific papers, news articles, and blog posts, is worth a look, as the article serves as a good starting point for building a critical perspective on AI technology in education.
The article criticises AI companies’ marketing strategies for suggesting that their tools have abilities they do not actually have. One example is the claim that large language models can read or write: no robot is crafting text the way a human would; the output is simply the most likely prediction, produced by largely opaque algorithms. Special caution is needed with profit-driven AI technologies such as ChatGPT: the company behind it, OpenAI, does not publicly release any source code, which conflicts with fundamental academic principles such as transparency.
The chatbot ‘ChatGPT’, developed by the company OpenAI, is especially popular amongst students, though it carries serious risks of deskilling.

From: https://www.pexels.com/photo/close-up-of-a-person-holding-a-smartphone-displaying-chatgpt-16461434/
Users do not know how the output of this tool, often perceived as a kind of universal remedy, is actually computed. The commercial field of artificial intelligence repeatedly goes through hype cycles but keeps “over-promising and under-delivering” (Guest et al., 2025, p. 10), as the authors put it. Similarly, the term ‘artificial intelligence’ suggests that human intelligence can easily be replicated artificially, which vastly simplifies the cognitive processes commonly understood as contributing to ‘intelligence’. More generally, the article reminds us that the concept of intelligence is historically problematic because of its racist, sexist, classist, and ableist background. AI easily falls into the trap of reproducing those very issues and is therefore especially “harmful to minoritised and vulnerable groups” (Guest et al., 2025, p. 4). Contrary to what is sometimes believed, artificial intelligence will not bring more justice and equality. Regular use of AI tools also hands tech companies considerable social power and can lead to data colonialism. The perception that a large language model is somehow neutral or objective because it is a machine ultimately leads to dehumanisation. In addition, AI is in many cases harmful to the environment: data centres, for example, consume large amounts of land, water, and energy to keep the tools available.
Approaching the conflict between AI use in higher education and academic values, the paper urges reflection on the relationship between society and specific technologies in a more general sense. To stimulate this train of thought, the authors offer an apt metaphor comparing AI use to driving responsibly in traffic. If an ordinary person is speeding, this poses a risk to everyone around them. If a paramedic drives very fast to transport a patient to the hospital, however, this is not considered unethical or wrong, since paramedics are specifically trained to drive safely at higher speeds. Similarly, there is a moral difference between trained experts using AI in scientific practice and laypersons using AI in their daily lives. This analogy helps explain why it is important for lecturers in higher education to carefully and responsibly train their students to use AI tools appropriately.
Lecturers should remind their students of the original purpose of higher education, which is more than getting a degree.

From: https://www.pexels.com/photo/a-class-having-a-recitation-8199166/
Rather than mindlessly reproducing AI companies’ marketing talking points, teachers in higher education should carefully consider which pedagogical goals they are pursuing with their classes and whether AI can meet them. Does the AI tool of choice really add significant value to the learning process? According to the paper, successful education is based on mutual trust between educators and students, and to discourage cheating, e.g. having an AI tool generate an essay and presenting it as one’s own work, teachers should remind students of the general purposes of education. The question of copyright, in particular, is a central issue of AI use at universities: not only does the regular use of LLMs normalise appropriating others’ work as one’s own, it also devalues the actual work of students and staff who abstained from AI. Following this line of argument, it reduces academics from workers to customers. Writing a good prompt does not require the same level of skill as writing a good paper. It is possible for educators to teach AI literacy without encouraging the use of the AI tools discussed. According to the authors, the increasing prevalence of AI in education will likely lead to growing illiteracy and, in a broader sense, to the deskilling of students, who will also become dependent on the technology industry. The latter, however, could be argued for any kind of technology, such as word processors, digital learning environments, or reference managers, most of which could still be considered harmless. Handling AI use well in the classroom remains a challenge, since merely raising awareness of its risks does not take them away. In the long term, even talking about AI critically might reinforce a certain normalisation of the topic.
Applying the five core principles of the Netherlands Code of Conduct for Research Integrity to AI usage, the paper clearly lays out the tensions: Honesty demands that researchers disclose when AI has been used and refrain from making unsupported claims about its capabilities. Scrupulousness requires using only AI tools whose functions are well specified, validated, and relevant to the field, and being able to justify why the technologies are used. Transparency requires that the AI tools used be open source and computationally reproducible, which disqualifies most large-language-model chatbots from major tech companies. Independence from such AI firms can be hard to maintain, challenging researchers to remain unbiased and free of conflicts of interest. Finally, responsibility obliges academics to avoid AI tools that harm people, animals, or the environment, or that violate legal guidelines. In sum, the paper shows how most current AI technologies struggle to meet even the basic ethical and methodological standards of academic research.
Students themselves often recognise the risks of AI or even prefer outright bans. The argument is straightforward: to truly acquire a skill, you must practise it. The paper highlights this with examples such as learning basic arithmetic: when it is taught in school, pupils are usually not allowed to use a calculator, so that they must go through the effort of learning the relevant skills. Similarly, it would be advisable to ban AI systems in higher education so that students can properly learn what it means to work scientifically. Unfortunately, the current academic system already pressures individuals to take unethical shortcuts, and the spread of AI exacerbates the problem. Yet the spread of AI is not inevitable, despite how it is often presented. While reversing course becomes more difficult over time, it is not impossible: Dutch schools, for example, have successfully reintroduced phone bans, with measurable benefits for student focus and learning outcomes (Kohnstamm Instituut, 2025). Ultimately, AI systems cannot truly replace the depth and quality of human craft and thinking. The challenge for academia is not just to resist the hype, but to actively reclaim the values and practices that make education meaningful.
References
- Guest, O., Suarez, M., Müller, B., Edwin, V. M., Arnoud, O. G. B., Ronald, D. H., Andrea, R. E., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & Iris, V. R. (2025). Against the uncritical adoption of “AI” technologies in academia. Zenodo (CERN European Organization for Nuclear Research). https://doi.org/10.5281/zenodo.17065099
- Kohnstamm Instituut (2025). https://open.overheid.nl/documenten/54c01e11-5a20-4779-9243-f4ed5fda1c9f/file
- Open Letter: Stop the uncritical adoption of AI technologies in Academia. (n.d.). https://openletter.earth/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e
Leiden University has organised a seminar on the same topic; the recording is available at: https://www.universiteitleiden.nl/en/events/2026/02/against-the-uncritical-adoption-of-ai-technologies-in-academia
