The digital transformation of society is not a recent phenomenon (Castells, 2024). Over the past several decades, a series of technological breakthroughs – including the internet, the World Wide Web, cloud computing, smartphones, and the Internet of Things – have profoundly influenced our daily lives. However, the release of large language models such as ChatGPT (OpenAI) and Gemini (Google) seems to have propelled digitalisation into a new phase.
As Lucy Avraamidou compellingly argues in her article “Can we disrupt the momentum of the AI colonization of science education?”, nearly every day a new generative AI tool is advertised as promising to revolutionise some aspect of our lives (Avraamidou, 2024). This AI-driven revolution is already affecting multiple sectors and occupations (Fadel et al., 2024). As UNESCO (2021) notes, there are numerous well-known applications of AI, including: “[From] automatic translation between languages and automatic facial recognition—used for identifying travellers and tracking criminals—to self-driving vehicles and personal assistants on smartphones and other devices in our daily life. One particularly noteworthy area is health care. A recent transformative example is the application of AI to develop a novel drug capable of killing many species of antibiotic-resistant bacteria (Trafton, 2020). A second example is the application of AI to analyse medical imaging – including foetal brain scans to give an early indication of abnormalities, retinal scans to diagnose diabetes, and X-rays to improve tumour detection.”
At the same time, these rapid advancements raise questions that societies worldwide must grapple with. What will AI’s impact be on the labour market? Will its adoption enhance human well-being, or could it exacerbate inequalities? How does the significant energy consumption associated with AI influence climate change? Which ethical and moral concerns must we consider when implementing AI at scale? In the field of education – particularly in higher education – these discussions are especially vibrant. Some take a technopositive view, asserting that AI could solve a host of educational challenges. Others are more cautious, highlighting the potential pitfalls and risks. While debates are occurring at both academic and societal levels, the scientific literature outlines four possible scenarios for how AI might affect higher education (van Slyke et al., 2023):
– Minimal impact: AI tools do not significantly alter teaching or professional practices in higher education. Both students and faculty continue to rely on traditional methods, maintaining the status quo.
– AI as a tool (automation): AI automates routine tasks – such as generating exercises, grading assignments, or writing code – freeing up time for more value-added activities. However, the core educational model remains largely unchanged.
– AI as a trusted partner (augmentation): AI becomes a collaborative partner. Students, educators, and professionals interact with AI tools that function as tutors, coaches, or co-creators. This scenario fosters the co-evolution of learning processes and encourages greater creativity by combining human ingenuity with AI capabilities.
– AI as a competitor: AI tools replace certain roles traditionally held by educators or professionals, diminishing their demand. Here, faculty take on secondary roles, such as curriculum design and mentoring, while students learn primarily through “AI educators.” In this model, teachers become “co-learners” who work alongside students.
Although we do not yet know which scenario will prevail, it is increasingly clear that higher education urgently needs the competencies to engage with AI responsibly. This has led to a growing emphasis on AI literacy for both students and educators, regardless of academic level. Yet a fundamental question arises: Is any AI literacy sufficient, or should we focus on fostering a critical AI literacy that prioritises social, ethical, and moral considerations?
A helpful illustration of good practice can be found at the Centre for Learning and Teaching (CLT) within the Faculty of Science and Engineering (FSE) at the University of Groningen. In October 2023, the CLT initiated a strategic vision on integrating AI into educational contexts. A consultation group of 33 participants – including six student representatives – drew insights from FSE’s six educational clusters and eight research institutes. Two sessions were held to discuss real-world cases within FSE, identify the teaching staff’s needs, and refine a strategic vision draft.
The outcome of this effort was the “AI Literacy Vision Document”, which is now publicly available. This vision document addresses how to integrate AI tools into teaching and research in a critical and responsible manner. It advocates for critical AI literacy, defined as the competencies required to evaluate, communicate with, and work alongside AI technologies in scientific and engineering contexts. While offering an overarching strategy based on current research, the document also recognises that disciplinary differences demand distinct approaches. Accordingly, it provides guidelines for designing courses and programmes aligned with FSE’s strategic goals, particularly those related to innovation and social impact. Among its key themes, the vision document calls for balancing AI’s potential benefits with its risks – ranging from bias and ethical pitfalls to the potential erosion of human-centric interaction.
Active learning is emphasised, encouraging educators to draw on students’ prior experiences with AI as part of a broader, more intentional instructional design. Core values such as equity, transparency, accountability, and ethical use of AI are likewise underscored, forming a foundation for maintaining academic integrity as AI-based tools become more widespread. Yet these strategic outlines should be treated as guiding compasses that must ultimately lead to concrete action. In other words, we must go a step further and translate critical AI literacy into actual training programs.
Here is where Erasmus+ and related educational innovation initiatives become indispensable. For example, the Centre for Learning and Teaching, in partnership with the University of Nicosia, University College Dublin, the University of the Aegean, All Digital, and CARDET, is undertaking the INFINITE project, whose main objective is: “To prepare Higher Education (HE) faculty to critically and ethically exploit AI-based technology in their professional and pedagogical practices, thereby helping Higher Education Institutions (HEIs) leverage the best possible outcomes from AI developments.”
At the time of writing, this consortium has already created an AI Literacy Toolkit, which presents best practices that higher education institutions can easily adopt or adapt, and an AI Digital Hub that (1) makes a variety of AI tools, digital resources, and European data accessible for innovative teaching and learning, and (2) offers a centralised repository of AI-related tools and Open Educational Resources (OERs) to help the higher education community remain informed about the latest advancements.
The next step is the development of a formal training programme for both educators and students, through which the principles of critical AI literacy will become even more actionable and meaningful. In conclusion, higher education stands on the cusp of a transformative moment, one in which many stakeholders have vested interests. It is vital to bear in mind that the shape AI adoption ultimately takes depends on collective decisions made by educators, students, administrators, and policy-makers. AI is here to stay; how we harness it will determine its value for society at large. By engaging with these technologies critically, ethically, and proactively, we can ensure that AI becomes an ally – rather than a threat – in the pursuit of quality education, social equity, and human well-being.
AI vision in higher education: Toward a critical AI literacy at the University of Groningen
