Over the last few months, we have shared several updates about our work at the University of Groningen within the INFINITE project, particularly in connection with critical AI literacy and capacity-building for higher education (HE) instructors. We began by introducing the AI Literacy Vision document, a collaboratively developed text that outlined a shared understanding of what it means to be critically literate in AI contexts. Later, we described how we translated that vision into action through the design and implementation of professional development courses, such as the one focused on exploring the limits and possibilities of course design with AI.
In this new article, we would like to take a step further and focus more directly on the ethical dimension of AI in education. This is the main topic of a recent paper I (Francisco Castillo) co-authored with Miquel Pérez, titled Una mirada crítica en l’alfabetització en Intel·ligència Artificial per l’educació STEM (A critical look at Artificial Intelligence literacy for STEM education), published in the journal Ciències. In this paper, we reflect on how the use of generative AI tools in STEM classrooms – particularly large language models such as ChatGPT – raises new ethical and pedagogical dilemmas that we cannot ignore.
Through four concrete examples, we aim to show that these tools are not neutral, and that their integration into education requires much more than technical understanding. We argue that educators and students alike need a critical AI literacy: one based not only on knowing what AI can do, but also on asking important questions about how it works, who builds it, with what values, and with what consequences.
Each of the four examples presents a common situation where AI is used in educational contexts:
- In the first example, we ask AI to generate a short academic paragraph with references. While the result looks convincing, it includes made-up citations and fabricated data, revealing how easily these tools can distort reality and create a false sense of credibility. The dilemma here is: how do we teach students to distinguish between scientific validation and plausible text generation?
- The second example focuses on image generation. We ask the AI to generate a picture of a typical classroom in Barcelona, and the result shows a stereotypical Western classroom with white students and a female teacher at the blackboard. This image exposes the hidden biases in the training data of generative AI tools. We ask: how are STEM disciplines represented by these tools? Are the learning situations and teaching methods they propose inclusive and aligned with contemporary research?
- The third example deals with copyright and authorship. We present a song originally by Sia, but sung in Rihanna’s voice using AI. This opens up questions about creative ownership and data ethics, especially when AI tools are trained on copyrighted material without consent. In education, this raises concerns not only about academic integrity but also about data use, privacy, and responsibility.
- Finally, the fourth example highlights the environmental impact of AI technologies. Using a real news release from Nvidia about their new processors, we reflect on the enormous energy demands of training large AI models. This leads us to ask: how do we reconcile the use of AI with our responsibility to foster sustainable STEM education?
These four examples are not just technical curiosities; they are ethical dilemmas that invite us to reflect on how AI is changing not only what we teach in STEM, but also how and why we teach it. For each example, we propose reflection questions that can help educators examine their own practices and assumptions.
But the paper doesn’t stop there. After presenting the dilemmas, we also propose a set of core values that should guide our engagement with AI in education: equity, responsibility, and transparency.
- Equity means actively recognising and addressing the biases built into AI systems, and making sure that these technologies do not reproduce or amplify existing inequalities.
- Responsibility is about clarifying the roles and accountability of everyone involved, not only educators and students, but also the companies that develop AI tools.
- Transparency calls for greater clarity about how AI systems work and how they process our data. This includes ensuring that both teachers and students have control over what data is collected, how it is used, and for what purposes.
We believe these values are essential for building an educational system that does not blindly adopt new technologies, but instead evaluates them critically and ethically. In the final section of the paper, we return to the core questions of STEM education – what we teach, who teaches, and how we teach – and argue that AI must be integrated in ways that are consistent with socioconstructivist learning theories and current challenges in science education.
For example, we reflect on how AI is changing scientific practices and how this should be reflected in school science. We also raise concerns about how some AI tools claim to personalise learning, yet often do so by simplifying complex human interactions into data-driven models. This way of thinking risks reducing education to a purely cognitive process, ignoring its social, emotional, and cultural dimensions.
Moreover, we highlight some of the deeper challenges facing STEM education today, such as the underrepresentation of certain social groups, gender gaps in science careers, and rising levels of anxiety and mental health issues among students. These problems will not be solved by AI. In fact, if we are not careful, they could be made worse.
If you are interested, you can find the full article here: link.

