Category: News

  • Developing Meaningful AI Literacy in Higher Education

    The rapid development of Artificial Intelligence (AI) has created both excitement and apprehension across global academic landscapes. This web article explores the transition from technical experimentation to digital fluency, drawing insights from the blended courses and real-classroom implementations of AI scenarios that were recently developed as part of Work Package 4 of the INFINITE (https://infinite-erasmus.eu/) Erasmus+ project. By moving beyond the initial “hype”, institutions can develop deep, sustainable AI literacy among both faculty and students. 

    The Power of Scenario-Based Learning 

    One of the most effective ways to integrate AI into Higher Education (HE) is through scenario-based pedagogy. This approach is grounded in the pedagogical tradition that emphasises constructivism and experiential learning, as theorised by authors such as Jean Piaget and Lev Vygotsky (Della Volpe, 2024). 

    According to Piaget, learning is an active process occurring through the interaction between the individual and their environment. Similarly, Vygotsky highlights the importance of social context, arguing that knowledge is acquired through dialogue and collaboration. Scenario-based learning applies these principles by creating environments that require active participation, placing students in complex situations where they must solve problems and make decisions. (Della Volpe, 2024) 

    Hence, rather than teaching AI as a standalone technical subject, embedding it within these real-world academic or professional challenges allows learners to: 

    • Move beyond technical experimentation: In line with Piaget’s “active process”, participants transition from simply “playing” with tools to making reflective, informed decisions based on specific goals. 
    • Contextualise AI use: By applying AI to specific tasks, such as creating digital presentations or conducting research, the technology becomes a relevant partner in the learning process, facilitating the “interaction with the environment”. 
    • Reduce barriers to adoption: Using “discipline-neutral” scenarios that are easily adaptable helps faculty quickly customise AI activities for their specific subjects. This enhances the collaborative environment Vygotsky advocated, without requiring a deep computer science or IT background. 

    The Ethics-First Approach 

    A critical pillar of developing long-term AI readiness is the systematic embedding of ethical reflection. Rather than being treated as a separate, theoretical topic, ethics should be integrated into AI-related activities. This includes: 

    • Critical Evaluation: Developing the ability to assess AI-generated outputs for accuracy, reliability, and potential algorithmic bias. 
    • Academic Integrity: Establishing clear guidance to help students understand the boundaries between “AI-assisted” work and “AI-generated” work. 
    • Responsible Data Practices: Promoting an awareness of data privacy and intellectual property. 

    Building Long-Term Capacity 

    Evidence from the INFINITE project implementations suggests that a blended approach—combining self-paced online learning with face-to-face collaborative application—creates the strongest foundation for AI readiness. While asynchronous courses provide the necessary theoretical framework, in-class sessions offer the pedagogical depth needed for peer discussion and real-time troubleshooting. 

    To sustain this growth, HE institutions must view AI literacy not as a one-time training session, but as a continuous journey of development. As technology evolves, they must ensure that the academic community remains both technically skilled and critically aware. 

    References 

    Della Volpe, V. (2024). Scenario-Based Learning: An Inclusive Methodology. Journal of Research & Method in Education, 14(6), 1–5. 

  • What We Learned from Bringing AI into the Classroom in Greece

    Key insights from the INFINITE WP4 implementation at the University of the Aegean

    How do we move beyond abstract discussions about Artificial Intelligence in higher education and support responsible, meaningful, and ethical use of AI in everyday teaching and learning? This question was at the heart of the Greek implementation of the INFINITE project, carried out at the University of the Aegean.

    Over the course of 2025, academics and students engaged with AI not as a shortcut or replacement for learning, but as a tool for reflection, critical thinking, and pedagogical innovation. What emerged were not only new skills, but also valuable lessons about what works—and what needs careful attention—when AI enters the higher education classroom.

    AI training works best when pedagogy comes first

    One of the strongest insights from the Greek implementation is that AI capacity building is most effective when it is pedagogically framed. Both academics and students participated in blended courses built around the INFINITE learning scenarios, which emphasise intentional use, ethical awareness, and “human-in-the-loop” approaches.

    Rather than focusing on mastering specific tools, participants explored why, when, and how AI might support learning. This shift in focus proved crucial. Post-course feedback shows increased confidence not only in recognising AI applications, but also in evaluating their limitations and risks. Importantly, participants did not become uncritical adopters of AI; instead, they developed more reflective and selective attitudes toward its use.

    Real classrooms reveal real challenges—and real learning

    Two undergraduate classroom implementations provided a powerful reality check. When AI tools were integrated into courses on literature and history education, students initially tended to trust AI-generated content too easily. Hallucinations, oversimplified language, biased perspectives, and factual inaccuracies quickly surfaced.

    Rather than treating these issues as failures, instructors used them as learning opportunities. By applying the INFINITE Visualised Framework and AI Readiness Checklist, students were guided to question outputs, verify information, and revise AI-generated material using their own disciplinary knowledge. This process transformed moments of uncertainty into deep learning experiences.

    A key lesson here: AI literacy grows strongest where friction exists. Encountering the limits of AI helped students sharpen critical thinking, ethical judgement, and disciplinary awareness—skills that are central to higher education and future teaching professions.

    Ethical awareness resonates with both students and academics

    Across courses and classroom activities, ethical considerations consistently stood out as one of the most meaningful aspects of the experience. Participants reported the greatest learning gains in areas such as:

    • recognising bias and inaccuracies

    • understanding authorship and academic integrity

    • evaluating when AI use is appropriate—and when it is not

    Notably, even after the courses, both academics and students remained cautious about using AI in assessment and high-stakes academic work. This nuance is significant: the goal was never to promote unrestricted AI use, but to cultivate informed, responsible decision-making.

    Using multiple AI tools builds digital resilience

    Another important insight concerns the value of working with more than one AI system. By comparing outputs from tools such as ChatGPT, DeepSeek, and Gemini, participants quickly realised that AI tools are neither neutral nor interchangeable.

    This comparative approach helped demystify AI and supported the development of what instructors described as digital resilience: the ability to assess outputs critically, adapt prompts, and avoid overreliance on any single system. For students—especially pre-service teachers—this understanding is vital for navigating an evolving digital landscape.

    Lessons for the future

    The Greek WP4 experience offers several clear lessons for future AI initiatives in higher education:

    • Critical AI literacy must remain central. Technical skills alone are insufficient without ethical reflection and verification practices.

    • Discipline-specific examples matter. Humanities and social sciences benefit greatly from tailored scenarios and prompts.

    • Assessment needs rethinking. Hybrid human–AI work requires transparent criteria and redesigned evaluation methods.

    • Flexibility supports participation. Blended and asynchronous formats help academics and students engage meaningfully despite workload pressures.

    Above all, the experience shows that AI training can act as a catalyst for broader pedagogical reflection. Academics reconsidered teaching and assessment practices, while students reflected on their future professional responsibilities as educators.

    Moving forward with intention

    By meeting—and in many cases exceeding—its participation and engagement targets, the University of the Aegean’s implementation of WP4 demonstrates how the INFINITE project can translate European priorities into grounded, classroom-level change.

    The key takeaway is clear: responsible AI integration is not about doing more with technology, but about thinking better with it. When supported by thoughtful pedagogy, ethical frameworks, and reflective practice, AI can become a powerful tool for learning—not by replacing human judgement, but by strengthening it.

  • All that glitters is not gold: The harms of uncritical use of AI tools in education

    If you were to walk around campus and interview students about their experience with artificial intelligence, it probably would not take long to find someone who has used AI tools for coursework before. As AI usage spreads rapidly through university classrooms, so do the controversies surrounding it. 

    In June 2025, a group of academics from diverse fields, including computing science, sociology, law, philosophy, cognitive science, and artificial intelligence, published an open letter urging all universities and universities of applied sciences in the Netherlands to “stop the uncritical adoption of AI technologies in academia” (Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia, n.d.). By now, more than 1,600 professionals, lecturers, and students from around the world have signed the letter, which has also been translated into other languages. This reflects broad agreement, across different positions in academia and society, that AI technologies, in their current use, harm the quality and reputation of work done by staff and students in higher education institutions. 

    Some of the authors elaborate their stance in an opinion paper that explains why AI technologies should not be used and glorified without appropriate reflection in (higher) education (Guest et al., 2025). Essentially, using certain kinds of AI systems in higher education contradicts many of academia’s values, such as diversity, openness, critical reflection, and sustainability. In the paper, the authors examine the challenges arising from the vague use of the term ‘artificial intelligence’. They argue that, as a buzzword, ‘AI’ often lacks a precise definition, which can generate hype and make it difficult for users to evaluate these tools objectively. To address this, the authors explain the most common AI terms and add links for readers to investigate further on their own. The abundance of references, which includes scientific papers, news articles, and blog posts, is worth a look, as the article serves as a good starting point for building a critical perspective on AI technology in education. 

    The article criticises AI companies’ marketing strategies for suggesting that AI tools have abilities they do not actually have. One example is the claim that large language models can read or write: there is no robot crafting a text the way a human would; the output is simply the most likely prediction, produced by largely opaque algorithms. Special caution is needed with profit-based AI technologies such as ChatGPT: OpenAI, the company behind it, does not publicly provide any source code, which strongly conflicts with fundamental academic principles such as transparency.

    The chatbot ‘ChatGPT’, designed by the company OpenAI, is especially popular amongst students, though it brings massive risks of deskilling.

      From: https://www.pexels.com/photo/close-up-of-a-person-holding-a-smartphone-displaying-chatgpt-16461434/ 

    Users do not know how the output of this tool, often perceived as a kind of universal remedy, is actually computed. The commercial field of artificial intelligence repeatedly goes through hype cycles but keeps “over-promising and under-delivering” (Guest et al., 2025, p. 10), as the authors put it. Similarly, the term ‘artificial intelligence’ suggests that human intelligence can easily be replicated artificially, which vastly simplifies the cognitive processes commonly understood as contributing to ‘intelligence’. More generally, the article reminds us that the concept of intelligence is historically problematic because of its racist, sexist, classist, and ableist background. AI easily falls into the trap of reproducing those exact issues and is therefore especially “harmful to minoritised and vulnerable groups” (Guest et al., 2025, p. 4). Contrary to what is sometimes believed, artificial intelligence will not bring more justice and equality. When people use AI tools regularly, this grants tech companies considerable social power and may lead to data colonialism, and the perception that a large language model is in any way neutral or objective because it is a machine ultimately leads to dehumanisation. Additionally, AI is in many cases harmful to the environment: data centres, for example, consume large amounts of land, water, and energy to keep these tools available. 

    Approaching the conflict between AI use in higher education and academic values, the paper urges reflection on the relationship between society and specific technologies in a more general sense. To stimulate this train of thought, the authors offer a helpful analogy: one could compare AI use to driving in traffic. If a regular person is speeding, this poses a risk to everyone around them. If, however, a paramedic drives very fast to transport a patient to hospital, this is not considered unethical or wrong, given that the paramedic is specifically trained to drive safely at higher speeds. Similarly, there is a moral difference between trained experts using AI in scientific practice and laypersons using AI in their daily lives. This analogy helps explain why it is relevant for lecturers in higher education to carefully and responsibly train their students to use AI tools appropriately. 

    Lecturers should remind their students of the original purpose of higher education, which is more than getting a degree

    From: https://www.pexels.com/photo/a-class-having-a-recitation-8199166/ 

    Rather than mindlessly reproducing the marketed talking points of AI companies, teachers in higher education should carefully consider which pedagogical goals they are pursuing with their classes and whether AI can meet these. Does the AI tool of choice really add significant value to the learning process? According to the paper, successful education is based on mutual trust between educators and students, and to prevent students from cheating, e.g., letting AI tools generate an essay and presenting it as their own work, teachers should remind students of the general purposes of education. The question of copyright, in particular, is a central issue of AI use at universities: not only does the regular use of LLMs normalise appropriating others’ work as one’s own, it also devalues the actual work of students and staff who abstained from AI. Following this line of argument, it reduces academics from workers to customers. Writing a good prompt does not require the same level of skill as writing a good paper, and it is possible for educators to teach AI literacy without encouraging the use of the AI tools discussed. 

    According to the authors, the increasing prevalence of AI use in education will likely lead to a decline in literacy and, in a broader sense, to the deskilling of students, who will also become dependent on the technology industry. The latter, however, could also be argued for other kinds of technology, such as writing programs, digital learning environments, or reference managers, most of which could still be considered harmless. Handling AI usage well in the classroom remains a challenge, since merely raising awareness of its risks does not take them away. In the long term, even talking about AI critically might reinforce a certain normalisation of the topic. 

    Applying the five core principles of the Netherlands Code of Conduct for Research Integrity to AI usage, the paper clearly lays out the tensions. Honesty demands that researchers disclose when AI has been used and refrain from making unsupported claims about its capabilities. Scrupulousness requires using only AI tools whose functions are well specified, validated, and relevant to the field, and being able to justify why the technologies are used. Transparency requires that the AI tools used be open source and computationally reproducible, which disqualifies most of the large-language-model chatbots of major tech companies. Independence from such AI firms can be difficult for researchers to maintain without bias or conflicts of interest. Finally, responsibility obliges academics to avoid any AI tools that harm people, animals, or the environment, or that violate legal guidelines. In sum, the paper shows how most current AI technologies struggle to meet even the basic ethical and methodological standards of academic research. 

    Students themselves often recognise the risks of AI or even prefer outright bans. The argument is straightforward: to truly acquire a skill, you must practice it. The paper highlights this through examples such as learning basic arithmetic: when it is taught in school, pupils are usually not allowed to use a calculator, so that they go through the effort of acquiring the relevant skills themselves. Similarly, it would be advisable to ban AI systems in higher education so that students can properly learn what it means to work scientifically. Unfortunately, the current academic system already pressures individuals to take unethical shortcuts, and the spread of AI accelerates the problem. Yet the spread of AI is not inevitable, despite how it is often presented. While reversing course becomes more difficult over time, it is not impossible: Dutch schools, for example, have successfully reintroduced phone bans, with measurable benefits for student focus and learning outcomes (Kohnstamm Instituut, 2025). Ultimately, AI systems cannot really replace the depth and quality of human craft and thinking. The challenge for academia is not just to resist the hype, but to actively reclaim the values and practices that make education meaningful. 

    References 

    • Guest, O., Suarez, M., Müller, B., Edwin, V. M., Arnoud, O. G. B., Ronald, D. H., Andrea, R. E., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & Iris, V. R. (2025). Against the uncritical adoption of “AI” technologies in academia. Zenodo (CERN European Organization for Nuclear Research). https://doi.org/10.5281/zenodo.17065099 

    The University of Leiden has organised a seminar on the same topic; the recording is available at: https://www.universiteitleiden.nl/en/events/2026/02/against-the-uncritical-adoption-of-ai-technologies-in-academia 

  • Rethinking Distance Higher Education with AI: Opportunities, Challenges, and Student Readiness

    Introduction

    As distance learning cements its place in higher education, Artificial Intelligence (AI) is emerging as a game-changer. From adaptive learning paths to real-time feedback and intelligent tutoring systems, AI technologies promise to transform how we teach and learn remotely. But are higher education institutions—and their students—ready to harness this potential responsibly?

    This article explores the expanding role of AI in distance higher education, its benefits and risks, and how current student practices reflect a broader need for institutional support, as evidenced by recent research conducted at the Hellenic Open University.

    The Promise of AI in Distance Learning

    AI offers a powerful toolkit for distance education, especially in areas where traditional teaching methods struggle to meet the needs of remote learners. Some of the most promising applications include:

    • Personalized learning environments that adapt to individual pace and learning styles;

    • Automated feedback systems that provide immediate responses to assignments or quizzes;

    • Chatbots and virtual assistants that offer 24/7 academic support;

    • Predictive analytics that identify students at risk of disengagement or dropout.

    These tools help bridge the gap created by physical distance, offering a more flexible and responsive learning experience. In asynchronous or self-paced programs—common in distance higher education—AI can act as a digital learning companion when human interaction is limited.

    A Reality Check: What Students Are Really Doing

    Despite the growing availability of AI tools, many students still struggle to integrate them meaningfully into their studies. A recent study by Kostas and Manousou (2025) at the Hellenic Open University surveyed 373 postgraduate distance learners across two academic years. The findings revealed that:

    • Most students were aware of AI tools but lacked the confidence or knowledge to use them effectively;

    • AI usage was sporadic and superficial, often limited to general writing or grammar tools;

    • Students expressed concerns about reliability, academic integrity, and ethical ambiguity in AI-generated content;

    • A significant number pointed to a lack of institutional training or guidance as a barrier to responsible use.

    This highlights a key challenge: technological availability does not equal readiness. For AI to truly enhance distance learning, students need structured support, not just access to tools.

    Barriers to Adoption: Not Just Technical

    Why aren’t more students fully embracing AI in distance learning?

    • Digital literacy gaps: Knowing how to use AI critically is different from knowing it exists.

    • Ethical concerns: Issues like plagiarism, transparency, and data privacy cause hesitation.

    • Limited institutional frameworks: Many universities have yet to provide clear guidelines or training on appropriate AI use.

    • Risk of overreliance: Students worry about losing their critical thinking and autonomy when AI becomes a shortcut.

    These concerns underline the need for holistic AI integration—one that goes beyond tools and focuses on skills, values, and policies.

    What Institutions Can Do: A Call for Action

    For distance education to benefit fully from AI, higher education institutions must step up by:

    1. Providing AI literacy training for both students and faculty;

    2. Developing ethical frameworks and usage policies that are transparent and inclusive;

    3. Integrating AI into course design in pedagogically sound and human-centered ways;

    4. Encouraging reflective use of AI—not just functional, but critical engagement.

    Projects like INFINITE are already leading the way by offering open educational resources, practical guides, and institutional tools that promote ethical, informed, and inclusive use of AI in higher education.

    Conclusion

    AI has the potential to make distance higher education more engaging, accessible, and effective—but only if implemented thoughtfully and ethically. The student perspective from the HOU study is clear: curiosity is high, but support is lacking. To close this gap, institutions must move beyond just offering AI tools—they must foster the skills and culture needed to use them responsibly.

    Further Reading

    Kostas, A., & Manousou, E. (2025). Benefits and challenges of AI in higher distance education: Students’ perceptions and practices in Hellenic Open University (HOU). Advances in Mobile Learning Educational Research, 5(2). https://doi.org/10.25082/AMLER.2025.02.011

  • Artificial Intelligence in STEM Education: Why we need a more critical perspective

    Over the last months, we have shared several updates about our work at the University of Groningen within the INFINITE project, particularly in connection to critical AI literacy and capacity-building for higher education (HE) instructors. We began by introducing the AI Literacy Vision document, a collaboratively developed text that outlined a shared understanding of what it means to be critically literate in AI contexts. Later, we described how we translated that vision into action through the design and implementation of professional development courses, such as the one focused on exploring the limits and possibilities of course design with AI.

    In this new article, we would like to take a step further and focus more directly on the ethical dimension of AI in education. This is the main topic of a recent paper I (Francisco Castillo) co-authored with Miquel Pérez, titled Una mirada crítica en l’alfabetització en Intel·ligència Artificial per l’educació STEM (A critical look at Artificial Intelligence literacy for STEM education), published in the journal Ciències. In this paper, we reflect on how the use of generative AI tools in STEM classrooms – particularly large language models like ChatGPT – raises new ethical and pedagogical dilemmas that we cannot ignore.

    Through four concrete examples, we aim to show that these tools are not neutral, and that their integration in education requires much more than technical understanding. We argue that educators and students alike need a critical AI literacy, one that is not only based on knowing what AI can do, but also on asking important questions about how it works, who builds it, with what values, and with what consequences.

    Each of the four examples presents a common situation where AI is used in educational contexts:

    • In the first example, we ask AI to generate a short academic paragraph with references. While the result looks convincing, it includes made-up citations and fabricated data, revealing how easily these tools can distort reality and create a false sense of credibility. The dilemma here is: how do we teach students to distinguish between scientific validation and plausible text generation?

    • The second example focuses on image generation. We ask the AI to generate a picture of a typical classroom in Barcelona, and the result shows a stereotypical Western classroom with white students and a female teacher at the blackboard. This image exposes the hidden biases in the training data of generative AI tools. We ask: how are STEM disciplines represented by these tools? Are the learning situations and teaching methods they propose inclusive and aligned with contemporary research?

    • The third example deals with copyright and authorship. We present a song originally by Sia, but sung in Rihanna’s voice using AI. This opens up questions about creative ownership and data ethics, especially when AI tools are trained on copyrighted material without consent. In education, this raises concerns not only about academic integrity but also about data use, privacy, and responsibility.

    • Finally, the fourth example highlights the environmental impact of AI technologies. Using a real news release from Nvidia about their new processors, we reflect on the enormous energy demands of training large AI models. This leads us to ask: how do we reconcile the use of AI with our responsibility to foster sustainable STEM education?

    These four examples are not just technical curiosities; they are ethical dilemmas that invite us to reflect on how AI is changing not only what we teach in STEM, but also how and why we teach it. For each example, we propose reflection questions that can help educators examine their own practices and assumptions.

    But the paper doesn’t stop there. After presenting the dilemmas, we also propose a set of core values that should guide our engagement with AI in education: equity, responsibility, and transparency.

    • Equity means actively recognising and addressing the biases built into AI systems, and making sure that these technologies do not reproduce or amplify existing inequalities.

    • Responsibility is about clarifying the roles and accountability of everyone involved, not only educators and students, but also the companies that develop AI tools.

    • Transparency calls for greater clarity about how AI systems work and how they process our data. This includes ensuring that both teachers and students have control over what data is collected, how it is used, and for what purposes.

    We believe these values are essential for building an educational system that does not blindly adopt new technologies, but instead evaluates them critically and ethically. In the final section of the paper, we return to the core questions of STEM education – what we teach, who teaches, and how we teach – and argue that AI must be integrated in ways that are consistent with socioconstructivist learning theories and current challenges in science education.

    For example, we reflect on how AI is changing scientific practices and how this should be reflected in school science. We also raise concerns about how some AI tools claim to personalise learning, but often do so by simplifying complex human interactions into data-driven models. This kind of thinking risks reducing education to a cognitive process, ignoring its social, emotional, and cultural dimensions.

    Moreover, we highlight some of the deeper challenges facing STEM education today, such as underrepresentation of certain social groups, gender gaps in science careers, and increasing levels of anxiety and mental health issues among students. These problems will not be solved by AI. In fact, if we are not careful, they could be made worse.

  • INFINITE Project Featured at Global Symposium on Sustainability Education

    University College Dublin is pleased to announce that the Erasmus+ INFINITE AI in Higher Education project was presented at the Global Sustainability Education: Innovations in Digital & International Teaching Symposium, held on 1 December 2025 at the Hamburg University of Applied Sciences, Germany. The symposium, which invites contributions through a competitive peer-review selection process, brought together leading educators, researchers and innovators whose work advances sustainability within digital teaching practices worldwide.

    Representing INFINITE on behalf of University College Dublin, Dr. Stergiani Kostopoulou delivered a presentation on the project’s approach to developing sustainable, ethical, and critically informed uses of artificial intelligence in higher education. Her talk highlighted how INFINITE addresses a rapidly evolving educational landscape in which AI technologies must be integrated not only effectively, but responsibly and equitably. The presentation illustrated how the project’s methodology supports academic communities in navigating this transformation through structured pedagogy, scenario-based learning, and comprehensive capacity-building initiatives.

    Figure 1. Dr. Stergiani Kostopoulou from UCD presenting INFINITE’s work at the Symposium on Global Sustainability Education (1 December 2025)

    The symposium provided an international platform to showcase how INFINITE’s work contributes to broader conversations on quality education, digital inclusion, and sustainable innovation. Delivered in collaboration with Dr. Levent Görgü and Professor Eleni Mangina, the project’s activities at UCD emphasise the importance of empowering both educators and students to engage with AI tools in informed, reflective and ethical ways. By focusing on critical AI literacy as a foundational competency, INFINITE positions itself as a leading European initiative committed to sustainable digital transformation in higher education.

    The event also underscored the value of INFINITE’s transnational collaboration across Ireland, the Netherlands, Greece and Cyprus. Presenting at a global symposium not only reflects the academic quality and societal relevance of the project’s work but also affirms the growing interest in approaches that ensure AI is used thoughtfully, transparently and in line with principles of equity and sustainability.

    Speaking after the event, Dr. Kostopoulou noted that the symposium created a vital space for exchanging perspectives on how digital innovation can support long-term educational resilience. She emphasised that INFINITE’s contribution demonstrates how evidence-based teaching frameworks and open educational resources can strengthen institutional strategies, support educators in adapting to new technologies, and help students develop informed, future-ready AI competencies.

    The INFINITE consortium continues to advance its research, training resources and digital tools as part of its mission to support sustainable AI adoption across higher education systems in Europe. The project’s next phases will focus on extending outreach, strengthening international partnerships, and making its open resources widely accessible to educators and learners.

    [1] Information about the Global Sustainability Education: Innovations in Digital & International Teaching symposium can be found at: https://www.haw-hamburg.de/detail/news/news/show/global-sustainability-education-innovations-in-digital-international-teaching/

  • Ethical Issues in AI-Powered Education: Lessons from the Pythia Learning Enhancement System


    1. Introduction

    Figure 1. AI is entering classrooms: students and teachers are supported by digital learning tools. (Image generated using OpenAI’s ChatGPT-5)

    Artificial Intelligence (AI) is transforming classrooms, offering personalized learning, real-time feedback, and adaptive pathways for students. But as schools adopt systems like the Pythia Learning Enhancement System, a recent case study highlights that these innovations also carry profound ethical implications [1].

    • Background: What is Pythia?

    The Pythia system was developed to adapt teaching content dynamically to each learner’s progress. By collecting and analyzing performance data, it suggests strategies tailored to individual needs. The promise is clear: higher engagement, improved outcomes, and more inclusive learning environments.

    However, the study stresses that such benefits cannot be separated from the responsibilities of using AI in education. Pythia serves as an example of both opportunity and caution, demonstrating how technology can improve learning while exposing risks if ethical safeguards are overlooked.

    • Key Ethical Challenges

    As illustrated in Figure 2, the study identifies five central ethical concerns: Fairness, Transparency, Privacy, Autonomy, and Accountability [2].

    Figure 2. The five main ethical challenges identified in the Pythia study: Fairness, Transparency, Privacy, Autonomy, and Accountability (Image generated using OpenAI’s ChatGPT-5)

    • Fairness & Bias

    AI systems are only as fair as the data that trains them. Skewed datasets may reinforce inequalities, leaving already disadvantaged students further behind [2].

    • Transparency & Explainability

    For many teachers and parents, how Pythia reaches its conclusions remains a mystery. Without clear explanations, trust in AI recommendations weakens.

    • Privacy & Data Protection

    Pythia continuously collects learner data to optimize results. This raises concerns about how securely sensitive information is stored and whether students’ autonomy is respected.

    • Teacher & Student Autonomy

    While AI offers helpful guidance, overreliance risks undermining teachers’ professional judgment and limiting students’ ability to think critically and independently.

    • Accountability & Oversight

    When an AI system makes a mistake—misclassifying a student’s ability or recommending harmful interventions—who is ultimately responsible?

    • Recommendations from the Study

    The authors argue that AI should be a supporting tool, not a replacement for human educators. To ensure ethical deployment, they propose:

    • Embedding human oversight in all AI-driven decisions.
    • Establishing clear frameworks for fairness, transparency, and accountability [3].
    • Strengthening data governance policies to protect learners’ privacy.
    • Maintaining open communication with students, parents, and educators about how systems like Pythia function.

    • Implications for the Future

    The case study emphasizes that the conversation around AI in education must go beyond efficiency and outcomes. Ethical principles—fairness, dignity, and trust—must guide development and implementation.

    If designed and used responsibly, AI can help democratize access to quality education. But without safeguards, it risks eroding trust in both technology and educational institutions. International bodies such as the OECD stress that AI in education brings both opportunities and risks [4]. 

    • Conclusion

    The balance between technological innovation and ethical responsibility remains at the heart of the debate (see Figure 3).

    Figure 3. Balancing technological innovation with ethical responsibility is key to the future of AI in education. (Image generated using OpenAI’s ChatGPT-5)

    As the study notes, “AI in education is not just a technical challenge—it is a moral one.” Moving forward, collaboration between policymakers, technologists, and educators will be crucial to ensure that systems like Pythia support learners while upholding the values that education is built upon.

    References

      [1] S. Röhrl et al., “Ethical Considerations of AI in Education: A Case Study Based on Pythia Learning Enhancement System,” in IEEE Access, vol. 13, pp. 115331-115353, 2025, doi: 10.1109/ACCESS.2025.3583975. https://ieeexplore.ieee.org/document/11053800  [Accessed 15/09/2025].

      [2] UNESCO (2021). AI and Education: Guidance for Policymakers. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000376709  [Accessed 15/09/2025].

      [3] European Commission (2019). Ethics Guidelines for Trustworthy AI. Brussels: European Union. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai [Accessed 15/09/2025].

      [4] OECD (2021). AI in Education: Challenges and Opportunities. Paris: OECD Publishing. https://www.oecd.org/en/about/directorates/directorate-for-education-and-skills.html [Accessed 15/09/2025].

  • INFINITE Project at the International Conference on Open & Distance Learning 2025


      The INFINITE Project was presented at the 13th International Conference on Open & Distance Learning 2025, held in Patras from 5 to 7 December 2025. The conference, titled “Open and Distance Education: 21st Century Skills and the Challenge of Artificial Intelligence”, brought together researchers and practitioners to explore the evolving role of AI in education.

      During the event, the paper “A Transnational Study on Artificial Intelligence for Professional and Pedagogical Practices in Higher Education – Insights from the INFINITE Project” was presented, showcasing key findings from the project’s ongoing research.

      By contributing to ICODL 2025, the INFINITE project continues to engage with the wider academic community, fostering dialogue on the development of AI literacy and the responsible use of emerging technologies in higher education.

  • INFINITE at ESERA 2025: Sharing our vision for ethical and responsible AI in higher education


      The INFINITE project recently reached an important milestone by sharing its work at one of the most prestigious international conferences in the field of science education: ESERA 2025.

      For those unfamiliar with it, ESERA stands for the European Science Education Research Association, an organisation established in 1995 with the following aims:

      • Enhance the range and quality of research and research training in science education across Europe.

      • Provide a forum for collaboration among science education researchers from different countries.

      • Represent the professional interests of researchers in this field.

      • Relate scientific research to the policies and practices of science education.

      • Build connections between European researchers and international communities worldwide.

      Since its creation, ESERA has hosted 16 conferences in cities such as Leeds, Barcelona, Lyon, and Bologna. In 2025, marking the 30th anniversary of the association, the conference was held in Copenhagen, Denmark. This year’s theme was particularly meaningful for our project:

      “We live in an era of transitions and transformations, both digitally and environmentally. But what is society, and by extension science education, transitioning into? How do we design, implement, and evaluate our transformative efforts?”

      This focus on digital transformation and societal change resonated with the goals of the INFINITE project. Motivated by this shared vision, the consortium submitted a proposal to present the results of Work Package 2, which explores the ethical and responsible integration of AI in higher education.

      Our proposal was accepted, and the University of Groningen had the honour of presenting the work on behalf of the entire consortium. We chose the interactive poster format, which gave us the opportunity to engage with other researchers in a dynamic and personal way. The session began with a brief round of one-minute introductions from each presenter, helping participants decide which posters they wanted to explore further.

      What followed was a very enriching experience. We had many interesting conversations with researchers from different countries, who not only asked about the project’s results but also shared their own concerns, reflections, and good practices related to AI and higher education. Beyond the data, we found common ground in our shared challenges, particularly in areas like ethics, digital transformation, and teacher training. Some participants stayed even after the session ended to continue the discussion, while others left written feedback or expressed interest in future collaborations. This confirmed that the topic of ethical and responsible AI use is not only timely but increasingly relevant for the science education research community.

      And what exactly did we present?

      Our poster, titled “Fostering responsible and ethical AI integration in Higher Education: Key findings from an Erasmus+ project”, showcased the results of a transnational study carried out across five European countries. The aim of this study, which forms the foundation of Work Package 2, was to better understand how AI is currently used in higher education and what challenges arise when trying to integrate it into professional and pedagogical practices.

      To do this, we combined a systematic literature review with a needs analysis survey completed by 259 participants, including university teachers and students. The results show that while there is a strong interest in the potential of AI – especially for personalising learning, supporting assessment, and improving administrative processes – there are also important risks that must be taken into account. These include concerns about privacy, bias, academic integrity, lack of transparency, and the environmental and social impact of large-scale AI tools.

      The survey also revealed that although many educators are familiar with AI tools, they often feel unprepared to use them in a responsible and pedagogically meaningful way. This is particularly true when it comes to areas such as assessment, collaboration, or protecting student data. Overall, the results point to the urgent need for practical resources and targeted training that help educators move from awareness to confident and ethical implementation.

      It was a pleasure to represent the INFINITE consortium at this international conference and contribute to the wider conversation about the future of AI in higher education. But this is only the beginning. The consortium continues to share its work and has already submitted new proposals to other international conferences. We look forward to continuing this important dialogue with colleagues and institutions across Europe and beyond.

      As a result of our participation, the poster and abstract were included in the official Book of Abstracts of ESERA 2025.