Category: News

  • Translating critical AI literacy into action

    Translating critical AI literacy into action

    In a previous article titled AI vision in Higher Education: Toward a critical AI literacy at the University of Groningen, we discussed the importance of fostering critical AI literacy among both higher education (HE) instructors and students. We also described the process carried out at the Faculty of Science and Engineering (FSE), where a group of 33 participants—including instructors, students, and other faculty members—collaborated to develop what we call the “AI literacy vision document.” This document outlines our understanding of critical AI literacy as the set of competencies needed to evaluate, communicate with, and work alongside AI technologies. In addition, it offers practical guidelines for designing courses and programmes that aim to cultivate these competencies.

    In that same article, we stressed the need to go beyond definitions and strategic vision. If we are to promote critical AI literacy in meaningful ways, we must also create concrete training opportunities. This is precisely the goal we embraced at the Centre for Learning and Teaching (CLT) after completing the AI literacy vision document. It is also directly aligned with the aim of the EU-funded INFINITE project, particularly within Work Package 4: to support HE instructors in developing AI literacy skills through capacity-building courses. 

    In this article, we present one of the courses developed as part of this project, which has already been implemented at our faculty. Titled Exploring limits and possibilities in course design with AI, this course marked an important step toward equipping instructors with the tools and mindset necessary to engage with AI in thoughtful, informed, and critical ways. Rooted in socioconstructivist learning theories and drawing inspiration from challenge-based and inquiry-based pedagogies, the course was designed not as technical training, but as an open space for reflection and dialogue. Its main goal was to invite instructors to explore both the potential and – perhaps more importantly – the limitations of generative AI tools when used in the context of lesson plan design.

    To support this objective, we structured the course into three main sections, each focusing on a specific sub-goal and building progressively on the previous one:

    • Section one – Lesson plan design: Art or engineering process? This opening section invites participants to reflect on how they currently design lesson plans in their own teaching practice and what kinds of knowledge this process requires. The underlying idea is: before using AI tools for lesson planning, it is essential to understand how we design without them, what scientific research says about effective instructional design, and what professional knowledge educators need for designing lessons.

    • Section two – AI in lesson plan design: Myth or reality? Building on the first section, this part of the course shifts the focus to generative AI. It encourages participants to consider how they interact with these tools and, more importantly, how to critically evaluate the quality, relevance, and educational value of the AI-generated content.

    • Section three – What does the literature say about AI-generated lesson plans? The final section explores the risks and limitations involved in using AI for educational design. This discussion is informed by recent academic research, which helps participants move beyond personal impressions and engage with evidence-based perspectives.

    Next, we take a closer look at how each part of the course was implemented in practice.

    The first section of the course consists of three tasks, each designed to prompt reflection on personal teaching practice and lay the groundwork for a deeper understanding of what lesson design entails.

    In Task 1, participants are asked to individually design a short lesson plan consisting of 5–7 activities. They are free to choose the topic and target group, based on their own expertise and teaching context. Importantly, they are instructed not to use any digital tools or AI support for this task. Instead, they rely solely on their professional experience, intuition, and creativity.

    Task 2 is also completed individually and focuses on meta-reflection. Participants are asked to think carefully about the process they followed while designing their lesson plan. Specifically, they describe the steps they believe they took (e.g., “First I thought about the topic, then the learning objectives, then the activities…”) and begin to surface the implicit logic behind their design choices.

    Task 3 brings participants together in small groups (3–4 members) to share and discuss their lesson plans and the design processes they followed. Based on these exchanges, each group collaboratively develops what they consider an “ideal” model for lesson plan design: a systematic approach that outlines clear steps and their intended order.

    The rationale behind these tasks mirrors the evolution of the instructional design field itself. Initially, lesson planning was viewed as an intuitive and artistic process, grounded in individual creativity and experience. Over time, however, the field began to adopt a more systematic, evidence-based approach, similar to engineering design. Today, instructional design is increasingly recognised as having a dual nature: it is an art that requires creativity and intuition, but it is also considered an engineering process that uses knowledge from educational science research in a systematic way.

    To close this section, the course instructor facilitates a dialogic discussion in which participants are introduced to recognised models of instructional design, grounded in educational research. These models highlight the elements and types of knowledge necessary for designing effective lessons. By the end of this section, participants have developed a clearer, shared understanding of the foundations of instructional design, knowledge that is essential for critically engaging with AI-generated lesson plans in the next stages of the course.

    The second section of the course focuses on using generative AI tools in practice, while also encouraging participants to think critically about them. It consists of two tasks designed to help participants learn how to interact with AI systems, but more importantly, to strengthen their ability to critically evaluate the outputs these tools generate.

    In Task 4, participants return to the lesson plan they designed in section one. This time, they are asked to try to “improve” it, keeping in mind that what counts as an improvement can vary depending on the context and personal judgment. To do so, they select a Large Language Model (LLM) of their choice and follow three steps: (1) formulate a prompt they believe will help improve their lesson plan, (2) enter the prompt into the LLM, and (3) critically analyse the AI-generated output. For this task, participants are provided with a scaffolding table to help organise their analysis, but no predefined evaluation criteria are shared. Instead, they are encouraged to rely on their own professional judgment to assess the strengths and weaknesses of the output.
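    The three-step workflow above can be sketched in a few lines of code, purely as an illustration; the prompt wording, the table columns, and the example entries below are hypothetical, not materials from the course:

```python
# Illustrative sketch of the Task 4 workflow: (1) draft a prompt intended to
# improve an existing lesson plan, and (3) record a critical analysis of the
# AI-generated output in a scaffolding table. Step (2), submitting the prompt
# to an LLM, is deliberately left out; no real AI service is called here.
# All wording, column names, and entries are hypothetical examples.

def build_prompt(topic: str, audience: str) -> str:
    """Step 1: formulate a prompt the instructor believes will improve the plan."""
    return (
        f"Improve my lesson plan on '{topic}' for {audience}. "
        "Suggest 5-7 activities and state the learning objective of each."
    )

# Step 3: a scaffolding table for organising the analysis. Note that no
# predefined evaluation criteria are imposed; each row is one observation
# made using the instructor's own professional judgment.
scaffolding_table = {
    "columns": ["element of the output", "strength", "weakness"],
    "rows": [],
}

def add_observation(element: str, strength: str, weakness: str) -> None:
    """Append one critical observation about the AI-generated output."""
    scaffolding_table["rows"].append([element, strength, weakness])

# Hypothetical usage:
prompt = build_prompt("enzyme kinetics", "second-year biology students")
add_observation(
    "suggested activities",
    "varied formats (discussion, lab, quiz)",
    "no link to the group's prior knowledge",
)
```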

    After completing the task, we have a group discussion where participants share their reflections. Together, we talk about the possible benefits and limitations of using generative AI in lesson design. We also try to identify the criteria—often not clearly stated—that they used to judge the AI-generated content. This discussion helps participants better understand their own teaching values and how they make decisions when designing lessons.

    Task 5 builds on this experience with a more guided approach. Participants are introduced to commonly used frameworks for prompt design, many of which are recommended by AI companies. They are then asked to repeat the process: generating a new prompt, submitting it to the AI system, and critically analysing the output. This time, however, they are provided with two supports: the same scaffolding table from Task 4, and an additional document containing guiding questions for analysis. These questions are directly connected to the instructional design elements introduced in section one. By this stage, participants are expected not only to use AI tools more strategically, but also to evaluate their use in a more systematic and pedagogically grounded manner.

    To conclude, the third section introduces an ethical dilemma associated with the use of generative AI tools: biases. This section follows the analysis of AI-generated outputs carried out in the previous tasks, where participants discussed both the strengths and limitations of these tools. Building on that discussion, the aim is now to raise awareness about how biases are embedded in AI systems and why this matters for education. We begin with a brief explanation of how Large Language Models work, placing special emphasis on how biases are generated during their development and training. We then connect these issues to the educational context, introducing the concept of pedagogical bias. This refers to the way in which AI-generated content can reflect and reproduce specific pedagogical assumptions, values, or perspectives. To deepen this reflection, we present a selection of recent research studies that explore the presence of pedagogical biases in AI tools. These examples help participants recognise that the use of AI in education is not neutral and that these technologies also come with significant limitations.

    As mentioned earlier, this capacity-building course is one step towards translating the idea of critical AI literacy into practice, specifically in the context of lesson plan design. It is worth noting that this is just one of several efforts we are currently undertaking. Within the CLT, we are also developing courses focused on the ethical dilemmas surrounding AI systems, academic integrity, and assessment. We encourage readers to explore these initiatives, adapt them to their own institutional settings, and reflect on what it means to engage with AI in responsible, ethical, informed, and pedagogically meaningful ways.

  • ChatGPT in Higher Education: Greek Students Speak, and INFINITE Listens

    ChatGPT in Higher Education: Greek Students Speak, and INFINITE Listens

    Large Language Models like ChatGPT are fast becoming part of the university experience—and not just for tech-savvy students. These tools can help with everything from generating essay ideas to making sense of complex topics. But as generative AI becomes more common in classrooms, questions around academic integrity, critical thinking, and equity are popping up everywhere.

    What Greek Students Are Actually Doing

    A recent survey involving 515 students from Greek universities* gives us valuable insights. A bit over a third said they feel fairly comfortable with the idea of artificial intelligence, but less than one in five use ChatGPT regularly for academic tasks. Those who do use it report significant benefits. Around three-quarters say it makes searching for information faster, and roughly two-thirds believe it has improved their writing by offering useful feedback.

    Still, there are concerns. Nearly seven in ten students worry that relying on ChatGPT too much could weaken their critical-thinking skills. About six out of ten are also worried about plagiarism and the reliability of AI-generated content. Interestingly, usage varies: undergraduates lean more on ChatGPT for drafting and research compared to postgrads and doctoral candidates, and students with stronger digital skills are more likely to both be familiar with AI concepts and use these tools in meaningful ways. These mixed responses paint a picture of cautious optimism—students see the promise, but they’re also aware of the pitfalls.

    What Students Need

    It turns out that Greek students don’t just appreciate AI tools—they also want a roadmap for using them properly. Feedback from the survey shows strong demand for:

    • Clear institutional policies defining the ethical use of AI and how it fits into academic integrity rules.

    • Structured training sessions and technical support focusing both on ethical issues and practical AI usage.

    • Transparent guidelines around authorship, data handling, and how much AI assistance is acceptable.

    These aren’t extras—they’re essentials that students believe are needed for responsible integration of AI into academic life.

    How INFINITE Bridges the Gap

    This is exactly where the INFINITE Erasmus+ project comes in. INFINITE is conducting comprehensive desk and field research across multiple European countries to understand how AI is being used in universities. From that research, the project has created two major tools:

    • AI Literacy Toolkit: A user-friendly bundle that includes real-world case studies, checklists, and a visual framework to help educators assess and choose AI tools for their teaching practices.

    • AI Digital Hub: A practical online platform offering free AI-driven tools and examples aimed at professional development, teaching, learning, and assessment.

    Thanks to this, Greek students’ calls for guidance and support are being met directly. Institutions adopting INFINITE’s tools can offer students a clear framework for AI use—building both trust and competence. 

    Final Thoughts

    The picture from Greek universities is hopeful: students are curious and see value in ChatGPT, but they’re wary of its potential to undermine learning. INFINITE’s research-backed, user-friendly resources offer a smart answer. By combining empirical student feedback with structured toolkits and digital platforms, universities across Europe can ensure AI becomes a partner in education—not a shortcut that erodes its core values.

    At the end of the day, AI is here to stay—and tools like INFINITE help us use it wisely, keeping critical thinking and academic integrity front and center.

    *Source: Kostas, A., Paraschou, V., Spanos, D., Tzortzoglou, F., & Sofos, A. (2025). AI and ChatGPT in Higher Education: Greek Students’ Perceived Practices, Benefits, and Challenges. Education Sciences, 15(5), 605. https://doi.org/10.3390/educsci15050605

  • New emerging competencies for students and educators in response to AI integration in Higher Education

    New emerging competencies for students and educators in response to AI integration in Higher Education

    The rapid integration of artificial intelligence (AI) into Higher Education (HE) has undoubtedly led to a shift in the competencies needed by both students and educators. Indeed, studies across several countries have revealed the emerging landscape of AI-related skills and knowledge requirements in academic settings (Maznev et al., 2024; Zawacki-Richter et al., 2019).

    Notably, researchers have identified the need for new competencies that bridge the academic skills traditionally predominant in education settings with AI-specific capabilities (Bai, 2024; Scarci et al., 2024). Studies highlight that the development of critical thinking, ethical awareness, and adaptive learning abilities is of vital importance for successful AI integration (Zouhaier, 2023). Technical AI skills and digital literacy were also identified as significant competencies (Bai, 2024; Scarci et al., 2024).

    Among technical and digital competencies, AI literacy has emerged as essential, encompassing the ability to understand, use, and critically engage with AI tools, as well as proficiency in digital content creation and data interpretation (Maznev et al., 2024). Interestingly, as analysed by Maznev et al. (2024), although students inherently possess advanced digital skills, they require explicit training in AI-specific skills. Even though students are comfortable using digital tools frequently in their everyday lives, they often lack the deeper AI-specific skills needed to use these technologies effectively in academic settings. There is therefore a growing need for formal, structured training to ensure students are not merely passive users of AI, but informed and capable participants in a digital future.

    Similar to students’ needs, educators should develop new pedagogical competencies, including, among others, the facilitation of personalised learning experiences. To this end, continuous professional development and support mechanisms that help educators integrate AI effectively into their teaching practices are of paramount importance. By enhancing these skills and understanding how AI can support them, educators can avoid potential pitfalls and address practical issues including data protection, ethics, and privacy.

    Successful implementation of these competencies requires robust institutional support and clear policy frameworks. At the same time, critical thinking and ethical understanding emerge as core competencies, particularly in evaluating AI-generated outputs and maintaining academic integrity.

    References

    Bai, X. (2024). The role and challenges of artificial intelligence in information technology education. Pacific International Journal, 7(1), 86-92.

    Maznev, P., Stützer, C., & Gaaw, S. (2024). AI in higher education: Booster or stumbling block for developing digital competence?. Zeitschrift Für Hochschulentwicklung, 19(1), 109-126.

    Scarci, A. S., Teixeira, T. M., & Dal Forno, L. F. (2024). Artificial Intelligence and its relations with digital competencies and Education.

    Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27.

    Zouhaier, S. (2023). The impact of artificial intelligence on higher education: An empirical study. European Journal of Educational Sciences, 10(1), 17–33.

  • AI in Higher Education: Insights from the INFINITE Research in Cyprus

    AI in Higher Education: Insights from the INFINITE Research in Cyprus

    AI in Higher Education: Preparing Students for the Jobs of Tomorrow

    As artificial intelligence (AI) continues to reshape industries worldwide, higher education institutions (HEIs) face an essential question: how can they prepare students not only to understand AI, but also to use it responsibly in their future careers? The INFINITE project emphasises helping students develop AI literacy and resilience, ensuring that they enter the job market as confident professionals who know how to work with AI rather than be replaced or misled by it.

    AI is already transforming multiple fields of work. In marketing, students will need to know how to use AI-driven tools for consumer analysis, targeted campaigns, and content generation, while still relying on human creativity and strategy to connect with audiences. In business and finance, AI is central to data analysis, risk assessment, and predictive modelling, but decision-making, negotiation, and leadership remain distinctly human skills. In education, AI can support personalised learning and automated assessment, but teachers’ empathy, adaptability, and mentorship cannot be replicated. Similarly, in healthcare, AI can assist with diagnostics and data management, but ethical care and patient trust depend on human judgment.

    These examples show that AI will be a powerful partner in the workplace, but success will depend on whether students can balance technical know-how with human-centred skills.

    To thrive in the AI-driven job market, students must become AI literate. This means understanding what AI can and cannot do, questioning the outputs it produces, and using it in ways that enhance—not replace—their own learning and professional growth. HEIs play a crucial role in this journey by embedding AI literacy across different programs and faculties.

    Some practical steps for students include:

    • Using AI for research, brainstorming, and feedback, while critically evaluating results.
    • Developing skills that AI cannot replicate, such as critical thinking, creativity, and emotional intelligence.
    • Staying informed about the ethical implications of AI, especially concerning data privacy, fairness, and bias.
    • Treating AI as a toolbox for productivity and problem-solving, not as a shortcut that undermines learning.

    Employers increasingly expect graduates to be comfortable with AI-powered systems. In marketing and communications, companies already seek professionals who can manage AI-based campaigns. In engineering and technology, AI literacy is becoming a requirement for innovation and design. Even in law and public administration, AI tools are used for document analysis, compliance monitoring, and citizen services.

    By acquiring these competencies during their studies, students will not only increase their employability but also gain the ability to shape how AI is used in their professions. Rather than fearing automation, they will be prepared to lead with responsibility, ensuring that AI adoption in the workplace remains fair, transparent, and human-focused.

    For higher education, the challenge is twofold: to equip students with a technical understanding of AI tools and to instil the values that support responsible use. The INFINITE project addresses this by promoting AI literacy, offering toolkits, training, and reflective practices that encourage students to see AI as a companion in their professional journey.

    AI will undoubtedly be part of every student’s career — from marketing to business, education to healthcare, law to engineering. The question is not whether they will use it, but how. By preparing today’s students to use AI wisely, HEIs can ensure that the next generation of professionals enter the workforce ready to collaborate with technology for the greater good.

    For more information on AI in HE and the INFINITE project, stay tuned for our upcoming news and results on how to integrate AI in academic and teaching practices effectively!

  • Bridging the Gap: Rethinking Teacher Training in the Age of AI

    Bridging the Gap: Rethinking Teacher Training in the Age of AI

    Introduction

    As Artificial Intelligence (AI) continues to reshape education, the role of educators is undergoing a fundamental transformation. AI tools promise personalization, automation, and deeper insights into learning—but they also introduce ethical, pedagogical, and professional challenges that must be addressed through robust teacher preparation. A new study from the University of the Aegean sheds light on an urgent issue: current Teacher Professional Development (TPD) programs are significantly biased toward technical skills, while largely overlooking the ethical and human-centered competencies outlined in UNESCO’s AI Competency Framework for Teachers (AI CFT).

    The Study: Evaluating TPD Through the Lens of AI CFT

    In a systematic review of 35 international TPD initiatives, researchers analyzed how well existing programs align with UNESCO’s comprehensive framework, which defines 15 AI-related competencies across five key areas: human-centered mindset, AI ethics, foundations and applications, pedagogy, and professional development. These are further divided into three progression levels—acquire, deepen, and create—to reflect the evolving needs of educators.
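    The framework’s grid structure can be enumerated in a few lines of Python; the five areas and three levels come from the study as summarised above, while treating every area-level pair as one of the 15 competencies is our simplifying assumption, for illustration only:

```python
# Illustrative sketch of the UNESCO AI CFT grid as described in the study:
# five key areas crossed with three progression levels. Counting every
# area-level pair as one competency (5 x 3 = 15) is a simplifying assumption.
from itertools import product

areas = [
    "human-centered mindset",
    "AI ethics",
    "foundations and applications",
    "pedagogy",
    "professional development",
]
levels = ["acquire", "deepen", "create"]

# Each entry pairs one area with one progression level.
competency_grid = list(product(areas, levels))
```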

    The findings are revealing. While technical competencies such as AI foundations and applications were well represented (with over 57% of studies addressing them), core ethical principles and competencies promoting a human-centered approach were largely neglected. For example, only 8.6% of studies engaged with the human-centered mindset, and none addressed the highest-level ethical competencies, such as co-creating AI rules in educational settings.

    What’s Missing in Today’s TPD?

    The study identified a systemic imbalance: current TPD efforts prioritize immediate classroom utility over deeper professional reflection and ethical awareness. While this reflects a practical urgency to upskill educators, it fails to prepare them for the broader responsibilities they hold in an AI-mediated learning ecosystem.

    Barriers include the novelty and complexity of AI in education, limited institutional resources, and market-driven priorities that favor quick, technical solutions over long-term ethical readiness. Additionally, more advanced competencies like evaluating algorithmic bias or customizing AI tools for inclusion are rarely addressed.

    Why Ethical Readiness Matters

    With AI systems increasingly involved in student assessment, content creation, and decision-making, ethical literacy is non-negotiable. Teachers must be equipped to question algorithmic transparency, ensure data privacy, and uphold values of equity and inclusivity. Without these competencies, AI in education risks amplifying existing inequalities.

    Toward Balanced TPD Programs

    The authors argue that the AI CFT offers a roadmap for more balanced and responsible teacher training. Future TPD programs must go beyond basic digital skills and focus on cultivating critical understanding, ethical reflection, and the creative use of AI in diverse contexts. Structured initiatives should support teachers in not just acquiring tools, but understanding their societal implications and contributing to policy-making.

    Recommendations

    1. Mandate AI Ethics: Embed ethical considerations into the core of TPD curricula.

    2. Diversify Competency Focus: Ensure all five areas of AI CFT are addressed, not just technical and pedagogical ones.

    3. Promote Progression: Support teachers in advancing from basic understanding to critical innovation (Acquire → Deepen → Create).

    4. Invest in Infrastructure: Provide the technological and institutional support necessary for deep integration.

    5. Support Multistakeholder Collaboration: Involve educators, policymakers, researchers, and developers in co-designing training content.

    Conclusion

    The future of education is being written in algorithms—but it must be guided by educators who understand, shape, and question the technologies they use. Teacher professional development is the cornerstone of this vision. Aligning TPD with frameworks like UNESCO’s AI CFT ensures that educators are not only tech-savvy, but ethically grounded and professionally empowered. The time to rebalance is now.

    Further Reading: UNESCO AI Competency Framework for Teachers: https://www.unesco.org/en/articles/ai-competency-framework-teachers

    Source:

    Tsioukas, K., Kostas, A., & Tzortzoglou, F. (2025). From Technical Proficiency to Ethical Readiness: Mapping Teacher Professional Development Programs with UNESCO’s AI Competency Framework for Teachers. Education Sciences. (Forthcoming).

  • Artificial Intelligence and Literacy: How we learn, create and understand in the Digital Age

    Artificial Intelligence and Literacy: How we learn, create and understand in the Digital Age

    Artificial intelligence (AI) has entered education in a big way and changed the way children (and adults) create, read and understand information. Platforms such as chatbots, translators, or “intelligent” image-generating machines have already become part of our everyday lives. But what does this mean for the way we learn? And what does it mean to be “literate” in the age of AI?

    Today, education experts do not talk about literacy only as reading and writing. Instead, they see it as something much more complex: a social, cultural, and digital practice that involves the body, emotions, technology, and interaction with others.

    This means that when a child makes a digital comic using an AI tool, they are not just “using” technology. They are participating in a process of creating meaning – that is, they are trying to express themselves, to make sense of the world, to connect with others.

    AI tools don’t work passively. Instead, they generate ideas, influence our choices, and suggest words or images. This means that AI actively participates in the creation of meaning. This is why many researchers now speak of AI as a “co-author”—not just an assistant.

    This has implications:

    • Whose voice is it?
    • Whose ideas come first?
    • What data does the machine “learn” from, and from whom?

    Not all AI systems are the same, and they do not always operate “fairly”. Many studies have shown that algorithms may reinforce uniformity, privilege particular linguistic or cultural realities, or even indirectly reinforce stereotypes. This is why it is important to acquire not only “skills in using AI”, but also a critical understanding:

    • Who made this tool?
    • How does it work?
    • What does it exclude or ignore?

    Today, even in pre-school, children are coming into contact with AI. Games, educational apps, and even digital assistants like Alexa are already used in classrooms. Researchers say children can learn basic ideas about AI – for example, what it means for “a program to learn from examples”.

    The question is how to design these experiences in a creative, safe, and critical way and how teachers will be supported to understand and integrate such technologies without fear or confusion.

    AI opens up new possibilities for learning, but it also brings challenges. Rather than rejecting or accepting it uncritically, we need a balanced approach:

    • Understanding how it works,
    • Discussing its implications,
    • And creating new forms of literacy where humans and machines work together responsibly.

    References

    Bhatt, I. (2023). Literacies and the digital university: Critical perspectives and contemporary practices. Routledge.

    Bhatt, I., & de Roock, R. (2013). Capturing the sociomateriality of digital literacy events. Research in Learning Technology, 21(0). https://doi.org/10.3402/rlt.v21i0.21281

    Burnett, C., & Merchant, G. (2020). Undoing the digital: Sociomaterialism and literacy education. Routledge.

    Burnett, C., Merchant, G., Simpson, A., & Walsh, M. (2014). Making New Literacies Research in Classrooms: Digital Literacies and Children’s Learning. English Teaching: Practice & Critique, 13(1), 5–20.

    Burnett, C., & Merchant, G. (2016). Literacy-as-event: Accounting for relationality in literacy research. Journal of Literacy Research, 48(3), 297–317. https://doi.org/10.1177/1086296X16665383

    Hawley, R. (2022). Towards reflective entanglements in the use of AI in education. In Postdigital Science and Education, 4, 362–381. https://doi.org/10.1007/s42438-021-00283-2

    Jandrić, P., & Ford, D. R. (Eds.). (2022). Postdigital ecopedagogies: Genealogies, contradictions, and possibilities. Springer.

    Knox, J. (2019). What does the ‘postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice. Postdigital Science and Education, 1, 357–370. https://doi.org/10.1007/s42438-019-00045-y

    Lankshear, C., & Knobel, M. (2011). New literacies: Everyday practices and social learning (3rd ed.). Open University Press.

    Nichols, S. (2022). Old and new literacies: Assembling meaning in a timespace of change. In J. Rowsell & K. Pahl (Eds.), The Routledge handbook of literacy studies (pp. 203–213). Routledge.

    Selwyn, N. (2022). Should robots replace teachers? AI and the future of education. Polity Press.

    Bhatt, I., de Roock, R., & Adams, J. (2024). Literacy and AI: Postdigital perspectives on co-authorship and educational authorship. Postdigital Science and Education, 6(1), 21–40.

  • The INFINITE Digital AI Repository: Open Access to the Future of Ethical AI in Higher Education

    The INFINITE Digital AI Repository: Open Access to the Future of Ethical AI in Higher Education

    As artificial intelligence (AI) continues to reshape teaching, learning, and assessment, the INFINITE Erasmus+ Project takes a pioneering step with the launch of the INFINITE Digital AI Repository — a transformative open-access platform hosted at https://infiniteoer.ucd.ie. Developed by University College Dublin (UCD) in collaboration with partners from across Europe, the repository is an innovative response to the increasing demand for trustworthy, practical, and ethical AI integration in higher education.

    A One-Stop Hub for AI Literacy and Pedagogical Practice

    The INFINITE AI Repository is more than just a content bank. It is a carefully curated, community-driven knowledge base filled with open educational resources (OERs) designed to empower educators, students, and institutions.

    “The goal is to demystify AI and give academics and learners actionable strategies and tools to embed AI in ways that are ethical, inclusive, and impactful,” said Prof. Eleni Mangina.

    Resources range from AI tool tutorials and guidelines to curated collections of MOOCs, PDFs, online training tools, and more. The platform emphasizes reusability, offering materials in modular formats under open licenses and in accessible file types — making integration into any learning environment seamless.

    Choosing the Right Tool: Why Omeka Classic?

    UCD conducted a thorough analysis of available repository platforms, including DSpace and Fedora, before selecting Omeka Classic for its simplicity, visual flexibility, and user-friendly interface. Designed for scholars, librarians, and educators, Omeka offers a powerful combination of rich metadata support, item tagging, and exhibit building.

    “We needed a platform that could house diverse digital formats while allowing users to browse, search, and contribute with ease,” said Dr. Levent Görgü.

    Collections That Speak to Modern AI-Driven Classrooms

    The INFINITE repository currently houses six themed collections:

    • Online AI Tools
    • PDFs
    • Guidelines
    • Books
    • Online Training
    • MOOCs

    With over 22 searchable tags and a growing database of categorized content, users can quickly locate materials tailored to their instructional or learning needs. From enhancing AI literacy to showcasing real-life examples of AI applications in education, the repository fosters both awareness and skill-building.

    Community Engagement and Usability First

    Following a live demonstration at the INFINITE project meeting in February 2025, partner feedback played a critical role in refining the platform. Enhancements included links to the INFINITE Digital Hub and Project Website, the addition of HTTPS security, and usability improvements.

    User feedback highlighted features like:

    • The AI Literacy Toolkit
    • Intuitive search and filtering tools
    • Clear categories and free access to curated tools

    “The practical nature of the tools is what stands out”, commented one participant. “They’re clearly explained, directly applicable, and incredibly useful for my tasks as a teacher.”

    Building a Lasting Legacy for AI in Education

    The INFINITE AI Repository will remain active for at least five years after the project’s end, ensuring long-term access to its growing content base. As part of the broader INFINITE Erasmus+ mission, the repository is set to become a key reference point for educators and institutions navigating AI’s role in higher education.

    Whether you’re a university lecturer exploring ethical AI assessment tools or a student looking to build your AI skills, https://infiniteoer.ucd.ie is your gateway to high-quality, purpose-driven AI education resources.

    Explore, Learn, Share

    Visit the INFINITE Digital AI Repository today and become part of a growing network of educators and learners shaping the future of AI in education.

    🔗 https://infiniteoer.ucd.ie

  • Designing with learners in mind: Exploring methods to elicit prior understandings of AI


    As discussed in our previous article (AI Vision in Higher Education: Toward a Critical AI Literacy at the University of Groningen), generative AI tools are increasingly present in our daily lives, shaping how we consume content and interact with technology. From personalized playlists on Spotify to video recommendations on YouTube, AI-driven systems are constantly working behind the scenes, subtly influencing our choices and behaviors.

    This growing integration of AI into everyday life is also making its way into education. More and more students—from primary school to university—are incorporating AI tools into their learning journeys. These tools are used not only to explore new topics but also to complete assignments. Popular examples include ChatGPT (OpenAI) and Gemini (Google), but also lesser-known tools such as Elicit and Gamma, which support research and content creation, respectively.

    In light of this expanding presence, there is now a broad international consensus on the urgent need to foster AI literacy (UNESCO, 2022). This includes developing educational programs that equip learners of all ages with the skills to use AI tools ethically, critically, and responsibly. However, before we can design effective and meaningful AI curricula—regardless of the educational level—it is essential to understand how learners make sense of artificial intelligence. Gaining insight into learners’ perceptions, expectations, and experiences with AI is crucial for informing future curriculum development and ensuring that educational approaches are relevant, inclusive, and impactful.

    A recent study by Dagmar Mercedes Heeg and Lucy Avraamidou, researchers at the Centre for Learning and Teaching at the University of Groningen, offers a compelling example of how to approach this foundational step. Their 2024 article, Young Children’s Understanding of AI, published in Education and Information Technologies, investigates how children conceptualize AI, not only from a technical standpoint but also through the lens of their everyday lives and social interactions. What makes this study particularly valuable is its emphasis on the socio-cultural dimensions of AI—framing it not just as a tool, but as a force that shapes and is shaped by human experience.

    Although the research focuses on a younger audience than the one addressed in our Erasmus+ INFINITE project, its methodology has clear potential for adaptation. The approach used to surface children’s prior conceptions of AI can inspire similar exploratory phases in professional development programs for higher education teachers and students. By starting with participants’ existing understandings, these programs can be designed to be more context-aware, relevant, and impactful.

    To structure their investigation, the authors posed three key questions:

    • How do young children understand AI?

    • How do young children understand AI applications in their daily lives?

    • What (if any) ethical risks do young children identify in relation to AI?

    To answer these questions, the study followed a qualitative case study design involving 18 primary school children between the ages of 11 and 12. Data were collected through five online group interviews, each lasting between 20 and 30 minutes. These semi-structured interviews combined open- and closed-ended questions to allow for both guided discussion and spontaneous responses. For the analysis, the authors employed a thematic approach, aiming to identify which AI literacy constructs were most prominent in the children’s narratives. 

    Regarding their general understanding of AI, many children described it as a type of technology designed by humans that has the ability to “think” or “decide” independently. Some referred to it as an “algorithm” or an “intelligent machine” capable of learning and performing tasks on its own. These interpretations reveal how children’s notions of AI combine both factual understanding and intuitive reasoning, shaped by their interactions with digital tools.

    When reflecting on where they encounter AI in daily life, the children named a wide range of examples—most notably algorithms behind platforms like YouTube, TikTok, and Netflix. They also mentioned devices such as voice assistants (e.g., Alexa, Google Assistant), robotic vacuum cleaners, lawnmowers, and autonomous vehicles. These references show that AI, for them, is not a distant or futuristic idea—it’s part of the fabric of their daily routines.

    Interestingly, the children also demonstrated a noteworthy level of critical thinking when asked about the risks and ethical dimensions of AI. For instance, some expressed discomfort with how platforms like YouTube use their personal data for financial gain, touching on questions of digital ownership and consent. Others raised concerns about privacy, fearing that their data might be shared or exploited without their knowledge. Bias in AI systems also emerged as a common theme—all groups felt strongly that unfair or discriminatory behavior by AI should be addressed, although most struggled to explain why these biases exist or how they could be fixed.

    Taken together, the findings suggest that children’s understanding of AI is deeply connected to their everyday routines and social contexts. Rather than seeing AI purely as a technical system, they view it as something that can support and shape their actions—highlighting its socio-cultural role. Their knowledge reflects both what AI does and how it affects them—from the personalized content they receive to the data they unknowingly share. 

    So, what can we take away from this research—as members of the INFINITE consortium, but also as educators and researchers committed to developing meaningful AI training? Educational design literature consistently shows that effective curriculum development is not a linear process, but rather a complex and iterative one. Across various instructional design models—whether backward design, design-based research, or others—there is a shared emphasis on starting with learners: understanding their prior knowledge, beliefs, and experiences.

    This foundational step is crucial at all educational levels. If we want to create relevant and impactful AI literacy training for higher education teachers and students, we must first explore what they already know (or think they know) about AI. Just as the study of young learners highlighted the value of surfacing prior conceptions, we should also include this as a starting point in the design of our own programs.

    As a growing number of institutions rush to implement AI-related training and resources, let’s not lose sight of this essential first phase. Only by grounding our work in the real experiences and understandings of our learners can we ensure that AI education is not only technically sound, but also pedagogically meaningful and socially responsible.

  • THE AI DIGITAL LITERACY TOOLKIT


    Ready to harness the power of AI in your teaching and learning? We are excited to announce the release of our AI Digital Literacy Toolkit available in English, Greek, Dutch and Irish!  This free resource empowers Higher Education (HE) faculty, staff, and students worldwide to effectively integrate AI and related technologies into their professional and pedagogical practices.

    Access Toolkit

    What’s Inside the Toolkit?

    This comprehensive toolkit is designed to guide you through the exciting world of AI in education, regardless of your current level of expertise.  It provides:

    • Clear Definitions and Foundational Knowledge: The toolkit provides definitions of key terms and notions related to the use of AI in HE. Understanding these fundamentals is crucial for leveraging AI effectively in your teaching and ensures everyone starts from a solid foundation. We explore the role of these advanced technologies in education, outlining both the exciting possibilities and the potential challenges.
    • Practical Guidance and Best Practices:  The toolkit goes beyond theory, offering practical guidance and best practices for integrating AI into your HE setting.  We provide real-world examples and adaptable strategies that HEIs can easily implement.
    • A Self-Assessment Checklist:  Not sure where to start? Our comprehensive checklist helps HE academics assess their current AI readiness. Identify your strengths and areas for development to create a personalised learning path.
    • A Visual Framework for Tool Selection:  Choosing the right AI tool can be overwhelming. Our visual framework simplifies the process by guiding you through the selection of the most appropriate AI-based tools for your specific professional and pedagogical needs.  

    How Will This Toolkit Benefit You?

    • Stay Ahead of the Curve: Equip yourself with the knowledge and skills necessary to navigate the rapidly evolving landscape of AI in education.
    • Unlock the Potential of AI: Discover how AI can transform your teaching and enhance student learning.

    Learn more about the INFINITE project and our mission to promote digital literacy in higher education: here

  • THE ROLE OF CHATGPT IN EDUCATION: OPPORTUNITIES, CHALLENGES, AND FUTURE PROSPECTS


    The integration of artificial intelligence (AI) into education is no longer a futuristic concept—it is happening now. Among the most prominent AI tools, ChatGPT has emerged as a valuable asset for both students and educators. A recent systematic literature review by Dimeli & Kostas (2025) analysed 50 empirical studies on the use of ChatGPT in school and higher education (HE), offering key insights into its applications, challenges, and impact on learning.

    How is ChatGPT Being Used in Education? 

    The study reveals that ChatGPT has found applications across various educational levels, from preschool to university education. The main ways in which students and educators are using ChatGPT include:

    •  Content Generation for Educators: Educators are leveraging ChatGPT to create quizzes, lesson plans, personalised learning materials, and even design assessment rubrics. This allows educators to save time and focus on student engagement rather than administrative tasks.
    • Personalised Learning for Students: Students use ChatGPT as a study companion to explain complex concepts, generate ideas for assignments, and practice problem-solving. In foreign language learning, for example, ChatGPT acts as a conversational partner, providing real-time feedback on writing and grammar.
    • STEM Applications: In mathematics, chemistry, and computer science, ChatGPT is used to explain problem-solving steps and assist in programming tasks like code debugging. However, it struggles with accurate numerical computations and complex calculations, necessitating additional verification.
    • Academic Writing and Research: Many university students and researchers utilise ChatGPT to enhance their writing skills, refine essays, and even develop structured research methodologies.

    Performance and Impact on Learning

    The findings indicate that ChatGPT can enhance student performance in multiple disciplines, particularly in cognitive development, critical thinking, and motivation. Key areas of positive impact include:

    • Improved cognitive performance: Students using ChatGPT in history, science, and mathematics have shown higher knowledge retention and better problem-solving skills.
    • Enhanced critical thinking skills: In subjects like world religions, physics, and research methodology, ChatGPT has been used in knowledge-building activities, encouraging deeper analysis and evaluation.
    • Boost in student motivation and engagement: Many students report that ChatGPT makes learning more interactive and helps them stay motivated, particularly in subjects that require extensive writing or conceptual understanding.
    • Advancement in AI literacy: Exposure to ChatGPT fosters AI literacy among students, preparing them for an increasingly AI-driven world.

    Challenges and Ethical Concerns

    While the benefits are clear, the study also highlights several limitations and ethical concerns associated with ChatGPT in education:

    • Accuracy Issues: ChatGPT is known to produce inaccurate, outdated, or fabricated information (“hallucinations”), making it unreliable for subjects that require precise factual knowledge.
    • Academic Integrity Risks: The tool raises concerns about plagiarism, over-reliance, and reduced original thinking among students.
    • Lack of Creativity and Emotional Intelligence: Despite its ability to generate coherent text, ChatGPT struggles with creative problem-solving, deep reasoning, and emotional expression, all skills that are crucial in human-centered disciplines.
    • Bias and Ethical Challenges: AI models like ChatGPT can perpetuate biases present in their training data, leading to ethical concerns in education. Additionally, unequal access to AI tools could widen the digital divide between students with different technological resources.

    Future Research and Recommendations

    The study underscores the need for a structured approach to integrating AI in education. Researchers recommend:

    • Training students and educators in AI literacy to ensure responsible and ethical usage.
    • Developing clear guidelines on AI-generated content to protect academic integrity.
    • Enhancing AI tools to minimise biases and improve reliability.
    • Exploring AI applications in special education and underrepresented educational settings, such as primary education and informal learning environments.

    Final Thoughts

    The role of ChatGPT in education is evolving rapidly. While it offers exciting opportunities for personalised learning and teaching support, it also presents challenges that require careful consideration. By embracing AI responsibly, educators and policymakers can maximise its benefits while safeguarding ethical standards and academic integrity.

    AI in education is not a replacement for teachers but a powerful tool to enhance learning experiences. The key lies in balancing innovation with critical oversight—ensuring that AI serves as a supportive partner rather than a disruptive force in the classroom.

    And that’s where our INFINITE project comes into play to empower the HE community for the responsible use of AI in teaching, learning and assessment.

    Our six partners from Belgium, Cyprus, Greece, Ireland and The Netherlands are working together to develop:

    – An AI Literacy Toolkit and AI Digital Hub with a rich repository of resources for HE academics to make responsible use of AI for innovative teaching and assessment.

    – A complementary blended course for HE students to deepen their understanding of the interdisciplinary nature and implications of the use of AI.

    By building the capacity and digital resilience of HE staff and students, our project aims to create a domino effect for the digital transformation of the HE sector and to increase the employability of HE graduates.

    What are your thoughts on the use of AI in general and ChatGPT specifically in education? Join the conversation in the comments!

    Source: Dimeli, M. & Kostas, A. (2025). The Role of ChatGPT in Education: Applications, Challenges: Insights from a Systematic Review. Journal of Information Technology Education: Research, 24. https://doi.org/10.28945/5422