Designing with learners in mind: Exploring methods to elicit prior understandings of AI

As discussed in our previous article (AI Vision in Higher Education: Toward a Critical AI Literacy at the University of Groningen), generative AI tools are increasingly present in our daily lives, shaping how we consume content and interact with technology. From personalized playlists on Spotify to video recommendations on YouTube, AI-driven systems are constantly working behind the scenes, subtly influencing our choices and behaviors.

This growing integration of AI into everyday life is also making its way into education. More and more students—from primary school to university—are incorporating AI tools into their learning journeys. These tools are used not only to explore new topics but also to complete assignments. Popular examples include ChatGPT (OpenAI) and Gemini (Google), as well as lesser-known tools such as Elicit and Gamma, which support research and content creation, respectively.

In light of this expanding presence, there is now a broad international consensus on the urgent need to foster AI literacy (UNESCO, 2022). This includes developing educational programs that equip learners of all ages with the skills to use AI tools ethically, critically, and responsibly. However, before we can design effective and meaningful AI curricula—regardless of the educational level—it is essential to understand how learners make sense of artificial intelligence. Gaining insight into learners’ perceptions, expectations, and experiences with AI is crucial for informing future curriculum development and ensuring that educational approaches are relevant, inclusive, and impactful.

A recent study by Dagmar Mercedes Heeg and Lucy Avraamidou, researchers at the Centre for Learning and Teaching at the University of Groningen, offers a compelling example of how to approach this foundational step. Their 2024 article, “Young Children’s Understanding of AI”, published in Education and Information Technologies, investigates how children conceptualize AI, not only from a technical standpoint but also through the lens of their everyday lives and social interactions. What makes this study particularly valuable is its emphasis on the socio-cultural dimensions of AI—framing it not just as a tool, but as a force that shapes and is shaped by human experience.

Although the research focuses on a younger audience than the one addressed in our Erasmus+ INFINITE project, its methodology has clear potential for adaptation. The approach used to surface children’s prior conceptions of AI can inspire similar exploratory phases in professional development programs for higher education teachers and students. By starting with participants’ existing understandings, these programs can be designed to be more context-aware, relevant, and impactful.

To structure their investigation, the authors posed three key questions:

1. How do young children understand AI?
2. How do young children understand AI applications in their daily lives?
3. What (if any) ethical risks do young children identify in relation to AI?

To answer these questions, the study followed a qualitative case study design involving 18 primary school children aged 11 to 12. Data were collected through five online group interviews, each lasting between 20 and 30 minutes. These semi-structured interviews combined open- and closed-ended questions to allow for both guided discussion and spontaneous responses. For the analysis, the authors employed a thematic approach, aiming to identify which AI literacy constructs were most prominent in the children’s narratives.

Regarding their general understanding of AI, many children described it as a type of technology designed by humans that has the ability to “think” or “decide” independently. Some referred to it as an “algorithm” or an “intelligent machine” capable of learning and performing tasks on its own. These interpretations reveal how children’s notions of AI combine both factual understanding and intuitive reasoning, shaped by their interactions with digital tools.

When reflecting on where they encounter AI in daily life, the children named a wide range of examples—most notably algorithms behind platforms like YouTube, TikTok, and Netflix. They also mentioned devices such as voice assistants (e.g., Alexa, Google Assistant), robotic vacuum cleaners, lawnmowers, and autonomous vehicles. These references show that AI, for them, is not a distant or futuristic idea—it’s part of the fabric of their daily routines.

Interestingly, the children also demonstrated a noteworthy level of critical thinking when asked about the risks and ethical dimensions of AI. For instance, some expressed discomfort with how platforms like YouTube use their personal data for financial gain, touching on questions of digital ownership and consent. Others raised concerns about privacy, fearing that their data might be shared or exploited without their knowledge. Bias in AI systems also emerged as a common theme—all groups felt strongly that unfair or discriminatory behavior by AI should be addressed, although most struggled to explain why these biases exist or how they could be fixed.

Taken together, the findings suggest that children’s understanding of AI is deeply connected to their everyday routines and social contexts. Rather than seeing AI purely as a technical system, they view it as something that can support and shape their actions—highlighting its socio-cultural role. Their knowledge reflects both what AI does and how it affects them—from the personalized content they receive to the data they unknowingly share. 

So, what can we take away from this research—as members of the INFINITE consortium, but also as educators and researchers committed to developing meaningful AI training? Educational design literature consistently shows that effective curriculum development is not a linear process, but rather a complex and iterative one. Across various instructional design models—whether backward design, design-based research, or others—there is a shared emphasis on starting with learners: understanding their prior knowledge, beliefs, and experiences.

This foundational step is crucial at all educational levels. If we want to create relevant and impactful AI literacy training for higher education teachers and students, we must first explore what they already know (or think they know) about AI. Just as the study of young learners highlighted the value of surfacing prior conceptions, we should make that same exploration the starting point in the design of our own programs.

As a growing number of institutions rush to implement AI-related training and resources, let’s not lose sight of this essential first phase. Only by grounding our work in the real experiences and understandings of our learners can we ensure that AI education is not only technically sound, but also pedagogically meaningful and socially responsible.
