By Jen Ross
An important part of this project is bringing young people’s voices, perspectives and imaginations to our exploration of AI futures for education. One way we have done this is through a series of artist-led workshops at three schools – two in Scotland, one in England. These workshops took place across four Thursdays in June, and they have helped us immeasurably in understanding not only some of young people’s current thoughts, hopes and concerns, but also the kinds of engagements they are having with GenAI technologies and what sorts of futures these might indicate.
Our starting point for these workshops was the understanding that responsible AI principles of explainability, privacy and fairness should apply to educational uses of GenAI, but that GenAI use amongst learners, teachers and others is not driven only by explicitly educational purposes. In addition, as Miao and Holmes (2023) and others have explored, current GenAI models do not generate explanations by default, current approaches to training Large Language Models may produce outputs that replicate dominant perspectives, and issues of consent, data rights and ethical data collection remain unresolved in relation to GenAI. We wanted, then, to explore how young people understand, use and create with GenAI, how they respond to ideas about responsible and irresponsible AI, and what futures they can envisage.
Across the three secondary schools, we had the chance to work with young people from a variety of academic backgrounds, with a wide range of ways of seeing, using and communicating about and with GenAI. Working with three different artists also gave each workshop its own distinctive character. The aim was not to achieve consensus, but to explore in a speculative way a wide range of meaning-making approaches, including those that incorporated GenAI tools and those that did not.
We are only at the early stages of analysing the findings from these workshops. Much of our sense-making will come when we put together a creative zine made up of the messages, images and ideas young people wanted to share with educational decision-makers. However, even at this early point, there are some key ideas that we are discussing. Here are three that emerged after the first workshop:
Creativity, imagination and work
Creativity and imagination are at the heart of our methods. These are some of the most contested issues in relation to GenAI, and young people are sensitive to this. The concept of ‘laziness’ came up often in more than one school, with GenAI understood as a ‘lazy’ way to avoid doing one’s own thinking, making and work. Ideas about theft and the nature of GenAI outputs mattered to a number of young people, including one who, because of these concerns, did not want to use the technology at all in the first workshop. ‘Covers’ of music and art were the most common examples of AI-produced items that young people came up with at one school. Questions for us to consider: what does a ‘legitimate’ future for GenAI look like in educational settings that value hard work? What sorts of hard work does GenAI produce in learning settings? In creative subjects?
Mundane and weird – two faces of AI
Young people are savvy about AI, and relatively well-informed about how it works. However, they still experience it as ‘weird, eerie, creepy’, and as potentially powerful in ways that are not fully understood or realised. Issues of surveillance and intrusion were raised – for example, personalised ads from smart speakers. Visual representations of AI were made in two schools, using both paper-based and GenAI methods: these were otherworldly, amorphous, futuristic, but also reflections of the humans who created them. Some comments revealed, even if playfully, less mundane imaginings about AI futures – for example, one participant explained to another that they were writing politely to the GenAI so that it would remember their politeness when it took over. Question for us to consider: what is the place of weirdness in co-creating AI futures for education?
Keep calm… carry on?
Young people’s mood was more introspective than worried, even when discussing dangers or uncertainties around AI futures. Participants considered some big (existential) questions about AI, for example: does it still exist when you close the program? How does it respond to philosophical challenges like the trolley problem? (For the latter, Microsoft Copilot refused to answer.) Many young people were remarkably calm even when speaking of serious potential problems with AI. Participants were interested in subverting the protocols of AI and seeing what it could and could not do. Questions for us to consider: are young people seeing AI futures as inevitable, even dystopian ones? What forms of AI literacy education can meet young people where they are and allow them to explore big questions?
We look forward to sharing more as we analyse our data and create our project zine.
Jen’s collage, using images generated or made by workshop participants at one of the three schools. It attempts to capture something of the savviness of the young people in making use of GenAI in their own ways and for their own purposes, as well as the risks – around surveillance and datafication from GenAI tools not yet built for responsibility and accountability – that they, and we, are not yet able to address. The quote “not fed to ourselves in a silver platter” is from one of the participants, expressing AI futures for creativity and learning that they do not want.