Towards embedding responsible AI in the school system: co-creation with young people

Recent advances in Generative Artificial Intelligence (GenAI) have the potential to transform education, from reactive tweaks in assessment practices to fundamental philosophical debates about what we should value in the education of humans in an age of (currently narrow) machine intelligence. Though it is still early, the implications for learning in an age of pervasive GenAI use are significant: issues of accountability, accuracy and inclusion need addressing, and young people (YP) must have a voice in how AI could and should be used in their education.

Responsible AI requires meaningful engagement with stakeholders, including YP, who have the right to be consulted about the systems which affect their lives. This project bridges the divide between principles of explainability, fairness and privacy as they apply to educational AI, and the values, hopes and concerns of YP when faced with emerging technologies whose implications are not yet fully understood. It will produce recommendations for educational policy and visions for educational practice that are grounded in lively, specific and meaningful engagements with YP as key stakeholders in education.

The three aims of this project are to:

  • Develop a picture of what responsible GenAI could look like within secondary school education.
  • Develop and test imaginative, speculative and participatory methods for generating meaningful insights into YP’s perspectives on emerging AI technologies, testing these methods in two distinct educational contexts and providing a strong methodological foundation for a BRAID demonstrator project focusing on YP and education.
  • Produce recommendations for policymakers, educators and technology developers about what YP consider to be important considerations for including GenAI in school learning and assessment, and how GenAI literacy should be fostered.

The project is funded by BRAID and runs from February to August 2024.