

What are we doing?

We’re exploring the implications of artificial intelligence (AI) for public service media (PSM), drawing out the big questions AI and data-intensive technologies raise alongside the practical and conceptual challenges they pose.
We aim to construct a cutting-edge research agenda driven by state-of-the-art academic insights and real industry challenges.


We started in November 2021 by running six multidisciplinary workshops that brought scholars and industry representatives into dialogue to scope out the pressing challenges AI presents for public service media. We focused on the following topics:


Law & Regulation
Natural Language Processing & Speech/Voice
Value(s) & Ethics
Sustainability & Inequality
Work & Labour



Why focus on AI and PSM?

AI and data-driven systems are changing the way news and media are produced and consumed, with implications for how citizens communicate and how democratic societies function. Public service media (PSM) provide a fundamental information infrastructure, but face challenges in innovating technologically in a responsible and value-aligned way while maintaining their legitimacy and capacity to meet societal expectations.


[Image: Media engineers at work. Photo by ThisisEngineering RAEng]

We’ve found that practitioners at the coalface often have limited opportunity to stand back and reflect critically on the nature of changing technologies and the impacts they’re having. Meanwhile, scholars are often one step removed from transformations in industry and the experiences of expert practitioners. We want to bring them together to explore these issues from an interdisciplinary perspective, recognising that socio-technical problems require collaboration across varied fields within science and technology, social sciences, arts and humanities.

Some guiding questions


Can AI and data-driven innovation further public service priorities and values in journalism and media? If so, how?

Do core PSM values misalign with the values that inform the development of AI systems?

Is it productive to mobilise and/or re-imagine public service values?

  • How might this contribute to the development and application of responsible, ethical, and trusted AI?

Can such ‘public service AI’ contribute to value creation in the UK and beyond? How might this be fostered?

What barriers are there to applying responsible and trustworthy AI in PSM (e.g. cost, capacity, trust)?

  • Should some applications be used at all?
  • What safeguards and governance mechanisms are needed?

What kinds of systems might be used by PSM?

What role might Natural Language Processing play in PSM – what prototypes might be developed and tested to demonstrate new forms of public value?


Our Aims

– Explore & Scope –

What is the current context?
What are the key challenges?

– Clarify & Prioritise –

What is most important and why?
What areas of research are needed or overlooked?
What expertise is needed to understand and tackle these issues?
Who are the key stakeholders? Partners/collaborators?

– Build Research Agenda –

Which combination of research questions, approaches, and collaborations should make the cut for a funding bid?
What tools, methods, and partnerships are needed?



Our project links to three of the University of Edinburgh’s priority research themes. We connect with:

  • Culture & Creative Economies
  • Living & Working Digitally
  • Societal & Planetary Sustainability

This work was funded by the Scottish Funding Council, via the Bayes Centre at the University of Edinburgh as part of the Data-Driven Innovation initiative and the Edinburgh Futures Institute (EFI).