AI is increasingly part of making and distributing news. Organisations such as public service media (PSM) are implementing AI-driven production tools and decision aids that, among other things, profile audiences, personalise content, and automate tasks.
In this workshop, we asked:
What are the pressing professional, technical, ethical, normative and organisational questions raised by AI in journalism?
How is AI impacting PSM journalistic work (practices, processes, routines, and conventions)?
If we agree that a more structural transformation of journalism is taking place – what needs researching in relation to PSM?
Media companies gather and produce large amounts of video, audio and textual material, including news in different languages as well as scripts and subtitles. Natural Language Processing (NLP) techniques are being used to help them deal with this abundance of data, for example to extract semantic data, classify content, and analyse sentiment. Machine translation and text-to-speech/speech-to-text systems underpin production tools and are driving new forms of delivery, such as voice interfaces, with the aim of personalising content and attracting new generations of audiences.
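To make these techniques concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library. It is illustrative only: the subtitle lines are invented, and the pipelines fall back to default models that merely stand in for whatever a PSM would actually evaluate and deploy.

from transformers import pipeline

# Invented subtitle lines standing in for a PSM archive
subtitles = [
    "The government announced new funding for rural broadband today.",
    "Critics say the plan does not go far enough.",
]

# Content classification without task-specific training (zero-shot)
classifier = pipeline("zero-shot-classification")
topics = ["politics", "technology", "sport", "culture"]
for line in subtitles:
    result = classifier(line, candidate_labels=topics)
    print(line, "->", result["labels"][0])

# Sentiment analysis over the same material
sentiment = pipeline("sentiment-analysis")
print(sentiment(subtitles))

# Machine translation, as used in multilingual production tools
translator = pipeline("translation_en_to_de")
print(translator(subtitles[0]))

In practice each of these tasks would involve carefully chosen and evaluated models; the point here is only how little glue code now separates an archive of text from semantic labels, sentiment scores and translations – which is exactly why the questions below are pressing.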
In this workshop, we asked:
How is NLP being used in PSM – and how might it be used?
What areas are ripe for exploration and development to deliver new forms of public value?
What combination of disciplines and skillsets are needed for responsible research and innovation in this field?
We identified one overarching theme and four sub-themes:
Data and AI are increasingly central to public service media (PSM) ambitions and strategies for the future, but there are serious concerns around the development, use and societal impacts of AI.
Ethical principles for AI now abound, but how such ethics are enacted in practice varies widely across contexts, and PSM must align such guidance with their own mission- and value-driven frameworks. This workshop focused on the related issues of value and values – asking how AI and data-driven systems can contribute to the creation of public value, and how public service values can be (re-)articulated, translated into computational form, and embedded in socio-technical systems.
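As a toy illustration of what translating a value into computational form can look like, the Python sketch below (all data hypothetical) re-ranks recommended news items so that no single topic dominates the top of the list – one crude operationalisation of the diversity value. Real systems would need far richer, contested definitions, which is precisely what makes this a research question.

from collections import Counter

# Hypothetical candidates: (headline, topic, relevance score)
candidates = [
    ("Election results announced", "politics", 0.95),
    ("Parliament debates budget", "politics", 0.90),
    ("New exhibition opens", "culture", 0.80),
    ("Cup final preview", "sport", 0.75),
    ("Minister resigns", "politics", 0.70),
]

def diversity_rerank(items, max_per_topic=1):
    # Greedy pass: keep items in relevance order, but defer a topic
    # once it has filled its quota - one simple encoding of 'diversity'.
    ranked = sorted(items, key=lambda item: item[2], reverse=True)
    chosen, deferred, counts = [], [], Counter()
    for item in ranked:
        topic = item[1]
        if counts[topic] < max_per_topic:
            chosen.append(item)
            counts[topic] += 1
        else:
            deferred.append(item)
    return chosen + deferred  # over-quota items drop down the list

for headline, topic, score in diversity_rerank(candidates):
    print(f"{score:.2f}  [{topic}]  {headline}")

Even this trivial example forces design decisions – what counts as a topic, what the quota should be, how diversity trades off against relevance – showing why public service values cannot simply be 'plugged in' to such systems.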
In this workshop, we explored:
How might AI be used to help PSM deliver public value?
How might this be achieved while upholding or strengthening public service values (e.g. impartiality, independence, objectivity, fairness, accuracy, universality, diversity, accountability)?
Is there a need to re-imagine/re-articulate these values to ensure any benefits of AI are harnessed and shared, and risks minimised?
What research and development is needed to answer these (and other) crucial questions?
In recent years, attention has turned to the environmental, social and economic costs of designing, developing, and using AI across various industries.
Equality and inclusion have often been an afterthought in AI design and development, leading to new and exacerbated inequalities and harms in both digital and offline realms. The trend is for AI models to become larger and more expensive, which has implications both for how accessible they are to PSM of varying sizes and budgets, and for their carbon footprint. PSM need to be asking questions about how they can use AI for sustainability, and about the sustainability of AI itself.
We focused on scoping out what kinds of questions PSM need to be asking about their use of AI and data-intensive systems. These included:
Do PSM need to re-imagine ways of working (e.g. mechanisms of governance) to create fairer and more sustainable futures for people and the planet?
How might PSM ensure oversight of their AI-related supply chains with regard to environmental sustainability and fair working practices?
What are best practice examples from which PSM can learn – and what initiatives currently exist within PSM?
Which AI applications or approaches may be most appropriate/beneficial and which may exacerbate inequalities and harms?
Is there a role for PSM to use AI to better editorially cover these issues? Or might this create contradictions with stated sustainability and equality goals?
What research is needed to advance credible AI sustainability and equality strategies?
The impact of AI on people’s working lives and labour has become a pressing topic. Automation of production, while promising to take on the ‘heavy lifting’ and perform tasks at previously unachievable scale, threatens to displace or replace workers, or to degrade their roles.
Forms of invisible/hidden labour involved in AI (e.g. outsourced data labelling) often remain overlooked. As such, AI may lead to a more unequal future of work. We wanted to explore what specific challenges these issues raise for PSM, and whether/how PSM should play a role in ensuring ‘good work’ for all amid such rapid technological change.
In this workshop, we asked:
What key challenges do PSM face when preparing for further automation?
How are jobs evolving with AI and automation, and which areas may suffer or grow? Who gets displaced?
How can PSM ensure standards throughout the supply chain? What forms of hidden labour are involved in AI – and what are the implications for PSM?
What role are unions playing in responding to and shaping the future of work in relation to AI? What new forms of organisation and action exist or are needed?
Might PSM need to enable workers to better understand, reconfigure, resist, and generally have more agency in relation to AI in their working lives?