The impact of AI on people’s working lives and labour has become a pressing topic. Automation of production, while promising to take on ‘heavy lifting’ and perform tasks at previously unachievable scale, can threaten to displace or replace workers, or degrade their roles.

Forms of invisible/hidden labour involved in AI (e.g. outsourced data labelling) often remain overlooked. As such, AI may lead to a more unequal future of work. We want to explore what specific challenges issues like these raise for public service media and whether/how PSM should play a role in ensuring ‘good work’ for all amid such rapid technological change.

In this workshop, we asked:

  • What key challenges do PSM face when preparing for further automation?
  • How are jobs evolving with AI and automation and which areas may suffer or grow? Who gets displaced?
  • How can PSM ensure standards throughout the supply chain? What forms of hidden labour are involved in AI – and what are the implications for PSM?
  • What role are unions playing in responding to and shaping the future of work in relation to AI? What new forms of organisation and action exist or are needed?
  • Might PSM need to enable workers to better understand, reconfigure, resist, and generally have more agency in relation to AI in their working lives?

We identified three themes:

  1. Including & empowering workers in relation to data, AI & automation

  • Work is needed to understand how we can include PSM workers in the conversation around AI in a way that is fairly remunerated, non-intimidating and inclusive

    • What facilitation skills, approaches, methods are needed to shape these conversations? How might we include workers in (action-focused) research?

    • Move towards data reciprocity: workers having access to data gathered about them, and being equipped to analyse that data and organise by using it. This also relates to forms of ‘sousveillance’, digital worker inquiry and worker data science

    • Data and algorithmic impact assessment, which is on the rise, can also be a route into these conversations

  • Often the narrative around AI focuses on the technology itself, while the role of the worker/user is ignored or treated as secondary.

    • In addition, the speed at which technology is moving means any focus on supporting workers often prioritises dealing with their immediate material circumstances

  • Does explainability have potential to open up conversation?

    • Many different levels of explainability exist. Field work has shown that documentation, even where it exists, is often unhelpful in explaining concepts and uses to a generic user, and that more human, personal interaction is required

    • Does explainability create room for resistance?

    • Is it beneficial to shift the perspective from explaining (e.g. AI or ML) to workers and users asking questions in order to ensure worker voice and consultation?

  • There are questions around whether data-driven technologies are taking on editorial decision-making roles, and a growing appreciation that those affected by technologies need to be involved in shaping them

 

  2. Better understanding the role of AI & its implications for work

  • Automated systems in recruitment, management and surveillance of workers using AI and algorithmic technology are widespread

    • There is a need to explore this in the PSM context – are PSM following best practice in this arena? What safeguards are in place?

  • There is also a need to better understand the current state of practical AI use more broadly in PSM working practices

  • The ethics of the AI supply chain in which PSM engage should be a core focus: the policies and outsourcing practices of large technology companies are often hidden from public view and may have lower standards of integrity, which do not align with PSM values

    • There is a potential risk that PSM feed into worker exploitation without being aware of it, e.g. through large tech companies sub-contracting unethical data-labelling companies

    • Tied up in this are questions of coloniality and the unequal (re-)distribution of work

  • At the highest level, PSM have responsibilities around informing people and contributing to democratic debate, but we can also consider PSM as an industry which should be developing best practices around workers’ rights in a digital age. Should more work be done here?

  • Should PSM play a role in making data governance in the public interest and worker rights issues a public concern?

    • What role does/should PSM have in convening/collaborating with organisations (e.g. Higher Education, unions) and people to hold these public conversations?

    • What responsibility does PSM have to engage the widest possible audience on issues of AI in work and labour? We know many of the most severely affected are often the least served by public media/service provision

  3. Automating media production & the quality of work

  • AI is often framed as replacing repetitive work, but it also has the power to degrade work practices and take over decisions that should be made by humans.

    • What can PSM do to ensure they understand the outcomes they want from AI and prevent making damaging decisions about what to automate and how?

  • As AI filters more into the newsroom and production space, there is a need to understand how meaning-making is impacted when cognition and control are distributed between “intelligent” systems and human practitioners

    • In particular, PSM journalists engaging with AI and algorithmic systems often have no specialised knowledge of them, which opens potential for misunderstanding, misuse, over-reliance and aversion, carrying potentially higher ethical and reputational risks

  • Adopting AI and automated processes is often justified managerially on the basis that it might free up creative time

    • Should effort be invested in a broad lifecycle analysis to see whether these benefits really exist?

  • There is an impression that PSM often feels the practical need to mirror certain practices from the commercial sector, which is increasingly ‘Silicon Valley’ in its approach.

    • How can PSM distinguish itself from commercial digital services, in terms of an ethical data-driven approach?

  • The ways work is measured and evaluated are full of power dynamics and increasingly based on metrics derived from data, but this may be one area where PSM can take a distinctive approach

    • How can PSM ensure they value more than what is easily quantifiable?

  • What risks are there that PSM workers internalise the logic, promoted by today’s data-driven organisational landscape, that value can be quantified and measured by metrics?

    • Public service value goes beyond the numerically measurable, and adopting forms of commensuration, with their associated reduction of nuance, risks failing to recognise, and in turn depleting, some of PSM’s core contribution

    • How can workers challenge and resist these logics? Do they need to push back from within these structures or do they have to step outside to resist?

 

With thanks to our participants

Dr Karen Gregory
Senior Lecturer in Sociology, University of Edinburgh

Dr Corentin Curchod
Senior Lecturer in Strategic Management & Organisation, University of Edinburgh

Dr Rashné Limki
Lecturer in Work and Organisation Studies, University of Edinburgh

Anna Votsi
EFI Research Proposals Manager, University of Edinburgh

Dr Alex Taylor
Reader in Human Computer Interaction, City University of London

Dr Oliver Bates
Research Fellow, Lancaster University

Dr Marisela Gutierrez Lopez
Senior Research Associate in School of Sociology, Politics and International Studies, Bristol University

Mary Towers
Employment Rights Policy Officer in the Rights, International, Social and Economic department, TUC

Lalya Gaye
AI & Data Initiative Coordinator, European Broadcasting Union

Dr Rhia Jones
Research Lead, BBC

Dr Mike Evans
UX Research Lead, BBC