
Tradeoffs: AI in the future of work

The adoption of artificial intelligence (AI) in workplaces has accelerated in recent years, prompting some post-capitalist theories of a future in which humans are entirely freed from work through full automation (Russell and Susskind, 2021; Srnicek and Williams, 2015). At first glance, this vision of a future without work seems attractive – it promises a hedonistic form of freedom and wellbeing focused solely on pleasure and the pursuit of leisure. However, this conception of a no-work future underplays the importance of work to human existence and its role in providing deeper psychological meaning.

Several studies have demonstrated that people in employment exhibit significantly higher levels of psychological wellbeing than those who are unemployed (McKee-Ryan et al., 2002; Russell and Susskind, 2021). The act of working allows humans to participate actively in an endeavour and to experience “eudaimonic” wellbeing – a state of wellbeing grounded in key human qualities such as ethics, empathy, and a sense of purpose (Kraut, 2022). Merely relieving humanity of the obligation to work would deprive people of this crucial element of genuine fulfilment.

As AI capabilities improve, these technologies are increasingly being deployed to assist with and automate processes across a wide range of industries and sectors, including knowledge-work fields such as law and medicine. AI assistance can enhance efficiency for certain tasks and workflows. However, studies have shown that employers’ motivations for adopting AI are often driven more by cost savings and competitive pressures than by careful consideration of humanistic impacts. Despite the expectation of financial benefits, AI adoption in the workplace does not consistently deliver the anticipated productivity gains and cost reductions (Howcroft and Taylor, 2022). This disconnect between hoped-for outcomes and reality underscores how decisions around increasing automation are frequently made without adequate regard for preserving human roles, meaning, and wellbeing – creating circumstances that could harm workers both financially and psychologically (Russell and Susskind, 2021).

Consequently, instead of imagining either a future of full “freedom from labour” through automation or one that prioritises human labour alone, it seems imperative to develop a new paradigm – a “co-pilot” model for the future of work – that harmonises human–AI collaboration in a way that preserves key elements of humanness such as meaning, ethics, and empathy, while still capitalising on AI’s augmentative potential.

A potential critique of this proposed eudaimonic co-pilot future is that overly prioritising humanistic qualities could hinder productivity and operational efficiency, especially given AI’s strengths in areas such as endurance, immunity to illness, and resilience to disruptions like pandemics. Several analysts have argued that crises of this nature represent key scenarios in which AI could outperform human workers, providing an ethical rationale for increasingly automating work (Srnicek and Williams, 2015). From this view, a work future that integrates too many humanistic elements, such as “hyperempathy”, could be seen as an impediment. However, pursuing unrestricted technological innovation and abundance at the expense of core human values would likely create severe repercussions. Economists warn that full automation coupled with insufficient redistribution of productivity gains would lead to the radical commodification of what little human labour remains, as workers are pitted against each other and against AI agents in an increasingly competitive race to secure whatever paid work is still available (Howcroft and Taylor, 2022). Under such a scenario, the minority who own and control AI systems and the means of production would capture a wildly disproportionate share of the wealth generated, exacerbating inequality as technological profits accrue only to an ultra-rich class.

These dynamics reflect the deeply capitalistic realities that often drive technological adoption, with the focus primarily on profit maximisation rather than human wellbeing (Smith, 2010). To avoid widening these inequalities, building a new work future will require economic and policy counterbalances that temper technological disruption and redistribute the concentrated wealth it generates.

 

References

Howcroft, D. and Taylor, P. (2022). Automation and the future of work: A social shaping of technology approach. New Technology, Work and Employment, 38, 351–370. doi: 10.1111/ntwe.12240

Kraut, R. (2022). “Aristotle’s Ethics”. In E. N. Zalta and U. Nodelman (eds.), The Stanford Encyclopedia of Philosophy (Fall 2022 Edition). Available online: https://plato.stanford.edu/archives/fall2022/entries/aristotle-ethics/

Russell, S. and Susskind, D. (2021). Positive AI Economic Futures. Insight Report, World Economic Forum.

Srnicek, N. and Williams, A. (2015). Inventing the Future: Postcapitalism and a World Without Work. Verso, London, pp. 79–143. Available from: ProQuest Ebook Central.
