
Intelligent Machines, Part 1. On Defecating Ducks and Invisible Labor

In September 2018, the British Academy and The Royal Society published an evidence report on the impact of Artificial Intelligence on the future of work. The review, which aims to help policy makers tackle the “disruptive effect” of AI (2018: 4), suggests that around “10-30% of jobs in the UK are highly automatable, meaning AI could result in significant job losses” (22). However, when it comes to defining the nature of such jobs, let alone to indicating what “automatable” means, the report is remarkably vague. We read: “There are many different perspectives on ‘automatability’, with a broad consensus that current AI technologies are best suited to ‘routine’ tasks, while humans are more likely to remain dominant in unpredictable environments, or in spheres that require significant social intelligence” (24).

Inconsistent as this may sound, the same report earlier defines AI as “an umbrella term that describes a suite of technologies that seek to perform tasks usually associated with human intelligence. ‘The science and engineering of making intelligent machines’” (13). What kind of “intelligence”, then, do these machines have?

Robot staff at Henn na Hotel, in Japan (image via The Guardian)


“AI is coming for our jobs”. When we hear such claims, we immediately think of the McDonald’s self-ordering kiosk, or of the dinosaur robot receptionist managing the front desk at Henn na Hotel in Japan. Except that none of those machines is actually “intelligent”. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. AI is thus commonly associated with tasks performed through machine learning, the ability of an algorithmic system to learn from data and improve its own performance. In this sense, the Google search engine or the YouTube algorithm are examples of AI, while the abovementioned job-stealing dinosaur is not: the latter only responds to a limited number of pre-defined inputs, following customers’ interactions with a touchscreen at the counter. Is automation, then, sufficient to define Artificial Intelligence?
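To make the contrast concrete, the sketch below is a purely illustrative toy example in Python (the names, rules, and numbers are invented and bear no relation to the actual software of the kiosk, the robot, or Google’s systems). It contrasts a rule-based responder, which maps a fixed set of pre-defined inputs to canned outputs, with a minimal “learner” whose output depends on the data it has been shown.

```python
# Illustrative sketch only: a toy contrast between rule-based automation
# and a system that "learns" from data. Hypothetical names and values;
# not the software of any system mentioned in the text.

# 1. Rule-based automation, like a check-in kiosk: a fixed table of
#    pre-defined inputs and canned responses. Its behaviour never changes.
KIOSK_RULES = {
    "check in": "Please place your passport on the scanner.",
    "check out": "Thank you for staying with us!",
}

def kiosk_respond(button_pressed: str) -> str:
    # Anything outside the pre-defined inputs gets a fallback message.
    return KIOSK_RULES.get(button_pressed, "Sorry, I did not understand.")

# 2. A (very) simple learner: it refines its prediction as examples
#    accumulate, so its behaviour is a function of the data it has seen.
class RunningAverageLearner:
    """Predicts a value as the mean of all examples observed so far."""

    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def learn(self, example: float) -> None:
        self.total += example
        self.count += 1

    def predict(self) -> float:
        return self.total / self.count if self.count else 0.0

if __name__ == "__main__":
    print(kiosk_respond("check in"))   # always the same answer

    learner = RunningAverageLearner()
    for observed in [2.0, 4.0, 6.0]:
        learner.learn(observed)
    print(learner.predict())           # 4.0 -- changes with the data
```

The first component will give the same answers however many customers it serves; the second does nothing but adjust its output as examples accumulate, which is, in the narrow sense used above, what allows it to “learn”.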

In The Defecating Duck, or, the Ambiguous Origins of Artificial Life, Jessica Riskin provides a brilliant historical account of eighteenth-century attempts at building “automata”: technical and philosophical experiments aimed at providing evidence for a mechanistic explanation of life and, conversely, at assessing the boundary between humanity and machinery. Jacques Vaucanson’s “Defecating Duck”, a mechanical animal apparently able to simulate the process of digestion from beginning to end, embodies this tension: as a close observer noticed in 1783, the food input and the excrement output were in fact unrelated. The Duck was, like many contemporary automata, a fraud, as well as an “ongoing taxonomic exercise, sorting the animate from the inanimate, the organic from the mechanical, the intelligent from the rote, with each category crucially defined, as in any taxonomy, by what is excluded from it” (613).

Jacques de Vaucanson’s inventions (image via ArchiSlim).

Recruited by Louis XV as Inspector of Silk Manufactures, Vaucanson developed the automatic loom in 1741, thereby drawing a distinction between “intelligent” and “unintelligent” work. According to its inventor, the loom was so simple to use that “‘the most limited people’, even ‘girls’ could be ‘substituted for those who…[are] more intelligent, [and] demand a higher salary’” (628). Indeed, the distinction between intelligent and unintelligent labor was a key feature of the social hierarchy of the Ancien Régime. The model of the solitary artist (the genius), as opposed to the labor of invisible technicians and other support personnel, still persists in our scientific culture (as shown in Steven Shapin’s story of The Invisible Technician).

As recent works have shown (here and here), behind scientific and technological development lies a process of exclusion and intentional deskilling of workers. The definition of AI goes hand in hand with the value assigned to human labor, which suggests that a critical understanding of the former should always include an analysis of the socio-political contingencies that shape the latter.

***

[part 2]

References

British Academy and The Royal Society, 2018, The Impact of Artificial Intelligence on Work: An Evidence Synthesis on Implications for Individuals, Communities, and Societies, September 2018 (available online at https://royalsociety.org/~/media/policy/projects/ai-and-work/evidence-synthesis-the-impact-of-AI-on-work.PDF?la=en-GB)

Riskin, Jessica, 2003, “The Defecating Duck, or, the Ambiguous Origins of Artificial Life”, Critical Inquiry, Vol. 29, No. 4, 599-633.
