
Intelligent Machines, Part 2. The Dark Underbelly of Automation

A Click Farm in China (image via ppcprotect)

If you have watched Silicon Valley, the TV series following the struggles of a startup tech company called Pied Piper, you may recall the closing scene of the third season’s penultimate episode. Facing the risk of bankruptcy, Pied Piper’s business advisor decides to buy fake app users from a South Asian “click farm”, in the hope of attracting potential investors.

Click farms – large groups of workers paid to boost the visibility of websites or social media accounts – are just one example of the outsourced, underpaid human labor on which Western tech firms rely.

Astra Taylor has coined the term “fauxtomation” to describe the process that renders human labor invisible and reinforces the illusion that machines are smarter than they actually are. A typical example of fauxtomation is Amazon Mechanical Turk (MTurk), one of the many crowdsourcing platforms for recruiting online human labor.

The “Turkers” – anonymous workers who live mostly in India and other lower-income countries (Ross 2010) – perform “Human Intelligence Tasks” (“HITs”) for payments as low as $0.01 per assignment. HITs include transcribing audio, entering information into spreadsheets, tagging images, and researching email addresses or information from websites.

Before a machine can recognize a given object or face in a picture, a human has to establish what is salient in that content. MTurkers therefore manually label thousands of images, creating the large-scale datasets on which AI developers train machine-learning algorithms.
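The pipeline behind such datasets is mundane: several workers typically label the same item, and the requester aggregates their answers, often by simple majority vote. A minimal sketch of that aggregation step (the image names and labels here are hypothetical, not drawn from any real HIT):

```python
from collections import Counter

def majority_label(labels):
    """Return the most common label among the crowd workers' answers."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Hypothetical answers from three Turkers per image.
raw_labels = {
    "img_001.jpg": ["cat", "cat", "dog"],
    "img_002.jpg": ["pedestrian", "pedestrian", "pedestrian"],
}

# The "ground truth" a model will later be trained on is just
# these aggregated human judgments.
dataset = {img: majority_label(answers) for img, answers in raw_labels.items()}
print(dataset)  # {'img_001.jpg': 'cat', 'img_002.jpg': 'pedestrian'}
```

The point of the sketch is that the resulting “ground truth” is nothing more than aggregated human judgment: the model inherits whatever the workers decided.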

Much of the automation of current AI technologies relies on this outsourced, low-paid workforce. As Taylor puts it, “Amazon’s cheeky slogan—’artificial artificial intelligence’—acknowledges that there are still plenty of things egregiously underpaid people do better than robots” (Taylor 2018).

A copper engraving of the Turk (image via Wikipedia).

It is not surprising that Amazon’s platform is named after the famous Mechanical Turk, or Automaton Chess Player, the chess-playing machine constructed in the late eighteenth century by Wolfgang von Kempelen. Like other “automata” of its time, explored in the previous blog entry, Kempelen’s creature was a hoax: a human operator sat inside the machine, playing through a series of levers that controlled the Turk’s arms.

This racialized android, according to Ayhan Aytes, embodies the shift of cognitive work from “the privileged labor of the Enlightened subject to unqualified crowds of the neoliberal cognitive capitalism” (Aytes 2013: 88). Crowdsourcing works here as a form of capitalist exploitation of the “collective mind”: platforms like MTurk “divide cognitive tasks into discrete pieces so that the completion of tasks is not dependent on the cooperation of the workers themselves” (Aytes 2013: 94). Strategies of collective resistance have nonetheless emerged: Turkopticon has served for many years as a platform where Turkers share experiences and avoid unprofitable HITs.

Sensationalistic claims about automation need to be carefully questioned, especially since this scenario, in Taylor’s words, “has not come close to being true. If the automated day of judgment were actually nigh, they wouldn’t need to invent all these apps to fake it” (2018).

***

[part 1]

References

Aytes, Ayhan, 2013, “Return of the Crowds: Mechanical Turk and Neoliberal States of Exception”, in Scholz, Trebor (ed.), Digital Labor: The Internet as Playground and Factory, Routledge, New York.

Ross, Joel, et al., 2010, “Who Are the Crowdworkers? Shifting Demographics in Mechanical Turk”, CHI ’10 Extended Abstracts on Human Factors in Computing Systems, ACM.

Taylor, Astra, 2018, “The Automation Charade”, Logic Magazine (available online at: https://logicmag.io/05-the-automation-charade/).

 

Intelligent Machines, Part 1. On Defecating Ducks and Invisible Labor

In September 2018, the British Academy and the Royal Society published an evidence report on the impact of Artificial Intelligence on the future of work. The review, which aims to help policy makers tackle the “disruptive effect” of AI (2018: 4), suggests that around “10-30% of jobs in the UK are highly automatable, meaning AI could result in significant job losses” (22). However, when it comes to defining the nature of such jobs, let alone what “automatable” means, the report is notably vague: “There are many different perspectives on ‘automatability’, with a broad consensus that current AI technologies are best suited to ‘routine’ tasks, while humans are more likely to remain dominant in unpredictable environments, or in spheres that require significant social intelligence” (24).

Inconsistently enough, the same report had earlier defined AI as “an umbrella term that describes a suite of technologies that seek to perform tasks usually associated with human intelligence. ‘The science and engineering of making intelligent machines’” (13). What kind of “intelligence”, then, do these machines have?

Robot staff at Henn na Hotel, in Japan (image via The Guardian)

 

“AI is coming for our jobs”. When we hear such claims, we immediately think of the McDonald’s self-ordering kiosk, or of the dinosaur robot receptionist managing the front desk at the Henn na Hotel in Japan. Except that none of these machines is actually “intelligent”. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. Thus, AI is commonly associated with tasks performed through Machine Learning, the ability of an algorithmic system to learn from data and improve its own performance. In this sense, the Google search engine, or the YouTube recommendation algorithm, are examples of AI, while the abovementioned job-stealing dinosaur is not: the latter only responds to a limited number of pre-defined inputs, following customers’ interactions with a touchscreen at the counter. Is automation, then, sufficient to define Artificial Intelligence?
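The distinction can be made concrete in a few lines of code. A scripted kiosk maps a fixed set of inputs to canned outputs; a learning system adjusts its internal parameters from data. A toy sketch (the menu, prices, and data are made up for illustration, and stand in for no real system):

```python
# A scripted "robot": a fixed input -> output table. Nothing is learned;
# any input outside the table simply gets no response.
KIOSK_MENU = {"burger": 4.99, "fries": 1.99}

def kiosk_order(item):
    return KIOSK_MENU.get(item)

# A minimal learner: estimates a single weight w from examples by
# gradient descent on squared error -- it improves with experience.
def learn_weight(examples, steps=1000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of y = 2x
w = learn_weight(data)
print(round(w, 2))  # converges to ~2.0
```

The kiosk, like the robot dinosaur, can only replay its table; the learner ends up encoding a regularity (here, y = 2x) that was never explicitly programmed into it. That, in miniature, is the difference the definition turns on.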

In The Defecating Duck, or, the Ambiguous Origins of Artificial Life, Jessica Riskin provides a brilliant historical account of eighteenth-century attempts at building “automata”: technical and philosophical experiments aimed at providing evidence for a mechanistic explanation of life and, at the same time, conversely, at assessing the boundary between humanity and machinery. Jacques de Vaucanson’s “Defecating Duck”, a mechanical animal apparently able to simulate the process of digestion to its very end, embodied this tension: as a close observer noticed in 1783, the food input and the excrement output were not related. The Duck was, like many automata of its time, a fraud, as well as an “ongoing taxonomic exercise, sorting the animate from the inanimate, the organic from the mechanical, the intelligent from the rote, with each category crucially defined, as in any taxonomy, by what is excluded from it” (613).

Jacques de Vaucanson’s inventions (image via ArchiSlim).

Recruited by Louis XV as Inspector of Silk Manufactures, Vaucanson developed the automatic loom in 1741, thereby drawing a distinction between “intelligent” and “unintelligent” work. According to its inventor, the loom was so simple to use that “’the most limited people’, even ‘girls’”, could be “’substituted for those who…[are] more intelligent, [and] demand a higher salary’” (628). Indeed, the distinction between intelligent and unintelligent labor was a key feature of the social hierarchy of the Ancien Régime. The model of the solitary artist (the genius), as opposed to the labor of invisible technicians and other support personnel, still persists in our scientific culture (as shown in Steven Shapin’s story of The Invisible Technician).

As recent works have shown (here and here), behind scientific and technological development lies a process of exclusion and intentional deskilling of workers. The definition of AI goes hand in hand with the value assigned to human labor, suggesting that a critical understanding of the former should always include an analysis of the socio-political contingencies that shape the latter.

***

[part 2]

References

The Impact of Artificial Intelligence on Work: An Evidence Synthesis on Implications for Individuals, Communities, and Societies, British Academy and the Royal Society, September 2018 (available online at https://royalsociety.org/~/media/policy/projects/ai-and-work/evidence-synthesis-the-impact-of-AI-on-work.PDF?la=en-GB).

Riskin, Jessica, 2003, “The Defecating Duck, or, The Ambiguous Origins of Artificial Life”, Critical Inquiry, Vol. 29, No. 4, 599-633.

