Facial Recognition: No Prior Knowledge Needed

In 2016, an Israeli startup launched Faception, a facial recognition software allegedly able to identify personality traits from facial images. Facial recognition is not new: the use of biometrics for security purposes increased rapidly after 9/11, and automated surveillance is now commonly employed in immigration control and predictive policing. Faception, however, fits into a specific leading trend in recognition technology, what Luke Stark has called the “behavioral turn”: the integration of psychology and computation in the attempt to quantify human subjectivity (Stark 2018).

Personality types (image via Faception).

Indeed, the Israeli company claims to be able to successfully identify personality “types” such as “an Extrovert, a person with High IQ, Professional Poker Player or a Terrorist”. According to their website, an academic researcher, for instance, is “endowed with sequential thinking, high analytical abilities, a multiplicity of ideas, deep thoughts and seriousness. Creative, with a high concentration ability, high mental capacity, and interest in data and information”.

Leaving aside the problematic taxonomy (how is being an academic researcher a personality type at all?), a section of Faception’s website unveils the “theory behind the technology”.

Citing a study conducted at the University of Edinburgh on the role of genetics in personality development, together with unspecified research on the portions of DNA that influence the arrangement of facial features, the company concludes syllogistically, and without further evidence, that “the face can be used to predict a person’s personality and behavior”.

The idea that behavioral traits can be read from facial features – the practice of “physiognomy” – became popular in the nineteenth century thanks to the Italian anthropologist Cesare Lombroso, who fueled the idea that phrenological diagnosis (the measurement of the skull) would make the identification of criminals possible. Lombroso believed criminality to be hereditary, and therefore visible in facial features, which he claimed resembled those of savages or apes.

Despite having been discredited as a scientific theory for its racist and classist assumptions about human identity, physiognomy is not completely out of the picture today.

In a long interview with The Guardian, Stanford Professor Michal Kosinski – who in 2017 claimed that face recognition technology could distinguish sexual orientation with more accuracy than humans – declared that he could see patterns in people’s Facebook profile pictures: “It suddenly struck me,” […] “introverts and extroverts have completely different faces. I was like, ‘Wow, maybe there’s something there.’”

Researcher Joy Buolamwini created a more diverse dataset to test commercial facial recognition software (image via Gender Shades).

The use and development of such technologies raise issues of identity, policy and discrimination, especially when they are employed for surveillance. Recent studies in computer science have shown that the design of facial recognition technology is still highly biased. Joy Buolamwini tested three commercial systems (IBM, Microsoft and Face++), showing that all of them frequently misclassify women of color, while their error rate for lighter-skinned males is close to 0%. This is due to the lack of diverse datasets (which at the moment consist mostly of images of white men) upon which AI developers train their algorithms. Once encoded, such misclassification can propagate throughout the infrastructure (Buolamwini and Gebru 2018).
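
The logic of this kind of audit is simple enough to sketch in a few lines. What follows is a minimal illustration of a disaggregated evaluation – with hypothetical data and labels, not the authors’ actual code or benchmark: instead of reporting a single aggregate accuracy figure, the error rate is computed separately for each demographic subgroup, exposing disparities that the average conceals.

```python
# Minimal sketch of a disaggregated ("intersectional") accuracy audit.
# The records below are hypothetical; Gender Shades used a purpose-built
# benchmark of parliamentarians' portraits, not this toy data.
from collections import defaultdict

# Each record: (true gender label, system's prediction, demographic subgroup)
results = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "darker-skinned male"),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for true, pred, group in results:
    counts[group][0] += int(pred != true)
    counts[group][1] += 1

# A single overall accuracy figure would hide the gap that shows up here.
for group, (wrong, total) in sorted(counts.items()):
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
```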

Finally, Faception’s website lists the advantages of its software: objectivity, accuracy, real-time evaluation and, ultimately, “no prior knowledge needed”. The technology doesn’t require any associated data or context to determine, with Lombrosian certainty, the essence and fate of its target, instantly assessing whether that person is a pedophile or a bingo player.

***

References

Buolamwini, Joy, and Timnit Gebru, 2018, “Gender shades: Intersectional accuracy disparities in commercial gender classification”, Conference on Fairness, Accountability and Transparency, 77-91.

Stark, Luke, 2018, “Algorithmic psychometrics and the scalable subject”, Social Studies of Science, 48(2), 204-231.

Intelligent Machines, Part 1. On Defecating Ducks and Invisible Labor

In September 2018, the British Academy and the Royal Society published an evidence report on the impact of Artificial Intelligence on the future of work. The review, which aims to help policy makers tackle the “disruptive effect” of AI (2018: 4), suggests that around “10-30% of jobs in the UK are highly automatable, meaning AI could result in significant job losses” (22). However, when it comes to defining the nature of such jobs, let alone to indicating what “automatable” means, the report is significantly vague. We read: “There are many different perspectives on ‘automatability’, with a broad consensus that current AI technologies are best suited to ‘routine’ tasks, while humans are more likely to remain dominant in unpredictable environments, or in spheres that require significant social intelligence” (24).

As inconsistent as it may sound, the same report had previously defined AI as “an umbrella term that describes a suite of technologies that seek to perform tasks usually associated with human intelligence. ‘The science and engineering of making intelligent machines’” (13). What kind of “intelligence”, then, do these machines have?

Robot staff at Henn na Hotel, in Japan (image via The Guardian).

“AI is coming for our jobs”. When we hear such claims, we immediately think of the McDonald’s self-ordering kiosk, or of the dinosaur robot receptionist managing the front desk at Henn na Hotel in Japan. Except that none of those machines is actually “intelligent”. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. In practice, AI is commonly associated with tasks performed through machine learning, the ability of an algorithmic system to learn from data and improve its own performance. In this sense, Google’s search engine, or the YouTube algorithm, are examples of AI, while the abovementioned job-stealing dinosaur is not: the latter only responds to a limited number of pre-defined inputs, following customers’ interactions with a touchscreen at the counter. Is automation, then, sufficient to define Artificial Intelligence?
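
To make the distinction concrete, here is a deliberately toy sketch – hypothetical names throughout, not the code of any real kiosk or search engine. The first pattern maps pre-defined inputs to pre-defined outputs and never changes; the second adjusts its own behavior as data accumulates, which is the minimal sense in which a system can be said to “learn”.

```python
# Pattern 1: scripted automation. A fixed lookup table; the machine's
# behavior is fully specified in advance and never improves.
KIOSK_RESPONSES = {
    "check_in": "Please place your passport on the scanner.",
    "check_out": "Thank you for staying with us!",
}

def kiosk_respond(button: str) -> str:
    return KIOSK_RESPONSES.get(button, "Sorry, I cannot help with that.")

# Pattern 2: a (very) simple learning system. Its prediction is a
# parameter estimated from data, so its behavior changes with experience.
class RunningMeanPredictor:
    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def update(self, observation: float) -> None:
        self.total += observation
        self.count += 1

    def predict(self) -> float:
        # Improves (in expectation) as more observations arrive.
        return self.total / self.count if self.count else 0.0
```

Both patterns are “automated”; only the second revises itself in response to data.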

In The Defecating Duck, or, the Ambiguous Origins of Artificial Life, Jessica Riskin provides a brilliant historical account of eighteenth-century attempts to build “automata”: technical and philosophical experiments aimed at providing evidence for a mechanistic explanation of life and, conversely, at marking the boundary between humanity and machinery. Jacques Vaucanson’s “Defecating Duck”, a mechanical animal apparently able to simulate the process of digestion to its very end, embodies this tension: as a close observer noticed in 1783, the food input and the excrement output were not actually connected. The Duck was, like many automata of its time, a fraud as well as an “ongoing taxonomic exercise, sorting the animate from the inanimate, the organic from the mechanical, the intelligent from the rote, with each category crucially defined, as in any taxonomy, by what is excluded from it” (613).

Jacques de Vaucanson’s inventions (image via ArchiSlim).

Recruited by Louis XV as Inspector of Silk Manufactures, Vaucanson developed the automatic loom in 1741, drawing a distinction between “intelligent” and “unintelligent” work. According to its inventor, the loom was so simple to use that “‘the most limited people’, even ‘girls’, could be ‘substituted for those who…[are] more intelligent, [and] demand a higher salary’” (628). Indeed, the distinction between intelligent and unintelligent labor was a key feature of the social hierarchy of the Ancien Régime. And the model of the solitary artist (the genius), as opposed to the labor of invisible technicians and other support personnel, persists in our scientific culture (as shown in Steven Shapin’s story of The Invisible Technician).

As recent works have shown (here and here), behind scientific and technological development lies a process of exclusion and intentional deskilling of workers. The definition of AI goes hand in hand with the value assigned to human labor, suggesting that a critical understanding of the former should always include an analysis of the socio-political contingencies that shape the latter.

***

[part 2]

References

The Impact of Artificial Intelligence on Work: An Evidence Synthesis on Implications for Individuals, Communities, and Societies, British Academy and the Royal Society, September 2018 (available online at https://royalsociety.org/~/media/policy/projects/ai-and-work/evidence-synthesis-the-impact-of-AI-on-work.PDF?la=en-GB).

Riskin, Jessica, 2003, “The Defecating Duck, or, The Ambiguous Origins of Artificial Life”, Critical Inquiry, 29(4), 599-633.
