
“Programming Anti-Languages”: Glitch Bodies and Queer Digital Practices

A glitch portrait of myself.

A previous blog entry addressed the need for the inclusion of counter-narratives of the Internet in the public discourse on digital technologies. I offer here a very short account of alternative practices and theoretical frameworks stemming from feminist and queer politics.

 

In 1997, the American Internet provider MCI (later bought by the telecommunications corporation Verizon) broadcast a dazzling TV advert, describing the web in these terms: “People here communicate mind to mind. There is no race. There are no genders. There is no age. There are no infirmities. There are only minds. Utopia? No, the Internet”.

The commercial reflects the enthusiasm that people in the 1990s had for “cyber-space”, a “virgin territory which could be shaped and developed according to a different set of values than those which predominated in the physical spaces of our world” (Evans 2013: 82).

Unfortunately, as we have seen before, the capitalization of ICTs soon crushed the cyber-dream.

However, hidden in the structured rigidity of the Internet, there is an interstitial moment of freedom: the glitch. The baffling pixel fragmentation of an image, the interference of white noise, the alteration of an audio file. Digital artists define the glitch as “a happy accident. […] a good place to be to find pleasure in things that are normally upsetting” (PBS 2012).
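How does one provoke such a “happy accident” on purpose? One common technique among glitch artists is “databending”: opening an image file as raw bytes and corrupting a few of them. Here is a minimal sketch in Python; the file names, the header offset and the number of flipped bytes are my own illustrative assumptions, not a canonical recipe.

```python
# A minimal "databending" sketch: corrupt a few random bytes of a JPEG
# to produce the pixel fragmentation described above.
import random

def glitch(src="portrait.jpg", dst="portrait_glitched.jpg", n_flips=10):
    data = bytearray(open(src, "rb").read())
    # Skip the first 512 bytes so the JPEG header stays intact and the
    # file still opens; the corruption lands in the compressed image data.
    for _ in range(n_flips):
        pos = random.randrange(512, len(data))
        data[pos] = random.randrange(256)
    open(dst, "wb").write(bytes(data))

glitch()  # each run yields a different accident
```

Because the corruption hits the compressed image data rather than the header, the file still decodes, but into the fragmented, unpredictable forms glitch artists prize.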

To address the revolutionary role of glitching, the writer and artist Legacy Russell has coined the term “Glitch Feminism”, a feminism that “embraces the causality of ‘error’” (Russell 2012). The glitch fragments representation just as queerness opposes body normativity: “Glitch Feminism is not gender-specific—it is for all bodies that exist somewhere before arrival upon a final concretized identity that can be easily digested, produced, packaged, and categorized by a voyeuristic mainstream public” (Russell 2012).

Another glitchy me.

New digital strategies, such as the glitch, may be embraced to challenge the inequalities encoded in the ICT infrastructure. The artistic project developed by Zach Blas, “Queer Technologies”, provocatively addresses “the heteronormative, capitalist, militarized underpinnings of technological architectures, design, and functionality. Queer Technologies includes transCoder, a queer programming anti-language” (Blas).

Since the early days of the web, feminist scholars and activists have explored the potential of digital practices, long asking themselves: could we use technology to dismantle inequality?

Following Donna Haraway’s “A Cyborg Manifesto”, which encouraged a liberating, chimeric union between humans, animals and machines (Haraway 1991), “cyberfeminists” were fierce advocates of ICTs as a means to eliminate gender division. However, soon after the First Cyberfeminist International (which took place in Germany in 1997), the artist Faith Wilding argued that “contrary to the dream of many Net utopians, the Net does not automatically obliterate hierarchies through free exchanges of information across boundaries. Also, the Net is not a utopia of nongender. It is already socially inscribed with regard to bodies, sex, age, economics, social class, and race” (Wilding 1998: 9).

In the late 2000s, more critical versions of cyberfeminism arose. “Black Cyberfeminism”, for instance, recognizes how “categorical inequalities” (discrimination on the grounds of gender, ethnicity, religion, class, disability, etc.) are reproduced and even reinforced through the digital infrastructure (McMillan Cottom 2016).

How should we reconfigure techno-scientific practices in the push for social justice? The Xenofeminist Manifesto, developed by the Laboria Cuboniks collective, addresses this question: “Technoscientific innovation must be linked to a collective theoretical and political thinking in which women, queers, and the gender non-conforming play an unparalleled role” (Laboria Cuboniks).

***

References

Blas, Zach, “Queer Technologies 2007-2012”, retrieved from http://www.zachblas.info/works/queer-technologies/.

Evans, Karen, 2013, “Rethinking Community in The Digital Age?”, in Orton-Johnson, Kate, and Nick Prior (edited by), Digital Sociology: Critical Perspectives, Springer.

Haraway, Donna, 1991, “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century”, in Simians, Cyborgs and Women: The Reinvention of Nature, New York, Routledge, 149-181.

Laboria Cuboniks, Xenofeminism: A Politics for Alienation (available online at http://www.laboriacuboniks.net/index.html#interrupt).

McMillan Cottom, Tressie, 2016, “Black Cyberfeminism: Ways Forward for Intersectionality”, in Daniels, Jessie, Karen Gregory, and Tressie McMillan Cottom (edited by), Digital Sociologies, Bristol, Policy Press.

Russell, Legacy, 2012, “Digital Dualism and the Glitch Feminism Manifesto”, Cyborgology (available online at https://thesocietypages.org/cyborgology/2012/12/10/digital-dualism-and-the-glitch-feminism-manifesto/).

Wilding, Faith, 1998, “Where is Feminism in Cyberfeminism?”, Paradoxa, Vol. 2, 6-13.

PBS Digital, 2012, “The Art of Glitch”, August 9, retrieved from https://www.youtube.com/watch?v=gr0yiOyvas4.

Internet Realism: “Is there No Alternative?”

Zach Blas’ Tweets (image via DisMagazine).

 

Talking about his book, Capitalist Realism, Mark Fisher explains: “Put at its simplest, capitalist realism is the widespread idea that capitalism is the only ‘realistic’ political economic system” (Wilson 2017).

Following Margaret Thatcher’s famous slogan that “there is no alternative” to the free market, Fisher argues that this idea has long served as an ideological legitimation of capitalism. “We’ve now got a generation of young adults who have known nothing but global capitalism and who are accustomed to culture being pastiche and recapitulation” (Wilson 2017).

Similarly, the artist and writer Zach Blas has argued that in the last two decades the Internet has become “a totalized sociocultural condition. Like capitalism, the internet has come to exist as a totality, with no outside, no alternative, no ending” (Blas 2016). It has merged into the materiality of the physical world (the Internet of Things) and has grown into “a mode of subjectivation, a set of feelings, a sense of longing, a human condition, a metanarrative” (Ibidem). Is there still a world outside the Internet?

The idea that every aspect of reality is rendered into its digital counterpart (the data) is central to what Shoshana Zuboff has called “Surveillance Capitalism”, this new “ubiquitous networked institutional regime that records, modifies, and commodifies everyday experience from toasters to bodies, communication to thought, all with a view to establishing new pathways to monetization and profit” (Zuboff 2015: 81). The network, recalling Blas’ concerns, serves as a tool for control, management and commodification of every human experience.

What happens when even intimate experiences are monetized? Are there any sites left for resistance, or does even political struggle become another profitable opportunity? These are some of the questions posed by Goldsmiths researchers Beverley Skeggs and Simon Yuill.

They developed custom-built software to detect Facebook’s tracking, targeting and advertising activity, inside and outside the platform (e.g. interactions with sponsored companies), and found that Facebook pays more attention to, and monetizes more effectively, users who are “high net worth individual(s). […] The more coherent and predictable we can be, the easier it is to fragment and disaggregate our data in order to trade it” (Skeggs, Yuill 2016: 391). Paradoxically, the more political integrity we perform online, the more profitable we become.
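Skeggs and Yuill’s software was purpose-built for their study, but the underlying idea of tracker detection can be conveyed in a few lines. The sketch below is a toy analogue of my own, not their tool: it scans a log of outgoing browser requests for domains associated with Facebook’s tracking “pixel”. The domain list and log format are assumptions for illustration.

```python
# Toy tracker detection: flag requests going to Facebook-owned domains.
from urllib.parse import urlparse

FACEBOOK_TRACKING_DOMAINS = {
    "facebook.com", "facebook.net", "fbcdn.net",  # pixel, SDK, CDN
}

def is_facebook_tracker(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in FACEBOOK_TRACKING_DOMAINS)

requests_log = [
    "https://example-news.com/article",
    "https://connect.facebook.net/en_US/fbevents.js",  # the FB "pixel" script
    "https://www.facebook.com/tr?id=123&ev=PageView",  # tracking endpoint
]
hits = [u for u in requests_log if is_facebook_tracker(u)]
print(f"{len(hits)} of {len(requests_log)} requests went to Facebook trackers")
```

Even this crude filter makes visible how much third-party traffic an ordinary page visit generates, which is precisely the opacity the Goldsmiths project set out to expose.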

The need for a clearer understanding of such opaque mechanisms is undeniable, as is the ability to imagine something outside the Internet. Zach Blas’ concept (and artistic project) Contra-Internet stems from this necessity. Building on Paul Preciado’s Contra-Sexual Manifesto, Contra-Internet is “the refusal of naturalizations, hegemonies, and normalizations of the Internet that have contributed to its transformation into a locus of policing and control” (Browne, Blas 2017). It is the attempt to fragment the totality of the network and to conceive an alternative such as, for instance, “mesh-networking”: a non-hierarchical, independent, local network that doesn’t rely on the Internet.
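To see why mesh networking counts as an alternative, consider how a message travels through one. The sketch below simulates flooding over an arbitrary peer-to-peer topology of my own invention: every node relays for its neighbours, and no central server or Internet backbone is involved.

```python
# Why a mesh is non-hierarchical: a message floods node-to-node,
# with no hub that could police or monetize the traffic.
from collections import deque

neighbours = {            # who can reach whom over a local link
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source: str) -> set[str]:
    """Breadth-first flooding: each node forwards to unseen neighbours."""
    reached, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for peer in neighbours[node]:
            if peer not in reached:
                reached.add(peer)
                queue.append(peer)
    return reached

print(flood("A"))  # the message reaches every node without any central relay
```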

We need to include such alternatives in the common narrative of the digital. In doing so, perhaps, as the soundtrack that plays throughout Blas’ artistic performance suggests, we could hear a new world.

***

References

Browne, Simone, and Zach Blas, 2017, “Beyond the Internet and All Control Diagrams”, The New Inquiry, January 24, 2017.

Fisher, Mark, 2009, Capitalist Realism: Is There No Alternative?, John Hunt Publishing.

Skeggs, Beverley, and Simon Yuill, 2016, “Capital experimentation with person/a formation: how Facebook’s monetization refigures the relationship between property, personhood and protest”, Information, Communication & Society, 19.3: 380-396.

Wilson, Rowan, “They Can Be Different in the Future Too: Mark Fisher interviewed”, Verso, January 16, 2017 (available online at https://www.versobooks.com/blogs/3051-they-can-be-different-in-the-future-too-mark-fisher-interviewed).

Zuboff, Shoshana, 2015, “Big other: surveillance capitalism and the prospects of an information civilization”, Journal of Information Technology, 30.1, 75-89.

Facial Recognition: No Prior Knowledge Needed

In 2016, an Israeli startup launched facial recognition software, Faception, allegedly able to identify personality traits from facial images. Facial recognition is not new: the use of biometrics for security purposes has rapidly increased since 9/11, and automated surveillance is now commonly employed in immigration control and predictive policing. However, Faception fits into a specific leading theoretical trend in recognition technology, what Luke Stark has called the “behavioral turn”: the integration of psychology and computation in the attempt to quantify human subjectivity (Stark 2018).

Personality types (image via Faception).

Indeed, the Israeli company claims to be able to successfully identify personality “types” such as “an Extrovert, a person with High IQ, Professional Poker Player or a Terrorist”. According to their website, an academic researcher, for instance, is “endowed with sequential thinking, high analytical abilities, a multiplicity of ideas, deep thoughts and seriousness. Creative, with a high concentration ability, high mental capacity, and interest in data and information”.

Leaving aside the problematic taxonomy (how is being an academic researcher a personality type at all?), a section of Faception’s website unveils the “theory behind the technology”.

Mentioning a study conducted at the University of Edinburgh on the role of genetics in personality development, together with unspecified research on the portions of DNA that influence the arrangement of facial features, the company concludes syllogistically and without further evidence that “the face can be used to predict a person’s personality and behavior”. The syllogism does not hold: even if genes influence both personality and facial features, nothing shows that the same genes do both, let alone that personalities can be read back off faces.

The idea that it is possible to reveal behavioral traits from facial features – the practice of “physiognomy” – became popular in the nineteenth century thanks to the Italian anthropologist Cesare Lombroso. He fueled the idea that phrenological diagnosis (the measurement of the skull) would make the identification of criminals possible. Lombroso believed criminality to be hereditary, and therefore visible in facial features, which he claimed resembled those of “savages” or apes.

Although discredited as a scientific theory for its racist and classist assumptions about human identity, physiognomy is not completely out of the picture today.

In a long interview with The Guardian, Stanford Professor Michal Kosinski – who in 2017 claimed that face recognition technology could distinguish sexual orientation with more accuracy than humans – declared that he could see patterns in people’s Facebook profile pictures: “It suddenly struck me […] introverts and extroverts have completely different faces. I was like, ‘Wow, maybe there’s something there.’”

Researcher Joy Buolamwini created a more diverse dataset to test commercial facial recognition software (image via Gender Shades).

The use and development of such technologies raises issues regarding identity, policy and discrimination, especially when they are employed for surveillance purposes. Recent studies within the field of computer science have shown that the design of facial recognition technology is still highly biased. Joy Buolamwini tested three commercial systems (from IBM, Microsoft and Face++), showing that all of them disproportionately misclassify darker-skinned women, while their error rate for lighter-skinned males is close to 0%. This is due to the lack of diverse datasets (which at the moment consist mostly of images of white men) upon which AI developers train their algorithms. Once encoded, such misclassification can propagate throughout the infrastructure (Buolamwini and Gebru 2018).
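The methodological core of Gender Shades is simple and worth making explicit: instead of reporting a single aggregate accuracy figure, error rates are computed separately for each demographic subgroup. A minimal sketch, with invented toy data standing in for Buolamwini and Gebru’s benchmark:

```python
# Disaggregated auditing: compute error rates per subgroup, not overall.
from collections import defaultdict

# (true_gender, predicted_gender, subgroup) for each test image -- invented
predictions = [
    ("female", "male",   "darker_female"),
    ("female", "female", "darker_female"),
    ("male",   "male",   "lighter_male"),
    ("male",   "male",   "lighter_male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for true, pred, group in predictions:
    totals[group] += 1
    errors[group] += (true != pred)

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} images")
```

An overall accuracy of 75% here hides a 50% error rate for one subgroup, which is exactly the kind of disparity a single aggregate number conceals.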

Finally, Faception’s website lists the advantages of its software: objectivity, accuracy, real-time evaluation and, ultimately, “no prior knowledge needed”. The technology doesn’t require any associated data or context to determine, with Lombrosian certainty, the essence and fate of its target, instantly assessing whether they are a pedophile or a bingo player.

***

References

Buolamwini, Joy, and Timnit Gebru, 2018, “Gender shades: Intersectional accuracy disparities in commercial gender classification”, Conference on Fairness, Accountability and Transparency, 77-91.

Stark, Luke, 2018, “Algorithmic psychometrics and the scalable subject”, Social Studies of Science, 48(2), 204-231.

Intelligent Machines, Part 2. The Dark Underbelly of Automation

A Click Farm in China (image via ppcprotect)

If you have watched Silicon Valley, the TV series following the struggles of a startup tech company called Pied Piper, you may recall the closing scene of the third season’s penultimate episode. Facing the risk of bankruptcy, Pied Piper’s business advisor decides to buy fake app users from a South Asian “click farm”, in the hope of attracting potential investors.

Click farms – large groups of workers paid to boost the visibility of websites or social media accounts – are just one example of the outsourced, underpaid human labor on which Western tech firms rely.

Astra Taylor has coined the term “fauxtomation” to define the process that renders human labor invisible and reinforces the illusion that machines are smarter than they are. A typical example of fauxtomation is Amazon Mechanical Turk (MTurk), one of the many crowdsourcing platforms used to recruit online human labor.

The “Turkers” – anonymous workers living mostly in India and other lower-income countries (Ross 2010) – perform “Human Intelligence Tasks” (“HITs”) for payments as low as $0.01 per assignment. HITs include transcribing audio, inputting information into a spreadsheet, tagging images, and researching email addresses or information from websites.

Before a machine is actually capable of understanding the connection between contents (e.g. recognizing a certain object or face in a picture), a human has to establish what is salient in that content. MTurkers therefore manually label thousands of images, creating the large-scale datasets upon which AI developers train Machine Learning algorithms.
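A typical aggregation step in this pipeline is majority voting: several Turkers label the same image, and the most common answer becomes the “ground truth”. The sketch below assumes an invented labelling task for illustration; it is how redundant human judgments get compressed into a training dataset, not Amazon’s actual implementation.

```python
# Turning redundant crowd labels into "ground truth" by majority vote.
from collections import Counter

def aggregate(labels_per_image: dict[str, list[str]]) -> dict[str, str]:
    """Majority vote over several workers' labels for each image."""
    return {img: Counter(labels).most_common(1)[0][0]
            for img, labels in labels_per_image.items()}

raw_labels = {
    "img_001.jpg": ["cat", "cat", "dog"],   # three workers, a 2-1 split
    "img_002.jpg": ["dog", "dog", "dog"],
}
dataset = aggregate(raw_labels)
print(dataset)  # {'img_001.jpg': 'cat', 'img_002.jpg': 'dog'}
# This labelled set is what a Machine Learning model is then trained on.
```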

Much of the automation of current AI technologies relies on this outsourced, low-paid workforce. As Taylor puts it, “Amazon’s cheeky slogan—’artificial artificial intelligence’—acknowledges that there are still plenty of things egregiously underpaid people do better than robots” (Taylor 2018).

A copper engraving of the Turk (image via Wikipedia).

It is not surprising that Amazon’s platform is named after the famous Mechanical Turk, or Automaton Chess Player, the chess-playing machine constructed in the late eighteenth century by Wolfgang von Kempelen. Like other contemporary “automata” explored in the previous blog entry, Kempelen’s creature was a hoax: a human operator sat inside the machine, playing through a series of levers that controlled the Turk’s arms.

This racialized android, according to Ayhan Aytes, embodies the shift of cognitive work from “the privileged labor of the Enlightened subject to unqualified crowds of the neoliberal cognitive capitalism” (Aytes 2013: 88). Crowdsourcing works here as a form of capitalist exploitation of the “collective mind”: MTurk “divide[s] cognitive tasks into discrete pieces so that the completion of tasks is not dependent on the cooperation of the workers themselves” (Aytes 2013: 94). However, strategies of collective resistance have emerged; Turkopticon has served for many years as a platform for Turkers to share experiences and avoid unprofitable HITs.

Sensationalist claims about automation need to be carefully questioned, especially as this scenario, in Taylor’s words, “has not come close to being true. If the automated day of judgment were actually nigh, they wouldn’t need to invent all these apps to fake it” (2018).

***

[part 1]

References

Aytes, Ayhan, 2013, “Return of the Crowds: Mechanical Turk and Neoliberal States of Exception”, in Scholz, Trebor (edited by), Digital Labor. The Internet as Playground and Factory, Routledge, New York.

Ross, Joel, et al., 2010, “Who are the Crowdworkers? Shifting Demographics in Mechanical Turk”, CHI ’10 Extended Abstracts on Human Factors in Computing Systems, ACM.

Taylor, Astra, 2018, “The Automation Charade”, Logic Magazine (available online at: https://logicmag.io/05-the-automation-charade/).

 

Intelligent Machines, Part 1. On Defecating Ducks and Invisible Labor

In September 2018, the British Academy and the Royal Society published an evidence report on the impact of Artificial Intelligence on the future of work. The review, which aims to help policymakers tackle the “disruptive effect” of AI (2018: 4), suggests that around “10-30% of jobs in the UK are highly automatable, meaning AI could result in significant job losses” (22). However, when it comes to defining the nature of such jobs, let alone indicating what “automatable” means, the report is significantly vague. We read: “There are many different perspectives on ‘automatability’, with a broad consensus that current AI technologies are best suited to ‘routine’ tasks, while humans are more likely to remain dominant in unpredictable environments, or in spheres that require significant social intelligence” (24).

As inconsistent as it may sound, the same report earlier defines AI as “an umbrella term that describes a suite of technologies that seek to perform tasks usually associated with human intelligence. ‘The science and engineering of making intelligent machines’” (13). What kind of “intelligence”, then, do these machines have?

Robot staff at Henn na Hotel, in Japan (image via The Guardian)

 

“AI is coming for our jobs”. When we hear such claims, we immediately start thinking about the McDonald’s self-ordering kiosk, or the dinosaur robot receptionist managing the front desk at Henn na Hotel in Japan. Except that none of those machines is actually “intelligent”. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. Thus, AI is commonly associated with tasks performed through Machine Learning, the ability of an algorithmic system to learn from data and improve its own performance. In this sense, Google’s search engine, or the YouTube recommendation algorithm, are examples of AI, while the abovementioned job-stealing dinosaur is not: the latter only responds to a limited number of pre-defined inputs, triggered by customers’ interactions with a touchscreen at the counter. Is automation alone, then, sufficient to define Artificial Intelligence?
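The contrast can be made concrete in a few lines of code. Below, a kiosk-style “robot” is just a fixed lookup table, while a deliberately tiny Machine Learning model adjusts a parameter from examples; both are invented illustrations, not anyone’s production system.

```python
# 1. Pre-defined inputs only: a fixed lookup table. It automates,
#    but it learns nothing, so it is not AI in the ML sense.
KIOSK_MENU = {"burger": 4.99, "fries": 1.99}

def kiosk_order(item: str) -> float:
    return KIOSK_MENU[item]          # anything unlisted simply fails

# 2. Machine Learning: a one-parameter model that improves with data.
def fit_price_per_item(orders: list[tuple[int, float]]) -> float:
    """Learn an average price per item from (quantity, total) examples."""
    w = 0.0
    for _ in range(50):              # a few passes over the data
        for qty, total in orders:
            w += 0.01 * (total - w * qty) * qty  # gradient step on squared error
    return w

w = fit_price_per_item([(1, 5.0), (2, 10.0), (3, 15.0)])
print(round(w, 2))  # ≈ 5.0, inferred from examples rather than hard-coded
```

The kiosk’s behavior is fully specified in advance; the second system’s behavior depends on the data it has seen, which is the minimal sense in which it “learns”.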

In “The Defecating Duck, or, the Ambiguous Origins of Artificial Life”, Jessica Riskin provides a brilliant historical account of eighteenth-century attempts at building “automata”: technical and philosophical experiments aimed at providing evidence for a mechanistic explanation of life and, conversely, at assessing the boundary between humanity and machinery. Jacques de Vaucanson’s “Defecating Duck”, a mechanical animal apparently able to simulate the process of digestion down to its very end, embodies this tension: as a close observer noticed in 1783, the food input and the excrement output were not related. The Duck was, like many contemporary automata, a fraud as well as an “ongoing taxonomic exercise, sorting the animate from the inanimate, the organic from the mechanical, the intelligent from the rote, with each category crucially defined, as in any taxonomy, by what is excluded from it” (613).

Jacques de Vaucanson’s inventions (image via ArchiSlim).

Recruited by Louis XV as Inspector of Silk Manufactures, Vaucanson developed the automatic loom in 1741, thus drawing a distinction between “intelligent” and “unintelligent” work. According to its inventor, the loom was so simple to use that “‘the most limited people’, even ‘girls’ could be ‘substituted for those who…[are] more intelligent, [and] demand a higher salary’” (628). Indeed, the distinction between intelligent and unintelligent labor was a key feature of the social hierarchy of the Ancien Régime. The model of the solitary artist (the genius), as opposed to the labor of invisible technicians and other support personnel, is still persistent in our scientific culture (as shown in Steven Shapin’s story of “The Invisible Technician”).

As recent works have shown (here and here), behind scientific and technological development lies a process of exclusion and intentional deskilling of workers. The definition of AI goes hand in hand with the value assigned to human labor, suggesting that a critical understanding of the former should always include an analysis of the socio-political contingencies that shape the latter.

***

[part 2]

References

The Impact of Artificial Intelligence on Work, An Evidence Synthesis on Implications for Individuals, Communities, and Societies, British Academy, The Royal Society, September 2018 (available online at https://royalsociety.org/~/media/policy/projects/ai-and-work/evidence-synthesis-the-impact-of-AI-on-work.PDF?la=en-GB)

Riskin, Jessica, 2003, “The Defecating Duck, or, The Ambiguous Origins of Artificial Life”, Critical Inquiry, Vol. 29, No. 4, 599-633.
