It’s the metrics, not the Matrix, part 3: Degenerative AI

Image credit: A blend of previous posts’ images with Karl Marx escaping the (Medieval) Metrics Matrix – generated using DALL-E and mixed with Photopea by the author and numerous unacknowledged art and data workers.
In this post, Dr Vassilis Galanos continues his exploration of metrics, arguing that the passive acceptance of a metrics-oriented culture is what feeds, establishes, and normalises hype and high adoption rates of Generative Artificial Intelligence (GenAI) machinery. This post is part 3 of 3, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.
In the previous two posts (Higher Education State Critical and Rigorously Established Fear), I argued that, in Marxian terms, the surplus value generated from intellectual labour in academia is channelled into enhancing the institution’s reputation and funding. This shift is slowly swapping the map (numerical indices) for the territory (learning and teaching experience), using the latter to navigate the former instead of the opposite. In this post, I argue that the passive acceptance of this metrics-oriented culture is what feeds, establishes, and normalises hype and high adoption rates of Generative Artificial Intelligence (GenAI) machinery, such as OpenAI’s ChatGPT, Microsoft’s Copilot, or Anthropic’s Claude. What follows is a fast-forward historical recap of what preceded the emergence of GenAI.

Student grades, attendance, and staff citations fuel the academic-industrial complex by fostering connections between businesses ready to absorb the highly-graded students and high-ranking research initiatives ready to partner with them. This relationship turns human intellect into marketable products, and it is mutually parasitical: the industry wishes to present an outward-facing scientific and moral high ground with the approval of the academy, while the academy wishes to benefit from industry collaborations that increase revenue and prestige and can be presented as societal impact. This process is reminiscent of the commodification trends of industrial capitalism, where labour was quantified and valued according to its contribution to profit. Social media metrics further entrench this commodification, transforming intellectual achievements into social capital. Shoshana Zuboff calls this “behavioural surplus” – a term that might as well fit the academic landscape. Students and staff self-regulate, ever aware of the metrics that loom over them, dictating their academic behaviour.
This is the darker side of the notion of the quantified self: the vision in which people would continuously track themselves in order to optimise their performance (and well-being), a vision that fell into decay once the profit-driven motives of the self-tracking industries were sufficiently experienced – from health apps to selfie sharing – in most cases training customised advertising algorithms and facial recognition software, often used for military and policing purposes. This self-tracking culture fits perfectly into academia’s metrics obsession. Students monitor their grade point average (GPA) like investors tracking stocks, and researchers obsess over their h-index scores like performers anxiously awaiting their show’s reviews. While waiting for these longer-term affirmations, both gain temporary satisfaction through social media interactions, in the secret hope that one of their outputs (research, business, or otherwise) will go viral – indeed, the algorithm for virality in social media and academia might be very similar.

Building on what has already been mentioned, GenAI visions uplift this obsession to an extreme level. Initially, GenAI presents itself as capable of producing the boring aspects of a text that everyone wishes to avoid: the opening and concluding sentences, the formal proofreading and grammatical/syntactical corrections, the angle ideation, and short explanations of relatively common-sense knowledge. Supposedly, then, it saves time which, in theory, can be used for leisure (of course, in academia the concept of “leisure” is very controversial and means different things to different people – should we prohibit the consumption of an academic text while on holiday? For one, we do not prohibit the academic study of leisure, especially if it attracts big grants). Upon closer inspection, no time is saved at all, especially for those on precarious temporary contracts, with student loans, in need of a promotion, or facing scholarship deadlines.
What Generative AI’s time-efficient output may do is increase the amount of content produced within an unchanged timetable (the contracted hours, or the time prior to entering the job market). Students and teachers become mere data nodes, constantly producing text to feed the technical and the social machinery. This aligns with the historical trajectory of technological advancements that have progressively extended the volume and precision of bureaucratic production and control while intensifying intellectual labour within the same time interval. And the final Marx quote for the day:
“The shortening of the hours of labour creates, to begin with, the subjective conditions for the condensation of labour, by enabling the workman [workperson] to exert more strength in a given time. So soon as that shortening becomes compulsory, machinery becomes in the hands of capital the objective means, systematically employed for squeezing out more labour in a given time” (Marx 2013: 285).
Endless self-tracking and performance optimisation, powered by GenAI and sustained by social media metrics culture, thus turns the academic journey into a frantic dash for better numbers. While Generative AI claims to offer personalised feedback and guidance, it amplifies the anxiety around self-improvement and around harmonising output within “acceptable” frames. Students and staff focus more on meeting target numbers than on engaging deeply with their work (or others’), mirroring the constant self-optimisation driven by social media feedback loops. If, as academics, we also think of ourselves as activists, not merely observing but influencing the politics of what we study (for some, this is inevitable anyway, for we cannot suppress our influence – the question is whether we admit it and what we do with it), we should consider how the infrastructures that oppress the social groups we wish to defend are entrenched through the criteria we develop and use to measure success and failure. Words like “success,” “failure,” “impact,” “assessment,” “measurement,” “mark,” “rank,” or “grade” carry legacies of phallogocentrism (the internet is still replete with videos of males measuring their manhood in toilets), imperialism and colonialism, military hierarchy and operationalism, and nonhuman and human enslavement (marked on the flesh by branding iron to this day). In this metrics-driven landscape (where “data-driven” is but a euphemism), academia risks becoming a parody of itself. Here, surveillance, commodification, and self-quantification dominate, supported by a broader culture of social media views and reactions that enables thinking as reductionist as the nine emotions featured in the recent film Inside Out 2.
Generative AI, the latest instalment in the history of automated education, intensifies these trends, aiming to squeeze more surplus profit out of education and research, which in turn exacerbates an aesthetic of safe and acceptable writing already established in academic circles. This, in turn, normalises a degenerative culture of unimaginative repetition. Hence, I prefer to call it ‘Degenerative AI’. My rant is over. I leave you with the following song about numbers from the 1969 season of Sesame Street, composed by Denny Zeitlin and featuring vocals by Grace Slick of Jefferson Airplane: https://www.youtube.com/watch?v=G5stWhPNyec

References for Parts 1, 2 and 3

Andreski, S. (1973). Social sciences as sorcery. New York: St. Martin’s Press.
Archer, M. (2024). Unsustainable: Measurement, reporting, and the limits of corporate sustainability. NYU Press.
Cixous, H. (1974). Prénoms de personne. Paris: Seuil.
Cixous, H. (1994). The Hélène Cixous reader (S. Sellers, Ed.). Routledge.
Derrida, J. (1979). Spurs: Nietzsche’s styles. University of Chicago Press.
Marx, K. (2013). Capital: A critical analysis of capitalist production (S. Moore, E. Aveling, & E. Untermann, Trans.). Wordsworth.
Zuboff, S. (2022). Surveillance capitalism or democracy? The death match of institutional orders and the politics of knowledge in our information civilization. Organization Theory, 3(3).

Vasileios Galanos

Dr Vassilis Galanos, SFHEA is a visitor at the Edinburgh College of Art and works as Lecturer in Digital Work at the University of Stirling. Vassilis investigates the historico-sociological underpinnings of AI and internet technologies, and how expertise and expectations are negotiated in these domains. Recent collaborations have involved the history of AI at Edinburgh, interrogations of generative AI in journalism (BRAID UK), artist-data scientist interactions (The New Real), and community-led regeneration interfacing with data-driven innovation (Data Civics). Vassilis has co-founded the AI Ethics & Society research group and the History and Philosophy of Computing’s (HaPoC) Working Group on Data Sharing, and acts as Associate Editor of Technology Analysis & Strategic Management.