It’s the metrics, not the Matrix, part 3: Degenerative AI

Image credit: A blend of previous posts’ images with Karl Marx escaping the (Medieval) Metrics Matrix – generated using DALL-E and mixed with Photopea by the author and numerous unacknowledged art and data workers.

In this post, Dr Vassilis Galanos continues his exploration of metrics, arguing that the passive acceptance of a metrics-oriented culture is what feeds, establishes, and normalises hype and high adoption rates of Generative Artificial Intelligence (GenAI) machinery. This post is part 3 of 3, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


In the previous two posts (Higher Education State Critical and Rigorously Established Fear), I argued that, in Marxian terms, the surplus value generated from intellectual labour in academia goes towards enhancing the institution’s reputation and funding. This shift is slowly swapping the map (numerical indices) for the territory (the learning and teaching experience), so that the latter is used to navigate the former, instead of the opposite. In this post, I argue that the passive acceptance of this metrics-oriented culture is what feeds, establishes, and normalises hype and high adoption rates of Generative Artificial Intelligence (GenAI) machinery, such as OpenAI’s ChatGPT, Microsoft’s Copilot, or Anthropic’s Claude.

Here is a fast-forward historical recap of what preceded the emergence of GenAI. Student grades, attendance, and staff citations fuel the academic-industrial complex by fostering connections with businesses ready to absorb highly-graded students while partnering with high-ranking research initiatives. This relationship turns human intellect into marketable products, and it is mutually parasitical: industry wishes to present an outward-facing scientific and moral high ground with the academy’s approval, while the academy wishes to benefit from industry collaborations that increase revenue and prestige and can be presented as societal impact.

This process is reminiscent of the commodification trends of industrial capitalism, where labour was quantified and valued based on its contribution to profit. Social media metrics further entrench this commodification, transforming intellectual achievements into social capital. Shoshana Zuboff calls this “behavioural surplus” – a term that might as well fit the academic landscape. Students and staff self-regulate, ever aware of the metrics that loom over them, dictating their academic behaviour. This is the darker side of the notion of the quantified self: the vision in which people would continuously track themselves in order to optimise their performance (and well-being), which fell into decay once the profit-driven motives of the self-tracking industries were sufficiently experienced – from health apps to selfie sharing, in most cases training customised advertising algorithms and facial recognition software, often used for military and policing purposes.

This self-tracking culture fits perfectly into academia’s metrics obsession. Students monitor their grade point average (GPA) like investors tracking stocks, and researchers obsess over their h-index scores as if anxiously awaiting show reviews. While waiting for these longer-term affirmations, they both gain temporary satisfaction through social media interactions, in the secret hope that one of their outputs (research, business, or otherwise) will go viral – indeed, the algorithm for virality in social media and academia might be very similar.

From what has already been mentioned, GenAI visions lift this obsession to an extreme. Initially, GenAI presents itself as capable of producing the boring aspects of a text that everyone wishes to avoid: the opening and concluding sentences, formal proofreading and grammatical/syntactical corrections, angle ideation, and short explanations of relatively common-sense knowledge. It thus supposedly saves time which, in theory, can be used for leisure (of course, in Academia, the concept of “leisure” is very controversial and means different things to different people – should we prohibit the consumption of an academic text while on holiday? For one, we do not prohibit the academic study of leisure, especially if it attracts big grants).

Upon closer inspection, time is not saved at all, especially for those on precarious temporary contracts, with student loans, in need of a promotion, or facing scholarship deadlines. What Generative AI’s time-efficient output may do is increase the amount of content produced within an unchanged timetable (the contracted hours, or the time prior to entering the job market). Students and teachers become mere data nodes, constantly producing text to feed the technical and the social machinery. This aligns with the historical trajectory of technological advancements that have progressively extended the volume and precision of bureaucratic production and control while intensifying intellectual labour within the same time interval.

And the final Marx quote for the day:

“The shortening of the hours of labour creates, to begin with, the subjective conditions for the condensation of labour, by enabling the workman [workperson] to exert more strength in a given time. So soon as that shortening becomes compulsory, machinery becomes in the hands of capital the objective means, systematically employed for squeezing out more labour in a given time” (Marx 2013: 285).

Endless self-tracking and performance optimisation, powered by GenAI and sustained by social media metrics culture, thus turns the academic journey into a frantic dash for better numbers. While Generative AI claims to offer personalised feedback and guidance, it amplifies the anxiety around self-improvement and harmonising output within “acceptable” frames. Students and staff focus more on meeting target numbers than engaging deeply with their work (or others’), mirroring the constant self-optimisation driven by social media feedback loops.

If, as academics, we also think of ourselves as activists, not merely observing but influencing the politics of what we study (for some, this is inevitable anyway for we cannot suppress our influence – the question is whether we admit it and what we do with it), we should consider how the infrastructures that oppress social groups we wish to defend are entrenched through the criteria we develop and use to measure success and failure.

Words like “success,” “failure,” “impact,” “assessment,” “measurement,” “mark,” “rank,” or “grade” carry legacies of phallogocentrism (the internet is still replete with videos of males measuring their manhood in toilets), imperialism and colonialism, military hierarchy and operationalism, and nonhuman and human enslavement (marked on the flesh by branding iron to this day). In this metrics-driven landscape (where “data-driven” is but a euphemism), academia risks becoming a parody of itself. Here, surveillance, commodification, and self-quantification dominate, supported by a broader culture of social media views and reactions that enables thinking as reductionist as the nine emotions featured in the recent film, Inside Out 2.

Generative AI, the latest instalment in the history of automated education, intensifies these trends, aiming to squeeze more surplus profit out of education and research, which in turn exacerbates an aesthetic of the safe and acceptable writing we have already established in academic circles. This, in turn, normalises a degenerative culture of unimaginative repetition. Hence, I prefer to call it ‘Degenerative AI’.

My rant is over. I am leaving you with the following song about numbers from the 1969 season of Sesame Street, composed by Denny Zeitlin and featuring vocals by Grace Slick of Jefferson Airplane: https://www.youtube.com/watch?v=G5stWhPNyec

References for Part 1, 2 and 3

Andreski, S. (1973). Social sciences as sorcery. New York: St. Martin’s Press.

Archer, M. (2024). Unsustainable: Measurement, Reporting, and the Limits of Corporate Sustainability. NYU Press.

Cixous, H. (1974). Prénoms de personne. Paris: Seuil.

Cixous, H. (1994). The Hélène Cixous Reader. (Susan Sellers, Ed.). Routledge.

Derrida, J. (1979). Spurs: Nietzsche’s styles. University of Chicago Press.

Marx, K. (2013). Capital: A critical analysis of capitalist production (S. Moore, E. Aveling, & E. Untermann, Trans.). Wordsworth.

Zuboff, S. (2022). Surveillance capitalism or democracy? The death match of institutional orders and the politics of knowledge in our information civilization. Organization Theory, 3(3).


Vasileios Galanos

Dr Vassilis Galanos, SFHEA is a visitor at the Edinburgh College of Art and works as Lecturer in Digital Work at the University of Stirling. Vassilis investigates historico-sociological underpinnings of AI and internet technologies, and how expertise and expectations are negotiated in these domains. Recent collaborations involved the history of AI at Edinburgh, interrogations of generative AI in journalism (BRAID UK), artist-data scientist interactions (The New Real), and community-led regeneration interfacing with data-driven innovation (Data Civics). Vassilis has co-founded the AI Ethics & Society research group and the History and Philosophy of Computing’s (HaPoC) Working Group on Data Sharing, also acting as Associate Editor of Technology Analysis and Strategic Management.




It’s the metrics, not the Matrix, part 2: Rigorously Established Fear

Image credit: Karl Marx escaping the Medieval Metrics Matrix – generated using DALL-E by the author and numerous unacknowledged art and data workers.

In this post, Dr Vassilis Galanos continues his exploration of metrics, its place in Higher Education, and the impact of the Research Excellence Framework on our work practices. This post is part 2 of 3, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


In a previous post with Teaching Matters, I wrote about how academic excellence evaluations such as the UK’s Research Excellence Framework (REF), claiming to measure research quality with some kind of objective precision, can foreground the development of digital machinery (such as Generative AI) that is adjustable to the REF’s objective (or better: objectifying) metrics. In this post, continuing the thread from part 1, I will connect the REF to the broader context of student and faculty numerical rankings. The REF, which for many academics also stands for “Rigorously Established Fear”, often ends up fostering a competitive environment where volume trumps substance and impact is staged in wording but often not grounded in practice. As an example, as part of the Edinburgh Futures Institute’s Data Civics Observatory, I encountered the frustration of local communities in Edinburgh who complained about researchers who used their underdeveloped neighbourhoods to justify grant allocations, then disappeared once the project ended.

Niche or curiosity-driven, discipline-questioning endeavours get side-lined while churned-out, quota-meeting research takes centre stage, especially in the context of academic-industry collaboration. Such collaboration is initially phrased as an attempt to open up the world of Academia to the real world but, in practice, it transforms Academia itself into a peculiar type of industry. This mirrors the rise of performance indicators in corporate bureaucracies, which seek to optimise efficiency at the expense of innovation and creativity.

This obsession with optimisation and efficiency further increases the distance between metric-driven reporting as mere symbol and as practical social change (as Matthew Archer recently showed in his 2024 book ‘Unsustainable: Measurement, Reporting, and the Limits of Corporate Sustainability’, or, as Stanislav Andreski beautifully put it in the early 1970s: “evasion in the guise of objectivity”, “quantification as camouflage”, and “techno-totemism and creeping crypto-totalitarianism”).

As an individual progresses up the academic ladder from student to staff, the REF exercise takes the emotional place occupied by the marker’s assessment and staff mentor’s supervision as the higher and sufficiently invisible entity of surveillance. This mirrors Marx’s description of a factory, which, in our case, is the university (my additions in square brackets):

“The technical subordination of the workman [read: worker, but also student, lecturer, professor, etc] to the uniform motion of the instruments of labour [including marking schemes, impact assessments, article production, grant allocation mechanisms], and the peculiar composition of the body of workpeople, consisting as it does of individuals of both sexes and of all ages, give rise to a barrack discipline, which is elaborated into a complete system in the factory [and academia], and which fully develops the before mentioned labour of overlooking, thereby dividing the workpeople into operatives and overlookers, into private soldiers and sergeants of an industrial army. […] The place of the slave-driver’s lash is taken by the overlooker’s book of penalties [including late submission penalties, resits, redundancy of academics who did not produce REFable outcomes, and more]” (Marx 2013: 293).

In the next, and final, post of this three-part series, I will conclude this conversation by situating the emergence of Generative Artificial Intelligence (GenAI) within the afore-described process of metrics-oriented culture.






It’s the metrics, not the matrix: Part 1 – Higher Education State Critical

Image: Karl Marx escaping the Metrics Matrix – generated using DALL-E by the author and numerous unacknowledged art and data workers

In this post, Dr Vassilis Galanos dissects what metrics really mean for students, educators, and researchers in the wider academy. This post is part 1 of 3, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


As the heading suggests, it’s not some Matrix-like virtual reality conspiracy controlling all things academic – it’s the metrics. For about 20 years now, from undergraduate student to Lecturer, I’ve experienced numbers like student grades, attendance monitoring points, seminar participation marks, journal rankings, research excellence frameworks (REF), and citation scores as structural elements we increasingly have to face, understand, and be assessed against. Yet, at the same time, we find ourselves being less outspoken about these metrics and what they mean for our daily lives.

Following a long legacy of bureaucratic solutionism, they’re supposed to streamline and improve academic management and recognition, but often end up reducing the – supposedly – rich, varied experience of academia to a dry set of spreadsheets, impact factor badges, and transcript competitions.

As a person who studies the history of the internet in parallel with artificial intelligence (AI) (and an avid social media user myself, turning my life into an open experiment), I’ve seen the rise of social media metrics like ‘likes’, ‘follows’, and ‘faves’ being established as a “free-for-all” venue for numerical recognition. I have also seen how they further normalise our obsession with numbers, converging with the proliferation of AI and algorithmic technologies to amplify and entrench this metric-driven culture. When you add Generative AI into the mix, the metrics game shifts into hyper-drive with an efficiency that an Orwell-Huxley hybrid couldn’t have predicted.

For the past six months, I’ve spent time with Karl Marx’s Capital, Volume 1, so I decided to dissect what these metrics really mean, using insights from surveillance studies, Marxian economics, and the quantified self, with a nod to the history of numerical classifications from mathematics to economics. To complete the pun: from the Matrix, to metrics, to Marx.

Grades as assessment

Grades are the old standby for assessing students, neatly categorising their efforts and even identities into A, B, C, and “better luck next time.” Or, to use a term co-constructed by Hélène Cixous (1975, 1994: 29) and Jacques Derrida (1979: 97), they encapsulate the education of a phallogocentric system – one that simultaneously serves a masculine (phallocentric) ideal of military rankings and the dominion of rationality (reasoned logic as Logos, that is, logocentric). This creates a linear trajectory in which there is ever less space at the top for winners and the higher-ranked.

Grading turns the wonderfully messy process of learning into bite-sized numbers, much like fast food turns diverse cuisines into generic meals – always with the opportunity to pay a bit more in order to have access to luxurious gastronomy. This simplification often strangles creativity and critical thinking. For the imaginative and divergent thinkers, it’s like being shoved into a production line where only uniformity gets rewarded.

The politics of such numerical simplification traces its roots back to the early applications of mathematics in standardising measurements for trade and commerce as well as military precision. Here’s Marx:

“The division of labour, as carried out in Manufacture, not only simplifies and multiplies the qualitatively different parts of the social collective labourer, but also creates a fixed mathematical relation or ratio which regulates the quantitative extent of those parts […]. It develops, along with the qualitative sub-division of the social labour-process, a quantitative rule and proportionality for that process” (Marx 2013: 241).

The presentation of presence

Attendance records act as the school’s hall monitor, ensuring students physically show up. Digital systems like biometric scans offer precise tracking but also inch dangerously close to a Big Brother type of oversight. This constant scrutiny is more than just checking who’s present – it’s a subtle method of enforcing compliance and cultivating a culture of stress and control. The evolution of such monitoring systems can be linked to the development of bureaucratic systems in the 19th century, which relied on statistical data to manage and control populations. Interestingly, this enforcement of presence, under the fear that attendance is being monitored, is transformed within social media environments into the “fear of missing out” (FOMO).

The presentation of presence as something to compete for offers an interesting parallel between (a) attendance monitoring as part of one’s entertainment/leisure lifestyle, and (b) the joy of education recast as an enforced evil, effected only through attendance supervision. Marx again:

“An industrial army of workmen, under the command of a capitalist, requires, like a real army, officers (managers), and sergeants (foremen, overlookers), who, while the work is being done, command in the name of the capitalist. The work of supervision becomes their established and exclusive function” (Marx 2013: 230).

(Keep in mind that the French word “surveillance” literally translates into “supervision” or “overseeing” – worth considering every time you have a “supervision meeting” with your dissertation supervisor or your line manager).

The power of citations

For faculty, journal rankings and citation metrics are the currency of the academic marketplace (as everyday vocabulary very precisely puts it). Top-tier publications and a heap of citations bring career benefits like tenure and grants. But navigating this numbers game often means playing it safe, avoiding the unconventional or interdisciplinary work that might not score high on the metrics scale. This focus on numeric evaluation echoes the econometric models that gained prominence in the 20th century, emphasising quantifiable data over qualitative insights. As an extension of econometrics, the 20th century saw the evolution of bibliometrics, scientometrics, and informetrics as quantifiable measures of research impact.

Compounding the issue, social media metrics like ‘likes’ and ‘followers’ further normalise academics’ predisposition towards popular, mainstream topics that satisfy the instantaneity of a present-oriented appreciation of science. This is often at the expense of deeper, more substantive inquiries, which extend into the past and future. Indeed, the academic culture of creating ‘tweetable’ abstracts of abstracts (“threads”) beneath an attention-grabbing title meant to be retweeted indicates the time pressure under which scholarly content is produced, disseminated, and consumed – “content” in the recent social media flavour of the word.

In the next part of this Teaching Matters contribution, I will relate the question concerning metrics to the Research Excellence Framework (REF) exercise.

