One way for us to understand and “document how the neoliberal shift transformed institutions – in ways that facilitated the action of classificatory engines” (Fourcade and Healy, 2013) is to look at “one of the most surveilled groups in history” (Sparkes, 2021), a group continuously ranked by metrics that measure their value: academics. Academics have historically been a privileged group, accredited with cultural and economic capital as well as prestige – even though it is becoming important to recognize how this is changing with the normalization of precarious contracts and the increasing difficulty early career researchers face in finding jobs (Bone et al., 2018; Gallas, 2018; Stringer et al., 2018). Still, because academics are viewed as a relatively privileged group, discussing surveillance in academia may sound like a marginal issue, irrelevant to the debate around the harms of datafication and metrics, whose effects are probably felt most harshly by the less privileged in society. However, the extensive use of and reliance on metrics in academia to shape academics’ life chances can tell us much about how neoliberal forces intersect with technological developments to shape certain aspects of society. 

 

Metrics in context: 

 

The relevance of metrics to academia is deeply tied to the neoliberalization of higher education (HE) that accompanied the move towards mass education, or HE for all. The expansion of HE within a meritocratic framework led to the increased stratification of HE institutions (HEIs) (Grodsky and Jackson, 2009). Increasing stratification was accompanied by the establishment of league tables to position HEIs, ostensibly as a response to public demand for more information about university quality (Dill and Soo, 2005). In this system universities compete in ranking tables to legitimate increased selectivity and attract prospective “better quality” students. Commercial interests become embedded in such ranking competitions as tuition fees rise and universities seek to increase their enrollment rates as a means of profit extraction (Robertson, 2010). 

 

Rankings often depend on the amount and quality of research that a university produces (Dill and Soo, 2005). The recognized quality and value of research is tied to where it is published – in higher or lower impact factor journals – and to how often it and its author are cited (McKiernan et al., 2019). Such measures developed within, and remain embedded in, a commercial context. The journal impact factor (JIF), for instance, developed within a privatized publishing industry in which commercial publishers set the prices for and sell journals to universities (Pirie, 2009) – prices that have risen steeply since the 1970s, in what is referred to as the serials crisis. In this context of rising journal prices, the JIF was established to help libraries “make indexing and publishing decisions” (McKiernan et al., 2019). And while this measure was not meant to inform on the value of the researcher or the article published, it is now increasingly being used “as a proxy measure to rank journals – and, by extension, the articles and authors published in these journals” (McKiernan et al., 2019). 
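For concreteness, the JIF has a simple arithmetic definition: a journal’s impact factor for a given year is the number of citations that year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of this calculation, with invented figures for a hypothetical journal:

```python
# Illustrative sketch of the journal impact factor (JIF) calculation.
# All figures below are invented for illustration only.

def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """JIF for year Y = citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical journal: 600 citations in 2021 to its 2019-2020 output
# of 200 citable items gives a JIF of 3.0.
print(journal_impact_factor(600, 200))  # → 3.0
```

Note how crude the measure is: a single average over a journal’s whole output, which says nothing about any individual article or author, yet it is routinely read as if it did.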

 

From rankings to quantifying researchers: 

 

Alongside the JIF, which quantifies the impact of research, developed the h-index, which quantifies the value of researchers themselves – a metric also emphasized by universities aiming to better their place in the rankings. In an autoethnography, Andrew Sparkes (2021) reflects on his experiences of departmental meetings that encouraged researchers to make “spectacles of themselves” to increase their citations and h-indices. The value of the academic, which affects their contract and job opportunities within HEIs, is determined by endless metrics that calculate their worth based on the frequency of their publications and citations. Academia hence starts to reduce people to a score, metrically placing academics “in a hierarchy of status according to apparently ‘objective’ managerial determinations of individual success and value to institutional prestige” (Sparkes, 2021). An audit culture, the product of the intersection between privatization and digitization, neoliberalizes academia by enforcing a condition in which “values of accountancy become a central organizing principle in the governance and management” of all aspects of the public sphere, including academia (Shore, 2008, p. 279). 
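As a brief aside on the mechanics: a researcher’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch, with invented citation counts:

```python
# Illustrative sketch of the h-index: the largest h such that
# h of a researcher's papers each have at least h citations.
# The citation counts below are invented for illustration only.

def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has at least `rank` citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: the 4th-ranked paper has
# 4 citations (>= 4), but the 5th has only 3 (< 5), so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The reduction Sparkes describes is visible in the code itself: an entire body of work, whatever its content, collapses into a single integer that can be ranked and compared.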

 

Examining the rising centrality of these metrics in shaping the life chances of academics, and in shaping the meaning of academia itself, shows that the institutional home of many critics of dataveillance is not immune to this process. Metrics such as the JIF and the h-index classify researchers and shape their life chances. Furthermore, these metrics enact a particular meaning of academia and maintain an HE system that is highly privatized, expensive, and exploitative. The quantification of the value of research and researchers only feeds and legitimates the mechanics of a privatized system. Metrics used in this context reify certain norms, set standards that enforce conformity, and change the meaning and purpose of research (Beer, 2016). Metrics influence research itself as researchers become the objects of quantified measures of impact. The values perpetuated by such measures become “inscribed in social practices thereby influencing the self-understanding of a practice and its role in society” (Schwandt, 2015, as cited in Sparkes, 2021). As research adapts to apparently objective standards of what is impactful and what is not, it becomes the object of the metric, rather than the other way around. 

 

Resisting data-ist values: 

 

The value of research and researchers cannot be quantified and measured according to standards set by commercial interests, and being forced to provide proof of this seems absurd. I can speak to the danger of such measures from my own experience. The researchers and lecturers at the sociology department where I did my undergraduate degree were immensely valuable to my developing a sociological imagination; I am able to write this blog in part because of the analytical tools the department equipped me with. I recently discovered that the department had apparently been in crisis: its place in the rankings was dropping and its staff had low citation counts. How often the people in the department published or were cited meant little for my learning experience. On the contrary, an extensive focus on research could plausibly reduce the time and effort spent on teaching. Of course, for a university hoping to climb the ranking tables, what students or lecturers viewed as valuable effort did not seem to matter. The metrics had more power in determining how valuable the work being done in the department was. 

 

It is crucial for academics to resist the quantification of selves within academia, as a starting point for critiquing other spheres being subjugated to datafication, the hegemony of metrics, and the rise of a dataism devoid of truth (Han, 2017). I emphasize this point because of the issue that triggered the writing of this blog. Recently, Clarivate, an analytics company responsible for calculating and publishing metrics relevant to academia, published a list of the “highly cited researchers of 2021.” On Twitter, numerous researchers who were on the list retweeted the link and expressed their gratitude and pride at being featured. Such tweets, especially by academics who have themselves at times criticized the quantification of selves, failed to see the problem with such lists and the systems of datafication they legitimate. This is particularly worrying for imagining resistance against the hegemony of such ideas. If the group best equipped with knowledge of the harms of datafication does not itself resist the workings of such a system, or even embraces its idea of merit, then critiques become hypocritical, apolitical publications that only serve the very system they criticize. Of course, resistance is easier said than done, and academics often have little agency, but a critical view of the competitive ethos fostered by highly cited lists, and an attempt to educate future generations of scholars, who often have very little knowledge of this issue, can go a long way. 

 

Bibliography: 

Beer, D., 2016. Metric Power. London: Palgrave Macmillan.

Bone, K., Jack, G. and Mayson, S., 2018. Negotiating the Greedy Institution: A typology of the lived experiences of young, precarious academic workers. Labour & Industry: a journal of the social and economic relations of work, 28(4), pp.225-243.

Dill, D.D. and Soo, M., 2005. Academic Quality, League Tables, and Public Policy: A cross-national analysis of university ranking systems. Higher education, 49(4), pp.495-533.

Fourcade, M. and Healy, K., 2013. Classification Situations: Life-chances in the neoliberal era. Accounting, Organizations and Society, 38(8), pp.559-572.

Gallas, A., 2018. Introduction: The proliferation of precarious labour in academia. Global Labour Journal, 9(1).

Grodsky, E. and Jackson, E., 2009. Social Stratification in Higher Education. Teachers College Record, 111(10), pp.2347-2384.

McKiernan, E.C., Schimanski, L.A., Nieves, C.M., Matthias, L., Niles, M.T. and Alperin, J.P., 2019. Meta-research: use of the journal impact factor in academic review, promotion, and tenure evaluations. Elife, 8, p.e47338.

Pirie, I., 2009. The Political Economy of Academic Publishing. Historical Materialism, 17(3), pp.31-60.

Robertson, S.L., 2010. Corporatisation, Competitiveness, Commercialisation: New logics in the globalising of UK higher education. Globalisation, Societies and Education, 8(2), pp.191-203.

Shore, C., 2008. Audit Culture and Illiberal Governance: Universities and the politics of accountability. Anthropological theory, 8(3), pp.278-298.

Sparkes, A.C., 2021. Making a Spectacle of Oneself in the Academy Using the H-Index: From Becoming an Artificial Person to Laughing at Absurdities. Qualitative Inquiry, p.10778004211003519.

Stringer, R., Smith, D., Spronken-Smith, R. and Wilson, C., 2018. “My Entire Career Has Been Fixed Term”: Gender and precarious academic employment at a New Zealand university. New Zealand Sociology, 33(2), pp.169-201.
