The readings for Ethical Data Futures have been challenging my approach to tech and AI in my Futures Project. In particular, the concept of “refusal” or “informed refusal,” addressed by scholars such as Ruha Benjamin, has pushed my thinking on likely and possible futures for these technologies. 

I struggled with the idea of refusal at first, because it seemed to me that while fighting or rejecting an inevitable change in society might be effective in the short term as a form of necessary resistance, it would ultimately lead only to entrenched inequities as those who embraced the technology flourished in the new economic and social reality developing around it. Essentially, the idea of rejection as a helpful and generative practice — particularly rejection of something that, in my view, has many potential benefits, albeit alongside many harms — felt intensely counterintuitive to me. But I came to see, for two reasons, that this way of thinking is limited.

First, of course, the idea of constant, inevitable technological “progress” is a colonialist narrative promoted by the tech industry and those whose interests align with it; this sort of “progress” is not, in fact, inevitable if we push ourselves to imagine and build beyond it. Although I was previously aware of this insidious narrative, I hadn’t realized that I too had fallen prey to it in my conception of the futures in my project: I was framing the question as, “In a future where AI is more powerful and pervasive, how do humans live and grow with/alongside it?” — without truly considering that choosing not to live with it would be not only a real possibility, but one that opens new futures rather than simply closing them off.

Second, I came to understand that refusal does not mean wholesale rejection of an entire swathe of technology like “AI,” but rather informed, targeted rejection of particular instances of the technology that uphold systems of oppression. Essentially, it means refusing the myth that improving the accuracy and fairness of a technology is ever sufficient when the technology itself rests on inherently unjust societal foundations that must be dismantled. This nuance helps to show how refusal can be not only negative/destructive but also generative. Benjamin explains that we “need to institutionalize informed refusal rather than leave it to already vulnerable individuals to question those in authority” (971). In a society where refusal is respected in this way as a ubiquitous right, it can serve as a signal to the community that something deeper is wrong, that surface-level fixes to a particular technology are not addressing people’s real needs. From there, systems could be implemented to investigate and address those deeper needs.

This has two important implications for my project. First, it opens the possibility of different modes of self-determination for individuals and individuals-within-communities, and implies different communal decision-making structures and priorities. The Ethical Data Futures reading suggests that collective decision-making, particularly in the context of decisions about high-stakes algorithms, doesn’t need to look like our current systems of Western democracy (Robinson 33). AI is so often envisioned — whether in utopian or dystopian tellings — as part of a globalized system of technologically driven power that undergirds economics, politics, and society. But the idea of refusal and its community orientation suggest that a globalized future (much like a technologically saturated future) is not inevitable. Here, I’m thinking of A Psalm for the Wild-Built by Becky Chambers, which strikes me as a hopeful and caring example of community-centered futures in the context of significant shifts in humans’ relationship with technology. Inspired by Chambers’s example, I’m considering small, re-localized communities as an alternative to tech-driven globalization. In this context, AI and other technologies could still fulfill the functions of specialization, automation, and connection that are often touted as their strengths. But instead of specializing and automating to enable ever more powerful globalized hegemonies, and instead of connecting in service of globalization, they could specialize for community contexts and help people connect with those in their space and with the space/environment itself.

(Tangentially, this approach makes me wonder what the traces/fossils/skeleton of the globalized past would look like in this kind of society, and whether they could offer some kind of organizing principle for the interactive elements of my piece — something to consider in future posts.)

In addition, the refusal framework seems to suggest one potential, hopeful future direction for AI, big data, and the technologies that we are currently struggling to integrate fairly into society (or to decide where not to integrate at all). I may use my piece to envision a future in which strong refusals of certain technologies or use cases — combined, of course, with other social/economic/political forces — have catalyzed a shift, bringing about a reorganization of society and significant changes in systems that have long needed to change. I’m not interested in envisioning a utopia in which technology has solved all problems, indirectly or otherwise, so the ongoing conflicts and unrest that would almost inevitably stem from such a seismic social shift would still play a role in my imagined future. There would still be refusals pointing to problems in society, but society would have evolved to listen to them and internalize them.

One final thought, for now, about refusal: I wonder whether one of the ideas it suggests is that dynamic checks and balances between technology and human decision-making — rather than humans entirely rejecting technology — are a critical part of an equitable future for AI. Bias is inevitable wherever humans, or human-designed technologies, are involved, but if we can balance different types of bias by thoughtfully integrating human and machine decision-making, perhaps this is a path toward fair and sustainable uses of technology. Of course, the question is always, “Fair for whom? Sustainable for whom?” And, as Robinson points out, an important part of all of this is leaving room for the answer not to be an algorithm, by increasing focus on “[t]he way a problem and its solutions are imagined” as the very first step of the process (51).


Works Cited:

Benjamin, Ruha. “Informed Refusal: Toward a Justice-Based Bioethics.” Science, Technology, & Human Values, vol. 41, no. 6, Aug. 2016, pp. 967–90, https://doi.org/10.1177/0162243916656059.

Chambers, Becky. A Psalm for the Wild-Built. Tordotcom, 2021.

Robinson, David G. Voices in the Code. Russell Sage Foundation, 2022.
