After the Story Roots for Sustainable Futures intensive, I have continued to reflect on the experience of embodied and in-place storytelling, and realized that I haven’t thought much about the potential implications that hybridization with AI would have for the body and physicality of my scholar character. In particular, AI seems like it could challenge our understandings of the limits of self (and in fact, I’ve been thinking about what “our” means, and the kinds of assumptions I’m making, coming from a western-dominated context). This raises the question of how we can develop anti-colonial and queer understandings of the body/self through engagement with AI — and what kinds of AI futures would enable this, given that current AI technologies are often hostile and discriminatory toward non-normative bodies.

In western popular conceptions, at least, AI seems to often be treated as an intangible idea or a digital presence. The exception is depictions of robots, which are often futuristic or speculative, in contrast to AI technologies currently widespread in the real world; the latter are popularly grouped under an ill-defined umbrella of “AI,” which is as often attached to the idea of a broad societal force/movement as to specific technologies. But in reality, as I’ve seen in courses like Ethical Data Futures, today’s AI has tangible effects on the real world and on people/bodies existing within it. This often manifests through systemic forces that affect eminently physical things like healthcare, housing, police activity, etc. On an individual level, too, the brain is connected to the body (it seems obvious, but I find myself forgetting sometimes), which means that neurological hybridization with AI — with all the drastic restructurings of self, thought, and expression that entails — would necessarily cause physical effects as well. 

I’m particularly interested in what this means for a young person like my protagonist, who would be going through the physical transformation of puberty alongside their AI-driven transformation. Does AI have the effect of prematurely “adult-ing” a human partner? Or, conversely, does it slow human development? Does it de/emphasize certain senses or modes of perception? Does it deemphasize the physical body overall, thus making it difficult for the human to remember basic functions like eating or sleeping? How would such hybridization impact the emotional and/or physical experience of puberty, e.g., learning how to feel at home in your body?

Regarding the boundaries of self, would a human-AI hybrid feel a different relationship to space (physical and digital) or time, given that their consciousness would no longer be understood as entirely internally driven, and would also be uncoupled to some extent from human lifespans/timescales? If the hybridization enabled connection to a network of other intelligences, this would strongly undermine the western conception of individuality (and would therefore have social, cultural, and even economic or governmental implications as well). 

This resonates with a paper by Lagerkvist and colleagues that I recently read as I explored these ideas of AI and selfhood; the authors interrogate exactly what sort of humans are centered in human-centered AI (namely, the “‘liberal humanist subject’—the autonomous, independent, certain economic man” (171)). In addition to addressing the risks, the researchers also point to some ways in which biohacking can be used as a tool of resistance, aiming to protect marginalized communities (177); this makes me wonder how such hybrid bodies and modes of existence could potentially become part of a future of selfhood that radically resists the ways in which AI currently centers certain selves. Lagerkvist and colleagues also argue for “resituating [humans] as embedded in the technological environment,” as “coexisters,” in order to build “an existentially sustainable [society] in which we become human with technologies” (172; 171). This idea of “becoming human with technologies” is exactly what I’m interested in. The researchers assert that it requires an ethics that centers “the existential body—the relational, intimate and frail human being,” while simultaneously embracing expanded understandings of humans’ lived experience as we become “coexisters” (172). They make a particularly interesting point that selfhood is constructed in relation to otherness (a function that is undermined when humans are reduced to their biometric data). All of this makes me think that one of the important strands my story must explore is the new forms of relational identity (with other humans, machines, and the more-than-human) that hybridization would allow, as well as the ways that existing relations would be changed and/or preserved.

Another interesting reading on this topic was by Amaya, who identifies three types of onticity (i.e., existence) for humans in relation to machines and digital technology: “a vulnerable and precarious onticity” for people subject to and shaped by machines (622); an onticity “of the makers and controllers of the technology” (623); and an onticity “that fluctuates between the precarious one and the agentic one” in which “[w]e are with our tools and we are tools” (Amaya argues this is the most common form, particularly in the west) (623). In discussing several books, Amaya observes that technology and computation may be understood as failing to capture reality while simultaneously powerfully affecting reality; conversely, technology may be seen as “the exteriorization of interiority, as the social, political, epistemic, and material manifestation of the personal” (622). Although of course both can be true to varying extents, I think it’ll be important to decide which version predominates in my imagined future — whether AI generally steers humans away from their selves, or whether it can embody selfhood (or whether, perhaps, certain modes of relationality and hybridization might lead to each outcome, depending on the person and context). 

Relating to this last point, Gabora argues that “unless AI programs possess RAF [Reflexively Autocatalytic Foodset-generated] structure, they are merely extensions of our dual level RAF selves, much like tools, or artificial limbs. They are not selves … they are merely tools, and their creativity is an expression of us” (6). This seems to be the case for current AI technology; it is a reflection of selfhood. But if it could embody not just human selves but also its own selfhood — which Gabora posits is a possibility if it can develop “self-generating, self-mending, and self-perpetuating” awareness on both the “somatic (biological)” level and the “mental (cultural)” level (4) — what implications would that have when humans are refigured as “coexisters”? Would a human hybridized with an AI actually form a new kind of self, or would the AI in fact still be a tool of the human? This also brings up the question of the relative power dynamics between the human and AI in such a relationship: If the human self dominates and the AI merges with it, is this truly a new kind of self, or simply an elaboration of the human? Conversely, if the AI dominates and the human merges with it — assuming that the AI does not have a “self” in our current understanding of the term (as Gabora argues is true for current AI technology) — does this mean the human self is lost? Or does the human lend selfhood to the AI, despite being overpowered? And is there any possibility of a truly equal partnership between human and AI, and what would that look like?


Works Cited:

Amaya, Hector. “On human and technological boundaries.” Journal of Communication, vol. 73, 2023, pp. 621–623. https://doi.org/10.1093/joc/jqad034.

Gabora, Liane. “How to Get from ‘AI Tools’ to ‘AI Selves’.” PsyArXiv, 27 Dec. 2022. https://doi.org/10.31234/osf.io/a95gu.

Lagerkvist, Amanda, et al. “Body stakes: an existential ethics of care in living with biometrics and AI.” AI & Society, vol. 39, 2024, pp. 169–181. https://doi.org/10.1007/s00146-022-01550-8.