Week 3: Closing in on a concept

The running theme of my coursework, readings, and reflections during week three was identification and dissociation, othering and empathy. From Dr. Alexander’s discussions of the uncanny in Interdisciplinary Futures to the Edwin Merwin Bot in the Text Remix readings, I drew out a common thread: the power of language and story to render the familiar strange and the strange familiar.

These reflections are going to get very abstract and tangential for a minute, but bear with me. I’m not about to dive headfirst into semiotics; I’d just like to skim the surface as I try to link this all back to the kind of project that could make good use of my interest in data representations, paired with my English-student tendency to interact with everything in the world as if it were a text.

Language assumes empathy. I would not be sitting here now writing any of these words unless I were fairly certain my reader was enough like me to give these character forms roughly the same meaning I give them. Lots of thinkers have modeled this in lots of sophisticated ways, but for my purposes here I’ll just say that the text I’m putting out creates a sort of shared space: I reach out toward some kind of perceived sameness I envision in you, and you reach back through the text toward an assumed anchor point of sameness in me. Language allows us to meet somewhere in the middle. (Much like the physics problems that let you ignore gravity, friction, and every other real-world complication, I’m going to pretend briefly that miscommunication doesn’t exist and that none of the concerns raised by deconstructionists matter, not because I’m actually that optimistic, but just as a learning exercise.)

But language also creates distance. The thread it uses to tie us together has more than a little slack. If I have to explain myself, it is because I can’t trust that I will be understood. If I have to communicate at all, it must mean that something inside of you is different enough from something inside of me that I feel I must engage, correct, or accommodate that difference by speaking. There would be no need for speech in a hive mind.

So what does it mean when we speak or write to ourselves? We do it all the time. I’m doing it now. (Yes, I know someone will have to grade this eventually; my apologies to them for the length of this ramble. But this is a reflective post first and foremost, so I’m mainly writing it to an audience of one.) I’m creating distance inside myself, holding my own thoughts at arm’s length to see what I think of them. Writing helps me see myself as a stranger, de-identifying just far enough to poke and prod and evaluate my ideas in ways that are difficult, if not impossible, without this pretense of language.

I want my project to tackle this odd sensation of talking to oneself as it relates to digital identity. Through a human-centered lens, how can the ways we “write” ourselves into data systems help us gain this perspective? Pushing further, how can posthuman ontologies help us move from viewing digital identities as externalized selves to recognizing them as something entirely other, something whose existence is not fully within our comprehension or under our control? AI and generative language models add yet another step of dissociation from our digital representations: what happens when the versions of ourselves we send out into the world start talking back?

Here’s an excerpt from some reflecting I did on the Merwin Bot video in Text Remix:

I’m not sure that generative poetry can actually decenter the human in quite the way the video describes. After all, the algorithms are coded in programming languages written by humans, executed on hardware built to model human logic, and then trained on human language, which is also their output. However, there are certainly aspects of nonhuman agency and otherness that shine through the cracks of this human constructedness; the difficulty of tracing the “thought process” from input to output through a neural network is one instance of this. But we are still forcing the computers to speak in our language, then treating the output as if the machines were speaking to us. To truly decenter the human and acknowledge the other would mean that we could not apply language, as humans understand it, to our understanding of computers; we could not force any of our frameworks onto their existence. In short, we could not empathize, could not find common ground. Genuine deanthropocentrism, if it were possible, would leave us devoid of empathy, unable to acknowledge anything except difference, isolated within an utterly incomprehensible universe.

So instead I choose to see the process of engaging in computer language experiments as an outpouring of empathy: something utterly human, yet something that reaches out to an other, even if we can only try to understand that other on our own terms. We must, of course, acknowledge that there are aspects of the other which will never be intelligible in our language or our understanding, though there may be moments of almost seeing past ourselves, when we confront the enormity of that which we cannot understand (like the Romantic notion of the sublime: that sense of encroaching on something so completely beyond ourselves that we are overcome with terror, delight, and awe).

These are the sorts of questions that keep coming up in my mind: how can we use computational tools to ‘other’ digital objects in an empathetic way? How can they help us see the gap between us and them not as a flaw in representation but as an opportunity to explore and appreciate difference? Such work could, I think, not only help prevent the harms that come from identifying too closely with our “digital identities” but also encourage us to create digital art that does more than merely mimic the analog, recognizing and valuing its particular quirks and affordances.

Now I just need to figure out how to explore this at a reasonable scope for the project and this program. Eventually, I would love to do some really hands-on NLP work to create interactive tools or even installations that let viewers directly experience dissociation from their digital identities: anything from a chatbot fine-tuned on their social media feeds to a much more complicated model that personifies their pervasive data, their targeted ads, basically every part of themselves which exists “out there” on the web. For now, though, I think a smaller-scale proof-of-concept makes more sense. Maybe I can use my own data to experiment with the kinds of models that lend themselves well to these encounters and the kinds of insights they might provide (a rough sketch of what the simplest version could look like is below). Or maybe I could present a fictionalized version of these ideas, based on some corpus of text and data either gathered or produced as part of my research.
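To make that proof-of-concept idea a little more concrete, here is a minimal sketch in Python of what the simplest version might look like: fine-tuning a small language model on an archive of my own posts so a version of “me” can talk back. This is an illustration under assumptions, not something I’ve built; the base model (distilgpt2), the Hugging Face transformers and datasets libraries, the hyperparameters, and the file my_posts.txt (standing in for an exported social media archive) are all placeholders.

# A minimal sketch: fine-tune a small causal language model on a personal
# text archive so a version of "me" can talk back. The file name
# "my_posts.txt" and all hyperparameters are hypothetical placeholders.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # small enough to fine-tune without special hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "my_posts.txt" is a hypothetical export: one social media post per line.
dataset = load_dataset("text", data_files={"train": "my_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="self_bot", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False means plain causal language modeling, not masked prediction
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Ask the estranged "self" a question and let it answer.
prompt = "Tell me something true about yourself."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Even something this small would stage the encounter I’m after: the training data is unmistakably “me,” but the stranger answering the prompt is not, and the gap between the two is exactly the dissociation I want viewers to sit with.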
