An apparent paradox: 

We think of machines as foreign and “other.” “Robotic” means stiff and inhuman; it is often easy to distinguish machine from human because machines have an uncanniness, a quality we perceive as unnatural. 

And yet

We put so much effort into anthropomorphizing machines that it is also often easy to mistake them for humans or for real life. 

This, I think, is a somewhat separate phenomenon from machines designed to mimic the products of human processes without themselves being anthropomorphized. DALL-E and other AI image generators exemplify this other category of human adjacency: the images they produce may be indistinguishable from human work, but because of the intrinsic artistic qualities of the images, not because interaction with the tools themselves makes us believe they’re human. 

Social media, on the other hand, provides a particularly pertinent example of the anthropomorphizing tendency in the form of bots. Some bots are designed to interact with humans on social media, in a human mode, and can combine with algorithmic curation to create an echo chamber so effective that users start to believe it’s real life. True, the replicability of human interaction on social media may say more about the nature and limitations of that interaction than about the bots, but they’re nonetheless a good example of attempts to make machines communicate in human modes. Another example is technologies like Siri and Alexa, which often become “she” rather than “it.”
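
To make that feedback loop concrete for myself, here is a deliberately toy sketch of engagement-driven curation. It is entirely my own invention (no real platform’s ranking is this simple, and the posts and topics are hypothetical); the point is only the loop itself: the feed favors what the user has already engaged with, and each new engagement narrows the next round of ranking further.

    # Toy model of engagement-driven curation (hypothetical, illustrative only).
    def rank_feed(posts, liked_topics):
        """Order posts by how many topics they share with past likes."""
        return sorted(posts, key=lambda p: len(p["topics"] & liked_topics), reverse=True)

    posts = [
        {"text": "More cat pictures", "topics": {"cats", "pictures"}},
        {"text": "New climate report", "topics": {"climate"}},
        {"text": "Cats are great", "topics": {"cats"}},
    ]

    liked = {"cats"}
    for post in rank_feed(posts, liked):
        print(post["text"])
        liked |= post["topics"]  # engagement is recorded for the next round of ranking

Run this loop a few times and the climate post sinks toward the bottom while cat content compounds: a crude but legible picture of how a feed converges on what a user already believes.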

The dangers here are self-evident: deception, manipulation, misplaced trust. Could we reduce them if we weren’t so invested in anthropomorphizing, if we could let machines be machines? But at the same time, can we avoid anthropomorphizing, given that humans are social creatures? We are lonely; we’re looking for something we recognize “out there” (in the universe, in digital space, everywhere we don’t fully comprehend). But is it ethically permissible to keep seeking this sort of connection with machines, knowing the harms it causes?

Also, what does “we” really mean here? In one of the insidious narratives of digital capitalism, individuals are held responsible for their own safety and data, when in reality corporations are the driving force behind developing, controlling, and monetizing many of these technologies. Then again, could speculative methods help us envision alternative power structures, in which communities gain genuine connection through these technologies without falling into the dangers they pose under digital capitalism? I was inspired by Lynda Clark’s paper on randomization and grief, which points to the possibility of finding human emotional resonances in machine processes: of, for instance, reading computerized randomization as an expression and exploration of grief (Clark). I would hesitate to advocate entirely rejecting the possibilities presented by emotional connection and identification with machines. 
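
What might that look like mechanically? A digital short story could hold its fragments fixed but surface them in a different order on every reading, so the text resists the forward motion of narrative the way grief resists resolution. A minimal sketch of the idea, invented for illustration rather than drawn from Clark’s actual pieces (the fragments and function are hypothetical):

    import random

    # Invented fragments standing in for the memory-pieces of a grief story.
    fragments = [
        "The kettle she left on the stove.",
        "A voicemail I can't bring myself to delete.",
        "Her handwriting in the margins of a recipe.",
        "The smell of rain on the day of the funeral.",
    ]

    def read_story(seed=None):
        """One 'reading': the same fragments, reshuffled so no two
        encounters with the text are identical, like memory
        resurfacing out of order."""
        rng = random.Random(seed)
        order = list(fragments)  # copy so the source text stays untouched
        rng.shuffle(order)
        return "\n\n".join(order)

    print(read_story())

The machine process here is trivially simple, which is part of what interests me: the emotional resonance lives not in the code’s sophistication but in how we read its randomness.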

In a roundabout way, this all connects back to my Futures Project as I grapple with the question of how readers, characters, and I as the author can or should interact with computers and/or AI. The question isn’t really about whether we can find connection with machines, because it seems clear that we can (at least in certain circumstances). Rather, the question is what we do with that connection, knowing all of the implications — personal and structural — that are attached to it. Moreover, do we conceive of the connection in human terms (bringing machines into our own spaces and worlds and structures), or do we invite ourselves into the worlds and structures native to machines? Is such a thing truly possible, given that all machines are ultimately created by humans? What might a hybrid between the two look like? This all feels deeply connected to my interest in machine writing/reading that goes beyond attempting to replicate human writing/reading as closely as possible on human terms.

This brings up a number of themes that I’d like to consider more deeply in relation to my project:

  • Mirroring
  • Destroying yourself even as you seek to fill a human/fundamental need
  • Searching for companionship and recognition (to know and to be known)
  • Reality/unreality/knowledge/illusion
  • Worlds within worlds (which then makes me think of things like dreams, VR, “mind palaces,” whether constructed by human or machine)

 

Work Cited

Clark, Lynda. “All the Small Things: Depicting the Randomization of Grief in (Digital) Short Fiction.” Short Fiction in Theory and Practice, vol. 12, no. 1, 2022, pp. 7–17.