Prompted by a discussion in Text Remix about the (im)possibility of decentering the human in discussions of AI/machine creativity and writing, I’ve been trying to grapple with the question of how humans’ emotional and empathetic responses relate to our interactions with literature, and how those reactions are impacted when AI or automated processes enter the picture — as part of the writing process, the reading process, or both.

I’ve found myself dissatisfied with the idea of solely exploring AI/tech with the purpose of mimicking human processes as closely as possible. I have an intuitive sense (though I’m still interrogating this idea) that the emotional response generated by machine-writing-mimicking-human-writing can’t quite match up to that engendered by human writing, due to the lack of intent on the part of the machine: Humans have a unique understanding of the meanings of words, and so we create literature with the intention of using and building upon those meanings — through each word itself, but also through symbolism, form, sound, and the emergent meanings of different combinations of words. This leads me to think that, so long as we haven’t developed general AI, the most interesting explorations (from the perspective of creating work that holds resonance for human readers) will come from a partnership between human and machine — or the human using the machine as a tool/medium — because, for now, any work created independently by a machine will lack that crucial meaning and intention. And furthermore, the most exciting possibilities for that partnership, in my view, come not from trying to do what human writers already do, only “the same thing but better,” but rather from considering the unique capabilities and strengths of the machine to create new (or newly inflected) forms of creativity and writing.

This is all very well, but I came upon what seems to be a snag in this line of thinking when I considered the strong emotional reactions that the concepts of AI and machine creativity themselves (as distinct from the products of AI/machine creativity) can provoke in humans. Anecdotally and personally, I’ve noticed a sense of awe, and often fear, when people are confronted by machines that appear to have creativity like that of humans — a sensation, perhaps, bordering on the Romantic sublime, which scholars such as Leo Marx and David Nye have termed the “technological sublime,” and which Vincent Mosco calls the “digital sublime” (the “algorithmic sublime” is a related framing). Others have questioned this construction, noting that algorithms are intrinsically embedded within and a product of culture and society, and furthermore that they are often not as opaque as the popular media would have us believe — they lack the mysterious and transcendent qualities associated with the sublime (Ames). But I’m not sure this argument convinces me. The sublime is about individual experience, and it’s undeniable that many people (myself included) sometimes experience AI or other technology as awe-inspiring, frightening, incomprehensible, and hugely compelling. In Narratives of Digital Capitalism, we discussed how humans have, in some ways, begun to view technology as an object of worship, parallel to the presence of the divine in the Romantic sublime. The crux of it is that AI can produce emergent knowledge and processes that remain disproportionately compelling even to people who have some understanding of the technology — just as the Grand Canyon still evokes existential awe, even for people who understand plate tectonics (as the Romantics did not). So to me, this suggests that any human-machine partnership I imagine must take this compelling quality into consideration — whether to leverage it, how to leverage it, how to create something that captures the sublimity of the unknowable in the machine while maintaining the intentionality of emotional arcs and empathy that human work can create.

As I reflect on this, a few potential ethical challenges are also coming to mind. Most prominently: Given the understanding we do have about algorithms/AI and the biases and harms they can perpetuate, is it responsible to continue to place them within a framework that treats them like nature or like divine constructions — inevitable, ineffable, infallible? Or, rather, is it possible to preserve the undeniably compelling and beautiful aspects of these technologies without perpetuating the equally undeniable harms they enact? As someone who is fascinated by these technologies, I would like to believe there’s a way to accomplish this, rather than having to strip away the wondrous elements entirely. I hope there’s a way to thoughtfully and ethically combine the emotionally compelling aspects of both human and machine creativity — just as the Romantic poets combined the sublimity of the natural world with their own words to create work that still resonates today.


Compost heap:

I’m finding myself thinking a lot about Harrow the Ninth (about which I’m writing a separate blog post, after realizing it was much more than a compost heap idea), as well as about an idea that a friend studying the metaphysics of biology shared with me: that in some ways, processes are more fundamental than things, and therefore that things are conditional upon, and precipitates of, processes. I don’t know that this is always true, but it feels like an important idea for my project, given how much of creativity is process, and equally, how important process is in my attempts to understand AI/algorithmic writing and reading. It makes me wonder what different kinds of things can precipitate from the same process, and what kinds of unique things are precipitated by the processes particular to AI and machines.


Work Cited

Ames, Morgan G. “Deconstructing the Algorithmic Sublime.” Big Data & Society, vol. 5, no. 1, SAGE Publishing, Jan. 2018. https://doi.org/10.1177/2053951718779194.