As I worked on my submission for the Futures Project themes form, I realized that the project idea that I’m most excited about (so far) is a creative piece exploring the processes of reading, writing, and interpretation that could surround a single story that has evolved over time — and, in particular, how these processes evolve in the context of AI, algorithmic reproduction, digital surveillance, etc. 

So far, I’ve envisioned it as a paratextual short story or novelette that revolves around a particular “foundational” story that evolves over the course of an imagined future history. The main character would be a fictional researcher (or perhaps multiple characters — researchers, students, members of different cultures and/or professions), whose marginalia and analyses of the consecutive iterations of the original story themselves become a part of the story. I’m particularly interested in exploring how the characters’ interactions with the story evolve due to the development of technology, and how the evolutions of the story itself may reveal effects that the characters’ conscious reflections wouldn’t necessarily include.

After a recent conversation with my World of Story group members, I’m also considering expanding the piece beyond a traditional written story — perhaps into a dynamically updating and/or interactive website. This might allow me to write my own code to interact with the text, adding a “meta” layer to the thematic explorations in the text itself. There is a certain beauty in the evolution of a story over time as it responds to changing contexts in the real world; is there also beauty in the highly accelerated evolutions of language that AI enables, even if these changes are algorithmically generated rather than existing in a causal relationship with the real world? Or could a different causal relationship become equally relevant? For example, the AI might respond to human-generated paratext, and humans might in turn respond to the AI, creating a version of the “reflexive” cycle that Jane Alexander described in her recorded talk about the Biopolis project — except that the reflexivity would now include not just the author (or author-as-reader) but also an AI “reader,” one that enters the process before any external human reader…

My ideas around this project were certainly influenced by my elective this week, Narratives of Digital Capitalism. In particular, many of the readings and discussions touched on the idea that algorithms and the tech companies that control them have begun deconstructing our “right to the future tense” — our “ability to imagine, intend, promise, and construct a future” — by not only predicting our behavior, but also influencing and steering it through the manipulation of data (Zuboff). I am wondering whether the process of creativity itself may be a way to reclaim agency over our futures, because the kind of synthesis and meaning-making that creativity requires is one frontier that technology has not yet been able to replicate. I was particularly struck by the idea that poetry and other forms of creative writing have become subversive texts because their layers of meaning resist datafication and commodification and make them “suspect” in the eyes of algorithms (Thornton). 

It seems to me that one way to subvert these troubling dynamics is by using AIs and algorithms themselves to create such “suspect” creative language. Then the question becomes whether and how human authors can partner with machines to create the dense and richly symbolic meanings of poetic language, in a way that honors and builds upon the traditions of poetry while also taking advantage of unique algorithmic possibilities. I have a few ideas around that so far:

  • Could algorithms draw out patterns undetectable to the human eye, which humans can then make sense of metaphorically and symbolically in relation to the rest of the text and to the world? For instance, a program or AI model might be able to identify subtle changes in large corpora of language as they evolve over time, or similarities and divergences between the language surrounding various topics. The machine couldn’t interpret these meaningfully, but a human could.
  • Is there a sort of poetry that could be made as a conversation between the most “likely” line (provided by AI) and the poet’s response to that? (This makes me think of Google Poetics, also mentioned in Narratives of Digital Capitalism, where people create “found” poetry from Google’s search suggestions.)
  • Machines can introduce or simulate randomness in ways that humans can’t. Is there a way that introducing randomness to a creative piece could be thematically evocative? The world can certainly seem random at times, and even cruel in its randomness. Is it possible to envision some sort of random, algorithmic kindness? (What would “algorithmic kindness” even look like? Is that reductively anthropomorphizing?)
  • While the form of AI most widely known today (LLMs) tends to move toward an average or most likely response, other algorithms may not — e.g., a program designed to optimize for a particular outcome, which might produce unexpected results by breaking (conscious or subconscious) human conventions in order to reach that outcome. Could machine reading/writing based on this kind of algorithm actually have a destabilizing effect? Could it introduce ambiguity and question our foundational assumptions, perhaps in a way similar to how Jane Alexander describes (human) creative writing opening up ambiguities? How would the objective be communicated to the machine in this context, if it’s something as subjective as “to succeed at reading/writing”?
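To make the first idea above a little more concrete, here is a toy sketch of machine-detectable drift: compare relative word frequencies between an "early" and a "late" corpus and surface the words whose usage shifted most. Both corpora here are invented two-line placeholders, not real data, and the function name is my own.

```python
from collections import Counter

# Toy illustration of machine-detectable drift in a corpus over time.
# Both "corpora" are invented placeholders standing in for real time-sliced text.
EARLY = "the garden was quiet and the letters were written by hand".split()
LATE = "the feed was quiet and the letters were written by machine".split()

def frequency_shift(early, late):
    """Per-word change in relative frequency from the early to the late corpus."""
    f_early, f_late = Counter(early), Counter(late)
    vocab = set(early) | set(late)
    return {w: f_late[w] / len(late) - f_early[w] / len(early) for w in vocab}

shifts = frequency_shift(EARLY, LATE)
# Sort by magnitude of shift, alphabetical tie-break for determinism
movers = sorted(shifts, key=lambda w: (-abs(shifts[w]), w))
print(movers[:4])  # ['feed', 'garden', 'hand', 'machine']
```

The machine only reports that "hand" gave way to "machine" and "garden" to "feed"; making metaphorical sense of that exchange would remain the human reader's work.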
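The "most likely line versus the poet's response" and randomness ideas could be prototyped with something as small as a bigram model. This sketch is a stand-in, not a real LLM: it contrasts the most frequent continuation of a word with a randomly chosen one, over an invented one-line corpus.

```python
import random
from collections import Counter, defaultdict

# Stand-in for an AI's "most likely" continuation: a bigram model over a tiny
# invented corpus, contrasting argmax-style and random continuations.
CORPUS = "the sea remembers the shore and the shore forgets the sea".split()

def build_bigrams(words):
    """Map each word to a Counter of the words that follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(follows, word, mode="likely", rng=None):
    """Return the most frequent follower ('likely') or a random one ('random')."""
    options = follows[word]
    if not options:
        return None
    if mode == "likely":
        return options.most_common(1)[0][0]
    rng = rng or random.Random()
    return rng.choice(sorted(options))

follows = build_bigrams(CORPUS)
print(next_word(follows, "the", "likely"))  # the statistically expected continuation
print(next_word(follows, "the", "random"))  # a roll of the dice instead
```

A human poet could then answer whichever continuation the machine offers, alternating machine lines and human responses.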

Perhaps what unifies these preliminary thoughts is the idea of writing and reading, but not in the way a human would — namely, my curiosity about experimenting with algorithmic writing and reading, not in an attempt to replicate human processes, but to learn different approaches that can nonetheless contribute to the creativity, emotional resonance, agency, and linguistic artistry that I love about (human) creative writing. 

 

Compost heap: additional ideas from Narratives of Digital Capitalism that resonated with me

  • Zuboff:
    • “If the digital future is to be our home, then it is we who must make it so” — emphasizing human claim on the future, and human responsibility for it. In the context of my project, this suggests that in order for future processes of writing/reading to offer a home, they must retain some human element.
    • Describes the Aware Home, an early smart-home research project built on the assumption that the data would belong to the people who live in the home… a reminder that digital tech doesn’t necessarily have to lead to surveillance capitalism, loss of privacy, etc. Good reminder to think beyond today’s normative narratives of digitalization.
  • Thornton:
    • Suggests that, when digitized, language loses its context and its linearity. This makes me think about conscious, human-driven forms and processes that play with or remove the linearity of text… although in that case, they seek to create meaning rather than stripping it. Maybe human/machine collaboration is, once again, critical here.
    • Fleshes out the idea of “language-as-data.” This seems an interesting counterpart to Salter’s concept of “writing-as-art.”
  • Crawford and Paglen: 
    • A couple of their phrases stood out to me, including “archaeology of datasets” (in the context of “excavating” the training sets behind AI systems to understand how they work) and “disappearing datasets” (datasets taken down after being revealed to be problematic in some way). I’m wondering how else disappearing datasets or data archaeology might manifest — whether literally (perhaps in speculative fiction) or figuratively.
  • Noble:
    • Discusses how the keepers of info/knowledge upon whom we rely are changing, from librarians, researchers, etc., to search engines; we trust them to “separate wheat from chaff.” Halavais (qtd. in Noble) says they’ve “become an object of faith,” immune to criticism or appeal (25). There’s a lot to consider around faith, worship, and the ways AIs may seem to represent a new higher power.

 

Works Cited:

Crawford, Kate, and Trevor Paglen. “Excavating AI: The Politics of Images in Machine Learning Training Sets.” Excavating AI, The AI Now Institute, NYU, 19 Sept. 2019, https://excavating.ai. 

Floridi, Luciano. “AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models.” Philosophy & Technology, vol. 36, no. 15, 2023, https://doi.org/10.1007/s13347-023-00621-y. 

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018. 

Thornton, Pip. “geographies of (con)text: language and structure in a digital age.” Computational Culture, no. 6, Nov. 2017, http://computationalculture.net/geographies-of-context-language-and-structure-in-a-digital-age/.

Zuboff, Shoshana. “Home or Exile in the Digital Future.” The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power, Profile Books, 2019.