I’ve been doing more research into human-AI co-creation, specifically for creative writing, and have noticed a couple of trends that are somewhat disheartening (from my view as a writer) but also somewhat exciting (in that they present important areas for critical reflection and creativity). The first, which I’ve noted before, is the tendency to use AI simply to mimic or replace parts of the creative process that humans are capable of doing on their own — faster, perhaps, or with more iterations, but essentially doing what humans can do. The second, and more distressing, is the tendency to assume that various parts of the human creative process should be delegated to AI, and that the product will not be harmed by doing so.

As I’ve said before, I believe AI can have a place in the creative process, but as an addition rather than a replacement for human work. I recently listened to an interview between Ezra Klein and Ethan Mollick (author of Co-Intelligence: Living and Working with AI), which elegantly summarized my concerns in this area. They pointed out that, within the creative process or the learning process, the places where our ideas change and develop are the places where we struggle (Klein, “How Should I”). Working through challenges — reading a difficult text, wrestling our ideas into an outline, creating a rough draft from scratch, or revising and refining a draft to say what we really mean — is when we actually figure out what we think. But so much of the research on human-AI co-creation that I’ve been reading starts from the assumption that some of these functions should be delegated to AI.

I was particularly disturbed by Fiialka et al.’s assertion that “AI agents can significantly alleviate the creative writing process, making it more pleasant for the human author” (4). I do not subscribe to the belief that great art/writing requires great suffering. But, equally, I do not believe that making the writing process pleasant is a desirable goal. Thinking hard is not pleasant, but that is where we hone our ideas. To me, Fiialka and colleagues’ paper seems to blindly accept one of the messages/worldviews that the medium of AI sends: namely, that creativity can and should be easy; that efficiency (i.e., a pleasant and quick writing process) is the highest priority; and that anything else is wasting time (Klein, “Will A.I. Break”). By dismissing the work of brainstorming, drafting, or revising as “monotonous” (17), approaches to co-creation like that of Fiialka and colleagues miss the point.

Klein and Mollick discuss a potential antidote to such approaches. They assert that a “good” or productive relationship with AI is not about extracting wisdom from the AI, but rather about relational interactions with the AI that extract greater wisdom from the user — which feels much more in line with my own belief that AI should be additive rather than duplicative. As they point out, part of what makes AI revolutionary is that it interacts with humans in a “human-like” manner, unlike anything the internet could previously provide. In this context, they suggest that users should always keep in mind that they need to be in relationship with AI, and therefore that the ways in which users customize AIs for that relationship are crucial.

It struck me, though, that Klein’s conversation with another guest, Nilay Patel, complicates Mollick’s embrace of the relational capabilities of AI. Klein and Patel discuss whether humans intrinsically need relationships whose outcomes depend on the relational effort they put in — e.g., the effort of keeping and supporting a friend, which is quite different from simply customizing an AI so that it maintains the specific kind of dialogical dynamic the user wants.

All of this seems to indicate that perhaps there is something about the illusion or appearance of a relationship that allows AI to help spark human creativity, without requiring a “real” human relationship. Much as I questioned whether taking the work out of creative writing is desirable, I would also question whether taking the work out of human relationships is desirable. And, as always, this makes me wonder whether the AI is actually adding something new, or whether it’s simply taking the place of a human interlocutor who could do the same work.

At this point, I still feel that my reading and research have not brought me closer to understanding what an ethical and creatively interesting (to me) human-AI collaboration would look like (though I have solidified some of my opinions about what it should not look like). I still find myself more drawn to non-AI computational tools for creative work. This inclination, though, made me reflect on another part of Klein’s conversation with Mollick, in which they claim that science fiction has let us down in its portrayals of AI. Classic sci-fi AIs are good at math and computation, but the AIs of our current world are actually much better at creativity (by all of our commonly used measures). This mismatch between expectations and reality has contributed to many misconceptions about AI and its strengths and weaknesses, and to poor intuition about them (at least among laypeople and casual users). Listening to this podcast gave me a new overarching goal for my project: to write sci-fi about AI that does not let us down in this way.


Works Cited:

Fiialka, Svitlana, et al. “The Use of ChatGPT in Creative Writing Assistance.” XLinguae, vol. 17, no. 1, 2024, pp. 3–19, https://doi.org/10.18355/XL.2024.17.01.01.

Klein, Ezra, host. “How Should I Be Using A.I. Right Now?” The Ezra Klein Show, The New York Times, 2 Apr. 2024, Apple Podcasts, https://podcasts.apple.com/us/podcast/the-ezra-klein-show/id1548604447?i=1000651164959.

Klein, Ezra, host. “Will A.I. Break the Internet? Or Save It?” The Ezra Klein Show, The New York Times, 5 Apr. 2024, Apple Podcasts, https://podcasts.apple.com/us/podcast/the-ezra-klein-show/id1548604447?i=1000651522107.