Speculative methods are a creative and innovative approach to exploring and understanding possible futures. Unlike traditional forecasting or predictive methods, speculative methods do not aim to predict specific outcomes but instead focus on envisioning various potential futures to better prepare for uncertainty and change. 

Summary: ‘Speculative Methods’, Possible Futures Research Methods, EFI KIPP resource

The Narrative Futures course is, in many ways, not quite what I expected.

Arriving several weeks late and stepping straight into the two-day intensive teaching block for the ‘World of Story’ module, I found myself wishing I had paid less attention to the reading lists when trying to build a picture of the course, and more attention to the assessment criteria — for herein lie the clues to the underlying ethos of what we are being taught.

‘AI and Storytelling’ is assessed on the quality and creativity of our own AI-assisted storymaking; ‘Text Remix’ on (as I understand it) the artistic merit of experimental Dadaist reconfigurations of text; ‘Religious Identity Through Story’ on the act of retelling a religious story in a new form.

‘Speculative Fiction’ I had already guessed would be at least partly assessed in this way, but even that I had expected to be more about the social functions of spec fic, perhaps exploring historical case studies of its impact (or, plausibly, utter lack of impact) on mainstream thought or scientific direction — in short, to be largely about investigating what spec fic does, rather than how to do it ourselves. I haven’t yet taken the module, but based on the introductory session it seems likely to skew heavily towards creative practice. We will only be assessed on the piece of creative writing we produce — though, of course, creating a successful story does require a deep understanding of the genre.

The more investigation-oriented modules (I’m using the term ‘modules’ so that ‘course’ can unambiguously refer to ‘Narrative Futures’) are marked on very different criteria. ‘Insights Through Data’, for example, expects us to do pretty much exactly what it says on the tin. ‘Narratives of Digital Capitalism’ is about investigating the impact of digital platforms on society. ‘Mental Health in the Anthropocene’, ‘Narrative and Computational Text Analysis’, ‘Migration and Forced Displacement in a Digital Age’… naively, there seem to be more outwardly verifiable ways of determining whether what we do in these modules is, in some sense, ‘correct’; whether our work reflects an accurate grasp of the way the world is.

Like any taxonomy, this distinction doesn’t stand up to scrutiny. After all, isn’t it one of the key ideas of the course that we’re always telling stories about our data — choosing which statistical operations to perform, which results to pay attention to, how to describe what the results mean? On the flip side, surely it will take creativity to conceive of effective AI governance policies, however ‘truth-seeking’ modules like ‘Data and Artificial Intelligence Ethics, Law and Governance’ are? The more I look into the various modules, the more it seems that even the more scientific electives are assessed on our operationalisation of the content knowledge, rather than just our understanding of it: how ‘effective’ was the thing we made? Did it communicate not just truthfully but well?

Correspondingly, perhaps these creative modules are meant to be experiments in operationalising our knowledge of the world, but with a greater focus on the medium of communication than on any particular content. Fiction, art, and storytelling are no doubt essential for what you might in a corporate setting call “comms”; there are many ways to communicate about the world beyond just straightforward news reporting. This is also what we’re trying to do when representing data, for instance.

But I still think the distinction points to something real — to different philosophical approaches, perhaps, or different intentions. Nobody doubts that we’re all here to try to build a better future, but how to carve up this broad, abstract goal into near-term learning and practice is something that might, like the course modules, fall into two rough camps:

The first was described to me by a coursemate as akin to what computer scientists call a “greedy algorithm”. When seeking an endpoint (here: a good future) across a complex map, a greedy algorithm looks at the range of immediate steps and makes a choice based on the comparative merits of these near-term options. It then iterates on this to proceed across the map, reassessing at each stage, relying on its readings of the proximate terrain at each location. This feels somewhat analogous to the more truth-seeking modules — understanding how to read data, what’s going on in the world right now, the immediate risks of emerging technologies, present-day infractions upon our privacy and what they expose us to, how the situation might worsen.
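For readers unfamiliar with the term, the coursemate’s analogy can be made concrete with a small sketch. This is purely illustrative — the grid values, function name, and starting point are all invented — but it shows the defining behaviour: the walker consults only its immediate neighbours at each step, and so can settle on a local peak while a far better region of the map goes unexplored.

```python
# Toy illustration of a greedy search across a map: at each step the
# walker inspects only adjacent cells and moves to whichever looks best
# right now, with no model of the terrain beyond that.

def greedy_walk(grid, start):
    """Follow locally-best moves until no neighbour improves on the current cell."""
    rows, cols = len(grid), len(grid[0])
    pos = start
    path = [pos]
    while True:
        r, c = pos
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        # "Reassessing at each stage": pick the best-looking adjacent cell.
        best = max(neighbours, key=lambda p: grid[p[0]][p[1]])
        if grid[best[0]][best[1]] <= grid[r][c]:
            return path  # no immediate improvement: stop (possibly a mere local peak)
        pos = best
        path.append(pos)

# Invented terrain: higher numbers = "better" near-term readings.
terrain = [
    [1, 2, 3, 2],
    [2, 3, 9, 1],   # a local peak at (1, 2)...
    [1, 2, 4, 2],
    [0, 1, 2, 20],  # ...that hides the true best cell at (3, 3)
]
print(greedy_walk(terrain, (0, 0)))  # climbs to (1, 2) and stops there
```

The walker here ends at the value-9 cell and never reaches the value-20 one — which is exactly the trade-off of the “greedy” camp: reliable local progress, with no guarantee it leads to the best reachable future.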

The second approach feels more like working backwards — setting our sights on a good future (or one particular element of a good future), and postulating some of the steps we might need to take along the way, even if this leaves our immediate next moves unclear. ‘Setting our sights’ might be the core principle here, at least for many of the modules in question — envisioning a future, before we set about creating it. It takes creativity to imagine a better world — whether you’re doing that visually, textually, or in any other form. After all, “…speculative methods do not aim to predict specific outcomes but instead focus on envisioning various potential futures.”

The first two-day intensive, then, was an exploration of creative practice. Six different artistic forms, six sessions that started with questions as basic as: “What is a play?” We were given examples of these forms to play with — even treated to a live performance of oral storytelling, featuring song as well as speech. The intention, perhaps, was to give us a rudimentary toolkit with which we could start making art that informs, inspires, and influences society in some way.

“I heard from a friend that the first bit of teaching for the Narrative people was somebody singing to them.”

I’ve got to be honest, this stung a little bit.

What I’m wrestling with here is my inability to reconcile the creativity-assessment approach with my conviction that to become an artistic practitioner with any kind of audience or influence, you have to go as deeply into one medium as you can — preferably starting in childhood and not stopping for anything. Since this course was never going to make us into professional speculative fiction writers, I had assumed there would be a greater emphasis on analysing and understanding speculative fiction than on creating it (ditto visual narratives, AI-based storytelling, interactive digital media…). Practising an art can certainly help with understanding it, but if my goal were primarily to understand, practice would form only a part of that, not the whole.

But why shouldn’t we be envisioning a good future anyway, even if our visions won’t be widely seen? Sure, perhaps individuals do need some capacity to imagine utopian/dystopian futures in order to act well in the present. Perhaps small audiences matter, too. Seeing it all as a lesson in ‘comms’ is a more comfortable way to think about it — but I think the EFI has grander ambitions than this; if the quote is accurate, they’re going to mark us on our speculation as well as our communication.

There’s no problem with this per se, but I guess my intuition is that in practice, the long-term envisioning part of future-making will happen elsewhere, beyond the reach of us mere mortals — by the Elon Musks of the world, by the great media machines, by country-wide cultural shifts, by the incentive landscapes of governments and corporations, and by whatever destructive Malthusian forces are setting the rules of the game.

I’d like to be proven wrong about this. But I can’t get away from the idea that if we’re going to have any impact on the future by learning about story, it will be by seeing these powerful (narrative-driven) forces in operation, understanding their influence, and gently nudging them one way or the other. Our ability to do this seems to depend more on grasping the realities of how the world is now (and how contemporary narratives are affecting/building that) than on putting pen to paper ourselves to conceptualise a nicer way for the future to be. As a result, I can see (inevitably, perhaps) more payoffs for the “greedy” approach than for the “visionary” one, loaded though those terms are.

Perhaps I’m fundamentally misunderstanding what the program is designed to do. Most likely I can make my own designs for it. The EFI has an evident drive to push us beyond investigating a problem, into creating something about the problem (whether that’s an activist’s toolkit or a piece of narrative or a representation of some data), and I think this is admirable and important. But I’m noticing that I want to pay close attention to these two distinct frameworks — not to neglect one of them or the other (though I am certainly tempted to lean more towards the scientific than the creative), but to be aware of the way the tutors will approach their subjects, and to choose my modules wisely. I want my learning to align as closely as possible with my overall intentions for the course.

And it’s with all this in mind that I’m starting to think about my project.

It’s a long way off yet, two years before it begins in earnest, but this gives me the chance to sit with the ideas of the course, to contemplate deeply how I want to act on what I’m learning.

Some of the questions that feel most exciting to me right now are around how storytelling shapes people’s worldviews in the present: how do we frame our information or our data to pull opinion one way or the other? (With the underlying question of: and how can we do this more ethically?) What is the social function of storytelling, and do stories measurably influence action? Do they contain meaningful ethical convictions? If so, are widely-consumed contemporary stories encouraging actions that align with the moral demands of the modern world, or just with those of the ancestral environment? What are the moral demands of the modern world, given the dangers of emerging technologies? What stories did we tell each other at important moments in history — the cold war, for instance — and did these stories change what happened? Who controls which stories are shared and seen in these cases? Are truthful stories mimetically viable? What stories does Moloch tell?

We’re at a key moment of change right now; AI is in the (rapid, unstoppable) process of changing our world fundamentally; advanced biotech is hot on its heels. Globally, the way we describe what’s happening might change the way the future goes. I almost certainly have absolutely no power to change the way the human story unfolds this century; I want to try to understand it, nonetheless.