Stumbling into eternity

Why IR needs deep future counterfactual thinking

Chernobyl Pripyat exclusion zone by Pedro Moura Pinheiro, licensed under Creative Commons 2.0 Attribution Non-Commercial Share-Alike

Almost thirty years after the world’s worst nuclear disaster to date, work is nearing completion on the 110-metre arch that will seal off Chernobyl’s reactor number four and allow for the removal of the melted nuclear fuel beneath it. The labourers building the arch are working against time: the concrete sarcophagus built to contain the effects of the explosion will reach the end of its expected lifespan in 2016. It might expire even sooner, if the partial collapse of the turbine hall next to the reactor in February 2013 is any indication. The arch itself is only a temporary solution, since there is currently no means of disposing of the waste. According to site manager Philippe Casse, cited in the article, the disposal of the waste “could be done in 50 years’ time. Perhaps there will be the technology to solve the problem then.” In other words, this problem is being delegated to the future and its inhabitants.

The article highlights the staggering temporal challenge that radioactive material poses. Nuclear materials remain radioactive for tens of thousands of years. Yet many of the materials and strategies used to contain them – for instance, the concrete in which reactor four is currently encased – are only effective on vastly shorter timescales.

The 2011 documentary Into Eternity delves into this problem by exploring the world’s first permanent nuclear waste repository: Onkalo in Finland. Hewn out of solid rock, Onkalo is being constructed over decades and built to last 100,000 years. It seems to reflect an encouraging degree of proactivity and future-thinking with regard to the problem of nuclear waste.

But forward-thinking brings its own problems. Humans (even when aided by their most advanced technologies) struggle to think on timescales that reflect the half-lives of radioactive isotopes. The effects of radioactive materials are distributed across the deep future, or what Timothy Morton calls the ‘future future’: a time so distant that it seems beyond the grasp of human cognition and, I shall argue, ethics.

Into Eternity’s director, Michael Madsen, is fascinated by this issue. He frames the documentary as a direct message to beings living thousands of human generations in the future. In a series of interviews with engineers and advisors from the nuclear authorities of Finland and Sweden, he raises some tough questions. For instance, how can contemporary humans prevent distant future generations of humans from entering Onkalo? Can we trust thousands of future generations to transmit warnings about the site, or are we better off encouraging them to forget its location? Even if these future beings can decipher the messages left at the site, will they dismiss them as myth – just as contemporary scientists dismiss runes and other symbols left by previous civilizations? These questions presuppose that the human species will exist long enough to guard these materials until they are no longer dangerous. Given the timescales involved, even this cannot be taken for granted.

Geiger counter by Jayneandd, licensed under Creative Commons 2.0 Attribution

If your ethics are anthro-instrumental, then you can dismiss these problems: if there are no more humans, then who cares what else is harmed by radiation? But let’s assume that other beings do matter – and not only the ones that currently exist, but also the possible beings that may exist in the deep future. This is one way of saying that care for possible futures, and for future possible beings, is an ethical good irrespective of its value to humans as they currently exist. From this viewpoint, even if humans do not exist in the future, something to which we (now) could be ethically attached might be harmed by our actions, and so we should take it into account when pondering different courses of action. All right, then – how can we begin to think in this way?

The respondents in Into Eternity rely on one of the only tools that humans have for projecting into the future with limited or no empirical data: their imaginations. More specifically, they use a technique called future counterfactual reasoning: the act of imagining possible future scenarios and asking what would follow if they occurred.

Future counterfactual thinking is not, generally speaking, an accurate predictor of ‘the’ (that is, one specific) future. Rather, its function is to attune humans to multiple possible futures and to consider how they – or, I would argue, future others – might react in these possible future conditions. As Stephen Weber puts it (in one of a small handful of articles in the IR literature devoted to future counterfactuals), the purpose of this kind of thinking is “to open minds, to raise tough questions about what we think we know, and to suggest unfamiliar or uncomfortable arguments that we had best consider”. He argues that effective future counterfactual scenarios challenge the ‘official futures’ on which analysts and policy-makers rely. They focus our attention on ruptures and discontinuities, apparent anomalies, and catalytic events. For Weber, a good future counterfactual changes the boundary conditions for discussion, making it possible to address what, in the physical sciences, are often called ‘category two problems’. These are problems that exceed the limits of science in its current form – including the now (unfairly) infamous category of ‘unknown unknowns’.

Into Eternity’s interviewees use this form of thinking to ponder the problem of communicating the secrets of Onkalo to future beings. They consider a number of possible scenarios: for example, one in which people eventually return to live around the site of Onkalo; one in which earthquakes or wars destroy the site and its archive; one in which future beings try to open the site deliberately because they value its contents. They also consider the possible outcomes of their attempts to communicate into the distant future. For instance, they ask whether it would be effective to construct a sinister ‘landscape of thorns’ around the site to frighten intruders, or whether a reproduction of Edvard Munch’s ‘The Scream’ would do the trick. They rely on this kind of future counterfactual thinking to make crucial decisions about Onkalo’s future.

Counterfactual thinking is one of the few tools at human disposal for responding to some of the biggest problems we face. But counterfactual thinking remains underdeveloped – and sometimes openly scorned – in international security. In fact, Richard Ned Lebow titled his 2010 book on historical counterfactuals Forbidden Fruit precisely because mainstream IR places this technique somewhere on a continuum from rampant subjectivity to the corruption of scientific knowledge. Even those IR scholars, like Lebow, who engage with counterfactuals do so in a fairly conservative and instrumental way. The vast majority of this literature is devoted to past counterfactuals as a means of challenging theories and explanations of present conditions. This, in turn, is expected to help policy-makers to be more attentive and open-minded in their (near) future strategic actions. Moreover, these authors focus on relatively narrow timeframes (perhaps a few decades, or a century at most). They rely on existing, accessible empirical data and social-scientific methods for collecting it. And within IR discourses, most of the available work on this subject focuses on establishing the plausibility of other possible outcomes of historical events – that is, on the predictive value of counterfactual thinking. This is because counterfactual thinking is usually viewed as a means of improving strategic thinking – for instance, how to prevent (or win) the next war.

Future counterfactuals have also made a small impact on contemporary IR. Some academics use future counterfactuals to inform policy-making, theory-building and teaching. Others run scenario-based workshops in which they brainstorm, for instance, possible outcomes of the Syria crisis by 2018, or the potential use of nuclear weapons in terrorism or inter-state conflict. And as far back as 2001, a group of US think tanks ran a large-scale simulation in which they asked current and former government officials to react to a smallpox outbreak. Indeed, Operation Dark Winter exposed a total lack of preparedness on the part of the relevant agencies: supplies of vaccines were quickly exhausted and the (fictional) medical system collapsed.

But these approaches to counterfactual thinking cannot help very much with the kinds of problems discussed above, which span millennia into the future, often cannot be studied empirically due to their massive timescales, cannot rely on existing knowledge, assumptions or conditions, and cannot be predicted with reasonable accuracy. In fact, the least problematic element is the past-orientation of historical counterfactuals – after all, a past counterfactual simply involves placing oneself in the past and thinking forward into a counterfactual future.

Even the future counterfactual exercises discussed above extend only a short distance into the future (in some cases, only a few years). They do not help us to understand future possible worlds dramatically different from our own. Instead, they focus on very similar versions of existing conditions, with a few minor mutations (despite the fact that complexity theorists, and most proponents of scenario thinking, acknowledge this to be unrealistic in nonlinear systems). In these scenarios, most of what we know today still holds true, and our ways of knowing it are treated as reliable. Crucially, the beings that might be harmed are those that exist now, or in the near future. Finally, these scenarios and counterfactuals are oriented towards informing strategy, not preparing us to face the ethical challenges posed by meta-threats like nuclear disaster.

Does this mean that counterfactual thinking is useless for thinking about harm in the deep future? No, but it does suggest that we need to change dramatically how we do counterfactual thinking. This is not a matter of constructing ‘better’ (in the sense of more plausible or empirically accurate) counterfactual questions and scenarios. Instead, it is a matter of using counterfactual thinking to do different things, several of which deserve to be highlighted.

First, it should help us to break with deterministic understandings of the future, which can lead to a sense of nihilism. For instance, apocalyptic climate discourses give humans the impression that we are mired in a deterministic universe, and that nothing we do can change the situation. This may be true, but in case it is not, it is important to retain a sense of multiple possibilities and contingency, and to explore the range of responses we might make to them. Future counterfactual thinking – particularly approaches that impel us to imagine multiple worlds – can help to achieve this, or at least to orient ourselves towards it.

Second, one of the advantages of counterfactual thinking in general is that it undermines the notion that there is only one possible future. As such, it can help humans to cope better with (and perhaps even embrace) contingency and non-linearity, conditions we do not relish. Simply accustoming ourselves to multiple possible futures, and radically different worlds, can help us to retain (or perhaps to attain) a sense of efficacy, however modest, in the face of extreme uncertainty. This can combat the affective states of nihilism, resentment or depression that might otherwise accompany thinking about meta-threats. It also attunes us to possibilities, not only that our worst nightmares might not happen, but also that other, unknowable futures might exist. Since we cannot know these futures now, we cannot assume with any certainty that they will be either positive or negative, and so we must remain open to a range of possibilities. In a word, deep future counterfactual thinking is conducive to hope, albeit of a tempered kind.

Radiation chamber by Thomas Bougher, licensed under Creative Commons 2.0 Attribution No-Derivs Non-Commercial Generic

Third, deep future counterfactual thinking can help us to imagine multiple possible worlds that may seem extreme, fantastical or horrific to us (for instance, human extinction). This helps to combat what I call futural amnesty, or forgetting the future. Futural amnesty is distinct from denial, for instance of the kind that we find in debates on climate change. Denial is, in one sense, affirmation: it involves acknowledging the possibility of a phenomenon or event, then systematically negating what, to the opposite viewpoint, appear to be its positive features. In contrast, futural amnesty is a deep-seated unwillingness to think, or be confronted by, a possibility that one might otherwise be forced to accept or deny. It is a refusal to recognize things that cannot be fully grasped, an unwillingness to think even the conditions of their unthinkability. Its most frequent refrains are ‘how could we possibly know?’ and ‘let’s not even think about that’.

By appealing to futural amnesty, people let themselves off the ethical hook not only of responding to, but also of imagining situations beyond their grasp. Yet, like amnesty related to the past, its function is to allow humans to ‘get on with life’, to live without the constant presence of horror and enormity. It allows them to draw a line in the near to medium future (perhaps a few generations, or even one’s own lifespan) beyond which they can forget to think, and behind which they can shelter. So futural amnesty is a protective and generous strategy. But it is also one that stops humans from confronting what might be the most important ethical challenges they could face. Future counterfactuals break through futural amnesty and the social taboos that hold it in place, forcing us to imagine the unknowable or unthinkable.

Doing this is, in turn, crucial in helping us consider our responses to such events: what we value, what we might try to protect, and how we can respond to other beings. In other words, future counterfactual thinking is deeply ethical. By imagining the effects of our actions into the deep future, we may start to think about the harms that we might do (unintentionally) not only to known others, but also to unknowable others. And this is not only useful in thinking about future actions and their effects, but also in helping us to realize our effects on currently existing others that are radically different from us. Indeed, good counterfactual thinking will not detract from the value we place on ourselves and other beings now, but rather heighten it, attuning us to ethical challenges both present and (future) future. From this perspective, (deep) future counterfactual thinking is a means of enhancing our ethical sensibilities, confronting our worst nightmares, and trying to remain ethically open in the face of them.

IR needs to develop these aspects of counterfactual thinking, and to make them central to discussions of international ethics. Counterfactual thinking is not scientific, or objective, or empirically robust. It cannot give us predictions or certainty, and it can’t prove that everything will be OK, or tell us how to ensure this. But it can help us to see possibilities, to scope the boundaries of our knowledge, to appreciate the limits of our agency and to expand our ethical sensibilities. In the strategic-instrumental discourses that (still) dominate IR, this may not seem like much of a weapon to wield against meta-threats like nuclear disaster. But it may be all we’ve got.

As the author of the Chernobyl article discussed above states, “every stage of the [arch] project has been a step into the unknown”. Indeed, when we think ethically about meta-threats, we are stumbling into the unknown – quite literally, into eternity – with little to guide us. This goes far beyond what Hannah Arendt called ‘thinking without banisters’: it is thinking without stairs, and perhaps without even a human body to climb them. If future counterfactual thinking can help us even in a modest way to do this, then we should make it a top priority.
