Category Archives: Technology

Spiked: violence, coloniality and the Anthropocene

This online mini-exhibition is presented in advance of the initiation of the Anthropocene Re-working Group (with Zoe Todd), which will take place at the Conference “Landbody: Indigeneity’s Radical Commitments” at the Centre for 21st Century Studies, Milwaukee, 5-7 May 2016. 

The full text of our presentation is available here: Earth violence text Mitchell and Todd

Since this is a work in progress, please let us know if you would like to reproduce it. For the same reason, all rights are reserved for the use of these images. Contact me if you'd like to share, reproduce or alter them.


Strata by Audra Mitchell. All rights reserved.

 

Since the early 2000s, there has been a scramble amongst scientists to define the boundaries of the 'Anthropocene'. In the rush to mark and claim this era, hundreds of scientists and some social scientists are racing to find a definitive 'golden spike'. The golden spike is a discursive, imagined, yet very real placetime in which scientists intend to drive a stake, claiming the conversion of the Earth into a human dominion. Most notably, the 'Anthropocene Working Group' of the Subcommission on Quaternary Stratigraphy plans to announce this year where/when the spike should be driven. It will choose amongst numerous proposals, including the detonation of the first nuclear weapons, the Industrial Revolution, and the beginning of large-scale agriculture.

In so doing, this group of overwhelmingly white, male scholars of the physical sciences, whose meetings are closed to the public, plans to make a claim on behalf of 'humanity' over the history, future and fate of the planet.

Critics of the Anthropocene are producing excellent work on the domination of scientific perspectives within Anthropocene discourses, on anthropocentric narratives that magnify human agency and entrench the human/nature divide, and on the inaccuracies of claims that 'humans' as a whole are responsible for the phenomena transforming the Earth. Yet there has been little focus on the role of foundational violence in the Anthropocene and the distinctively colonial violence enacted through the forces re-shaping the Earth and the discourses arising to describe them. Recently, the geographers Mark Maslin and Simon Lewis have made an important contribution to this discussion. They argue that the beginning of the Anthropocene should be placed in 1492, the year when the colonization of what would become the Americas resulted in the genocide of Indigenous peoples. Maslin and Lewis focus on the ecological outcomes of this period of mass violence and expropriation.


Spiked by Audra Mitchell. All rights reserved.

Building beyond this, Zoe Todd and I are initiating a new artistic/performative/collective thought experiment focused on the role of violence in the Anthropocene. We will be looking at multiple modes of violence, including the detonation of nuclear weapons and the slow violence of capital accumulation, industrialization and extinction. Each of these phenomena, central to the concept of the Anthropocene, is rooted in the historical/geological moments and trajectories of violence that constitute colonisation. To this end, we are inaugurating a public 'Anthropocene Re-working Group' whose goal is to explore the violences shaping the planet in open-ended, multi-media, multi-disciplinary ways (more on this to follow…).


Entanglement by Audra Mitchell. All rights reserved.

To begin this project, I wanted to get my hands on some actual spikes to think and feel through the discourse of a 'golden spike'. Engaging with these spikes allowed me to reflect on their materiality and their potential for violence. Handling them enabled me to sense their weight and shape, their utility as weapons, the intention of penetration with which they were forged, and their appropriative nature as the stakes through which claims to land and 'resources' are made. These particular spikes, salvaged from a defunct stretch of railroad, also evoked the violence of industrialisation, the expropriation of Indigenous lands across North America and the near-extinction of the American buffalo as a result of hunting from trains. Even their material basis is poignant: it brings to mind and hand the metals torn from soil and stone to fuel the demand for industrial resources and capital speculation.

I composed these images in order to encourage contemplation of the 'golden spike' as a central and meaning-multiplying embodiment of the impulse to mark and bound the Anthropocene. These are my initial responses to the idea of the golden spike and the intention to tell different stories about the violence of the Anthropocene. I hope that this nascent project will encourage and foster the exchange of many alternative stories, images and ideas.


Death/metal by Audra Mitchell. All rights reserved.


Planetary Boundaries by Audra Mitchell. All rights reserved.


Subcommittee by Audra Mitchell. All rights reserved.


Posthuman security: reflections

This month’s post comes courtesy of E-IR. It offers some reflections on the discussions related to ‘posthuman security’ that have been brewing over the past couple of years. It is part of a series that also includes contributions from Elke Schwartz, Matt McDonald and (coming soon) Carolin Kaltofen. Thanks to Clara Eroukmanoff and the E-IR editorial team for putting this series together.

This article has also been published on Global Policy Journal’s blog.  

 

Posthuman Security: Reflections from an Open-ended Conversation

 


Making a ‘cene’


Image by Samovaari (http://bit.ly/1drmtcr) Licensed under Creative Commons Generic 2.0 Attribution Non-commercial (http://bit.ly/1fIYVEV )

Looking to become less self-centred and more reflective about the harm you do to the world? Interested in adopting a broader perspective, considering the well-being of others and maybe even gaining some humility about your place in the universe? What better way than to name an entire geological era after yourself?

The concept of the 'anthropocene' is a warning to humans that they must acknowledge and mitigate their destructive effects on the Earth. Its coiner, Paul Crutzen, has used it to draw attention to how human action has shaped the planet and its complex systems in ways that are unpredictable, long-lasting (in geological timescales), and potentially threatening to humans as well as many other kinds of beings.

‘Anthropocene’ is not just a descriptive term. It is meant to function as a mirror held up to humanity, enabling it to reflect on the long-term damage our species has wrought. So, it should be a valuable concept for anyone interested in critiquing  human dominance and its effects.

But in fact, the existing concept of the 'anthropocene' magnifies and sometimes even valorizes radical anthropocentrism, reverence for human agency and the desire to gain mastery over nature. Instead of calling for an end to the logics that have created potentially irreversible change, it expresses an anxiety that humans have not yet made the world in their own image. In other words, it does not so much reflect an appeal to move beyond a world shaped by human agency as an appeal to achieve one.

Although the concept is hotly debated, a scan of the literature suggests that much of the controversy surrounds when it can be said to have started (see some recent contributions to this debate, for instance here, here and here), how it can be measured, or whether it exists at all (the position of climate change deniers). I think that the concept itself should be controversial, not for the empirical claims that it makes, but rather for the ontological assumptions it entrenches – and for the fears and desires it projects.

First, as scholars like Bruno Latour and Philippe Descola have pointed out, the dominant concept of the 'anthropocene' is rooted in a radical dichotomy between 'nature' and 'the human'. This is evident in Crutzen's claim that the major marker of the anthropocene is the deviation of the climate from 'natural behaviour' as a result of human actions. Indeed, Crutzen and Steffen argue that, although the Earth's climate is subject to variations, human activity has shifted it "well outside the range of natural variability exhibited over the last half million years". In a similar vein, scholars concerned with restoring ecosystems to correct for these changes aim to return to a 'natural state' (for these researchers this is problematically defined as the states existing before European colonization). In each of these cases, human activity is treated as an independent force that acts on (rather than in, or as part of) the Earth and its complex systems, glossed as 'nature'.

Aside from its ontological and ethical implications, this divide is also a powerful source of securitization. 'Nature', in these discourses, is often treated as a threatening force that is at best indifferent, and at worst hostile, towards human flourishing. For instance, Zalasiewicz et al claim that if human terraforming stopped entirely, "nature would soon take over these constructions, reducing them to ruins in a matter of centuries. After a few millennia, perhaps only a patchy layer of concrete and building rubble would remain". A similar argument is made by Alan Weisman in his fascinating counterfactual book, The World Without Us. Weisman argues that, if a human-specific virus wiped humanity from the planet tomorrow, everything from houses to subway systems might be destroyed in a matter of decades by the 'return' of nature. Likewise, James Lovelock claims that, when the energy crisis he predicts for the next couple of decades occurs, cities will not only be destroyed, but also consumed. As he puts it, "within a week, all that was alive is dead. The corpses are slowly repossessed by the natural world" (p. 89).

In these cases, ‘nature’ is presented as a quasi-hostile force that would destroy humans if they were to relax their grip on the controls. In fact, these narratives draw on a notion of malevolence that echoes the animism that is so often maligned by Western secular science.

This understanding of a malevolent 'nature' does at least nod at the agency of nonhumans, but it does so in a very limited way. Weisman's book teems with beings that crowd, thrust, crack, wind, pound and burn their way through human-made artefacts. In this one sense, it is very attuned to the 'actancy' of beings other than humans. But, oddly, Weisman focuses almost exclusively on their destructive potential vis-à-vis human civilization. He doesn't mention that, or how, their actancy was just as crucial in processes of worldmaking – including those in which humans are not a significant presence. As a result, the causal force of most other beings is treated as largely hostile.

It's no coincidence that many of these discourses predict a future in which humans are gone, decimated or severely reduced in capabilities. The upshot of all this is that future counterfactuals about the anthropocene often reflect a deathly fear of the end of the anthropocene. This is often linked, however subtly, to the demise of the human, which suggests that humans must control the planet in order to survive on it.

This highlights a paradox at the centre of the concept of the ‘anthropocene’: although the concept is supposed to help us to critique human dominance, it does not encourage humans to relinquish their grip on the control panel. On the contrary, it offers images that make it seem all the more necessary and urgent for humans to redouble their control over ‘nature’ in order to avoid being destroyed. This places the desire to gain control – that is, to self-consciously bring an ‘anthropocene’ age into being – at the heart of this concept.

This desire has produced conflicting images of nature as a piece of inert matter for humans to control (an image which is not at all new). Indeed, the powerful idea behind anthropocene thinking is that humans have made their own geological epoch, turning our 'redesigned atmosphere' into a 'human artifact' (Weisman, 2008).

Image by Steve Lynx (http://bit.ly/1fIZMWh) Licensed under Creative Commons Generic 2.0 Attribution-Non-commercial ( http://bit.ly/1fIZleo)


Proponents of the concept offer different images of human planetary craftsmanship. For some authors, 'nature' is shaped like raw materials by human tools (Zalasiewicz et al, 2011 – see above), while for others, human activity is akin to a natural force – but not actually counted as one (Crutzen and Steffen, 2003 – see above). Steffen et al cite Vladimir Vernadsky's treatise Geochemistry, in which it was claimed that the Earth had entered a 'psychozoic era', in which human consciousness and reason had reshaped 'living matter and inert matter'. Similarly, Lovelock has called humans the 'nervous system' of the planet, as if mind were a unique property of humans, which they project onto other beings.

From this perspective, nonhuman beings are either dead matter to be hewn, or living matter to be manipulated. Indeed, Steffen et al go on to claim that one of the key features of the anthropocene in the 21st century is the human mastery of 'living matter', or 'life itself', through the engineering (or commandeering) of its molecular and genetic bases. The idea that 'nature' is inert suggests that humans are the only source of agency or force acting on a motionless, dead Earth, ignoring the multiple sources of agency to which Latour (amongst others) draws our attention.

This raises another red flag with the current concept of the anthropocene: it vastly overestimates, and valorizes, human agency as the dominant force in the universe. Indeed, the crux of Crutzen’s argument is that human activity has usurped ‘natural’ forces as the primary determinant of the Earth’s future. Simon Dalby argues that “the much-quoted line from Genesis about humanity as having dominion over nature…can now simply be read as a statement of fact – that is the point of the Anthropocene” (p. 164).

The idea of dominion is key. As in other narratives focused on human exceptionalism, the point is not simply that humans can change the planet on a massive scale, but also that they are the only ones capable of doing it. Smith and Zeder acknowledge that other animals can engage in niche construction, but humans are the only beings to make the entire planet their 'niche'. The upshot is that human agency is treated as unique, as a form of meta-agency that supersedes – or at least can match – all other forms of causality and force.

This, in turn, effaces the role that other beings play in the emergence of the phenomena in question. Millions of processes – chemical reactions, the adaptation of species in relation to other living and non-living beings, geological processes and so on – have interacted with human agency to produce them. Of course, scientific discourses of the anthropocene mention these processes, but they treat them as features of nature, rather than co-actants in the formation of worlds.

Dalby’s reference to the Biblical notion of human dominance also reflects a powerful idea: that humans have literally usurped roles once assigned to deities or higher powers. Donna Haraway suggests that the concept of the ‘anthropocene’ is a secular version of the old Christian story in which all of the Earth labours to give birth to humanity, its ultimate destroyer. I would argue that the Western secular transformation of this story has also added something new to the mix. Elsewhere, I have argued that a hallmark of Western secular belief is the transferral to humans of tasks and capabilities once assigned to the divine. This includes the duty to intervene in the lives of humans and other beings, and even to define their forms of being.

This belief is reflected clearly in notions of geo-engineering – one of the proposed solutions to the threats faced by humans in the 'anthropocene' – which elevate human agency to a deity-like status. As Stephen Schneider puts it, "in literature and myth, only gods and magicians had access to controls over the elements" (p. 3844), but geo-engineering places this task squarely in human hands – a textbook example of the Western secular belief that divine agency has been transferred to humans.

Geo-engineering takes the basic idea of the anthropocene – the alteration of the planetary system by humans – and packages it as a virtue, perhaps even a necessity for human survival. Whether they aim to artificially whiten clouds, create massive algae blooms to sink carbon or even implement a massive sunshade in space to deflect solar radiation, these mega-projects all rely on concentrated, magnified human domination of other beings to sustain anthropocene conditions. Many scientists have raised doubts about geo-engineering, but they focus primarily on the uncertainty surrounding its effectiveness or its effects. Very few, if any, have raised questions about the wisdom of accentuating anthropocentric logics in order to solve the problems they have helped to create.

Indeed, geo-engineering prescribes one of the most potent sources of the 'anthropocene' crisis as its cure. That is, such schemes almost invariably call for more, and more massive, anthro-instrumental action, the bottom line of which is keeping the Earth comfortably habitable for humans. Granted, Lovelock argues in his typically controversial way that one way of responding to climate crisis is to, like a 19th-century doctor who knows little about the disease with which his patient is grappling, 'let nature take its course'. But in the same breath, he argues that large-scale geo-engineering projects may be necessary to ensure the survival of the human and many other species. In either case, these discourses return to the deep anxiety that the conditions for human life will end, and the powerful desire to create an era in which they can be preserved.

A major alternative response to the problems of the 'anthropocene', the 'planetary boundaries' approach, reflects a wariness about placing too much faith in god-like projects whose outcomes we can't confidently predict. Instead, it seeks to return human beings to the conditions of the Holocene. Proponents of this approach argue that this is possible if we can find thresholds 'intrinsic to nature' (for instance, freshwater use or oceanic acidification), and either return below them or refuse to cross them. This, they claim, will "offe[r] a safe operating space in which humanity can pursue its further development and evolution" (Steffen et al, 2011, 860 – see above). The planetary boundaries approach seems to avoid the worst anthropocentric excesses of geo-engineering. But ultimately, its goals are the same: to 'return' to – or perhaps to create for the first time – conditions that are ideal for humans. Again, the single bottom line of anthro-instrumental thinking lies at the heart of this approach.

Image by Derringdos http://bit.ly/1drmtcr Licensed under Creative Commons Generic 2.0 Attribution-Non-commercial ( http://bit.ly/1fIZleo)


In sum, existing discourses of the anthropocene promote a quite strident form of anthropocen(e)trism. This means that adopting and using the concept is problematic for anyone who wants to challenge the major pillars of human dominance and exceptionalism: the human/nature divide, the notion of an inert and/or hostile 'nature', and the deification of human agency.

So should weak or non-anthropocentrists boycott the concept of the anthropocene? On the contrary, we should struggle to shape it. Most importantly, we should try to expose the fear and desire that drive the current calls to amplify human control and to complete the human domination of the cosmos.

Crucially, its emphasis could shift toward a kind of 'multiple bottom line' in which human survival (or comfort) would be one amongst many considerations. Yes, this might involve contemplating – and I don't mean welcoming, let alone celebrating – the idea that the human population might take a big hit or even disappear. This, in turn, would mean accepting that the planet would not, in fact, end as a result of our demise. Thinking about these scenarios is a good way of exploring the outer boundaries imposed by human fear and desire. But there are also many less extreme scenarios, which might involve emphasizing the needs of other species when thinking about ideal planetary 'conditions' and understanding that change does not affect all forms of being uniformly.

To explore the possibility that humans could live and even thrive in a geological era they don’t dominate is not necessarily to call for a return to a pre-industrial or ‘primitive’ form of human life. On the contrary, it involves distinguishing between the concept of flourishing and that of domination, and finding ways of life that reflect the former.

Finally, a re-jigged concept of the anthropocene might challenge the dictum that the efforts of humans to (re)shape the world are uniformly 'bad' for 'nature' (a notion reflected even in critiques of geo-engineering). As Rosi Braidotti points out, terraforming (or directed world-building) is one way in which humans intersect with other beings and, in Deleuzian language, 'become-Earth'. It might be that the best way forward is to look for forms of terraforming that are more aware and respectful of the other beings with which humans co-constitute worlds, and that acknowledge and draw on various forms of agency, actancy and complex causality.

Most people who use the term 'anthropocene' want to see an end to the enormous damage that may result from human interventions in the Earth system. But do they call for an end to an era of human domination? Not very often. While the conditions associated with the anthropocene are treated as deeply undesirable, the image of an anthropocene – an age controlled by humans – is the object of the desire lying beneath these discourses. To make this argument is not to deny the catastrophic events and phenomena described by those who subscribe to the concept of the 'anthropocene'. Rather, it is to contest the ontological and affective underpinnings of the concept, and the subtle ways in which it pushes us into highly damaging logics and beliefs. We should not assume that the concept of the 'anthropocene' automatically performs a critical function. It needs to be appropriated – perhaps even subverted – in order to do this.

 

 


Posthuman Valentines

If you’re feeling depressed by schmaltz this Valentine’s day, here are some posthumanist greeting cards to lighten your mood.

Image by DigitalRalph (http://bit.ly/1g3TnV6) Licensed under Creative Commons Generic 2.0 Attribution (http://bit.ly/1g3TBeY)


 

Photo by Midway Journey (http://bit.ly/1eFUV82 ) Licensed under Creative Commons 2.0 Generic Non-commercial Share-alike (http://bit.ly/1g3V3hg)


 

Photo by Truth-out.org (http://bit.ly/1g3VzMl  ) licensed under Creative Commons 2.0 Generic Non-commercial Share-alike (http://bit.ly/1eFV4Z2 )


 

 


Stumbling into eternity

Why IR needs deep future counterfactual thinking

Chernobyl Pripyat exclusion zone by Pedro Moura Pinheiro (http://bit.ly/19czoBH)  licensed under creative commons 2.0 attribution non-commercial share-alike (http://bit.ly/1fdBmTD)


Almost thirty years after the world’s worst (yet) nuclear disaster, work is nearing completion on the 110 meter arch that will seal off Chernobyl’s reactor number four and allow for the removal of the melted nuclear fuel beneath it. The labourers building the arch are working against time: the concrete sarcophagus built to contain the effects of the explosion will reach the end of its expected lifespan in 2016. Then again, it might expire sooner, if the partial collapse of the turbine hall next to the reactor in February, 2013 is any indication. The arch itself is only a temporary solution, since there is currently no means of disposing of the waste. According to site manager Phillippe Casse, cited in the article, the disposal of the waste “could be done in 50 years’ time. Perhaps there will be the technology to solve the problem then.” In other words, this problem is being delegated to the future and its inhabitants.

The article highlights the staggering temporal challenge that radioactive material poses. Nuclear materials remain radioactive for tens of thousands of years. Yet many of the materials and strategies used to contain them – for instance, the concrete in which reactor four is currently encased – are only effective on vastly shorter timescales.

The 2011 documentary Into Eternity delves into this problem by exploring the world's first permanent nuclear waste repository: Onkalo in Finland. Onkalo is hewn out of solid rock, constructed over decades and built to last 100,000 years. It seems to reflect an encouraging degree of proactivity and future-thinking with regards to the problem of nuclear waste.

But forward-thinking brings its own problems. Humans (even when aided by their most advanced technologies) struggle to think on timescales that reflect the half-life of nuclear particles. The effects of radioactive materials are distributed across the deep future, or what Timothy Morton calls the ‘future future’: a time so distant that it seems beyond the grasp of human cognition and, I shall argue, ethics.

Into Eternity's director, Michael Madsen, is fascinated by this issue. He frames the documentary as a direct message to beings living thousands of human generations in the future. In a series of interviews with engineers and advisors from the nuclear authorities of Finland and Sweden, he raises some tough questions. For instance, how can contemporary humans prevent distant future generations of humans from entering Onkalo? Can we trust thousands of future generations to transmit warnings about the site, or are we better off encouraging them to forget its location? Even if these future beings can decipher the messages left at the site, will they dismiss them as myth – just as contemporary scientists dismiss runes and other symbols left by previous civilizations? Even these questions presuppose that the human species will exist long enough to guard these materials until they are no longer dangerous. Given the timescales involved, even this cannot be taken for granted.

Geiger counter by Jayneandd (http://bit.ly/19cyx41)  licensed under creative commons 2.0 attribution (http://bit.ly/1fdBmTD)


If your ethics are anthro-instrumental, then you can dismiss these problems: if there are no more humans, then who cares what else is harmed by radiation? But let's assume that other beings do matter, and not only the ones that currently exist, but also the possible beings that may exist in the deep future. This is one way of saying that care for possible futures, and for future possible beings, is an ethical good irrespective of its value to humans as they currently exist. From this viewpoint, even if humans do not exist in the future, something to which we (now) could be ethically attached might be harmed by our actions, and so we should take it into account when pondering different courses of action. All right, then – how can we begin to think in this way?

The respondents in Into Eternity rely on one of the only tools that humans have for projecting into the future with limited or no empirical data: their imaginations. More specifically, they use a technique called future counterfactual reasoning: the act of imagining possible future scenarios and asking what would happen if they occurred.

Future counterfactual thinking is not, generally speaking, an accurate predictor of 'the' (that is, one specific) future. Rather, its function is to attune humans to multiple possible futures and consider how they – or, I would argue, future others – might react in these possible future conditions. As Stephen Weber puts it (in one of a small handful of articles in the IR literature devoted to future counterfactuals), the purpose of this kind of thinking is "to open minds, to raise tough questions about what we think we know, and to suggest unfamiliar or uncomfortable arguments that we had best consider". He argues that effective future counterfactual scenarios challenge the 'official futures' on which analysts and policy-makers rely. They focus our attention on ruptures and discontinuities, apparent anomalies, and catalytic events. For Weber, a good future counterfactual changes the boundary conditions for discussion, making it possible to address what, in the physical sciences, are often called 'category two problems'. These are problems that exceed the limits of science in its current form – including the now (unfairly) infamous category of 'unknown unknowns'.

Into Eternity's interviewees use this form of thinking to ponder the problem of communicating the secrets of Onkalo to future beings. They consider a number of possible scenarios: for example, one in which people eventually return to live around the site of Onkalo; one in which earthquakes or wars destroy the site and its archive; one in which future beings try to open the site deliberately because they value its contents. They also consider the possible outcomes of their attempts to communicate into the distant future. For instance, they ask whether it would be effective to construct a sinister 'landscape of thorns' around the site to frighten intruders, or whether a reproduction of Edvard Munch's 'Scream' would do the trick. They rely on this kind of future counterfactual thinking to make crucial decisions about Onkalo's future.

Counterfactual thinking is one of the few tools at human disposal for responding to some of the biggest problems we face. But counterfactual thinking remains underdeveloped – and sometimes openly scorned – in international security. In fact, Richard Ned Lebow titled his 2010 book on historical counterfactuals Forbidden Fruit precisely because mainstream IR places this technique somewhere on a continuum from rampant subjectivity to the corruption of scientific knowledge. Even those IR scholars, like Lebow, who engage with counterfactuals do so in a fairly conservative and instrumental way. The vast majority of this literature is devoted to past counterfactuals as a means of challenging theories and explanations of present conditions. This, in turn, is expected to help policy-makers to be more attentive and open-minded in their (near) future strategic actions. Moreover, these authors focus on relatively narrow timeframes (perhaps a few decades, or a century at most). They rely on existing, accessible empirical data and social-scientific methods for collecting it. And within IR discourses, most of the available work on this subject focuses on establishing the plausibility of other possible outcomes of historical events – that is, on the predictive value of counterfactual thinking. This is because counterfactual thinking is usually viewed as a means of improving strategic thinking – for instance, how to prevent (or win) the next war.

Future counterfactuals have also made a small impact on contemporary IR. Some academics use future counterfactuals in order to inform policy-making, theory-building and teaching. Others run scenario-based workshops in which they brainstorm, for instance, possible outcomes of the Syria crisis by 2018 or the potential use of nuclear weapons for terrorism or as a result of inter-state conflict. And as far back as 2001, a group of US think tanks ran a large-scale simulation in which they asked current and former government officials to react to a smallpox outbreak. Indeed, Operation Dark Winter exposed the total lack of preparedness on the part of the relevant agencies: supplies of vaccines were rapidly exhausted and the (fictional) medical system quickly collapsed.

But these approaches to counterfactual thinking cannot help very much with the kinds of problems discussed above, which span millennia into the future, often cannot be studied empirically due to their massive timescales, cannot rely on existing knowledge, assumptions or conditions, and cannot be predicted with reasonable accuracy. In fact, the least problematic element is the past-orientation of historical counterfactuals – after all, a past counterfactual simply involves placing oneself in the past and thinking forward into a counterfactual future.

Even the future counterfactual exercises discussed above extend only a short distance into the future (in some cases, only a few years). They do not help us to understand future possible worlds dramatically different from our own. Instead, they focus on very similar versions of existing conditions, with a few minor mutations (despite the fact that complexity theorists, and most proponents of scenario thinking, acknowledge this to be unrealistic in nonlinear systems). In these scenarios, most of what we know today still holds true, and our ways of knowing it are treated as reliable. Moreover, the beings that might be harmed are those that exist now, or in the near future. Finally, and crucially, these scenarios and counterfactuals are oriented towards informing strategy, not preparing us to face the ethical challenges posed by meta-threats like nuclear disaster.

Does this mean that counterfactual thinking is useless for thinking about harm in the deep future? No, but it does suggest that we need dramatically to change how we do counterfactual thinking. This is not a matter of making ‘better’ (in the sense of more plausible or empirically accurate) counterfactual questions and scenarios. Instead, it is a matter of using counterfactual thinking to do different things, several of which deserve to be highlighted.

First, it should help us to break with deterministic understandings of the future, which can lead to a sense of nihilism. For instance, apocalyptic climate discourses give humans the impression that we are mired in a deterministic universe, and that nothing we do can change the situation. This may be true, but in case it is not, it is important to retain a sense of multiple possibilities and contingency, and to explore the range of responses we might make to them. Future counterfactual thinking – particularly approaches that impel us to imagine multiple worlds – can help to achieve this, or at least to orient ourselves towards it.

Second, one of the advantages of counterfactual thinking in general is that it undermines the notion that there is only one possible future. As such, it can help humans to cope better with (and perhaps even embrace) contingency and non-linearity, conditions which we do not relish. Simply accustoming ourselves to multiple possible futures, and radically different worlds, can help us to retain (or perhaps to attain) a sense of efficacy, however modest, in the face of extreme uncertainty. This can combat the affective states of nihilism, resentment or depression that might otherwise accompany thinking about meta-threats. It also attunes us to possibilities, not only that our worst nightmares might not happen, but also that other, unknowable futures might exist. Since we cannot know these futures now, we cannot assume with any certainty that they will be either positive or negative, and so we must remain open to a range of possibilities. In a word, deep future counterfactual thinking is conducive to hope, albeit of a tempered kind.

Radiation chamber by Thomas Bougher (http://bit.ly/19cxIbt) licensed under creative commons 2.0 attribution non-derivs non commercial generic (http://bit.ly/1fdBmTD)


Third, deep future counterfactual thinking can help us to imagine multiple possible worlds that may seem extreme, fantastical or horrific to us (for instance, human extinction). This helps to combat what I call futural amnesty, or forgetting the future. Futural amnesty is distinct from denial, for instance of the kind that we find in debates on climate change. Denial is, in one sense, affirmation; it involves acknowledging the possibility of a phenomenon or event, then systematically negating what, to the opposite viewpoint, appear to be its positive features. In contrast, futural amnesty is a deep-seated unwillingness to think, or be confronted by, a possibility that one might otherwise be forced to accept or deny. It is a refusal to recognize things that cannot be fully grasped, an unwillingness to think even the conditions of their unthinkability. Its most frequent refrains are 'how could we possibly know?' or 'let's not even think about that'.

By appealing to futural amnesty, people let themselves off the ethical hook not only of responding to, but also of imagining situations beyond their grasp. Yet, like amnesty related to the past, its function is to allow humans to ‘get on with life’, to live without the constant presence of horror and enormity. It allows them to draw a line in the near to medium future (perhaps a few generations, or even one’s own lifespan) beyond which they can forget to think, and behind which they can shelter. So futural amnesty is a protective and generous strategy. But it is also one that stops humans from confronting what might be the most important ethical challenges they could face. Future counterfactuals break through futural amnesty and the social taboos that hold it in place, forcing us to imagine the unknowable or unthinkable.

Doing this is, in turn, crucial in helping us consider our responses to such events: what we value, what we might try to protect, and how we can respond to other beings. In other words, future counterfactual thinking is deeply ethical. By imagining the effects of our actions into the deep future, we may start to think about the harms that we might do (unintentionally) not only to known others, but also to unknowable others. And this is not only useful in thinking about future actions and their effects, but also in helping us to realize our effects on currently existing others that are radically different from us. Indeed, good counterfactual thinking will not detract from the value we place on ourselves and other beings now, but rather heighten it, attuning us to ethical challenges both present and (future) future. From this perspective, (deep) future counterfactual thinking is a means of enhancing our ethical sensibilities, confronting our worst nightmares, and trying to remain ethically open in the face of them.

IR needs to develop these aspects of counterfactual thinking, and to make it central to discussions of international ethics. Counterfactual thinking is not scientific, or objective, or empirically robust. It cannot give us predictions or certainty, and it can’t prove that everything will be ok, or tell us how to ensure this.  But it can help us to see possibilities, to scope the boundaries of our knowledge, to appreciate the limits of our agency and to expand our ethical sensibilities. In the strategic-instrumental discourses that (still) dominate IR, this may not seem like much of a weapon to wield against meta-threats like nuclear disaster. But it may be all we’ve got.

As the author of the Chernobyl article discussed above states, "every stage of the [arch] project has been a step into the unknown". Indeed, when we think ethically about meta-threats, we are stumbling into the unknown – quite literally, into eternity – with little to guide us. This goes far beyond what Hannah Arendt called 'thinking without banisters': it is thinking without stairs, and perhaps without even a human body to climb them. If future counterfactual thinking can help us even in a modest way to do this, then we should make it a top priority.

 
 

Who are you callin’ a drone? On hating robots and hating humans

Photo by asterix611 licensed under creative commons attribution 2.0 generic


On 2 October 2013, a small 'recreational aircraft' equipped with a camera crashed into a pylon near the Sydney Harbour Bridge, prompting an investigation involving the civil aviation authorities and the counter-terrorism unit. The vehicle was a quadcopter, a kind of machine often used by researchers and hobbyists to record video footage. The next day, across the globe in Manhattan, another quadcopter crashed into a busy street, nearly striking a pedestrian – and bearing with it a clear image of its operator, who was subsequently arrested on charges of reckless endangerment.

While the Australian investigation continues at the time of writing, there is no indication that either of these robots was being used for anything other than recreational purposes. Certainly, both incidents posed a threat to public safety and broke civil aviation rules, but this level of threat could just as easily have been caused by the malfunctioning of a remote-controlled toy or hot-air balloon (not to mention a crash between 'manned' vehicles). They were a far cry from the lethal strikes carried out with unmanned aerial vehicles (UAVs) by the US over Pakistan and Yemen.

Yet both episodes were reported as ‘drone crashes’ and framed as harbingers of a future dominated by rampant ‘drone attacks’.

The ‘d’ word

It’s easy to understand why journalists use this language. The word ‘drone’ immediately invokes an image of deadly, terrifying, soulless machines, of the aptly named Predators and Reapers increasingly used by Western states to conduct the ‘war against terror’. Indeed, many automated robots – unmanned aerial vehicles (UAVs) in particular – carry out lethal strikes from a distance of thousands of miles, or force civilians to live in conditions of constant anxiety.

And still others deliver cakes and pizzas, help football teams to hone their technique, or take part in dance performances. Calling all of these creatures ‘drones’ is a very bad idea.

In a recent blog post, Kevin Gosztola argues that we should emphatically use the word 'drone' to draw attention to the rise of robotic warfare and its implications for civil liberties and human rights. Reporting on a 'drone and aerial robotics' conference held in New York, he states that many delegates refused to use the term, opting instead for technical terms such as 'remotely-piloted aircraft'.

Echoing Carol Cohn's pioneering work on nuclear euphemisms, he suggests that these highly technical names mask the lethal purposes of many militarized robots. Usually, he suggests, these names are used by people with ties to the military, law enforcement, defence contractors or businesses, who have vested interests in turning a blind eye to the moral implications of their work. Others, he argues, refrain from using the word because they believe it fuels public criticism that might undermine national security or express sympathy towards 'terrorists'.

Gosztola’s conclusion is that we should use the ‘d word’, and do so deliberately, as a means of critiquing the military-industrial complex and its ever more efficient ways of killing. I agree with him, but with an important caveat: we should use this term, but we should use it precisely. That is, we should use it to refer to those robots designed and/or deployed to carry out lethal strikes and surveillance by governments or non-state actors. But we should be extremely careful about applying it to anything else, for several reasons.

Preserving the political power of ‘buzz’ words

The most obvious reason is that calling all robots ‘drones’ dilutes the normative force of the term. As Gosztola’s article points out, the term ‘drone’ is a buzzword (no pun intended) whose power lies in its ability to generate immediate fear and revulsion, which might in turn translate into outrage and public action. ‘Buzzword’ need not be a derogatory term. Instead, it can refer to a term that evokes passionate emotions and channels political action.  However, like all buzzwords, it loses its force if it is applied to anything and everything that moves mechanically. If it is used to refer to all robots, or to robots that might, hypothetically, be used in violent ways, then it will be stretched so far as to be meaningless. In this case, arguments about ‘drones’ will collapse into endless debates about the inability to distinguish between, let alone regulate, technologies that could be used either for harmless or beneficial purposes or for killing (military or otherwise).

Robots don’t kill people – people do

This raises another issue: the term 'drone' suggests that there is something monstrous about the machines themselves. But the problem is not robotic technology itself – rather, it lies with the people who use it and their reasons for doing so. There is not (yet) any such thing as a fully autonomous robot, let alone one capable of developing a personality with traits such as sadism or malevolence. These qualities remain, for now at least, distinctly human.

Indeed, all of the machines discussed above fall into the categories of human-in-the-loop and human-on-the-loop systems. So what turns these machines into efficient ‘killers’? The clue is in the name. Robotic systems may assist in selecting targets and may, in the near future, do so with minimal or no human input. But currently and for the foreseeable future it is humans who determine whether a robot is used for counter-insurgency or to monitor endangered orang-utan populations.

In this sense, there is nothing inherently evil about robots. Rather, the problem is that these robots are dominated and instrumentalized by humans, who use them to kill and oppress other humans. There is no reason to believe that fully autonomous robots would, without human interference, behave violently towards humans or any other set of beings unless humans programmed them to do so (see below). So, using the term 'drone' to describe robots lets humans off the hook, using machines as scapegoats for the human capacities for violence, destruction and strategic killing.

Hating robots, hating humans (and other beings)

Photo by nebarnix licensed under creative commons attribution 2.0 generic.


This brings us to a final, and perhaps less obvious, reason why we shouldn’t call all robots ‘drones’: doing so promotes robot hatred. The term ‘drone’ works as a buzzword because it taps into a deep and widespread human fear. The simple fact is, a lot of us are terrified by robots. We assume that there is something about them that poses an existential threat to us. But I think that this has a great deal more to do with humans than it does with the robots in question.

One explanation for this is quite simple: we are terrified of things that are unlike us. Robots are, for the most part, made of metals, plastics and other inorganic compounds. They are not alive in the strict, biological sense that underpins Western science. They don't possess the kinds of emotional, cognitive or normative restraints that we expect our fellow humans to have, and on the basis of which we predict their behaviour. The standard argument is that humans are hard-wired to care for beings that are like them and neglect or, at worst, harm those that are unlike them. Extrapolating from this idea, many humans fear that, given the chance, robots would run amok killing every human in sight. There is a deep irony in this argument: we assume that the robots would harm us because they are radically different from us. Yet this leap of logic requires us to project onto 'robots' a notoriously human pattern of behaviour: hostility to others on the basis of difference. In other words, we fear robots precisely because they might act like we do.

However, we are also terrified by the similarities between ourselves and certain kinds of robots. Some robots encroach on territory that humans have long regarded as 'ours'. By moving independently or self-repairing, some robots undermine the human belief that we are the only truly autonomous beings. When they make use of algorithmic decision-making to plot pathways, predict obstructions to their movement or identify things in their environment, they undermine the idea that 'intelligence' is the unique preserve of humans (other sentient organisms also raise this issue). Robots with certain capacities for human-like behaviour expose and transgress the boundaries humans set up in order to distinguish themselves from other beings and to cement their dominance (for a useful discussion of this, see Jairus Grove's work).

It gets even more complicated than this. Robots aren’t just a useful foil for human nature. They also represent the things we find disgusting or repugnant in ourselves. In fact, it’s on the basis of these very properties that humans ‘humanize’ themselves, and ‘dehumanize’ others (whether humans or nonhumans). We do this through the process of abjection, in which we form an identity by rejecting the aspects of ourselves that both repel and compel us.

Abjection plays an important role in dehumanization. According to the social-psychological theory of infra-dehumanization, people make subconscious decisions about whether or not a being is human by assessing different sets of properties. ‘Human nature’ properties (which are also possessed by a number of non-humans) include warmth, responsiveness and autonomous agency. ‘Human uniqueness’ properties include ‘refined emotions’, self-control and moral responsibility. If a being is deemed to be low in ‘human uniqueness’ properties, the theory suggests, we treat it like a nonhuman animal. And if it’s thought to be low in ‘human nature’ properties, we treat it like a robot. In other words, one of the main ways we dehumanize is by treating certain beings like robots.

Here’s where abjection gets dangerous. According to the logic of dehumanization, this kind of self/other thinking creates a sharp divide between the beings which are treated as subjects of ethical consideration, and those that aren’t. It encourages humans to dispose of those beings deemed to be nonhuman in instrumental ways –  that is, to subject them to violence, harm or destruction if they threaten us or are simply useful in meeting our needs.

This logic underpins racism, xenophobia and other inside/outside distinctions that enable humans to kill with impunity. According to theorists of dehumanization, it is precisely this cognitive process that has made mass genocide and mass killing (of humans and other animals) both thinkable and do-able.

This brings us to another irony: dehumanizing others enables us to kill with the cold, calculating sense of impunity of which we deeply suspect robots.

By treating all robots as (potential) 'drones' – that is, as inhuman and dehumanizing monsters – we cement the self/other logic described above. We also over-generalize, demonizing robots unnecessarily and treating each robot as a threat to our humanity. The use of robots by humans to target civilians from afar and surveil populations almost certainly is a threat of this kind. But the simple existence of robots with various levels of autonomy is not.

The ‘d’ word is for dissent – not demonization (or doomsday predictions)

So, from this perspective, hating robots is deeply linked with hating humans, and with hating aspects of humanity. I’m not arguing that robot-phobia will convert the average person into a genocidal killer. Nor am I suggesting that we should all welcome companion robots into our homes or mourn the loss of robots destroyed in combat. And I’m certainly not claiming that hatred of robots is equivalent to hating humans in moral terms (that is a whole other can of worms). What I’m suggesting is that invoking the word ‘drone’ to describe any and every robot encourages this kind of self/other dichotomy, and the myth of absolute human superiority that it underwrites.

I do, however, think that robot-hatred can have an effect on how we treat humans and other beings. Elaine Scarry argues that exposure to things we find beautiful can evoke in humans a response of empathy and care that we then extend further in our relation to other people and things. I suspect that the reverse is also true. That is, if we allow or celebrate hatred of an entire set of other beings and normalize this kind of thinking, then it is likely to shape our ethical relations with all kinds of others – humans included.

Photo by strangejourney licensed under creative commons attribution 2.0 generic


From this perspective, if the term 'drone' helps to raise awareness and mobilize dissent against the instrumentalization of robots for extra-judicial killings and surveillance, then it should be used in these cases, as Gosztola suggests, to bolster public critique of the use of force. It should not be used as a blanket term to whip up generalized anti-robot fervour or to stoke public panics about a future (and present) shared with robots.

Fear and outrage at drone warfare – that is, the systematic use of robots for killing and suppression – are rational, warranted and utterly crucial to contemporary political debates. But the fear of robots in general is just another narrow-minded expression of our own insecurities about being human.

