
Posthuman security: reflections

This month’s post comes courtesy of E-IR. It offers some reflections on the discussions related to ‘posthuman security’ that have been brewing over the past couple of years. It is part of a series that also includes contributions from Elke Schwarz, Matt McDonald and (coming soon) Carolin Kaltofen. Thanks to Clara Eroukhmanoff and the E-IR editorial team for putting this series together.

This article has also been published on Global Policy Journal’s blog.  

 

Posthuman Security: Reflections from an Open-ended Conversation

 


Posthuman Valentines

If you’re feeling depressed by schmaltz this Valentine’s Day, here are some posthumanist greeting cards to lighten your mood.

Image by DigitalRalph (http://bit.ly/1g3TnV6) Licensed under Creative Commons Attribution 2.0 Generic (http://bit.ly/1g3TBeY)

Photo by Midway Journey (http://bit.ly/1eFUV82) Licensed under Creative Commons 2.0 Generic Non-commercial Share-alike (http://bit.ly/1g3V3hg)

Photo by Truth-out.org (http://bit.ly/1g3VzMl) Licensed under Creative Commons 2.0 Generic Non-commercial Share-alike (http://bit.ly/1eFV4Z2)


Who are you callin’ a drone? On hating robots and hating humans

Photo by asterix611, licensed under Creative Commons Attribution 2.0 Generic

On 2 October 2013, a small ‘recreational aircraft’ equipped with a camera crashed into a pylon near the Sydney Harbour Bridge, prompting an investigation involving the civil aviation authorities and the counter-terrorism unit. The vehicle was a quadcopter, a kind of machine often used by researchers and hobbyists to record video footage. The next day, across the globe in Manhattan, another quadcopter crashed into a busy street, nearly striking a pedestrian – and bearing with it a clear image of its operator, who was subsequently arrested on charges of reckless endangerment.

While the Australian investigation continues at the time of writing, there is no indication that either of these robots was being used for anything other than recreational purposes. Certainly, both incidents posed a threat to public safety and broke civil aviation rules, but this level of threat could just as easily have been caused by the malfunctioning of a remote-controlled toy or hot-air balloon (not to mention a crash between ‘manned’ vehicles). They were a far cry from the lethal strikes carried out by US-operated unmanned aerial vehicles (UAVs) over Pakistan and Yemen.

Yet both episodes were reported as ‘drone crashes’ and framed as harbingers of a future dominated by rampant ‘drone attacks’.

The ‘d’ word

It’s easy to understand why journalists use this language. The word ‘drone’ immediately invokes an image of deadly, terrifying, soulless machines, of the aptly named Predators and Reapers increasingly used by Western states to conduct the ‘war against terror’. Indeed, many automated robots – UAVs in particular – carry out lethal strikes controlled from thousands of miles away, or force civilians to live in conditions of constant anxiety.

And still others deliver cakes and pizzas, help football teams to hone their technique, or take part in dance performances. Calling all of these creatures ‘drones’ is a very bad idea.

In a recent blog post, Kevin Gosztola argues that we should emphatically use the word ‘drone’ to draw attention to the rise of robotic warfare and its implications for civil liberties and human rights. Reporting on a ‘drone and aerial robotics’ conference held in New York, he states that many delegates refused to use the term, opting instead for technical terms such as ‘remotely-piloted aircraft’.

Echoing Carol Cohn’s pioneering work on nuclear euphemisms, he suggests that these highly technical names mask the lethal purposes of many militarized robots. Usually, he suggests, these names are used by people with ties to the military, law enforcement, defence contractors or businesses, who have vested interests in turning a blind eye to the moral implications of their work. Others, he argues, refrain from using the word because they believe it fuels public criticism that might undermine national security, or that it expresses sympathy towards ‘terrorists’.

Gosztola’s conclusion is that we should use the ‘d word’, and do so deliberately, as a means of critiquing the military-industrial complex and its ever more efficient ways of killing. I agree with him, but with an important caveat: we should use this term, but we should use it precisely. That is, we should use it to refer to those robots designed and/or deployed to carry out lethal strikes and surveillance by governments or non-state actors. But we should be extremely careful about applying it to anything else, for several reasons.

Preserving the political power of ‘buzz’ words

The most obvious reason is that calling all robots ‘drones’ dilutes the normative force of the term. As Gosztola’s article points out, the term ‘drone’ is a buzzword (no pun intended) whose power lies in its ability to generate immediate fear and revulsion, which might in turn translate into outrage and public action. ‘Buzzword’ need not be a derogatory term; it can refer to a term that evokes passionate emotions and channels political action. However, like all buzzwords, it loses its force if it is applied to anything and everything that moves mechanically. If it is used to refer to all robots, or to robots that might, hypothetically, be used in violent ways, then it will be stretched so far as to be meaningless. In this case, arguments about ‘drones’ will collapse into endless debates about the inability to distinguish between, let alone regulate, technologies that could be used either for harmless or beneficial purposes or for killing (military or otherwise).

Robots don’t kill people – people do

This raises another issue: the term ‘drone’ suggests that there is something monstrous about the machines themselves. But the problem lies not with robotic technology – rather, it lies with the people who use it and their reasons for doing so. There is not (yet) any such thing as a fully autonomous robot, let alone one capable of developing a personality with traits such as sadism or malevolence. These qualities remain, for now at least, distinctly human.

Indeed, all of the machines discussed above fall into the categories of human-in-the-loop and human-on-the-loop systems. So what turns these machines into efficient ‘killers’? The clue is in the name. Robotic systems may assist in selecting targets and may, in the near future, do so with minimal or no human input. But currently, and for the foreseeable future, it is humans who determine whether a robot is used for counter-insurgency or to monitor endangered orang-utan populations.
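To make the distinction concrete, here is a minimal, purely illustrative sketch (the names and the approve/veto framing are simplifications for illustration, not any real control system’s design) of how the two modes allocate a decision between human and machine:

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    """Illustrative labels for levels of human control."""
    HUMAN_IN_THE_LOOP = auto()  # the machine waits for explicit human approval
    HUMAN_ON_THE_LOOP = auto()  # the machine acts unless a human intervenes
    AUTONOMOUS = auto()         # no human involvement (hypothetical, for contrast)

def may_act(mode: AutonomyMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Whether a proposed action proceeds under a given mode."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return human_approved        # nothing happens without a human 'yes'
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # happens by default; a human can still stop it
    return True                      # only here is the human truly absent

# In both fielded modes, the outcome traces back to a person:
assert not may_act(AutonomyMode.HUMAN_IN_THE_LOOP, human_approved=False, human_vetoed=False)
assert not may_act(AutonomyMode.HUMAN_ON_THE_LOOP, human_approved=False, human_vetoed=True)
```

In both of the modes actually in use today, a human either authorizes the action or can override it – which is the point: the machine executes, but the decision belongs to people.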

In this sense, there is nothing inherently evil about robots. Rather, the problem is that these robots are dominated and instrumentalized by humans, who use them to kill and oppress other humans. There is no reason to believe that fully autonomous robots would, without human interference, behave violently towards humans or any other set of beings unless humans programmed them to do so (see below). So, using the term ‘drone’ to describe robots lets humans off the hook, using machines as scapegoats for the human capacities for violence, destruction and strategic killing.

Hating robots, hating humans (and other beings)

Photo by nebarnix, licensed under Creative Commons Attribution 2.0 Generic.

This brings us to a final, and perhaps less obvious, reason why we shouldn’t call all robots ‘drones’: doing so promotes robot hatred. The term ‘drone’ works as a buzzword because it taps into a deep and widespread human fear. The simple fact is, a lot of us are terrified by robots. We assume that there is something about them that poses an existential threat to us. But I think that this has a great deal more to do with humans than it does with the robots in question.

One explanation for this is quite simple: we are terrified of things that are unlike us. Robots are, for the most part, made of metals, plastics and other inorganic compounds. They are not alive in the strict, biological sense that underpins Western science. They don’t possess the kinds of emotional, cognitive or normative restraints that we expect our fellow humans to have, and on the basis of which we predict their behaviour. The standard argument is that humans are hard-wired to care for beings that are like them and neglect or, at worst, harm those that are unlike them. Extrapolating from this idea, many humans fear that, given the chance, robots would run amok, killing every human in sight. There is a deep irony in this argument: we assume that the robots would harm us because they are radically different from us. Yet this leap of logic requires us to project onto ‘robots’ a notoriously human pattern of behaviour: hostility to others on the basis of difference. In other words, we fear robots precisely because they might act like we do.

However, we are also terrified by the similarities between ourselves and certain kinds of robots. Some robots encroach on territory that humans have long regarded as ‘ours’. By moving independently or self-repairing, some robots undermine the human belief that we are the only truly autonomous beings. When they use algorithmic decision-making to plot pathways, predict obstructions to their movement or identify things in their environment, they undermine the idea that ‘intelligence’ is the unique preserve of humans (other sentient organisms also raise this issue). Robots with certain capacities for human-like behaviour expose and transgress the boundaries humans set up in order to distinguish themselves from other beings and to cement their dominance (for a useful discussion of this, see Jairus Grove’s work).
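The ‘intelligence’ involved here is often quite mundane. As a purely illustrative sketch (assuming nothing about any particular robot’s software), the kind of path-plotting just mentioned can be as simple as a breadth-first search over a grid of free and blocked cells:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D grid (0 = free cell, 1 = obstacle).
    Returns a shortest list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:  # reconstruct the route by walking back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and step not in came_from:
                came_from[step] = cell
                frontier.append(step)
    return None  # no obstacle-free route exists

# A 3x3 space with a wall down the middle column, open at the bottom:
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (0, 2)))  # routes around the wall
```

Real planners are more sophisticated, but the point stands: the ‘decision’ is an exhaustive search over distances, with nothing resembling intent behind it.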

It gets even more complicated than this. Robots aren’t just a useful foil for human nature. They also represent the things we find disgusting or repugnant in ourselves. In fact, it’s on the basis of these very properties that humans ‘humanize’ themselves, and ‘dehumanize’ others (whether humans or nonhumans). We do this through the process of abjection, in which we form an identity by rejecting the aspects of ourselves that both repel and compel us.

Abjection plays an important role in dehumanization. According to the social-psychological theory of infra-dehumanization, people make subconscious decisions about whether or not a being is human by assessing different sets of properties. ‘Human nature’ properties (which are also possessed by a number of non-humans) include warmth, responsiveness and autonomous agency. ‘Human uniqueness’ properties include ‘refined emotions’, self-control and moral responsibility. If a being is deemed to be low in ‘human uniqueness’ properties, the theory suggests, we treat it like a nonhuman animal. And if it’s thought to be low in ‘human nature’ properties, we treat it like a robot. In other words, one of the main ways we dehumanize is by treating certain beings like robots.
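Stated as crudely as possible – and this is a caricature of a subconscious judgment, not a formalization the theory’s proponents would endorse – the two-axis logic looks something like this:

```python
def perceived_humanity(human_nature: float, human_uniqueness: float,
                       threshold: float = 0.5) -> str:
    """Caricature of the two-axis judgment; scores in [0, 1], threshold arbitrary."""
    if human_uniqueness < threshold:
        return "animal-like"  # low 'human uniqueness' -> treated like an animal
    if human_nature < threshold:
        return "robot-like"   # low 'human nature' -> treated like a machine
    return "fully human"

# A being seen as warm but lacking 'refined emotions' and self-control:
print(perceived_humanity(human_nature=0.8, human_uniqueness=0.2))  # animal-like
# A being seen as self-controlled but cold and unresponsive:
print(perceived_humanity(human_nature=0.2, human_uniqueness=0.8))  # robot-like
```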

Here’s where abjection gets dangerous. According to the logic of dehumanization, this kind of self/other thinking creates a sharp divide between the beings which are treated as subjects of ethical consideration and those that aren’t. It encourages humans to dispose of those beings deemed to be nonhuman in instrumental ways – that is, to subject them to violence, harm or destruction if they threaten us or are simply useful in meeting our needs.

This logic underpins racism, xenophobia and other inside/outside distinctions that enable humans to kill with impunity. According to theorists of dehumanization, it is precisely this cognitive process that has made genocide and mass killing (of humans and other animals) both thinkable and doable.

This brings us to another irony: dehumanizing others enables us to kill with exactly the cold, calculating impunity that we suspect in robots.

By treating all robots as (potential) ‘drones’ – that is, as inhuman and dehumanizing monsters – we cement the self/other logic described above. We also over-generalize, demonizing robots unnecessarily and treating each robot as a threat to our humanity. The use of robots by humans to target civilians from afar and surveil populations almost certainly is a threat of this kind. But the simple existence of robots with various levels of autonomy is not.

The ‘d’ word is for dissent – not demonization (or doomsday predictions)

So, from this perspective, hating robots is deeply linked with hating humans, and with hating aspects of humanity. I’m not arguing that robot-phobia will convert the average person into a genocidal killer. Nor am I suggesting that we should all welcome companion robots into our homes or mourn the loss of robots destroyed in combat. And I’m certainly not claiming that hatred of robots is equivalent to hating humans in moral terms (that is a whole other can of worms). What I’m suggesting is that invoking the word ‘drone’ to describe any and every robot encourages this kind of self/other dichotomy, and the myth of absolute human superiority that it underwrites.

I do, however, think that robot-hatred can have an effect on how we treat humans and other beings. Elaine Scarry argues that exposure to things we find beautiful can evoke in humans a response of empathy and care that we then extend further in our relation to other people and things. I suspect that the reverse is also true. That is, if we allow or celebrate hatred of an entire set of other beings and normalize this kind of thinking, then it is likely to shape our ethical relations with all kinds of others – humans included.

Photo by strangejourney, licensed under Creative Commons Attribution 2.0 Generic

From this perspective, if the term ‘drone’ helps to raise awareness and mobilize dissent against the instrumentalization of robots for extra-judicial killings and surveillance, then it should be used in these cases, as Gosztola suggests, to bolster public critique of the use of force. It should not be used as a blanket term to whip up generalized anti-robot fervour or to stoke public panic about a future (and present) shared with robots.

Fear and outrage at drone warfare – that is, the systematic use of robots for killing and suppression – are rational, warranted and utterly crucial to contemporary political debates. But fear of robots in general is just another narrow-minded expression of our own insecurities about being human.

