The Morality of Drones

War is rapidly becoming a video game. Here, from the NY Times, is a fascinating behind-the-scenes look at the US military's increasing reliance on drones:

The Guard members, along with Air Force crews at a base in the Nevada desert, are 7,000 to 8,000 miles away from the planes they are flying. Most of the crews sit at 1990s-style computer banks filled with screens, inside dimly lit trailers. Many fly missions in both Iraq and Afghanistan on the same day.

On a recent day, at 1:15 p.m. in Tucson -- 1:15 the next morning in Afghanistan -- a pilot and sensor operator were staring at gray-toned video from the Predator's infrared camera, which can make even the darkest night scene surprisingly clear.

On the one hand, anything that can reduce the casualty count of American soldiers is a good thing - it's certainly safer to be in a trailer in Arizona than stationed in Afghanistan. And yet, there's also something troubling when acts of violence, such as firing a missile with a massive payload, become so impersonal and detached. The drone can kill dozens of people with a single button press, but the pilot doesn't hear the screams or see the blood - he just watches as a cloud of gray dust fills his computer screen. Furthermore, I think there's some suggestive evidence that the brain makes moral decisions differently when we're not confronted with the visceral evidence of violence.

Consider this elegant experiment, which I've written about before, led by neuroscientist Joshua Greene of Harvard. Greene asked his subjects a series of questions involving a runaway trolley, an oversized man and five maintenance workers. (It might sound like a strange setup, but it's actually based on a well-known philosophical thought puzzle.) The first scenario goes like this:

You are the driver of a runaway trolley. The brakes have failed. The trolley is approaching a fork in the track at top speed. If you do nothing, the train will stay left, where it will run over five maintenance workers who are fixing the track. All five workers will die. However, if you steer the train right - this involves flicking a switch and turning the wheel - you will swerve onto a track where there is one maintenance worker. What do you do? Are you willing to intervene and change the path of the trolley?

In this hypothetical case, about ninety-five percent of people agree that it is morally permissible to turn the trolley. The decision is just simple arithmetic: it's better to kill fewer people. Some moral philosophers even argue that it is immoral to not turn the trolley, since such passivity leads to the death of four extra people. But what about this scenario:

You are standing on a footbridge over the trolley track. You see a trolley racing out of control, speeding towards five workmen who are fixing the track. All five men will die unless the trolley can be stopped. Standing next to you on the footbridge is a very large man. He is leaning over the railing, watching the trolley hurtle towards the men. If you sneak up on the man and give him a little push, he will fall over the railing and into the path of the trolley. Because he is so big, he will stop the trolley from killing the maintenance workers. Do you push the man off the footbridge? Or do you allow five men to die?

The brute facts, of course, remain the same: one man must die in order for five men to live. If our ethical decisions were perfectly rational, then we would act identically in both situations, and we'd be as willing to push the man as we are to turn the trolley. And yet, almost nobody is willing to actively throw another person onto the train tracks. The decisions lead to the same outcome, yet one is moral and one is murder.

Greene argues that pushing the man feels wrong because the killing is direct: We are using our body to hurt his body. He calls it a personal moral situation, since it directly involves another person. In contrast, when we just have to turn the trolley onto a different track, we aren't directly hurting somebody else. We are just shifting the trolley wheel: the ensuing death seems indirect. In this case, we are making an impersonal moral decision. I'd argue that making war decisions from an Arizona trailer - should I fire this missile at a terrorist target, even if there might be civilian casualties? - is much closer to the second condition, in which all one has to do is turn the wheel.

What makes this thought experiment so interesting is that the fuzzy moral distinction - the difference between personal and impersonal decisions - is built into our brain. It doesn't matter what culture you live in, or what religion you subscribe to: the two different trolley scenarios trigger distinct patterns of activation. When the subjects were asked whether or not they should turn the trolley, their rational decision-making machinery was turned on. A network of brain regions assessed the various alternatives, sent their verdict onwards to the prefrontal cortex, and the person chose the clearly superior option. Their brain quickly realized that it was better to kill one man than five men.

However, when people were asked whether they would be willing to push a man onto the tracks, a separate network of brain areas was activated. These folds of gray matter - the superior temporal sulcus, posterior cingulate and medial frontal gyrus - seem to be responsible for interpreting the thoughts and feelings of other people. As a result, these subjects automatically imagined how the poor man would feel as he plunged to his death on the train tracks below. They vividly simulated his mind, and concluded that pushing him was a capital crime, even if it saved the lives of five other men. Pushing a man off a bridge just felt wrong.

War, of course, is all about killing. And yet, as the sharp escalation in civilian casualties in Afghanistan has demonstrated, one doesn't want killing to become too easy. We bypass our innate moral emotions at our own peril.


I love the trolley problem (I recall it from my Intro to Ethics/PHIL 105 course).

Similarly, there's the doctor problem, where you, as the doctor, have to decide whether to let five patients who need organ transplants die, or to kill the perfectly healthy patient who just came in for a checkup and who happens to match the blood type, etc., of the five people who need the organs.

This is a lot more involved than just pushing a large man off a bridge, so I can't help but wonder what areas of the brain would light up in the machine, especially if you ask a bunch of actual doctors what they would do in this highly improbable situation.

Also, the whole detachment from the act of inflicting pain/death by means of merely pushing a button reminds me of the Milgram experiment:
http://en.wikipedia.org/wiki/Milgram_experiment

If a soldier is being ordered to push a button that effectively detonates a bomb that kills thousands of men, women and children, can they still psychologically distance themselves with the statement "I was just following orders"?

Once again, it's always a pleasure to read your blog.
-JV

There is an additional piece of information - the maintenance worker is an employee who has taken a known risk, knowing that the tracks might contain a train, while the man on the railing above is a bystander. Faced with this decision, I would hope that hearing and sight could help the men on the tracks, but no senses could help the bystander plunging to the tracks below. I think I could easily rule out dislodging the bystander as an option. For me, there is a clear distinction between someone taking a risk with knowledge as part of their job, and someone looking on with no relationship to the activities unfolding.

Given that distinction, if all of them were workers, I could not weigh 5 against 1. What if the one maintenance worker were about to father a child who would one day emerge as a leader, saving the lives of millions through his peacekeeping efforts or his scientific discoveries? In my mind there is no way to weigh lives by counting. I would probably go back to my roots and utter a prayer. I would fail miserably at military strategy.

This is my answer to the "Trolley" gedanken experiment:

Everything is subjunctive until it is fixed, and everyone is responsible for his/her experience whether consciously or unconsciously due to the myriad choices of action or inaction that resulted in whatever drama one finds oneself.

Each of the five workers (participants in the hypothetical drama) cooperates in their decision to choose to be struck by the train or not; any of them could decide to move, as could the 1 person on the other track.

The same applies to the stout rubbernecker on the bridge. He could intuit your malevolent intent (via horripilation on the back of the neck or other latent senses etc.) and has an opportunity to sidestep, step back, hold on to the rail or perform any number of actions which would trivialize his participation in the drama until he is pushed and the subjunctive becomes actualized.

In this scenario it is impossible to know whether one would be able to push the man to land in a manner which would stop the trolley from killing the 5 workers; the outcome being less certain than the track switching, it is perceived as a dubious situation which requires calculation (as in the blown free throw) and serves to give one pause. It is also coupled with the necessity of physically invading another's personal space and perpetrating a violent corporal act which has the potential for retaliation if the debridgement were unsuccessful. Also, as Jonah stated, our empathetic conditioning (in most cases) serves as an obstacle to directly causing others harm.

If one murders one person to save 5 by pushing a button or a body it is still murder and I would not interfere with the participants (usurping their free will) in the drama as I believe it is their choice to participate or not. My role in the drama is relegated to one of witnessing carnage if it comes to pass or witnessing a close call if not.

No one has the right to physically force an injurious outcome upon another, whether it is the loss of one to save another or a billion. It is immoral in principle and a violation of another's innate rights. Simple loss-cutting arithmetic is for those who have chosen to participate in a perceived untenable drama by their own free will and rationalize any action taken. Less than ideal means NEVER justify ideal ends.

The drone pilots (video gamers - likely inured by repeated exposure to cyber violence) in Nevada choose to participate in remote murder (even if it is sanctioned by consensus) of their own free will. If these "pilots" are able to compartmentalize their actions for the nonce, I wager some major PTSD will arise from their "moral" center; else they will manifest misanthropic behavior (as this tendency is likely not too far beneath the surface in order for them to participate in such activity in the first place - or it could also be an exaggerated sense of "morality").

Assessing moral responsibility for "easy" impersonal murder (morality by consensus) will be compounded if technology advances to the state where autonomous drones "decide" who lives and dies. In this day and age, it is amazing that enough human beings have not mentally "evolved" to the degree that war/killing is an unacceptable solution to resolve differences.

Peace

While your science discussion is very interesting, it has absolutely nothing to do with the topic at hand. The pilots of modern attack aircraft do not "hear the screams [n]or see the blood". Most of the time they release their ordnance from between 10,000 and 20,000 feet, and it is guided to its target by laser (or gravity, with non-smart ordnance). Even when doing low passes there is simply no direct exposure to the sights and sounds of war. Slate (the online magazine) did a very interesting series on this topic that lays out the primary reason for using drones. To summarize: drones allow the military to reduce risk to innocent third parties, while increasing the likelihood that enemy combatants will be hit and reducing risk to Air Force pilots.

To expand, the primary reasons for using drones are that drones are smaller (and less likely to be spotted), use less fuel (so they can stay in the air longer) and do not suffer from pilot fatigue (so they can stay in the air longer). This allows the drones to...stay in the air for longer periods of time. This allows them to follow targets for longer periods of time and thus hold fire until the target can be attacked with the least loss of life to innocent third parties. A jet plane can only stay in the air for a short period of time; as such, its pilots are left with the choice of attacking or not over a relatively short time frame.

While, as you say, fighting via drone reduces visceral engagement, I'd be curious to know if that ends up increasing or decreasing civilian casualties on the whole.

Visceral engagement cuts both ways. It can lead to greater forbearance, by making consequences more strongly felt; but it can also lead to greater violence, due to fear or hatred. If your targets don't really feel like people, you are less likely to care about them; but hating them is going to be similarly abstract.

If you are there in person, identifying a noncombatant as a combatant means you just killed a civilian, and may become acutely aware of it. However, identifying a combatant as a noncombatant could easily be the last mistake you ever make. I suspect that that creates a certain understandable desire to shoot first and double check later.