War is rapidly becoming a video game. Here, from the NY Times, is a fascinating behind-the-scenes look at the increasing reliance on drones by the US military:
The Guard members, along with Air Force crews at a base in the Nevada desert, are 7,000 to 8,000 miles away from the planes they are flying. Most of the crews sit at 1990s-style computer banks filled with screens, inside dimly lit trailers. Many fly missions in both Iraq and Afghanistan on the same day.
On a recent day, at 1:15 p.m. in Tucson — 1:15 the next morning in Afghanistan — a pilot and sensor operator were staring at gray-toned video from the Predator’s infrared camera, which can make even the darkest night scene surprisingly clear.
On the one hand, anything that can reduce the casualty count of American soldiers is a good thing – it’s certainly safer to be in a trailer in Arizona than stationed in Afghanistan. And yet, there’s also something troubling when acts of violence, such as firing a missile with a massive payload, become so impersonal and detached. The drone can kill dozens of people with a single button press, but the pilot doesn’t hear the screams or see the blood – he just watches as a cloud of gray dust fills his computer screen. Furthermore, I think there’s some suggestive evidence that the brain makes moral decisions differently when we’re not confronted with the visceral evidence of violence.
Consider this elegant experiment, which I’ve written about before, led by neuroscientist Joshua Greene of Harvard. Greene asked his subjects a series of questions involving a runaway trolley, an oversized man and five maintenance workers. (It might sound like a strange setup, but it’s actually based on a well-known philosophical thought puzzle.) The first scenario goes like this:
You are the driver of a runaway trolley. The brakes have failed. The trolley is approaching a fork in the track at top speed. If you do nothing, the trolley will stay left, where it will run over five maintenance workers who are fixing the track. All five workers will die. However, if you steer the trolley right – this involves flicking a switch and turning the wheel – you will swerve onto a track where there is only one maintenance worker. What do you do? Are you willing to intervene and change the path of the trolley?
In this hypothetical case, about ninety-five percent of people agree that it is morally permissible to turn the trolley. The decision is just simple arithmetic: it’s better to kill fewer people. Some moral philosophers even argue that it is immoral to not turn the trolley, since such passivity leads to the death of four extra people. But what about this scenario:
You are standing on a footbridge over the trolley track. You see a trolley racing out of control, speeding towards five workmen who are fixing the track. All five men will die unless the trolley can be stopped. Standing next to you on the footbridge is a very large man. He is leaning over the railing, watching the trolley hurtle towards the men. If you sneak up on the man and give him a little push, he will fall over the railing and into the path of the trolley. Because he is so big, he will stop the trolley from killing the maintenance workers. Do you push the man off the footbridge? Or do you allow five men to die?
The brute facts, of course, remain the same: one man must die in order for five men to live. If our ethical decisions were perfectly rational, then we would act identically in both situations, and we’d be as willing to push the man as we are to turn the trolley. And yet, almost nobody is willing to actively throw another person onto the train tracks. The decisions lead to the same outcome, yet one is moral and one is murder.
Greene argues that pushing the man feels wrong because the killing is direct: We are using our body to hurt his body. He calls it a personal moral situation, since it directly involves another person. In contrast, when we just have to turn the trolley onto a different track, we aren’t directly hurting somebody else. We are just shifting the trolley wheel: the ensuing death seems indirect. In this case, we are making an impersonal moral decision. I’d argue that making war decisions from an Arizona trailer – should I fire this missile at a terrorist target, even if there might be civilian casualties? – is much closer to the first scenario, in which all one has to do is turn the wheel.
What makes this thought experiment so interesting is that the fuzzy moral distinction – the difference between personal and impersonal decisions – is built into our brain. It doesn’t matter what culture you live in, or what religion you subscribe to: the two different trolley scenarios trigger distinct patterns of activation. When the subjects were asked whether or not they should turn the trolley, their rational decision-making machinery was turned on. A network of brain regions assessed the various alternatives, sent their verdict onwards to the prefrontal cortex, and the person chose the clearly superior option. Their brain quickly realized that it was better to kill one man than five men.
However, when people were asked whether they would be willing to push a man onto the tracks, a separate network of brain areas was activated. These folds of gray matter – the superior temporal sulcus, posterior cingulate and medial frontal gyrus – seem to be responsible for interpreting the thoughts and feelings of other people. As a result, these subjects automatically imagined how the poor man would feel as he plunged to his death on the train tracks below. They vividly simulated his mind, and concluded that pushing him was a capital crime, even if it saved the lives of five other men. Pushing a man off a bridge just felt wrong.
War, of course, is all about killing. And yet, as the sharp escalation in civilian casualties in Afghanistan has demonstrated, one doesn’t want killing to become too easy. We bypass our innate moral emotions at our own peril.