Philip Zimbardo, professor emeritus of psychology at Stanford University and the guy who conducted the Stanford Prison Experiment in 1971, writes today in the Chronicle of Higher Education about the lessons and conduct of that psychological test.
The conclusion of his updated reflections on the good-v-evil pairing:
Group pressures, authority symbols, dehumanization of others, imposed anonymity, dominant ideologies that enable spurious ends to justify immoral means, lack of surveillance, and other situational forces can work to transform even some of the best of us into Mr. Hyde monsters, without the benefit of Dr. Jekyll’s chemical elixir. We must be more aware of how situational variables can influence our behavior. Further, we must also be aware that veiled behind the power of the situation is the greater power of the system, which creates and maintains complicity at the highest military and governmental levels with evil-inducing situations, like those at Abu Ghraib and Guantánamo Bay prisons.
Zimbardo’s 1971 experiment was meant to study an array of variables: “role-playing, coercive rules, power differentials, anonymity, group dynamics, and dehumanization” as set against “a collection of the ‘best and brightest’ of young college men in collective opposition to the might of a dominant system.” He wanted to see, that is, how good people act when given the possibility of bad action. His was a study in power, situational dynamics, and behavioral change in the face of confrontation. I’d say, more basically, it was about power.
Notably, and perhaps famously, he had to end the two-week experiment “after only six days because it was running out of control.” The “good” people were abusing the prisoners, having been placed in a context that allowed them to exercise power in an unconstrained manner.
Zimbardo was working in, but moving beyond, the tradition of Stanley Milgram’s 1963 demonstration “that a majority of ordinary citizens would continually shock an innocent man, even up to near-lethal levels, if commanded to do so by someone acting as an authority.” In that Milgram experiment, now widely cited in ethics classes the world over, the “‘authority’ figure in this case was merely a high-school biology teacher who wore a lab coat and acted in an official manner. The majority of people shocked their victims over and over again despite increasingly desperate pleas to stop.”
But Zimbardo’s writing today because of the current relevance of the subject. In particular:
Among the dozen investigations of the Abu Ghraib abuses, the one chaired by James R. Schlesinger, the former secretary of defense, boldly proclaims that the landmark Stanford study “provides a cautionary tale for all military detention operations.” In contrasting the relatively benign environment of the Stanford prison experiment, the report makes evident that “in military detention operations, soldiers work under stressful combat conditions that are far from benign.” The implication is that those combat conditions might be expected to generate even more extreme abuses of power than were observed in our mock prison experiment.
I understand these issues as questions of promoting or discouraging values — what is good and what is bad is a product of how we understand and recognize goodness and badness in the actions of others and then ourselves. We recognize and understand those things through the expression of values and value systems. This doesn’t mean “good” and “bad” are random or relative, strictly speaking, but they are situational.
In the literature on science and engineering ethics (the subset of “ethics” I know most closely), I’ve found most useful those studies that ask how value systems are promoted, encouraged, shunned, and discussed within organizations. Answers to those value system questions come from attention to organizational structure, and to the modes through which the values become apparent to members of those groups. Be it a corporation, a classroom, a laboratory, a military unit, a prison, a governmental agency, or other, people live and work within systems that define acceptable values. It isn’t a matter of structure alone, as if values were simply handed down from on high, embedded in elaborate social structures (economic, political, or class structures, for example) that offer individuals no opportunity for resistance or disagreement; nor is it just a matter of simple or direct individual choice. Rather, a lot of the interesting engineering ethics studies suggest, claims about what is good and what is bad are made at the meso-level where structure and agency intermingle, where individuals work within broader structures to effect change and to respond to, accede to, and resist the constraints placed upon them. To push Zimbardo’s decades-old experiment further, we would need more, not fewer, analyses of power in society, analyses that help us see how value systems that lead to “bad” outcomes are generated through the organizations of society that we build and work within. (And, since this is Scienceblogs, that couldn’t be answered by neuroscience alone.)
All the more distressing that Abu Ghraib, for one thing, was able to slide by the news cycles without more conversation about the demands placed on the soldiers by the military hierarchy, and about the values of terrorist-fighting promoted in speech, policy, and direct action by the Bush Administration as a whole (and far more than just Rumsfeld the individual).
As perhaps a side note, and an end to the post:
One wonders, then, what the notion of “progress” refers to. Are we getting “better” as a society? Are we getting anywhere? Within the value system that places scientific progress as the hallmark of humanity’s advance over the past, when could we, and could we ever, gain a degree of ethical, not just scientific, progress?