Part II with David Hess, author of Alternative Pathways in Science and Industry, follows below.
All entries in the author-meets-bloggers series can be found here.
TWF: What specific areas do you examine in the book?
DH: I look at science and industry in five main fields, which I selected because of their close connections with issues of the environment and sustainability: agriculture, energy, waste and manufacturing, infrastructure, and finance. Across each of the five fields I examine four types of alternative pathways: the two described above plus pathways oriented toward social justice issues such as community sovereignty and access to goods and technology for low-income citizens.
TWF: Could you say more about "undone" science? This is a central concept in the book and one, I realize, that has been taken up in wider discussions.
DH: I developed this term years ago when working on another technology- and product-oriented movement, the movement for complementary and alternative medicine (CAM) approaches to cancer treatment. Among CAM cancer therapy advocates and clinicians (but not among the dominant oncology profession) it was widely recognized that there were many promising nutritional and herbal therapies available but little funding to determine safety, scope, and efficacy. Because the pharmaceutical industry invests only in patentable products rather than public domain interventions, clinical trials research for nutritional and herbal therapies has moved forward at a slow pace.
TWF: So, in each of the areas you discuss (agriculture, energy, waste and manufacturing, infrastructure, and finance), some questions remain unasked. It isn't a matter of bad science, but one of science that has not been practiced--done--yet?
DH: Yes. More generally the problem of undone science appears poignantly for social movement organizations, such as environmental organizations, which look for research on a topic of interest to them and often find that the work hasn't been done. This problem is recognized in the Marxist and feminist traditions in science studies, but it hasn't been well explored by the broader STS field. Last year and this year Scott Frickel and I have been organizing some work on the topic, and I think we're beginning to see some significant work by several scholars on various dimensions of the problem, such as Sahra Gibbon, Jeff Howard, Joanna Kempner, and Gwen Ottinger.
TWF: Are there any other books like this one?
DH: There is a network of researchers who agree with the fundamental premise that social movement studies and science and technology studies could benefit from a rapprochement. Some of that work has appeared under the rubric of the "new political sociology of science," which Scott Frickel and Kelly Moore have drawn together in an edited volume with that title. Examples of people who have contributed to this general area include, in addition to Scott and Kelly, Barbara Allen, Phil Brown, Steve Epstein, Krista Harper, Andrew Jamison, Daniel Kleinman, Brian Martin, Sabrina McCormick, Michal Osterweil, Dana Powell, and Roddey Reid.
TWF: You've been at Rensselaer--in the Troy-Albany-Schenectady-Saratoga, NY, area--for some time, right? How has that locale influenced your views on these industry-activist relationships?
DH: I've lived in the rust belt of Ohio and upstate New York almost my entire life, so I have a great deal of personal experience with deindustrialized cities. I've also lived in South America, so I have experienced another side of globalization, particularly the severe economic dislocations that accompanied the dismantling of import-substitution policies during the 1980s. At two points in my life, once in Ohio during the 1960s and once in South America, I've lived with soldiers on the street, and those memories haunt my understanding of the power of the state to quell challenges from social movements. I've also watched the Capital District of New York State, where I have lived for two decades, undergo a transition from deindustrialized rustbelt toward the export-oriented growth model of high-tech, triple-helix (industry-state-university) partnerships around nanotechnology and clean energy. Again, my work here has been to help articulate an alternative pathway, this time in regional development thinking and policies. In this case I've helped to found an organization [Capital District Local First] dedicated to building the local, independently owned business sector oriented toward community health and justice. I think this approach presents many opportunities for more sustainable and equitable models of regional development.
More a question for Dr. Hess than Dr. Cohen:
Has there been any confusion resulting from the term "undone science"? That is, while Dr. Hess is using it more in a sense of questions not asked, research not done, have others considered it more literally - results or practices reversed or "un-done"?
That's a good question. I haven't heard anyone confuse "undone science" with research results that reverse existing knowledge, but that's probably because I am careful to define the term when I use it. Reversals are also an issue, because they take a finding that we think we know something about and put it into question. They often generate scientific controversies.
I think what you're pointing to is that we need more than one term to talk about what we don't know. I was just at the sociology conference this past weekend, and we had a panel on this issue. It soon became clear to all of us that there are many ways to think about what we don't know. I was most concerned about the problem that activists and social movement organizations face: trying to make a political claim and finding not only that the research isn't there to support it but also finding that the research isn't being funded. In this sense we know what we want to know, but the work isn't being done or isn't done yet--it's "undone." But there are a lot of other ways to think about related issues, as you're suggesting. Some of my colleagues are talking about a category of research questions that are so complicated (at least today) that they lead us into the unknowable (even a lot of well-funded research wouldn't tell us much), and some others are talking about questions that we don't even know we should be asking. So I think we need a whole vocabulary to sort out the different types of ignorance.
I've heard of two different ways of addressing the questions you describe - or at least getting close.
Perhaps not original with former Secretary of Defense Rumsfeld, but often credited to him, is a typology of "knowns," each category not really as certain as it may seem.
Known knowns - Established knowledge.
Unknown knowns - Answers to questions that haven't been asked, or knowledge that hasn't been connected with particular questions. The second case could be something like approaching a research question in one field that has already been answered in another.
Known unknowns - Questions asked that, as you mention, may be too complicated to know, or may require additional work to discern.
Unknown unknowns - Questions that we aren't asking and don't know we should be asking.
There are also Type III and Type IV errors, related to the statistical Type I and Type II errors. Type III is sometimes glossed as the right answer to the wrong question; statistically, it is the rare case where you correctly detect a significant difference but conclude it runs in the opposite direction. Type IV is sometimes glossed as rejecting a false hypothesis for the wrong reason, or the wrong answer to the wrong question. Sadly, I became dizzy writing that paragraph, so I don't think I've captured it effectively, or at least made the extrapolation from statistics to general knowledge.
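One rough way to keep the error types straight is to simulate them. The following Python sketch is not from this discussion; the effect size, sample size, and simple known-variance z-test are illustrative assumptions. It runs many two-sample studies and tallies Type I errors (effect found where none exists), Type II errors (real effect missed), and directional "Type III" errors (significant result, wrong sign):

```python
import math
import random

random.seed(0)

def experiment(true_effect, n=30, sigma=1.0):
    """One two-sample study: returns (significant?, estimated sign of effect)."""
    a = [random.gauss(0.0, sigma) for _ in range(n)]
    b = [random.gauss(true_effect, sigma) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = sigma * math.sqrt(2.0 / n)      # known-variance z-test, for simplicity
    z = diff / se
    return abs(z) > 1.96, 1 if diff > 0 else -1

def tally(true_effect, trials=10_000):
    """Count Type I, Type II, Type III (sign) errors and correct detections."""
    type1 = type2 = type3 = power = 0
    for _ in range(trials):
        sig, sign = experiment(true_effect)
        if true_effect == 0:
            if sig:
                type1 += 1               # Type I: false alarm under a true null
        elif not sig:
            type2 += 1                   # Type II: real effect missed
        elif sign != (1 if true_effect > 0 else -1):
            type3 += 1                   # Type III: significant, wrong direction
        else:
            power += 1                   # correct detection
    return type1, type2, type3, power

print("null true, Type I rate:", tally(0.0)[0] / 10_000)   # close to the nominal 0.05
print("small effect (0.2):", tally(0.2)[1:])               # Type II common, Type III rare
```

With a small true effect and modest samples, Type II errors dominate and Type III errors are rare but nonzero, which matches the intuition that sign reversals are the pathological edge case of an underpowered study.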
I understand! I teach Type I and II errors to my students, and they usually have enough trouble without getting into the other types. It does become dizzying. Another head-spinner is the following that I learned from the alternative cancer therapy journalist/scholar Robert Houston: assuming or bracketing safety, it is actually better to accept a therapy that we believe is effective but actually isn't than to reject a therapy that we believe is not effective but actually is. In the first case the therapy goes on the market but is eventually proven to be ineffective, but in the second case we never know about the mistake we made. So this is a way of thinking about how to correct errors through policies, such as regulatory policies, that lean one way or the other.
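Houston's asymmetry can be made concrete with a toy model. Nothing below comes from the interview; the ten-year horizon and the 30% yearly chance of detection are invented for illustration. The structural point is that a wrongly accepted therapy keeps generating post-market evidence, so the mistake can eventually surface, while a wrongly rejected therapy generates no evidence at all:

```python
import random

random.seed(1)

def wrongly_accepted_is_caught(years=10, p_detect=0.3):
    """An approved-but-ineffective therapy stays in use, producing data each
    year; each year there is some chance the ineffectiveness is detected."""
    return any(random.random() < p_detect for _ in range(years))

def wrongly_rejected_is_caught():
    """A rejected-but-effective therapy is never used, so no data accumulate
    and the mistake can never surface."""
    return False

caught = sum(wrongly_accepted_is_caught() for _ in range(1000))
print(f"wrong acceptances caught within 10 years: {caught}/1000")  # the large majority
print("wrong rejections ever caught: 0/1000")
```

Under these made-up numbers nearly all wrong acceptances are corrected within a decade, while wrong rejections are permanent by construction, which is the policy asymmetry the comment describes.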