No technology is inherently good or evil; it’s the use of that technology that determines its value. A blade can be used in surgery to save a life, or as a weapon to take one. The ballistics that enable missiles to destroy enemies also enable the launch of communication satellites and the exploration of other worlds. For quite a while, I’ve been reading +Jeff Jarvis’ commentary on these issues in the realm of the internet. His principal argument is that regulation that aims to block technology in order to keep people safe will also block the innovation and potential benefits of that technology.
This same argument has been pushed front and center in the life sciences over the last few months due to a pair of studies on the flu that became controversial because of their perceived potential for misuse. Carl Zimmer wrote a great explainer on the history of the studies and the controversy, as well as an explanation of the one paper that had been published at the time. Briefly – two labs were investigating what could make H5N1 “Bird Flu” become transmissible in humans. This virus appears to be quite deadly in people, but so far incapable of transmitting from person to person. Humans working closely with birds can occasionally become infected, and in many of those cases the infection is quite deadly, but there are no documented cases of humans catching H5N1 influenza from other humans.
These controversial papers set out to understand how these viruses could evolve to infect humans more readily and transmit person-to-person. In the process of doing the research, some feared they might have made a deadly virus that could cause a world-wide pandemic through accidental release or malicious intent. As a result, a government panel recommended that the research be redacted prior to publication. Now, it turns out that the viruses the research produced weren’t actually that deadly after all, and eventually the papers were published in full.
I don’t want to relitigate this story, because it’s mostly over, everything turned out ok, and it’s been covered extensively and well by other people (see: those other links). But the question of dual-use research – research that has the potential to produce both positive and negative outcomes – remains. In principle, all research is dual-use, but I hope you’ll agree that some has more potential for abuse than others.
A couple of weeks ago, Geoff Brumfiel wrote a commentary in Nature laying out a number of potentially controversial bits of research:
I mostly think it’s a great piece – he discusses the potential controversy awaiting research into refining nuclear fuel, brain scanning technology that will allow more complete pictures of what people are thinking, geoengineering (putting particulates into the air to counteract the effects of global warming), and genetic screening and engineering of fetuses early in development. If it were me, I would have added genetically modified food crops, but Brumfiel takes a generally thoughtful and measured approach.
However, I think the framing – asking whether this science is “good or bad” – exactly misses the point. The research itself cannot be good or bad, nor can the knowledge gleaned be inherently good or bad. As with technology, it’s the intent of the people using the knowledge that matters. Of course, as I mentioned earlier, there are some avenues of research that have more potential to be used with malicious intent than others, and we as scientists and as a society need to determine how to address those forms of research.
I should be clear here: I think it’s entirely possible that some types of research should be off limits – the risks may outweigh the benefits. But in general, reducing the potential for misuse is a better goal than abandoning the research altogether. In all of these cases, the research is being done to address a real-world problem, and there are risks in not pursuing the research as well.