I got a lot of interesting responses to my post about DIYbio and how modeling innovation in biotech on computer hacker culture may lead to a science that is less “democratized” than what is being proposed. My friend Adam pointed me to Jaron Lanier’s work criticizing the “open” and “free” culture movements online as both unfair and leading to cultural stagnation. While I don’t agree with all of Lanier’s arguments about the prospects of an open digital culture, he makes a lot of important points that resonate with my feelings about the future of science based on the open online model, in particular for synthetic biology. He addresses synthetic biology specifically in his book You Are Not a Gadget, but it is his discussions of the difficulty of building large software even as hardware improves exponentially according to Moore’s Law, of the lock-in of stultifying software standards, and of the economics of the cloud that are particularly interesting and valuable to a discussion about the future of synthetic biology.
On synthetic biology he writes that “wikified” biology, a science that breaks down the already loose institutional barriers between individual scientists and between individual species (as genetic sequences are passed between people who put them into different organisms), will limit both biological evolution and technological innovation. He argues that cellular boundaries around early genetic sequences are what drove evolution out of the primordial ooze, and that by breaking down the importance of those cellular boundaries, that encapsulation, we lose the locality that drives evolutionary novelty. He says the same of academia:
Academic efforts are usually well encapsulated, for instance. Scientists don’t publish until they are ready, but publish they must. So science as it is already practiced is open, but in a punctuated, not continuous, way. The interval of nonopenness–the time before publication–functions like the walls of a cell. It allows a complicated stream of elements to be defined well enough to be explored, tested, and then improved.
Here I agree that academic science is already quite open, although perhaps not open enough in some cases. If you publish a paper that includes genetic constructs that you built, you are required to send that genetic material to anyone from an academic lab that asks to use it for research purposes. I have never had a problem getting something I needed from labs in the synthetic biology community or from anywhere else. The ultra-secretive nature of some labs before publication, however, can certainly be detrimental. I have friends who aren’t allowed to present their work at department lab meetings, for fear of being scooped by colleagues down the hall. This overprotective, fearful environment holds back the students who can’t get any outside feedback on their work, and can hold back genuinely collaborative scientific progress.
At the same time, I don’t want to build off of work that hasn’t been vetted or proven in some way (which doesn’t necessarily have to mean publication). A totally open repository of genetic parts, as basically exists now in the form of the synthetic biology Parts Registry, can therefore have a lot of problems. The registry has countless parts that are essentially nonfunctional, but you can only discover this after considerable time spent searching for a part, obtaining it, and then sequencing, testing, and verifying its function. This work does improve the quality of the registry, “wikifying” biological data collection by outsourcing quality control to unpaid users, but we still won’t necessarily approach the quality of parts made and maintained by individual scientists. By divorcing a part from its original context, the lab or scientist who built and tested it, we lose some of the value of the work that went into producing it, and some of the capacity for genuine collaboration. DNA alone (and even DNA plus detailed data characterizing its function in a specific lab) isn’t necessarily enough for a creative and innovative project in synthetic biology.
Completely abstracting away the functions of genetic material, environmental contexts, and species boundaries is dangerous as well, and I don’t agree with Lanier that this is the future we’re headed towards, if only because this is just not how biology works. Biology is powerful exactly because it doesn’t work like computers. There is no standard way of doing anything. Genetic pieces are passed between bacterial cells (or even between food and the bacteria in our guts), but context and evolution within individual cells and populations matter for life. Interactions between genetic material and the cellular environment, and between each cell and its ecological and “social” context, create the adaptable, evolvable, beautiful diversity we see in the natural world.
By defining arbitrary standards early in our understanding of how these genetic elements work in their rich biological contexts, and early in our ability to engineer novel functions, we lose sight of much of the complexity of biology and risk getting stuck in difficult, unproductive technological cycles. Indeed, the BioBrick standard as it was first defined a decade ago does not allow proteins to be fused to one another in-frame, and it uses restriction enzymes that are rare and expensive. BioBrick cloning is wonderfully convenient for certain applications, but enforcing a standard that doesn’t take into account the realities of how biological parts are made and used in different contexts and for different projects is inefficient, which is why there are almost as many “standards” for pseudo-BioBrick cloning as there are labs in synthetic biology today.
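To make the in-frame fusion problem concrete, here is a toy Python sketch. The original BioBrick standard (RFC 10) leaves an 8 bp “scar” (TACTAGAG) between any two assembled parts; since 8 is not a multiple of 3, the downstream coding sequence falls out of frame, and the scar itself contains a TAG stop codon. The function names and the two short example coding sequences below are my own hypothetical illustrations, not part of any real toolkit:

```python
# Why RFC 10 BioBrick assembly breaks protein fusions: the scar left
# between two joined parts is 8 bp, which shifts the reading frame,
# and it places a TAG stop codon in frame with the upstream gene.

RFC10_SCAR = "TACTAGAG"  # mixed XbaI/SpeI site left behind by assembly

def fuse(upstream_cds, downstream_cds, scar=RFC10_SCAR):
    """Join two coding sequences with an assembly scar between them."""
    return upstream_cds + scar + downstream_cds

def codons(seq):
    """Split a sequence into codons, reading from position 0."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

# Two hypothetical in-frame coding fragments (lengths divisible by 3).
up = "ATGAAAGGT"    # Met-Lys-Gly
down = "TGCGATTAA"  # downstream gene fragment

fused = fuse(up, down)
print(len(RFC10_SCAR) % 3)    # 2 -> the scar shifts the reading frame
print("TAG" in codons(fused))  # True -> a stop codon lands in frame
```

This is exactly why later assembly conventions (such as the 6 bp scars used by fusion-friendly variants) diverged from the original standard: a scar whose length is a multiple of 3 and that encodes an innocuous amino acid pair keeps the downstream gene translatable.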
Lanier warns against any movement to enforce a specific way of doing things, any locking in of standardized forms in technology development. By defining one industry standard, we may open up an easier way to make things at industrial scale, but we also lose diversity in how we think about and use whatever we’re standardizing, making true innovation more difficult. He argues that this is particularly true in software development, where standards locked in during the early days of computing are difficult to throw off, particularly when designing and maintaining large-scale software packages and operating systems. Small programs are easy to write in new paradigms, but large programs are slow to change and extremely costly to maintain and improve, despite leaps and bounds in the speed of computer hardware.
The same can be said for synthetic biology, where small genetic networks with innovative but limited novel behaviors can be made routinely and relatively easily, but large-scale combinations of smaller networks, or synthetic pathways with more than a dozen genetic components, remain elusive. Many commentators on biological technologies claim that they are progressing even faster than Moore’s Law, with the prices of gene sequencing and synthesis dropping precipitously every year. But this price drop does not necessarily translate into a similarly exponential ability to understand gene sequences or create complex new biological behaviors. Synthetic biology will not necessarily follow Moore’s Law, because human scientific creativity and evolutionary change are fundamentally different from the physics of shrinking transistors.
Creativity in synthetic biology design, the novel synthesis of biological knowledge, biotechnical expertise, and engineering concepts, is the work of groups of hard-working individuals. These people and their work should be valued as something special, something that can’t be replaced by simply increasing the number of base pairs of DNA being synthesized. So too should we value the power of evolved biological systems as something different from the designed electronic systems that inhabit our world today. What do we gain by trying to fit biology into the structures that have become locked into computer engineering? What would we gain if instead we created a new kind of engineering, one centered on learning more from and about the biological world?