While the Allen Atlas of gene expression has already proven itself to be a valuable research tool, I think the project’s most profound long-term impact will come from its methodological innovations. For the most part, modern science remains a field of artisans, of technicians and grad students doing experiments by hand. However, because the Allen Institute needed to generate such vast amounts of data, they realized that a different approach was required. And so they pioneered a high-throughput assembly line, in which specialized robots perform the intricate manual labor normally done by trained scientists:
The cavernous and antiseptic main lab on the second floor of the Allen Institute is dominated by five big black boxes, each the size of a Smart Car. These are robots, specially constructed by lab-automation company Tecan. At the center of each is a glass window, through which all the action can be observed: A metal arm equipped with a series of long plastic pipettes moves endlessly back and forth, squirting a variety of liquids onto slices of brain. The accompanying mechanical noises (a comforting chorus of squeaks, clanks, and beeps) sound like the androids from WALL-E. At the moment, each robot is processing 192 brain slices per day, allowing the lab to analyze nearly a thousand every 24 hours. (Other bots perform more specialized tasks, like delicately adding glass covers to the tissue samples.) They work through the night, continuing to do science while their human counterparts sleep.
As a former lab tech, I found something unsettling about watching these robots at work. I realized that my wet lab skills had suddenly become obsolete, that there was now an expensive machine that could perform the same mundane scientific tasks but without the inevitable errors. (And I made lots of experimental errors: I was a terrible tech.) At the moment, these hulking robots remain far too expensive for most labs, especially since grad students are so cheap. But I do wonder how the increasing automation of the experimental process (and I think such automation is inevitable) will change the daily schedule of most scientists. Will it free up more time for thinking? What is lost when robots generate the data?
In the 19th century, most scientists were inductivists: they made lots of observations and then came up with an elegant theory to explain what they’d seen. (Look, for instance, at Darwin and Freud.) Of course, this practice changed over the course of the 20th century, as scientists became increasingly reliant on carefully controlled experimentation. Peer-reviewed papers had less and less theory and more and more methods. The stereotypical scientist went from being a Victorian gentleman in tweed to being the guy in the white lab coat, latex gloves, and plastic goggles, holding a micropipette.
Will the industrialization of biological science tip the scales back towards theory? Perhaps in the future we will associate scientists less with the act of experimentation and more with the act of explanation. In other words, biologists might become a bit more like theoretical physicists.
The astonishing photos are by David Clugston.