The Connectome

I've got a long article in Nature this week on Jeff Lichtman (of Brainbow fame) and the birth of connectomics, which seeks to construct a complete wiring diagram of the brain:

At first glance, Jeff Lichtman seems to be hanging long strips of sticky tape from the walls of his Harvard lab. The tape flutters in the breeze from the air-conditioner. But closer inspection reveals that this is not tape: it is the brain of a mouse, rendered into one long, delicate strip of tissue and fixed onto a plastic film. When the film is tilted to the light, the tissue becomes visible, like the smear of a greasy fingerprint.

These smudges are the creation of a new brain-slicing machine invented by Lichtman, a molecular and cellular biologist at Harvard University, along with Kenneth Hayworth, a graduate student at the University of Southern California in Los Angeles. Called the automatic tape-collecting lathe ultramicrotome (ATLUM), the machine resembles an old-fashioned film projector with two large reels. At its centre is a fixed diamond blade that cuts continuously into a rotating mouse brain, much like an apple parer. The end result is a seamless sliver of tissue, less than 10 nanometres thick and around 5 metres long, that is deposited on the plastic film spinning around the spools.

Although Lichtman appreciates the technical precision of the ATLUM -- "That's a real diamond!" he says -- he is most excited about its scientific potential. Researchers in his lab are starting to put these slices under an electron microscope to visualize the intricate web of connections between neighbouring neurons. Lichtman eventually hopes to have a 'farm' of several dozen such microscopes scanning tissue around the clock. Even then it would take months, if not years, to capture all the connections in the strip from a single brain. "When you cut the brain this thin, there's just such a massive amount to see," he says. "It does require us to think about imaging on a different scale."

Lichtman likes to think on a different scale. In recent years, he has become a leading proponent of a new field that is working to create a connectome, a complete map of neural wiring in the mammalian brain. Currently, such a map exists only for the nematode Caenorhabditis elegans, which has 302 neurons. The adult human brain, in contrast, contains 100 billion neurons and several trillion synaptic connections. "I know the goal sounds daunting," Lichtman says. He insists that such a wiring diagram is an essential undertaking, because it will allow scientists to see, for the first time, the path that information takes as it is shuttled from cell to cell, and how all these cells and the information they transmit weave together to create a conscious brain.
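To get a rough feel for the scale the excerpt describes, here is a small back-of-envelope sketch. It is not from the article: the brain volume, voxel size, bytes-per-voxel, and synapses-per-neuron figures below are illustrative assumptions, chosen only to show the orders of magnitude involved in imaging a mouse brain at nanometre resolution and in storing a bare connectivity table for a human brain.

```python
# Back-of-envelope estimates of connectome-scale data.
# All parameters are illustrative assumptions, not figures from the article.

MOUSE_BRAIN_VOLUME_MM3 = 500   # assumed mouse brain volume, ~0.5 cm^3
VOXEL_NM = 10                  # assumed isotropic voxel size (nm)
BYTES_PER_VOXEL = 1            # assumed 8-bit greyscale electron micrographs

# 1 mm = 1e6 nm, so a cubic millimetre holds (1e6 / voxel size)^3 voxels.
voxels_per_mm3 = (1e6 / VOXEL_NM) ** 3
total_voxels = MOUSE_BRAIN_VOLUME_MM3 * voxels_per_mm3
raw_bytes = total_voxels * BYTES_PER_VOXEL
print(f"Mouse brain at {VOXEL_NM} nm voxels: {total_voxels:.1e} voxels, "
      f"~{raw_bytes / 1e15:.0f} petabytes of raw imagery")

# A bare connectivity table for a human brain: one row per synapse,
# holding two 64-bit neuron IDs plus a 32-bit weight (20 bytes, assumed).
NEURONS = 1e11                 # ~100 billion neurons (from the excerpt)
SYNAPSES_PER_NEURON = 1e4      # commonly cited average; an assumption here
BYTES_PER_SYNAPSE = 20

synapses = NEURONS * SYNAPSES_PER_NEURON
table_bytes = synapses * BYTES_PER_SYNAPSE
print(f"{synapses:.0e} synapses -> ~{table_bytes / 1e15:.0f} petabytes "
      f"just to list who connects to whom")
```

The point of the exercise is only the orders of magnitude: the C. elegans map, with its 302 neurons, fits comfortably in a spreadsheet, while a mammalian connectome runs to petabytes of raw data before any analysis has begun.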

I'm especially interested in Lichtman's contention that the standard deductive model of modern science - generate a theory and then find a way to test it - isn't adequate for solving the brain. Instead, he argues that neuroscience needs to return to the inductive approach of Victorian science, so that scientists begin by carefully observing the brain and only then generate testable ideas. (Darwin, for instance, was an inductivist.) The basic idea is that the brain is so complicated an organ that it's nearly impossible to generate decent theories a priori.

I disagree with a renewed emphasis on an inductive approach. If you take chaos theory and epigenetic phenomena into account, the brain is far too complex to study inductively. In a dynamic system where a single neuron can have 10,000 synapses with other neurons, as well as autocommunication (and each of those other neurons in turn has 10,000 synapses of its own), how can you design a decent controlled study around that? I'm not saying that inductive analysis has no uses, but our whole research system has been predicated on this reductionist thinking, and that has limited our understanding of complex, dynamic phenomena like those that occur in the brain.

It's like trying to understand the stock market: it can make perfect sense that a company should go up (or down) based on its fundamentals and the needs of the market, but that still has no bearing on how the market actually responds, how the rest of the sector is doing, and so on. You have to accept the presence of the unknown and the unpredictable, and base new research designs on that inherent nature, aiming for insights into how to raise the probability of certain events occurring rather than focusing on specific, "intended" changes in one pathway at a time. In the brain, and especially in neurotransmitter interactions, "right" can be "left" and "left" can be "right" depending on limitless confounding variables. Individual studies of specific pathways can give insights, but piecing together anything useful from them would be incredibly time-consuming and expensive. A whole is not necessarily the sum of its parts when those parts can synergize and create dynamic phenomena, ESPECIALLY in a biological system where signal amplification and positive/negative feedback loops are the norm.

Jonah: Compare your closing paragraph above with this excerpt from your piece in Seed, "Out of the Blue":

   Neuroscience is a reductionist science. It describes the brain in terms of its physical details, dissecting the mind into the smallest possible parts. This process has been phenomenally successful. Over the last 50 years, scientists have managed to uncover a seemingly endless list of molecules, enzymes, pathways, and genes. The mind has been revealed as a Byzantine machine. According to Markram, however, this scientific approach has exhausted itself. "I think that reductionism peaked five years ago," he says. "This doesn't mean we've completed the reductionist project, far from it. There is still so much that we don't know about the brain. But now we have a different, and perhaps even harder, problem. We're literally drowning in data. We have lots of scientists who spend their life working out important details, but we have virtually no idea how all these details connect together. Blue Brain is about showing people the whole."

   In other words, the Blue Brain project isn't just a model of a neural circuit. Markram hopes that it represents a whole new kind of neuroscience. "You need to look at the history of physics," he says. "From Copernicus to Einstein, the big breakthroughs always came from conceptual models. They are what integrated all the facts so that they made sense. You can have all the data in the world, but without a model the data will never be enough."

I hate to belabor the point, but induction is a myth. Science always proceeds by constructing and testing theories or models. However, in certain domains - and neuroscience is certainly one of them - this process must involve intense engagement with empirical data, sometimes massive amounts of it. Still, assumptions are being made all along the way about what data to collect. Good scientists find ways to check these assumptions wherever they can, notwithstanding the fact that there is never any final guarantee that they've checked them well enough.

Hi. Please correct me if I'm wrong, but isn't this just a matter of semantics? Basically, what I get from Lichtman's comments about becoming inductivists again is the message that we need to take a step back, look empirically at how connection patterns are organized, and then start making theories. Isn't this what everyone is saying, though? I'm not getting the real meat of how a top-notch neural inductivist versus a top-notch neural deductivist would practically differ in their modalities and mentalities.

Thank you in advance to anyone who can help illuminate how this is more than semantic.

By Michael F. (not verified) on 29 Jan 2009 #permalink

They'd both go looking for data. The inductivist would formulate a theory about how things are and then look for "proof" - i.e., try to disprove it and fail, while also showing that surprising predictions come true. A deductivist would get the data first and then say, ah, so that's how it works; I suspected as much, but I didn't want to taint the experiment with bias by saying so ahead of time.

Marco,

Did you get it backward? Or am I really confused?

Beck F

By Beck Frank (not verified) on 31 Jan 2009 #permalink