What the Eye Sees and the Brain Says

What does a tiny patch of salamander retina see when it watches a movie? Weizmann Institute scientists, together with Dr. Ronen Segev at Ben-Gurion University, performed this experiment - literally showing film sequences to snippets of live retina tissue and recording the interactions among the 100 or so active neurons in each patch.


Seeing, as we know, is three-quarters interpretation and, contrary to common belief, this interpretation begins in the eye, before an image ever reaches the brain. Dr. Elad Schneidman and his colleagues found that unique patterns of neuron activity could be identified: there were clear differences, for instance, when the retinal tissues viewed images of natural scenery rather than unnatural random noise.

But Schneidman is less interested in what a retina sees than in what it "says." His goal is ambitious: He is attempting to identify the basic, underlying rules for brain activity - the "grammar" needed to begin understanding its "language." That language, according to Schneidman, is grounded in networks of neurons that are actively linked to one another, so standard single-neuron experiments are not up to the task. Salamander retina tissue presents a relatively small, self-contained network that can be stimulated with movies and recorded in the lab.

Unfortunately for neuroscientists, networks are much harder to grasp than pairs of neurons; the number of possible connections rises drastically with the number of nodes and the types of interactions among them. Fortunately, when the data becomes too complex, theoretical models step in. Schneidman and his colleagues adapted a physics model that describes what happens in large groups of magnets acting on one another in a magnetic field (their main alteration: converting the -1 associated with a negative magnetic pole to a 0, representing a silent neuron). Based on two-, three- and four-way connections, the researchers were able to identify individual "phrases": patterns that could be linked to a specific message.
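The magnet model described above is, in physics terms, an Ising-style model, and the idea can be sketched in a few lines of code. This is a simplified, pairwise-only illustration (the study also used higher-order terms), with made-up bias and coupling numbers, since the fitted parameters are not given here: each neuron is a 0 (silent) or 1 (spiking), and a pattern's probability falls off exponentially with its "energy."

```python
import itertools
import math
import random

random.seed(0)
N = 5  # a toy network; the retinal patches had on the order of 100 neurons

# Hypothetical parameters: a bias h[i] for each neuron and a coupling
# J[i, j] for each pair, standing in for the magnet-to-magnet interactions.
h = [random.gauss(0.0, 1.0) for _ in range(N)]
J = {(i, j): random.gauss(0.0, 0.5)
     for i in range(N) for j in range(i + 1, N)}

def energy(s):
    # s is a tuple of 0/1 states - the paper-style substitution of a
    # silent neuron (0) for the magnet model's negative pole (-1)
    e = -sum(h[i] * s[i] for i in range(N))
    e -= sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    return e

# Boltzmann-style probability for each of the 2**N activity patterns
patterns = list(itertools.product([0, 1], repeat=N))
weights = [math.exp(-energy(s)) for s in patterns]
Z = sum(weights)  # normalization constant
probs = {s: w / Z for s, w in zip(patterns, weights)}

# The most probable patterns are the candidate "phrases" of the network
top = sorted(probs, key=probs.get, reverse=True)[:3]
```

The payoff of this form is that the whole distribution over patterns is pinned down by a modest number of biases and pairwise couplings, rather than one free parameter per pattern.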

The model implies that the brain's language is, theoretically, open to translation. Out of billions of quadrillions of possible interconnections, as few as 500 phrases might be enough to begin assembling a basic grammar. Schneidman thinks this is more than a mere analogy: Brain cells use something we recognize as language (once we can observe it on the proper scale) precisely because it is both learnable and an efficient tool for communicating.
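The scale of the problem is easy to check with back-of-the-envelope arithmetic: a network of n on/off neurons can in principle show 2**n distinct activity patterns, which is why a compact vocabulary of a few hundred phrases would be such a drastic simplification. A quick illustration:

```python
# Number of distinct on/off activity patterns an n-neuron network
# can in principle produce; it doubles with every neuron added.
def pattern_count(n):
    return 2 ** n

print(pattern_count(10))   # 1024
print(pattern_count(100))  # roughly 1.3 * 10**30 for a ~100-neuron patch
```

Against a space that size, a working grammar of roughly 500 phrases is a reduction of more than 27 orders of magnitude.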

