Encoding Diversity: What is Orthogonal Coding?

One of the problems brains must overcome to behave effectively is to discretely encode all the different responses that they can produce. Considering movement alone, you can move in a lot of different ways. Selecting which one is appropriate is troublesome in itself, but encoding all of them is a challenge. It is like trying to organize the Library of Congress so that you can instantly find exactly what you want. Your brain must come up with some way to encode each of these responses separately because if it didn't, then you might engage in one response when you really meant another. How the brain separately encodes different responses is a problem of encoding diversity.

I talked about a similar problem in encoding diversity before with respect to auditory stimuli. The question then was how does the brain encode all the different sounds you can hear? Though tempting at first, it isn't done by assigning one neuron to represent each sound because ultimately you would run out of neurons. (This is called the grandmother neuron fallacy because if it were true you would need one neuron to represent your grandmother, and what if you lost that neuron? Could you then not perceive your grandmother?) The solution that I talked about in that post was the idea that you could represent different stimuli using ensembles -- different combinations of neurons. Because this combinatorial code uses many, many units, the number of stimuli it can represent is essentially infinite.
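To get a feel for why a combinatorial code is so capacious, a toy calculation helps (the numbers here are illustrative, not from any study): if we simplify each neuron to being either "on" or "off" in an ensemble, the number of distinct ensembles grows as 2 to the number of neurons.

```python
# Toy illustration of combinatorial (ensemble) coding capacity.
# Simplifying each neuron to a binary on/off state, every distinct
# on/off pattern across the population is a possible ensemble.
def ensemble_capacity(n_neurons: int) -> int:
    """Number of distinct on/off patterns available across n_neurons."""
    return 2 ** n_neurons

print(ensemble_capacity(10))   # 1024 ensembles from just 10 neurons
print(ensemble_capacity(100))  # astronomically many from 100
```

One-neuron-per-stimulus coding scales linearly with neuron count; ensembles scale exponentially, which is why running out of neurons stops being a worry.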

Ensemble encoding is one way to encode stimulus diversity. Now, I want to talk about an extension of that concept that I will call orthogonal coding. To explain it, I am going to use an example from a recent paper by Sigala et al., which showed that different steps in tasks are encoded orthogonally in the prefrontal cortex of monkeys.

What is orthogonal coding, and why is it important?

Orthogonal encoding is an extension of ensemble encoding because it includes the notion that neurons are not either on or off; they fire at different rates at different times. The idea with ensemble encoding is that if I have a large set of neurons, I can take a subset of them and assign them to stimulus A. If that subset is firing, stimulus A is there. However, there can be more complexity because whether a particular neuron is in an ensemble says nothing about how much it is firing. For example, one neuron in an ensemble could fire just a little when stimulus A is there and a lot when stimulus B is there. This difference in the firing rate for the neuron contains relevant information about the stimulus. In short, ensemble encoding is important, but more information can be contained if we include how much the neurons fire in addition to whether they fire.

But what is the point of this word "orthogonal," Jake? Frankly that is a weird word to use, and I remember it only vaguely from high school math.

To show you what I mean, I want to describe a numerical construct that neuroscientists use to understand firing in a particular part of the brain during a particular task: the activity vector.

Using equipment developed in the last decade or so, neuroscientists have the ability to record the activity from large numbers of neurons (usually in the hundreds, depending on the brain region and the equipment) simultaneously in animals that are awake and performing tasks. This gives us much more data than we had in the past. But with all technical advances come challenges: how do we represent the data of hundreds of neurons firing simultaneously?

One way would be to take the firing of each neuron recorded and form it into a large vector. Each unit in the vector would represent the average firing rate over a particular length of time for a particular neuron. (The selection of the time is important, so more on that later.) We call this vector the activity vector. You can think of that vector moving through a multidimensional space called the activity space. Different things the animal is doing or perceiving cause this vector to move. As neuroscientists, we want to understand how neuronal activity relates to behavior, so we relate the movements of this vector through activity space to the behavior of the animal.
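The construction described above can be sketched in a few lines. This is a minimal, hypothetical example (the spike times and epoch length are invented for illustration), not a reconstruction of any real recording:

```python
import numpy as np

# Sketch of building an activity vector from spike counts (invented data).
# spike_times[i] holds the spike times (in seconds) of neuron i during one epoch.
spike_times = [
    np.array([0.01, 0.12, 0.30, 0.55]),              # neuron A: 4 spikes
    np.array([0.05, 0.40]),                          # neuron B: 2 spikes
    np.array([0.02, 0.10, 0.22, 0.35, 0.48, 0.60]),  # neuron C: 6 spikes
]
epoch_duration = 0.7  # seconds over which spikes were counted

# Each entry is one neuron's average firing rate (APs / time) in the epoch.
activity_vector = np.array([len(t) / epoch_duration for t in spike_times])
print(activity_vector)  # one point in a 3-dimensional activity space
```

With three neurons this vector lives in a 3-dimensional activity space; with hundreds of recorded neurons, the same construction yields a point in a space with hundreds of axes.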

To illustrate, let's consider a simple case. We are going to take an activity vector for two neurons A and B.

Say we record from two neurons in the same region, A and B, when an animal is performing a particular task. We count the number of action potentials for each and calculate a firing rate (APs/time). Then we make a graph like the one below.

[Figure: firing rates of neurons A and B plotted as a vector in a two-dimensional activity space]

The firing rate of each neuron is represented as a vector in this two-dimensional space.

Now that is fine and good, but in order to say something interesting about the brain we need to perform an experiment comparing two situations. Let's say we record from A and B again during situations 1 and 2, and we get a graph that looks like the following.

[Figure: activity vectors for situations 1 and 2 pointing in similar directions]

You can see in this example that situation 1 and 2 produce relatively similar firing patterns in neurons A and B. We can tell this because the firing rates of the two neurons are pretty much the same in both situations. From a mathematical perspective, if we were to calculate the correlation, these vectors would be highly correlated. We might conclude that A and B are doing something similar in 1 and 2, but it would be important to confirm that in many more situations before we drew that inference.

Let's say instead that the graph looks like this.

[Figure: activity vectors for situations 1 and 2 pointing in very different directions]

In this case, the activity in A and B is very different in situations 1 and 2. In fact, if we were to take the dot product of the two vectors, we would find that it is pretty close to zero, indicating that these vectors are nearly orthogonal. In this case, we could conclude that neurons A and B differentiate situations 1 and 2. If A and B are neurons from a region of the brain responsible for generating the response to 1 and 2, we might say that A and B differentiate the response. (Again, many more experiments would be required to prove that inference.)
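The two comparisons above can be made concrete with a quick computation. The firing rates here are made up to mimic the two cases (similar vectors versus near-orthogonal vectors); the similarity measure is the cosine of the angle between the vectors, which is the dot product after normalizing for vector length:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two activity vectors: 1 = parallel, 0 = orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Case 1: similar firing in situations 1 and 2 (invented rates for neurons A, B).
sit1 = np.array([10.0, 12.0])
sit2 = np.array([11.0, 13.0])
print(cosine_similarity(sit1, sit2))   # close to 1: highly correlated vectors

# Case 2: very different firing patterns across the two situations.
sit1b = np.array([12.0, 1.0])
sit2b = np.array([1.0, 12.0])
print(cosine_similarity(sit1b, sit2b))  # close to 0: nearly orthogonal vectors
```

A cosine near 1 corresponds to the first graph (A and B doing something similar in both situations); a cosine near 0 corresponds to the second (A and B differentiating the situations).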

(Housekeeping: when we actually do experiments like this one, we use some mathematical tricks that I haven't mentioned here. First, neuronal firing is highly variable both within and between neurons. As a consequence, rather than the bulk firing rate, we tend to use the firing rate for a particular neuron normalized to that neuron's average firing across all the tasks. This prevents a single highly variable or high-firing neuron from distorting the analysis. It also means that the axes of the graph would extend below zero, where the neuron fell below its average firing rate. In addition, because firing data are compiled over multiple trials of the same task, a neuron might fire during one trial but not during another; neurons are often unreliable in their firing. Thus, when doing these experiments, the normalized activity vector is often also scaled for reliability.)
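The normalization step described in the housekeeping note can be sketched like this. The rates are invented, and z-scoring (subtracting each neuron's mean and dividing by its standard deviation) stands in for whatever specific normalization a given lab uses:

```python
import numpy as np

# Sketch of per-neuron normalization (invented firing rates, illustrative only).
# rates[i, j] = neuron i's firing rate in condition j.
rates = np.array([
    [50.0, 55.0, 45.0],   # a high-firing neuron
    [ 2.0,  8.0,  5.0],   # a low-firing neuron
])

# Normalize each neuron to its own mean across all conditions, scaled by its
# own variability, so a single high-firing or highly variable neuron cannot
# dominate the analysis. Negative values mean firing below that neuron's mean.
mean = rates.mean(axis=1, keepdims=True)
std = rates.std(axis=1, keepdims=True)
normalized = (rates - mean) / std
print(normalized)
```

After this step, both neurons contribute on the same scale, and the axes of the activity space run below zero exactly as the note describes.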

What is the significance of orthogonality with respect to encoding diversity?

In the example above only two neurons were given. If we stipulate that different situations are encoded by vectors that are orthogonal to one another -- to enhance the contrast between them -- that doesn't leave many different vectors: a two-dimensional activity space admits only two mutually orthogonal directions, so the number of different situations that could be encoded is limited. On the other hand, consider an activity space with hundreds of axes -- one for each neuron. An N-dimensional space admits N mutually orthogonal vectors, and the number of nearly orthogonal vectors grows exponentially with the number of axes. The significance of orthogonality in encoding diversity is that it allows an enormous number of different things to be encoded discretely. The more dimensions there are in the activity space, the more different all the items encoded in that activity space can be.
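A quick simulation makes the dimensionality point vivid. A well-known fact about high-dimensional spaces is that randomly chosen vectors are nearly orthogonal on average; the sketch below (random Gaussian vectors, invented parameters) measures this directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=200):
    """Average |cosine similarity| between random vector pairs in `dim` dimensions."""
    total = 0.0
    for _ in range(n_pairs):
        u = rng.standard_normal(dim)
        v = rng.standard_normal(dim)
        total += abs(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return total / n_pairs

# As dimensionality grows, random vectors become closer and closer to orthogonal.
for dim in (2, 10, 100, 1000):
    print(dim, round(mean_abs_cosine(dim), 3))
```

In two dimensions, random vectors overlap substantially; with a thousand axes, almost any pair of vectors is nearly orthogonal, which is what makes a large neural population such a roomy code.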

This idea of activity vectors encoding things discretely is big among neuroscientists who study these things right now. It helps us address the problem of having too much data and lets us relate the activity in different situations to one another. That being said, there are a couple of caveats to remember.

First, the activity vector is a means to conceptualize changes in activity. It is an abstract construct; that does not mean it actually exists in the brain or that it automatically has significance. Experimental design is still key to determining whether changes in the activity vector actually relate to changes in behavior.

Second, orthogonal coding is only one means of encoding diversity. When we compile an activity vector, we have to select the time periods over which to count action potentials. In general (as is the case below), these time periods -- usually called epochs -- are selected to be significant in relation to the task the animal is actually performing; for example, the time between telling the animal to respond and the actual response could be an epoch. However, it is possible that your epoch could be arbitrary or even irrelevant to the way information is actually represented. Also, activity vectors ignore the intrinsic timing of neuronal firing in the brain. The brain has a lot of oscillatory activity -- it is what scientists measure using an EEG. We think that when a neuron fires relative to this oscillatory activity might also carry information, and this information is discarded in the activity vector. (Unless of course you are defining the epochs relative to the oscillations, but that is more complicated than I want to get into...)

Sigala et al. and orthogonal encoding of sequential tasks

Now that we have defined orthogonal encoding, I want to talk about an excellent example in Sigala et al. The authors recorded in monkeys from a region of the brain called the prefrontal cortex. We know that the prefrontal cortex is involved in organizing complex behavior. One of the things that it organizes is multi-step responses: responses where a sequence of behaviors is required rather than a reflexive single response. The authors wanted to understand how prefrontal activity differentiates different parts of the task. (For those interested, this recording was done in neurons distributed across the ventro- and dorsolateral prefrontal cortex.)

The task they used works like this: a monkey with the recording equipment in this part of its brain would sit in a chair with its eyes fixed on a central point on a screen. Then a cue (a picture of some sort) would appear on either side of the fixation point, but the animal wasn't supposed to look at the cue. Rather, the animal would wait through a period called the delay until a target associated with the cue appeared. (The target was another picture.) The animal would then move its eyes toward the target. If it did all of this correctly, it would receive a treat. Importantly for this study, there were three different cue-target pairings and three different epochs for recording: cue, delay, and target. There were also two directions in which the eyes could move: contralateral (opposite side) to the cue or ipsilateral (same side) to the cue.

The authors compiled activity vectors for 324 neurons in these regions during the task in two separate animals. (More neurons were recorded, but some did not change their firing rates during the task.) The activity in these neurons was broken down by epoch, by the cue-target pairing used, and by whether the target was ipsilateral or contralateral. These numbers were used to make activity vectors for each epoch/pairing/direction set.

The authors then compared the activity vectors to see which ones were most similar. There are a lot of ways to depict this data, but I like the one below the best. The following is Figure 4 from the paper. It is a tree diagram showing the correlations of different activity vectors across multiple trials. Sets of vectors closer on the tree are more similar.

[Figure 4 from Sigala et al.: tree diagram showing the correlations of activity vectors across trials]

The left shows the category of vector. For example, the top left shows the vectors recorded when the target was being presented on the same side as the cue for each of the three cue-target stimulus pairings (1, 2, 3).

There are some interesting things about this data. Note how the cue-target pairings cluster together for each epoch (cue, target, delay) and each direction (ipsi- and contralateral). What this means is that for very different visual cues, the activity in these regions was very similar. Just in case you don't believe me, below are the cue-target pairings used. (From Figure 1B)

[Figure 1B from Sigala et al.: the three cue-target stimulus pairings]

These pictures are very different, yet the activity during these periods was very similar.

What appears to be the primary determinant of similar activity in these regions is not the stimulus but rather the component of the task that the animal is performing. This is illustrated by the fact that while the target-epoch vectors cluster together and the cue-epoch vectors cluster together -- within each epoch they are very similar -- the target vectors as a group are very separate from the cue vectors as a group. Likewise, activity vectors when the animal is moving in one direction -- ipsilateral or contralateral -- tend to cluster.
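The comparison behind a tree diagram like this is just a correlation matrix over activity vectors. The toy below uses entirely fabricated five-neuron vectors (not the paper's data) for two epochs and two stimulus pairings, to show how same-epoch conditions end up correlated while cross-epoch conditions do not:

```python
import numpy as np

# Fabricated activity vectors: two epochs x two cue-target pairings,
# each over the same five neurons (normalized units, invented values).
vectors = {
    "cue_pair1":    np.array([ 1.0,  0.9, -0.5, 0.1,  0.0]),
    "cue_pair2":    np.array([ 0.9,  1.1, -0.4, 0.0,  0.1]),
    "target_pair1": np.array([-0.6,  0.1,  1.0, 0.9, -0.3]),
    "target_pair2": np.array([-0.5,  0.0,  1.1, 1.0, -0.2]),
}

names = list(vectors)
corr = np.corrcoef([vectors[n] for n in names])

# Same-epoch conditions correlate strongly; across epochs they do not.
# Clustering on these correlations is what groups same-epoch conditions
# together on the tree.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: r = {corr[i, j]:.2f}")
```

Feeding a correlation (or distance) matrix like this into a hierarchical clustering routine produces exactly the kind of tree shown in Figure 4: conditions with similar activity vectors join branches early, dissimilar ones join late.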

The inference the authors make from these data is that the prefrontal cortex organizes encoding for sequential tasks orthogonally. The purpose of that orthogonal encoding is to differentiate the different steps in the task rather than the different stimuli. The activity for varying stimuli is similar; what is significant is the step the animal is on in the task.

The authors summarize the significance of their findings:

Many previous experiments show dense coding of task-relevant information across populations of PFC neurons. Neural activity may be selective for particular phases of a complex task, and for stimulus or other information within one phase. Here we examined the similarity structure of this neural representation in a cue-target association task.

For different task phases, we found approximately orthogonal patterns of PFC activity. Although many cells discriminated different task phases, this was not achieved by separate cell populations uniquely responsive to a particular phase. Instead, many cells were active in each phase, and a cell's activity in one phase was not predictive of its activity in others... Within one task phase, in contrast, we found correlated activity patterns for different stimuli/trial types. Together, these results show a hierarchical representation, with one basic activity pattern associated with each task phase, and stimulus information coded by modulations of the phase pattern. Both within and between task phases, PFC representations were also modulated by hemifield, with generally lower correlations between hemifields.

Selective neural activity associated with different task phases has been described in different species, and in lateral prefrontal, orbital prefrontal, and anterior cingulate cortex. In maze tasks, for example, cells in the rat frontal cortex may fire at trial onset, or while approaching or leaving a goal, often rather independently of current maze location. As in our data, previous studies show activity of single PFC cells at one or multiple task phases. Our finding of distributed, orthogonal coding for discrete task phases gives quantitative form to such results. The benefits of orthogonal coding are well known, providing efficient representation and discrimination of many independent events in a fixed population of cells. Sequential activity can contain an arbitrary number of successive steps, each requiring different information and operations. Successive steps may be seen as different contexts determining what information is relevant and what must be done. Orthogonal coding may allow PFC to support a large number of these somewhat independent steps or task contexts.

In contrast, similarity between two distributed representations allows for similarity of their functional effects. In our task, phase information is presumably important in controlling the separate operations appropriate for each phase: for cues, to retrieve the associated target; for delays, to wait while maintaining a target description; for targets, to await stimulus offset before releasing a saccade. Correlated coding for different stimuli at the same task phase -- strongly within hemifield, and weakly between hemifields -- may reflect similar operations applied to different information content. (Emphasis mine. Citations removed.)

Overall, I think this is a beautiful experiment that very clearly illustrates the significance of an important concept in coding diversity: orthogonal encoding. Congratulations to the authors on their excellent work.

Sigala, N., Kusunoki, M., Nimmo-Smith, I., Gaffan, D., Duncan, J. (2008). Hierarchical coding for sequential task events in the monkey prefrontal cortex. Proceedings of the National Academy of Sciences, 105(33), 11969-11974. DOI: 10.1073/pnas.0802569105
