Every now and then, I read some science from some other dimension. That is, the methods are so unusual, the relevant theories so fringe, or the conclusions so startling that I feel like the authors must be building on work from a completely separate science, with its own theories and orthodoxy. This can be good or bad, and is usually the latter. But in the case of Zhang & Luck’s recent papers, it’s very, very good.
To appreciate what they’ve done, here’s a little background from this dimension’s science – specifically, the science of forgetting. The phenomenon of “forgetting” has been the subject of much study, and a number of questions remain controversial:
- Is forgetting a process in which items completely vanish from memory, or do those items merely become inaccessible to the search processes people use to retrieve them?
- Does forgetting occur because memories simply decay over time, or because memories get overwritten? (either process could occur completely or partially, depending on the answer to the first question)
- Can forgetting occur intentionally (à la Freudian suppression), or does it only emerge from secondary causes (for example, from practicing the retrieval of other, competing items)?
These are just some of the questions addressed in decades of memory research, and clear answers continue to elude the field. But in the midst of these heated and long-standing debates, Zhang and Luck did the following:
1) developed two new theories based on a new question,
2) tested these theories with a new method to mathematically model behavior, and
3) were able to conclusively rule out one of these theories
Hopefully this gives you some appreciation for the sheer creativity behind this work. Now, on to their question:
Is the precision of memory “analogue,” such that memories differ in resolution, or is it more digital, such that memories are either present or absent, with fixed resolution?
It’s a natural question – just not one that ever occurred to anyone else! Or at least, no one else thought of the right method to address it:
- have subjects remember items that vary continuously in some dimension (color or shape) over a variable delay period (1-10 seconds),
- have them report their memory of a particular item by selecting among the full continuous distribution of possible items
- analyze the precision of their responses as a mixture of two distinct processes: complete forgetting (in which responses should be randomly distributed across all possible responses) and gradual decay in resolution (in which responses should be normally distributed around the correct response, such that the standard deviation of this “bell curve” describes the decay in resolution)
- determine which changes with time, and which changes with the number of items subjects need to remember!
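The mixture-model logic above can be sketched in a few lines of Python. This is my own simplified reconstruction, not the authors’ actual code: each response error is treated as coming either from a normal distribution centered on the true value (whose standard deviation is the “resolution”) or from a uniform guessing distribution spanning the response wheel, and the two free parameters are fit by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, errors):
    """errors: response minus target, in radians, wrapped to [-pi, pi]."""
    p_mem, sd = params                       # probability item is in memory; resolution
    gauss = np.exp(-0.5 * (errors / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    uniform = 1.0 / (2 * np.pi)              # random guesses: flat over the wheel
    likelihood = p_mem * gauss + (1 - p_mem) * uniform
    return -np.sum(np.log(likelihood))

def fit_mixture(errors):
    """Fit (p_mem, sd) by maximum likelihood; starting values are arbitrary."""
    result = minimize(neg_log_likelihood, x0=[0.8, 0.3], args=(errors,),
                      bounds=[(0.01, 1.0), (0.05, np.pi)])
    return result.x
```

With this in hand, the two theories make separable predictions: gradual decay should show up as `sd` growing with delay, while sudden death should show up as `p_mem` dropping with delay at constant `sd`. (A fully faithful version would use a wrapped/von Mises distribution rather than an unwrapped Gaussian, but for small `sd` the difference is negligible.)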
I’ll quit the ass-kissing and just spill the beans: it turns out that memories are stored with discrete “digital” precision. That is, as demonstrated in their Nature paper, the precision of memories is unaffected by the number of items remembered (unlike what would be expected of a more analogue model, in which “memory resources” might be fluidly distributed across many to-be-remembered items, sacrificing the precision of each memory for the total capacity of memory). And as demonstrated in their Psych Science paper, memories for these items persist over time in a way that is also digital: items are completely forgotten rather than degraded in precision (contrary to what one would expect from an analogue system, which usually fails “gracefully” rather than “catastrophically”).
Since I’ve discussed the Nature result before, I’ll just get nitty-gritty on the Psych Science paper: modeling responses as a mixture of a random (flat) distribution and a normal (bell-curve) distribution, Zhang & Luck found that only the weight of the random distribution was significantly affected by longer durations of memory maintenance (4 vs. 10 seconds). A trend towards a slight decay in memory precision was observed from 1 to 4 seconds, but this could merely reflect an iconic memory trace (the displays were not masked). The mixture models always accounted for >96% of the variance in behavior, indicating a good fit of theory to data. It just so happens that only one theory is right: the fate of forgotten memories is sudden death, not gradual decay. Well, at least that’s true for these color and shape stimuli, these 12 adults, these particular delays, and this particular procedure (in which subjects were prevented from verbally rehearsing the to-be-remembered material by articulatory suppression).
Zhang & Luck conclude that little loss in precision or capacity occurs between 1 and 4 seconds of memory maintenance, but that significant losses in capacity (and not precision) occur after 4 seconds. They suggest the results are consistent with “all-or-none transitions” observed in other psychological domains, suggestive of the underlying operations of a “thresholded” system (as we know neurons are). But the results go a little deeper than that…
The authors raise the possibility that precision and capacity are determined by the formation of neural “attractor states,” in which self-recurrent activity forms a semi-stable but irregular representation of neural activity capable of maintaining information. Once these attractors cross some threshold, they become too irregular or unstable to actually maintain memories. They suggest that “it is possible that some aspect of the memory representation declines gradually over time, leading to sudden termination when a threshold has been reached (just as a gradual increase in temperature may eventually cause a computer to suddenly shut down),” but if this is the case, then very little reduction in precision is possible before the attractors suffer from “sudden death.”
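The computer-temperature analogy is easy to simulate. Below is a toy sketch (entirely my own illustration, with made-up parameters, not a model from either paper) in which a hidden memory-strength variable declines gradually, but the observable response is all-or-none: precise with fixed resolution while strength stays above threshold, pure guessing once it falls below.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_response(delay, decay_rate=0.2, threshold=0.3, report_sd=0.2):
    """Hypothetical latent-threshold model: 'strength' declines linearly
    with delay (plus noise), but the report is binary in character --
    full, fixed precision above threshold, a random guess below it."""
    strength = 1.0 - decay_rate * delay + rng.normal(0, 0.05)
    if strength > threshold:
        return rng.normal(0.0, report_sd)     # intact memory: fixed resolution
    return rng.uniform(-np.pi, np.pi)         # "sudden death": random response

# At a short delay nearly all responses stay precise; at a long delay the
# error distribution goes flat, even though strength itself decayed gradually.
short = np.array([simulate_response(1.0) for _ in range(2000)])
long_ = np.array([simulate_response(4.0) for _ in range(2000)])
```

The point of the toy model is that the mixture-model analysis would see exactly what Zhang & Luck report: the guess rate changes with delay while the precision of the surviving responses does not, even though the underlying variable decays smoothly.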
On the other hand, one aspect of these results is fairly ambiguous: what exactly is suffering from sudden death? There are at least two possibilities: sudden death applies to the continuously-varying stimulus dimension (color or shape, in this case), or it applies to the association of those dimensions with the current trial or current location. Interestingly, if either of these bindings (to trial or to location) gradually decays over time, that decay could manifest as apparently random responding. That is, since trial orders and locations were randomized, confusion of one location or trial with another would appear random at the aggregate level.
While the results could support many neural network models of working memory, including discrete slot-based systems and attractor-based systems, they could be seen to pose a challenge to others. That is, according to some, working memory varies continuously in strength between individuals and across time. This view can only accommodate Zhang & Luck’s data if this strength variable differs only between subjects and has a strongly bimodal quality (in dynamical-systems speak, it must be meta-stable [mostly assuming one of a set number of possible states] and must exhibit a very strong bifurcation [transitions from one state to another must be fairly rapid]). These constraints are largely compatible with the neural network implementations of these “strength” theories, I think, so any challenge to those theories is more apparent than real.