I've got a new article in Nature this week on the growing number of learning and memory enhanced strains of mice, and what these smart rodents can teach us about the human mind. I also discuss Luria's The Mind of a Mnemonist and the stunning research demonstrating that the cognitive deficits of many neurodevelopmental disorders, including Neurofibromatosis, Down's Syndrome and Fragile X, have actually proven to be reversible, at least in mouse models. The article is behind a paywall, but here's the lede:
Ten years ago, Princeton researcher Joe Tsien eased a brown mouse, tail first, into a pool of opaque water. The animal squirmed at first; mice don't generally like getting wet. But once released it paddled in a wide circle, orienting itself by the array of coloured shapes on the walls above the pool. Within seconds, the mouse headed straight for the safety of a small platform hidden just beneath the water's surface.
Most mice require at least six experimental sessions before they're able to consistently remember the location of the platform in a Morris water maze. But this animal needed only three.
Tsien named his creation Doogie after the teenage genius in the television programme Doogie Howser, MD. The work was one of the earliest examples of neuroscientists using genetic engineering to generate cognitively enhanced animals in a bid to understand memory and learning.
"There's something magical about taking a mind and making it work better," says Alcino Silva, a professor of neuroscience at the University of California, Los Angeles and one of the pioneers of the enhanced cognition field. "In neuroscience, we've learned so much from loss-of-function mutants. But we're only beginning to learn from these smart animals."
Researchers have now created or identified at least 33 different mutant mouse strains that, like Doogie, demonstrate enhanced learning and memory. The animals can learn faster, remember events for longer and solve complex mazes that confound their ordinary littermates. And because both humans and rodents rely on virtually identical molecular pathways to learn new information and form long-term memories, the hope is that the changes which enhance rodent memory will inform research into treatments for a wide variety of learning and memory deficits in humans, from dementia to dyslexia.
Importantly, work on these tiny creatures suggests that some deficits in human learning and memory, long believed to be locked in and made permanent during development, are actually reversible, a testament to the extraordinary plasticity of the brain. Moreover, the existence of such mice raises a tantalizing question: Can normally functioning human brains be improved? Already, a profusion of "smart" drugs, designed to help with attention deficits and sleep disorders, has infiltrated college campuses and workplaces, where healthy people are hungry to get ahead. Within the next decade, it might be possible to take a pill that will not only help alleviate the symptoms of learning disorders but also act like an intellectual steroid, pumping up the brain's potential. What the mice have clearly shown, in ways that pill-popping humans have not, is that enhancement could have unexpected trade-offs.
And here's the end. I tried valiantly to work Borges' Funes the Memorious into my article, but, alas, the short story got cut for space:
The temptation, of course, is that otherwise healthy humans will wish to harness the potential of such drugs. "I think these drugs are going to lead to some real slippery-slope issues," says Martha Farah, a cognitive neuroscientist at the University of Pennsylvania and the director of the Center for Neuroscience and Society. "There is no clear or objective line between a normal brain and one that needs treatment. For instance, we can say that we're only going to use these memory drugs for people with demonstrated memory decline. But your memory starts to diminish in your thirties. Does that mean every forty-year-old is going to [be] taking these pills?"
Farah notes similar mission creep for attention deficit disorder medications. For instance, one in five respondents to a 2008 Nature web poll admitted to using drugs like Ritalin or Modafinil to enhance their focus and productivity. "You get more and more people taking them for less and less severe conditions," she says.
Little is known about side effects and trade-offs, but initial clues offered by smart mice are worrying. Consider the H-ras strain developed by Silva, which is being tested in the windowless basement of the UCLA psychology building. When the mouse is placed in an enclosure where days before it had received a mild electric shock - a jolt so minor that most mice don't even react to it - this mouse cowers in the corner, frozen with fear. Its enhanced memory is both blessing and burden. Silva cites other strains of smart mice that excel at solving complex exercises, such as the Morris water maze, but struggle with simpler tasks. "It's as if they remember too much," he says.
Farah sees a parallel between these mice and one of the few case studies of an individual with profoundly enhanced memory. In the early 1920s, the Russian neurologist A.R. Luria began studying the mnemonic skills of a newspaper reporter named Sherashevsky, who had been referred to the doctor by his editor. Sherashevsky had such a perfect memory that he often struggled to forget irrelevant details. After a single reading of Dante's Divine Comedy, he was able to recite the complete poem by heart. While this flawless memory occasionally helped Sherashevsky at work - he never needed to take notes - Luria also documented the profound disadvantages of such an infinite memory. Sherashevsky, for instance, was almost entirely unable to grasp metaphors, since his mind was so fixated on particulars. "He [Sherashevsky] tried to read poetry, but the obstacles to his understanding were overwhelming," Luria writes. "Each expression gave rise to a remembered image; this, in turn, would conflict with another image that had been evoked."
For Luria, the struggles of Sherashevsky were a powerful reminder that the ability to forget is just as important as the ability to remember. Thus, enhancing human memory in individuals without severe cognitive deficits might actually be counterproductive.
Furthermore, many scientists are concerned that these animal models of enhanced cognition might obscure subtle side-effects, which can't be studied in rodents or primates. Farah is currently looking at the tradeoff between enhanced attention - she gives subjects a mild amphetamine - and performance on creative tasks. Other researchers have used computer models of memory to demonstrate that memory is actually optimized by slight imperfections, which allow us to see connections between different but related events. "The brain appears to have made a compromise in that having a more accurate memory interferes with the ability to generalize," Farah says. "You need a little noise in order to be able to think abstractly, to get beyond the concrete and literal."
And then there's the problem of non-cognitive side-effects. Because many of these learning and memory enhancements involve molecules that regulate a wide variety of fundamental cellular pathways, such as CREB, it might be impossible to restrict their action to the central nervous system. Doogie, for example, has been reported to suffer from increased sensitivity to chronic pain. Or consider Silva's enhanced strain: the H-ras mutation that makes the mice so smart is also the most common mutation present in dissected tumors.
"It often takes years to fully understand all the side-effects," Kandel warns. "The mice will help us work out some of the bugs, but these will still be very risky treatments. It's always going to come down to weighing the benefit versus the cost - there isn't going to be a magic cure. "
Like Kandel, Tully has spent much of the last decade trying to translate the biochemistry of memory into useful medical therapies. He remains enthusiastic, although he's also aware that the road ahead is littered with false leads, mistaken theories and treatments that work in mice but fail in clinical trials. It's been ten years since Tsien created Doogie, and yet only a few potential medications have even entered human clinical trials.
"When I began working on these learning and memory drugs, I had no gray hair and I thought I'd find a drug that might be able to help my parents," Tully says. "Now my hair is mostly white and my parents are dead. I'm just hoping that we can find a drug in time for me."
Can't help but think of Algernon. And Charlie.
Of course this article begs the question of how intelligence can be measured, and the fact that there are different kinds of intelligence (but I suppose that would be a different article entirely...)
Still, Luria observes that his memory-savant patient could not comprehend metaphors. I suppose he means literary metaphor, but it is interesting to reconsider this in light of Lakoff's metaphor theory: would the patient have trouble understanding a sentence like "Our relationship is on the wrong path"?
What would a higher intelligence look like? Perhaps we would have trouble recognizing it, being that we are so self-centered. Recall documented stories from anthropology of first contact between Westerners and people from small-scale societies. The people without the benefits of Western civilization often found the Westerners' preoccupations and ways of thinking to be stupid, often hilariously so! And if you believe, as I do, that many aspects of small-scale societies' ways of thinking are superior to those of Western civilization, then the misunderstanding goes the other way as well.
So who is to say that Luria was not failing to recognize the unimaginable genius of his patient? Very smart people do well on standardized tests and succeed in society, but genius is more difficult to measure. Look, for example, at Henry Darger, the outsider artist who created one of the most extraordinary and belatedly respected bodies of work of the 20th century. He worked as a janitor and was generally considered to be mentally disabled by his peers. He seems never to have learned simple facts - that girls don't have penises, for instance - but his art is clearly the product of an imagination worlds larger than most of our own.
Furthermore, the flashes of insight gained by people in the throes of religious ecstasy, or on psychedelics, are often seen as incomprehensible or pedantic by minds in the state of consciousness referred to by psychonauts as "consensus reality." If you have ever tried to document one of these insights, you may be familiar with the feeling of insight as well as the disappointment at its untranslatability. So which matters when evaluating intelligence: our inner experience, or empirically measurable real-world behavior?
When Blake writes:
"To see a world in a grain of sand,
And a heaven in a wild flower,
Hold infinity in the palm of your hand,
And eternity in an hour"
he is trying to convey his sense of insight. If you are unwilling to accept that a higher mind's thought process may be unrecognizable by the average mind, then perhaps you will answer, "Chill out, Bill - it's only a grain of sand."
When a race of super-smart rodents has taken over the earth, humans will come to regret their meddling.
Rats of Nim?
Rats? No, hyper-dimensional mice who designed us as components of their super duper mega universe computer. They've been experimenting on us for aeons.
My research in the Kandel lab involved down-regulating CREB to see the effects on various genes involved in learning and memory (down-regulation caused amnesia). So, up-regulation would presumably cause some kind of super-memory, although I didn't test that hypothesis.
I wish I could respond directly to someone's comment. Oh well, here goes...
Re: Farley Gwazda
This article does not "beg the question of how intelligence can be measured". It directly asks whether more memory means more intelligence. While the measurement of intelligence has long been questioned, I believe most would assume that memory and intelligence, if not causally linked, are at least correlated.
Lehrer presents evidence that more memory does not necessarily mean more intelligence. This article obviously connects to broader topics, but that is what makes it such a good article. Begging the question is a logical fallacy in which the author assumes the conclusion within the premise.
Yes, Thank You!
Although I fear that fighting to preserve the original meaning of "begging the question" is a long-lost cause. It makes me cringe, but if more people use the phrase incorrectly than correctly, the incorrect meaning must at some point become the correct one.