cortex en Goodbye Scienceblogs <span>Goodbye Scienceblogs</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>NOTE: This blog has moved. The Frontal Cortex is now over <a href="">here</a>.</p> <p>I've got some exciting news: Starting today, the Frontal Cortex will be moving over to the Wired <a href="">website</a>. Needless to say, the move comes with the usual mixture of emotions, as I've greatly enjoyed my four years as part of the Scienceblogs community. It's been an honor to share this space with such a fine collection of scientists and writers. I'm sad to be leaving. (I should note, by the way, that this move was planned long before Pepsi, etc.) However, I'm really thrilled to be joining the blog network of the publication that I contribute to in print. I apologize for the hassle of changing RSS feeds, bookmarks and all the rest, but I really hope we can make this move together. Regardless, thanks so much for reading!</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Wed, 07/21/2010 - 10:10</span> Wed, 21 Jul 2010 14:10:43 +0000 cortex 107675 at Twitter Strangers <span>Twitter Strangers</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Over at <a href="">Gizmodo</a>, Joel Johnson makes a convincing argument for adding random strangers to your twitter feed:</p> <blockquote><p>I realized most of my Twitter friends are like me: white dorks. So I picked out my new friend and started to pay attention.</p> <p>She's a Christian, but isn't afraid of sex. She seems to have some problems trusting men, but she's not afraid of them, either. She's very proud of her fiscal responsibility. 
She looks lovely in her faux modeling shots, although I am surprised how much her style aligns with what I consider mall fashion when she's a grown woman in her twenties. Her home is Detroit and she's finding the process of buying a new car totally frustrating. She spends an embarrassing amount of time tweeting responses to the Kardashian family.</p> <p>One of the best things about Twitter is that, once you've populated it with friends genuine or aspirational, it feels like a slow-burn house party you can pop into whenever you like. Yet even though adding random people on Twitter is just a one-click action, most of us prune our follow list very judiciously to prevent tedious or random tweets from polluting our streams. Understandable! But don't discount the joy of discovery that can come by weaving a stranger's life into your own. </p></blockquote> <p>I'd argue that the benefits of these twitter strangers extend beyond the fleeting pleasures of electronic eavesdropping. Instead, being exposed to a constant stream of unexpected tweets - even when the tweets seem wrong, or nonsensical, or just plain silly - can actually expand our creative potential.</p> <p>The explanation returns us to the banal predictability of the human imagination. In study after study, when people free-associate, they turn out not to be very free. For instance, if I ask you to free-associate on the word "blue," chances are your first answer will be "sky". Your next answer will probably be "ocean," followed by "green" and, if you're feeling creative, a noun like "jeans". The reason for this is simple: Our associations are shaped by language, and language is full of cliches. </p> <p>How do we escape these cliches? <a href="">Charlan Nemeth</a>, a psychologist at UC-Berkeley, has found a simple fix. Her experiment went like this: A lab assistant surreptitiously sat in on a group of subjects being shown a variety of color slides. The subjects were asked to identify each of the colors.
Most of the slides were obvious, and the group quickly settled into a tedious routine. However, Nemeth instructed her lab assistant to occasionally shout out the wrong answer, so that a red slide would trigger a response of "yellow," or a blue slide would lead to a reply of "green". After a few minutes, the group was then asked to free-associate on these same colors. The results were impressive: Groups in the "dissent condition" - these were the people exposed to inaccurate descriptions - came up with much more original associations. Instead of saying that "blue" reminded them of "sky," or that "green" made them think of "grass," they were able to expand their loom of associations, so that "blue" might trigger thoughts of "Miles Davis" and "smurfs" and "pie". The obvious answer had stopped being their only answer. More recently, Nemeth has <a href="">found</a> that a similar strategy can also lead to improved problem solving on a variety of creative tasks, such as free-associating on ways to improve traffic in the Bay Area. </p> <p>The power of such "dissent" is really about the power of surprise. After hearing someone shout out an errant answer - this is the shock of hearing blue called "green" - we start to reconsider the meaning of the color. We try to understand this strange reply, which leads us to think about the problem from a new perspective. And so our comfortable associations - the easy association of blue and sky - get left behind. Our imagination has been stretched by an encounter that we didn't expect.</p> <p>And this is why we should all follow strangers on Twitter. We naturally lead manicured lives, so that our favorite blogs and writers and friends all look and think and sound a lot like us.
(While waiting in line for my <a href="">cappuccino</a> this weekend, I was ready to punch myself in the face, as I realized that everyone in line was wearing the exact same uniform: artfully frayed jeans, quirky printed t-shirts, flannel shirts, messy hair, etc. And we were all staring at the same gadget, and probably reading the same damn website. In other words, our pose of idiosyncratic uniqueness was a big charade. Self-loathing alert!) While this strategy might make life a bit more comfortable - strangers can say such strange things - it also means that our cliches of free-association get reinforced. We start thinking in ever more constricted ways.</p> <p>And this is why following someone unexpected on Twitter can be a small step towards a more open mind. Because not everybody reacts to the same thing in the same way. Sometimes, it takes a confederate in an experiment to remind us of that. And sometimes, all it takes is a stranger on the internet, exposing us to a new way of thinking about God, Detroit and the Kardashians.</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Tue, 07/20/2010 - 06:10</span> Tue, 20 Jul 2010 10:10:34 +0000 cortex 107674 at Stress <span>Stress</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>I've got a new article in the latest Wired on the science of stress, as seen through the prism of Robert Sapolsky. The article isn't online yet (read it on the iPad!), but here are the opening paragraphs:</p> <blockquote><p>Baboons are nasty, brutish and short. They have a long muzzle and sharp fangs designed to inflict deadly injury. Their bodies are covered in thick, olive-colored fur, except on their buttocks, which are hairless. The species is defined by its social habits: The primates live in troops, or small groupings of several dozen individuals. 
These troops have a strict hierarchy, and each animal is assigned a specific ranking. While female rank is hereditary--a daughter inherits her mother's status--males compete for dominance. These fights can be bloody, but the stakes are immense: A higher rank means more sex. The losers, in contrast, face a bleak array of options: submission, exile, or death.</p> <p>In 1978, Robert Sapolsky was a recent college graduate with a biology degree and a job in Kenya. He had set off for a year of fieldwork by himself among baboons before he returned to the US for grad school and the drudgery of the lab. At the time, Sapolsky's wilderness experience consisted of short backpacking trips in the Catskill Mountains; he had lit a campfire exactly once. Most of what he knew about African wildlife he'd learned from stuffed specimens at the Museum of Natural History. And yet here he was in Nairobi, speaking the wrong kind of Swahili and getting ripped off by everyone he met. Eventually he made his way to the bush, a sprawling savanna filled with zebras and wildebeests and marauding elephants. "I couldn't believe my eyes," Sapolsky remembers. "There was an animal behind every tree. I was inside the diorama."</p> <p>Sapolsky slowly introduced himself to a troop of baboons, letting them adjust to his presence. After a few weeks, he began recognizing individual animals, giving them nicknames from the Old Testament. It was a way of rebelling against his childhood Hebrew school teachers, who rejected the blasphemy of Darwinian evolution. "I couldn't wait for the day that I could record in my notebook that Nebuchadnezzar and Naomi were off screwing in the bushes," Sapolsky wrote in A Primate's Memoir. "It felt like a pleasing revenge."</p> <p>Before long, Sapolsky's romantic vision of fieldwork collided with the dismal reality of living in the African bush.
His feet itched from a fungal infection, his skin was covered in bug bites, the Masai stole his stuff, he had terrible diarrhea, and he was desperately lonely. Sapolsky's subjects gave him no glimpse of good fellowship. They seemed to devote all of their leisure time--and baboon life is mostly leisure time--to mischief and malevolence. "One of the first things I discovered was that I didn't like baboons very much," he says. "They're quite awful to one another, constantly scheming and backstabbing. They're like chimps but without the self-control."</p> <p>While Sapolsky was disturbed by the behavior of the baboons--this was nature, red in tooth and claw--he realized that their cruelty presented an opportunity to investigate the biological effects of social upheaval. He began to notice, for instance, that the males at the bottom of the hierarchy were thinner and more skittish. "They just didn't look very healthy," Sapolsky says. "That's when I began thinking about how damn stressful it must be to have no status. You never know when you're going to get beat up. You never get laid. You have to work a lot harder for food."</p> <p>And so Sapolsky set out to test the hypothesis that the stress involved in being at the bottom of the baboon hierarchy led to health problems. At the time, stress was mostly ignored as a scientific subject. It was seen as an unpleasant mental state with few long-term consequences. "A couple of studies had linked stress to ulcers, but that was about it," he says. "It struck most doctors as extremely unlikely that your feelings could affect your health. Viruses, sure. Carcinogens, absolutely. But stress? No way." Sapolsky, however, was determined to get some data. He wasn't yet thinking lofty thoughts about human beings or public health. His transformation into one of the leading researchers on the science of stress would come later.
Instead, he was busy learning how to shoot baboons with anesthetic darts and then, while they were plunged into sleep, quickly measure the levels of stress hormones in their blood.</p> <p>In the decades since, Sapolsky's speculation has become scientific fact. Chronic stress, it turns out, is an extremely dangerous condition. And it's not just baboons: People are just as vulnerable to its effects as those low-ranking male apes. While stress doesn't cause any single disease--ironically, the causal link between stress and ulcers has been largely disproved--it makes most diseases significantly worse. The list of ailments connected to stress is staggeringly diverse and includes everything from the common cold and lower-back pain to Alzheimer's disease, major depressive disorder, and heart attack. Stress hollows out our bones and atrophies our muscles. It triggers adult-onset diabetes and is a leading cause of male impotence. In fact, numerous studies of human longevity in developed countries have found that "psychosocial" factors such as stress are the single most important variable in determining the length of a life. It's not that genes and risk factors like smoking don't matter. It's that our levels of stress matter more.</p> <p>Furthermore, the effects of chronic stress directly counteract improvements in medical care and public health. Antibiotics, for instance, are far less effective when our immune system is suppressed by stress; that fancy heart surgery will work only if the patient can learn to shed stress. As Sapolsky notes, "You can give a guy a drug-coated stent, but if you don't fix the stress problem, it won't really matter. For so many conditions, stress is the major long-term risk factor. Everything else is a short-term fix."</p> <p>The emergence of stress as a major risk factor is largely a testament to scientific progress: The deadliest diseases of the 21st century are those in which damage accumulates steadily over time.
(Sapolsky refers to this as the "luxury of slowly falling apart.") Unfortunately, this is precisely the sort of damage that's exacerbated by emotional stress. While modern medicine has made astonishing progress in treating the fleshy machine of the body, it is only beginning to grapple with those misfortunes of the mind that undo our treatments.</p> <p>The power of this new view of stress--that our physical health is strongly linked to our emotional state--is that it connects a wide range of scientific observations, from the sociological to the molecular. On one hand, stress can be described as a cultural condition, a byproduct of a society that leaves some people in a permanent state of stress. But that feeling can also be measured in the blood and urine, quantified in terms of glucocorticoids and norepinephrine and adrenal hormones. And now we can see, with scary precision, the devastating cascade unleashed by these chemicals. The end result is that stress is finally being recognized as a critical risk factor, predicting an ever larger percentage of health outcomes.</p></blockquote> <p>There's a lot more in the article. Here's one example of how stress destroys the body. Elissa Epel, a former grad student of Sapolsky's and a professor of psychiatry at UCSF, has <a href="">demonstrated</a> that mothers caring for chronically ill children report much higher levels of stress. That's not surprising. What is surprising is that these women also have dramatically shortened telomeres, those caps on the end of chromosomes that keep our DNA from disintegrating. (Women with the highest levels of stress had telomere shortening equal "to at least one decade of additional aging.") When our telomeres run out, our cells stop dividing; we've run out of life. Stress makes us run out of life faster. </p> </div> <span><a title="View user profile."
href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Mon, 07/19/2010 - 05:49</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/life-sciences" hreflang="en">Life Sciences</a></div> </div> </div> Mon, 19 Jul 2010 09:49:06 +0000 cortex 107673 at Smart Babies <span>Smart Babies</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Over at Sciam's <a href=";print=true">Mind Matters</a>, Melody Dye has a great post on the surprising advantages of thinking like a baby. At first glance, this might seem like a ridiculous conjecture: A baby, after all, is missing most of the capabilities that define the human mind, such as language and the ability to reason or focus. Rene Descartes argued that the young child was entirely bound by sensation, hopelessly trapped in the confusing rush of the here and now. A newborn, in this sense, is just a lump of need, a bundle of reflexes that can only eat and cry. To think like a baby is to not think at all.</p> <p>And yet, Dye notes that the very neural features that make babies so babyish might also allow them to learn about the world at an accelerated rate. Consider, for instance, the lack of a prefrontal cortex (PFC), which is woefully underdeveloped in infants. (Your PFC isn't fully developed until late adolescence.) One of the main consequences of not having an online PFC is that babies can't focus their attention. 
Alison Gopnik, a UC-Berkeley psychologist who has written a few wonderful books on baby cognition, suggests the following metaphor: If attention works like a narrow spotlight in adults - a focused beam illuminating particular parts of reality - then in babies it works more like a lantern, casting a diffuse radiance on their surroundings. </p> <p>The lantern mode of attention can make babies seem very peculiar. For example, when preschoolers are shown a photograph of someone - let's call her Jane - looking at a picture of a family, they make very different assumptions about Jane's state of mind. When the young children are asked questions about what Jane is paying attention to, the kids quickly agree that Jane is thinking about the people in the picture. But they also insist that she's thinking about the picture frame, and the wall behind the picture, and the chair lurking in her peripheral vision. In other words, they believe that Jane is attending to whatever she can see.</p> <p>While this less focused form of attention makes it more difficult to stay on task - preschoolers are easily distracted - it also comes with certain advantages. In many circumstances, the lantern mode of attention can actually lead to improvements in memory, especially when it comes to recalling information that seemed incidental at the time. This suggests that the so-called deficits of the baby brain are actually advantages, and might be there by design. Here's Dye:</p> <blockquote><p>The superiority of children's convention learning has been revealed in a series of ingenious studies by psychologists Carla Hudson-Kam and Elissa Newport, who tested how children and adults react to variable and inconsistent input when learning an artificial language. Strikingly, Hudson-Kam and Newport found that while children tended to ignore "noise" in the input, systematizing any variations they were exposed to, adults did just the opposite, and reproduced the variability they encountered. 
</p> <p>So, for example, if subjects heard "elle va à la fac" 60% of the time and "elle va à fac" 40% of the time, adult learners tended to probability match and include "la" about 60% of the time, whereas younger learners tended to maximize and include "la" all of the time. While younger learners found the most consistent patterns in what they heard, and then conventionalized them, the adults simply reproduced what they heard. In William James' terms, the children made sense of the "blooming, buzzing confusion" they were exposed to in the experiment, whereas the adults did not.</p> <p>Children's inability to filter their learning allows them to impose order on variable, inconsistent input, and this appears to play a crucial part in the establishment of stable linguistic norms. Studies of deaf children have shown that even when parental attempts at sign are error-prone and inconsistent, children still extract the conventions of a standard sign language from them. Indeed, the variable patterns produced by parents who learn sign language offer insight into what might happen if children did not maximize in learning: language, as a system, would become less conventional. What words meant and the patterns in which they were used would become more idiosyncratic and unstable, and all languages would begin to resemble pidgins.</p></blockquote> <p>Or consider this experiment, designed by John Hagen, a developmental psychologist at the University of Michigan. A child is given a deck of cards and shown two cards at a time. The child is told to remember the card on the right and to ignore the card on the left. Not surprisingly, older children and adults are much better at remembering the cards they were told to focus on, since they're able to direct their attention. However, young children are often better at remembering the cards on the left, which they were supposed to ignore.
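The adult-versus-child contrast in Dye's excerpt - probability matching versus maximizing - can be sketched as a toy simulation. The 60/40 split comes from the example above; everything else, including the sample sizes and the random seed, is illustrative rather than drawn from the Hudson-Kam and Newport studies:

```python
import random

random.seed(0)

# Inconsistent input: the determiner appears in 60% of the utterances
# heard and is dropped in the other 40%.
heard = [random.random() < 0.6 for _ in range(1000)]
freq = sum(heard) / len(heard)

# Adult strategy, "probability matching": reproduce the determiner at
# roughly the rate it was heard.
adult = [random.random() < freq for _ in range(1000)]

# Child strategy, "maximizing": regularize to the majority form, so the
# determiner is produced every time.
child = [freq >= 0.5 for _ in range(1000)]

print(f"input rate: {freq:.2f}")
print(f"adult rate: {sum(adult) / len(adult):.2f}")  # close to 0.60
print(f"child rate: {sum(child) / len(child):.2f}")
```

The maximizing strategy is what keeps the child's output perfectly regular even when the input is noisy - which is exactly the conventionalizing effect Dye describes.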
The lantern casts its light everywhere.</p> <p>And it's not just complex learning that benefits from a quiet PFC. A recent brain scanning <a href="">experiment</a> by researchers at Johns Hopkins University found that jazz musicians in the midst of improvisation - they were playing a specially designed keyboard in a brain scanner - showed dramatically reduced activity in a part of the prefrontal cortex. It was only by "deactivating" this brain area - inhibiting their inhibitions, so to speak - that the musicians were able to spontaneously invent new melodies. The scientists compare this unwound state of mind with that of dreaming during REM sleep, meditation, and other creative pursuits, such as the composition of poetry. But it also resembles the thought process of a young child, albeit one with musical talent. Baudelaire was right: "Genius is nothing more nor less than childhood recovered at will."</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Wed, 07/14/2010 - 13:36</span> Wed, 14 Jul 2010 17:36:35 +0000 cortex 107672 at Political Dissonance <span>Political Dissonance</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Joe Keohane has a fascinating <a href="">summary</a> of our political biases in the Boston Globe Ideas section this weekend. It's probably not surprising that voters aren't rational agents, but it's always a little depressing to realize just how <em>irrational</em> we are. (And it's worth pointing out that this irrationality applies to both sides of the political spectrum.) We cling to mistaken beliefs and ignore salient facts. We cherry-pick our information and vote for people based on an inexplicable stew of superficial hunches, stubborn ideologies and cultural trends. From the perspective of the human brain, it's a miracle that democracy works at all. 
Here's Keohane:</p> <blockquote><p>A striking recent example was a study done in 2000 by James Kuklinski of the University of Illinois at Urbana-Champaign, an influential experiment in which more than 1,000 Illinois residents were asked questions about welfare -- the percentage of the federal budget spent on welfare, the number of people enrolled in the program, the percentage of enrollees who are black, and the average payout. More than half indicated that they were confident that their answers were correct -- but in fact only 3 percent of the people got more than half of the questions right. Perhaps more disturbingly, the ones who were the most confident they were right were by and large the ones who knew the least about the topic. (Most of these participants expressed views that suggested a strong antiwelfare bias.)</p> <p>Studies by other researchers have observed similar phenomena when addressing education, health care reform, immigration, affirmative action, gun control, and other issues that tend to attract strong partisan opinion. Kuklinski calls this sort of response the "I know I'm right" syndrome, and considers it a "potentially formidable problem" in a democratic system. "It implies not only that most people will resist correcting their factual beliefs," he wrote, "but also that the very people who most need to correct them will be least likely to do so."</p></blockquote> <p>In <em>How We Decide</em>, I discuss the mental mechanisms behind these flaws, which are ultimately rooted in cognitive dissonance: </p> <blockquote><p>Partisan voters are convinced that they're rational -- only the other side is irrational -- but we're actually rationalizers. The Princeton political scientist Larry Bartels analyzed survey data from the 1990s to prove this point. During the first term of Bill Clinton's presidency, the budget deficit declined by more than 90 percent.
However, when Republican voters were asked in 1996 what happened to the deficit under Clinton, more than 55 percent said that it had increased. What's interesting about this data is that so-called "high-information" voters -- these are the Republicans who read the newspaper, watch cable news and can identify their representatives in Congress -- weren't better informed than "low-information" voters. According to Bartels, the reason knowing more about politics doesn't erase partisan bias is that voters tend to assimilate only those facts that confirm what they already believe. If a piece of information doesn't follow Republican talking points -- and Clinton's deficit reduction didn't fit the "tax and spend liberal" stereotype -- then the information is conveniently ignored. "Voters think that they're thinking," Bartels says, "but what they're really doing is inventing facts or ignoring facts so that they can rationalize decisions they've already made." Once we identify with a political party, the world is edited so that it fits with our ideology. </p> <p>At such moments, rationality actually becomes a liability, since it allows us to justify practically any belief. We use our fancy brain as an information filter, a way to block out disagreeable points of view. Consider this experiment, which was done in the late 1960s by the cognitive psychologists Timothy Brock and Joe Balloun. They played a group of people a tape-recorded message attacking Christianity. Half of the subjects were regular churchgoers while the other half were committed atheists. To make the experiment more interesting, Brock and Balloun added an annoying amount of static -- a crackle of white noise -- to the recording. However, they allowed listeners to reduce the static by pressing a button, so that the message suddenly became easier to understand.
Their results were utterly predictable and rather depressing: the non-believers always tried to remove the static, while the religious subjects actually preferred the message that was harder to hear. Later experiments by Brock and Balloun demonstrated a similar effect with smokers listening to a speech on the link between smoking and cancer. We silence the cognitive dissonance through self-imposed ignorance. </p></blockquote> <p>There is no cure for this ideological irrationality - it's simply the way we're built. Nevertheless, I think a few simple fixes could dramatically improve our political culture. We should begin by minimizing our exposure to political pundits. The problem with pundits is best illustrated by the classic work of <a href="">Philip Tetlock</a>, a psychologist at UC-Berkeley. (I've written about this before on this blog.) Starting in the early 1980s, Tetlock picked two hundred and eighty-four people who made their living "commenting or offering advice on political and economic trends" and began asking them to make predictions about future events. He had a long list of questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds. By the end of the study, Tetlock had quantified 82,361 different predictions. </p> <p>After Tetlock tallied up the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they tended to perform worse than random chance. Most of Tetlock's questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time.
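That 33 percent baseline is just the accuracy of blind guessing on three-option questions, which a quick simulation confirms. The question count matches the number of predictions Tetlock quantified, but the guessing model itself is an illustration, not his data:

```python
import random

random.seed(1)

# Each question has three possible outcomes, exactly one of them
# correct. A random guesser picks uniformly, so its expected accuracy
# is 1/3, regardless of which outcome actually occurs.
n_questions = 82_361
correct = sum(random.randrange(3) == 0 for _ in range(n_questions))

print(f"random-guess accuracy: {correct / n_questions:.3f}")  # ~0.333
```

Any forecaster who lands below that line is, on this metric, adding negative information.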
In other words, a dart-throwing chimp would have beaten the vast majority of professionals.</p> <p>So those talking heads on television are full of shit. Probably not surprising. What's much more troubling, however, is that they've become our model of political discourse. We now associate political interest with partisan blowhards on cable TV, these pundits and consultants and former politicians who trade facile talking points. Instead of engaging with contrary facts, the discourse has become one big study in cognitive dissonance. And this is why the predictions of pundits are so consistently inaccurate. Unless we engage with those uncomfortable data points, those stats which suggest that George W. Bush wasn't all bad, or that Obama isn't such a leftist radical, then our beliefs will never improve. (It doesn't help, of course, that our news sources are increasingly segregated along ideological lines.) So here's my theorem: The value of a political pundit is directly correlated with his or her willingness to admit past error. And when was the last time you heard Karl Rove admit that he was wrong?</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Tue, 07/13/2010 - 06:57</span> Tue, 13 Jul 2010 10:57:48 +0000 cortex 107671 at Will I? <span>Will I?</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>We can't help but talk to ourselves. At any given moment, there's a running commentary unfolding in our stream of consciousness, an incessant soliloquy of observations, questions and opinions. But what's the best way to structure all this introspective chatter? What kind of words should we whisper to ourselves? 
And does all this self-talk even matter?</p> <p>These are the fascinating questions asked in a new <a href="">paper</a> led by Ibrahim Senay and Dolores Albarracin at the University of Illinois at Urbana-Champaign and published in <a href="">Psychological Science</a>. The experiment was straightforward. Fifty-three undergrads were divided into two groups. The first group was told to prepare for an anagram-solving task by thinking, for one minute, about whether they would work on anagrams. This is the "Will I?" condition, which the scientists refer to as the "interrogative form of self-talk". The second group, in contrast, was told to spend one minute thinking that they would work on anagrams. This is the "I Will" condition, or the "declarative form of self-talk". Both groups were then given ten minutes to solve as many anagrams as possible.</p> <p>At first glance, we might assume that the "I Will" group would solve more anagrams. After all, they are committing themselves to the task, silently asserting that they <em>will</em> solve the puzzles. The interrogative group, on the other hand, was just asking themselves a question; there was no commitment, just some inner uncertainty. </p> <p>But that's not what happened. It turned out that the "Will I?" group solved nearly 25 percent more anagrams. When people asked themselves a question - Can I do this? - they became more motivated to actually do it, which allowed them to solve more puzzles. This suggests that the Nike slogan should be "Just do it?" and not "Just do it".</p> <p>Why is interrogative self-talk more effective? Subsequent experiments by the scientists suggested that the power of the "Will I?" condition resides in its ability to elicit intrinsic motivation. (We are intrinsically motivated when we are doing an activity for ourselves, because we enjoy it. In contrast, extrinsic motivation occurs when we're doing something for a paycheck or any "extrinsic" reward.)
By interrogating ourselves, we set up a well-defined challenge that we can master. And it is this desire for personal fulfillment - being able to tell ourselves that we solved the anagrams - that actually motivates us to keep on trying. Here is an excerpt from the paper:</p> <blockquote><p>Self-posed questions about a future behavior may inspire thoughts about autonomous or intrinsically motivated reasons to pursue a goal, leading to forming corresponding intentions and ultimately performing the behavior. In fact, people are more likely to engage in a behavior when they have intrinsic motivation (i.e., when they feel personally responsible for their action) than when they have extrinsic motivation (i.e., when they feel external factors such as other people are responsible for their action) in diverse domains from education to medical treatment to addiction recovery to task performance.</p></blockquote> <p>Scientists have recognized the importance of intrinsic motivation for decades. In the 1970s, Mark Lepper, David Greene and Richard Nisbett conducted a classic study on preschoolers who liked to draw. They divided the kids into three groups. The first group of kids was told that they'd get a reward - a nice blue ribbon with their name on it - if they continued to draw. The second group wasn't told about the rewards but was given a blue ribbon after drawing. (This was the "unexpected reward" condition.) Finally, the third group was the "no reward" condition. They weren't even told about the blue ribbons.</p> <p>After two weeks of reinforcement, the scientists observed the preschoolers during a typical period of free play. Here's where the results get interesting: The kids in the "no reward" and "unexpected reward" conditions kept on drawing with the same enthusiasm as before. Their behavior was unchanged. In contrast, the preschoolers in the "reward" group now showed much less interest in the activity. 
Instead of drawing, they played with blocks, or took a nap, or went outside. The reason was that their intrinsic motivation to draw had been contaminated by blue ribbons; the extrinsic reward had diminished the pleasure of playing with crayons and paper. (Daniel Pink, in his excellent book <em>Drive</em>, refers to this as the "Sawyer Effect".)</p> <p>So the next time you're faced with a difficult task, don't look at a Nike ad, and don't think about the extrinsic rewards of success. Instead, ask yourself a simple question: Will I do this? I think I will.</p> <p>Update: Daniel Pink has <a href="">more</a>.</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Mon, 07/12/2010 - 04:53</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/education" hreflang="en">Education</a></div> </div> </div> Mon, 12 Jul 2010 08:53:24 +0000 cortex 107670 at Cages and Cancer <span>Cages and Cancer</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>There's an absolutely fascinating new paper by scientists at Ohio State University in the latest <a href="">Cell</a>. In short, the paper demonstrates that mice living in enriched environments - those spaces filled with toys, running wheels and social interactions - are less likely to get tumors, and better able to fight off the tumors if they appear. </p> <p>The experiment itself was simple. A large group of mice were injected with melanoma cells. After six weeks, the mice living in enriched environments had tumors that were approximately 75 percent smaller than those of mice raised in standard lab cages. 
Furthermore, while every mouse in the standard cages developed cancerous growths, 17 percent of the mice in the enriched enclosures showed no sign of cancer at all.</p> <p>What explains this seemingly miraculous result? Why does having a few toys in a cage help prevent the proliferation of malignant cells? While the story is bound to get more complicated - nothing is simple about cancer, or about brain-body interactions - the researchers present a strikingly straightforward chemical pathway underlying the effect:</p> <p><img src="" alt="i-d321ceab91733648fe5acfaaec1aaf6c-PIIS0092867410005659.fx1.lrg.jpg" /></p> <p>In short, the enriched environments led to increases in BDNF in the hypothalamus. (BDNF is a trophic factor, a class of proteins that support the survival and growth of neurons. What water and sun do for plants, trophic factors do for brain cells.) This, in turn, led to significantly reduced levels of the hormone leptin throughout the body. (<a href="">Leptin</a> is involved in the regulation of appetite and metabolism, and seemed to be down-regulated by slightly elevated levels of stress hormone.) Although a few earlier studies have <a href=";holding=npg">linked</a> leptin to accelerated tumor growth, it remains unclear how this happens, or if this link is really causal. </p> <p>It's important not to overhype the results of this study. <strong>Nobody knows if this data has any relevance for humans</strong>. Nevertheless, it's a startling demonstration of the brain-body loop. While it's no longer too surprising to learn that chronic stress increases cardiovascular disease, or that actors who win Academy Awards live much <a href="">longer</a> than those who don't, there is something spooky about this new link between nice cages and reduced tumor growth. Cancer, after all, is just stupid cells run amok. It is life at its most mechanical, nothing but a genetic mistake. 
And yet, the presence of toys in a cage can dramatically alter the course of the disease, making it harder for cancerous cells to take root and slowing their growth once they do. A slight chemical tweak in the cortex has ripple effects throughout the flesh.</p> <p>It strikes me that we need a new metaphor for the interactions of the brain and body. They aren't simply connected via some pipes and tubes. They are <em>emulsified</em> together, so hopelessly intertwined that everything that happens in one affects the other. Holism is the rule.</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Fri, 07/09/2010 - 06:09</span> Fri, 09 Jul 2010 10:09:05 +0000 cortex 107669 at Soda <span>Soda</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>There's been lots of <a href="">chatter</a> about Pepsi lately, so I thought I'd run with the theme. I don't have much to add to the media commentary - I'm just sad to <a href="">see</a> some of my favorite bloggers leave this space - but I've got plenty to say about soft drinks. And little of it will please Pepsi.</p> <p>The first thing is that a soda tax is a great idea. Here's a compelling chart from a recent report published by the US Department of Agriculture's Economic Research Service (via <a href="">Yglesias</a>):</p> <p><img src="" alt="i-3ca8e88de329d2f65e6177a68ceab506-sodatax.jpg" /></p> <p>Some of these calories, of course, will be shifted to other categories of food - we'll drink less Pepsi, but we'll consume more <a href="">Doritos</a>. Nevertheless, there's compelling evidence that such "sin" taxes are actually quite effective. Just look at cigarettes: If you want to decrease the number of smokers, raising the price of cigarettes is the only proven solution. 
In fact, a 10 percent increase in the price of cigarettes causes a 4 percent reduction in demand. Teenagers are especially sensitive to these price changes: a 10 percent increase in price causes a 12 percent drop in teenage smoking. (The only bad news is that raising the price of cigarettes tends to increase the demand for marijuana. Apparently, the two products are in competition.) And let's not forget that nicotine is an extremely addictive substance, which makes demand stubbornly insensitive to price; soda isn't addictive. So there's good reason to think that a 10 percent hike in the price of sodas might be even more effective than a cigarette tax.</p> <p>And while we're on the subject of Pepsi, it's also worth noting that diet sodas don't work.<br /> Consider this recent paper in Behavioral Neuroscience, which found that rats fed artificial sweeteners gained more weight than rats fed actual sugar. Here's the <a href="">abstract</a>:</p> <blockquote><p>Animals may use sweet taste to predict the caloric contents of food. Eating sweet noncaloric substances may degrade this predictive relationship, leading to positive energy balance through increased food intake and/or diminished energy expenditure. Adult male Sprague-Dawley rats were given differential experience with a sweet taste that either predicted increased caloric content (glucose) or did not predict increased calories (saccharin). <strong>We found that reducing the correlation between sweet taste and the caloric content of foods using artificial sweeteners in rats resulted in increased caloric intake, increased body weight, and increased adiposity, as well as diminished caloric compensation and blunted thermic responses to sweet-tasting diets. 
These results suggest that consumption of products containing artificial sweeteners may lead to increased body weight and obesity by interfering with fundamental homeostatic, physiological processes.</strong></p></blockquote> <p>The scientists argue that fake sugar is dangerous because it subverts a crucial homeostatic mechanism, as the brain uses the sweetness of a food to keep track of its intake. More sugar implies more calories; the tongue is a natural energy detector. The problem with diet sodas is that they make this system unreliable, so that the presence of intense sweetness no longer means anything. (And it's not just rodents: a similar <a href="">effect</a> has been observed in humans.) The hypothalamus gets confused. The end result is that we lose touch with the energetic needs of our body. Instead of eating to sate a hunger, we just eat. And eat.</p> <p>This is the curse of soda: it's so bad for us that even diet sodas make us fat. </p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Thu, 07/08/2010 - 05:48</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/brain-and-behavior" hreflang="en">Brain and Behavior</a></div> </div> </div> Thu, 08 Jul 2010 09:48:18 +0000 cortex 107668 at LSD <span>LSD</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>There's a fascinating article in the latest Vanity Fair (not online) about the prevalence of LSD (aka lysergic <em>acid</em> diethylamide) among movie stars in 1950s Hollywood: </p> <blockquote><p>Aldous Huxley was one of the first in Los Angeles to take LSD and was soon joined by others, including the writer Anais Nin. 
The screenwriter Charles Brackett discovered "infinitely more pleasure" from music on LSD than he ever had before, and the director Sidney Lumet tried it under the supervision of a former chief of psychiatry for the U.S. Navy. Lumet says his three sessions were "wonderful," especially the one where he relived his birth...Another early experimenter was Clare Boothe Luce, the playwright and former American ambassador to Italy, who in turn encouraged her husband, Time publisher Henry Luce, to try LSD. He was impressed and several very positive articles about the drug's potential ran in his magazine.</p> <p>[SNIP]</p> <p>There is no question that, at least for a period of time, LSD truly transformed Cary Grant...Much to his friends' surprise, Cary Grant began talking about his therapy in public, lamenting, "Oh those wasted years, why didn't I do this sooner?"</p> <p>"The Curious Story Behind the New Cary Grant," headlined the September 1, 1959 issue of Look magazine, and inside was a glowing account of how, because of LSD therapy, "at last I am close to happiness." He later explained that "I wanted to rid myself of all my hypocrisies. I wanted to work through the events of my childhood, my relationship with my parents and my former wives." More articles followed, and LSD even received a variation of the Good Housekeeping Seal of Approval.</p></blockquote> <p>Of course, celebrities no longer brag about their hallucinogenic experiments. (Instead, we're stuck with the slow-motion tragedy of Lindsay Lohan.) Furthermore, the government now treats hallucinogens (such as LSD and mushrooms) as legally equivalent to heroin, crack and opium. Because LSD is a Schedule I drug, if you're arrested with more than a single dose (roughly .5 grams), and it's your first offense, the federal sentencing guidelines are as follows: "Not less than 5 years, and not more than 40 years". State penalties are similar, with possession typically leading to 1-3 years in jail. 
(This despite the fact that <a href="">study</a> after study shows that prescription drugs kill more people by overdose than illicit drugs.) If it were up to me, our classification of drugs would depend largely on their addictive potential, so that drugs with limited addiction potential (such as LSD) would be far less regulated than highly addictive substances, such as crack, OxyContin, heroin, etc. But don't get me started on our endless war on drugs, which is an incoherent disaster.</p> <p>Back to LSD. Because the drug is so tightly regulated, there has been minimal research on how the drug works in the brain. Just look at these <a href=";hl=en&amp;btnG=Search&amp;as_sdt=2001&amp;as_sdtp=on">search</a> results: The most cited papers are more than thirty years old. (One scientist I talked to last year said the two main disincentives to use LSD in the lab were the lack of grants - the big funding institutions are only interested in addictive drugs - and the paperwork.) The end result is that we really don't know how LSD alters our sensory experience, except that it binds to a vast array of G-protein coupled receptors, including every dopamine receptor subtype and various serotonin receptor subtypes. All this excess neural activity leads to excitation in the upper echelons of the cortex, such as layers IV and V. And, somehow, those squirts of chemical lead people to conclude that they've found the secret of the universe.</p> <p>What's unfortunate is that LSD could be a powerful experimental tool. And not in the Timothy Leary sense: I'm talking about rigorous investigations into the neural substrate of consciousness. After all, one of the challenges of investigating human consciousness is that it's a continuous stream. As William James noted in 1890, "Consciousness does not appear to itself chopped up in bits. Such words as 'chain' or 'train' do not describe it fitly as it presents itself in the first instance. It is nothing jointed; it flows. 
A 'river' or a 'stream' are the metaphors by which it is most naturally described." The paradox, of course, is that the brain is composed of a trillion different joints, a seeming infinitude of binary neurons switching on and off. So how does that create <em>this</em>? How does three pounds of wet stuff give rise to the ceaseless cinema of subjective experience? </p> <p>To begin to answer this profound mystery - and it is <em>the</em> mystery of modern neuroscience - researchers have come up with a clever set of experimental paradigms. My favorite is <a href="">Christof Koch's</a> version of binocular rivalry. In theory, binocular rivalry is a simple phenomenon. We have two eyeballs; as a result, we are constantly being confronted with two slightly separate views of the world. The brain, using a little unconscious trigonometry, slyly erases this discrepancy, fusing our multiple visions into a single image. </p> <p>But Koch throws a wrench into this visual process. "What happens," he wondered, "if corresponding parts of your left and right eyes see two quite distinct images, something that can easily be arranged using mirrors and a partition in front of your nose?" Under ordinary circumstances, we superimpose the two separate images from our two separate eyes on top of each other. For example, if the left eye is shown horizontal stripes and the right eye is shown vertical stripes, we will consciously perceive a plaid pattern. Sometimes, however, the brain gets a little confused, and decides to pay attention to only one of our eyes. After a few seconds, we realize our mistake, and begin to pay attention to our other eye. As Koch notes, "The two percepts can alternate in this manner indefinitely."</p> <p>The end result of all this experimentally induced confusion is that the subject becomes aware - if only for a moment - of the artifice underlying perception. We realize that we have two separate eyes, which see two separate things. 
Koch wants to know where in the brain the struggle for ocular dominance occurs. Which neurons decide which eye to pay attention to? What cells impose a unity onto our sensory disarray?</p> <p>This is an elegant experimental paradigm. But it's also profoundly limited. Even if we can locate the cells that govern binocular rivalry, that's only a single "neural correlate of consciousness". It remains entirely unclear if those to-be-determined cells in the visual cortex govern all visual experience, or just the contradictions between our eyeballs. </p> <p>And this leads me back to LSD. Here's a compound that can consistently alter our entire sensory experience, so that the brain is made aware of its own machinery. We see ourselves seeing the world. (I wish Kant had tried LSD - he would have loved it.*) From the perspective of neuroscience, the hallucinogen is like a systematic version of binocular rivalry. If we knew how LSD worked, we might also gain insight into how ordinary experience works, and how that chemical soup creates the feeling of this, here, now. In other words, the molecular "joints" tweaked by the illegal compound can tell us something very interesting about the source of our unjointed stream of consciousness. Cary Grant was on to something.</p> <p>*Kant: "The imagination is a necessary ingredient of perception itself."</p> </div> <span><a title="View user profile." href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Wed, 07/07/2010 - 07:04</span> Wed, 07 Jul 2010 11:04:58 +0000 cortex 107667 at Alcoholism <span>Alcoholism</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Brendan Koerner has a really fantastic <a href="">article</a> in the latest Wired on Alcoholics Anonymous (AA). 
It's a fascinating exploration of the organization, from its hallucinogen-inspired birth (Bill Wilson was tripping on <a href="">belladonna</a> when he found God in a hospital room) to the difficulty of accurately measuring the effectiveness of AA:</p> <blockquote><p>The group's "cure rate" has been estimated at anywhere from 75 percent to 5 percent, extremes that seem far-fetched. Even the most widely cited (and carefully conducted) studies are often marred by obvious flaws. A 1999 meta-analysis of 21 existing studies, for example, concluded that AA members actually fared worse than drinkers who received no treatment at all. The authors acknowledged, however, that many of the subjects were coerced into attending AA by court order. Such forced attendees have little shot at benefiting from any sort of therapy--it's widely agreed that a sincere desire to stop drinking is a mandatory prerequisite for getting sober.</p> <p>Yet a growing body of evidence suggests that while AA is certainly no miracle cure, people who become deeply involved in the program usually do well over the long haul. In a 2006 study, for example, two Stanford psychiatrists chronicled the fates of 628 alcoholics they managed to track over a 16-year period. They concluded that subjects who attended AA meetings frequently were more likely to be sober than those who merely dabbled in the organization. The University of New Mexico's Tonigan says the relationship between first-year attendance and long-term sobriety is small but valid: In the language of statistics, the correlation is around 0.3, which is right on the borderline between weak and modest (0 meaning no relationship, and 1.0 being a perfect one-to-one relationship).</p></blockquote> <p>Koerner also investigates AA from the perspective of the brain. 
He focuses on the prefrontal cortex, that chunk of tissue behind the forehead that allows us to exert self-control, to order club soda instead of whiskey:</p> <blockquote><p>As dependence grows, alcoholics also lose the ability to properly regulate their behavior. This regulation is the responsibility of the prefrontal cortex, which is charged with keeping the rest of the brain apprised of the consequences of harmful actions. But mind-altering substances slowly rob the cortex of so-called synaptic plasticity, which makes it harder for neurons to communicate with one another. When this happens, alcoholics become less likely to stop drinking, since their prefrontal cortex cannot effectively warn of the dangers of bad habits.</p> <p>This is why even though some people may be fully cognizant of the problems that result from drinking, they don't do anything to avoid them. "They'll say, 'Oh, my family is falling apart, I've been arrested twice,'" says Peter Kalivas, a neuroscientist at the Medical University of South Carolina in Charleston. "They can list all of these negative consequences, but they can't take that information and manhandle their habits." The loss of synaptic plasticity is thought to be a major reason why more than 90 percent of recovering alcoholics relapse at some point.</p></blockquote> <p>It's now possible to see these changes in the prefrontal cortex at an extremely precise level. Interestingly, one of the most important changes has to do with how alcoholics (and addicts in general) process "prediction error" signals. In essence, a prediction error signal occurs when we expect to get a reward - and it doesn't matter if the reward is money, sex, praise or drugs - and we instead get nothing (or maybe even a negative outcome). The brain processes this disappointment as a prediction error. 
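The basic logic of a prediction error is simple enough to sketch in a few lines of code. Here's a toy Rescorla-Wagner-style update; the learning rates and reward values are made-up numbers for illustration, not anything from the studies discussed here:

```python
# Toy sketch of prediction-error learning (Rescorla-Wagner style).
# All numbers below are hypothetical, chosen purely for illustration.

def update_value(value, reward, learning_rate=0.2):
    """Nudge an expected value toward the reward actually received."""
    prediction_error = reward - value  # negative = disappointment
    return value + learning_rate * prediction_error

# We expect a full reward, but the reward stops coming: repeated
# prediction errors drag the expectation down toward reality.
expected = 1.0
for _ in range(10):
    expected = update_value(expected, reward=0.0)
print(round(expected, 3))  # the expectation has decayed toward zero

# If the error signal barely reaches the learner (tiny learning rate),
# the old, wrong expectation persists almost untouched.
stuck = 1.0
for _ in range(10):
    stuck = update_value(stuck, reward=0.0, learning_rate=0.01)
print(round(stuck, 3))  # still close to the original expectation
```

The second loop is a crude picture of what happens when the error signal is generated but never properly used: the world keeps delivering bad news, and the expectation - and the behavior it drives - barely budges.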
As Wolfram Schultz and others have demonstrated, such prediction errors are an incredibly efficient way to learn about the world, allowing us to update our internal models (all those predictions of good stuff) in light of our mistakes. </p> <p>This is an essential aspect of decision-making, as it allows us to avoid the mindless repetition of mistakes. Just look at what happens to monkeys when their prediction error pathway is surgically disrupted. The experiment went like this: monkeys were given a joystick that moved in two different directions. At any given moment, only one of the movements would trigger a reward (a pellet of food). To make things more interesting, the scientists switched the direction every twenty-five trials. If the monkeys had previously gotten in the habit of lifting the joystick in order to get a food pellet, they now had to shift their strategy. </p> <p>So what did the monkeys do? Animals with an intact prediction error pathway had no problem with the task. As soon as they stopped receiving rewards for lifting the joystick - this generated a prediction error - they started turning it in the other direction, which meant they continued to receive their pellets of food. However, monkeys that were missing their prediction error machinery demonstrated a telling defect. When they stopped being rewarded for moving the joystick in a certain direction, they were still able (most of the time) to change directions, just like a normal monkey. However, they were unable to <em>persist</em> in this successful strategy, and soon went back to moving the joystick in the direction that garnered no reward. They never learned how to consistently find the food, to turn a mistake into an enduring lesson. </p> <p>What do prediction errors have to do with addiction? One way to think about addiction is as the abuse of a substance despite serious adverse consequences. 
We think the alcohol will make us happy - and it does, for a few minutes - but the drug will also lead to withdrawal, hangovers, ruined relationships, an empty wallet, etc. In other words, the costs of the drink far exceed its fleeting rewards. Why, then, do addicts keep on drinking? One possible explanation is that addicts can't properly process their prediction errors, so that all those negative outcomes get ignored. (In other words, we're like those monkeys who keep on moving the joystick in the wrong direction.) The end result is that we never learn from our very costly decision-making mistakes.</p> <p>A new <a href="">paper</a> in the Journal of Neuroscience by Soyoung Q. Park et al. provides compelling support for this hypothesis. The scientists began by giving twenty "abstinent alcohol-dependent patients" a simple reinforcement learning task featuring green smiling faces (positive feedback) and red frowning faces (negative feedback). </p> <p>The first thing to note is that it took the alcoholic patients significantly longer to figure out the game than a group of control subjects. Because the game was played inside an fMRI machine, the scientists were able to analyze the neural differences that led to the learning problems. Interestingly, the alcoholic patients didn't have a problem generating prediction errors in the striatum, the dopaminergic source of the prediction error signal. When they made a bad guess and saw the red frowning face, their addicted brains looked identical to the brains of control subjects. Both groups instantly and automatically recognized their mistakes.</p> <p>It's what happened next that begins to explain the errant behavior of addicts. In the control group, this prediction error signal was quickly passed along to the prefrontal cortex, which used this new information to modulate future decisions. 
As a result, the control brain was able to quickly learn from its mistakes and minimize the number of red frowning faces.</p> <p>The alcoholic brain wasn't nearly as adept. Park et al. found that, at least in this small group of addicted patients, there appeared to be a connectivity problem between the striatum and the prefrontal cortex. As a result, when these subjects made a mistake, their prefrontal cortex wasn't fully informed - there was a reduced amount of "feedback-related modulation" - and this lack of modulation correlated with 1) an inability to succeed at the simple learning task and 2) the magnitude of their alcohol craving. (This data extends similar <a href="">results</a> observed in smokers.) In other words, the addicts who couldn't internalize their prediction errors were the most addicted. This suggests that it is the inability to learn from mistakes - even when these mistakes are destroying our lives - that makes addiction so damn hard to escape. </p> <p>Now here's some blatant speculation. I think one reason AA is successful, at least for many of those who commit to the program, is that it's designed to force people to confront their prediction errors. Just look at the <a href="">twelve steps</a>, many of which are all about the admission of mistakes, from step number 1 ("We admitted we were powerless over alcohol--that our lives had become unmanageable") to step number 8 ("Made a list of all persons we had harmed, and became willing to make amends to them all") to step number 10 ("Continued to take personal inventory and when we were wrong promptly admitted it"). I'd suggest that the presence of these steps helps people break through the neuromodulatory problem of addiction, as the prefrontal cortex is forced to grapple with its massive failings and flaws. Because unless we accept our mistakes we will keep on making them. </p> </div> <span><a title="View user profile." 
href="/author/cortex" lang="" about="/author/cortex" typeof="schema:Person" property="schema:name" datatype="">cortex</a></span> <span>Thu, 07/01/2010 - 04:55</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/brain-and-behavior" hreflang="en">Brain and Behavior</a></div> </div> </div> Thu, 01 Jul 2010 08:55:12 +0000 cortex 107666 at