Gary Wolf has a fascinating and really well-written article in the Times Magazine on the rise of the “quantified self,” or all those people who rely on microsensors to measure discrete aspects of their lives, from walking speed to emotional mood:
Millions of us track ourselves all the time. We step on a scale and record our weight. We balance a checkbook. We count calories. But when the familiar pen-and-paper methods of self-analysis are enhanced by sensors that monitor our behavior automatically, the process of self-tracking becomes both more alluring and more meaningful. Automated sensors do more than give us facts; they also remind us that our ordinary behavior contains obscure quantitative signals that can be used to inform our behavior, once we learn to read them.
Here’s a typical example of a life improvement reinforced by self-tracking:
After surgery for a back problem, [Sophie] Barbier had trouble sleeping. On CureTogether, a self-tracking health site, she learned about tryptophan, a common amino acid available as a dietary supplement. She took the tryptophan, and her insomnia went away. Her concentration scores also improved. She stopped taking tryptophan and continued to sleep well, but her ability to concentrate deteriorated. Barbier ran the test again, and again the graph was clear: tryptophan significantly increased her focus. She had started by looking for a cure for insomnia and discovered a way to fine-tune her brain.
On the one hand, this is a valiant example of self-experimentation, a demonstration of the potential of measuring and then measuring again. Kudos to Sophie for finding a way to improve herself. But I think it also exposes some of the inherent limitations of the approach. One of the main problems facing self-experimenters is the powerful role of expectations in shaping performance. If we think something is going to work, then it probably will work, at least for a little while.
Look, for example, at this witty little experiment. Baba Shiv, a neuroeconomist at Stanford, supplied a group of people with SoBe Adrenaline Rush, an “energy” drink that was supposed to make them feel more alert and energetic. (The drink contained a potent brew of sugar and caffeine which, the bottle promised, would impart “superior functionality.”) Some participants paid full price for the drinks, while others were offered a discount. The participants were then asked to solve a series of word puzzles. Shiv found that people who paid discounted prices consistently solved about thirty percent fewer puzzles than the people who paid full price for the drinks. The subjects were convinced that the stuff on sale was much less potent, even though all the drinks were identical.
Studies like this demonstrate the necessity of blind controls. The brain is a gullible machine, which is why the very act of believing that tryptophan might work makes it much more likely to have an effect, at least at first. (In an ideal world, Barbier would have devised a placebo condition and then measured the difference in her ability to concentrate.) That’s why I’m a teeny bit suspicious of clear-cut results that come from self-tested hypotheses, especially when the results contradict the scientific literature. The very act of speculating about a causal relationship – say, for instance, the link between a pill and the ability to concentrate – warps the data, biasing our mind in a million little ways.
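For the statistically inclined, that placebo condition isn’t hard to sketch. Here’s a minimal, hypothetical version of a blinded n-of-1 trial: the scores are made up, and the setup assumes a friend hands you either the supplement or an identical-looking placebo each day, revealing which was which only at the end. A simple permutation test then asks how likely the observed difference would be if the pill did nothing.

```python
import random
import statistics

# Hypothetical daily concentration scores (0-100) from a blinded trial:
# neither set of days is labeled until after all the scores are recorded.
tryptophan_days = [72, 68, 75, 70, 74, 69, 73, 71]
placebo_days = [64, 66, 61, 65, 63, 67, 62, 66]

def permutation_test(a, b, trials=10_000, seed=0):
    """Estimate the chance of a mean difference this large under pure chance."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = a + b  # new list; originals are untouched
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_test(tryptophan_days, placebo_days)
print(f"p-value: {p:.4f}")
```

A small p-value suggests the effect probably isn’t expectation alone; a large one means the “clear graph” may be the gullible machine at work.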
My own favorite form of self-experimentation has to do with wine. It’s pretty clear that we expect more expensive wines to taste better. (This expectation is visible in an fMRI machine.) But it’s also clear that, at least for amateurs, this expectation is mostly false: when you give people bottles of wine without any price information, there is no correlation between the cost of the wine and its subjective ratings. An $8 bottle is just as enjoyable as an $80 one.
Every few months, I conduct a blind taste test. (In general, I think the most useful forms of self-tracking will be the tracking of our innate biases.) I trek to Costco and my local wine store and pick up several bottles at various price points. The wines are poured into cheap decanters. And then I taste the wines over the course of a lazy afternoon, being sure to eat lots of crackers in between. I smell, swirl, sip and swallow. (I like my wine too much to spit it out.) I’m no Robert Parker, but I take a few notes and render my judgment. What have I discovered? Mostly I’ve learned that my ratings are woefully inconsistent. The same $18 pinot that I loved last year might get low marks at a later date. A Tuscan blend that once seemed generic now seems like it would be a perfect foil for pasta with tomatoes. In other words, when it comes to wine I have no idea what I’m talking about.
As a result, I now spend a little less time in the wine store. Instead of trying to find the house wine that will maximize my utility, I ask for advice and tend to buy the bottle with the most interesting backstory. I seek out obscure varietals and pretty labels and lower price points. The act of measuring my wine preferences, in other words, has taught me that my preferences aren’t worth very much.
I’ve also started spending more time thinking about beer, as I’ve got a hunch that my beer preferences will be more consistent. I haven’t done any controlled trials yet, and I’m not sure I want to, but I’m pretty convinced that I’ve found a perfect pale ale. Is this preference an illusion? Probably a little. But I don’t really care. This beer tastes really good. Sometimes, it’s more fun not measuring anything.