Greater Performance Improvements When Quick Responses Are Rewarded More Than Accuracy Itself

Last month's Frontiers in Psychology contains a fascinating study by Dambacher, Hübner, and Schlösser, in which the authors demonstrate that the promise of financial reward can actually reduce performance when rewards are given for high accuracy. Counterintuitively, performance (characterized as accuracy per unit time) is better improved by financial rewards for response speed in particular.

The authors demonstrated this surprising result using a flanker task. In Dambacher et al.'s "parity" version of the flanker, subjects had to determine whether the middle character in strings like "149" or "$6#" was even or odd. "149" is an incongruent stimulus; for trials of that type, the correct answer is typically produced more slowly than in, say, cases like "$6#" (a neutral stimulus), where there is no conflict arising from the characters that "flank" the middle character. Three critical manipulations were made to this relatively well-understood task:

1) The task progressed such that subjects were instructed to respond within 650ms to each of the first 192 trials. On the next 192 trials, subjects had to respond within 525ms. And, on the final 192 trials, subjects had to respond within 450ms. Correct answers within the deadline were associated with a gain of 10 points, and incorrect answers within the deadline were associated with a loss of 10 points. From this, Dambacher et al could calculate "speed-accuracy tradeoffs" - an estimate of the extent to which speed and accuracy are balanced across time - and so-called "accuracy-referenced reaction times" (ARRTs), or the reaction times that would be expected to yield a given level of accuracy.

2) Subjects were told that responses which fell outside of the deadline were associated with either a loss of 20 points (the so-called "Deadline Punished" condition, or DlP) or no change in points (the so-called "Deadline Not Punished" condition, or DlNP). Keep in mind that DlNP effectively incentivizes accuracy - subjects can maximize their gains merely by waiting until they are perfectly certain of a response. In contrast, the DlP condition primarily incentivizes response speed - any response within the deadline is far better than no response.

3) The total accumulated points were either related to the amount of money subjects would receive at the end of the experiment (the "money" condition), or the points were merely symbolic of performance, with no bearing on the compensation subjects would receive (the "symbolic" condition).
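The incentive structure of the two deadline conditions can be sketched numerically. This is a toy model, not the authors' code: the accuracy-versus-time curve `p_correct` is an assumed saturating form chosen purely for illustration, while the point values (+10, -10, -20) are taken from the task description above.

```python
import math

def p_correct(rt_ms, tau=300.0):
    """Assumed probability of a correct response given response time (illustrative only)."""
    return 1.0 - 0.5 * math.exp(-rt_ms / tau)

def expected_points(rt_ms, deadline_ms, punish_miss):
    """Expected points per trial: +10 for a correct response within the deadline,
    -10 for an error within the deadline; a miss costs 20 points in the DlP
    condition (punish_miss=True) and nothing in DlNP (punish_miss=False)."""
    if rt_ms > deadline_ms:
        return -20.0 if punish_miss else 0.0
    p = p_correct(rt_ms)
    return 10.0 * p - 10.0 * (1.0 - p)

# Under DlNP, a late response costs nothing, so waiting until certain never
# hurts; under DlP, missing the deadline is the worst possible outcome.
for rt in (400, 600, 800):
    print(rt, expected_points(rt, 650, True), expected_points(rt, 650, False))
```

Note the asymmetry this makes explicit: past the deadline, the expected payoff is 0 under DlNP but -20 under DlP, which is why the former effectively rewards patient accuracy and the latter rewards committing a response in time.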

In describing the results, I will often refer to the "accuracy-referenced reaction times" simply as performance. (This is reasonable shorthand, given that ARRTs can be understood to reflect "accuracy per unit time," which is about as pure a measure of "performance" as you can get in psychology.) The results:

Regardless of whether "misses" of the deadline were associated with a loss of points (i.e., the DlP condition) or no net change (i.e., the DlNP condition), performance was better for neutral than incongruent flankers. This is not surprising - there's clear additional difficulty in the incongruent trials, and so there's lower accuracy per unit time - less "bang for the buck," to use that bizarrely lewd colloquialism - in that condition.

Second, regardless of whether misses of the deadline were associated with loss of points or no net change, performance was better with shorter deadlines. This effect could seem odd: there is a decreasing benefit to accuracy as time elapses (or, alternately, "decreasing bang for your buck"). Of course, this is not terribly strange; since people are rarely accurate 100% of the time even in unpaced tasks, an effect of this kind is almost a certainty.

Most strange, however, is how the DlP and DlNP conditions differ. Monetary rewards (as compared to merely symbolic ones) actually reduced performance when misses of the deadline were associated with no net change in points. In other words, the use of monetary incentives actually yielded reduced accuracy-per-unit-time when subjects were incentivized solely to get things right; symbolic "points" were better in that case. Conversely, the use of real incentives yielded greater accuracy-per-unit-time when subjects were incentivized to get things right and respond quickly - that is, when misses of the deadline were penalized, in addition to errors within the deadline.

Why might performance deteriorate when you're incentivizing people to perform well at their own pace, and yet improve when you're incentivizing them to respond correctly within a deadline?

One straightforward possibility is simply that you have to reward both accuracy and speed to see an accuracy-per-unit-time benefit from monetary incentives over symbolic ones. Yet a second experiment demonstrates this not to be the case: performance was still worse for monetary rewards when errors and deadline misses were punished, but with a greater loss for the former than the latter. Here Dambacher et al have encouraged people to both respond correctly and to do so quickly, and yet accuracy-per-unit-time is still reduced with real incentives!

But they didn't stop there. In a third experiment, subjects were rewarded solely for correct responses within the deadline, and not punished for either errors or misses. Here performance was improved by monetary rewards: accuracy-per-unit-time was once again greater for monetary rather than merely symbolic rewards. This manipulation doesn't particularly incentivize accuracy - there is nothing more to be lost by an incorrect answer than by one that is too late. Rather, it primarily incentivizes committing a response within the deadline, on the chance that it might be correct.

The results are slightly less confusing when we move out of the deceptively "black-and-white" realm of these logical considerations, and instead think in more psychological terms. If response speed is not particularly important to you - that is, it's not associated with a "loss", which we know (thanks to Kahneman & Tversky) are more salient than "wins" - then you may be inclined to "check" your planned response before committing it. You might also be inclined to engage in such response monitoring when errors are disproportionately punished but response deadlines aren't.

If such response monitoring is in some sense "wasted effort" (either because correct trials undergo too much slowing in this response monitoring process, or because incorrect responses are simply unlikely to be identified as such), then you're going to see diminishing returns on accuracy across time - that is, less accuracy-per-unit-time, or less bang for your buck. Equivalently, if response speed is important, or if errors aren't associated with a loss, then you can just abandon your tendency to engage in inefficient response monitoring processes. As a result, you get better accuracy per unit time, and more bang for your buck.
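The diminishing-returns intuition can be made concrete with an assumed saturating accuracy curve (again an invented form, used only to illustrate the argument): each additional increment of response time buys less additional accuracy, so accuracy-per-unit-time necessarily falls as responses slow.

```python
import math

def accuracy_at(t_ms, tau=300.0):
    """Assumed saturating accuracy curve: climbs from chance (0.5) toward 1.0."""
    return 1.0 - 0.5 * math.exp(-t_ms / tau)

for t in (300, 500, 700, 900):
    marginal = accuracy_at(t) - accuracy_at(t - 100)  # extra accuracy from the last 100 ms
    per_unit = accuracy_at(t) / (t / 1000.0)          # accuracy per second
    print(f"t={t}ms  marginal gain={marginal:.3f}  accuracy/sec={per_unit:.2f}")
```

Both columns shrink as time passes: the marginal accuracy bought by extra checking declines, and accuracy-per-unit-time declines with it - the "wasted effort" of late-stage response monitoring in numerical form.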

Viewed from this perspective, it makes sense that incentivizing response speed leads to better "accuracy-per-unit-time" than incentivizing accuracy itself. But things get really interesting when we consider how these incentive effects relate to activations previously observed along the medial surface of the prefrontal cortex during tasks with incentive manipulations.

For example, Kouneiher et al. observed dorsomedial activations in response to an incentive manipulation that could be viewed as disproportionately punishing errors (which, according to the findings above, should engage greater response checking or action monitoring). Indeed, a distinct literature implicates dorsomedial prefrontal cortex in processes of this kind, variously called "action monitoring," "action outcome monitoring," "conflict monitoring," or "error-likelihood prediction." While it has been recently argued (by Grinband et al.) that dorsomedial PFC is strongly responsive to time-on-task - that is, the longer a subject takes to respond, the stronger the activation in these areas - such observations are not necessarily incompatible with the idea that dorsomedial areas are subserving some kind of response "checking" process.

Could Dambacher et al.'s task shed some light on this domain? One possibility is that dorsomedial PFC activation could track the "diminishing returns" on accuracy that one receives as a function of increasing reaction time, particularly in cases where deadline misses are not punished. If this kind of effect were observed, it might differentiate what we could call a "modal hypothesis" of dorsomedial function (something like response checking) from more recent ideas - both the "pure" time-on-task effects pointed out by Grinband et al., and possibly also the "negative surprise" calculations that underlie recent incarnations of the action outcome monitoring hypothesis by Alexander and Brown. Alternatively, dorsomedial activation may be more related to response conflict per se, in which case one might expect less cingulate activity with diminishing returns of time on task with accuracy (e.g., if there is increasingly less conflict resolution occurring per additional unit of elapsed time). Finally, it is possible that a rostrocaudal gradient in dorsomedial prefrontal activation could be observed in a task of this kind, although I think the form of these gradients is currently too underconstrained by the extant literature for clear hypothesizing (e.g., Kouneiher et al. vs. Venkatraman et al.).
