Whether it's a goalkeeper who needs to decide which way to dive, or a motorist who needs to swerve to avoid a pedestrian, people often have to make decisions in a small amount of time, based on a complex onrush of information. But even as their muscles launch them towards one particular fate, there is still room for indecisiveness. Arbora Resulaj from the University of Cambridge has found that people often change their decisions in the split-seconds after making them because of late-arriving information.
Neuroscientists have come up with several possible explanations for what happens in our brains when we make decisions. For example, the popular "drift-diffusion model" looks at simple choices with two possible outcomes. Take a motorist's decision about whether to swerve right or left to avoid a crash. They have to weigh up a lot of information: the speed of their car; the position of the pedestrian; the presence of surrounding traffic; and more.
The brain must wade through all of this noisy information and extract meaningful patterns. Once these signals pass a certain threshold - a "decision bound" - the brain commits to a choice and the car swerves either left or right. But this simple model can't account for the fact that people sometimes change their minds after a decision is made. The driver, for example, could rapidly swerve the other way. These last-minute changes of heart are what Resulaj was interested in.
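To make the drift-diffusion idea concrete, here is a minimal sketch in Python. It is not the model fitted in the paper; the drift rate, noise level and bound are arbitrary illustrative numbers I've chosen, but it shows how noisy evidence piles up over time until it hits one of two decision bounds.

```python
import random

def drift_diffusion(mu=1.0, sigma=1.0, bound=1.0, dt=0.001, max_t=2.0):
    """Toy drift-diffusion run: noisy evidence accumulates until it
    crosses +bound (decide 'right') or -bound (decide 'left')."""
    evidence, t = 0.0, 0.0
    while t < max_t:
        # each millisecond adds a small signal (mu*dt) plus random noise
        evidence += mu * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
        if evidence >= bound:
            return "right", t
        if evidence <= -bound:
            return "left", t
    return "undecided", t

print(drift_diffusion())  # e.g. ('right', 0.84) - choice and time taken
```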
Working in the lab of Michael Shadlen (whose work I have covered before), Resulaj watched three people as they carried out a simple task: they watched a patch of noisy, moving dots and had to judge whether the overall motion was to the left or right, indicating their choice by moving a joystick towards one of two targets.
According to the drift-diffusion model, people watch the dots until they get an overall sense of their movement. Their brains accumulate evidence until there is enough to cross one of two thresholds - one for leftward decisions and one for rightward ones. But things weren't so simple. In a few of the trials, the volunteers changed their minds, initially moving towards one target but then switching to the other. In almost all cases, this sudden swap led to the more accurate answer.
The key to the experiment was that as soon as the volunteers moved at all, the dots disappeared, effectively cutting off their supply of information. So they weren't changing their plans based on new information. Instead, Resulaj thinks that their change of heart was driven by information that was still in transit at the time when they started to move.
Electrical impulses take time to reach the brain from our sense organs. This means that the initial decision is made with most, but not all, of the available evidence, and that information can keep arriving even after the movement has begun. That leaves some time for rectifying and refining the original choice - around three to four tenths of a second, to be exact.
Resulaj incorporated this into a new model, which is illustrated in the diagram below. Information builds up until it crosses the decision bound for a leftward movement (the blue line). Over the next few hundred milliseconds, any unprocessed data still has time to change the outcome. If these late arrivals concur with the existing information, the current course of action stays the same (the green line). But if they are contradictory enough to swing things back across another threshold - the "change-of-mind bound" - then the decision is reversed (the red line) and the arm moves right. So it's entirely possible to change a snap decision, but to do so, the late-arriving evidence must be particularly strong.
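As a rough illustration of how such a change-of-mind bound could work, the sketch below extends the toy simulation above: after the running total first hits a bound, the samples still "in the pipeline" keep being added, and the choice flips only if they drag the total across a second bound on the opposite side. The bound values and the 350-sample delay are stand-ins of my own for illustration, not the parameters Resulaj and colleagues fitted.

```python
import random

def decide_with_change_of_mind(samples, bound=1.0, com_bound=0.8,
                               delay_steps=350):
    """Sketch of the change-of-mind idea. `samples` is a list of small
    evidence increments (one per millisecond). After the running total
    first crosses +/-bound, the increments still in the sensory pipeline
    keep being added; the initial choice reverses only if they pull the
    total across the change-of-mind bound on the far side."""
    evidence = 0.0
    for i, s in enumerate(samples):
        evidence += s
        if abs(evidence) >= bound:                      # initial commitment
            initial = "right" if evidence > 0 else "left"
            # evidence that had already left the eye but not yet reached
            # the decision circuit when the movement began
            for late in samples[i + 1 : i + 1 + delay_steps]:
                evidence += late
                if initial == "right" and evidence <= -com_bound:
                    return initial, "left"
                if initial == "left" and evidence >= com_bound:
                    return initial, "right"
            return initial, initial
    return "undecided", "undecided"

# toy usage: a weak rightward drift buried in noise, one sample per millisecond
samples = [0.002 + random.gauss(0, 0.05) for _ in range(2000)]
first, final = decide_with_change_of_mind(samples)
print(f"initial choice: {first}, final choice: {final}")
```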
The actual movements of the dots supported this model. In the majority of trials, the dots' average motion when they first appeared strongly corresponded to the volunteers' initial choices. But in the trials where the volunteers changed their minds, the dots' movement shifted in the final hundred milliseconds before they disappeared, and it was this backlog of information that, once processed, drove the change in direction. This trend also argues against the alternatives that people changed their minds because they remembered past information, or because they simply got the instructions wrong.
Now obviously, this model only applies to changes of mind that happen within a split second. Many of the decisions we make play out over lengthier timescales - we could, for example, reconsider evidence in light of a new fact. For the moment, it's unclear how the processes that Resulaj describes would affect these types of decisions, especially when memory is involved.
Reference: Nature doi:10.1038/nature08275

Can't find the paper? Be patient - most journals have a few hours' delay between the lifting of press embargoes and the publication of papers. For PNAS, this can be up to a week.
Really interesting - seems to contradict framing theory that argues people reject new evidence that contradicts a prior decision. Maybe because the test encourages a different frame - be open to new evidence, and change your mind?
An interesting experiment would be to test people before and after this exercise to see how willing they are to change their minds based on new evidence - like whether Saddam Hussein had WMDs. Maybe this exercise would help....
#1, I don't think that was what this experiment was talking about. This refers to information obtained within a short amount of time.
As for the results, it would make perfect sense from an evolutionary point of view. Reacting to a last-second flash of colour could mean the difference between escaping and being lunch.
How does this fit in with the theory that it is bad to go over and check your answers on an exam? Your initial response is usually the right one. Don't know if there has been any work done on proving or disproving that one.