The 2010 NFL Draft is later this month, and there is already plenty of speculation about which QB will go first, and which DT is a better choice, and which teams will trade up for a higher draft pick. The stakes for the teams are huge, as a failed draft pick will not only waste millions of dollars in salary but will also come with a high opportunity cost. So there is a strong incentive to get the decision right, and to have a decision-making system that leads to the right personnel pick.
And yet, that hasn’t happened. Instead, NFL teams remain tethered to useless metrics. Just look at the NFL scouting combine, which is a big job fair for prospective NFL players. (The combine includes everything from the 40-yard dash to a battery of psychological tests.) In recent years, the combine has become a major press event, and teams regularly cite combine results when justifying draft picks. But this is a mistake, as the combine is a big waste of time:
Combine measures examined in this study include 10-, 20-, and 40-yard dashes, bench press, vertical jump, broad jump, 20- and 60-yard shuttles, three-cone drill, and the Wonderlic Personnel Test. Performance criteria include 10 variables: draft order; 3 years each of salary received and games played; and position-specific data. Using correlation analysis, we find no consistent statistical relationship between combine tests and professional football performance, with the notable exception of sprint tests for running backs. From a practical standpoint, the results of the study should encourage NFL team personnel to reevaluate the usefulness of the combine’s physical tests and exercises as predictors of player performance. This study should encourage team personnel to consider the weighting and importance of various combine measures and the potential benefits of overhauling the combine process, with the goal of creating a more valid system for predicting player success.
In How We Decide, I devote a few pages to critiquing the Wonderlic, which is an intelligence test given to players at the combine. While Wonderlic scores are seen as important for prospective QBs, there is scant evidence that the test predicts QB success. The underlying reason, I think, is that the kind of logical, abstract intelligence measured by the Wonderlic has little to do with the kind of decisions made on the football field; knowing pre-algebra doesn’t help you find the open man.
The larger lesson of the NFL draft is that it illustrates the astonishing difficulty of predicting success. Think about all the advantages that NFL teams have: years of detailed performance data from college and high school games; quantitative results from the combine; the ability to conduct personal interviews; thorough medical reports and drug tests; press coverage, etc. In other words, they know more about the people they’re drafting than just about any other employers on the planet. Nevertheless, they still can’t consistently predict which 350-pound offensive linemen will be able to block, and which QBs will be able to endure the rigors of a blitzing defense, and which wide receivers will stay motivated even when they don’t get the ball. If these teams can’t get it right, with all this data and expertise, then how the hell can the rest of us? If we can’t even evaluate a narrow and specific athletic skill, then how can we ever expect to retain the best teachers, or pick the best politicians, or fund the best scientists, or hire the best executives?
But wait – it gets worse. Not only have NFL teams failed to find relevant and reliable variables for predicting future player performance, but they frequently pretend as though they know exactly what they’re doing. Instead of embracing their ignorance, most NFL teams default into overconfidence, which is why they regularly trade up for higher draft picks. (The teams are so convinced that they’ve identified the perfect player that they are willing to pay a much higher salary and relinquish future draft picks.) Cade Massey and Richard Thaler have a new paper that looks at the “return on value” from early draft picks. They conclude that, on average, the first pick in the draft is the least valuable in the entire first round. Here’s Thaler explaining this research:
If the market for draft picks were “efficient,” meaning that the prices reflected intrinsic value, the resulting value for a team that trades up for a higher pick should be equal to the value of the picks it gives up. The price of moving up is steep: to move from the 11th pick to the 5th pick, for example, a team would have to forfeit its second-round pick as well. To be worth it, the player taken just six picks earlier would have to be a whole lot better — because both of the players given up could have become stars, too.
How confident should a team be that this early pick is better? Suppose we rank all the players at a given position — running back, linebacker, etc. — in the order they were picked in the draft, then compare any two in consecutive order on the list. What do you think is the chance that the player picked higher will turn out to be better — as judged, say, by number of games started in his first five years in the league?
If teams knew nothing, the answer would be 50 percent, as it would be for flipping a coin. If they had perfect knowledge, the answer would be 100 percent. Go ahead, make your guess.
The answer is 52 percent — an outcome that is barely better than that of a coin flip.
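Thaler’s thought experiment is easy to sketch as a toy simulation (this is a hypothetical model of my own, not the methodology of the Massey-Thaler paper): give every player a true skill level, let each team rank players by skill plus evaluation noise, and then check how often the earlier of two consecutive picks turns out to be the better player. With no noise the rate is 100 percent; with lots of noise it collapses toward a coin flip.

```python
import random

def draft_accuracy(n_players=100, noise=3.0, trials=200, seed=42):
    """Fraction of consecutive draft pairs where the earlier pick
    turns out to be the better player, in a toy model where teams
    rank players by true skill plus evaluation noise.
    All parameters here are illustrative, not calibrated to NFL data."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        # True (unobservable) skill for each prospect
        skills = [rng.gauss(0, 1) for _ in range(n_players)]
        # Teams draft in order of their noisy estimate of skill
        order = sorted(skills, key=lambda s: s + rng.gauss(0, noise),
                       reverse=True)
        # Compare each pick with the very next pick
        for earlier, later in zip(order, order[1:]):
            hits += earlier > later
            total += 1
    return hits / total

print(draft_accuracy(noise=0.0))  # perfect knowledge: 1.0
print(draft_accuracy(noise=3.0))  # noisy evaluations: barely above 0.5
```

The point of the sketch is that a rate near 52 percent is exactly what you would expect when the noise in player evaluation swamps the signal: the draft order still contains a little information, but far less than the teams’ behavior implies.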
What this research makes clear is that powerful incentives don’t automatically lead to better decisions or effective models. (Not that we needed the NFL as proof – just look at Wall Street.) Even when our biases are penalized – and the sin of overconfidence has cost NFL teams a lot of money – we still cling to them. The power of market forces is no substitute for a little self-awareness.