The 2010 NFL Draft is later this month, and there is already plenty of speculation about which QB will go first, and which DT is a better choice, and which teams will trade up for a higher draft pick. The stakes for the teams are huge, as a failed draft pick will not only waste millions of dollars in salary but will also come with a high opportunity cost. So there is a strong incentive to get the decision right, and to have a decision-making system that leads to the right personnel pick.
And yet, that hasn't happened. Instead, NFL teams remain tethered to useless metrics. Just look at the NFL scouting combine, which is a big job fair for prospective NFL players. (The combine includes everything from the 40-yard dash to a battery of psychological tests.) In recent years, the combine has become a major press event, and teams regularly cite combine results when justifying draft picks. But this is a mistake, as the combine is a big waste of time:
Combine measures examined in this study include 10-, 20-, and 40-yard dashes, bench press, vertical jump, broad jump, 20- and 60-yard shuttles, three-cone drill, and the Wonderlic Personnel Test. Performance criteria include 10 variables: draft order; 3 years each of salary received and games played; and position-specific data. Using correlation analysis, we find no consistent statistical relationship between combine tests and professional football performance, with the notable exception of sprint tests for running backs. From a practical standpoint, the results of the study should encourage NFL team personnel to reevaluate the usefulness of the combine's physical tests and exercises as predictors of player performance. This study should encourage team personnel to consider the weighting and importance of various combine measures and the potential benefits of overhauling the combine process, with the goal of creating a more valid system for predicting player success.
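To get a feel for what the study's correlation analysis looks like in practice, here is a minimal sketch in Python. Everything in it is invented for illustration: the combine measures, the performance proxy (games played), and all of the numbers; none of it comes from the study itself.

```python
# Hypothetical sketch of the study's approach: correlate combine measures with a
# performance proxy (games played). All numbers below are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_players = 300

# Invented combine results and career outcomes for n_players draftees.
forty_yard_dash = rng.normal(4.6, 0.2, n_players)   # seconds
bench_press     = rng.normal(22, 5, n_players)       # reps at 225 lbs
wonderlic       = rng.normal(21, 7, n_players)       # test score
games_played    = rng.poisson(30, n_players)          # games played over three seasons

for name, measure in [("40-yard dash", forty_yard_dash),
                      ("bench press", bench_press),
                      ("Wonderlic", wonderlic)]:
    r, p = pearsonr(measure, games_played)
    print(f"{name}: r = {r:+.2f}, p = {p:.2f}")
```

With real draft-class data in place of the random numbers, a consistently small r across measures and positions is exactly the pattern the researchers describe.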
In How We Decide, I devote a few pages to critiquing the Wonderlic, which is an intelligence test given to players at the combine. While Wonderlic scores are seen as important for prospective QBs, there is scant evidence that the test predicts QB success. The underlying reason, I think, is that the kind of logical, abstract intelligence measured by the Wonderlic has little to do with the kind of decisions made on the football field; knowing pre-algebra doesn't help you find the open man.
The larger lesson of the NFL draft is that it illustrates the astonishing difficulty of predicting success. Think about all the advantages that NFL teams have: years of detailed performance data from college and high school games; quantitative results from the combine; the ability to conduct personal interviews; thorough medical reports and drug tests; press coverage, etc. In other words, they know more about the people they're drafting than just about any other employers on the planet. Nevertheless, they still can't consistently predict which 350-pound offensive linemen will be able to block, and which QBs will be able to endure the rigors of a blitzing defense, and which wide receivers will stay motivated even when they don't get the ball. If these teams can't get it right, with all this data and expertise, then how the hell can the rest of us? If we can't even evaluate a narrow and specific athletic skill, then how can we ever expect to retain the best teachers, or pick the best politicians, or fund the best scientists, or hire the best executives?
But wait - it gets worse. Not only have NFL teams failed to find relevant and reliable variables for predicting future player performance, but they frequently pretend as though they know exactly what they're doing. Instead of embracing their ignorance, most NFL teams default into overconfidence, which is why they regularly trade up for higher draft picks. (The teams are so convinced that they've identified the perfect player that they are willing to pay a much higher salary and relinquish future draft picks.) Cade Massey and Richard Thaler have a new paper that looks at the "return on value" from early draft picks. They conclude that, on average, the first pick in the draft is the least valuable in the entire first round. Here's Thaler explaining this research:
If the market for draft picks were "efficient," meaning that the prices reflected intrinsic value, the resulting value for a team that trades up for a higher pick should be equal to the value of the picks it gives up. The price of moving up is steep: to move from the 11th pick to the 5th pick, for example, a team would have to forfeit its second-round pick as well. To be worth it, the player taken just six picks earlier would have to be a whole lot better -- because both of the players given up could have become stars, too.
How confident should a team be that this early pick is better? Suppose we rank all the players at a given position -- running back, linebacker, etc. -- in the order they were picked in the draft, then compare any two in consecutive order on the list. What do you think is the chance that the player picked higher will turn out to be better -- as judged, say, by number of games started in his first five years in the league?
If teams knew nothing, the answer would be 50 percent, as it would be for flipping a coin. If they had perfect knowledge, the answer would be 100 percent. Go ahead, make your guess.
The answer is 52 percent -- an outcome that is barely better than that of a coin flip.
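To make Thaler's thought experiment concrete, here is a rough sketch of how that percentage could be computed: group players by position, order them by draft slot, and count how often the earlier pick out-started the next player taken at the same position. The player records below are invented; they are not Massey and Thaler's data.

```python
# Rough sketch of the consecutive-pick comparison: among players taken back-to-back
# at the same position, how often does the earlier pick outperform the later one?
# The player records below are invented for illustration.
from itertools import groupby

# (position, overall draft pick, games started in first five years) - hypothetical
players = [
    ("QB", 1, 42), ("QB", 11, 55), ("QB", 36, 20),
    ("RB", 4, 60), ("RB", 19, 48), ("RB", 50, 52),
    ("WR", 7, 30), ("WR", 22, 44), ("WR", 64, 10),
]

wins, pairs = 0, 0
players.sort(key=lambda p: (p[0], p[1]))              # group by position, then draft order
for _, group in groupby(players, key=lambda p: p[0]):
    group = list(group)
    for earlier, later in zip(group, group[1:]):        # consecutive picks at this position
        pairs += 1
        if earlier[2] > later[2]:                        # earlier pick started more games
            wins += 1

print(f"Earlier pick was better in {wins}/{pairs} pairs ({100 * wins / pairs:.0f}%)")
```

Run over actual draft histories, a figure of 52 percent is what Thaler is describing: barely distinguishable from a coin flip.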
What this research makes clear is that powerful incentives don't automatically lead to better decisions or effective models. (Not that we needed the NFL as proof - just look at Wall Street.) Even when our biases are penalized - and the sin of overconfidence has cost NFL teams a lot of money - we still cling to them. The power of market forces is no substitute for a little self-awareness.
The comments you make are representative of two extremes that we face in society today despite a plethora of new information available.
1) Driven dumb by data: This is one extreme that a lot of people fall into. They play with so many different numbers that they can't separate the relevant from the meaningless. There are always meaningless numbers available for any equation (like your combine example).
2) System perpetuation: Some people refuse to acknowledge that the numbers available to them are more beneficial than their gut instinct. I believe this happens most often when those who were successful in the current system are placed in charge. They believe that since they were successful in one system, it would be heretical to convert and, potentially, find something better. Perhaps they wouldn't even be acknowledged in the "new system".
An NFL front office assistant told me combine results might change where a player is rated within the tier he has already been graded into based on scouting/film analysis, but only rarely will it move a player out of that tier. He also said the part of the combine teams give the most weight to is the interviews. Keep in mind this is the front office view, and is not true of owners, who get really excited about combine results.
They conclude that, on average, the first pick in the draft is the least valuable in the entire first round.
Do they control for the fact that, barring trades, the team that makes this pick is the team with the worst record in the previous season? If your offensive line can't protect against the pass rush, it doesn't matter how good your QB is, he's going to be sacked early and often. Likewise, if your defense is vulnerable to the four-yards-and-a-cloud-of-dust strategy, your QB won't have enough time on the field to make up for the points the defense is giving away. There is a potential selection bias here.
OTOH, maybe that top pick is worth it if you get two or three lower picks in exchange. Then you might be able to put together a QB, offensive line, and defense that are all good-but-not-great, and win some games against teams that are great at one of these things but mediocre at the other two.
With regards to the Thaler paper, I highly recommend this post from the Advanced NFL Stats blog...
http://www.advancednflstats.com/2010/04/rethinking-massey-thaler-draft-…
He comes to a significantly different conclusion.
I wonder how much of this behavior is driven by the consumer, that is, the fan.
A fan wants to hear that his team has drafted player X in the first round, who has some college record, and such and such combine scores. The real purpose served is hype: that the team has done EVERYTHING in their power to get the best, so that they will win.
The team is also investing in a potential trade.
Maybe there is some parallel to investor behavior there: actively managed funds on average don't do better than 'dumb' indexes, but their advanced investment strategies sound so sexy to investors.
Of course the reliance on statistics to justify draft choices is hardly better than reading tea leaves. When you get to the level of potential draft picks for the NFL, or any other professional sport, there is relative parity among nearly all of the possible picks. But, since GMs have to justify their position and protect themselves, they fall back on "data," as if quantification has some magic to it.
That said, the problem with the trading up study is that it misses the point. Teams trade up because they have specific needs to be filled, not because they are necessarily comparing the two athletes available. So, for instance, a team trades up to get a running back to fill a particular need. Comparing that athlete to another that they would have taken if they hadn't moved up is not relevant. That other athlete might have been playing another position. So, even if the lower drafted player is better than the higher one, it is irrelevant if the team has filled a spot they needed to cover.
At the end of the day, there is little justification for the emphasis placed on quant data. Noting that it is "objective" only obscures the reality that a good eye for talent uses physical skills as the price of entry, then relies on intuition and studied experience to assess the softer, often qualitative and subjective, factors to make the best choice...for better or, too often, worse.
"The underlying reason, I think, is that the kind of logical, abstract intelligence measured by the Wonderlic has little to do with the kind of decisions made on the football field; knowing pre-algebra doesn't help you find the open man."
Hmmm... The military, which obsesses over IQ, has for decades found a very strong link between IQ and performance in almost any real-time task, including combat. So why the discrepancy between decades of data and the Wonderlic/QB connection? You'd think there would be a crossover.
PAGING STEVE SAILER?
A top college player excels with ease. In the NFL he suddenly finds himself getting in a world of hurt, mentally and physically. It is difficult to predict how a player will react to getting his ass kicked; it's even more difficult to predict how a player will react to getting his ass kicked after he's been handed millions of dollars.
Awesome post, dude! Colin Cowherd has been pointing out for years that the most successful NFL teams routinely trade *down* in the draft. It is also very interesting that many of the most successful NFL quarterbacks come not from elite programs, but from weaker ones. Cowherd's theory is that this is because QBs at Texas, USC, Oklahoma, and the like never get pressured, never have to play from behind, and know that they have awesome defenses on the other side of the ball. For this reason, Cowherd has predicted that Quinn will be the best QB in the draft this year: he has played from behind all the time, has been pressured a ton, and ND's defense was a joke, yet his TD/INT ratio is still outstanding.
The finding that combine numbers (and physical metrics in general) show no **correlation** with performance does NOT mean they are **irrelevant.**
They undoubtedly define a threshold for physical abilities, which are not infinitely elastic.
If anything, this means that, among the players selected and drafted, success is not determined by these parameters, or may even be random with respect to them.
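One way to square a physical threshold with a weak correlation is restriction of range: if teams only draft players above some cutoff, a measure can matter a great deal in the overall population and still show a much weaker correlation within the drafted group. A toy simulation (all distributions and the cutoff invented) makes the point:

```python
# Toy illustration of range restriction: a physical measure can gate who gets
# drafted yet show a much weaker correlation with performance among those drafted.
# The distributions and cutoff below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
speed = rng.normal(0, 1, 100_000)                        # standardized physical measure
performance = 0.6 * speed + rng.normal(0, 1, 100_000)    # speed genuinely matters overall

drafted = speed > 1.5                                     # only the fastest get drafted

r_all = np.corrcoef(speed, performance)[0, 1]
r_drafted = np.corrcoef(speed[drafted], performance[drafted])[0, 1]
print(f"correlation overall: {r_all:.2f}")                # clearly positive
print(f"correlation among drafted: {r_drafted:.2f}")      # much weaker
```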
And while I haven't read this paper, I would like to see some meta analysis of some sort on statistical research of this kind. Over and over, I see researchers come to flawed conclusions through hefty data analysis because they do not understand what they are researching.
To Comrade PhysioProf; I think you meant to say that "Clausen" would be the best QB pick in the draft this year, not "Quinn". Either that, or you meant the 2007 NFL draft.
Though not mentioned in this article, the thing that irks me the most about the NFL draft is the post-draft grades that columnists and supposed experts give the teams... just days, or even hours, after the draft. There's no way to grade the success of a team's draft until witnessing the players' performance (or lack of performance) on the field!
Yeah, Clausen! D'Oh!