These guys are proposing that we construct Fantasy Journals -- drafted sets of journal articles -- at meetings and scientific gatherings sort of like Fantasy Football. Each player would get access to say all the papers to be presented at the meeting (or a more limited number if that is too many). Your journal is selected from that number, and the winner is determined at the next year's meeting by the citation numbers for your papers.
Bergstrom et al. list the possible benefits:
This is a game that one would win by being good at picking the soon-to-be hot papers. Our lab would have a blast playing -- and if I challenged my graduate students to beat my picks, I can guarantee that they would read an increasing fraction of the literature in their efforts to put me in my place. After all, this is a game that lets scientists show off the one thing they may like to show off even more than their own research abilities -- their access to fresh, hot, timely, valuable information about the next great development in the field.
It's gossipy, it's self-referential, it's a sporting blend of skill, effort, and fortune, it's competitive -- all great features to have in a game. And it has useful side effects: it gets people to read the literature, it generates an interesting ensemble of individual "overlay journals" that reflect the interests of individual researchers, and it potentially generates large quantities of bibliometric evaluation data that could subsequently be used in scientific search.
The trick, of course, as I understand it from Fantasy Football, is to pick some sure bets and some long shots -- to balance out your risk portfolio. You know that some papers are going to be big winners, but those get distributed evenly among players early in the draft. You get your edge by knowing enough about the new papers by relatively unknown authors to know which are going to be huge.
We should totally play this at Neuroscience. You would have to limit it to your subfield probably -- something like Behavioral Neuroscience for me rather than all of Neuroscience, but it would still be super fun. Drinks to the winner?
The devil is always in the details, so here is what they say about implementation:
Implementation: We'd propose to develop a simple platform for playing this game. We can test it (1) among interested groups of friends, and (2) at a moderately-sized conference in computer science. In the latter case, the game would be to assemble a "dream team" from the papers at that particular conference -- with a prize to be awarded at the next year's meeting to the person who chose the best set of papers (perhaps measured by the number of citations in Google Scholar).
There are a few implementation challenges, not the least of which is figuring out exactly how to design the rules. We can divide this into (A) acquiring papers and (B) scoring.
Acquiring papers has to be done on some sort of bidding or draft-and-trading system. Everyone knows that the next paper by a Nobel Laureate working in immunology is going to garner more citations in a year than the next paper by an unknown graduate student working in economics. The game gets interesting when there are constraints on who you can pick (e.g. a draft followed by a trading period), or when there is something like a futures market for the citations that papers will receive. At the same time, the rules here need to be (a) very simple, (b) executable without getting everyone together to play simultaneously, (c) not requiring players to return repeatedly to make trades or adjustments in response to other players' offers or moves, (d) scalable - no one wants to receive 198,035 potential papers in the '07 draft! This seems to rule out most draft and trading systems, as well as sophisticated futures markets. One system that might work would be to tightly circumscribe the set of possible papers, and then offer fair odds against the picks that the players make. There must be other, even more clever designs out there!
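To make the "fair odds" idea concrete, here is a minimal sketch (my own illustration, not part of the proposal): every player picks freely from a small fixed pool, and a pick pays out its actual citations divided by its pre-draft expected citations, so an obvious blockbuster earns no edge over par.

```python
# Hypothetical "fair odds" scoring: a pick's payout is its actual
# citation count divided by the bookmaker's expected count, so the
# only way to profit is to beat the consensus expectation.

def fair_odds_score(picks, expected, actual):
    """Sum of actual/expected citation ratios over a player's picks.

    picks    -- list of paper ids chosen by the player
    expected -- dict: paper id -> pre-draft expected citation count
    actual   -- dict: paper id -> citations at next year's meeting
    """
    return sum(actual[p] / expected[p] for p in picks)

# Illustrative numbers (invented): the famous-lab paper roughly meets
# expectations; the unknown grad student's paper pays 4.5x.
pool_expected = {"laureate_paper": 40.0, "grad_student_paper": 2.0}
observed      = {"laureate_paper": 45,   "grad_student_paper": 9}

print(fair_odds_score(["laureate_paper"], pool_expected, observed))      # 1.125
print(fair_odds_score(["grad_student_paper"], pool_expected, observed))  # 4.5
```

The appeal of this design is exactly the scalability point above: no simultaneous draft, no trading window, just independent picks settled against posted odds.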
Scoring has to be done in a way that returns results within a tolerable time frame (a few months or less, ideally) and that is hard to game. Here Google Scholar citation counts should work about as well as anything I can think of. If we allow people to pick papers that already have a non-zero citation count, we might also want to account for the fact that citation is a preferential attachment process in which highly cited papers attract more citations by virtue of their prominence in the references of other papers. There are doubtless other issues in implementation, but this should give a reasonable sense of the scope of the problem.
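One simple way to blunt that preferential-attachment advantage -- an adjustment I'm assuming here, not one the proposal specifies -- is to score the citation gain over the contest window, shrunk by the paper's prior citation count, so already-prominent papers can't win on momentum alone:

```python
# Assumed scoring adjustment: citation gain over the contest window,
# damped by the paper's prior citation count. A fresh paper that earns
# 10 citations outranks a famous paper that adds 10 on top of 100.

def adjusted_gain(prior, final, damping=1.0):
    """Citation gain divided by (1 + damping * prior citations)."""
    return (final - prior) / (1.0 + damping * prior)

print(adjusted_gain(0, 10))          # 10.0  -- fresh paper, 10 new cites
print(adjusted_gain(100, 110))       # ~0.099 -- famous paper, 10 new cites
print(adjusted_gain(100, 110, 0.0))  # 10.0  -- damping off: raw gain
```

The `damping` knob is a rule-design choice: at 0 the game rewards raw gains (and hence famous picks), and higher values push players toward genuinely fresh long shots.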
Debate away! Maybe someday you can claim: "Jesus edited my Fantasy Journal."
Hat-tip: Alex Tabarrok
I'd so win in my field! There is an advantage in looking at the field first from the inside, then stepping back to watch from the outside and thinking about historical context for a few years.
So, lemme get this straight, you're limited to what was presented at the conference? That doesn't seem right. In fact, I don't even like the idea of drafting papers (either in the sense of writing them or in the sense of picking a list of papers). I say you should draft PIs. Your score is based on the number of papers the PI authors that year. Points are awarded based on the PI's position in the author list and the impact factor of the journal in which the paper is published. Just like in fantasy football, where you can only start two RB's per week, you're limited in how many Endowed faculty you can draft, how many associate profs, how many assist profs, etc.
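A quick sketch of that PI-draft scoring rule, for fun -- the position weights and the example impact factors below are my own invention, since the comment doesn't pin down numbers:

```python
# Hypothetical scoring for the PI-draft variant: each paper is worth
# (authorship-position weight) x (journal impact factor), summed over
# the PI's papers for the season. Weights here are assumptions.

POSITION_WEIGHTS = {"first": 1.0, "last": 1.0, "middle": 0.5}

def pi_paper_points(position, impact_factor):
    """Points for one paper: position weight times journal IF."""
    return POSITION_WEIGHTS[position] * impact_factor

def pi_season_score(papers):
    """papers: list of (position, impact_factor) tuples for one PI."""
    return sum(pi_paper_points(pos, jif) for pos, jif in papers)

# A made-up season: two senior-author papers and one middle-author one.
season = [("last", 9.2), ("middle", 3.1), ("last", 31.8)]
print(round(pi_season_score(season), 2))  # 42.55
```

Capping how many endowed chairs versus assistant profs you can start, as the commenter suggests, would then just be a constraint on the roster you feed into this scorer.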