[This post was originally published at webeasties.wordpress.com]
I’ve played video games most of my life: starting with Tom Sawyer’s Island and Matterhorn Screamer (both released in 1988), then the early Final Fantasies and Secret of Mana on Super Nintendo in middle school, games like Starcraft and Half-Life (Counter-Strike, Day of Defeat, etc.) in high school, and Halo in college. Grad school finally ended my three-year love affair with World of Warcraft. I’ve always played for fun, but two papers in last week’s Nature show how video games can be put to even better use (both are behind paywalls, unfortunately).
The first is a perspective on the growing use of video games as educational tools, especially for science subjects.
Over the past decade, evidence has grown that computer-based play can support learning in schools. Pedagogical studies and evaluations, summarized in a 2006 joint report titled ‘Unlimited Learning’, by the UK government’s education department and a software publishers’ association, found that students whose lessons included interactive games were more engaged in curriculum content and demonstrated deeper understanding of concepts than those who did not use games. Better exam scores and teacher ratings resulted when computer games, both commercial and bespoke, were used as support materials. A plethora of organizations have sprung up to explore computer-based learning; in the United Kingdom, these include Futurelab in Bristol and the Serious Games Institute at Coventry University.
This is not, strictly speaking, a new idea – using games for learning has been steadily increasing in popularity for over a decade. There are even plans to develop a charter high school centered entirely around gaming (the segment starts around minute 28). But I think it’s great to continue bringing attention to this idea, and this article focuses more on educating the general public about important scientific topics of general concern (like global warming).
The other article, though – that one totally blew my mind.
People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space.
Here’s the deal: proteins are incredibly important to biological processes. Most people think about genes when they think about the basis of life, but a gene is just a code to make a protein. Proteins do all the work in a cell – they form the structural scaffold, make metabolic reactions work, and relay signals within and between cells. There are millions of different proteins in the living world, but they are simply different chains of amino acids. The key to their diverse functions is the way these chains fold up to make different shapes based on the chemical properties of amino acids.
We know a lot about what influences this folding, but predicting the shape of a protein based only on the amino acid sequence is very difficult and requires enormous computing power. Most of the time, to find the real shape of a protein, we rely on X-ray crystallography, which can be quite laborious and even impossible for certain types of proteins. What these researchers did was to use human computing power instead.
They designed a game called “Foldit” in which human players were given proteins with some of the folds completed, and then asked to manipulate these chains to make shapes that made the protein more stable. We know a lot about how the final shape affects stability, and these rules were plugged into algorithms that gave each fold a particular score – the more stable (and therefore the more likely) the fold, the higher the score. They then allowed people (mostly non-scientists) to play and try to get the highest scores possible, and compared the results to the best predictive software available, called Rosetta. In some cases, the humans did much better than the computers, and in less time:
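To make the idea concrete, here’s a toy sketch of turning stability rules into a score. The specific rules and weights here are made up for illustration – they are not Foldit’s actual scoring function – but the principle is the same: each candidate fold gets a number, and higher means more stable.

```python
# Hypothetical scoring sketch (NOT Foldit's real energy function):
# reward favorable contacts and buried hydrophobic residues,
# heavily penalize atoms packed impossibly close together.

def fold_score(contacts, clashes, buried_hydrophobics):
    """Return a stability score for a candidate fold; higher is better."""
    return 2 * contacts + 3 * buried_hydrophobics - 10 * clashes

# A compact, well-packed fold beats the same fold with steric clashes:
print(fold_score(contacts=12, clashes=0, buried_hydrophobics=5))  # 39
print(fold_score(contacts=12, clashes=2, buried_hydrophobics=5))  # 19
```

Players never see the arithmetic – they just see their score go up or down as they pull the chain around, which is what makes the game playable by non-scientists.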
Human players are also able to distinguish which starting point will be most useful to them. Players were able to identify the model closest to the native structure, and to improve it further. Given the same ten starting models, the Rosetta rebuild and refine protocol was unable to get as close to the native structure as the top-scoring Foldit predictions[...]
Basically, the computer program goes through many possible predictions and analyzes each one to see if it is an improvement. If it is, the computer continues from there and makes more predictions. But sometimes, getting to the right solution requires moving through a very unfavorable shape. Human spatial intuition can see past that, but the computer sees the unfavorable shape and concludes that it’s on the wrong track. Overall, the results were striking:
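The trap described above is easy to demonstrate on a toy landscape. This sketch is not Rosetta’s actual algorithm – it’s a minimal greedy search on a made-up one-dimensional “stability” function with two peaks, where reaching the higher peak requires first walking downhill through a low-scoring valley:

```python
# Toy illustration of why improvement-only search gets stuck:
# a made-up score landscape with a local peak at x = 2 (score 5)
# and the global best at x = 8 (score 10), separated by a valley.

def score(x):
    return max(5 - 2 * abs(x - 2), 10 - 2 * abs(x - 8))

def greedy_search(x):
    """Step to a neighbor only if it strictly improves the score."""
    while True:
        step = max((x - 1, x + 1), key=score)
        if score(step) <= score(x):
            return x  # no neighbor improves the score: stuck
        x = step

# Starting at 0, the search climbs to the local peak and stops there,
# because every path to x = 8 passes through worse-scoring shapes.
print(greedy_search(0), score(greedy_search(0)))  # stuck at x = 2, score 5
```

A human player, seeing the whole shape at once, can accept the temporary score drop and cross the valley – which is exactly the advantage the Foldit players had.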
Foldit players performed similarly to the Rosetta rebuild and refine protocol for three of the ten blind puzzles. They outperformed Rosetta on five of the puzzles, including the two above cases where players performed significantly better. For two of the ten blind puzzles, the top-scoring Rosetta rebuild and refine prediction was numerically better than the Foldit solution but still basically incorrect.
Humans couldn’t best the computer in every instance though. When just given completely unfolded chains, humans were pretty bad at getting all the local folds right – there was just too much to do. But this is precisely where our current computer models do very well. Combining the best parts of the computer with the best parts of human intuition could greatly enhance our understanding of protein structure. And using this method, even non-scientists can make a huge contribution to this advancement, and have fun doing it. Thanks, video games!