Yesterday, I blogged about a recent article correlating a nation's research output related to human embryonic stem cells (hESCs) with its policies on hESC research. There was one particular source of uncertainty, though:
As Levine points out, he didn't actually count papers that published results on hESCs, but papers that cited the original hESC paper. As he acknowledges, he's really counting papers related to hESC research, which leaves his results much more open to interpretation than they would be otherwise. This could be quite interesting, because his results could indicate that restrictive policies inhibit even research that's merely related to hESCs. Or, it could simply mean that the differences would be more extreme if he had only counted papers actually presenting results on human embryonic stem cells (which isn't as interesting). Additional research would be required to determine which scenario is actually occurring.
I put this question to the article's author, Aaron Levine, and in his view the second scenario is the more relevant one:
It may be a little of both, but I think the second interpretation (that I actually underestimate the differences that would be seen if my data only included actual hESC articles) is more important. In particular, I think my methodology is susceptible to underestimating underperformance in countries with relatively large biomedical research communities, but little hESC research. This is because these sorts of countries--of which France, Germany and Japan are examples--cite the initial hESC article frequently enough in related non-hESC developmental biology articles to mask underperformance in actual hESC research.
This doesn't give us any new hard data, but it does indicate that Levine's results represent a best-case scenario. If so, the consequences of policies not conducive to hESC research, including those in the US, could be even more dire than his publication suggests.