Though seemingly simple life forms, microorganisms can display surprisingly complex behaviors, such as altruism and cheating, that are more often associated with "higher" organisms. This paradox makes microorganisms--which are more amenable to laboratory investigations than, say, dolphins or elephants--ideal for investigating social evolution. Take the social amoeba Dictyostelium discoideum. When food is plentiful, these amoebae live in soil as single cells, feasting on bacteria. But when starved, they converge by the tens of thousands, migrate toward the soil surface, and form a multicellular fruiting body with a ball of spores at the tip. Although most cells become spores, about a fifth of the amoebae die and become the stalk that lifts the spores above the ground, increasing their chances of dispersing to more favorable environments.
In social amoebae such as Dictyostelium discoideum, cells aggregate to form a multicellular slug that migrates and then forms a fruiting body, which contains live spores (which go on to make new amoebae) and dead stalk cells. Unlike animals, in which all cells descend from a single fertilized egg, social amoeba fruiting bodies can contain cells of different genotypes. This potential for chimerism creates a conceptual problem: "cheater" cells could arise that preferentially become reproductive spores, forcing their victims to become stalk cells and die. One way that amoebae could avoid being cheated is to recognize and preferentially aggregate with genetically similar cells while avoiding genetically distant ones--a process called kin discrimination. We tested whether cells of D. discoideum could discriminate in this way. We mixed cells from genetically distinct strains and found that they segregate during multicellular development. The degree of segregation increases in a graded fashion with the genetic distance between strains. Our results demonstrate the existence of kin discrimination in D. discoideum, an ability that is likely to reduce the potential for cheating and ensure that the death of the stalk cells provides a fitness advantage to related individuals.
The journal impact factor (IF) is generally accepted as a good measure of the relevance and quality of the articles a journal publishes. Despite an apparently homogeneous peer-review process within a given journal, we hypothesized that authorship from developing Latin American (LA) countries detrimentally affects a journal's IF. Seven prestigious international journals (one multidisciplinary and six serving specific branches of science) were examined in terms of their IF in the Web of Science. Two subsets of each journal were then selected to evaluate the influence of authors' affiliation on the IF: contributions (i) with authorship from four LA countries (Argentina, Brazil, Chile, and Mexico) and (ii) with authorship from five developed countries (England, France, Germany, Japan, and the USA). Both subsets were further subdivided into two groups: articles with authorship from one country only and collaborative articles with authorship from other countries. Articles from the five developed countries had IFs close to the overall IF of the journals, and the influence of collaboration on this value was minor. For LA articles, the effect of collaboration (virtually all with developed countries) was significant: the IFs of non-collaborative articles averaged 66% of the overall IF of the journals, whereas collaboration raised the IFs to values close to the overall IF. The study thus shows a significantly lower IF for the subset of non-collaborative LA articles, and hence that authorship from developing LA countries does detrimentally affect a journal's IF. There are no data to indicate whether the lower IFs of LA articles were due to inherently lower quality or relevance, or to a psycho-social tendency toward under-citation of articles from these countries.
However, further study is required, since this trend has foreseeable consequences: it may encourage editors to adopt strategies of turning down articles that tend to be under-cited.
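For readers unfamiliar with the metric, the standard two-year impact factor for year Y is the number of citations received in Y by items a journal published in Y-1 and Y-2, divided by the number of citable items it published in those two years. The sketch below illustrates the calculation and the reported 66% gap with made-up numbers (the figures are hypothetical, not data from this study):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    articles from the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2005 to the 400 citable
# items it published in 2003-2004.
overall_if = impact_factor(1200, 400)   # 3.0

# A subset of articles averaging 66% of the overall IF, as reported
# here for non-collaborative LA articles, would sit at roughly:
la_subset_if = 0.66 * overall_if        # 1.98
```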
Prevention of early, unintended pregnancy, abortions, and sexually transmitted infections among adolescents is a very high priority in the United States and Europe, and the United Kingdom has a target to halve pregnancy rates among under-18-year-olds by 2010. School-based sexual health education provides an obvious approach, but evaluations of the effectiveness of such interventions, in both high-income [2,3] and low-income countries, have not been very encouraging. In this week's PLoS Medicine, Judith Stephenson and colleagues report the long-term results of the RIPPLE trial comparing peer-led and teacher-led approaches, which builds on previous studies of school-based sex education.
Peer-led sex education is widely believed to be an effective approach to reducing unsafe sex among young people, but reliable evidence from long-term studies is lacking. To assess the effectiveness of one form of school-based peer-led sex education in reducing unintended teenage pregnancy, we did a cluster (school) randomised trial with 7 y of follow-up. Twenty-seven representative schools in England, with over 9,000 pupils aged 13-14 y at baseline, took part in the trial. Schools were randomised to either peer-led sex education (intervention) or to continue their usual teacher-led sex education (control). Peer educators, aged 16-17 y, were trained to deliver three 1-h classroom sessions of sex education to 13- to 14-y-old pupils from the same schools. The sessions used participatory learning methods designed to improve the younger pupils' skills in sexual communication and condom use and their knowledge about pregnancy, sexually transmitted infections (STIs), contraception, and local sexual health services. Main outcome measures were abortion and live births by age 20 y, determined by anonymised linkage of girls to routine (statutory) data. Assessment of these outcomes was blind to sex education allocation. The proportion of girls who had one or more abortions before age 20 y was the same in each arm (intervention, 5.0% [95% confidence interval (CI) 4.0%-6.3%]; control, 5.0% [95% CI 4.0%-6.4%]). The odds ratio (OR) adjusted for randomisation strata was 1.07 (95% CI 0.80-1.42, p = 0.64, intervention versus control). The proportion of girls with one or more live births by 20.5 y was 7.5% (95% CI 5.9%-9.6%) in the intervention arm and 10.6% (95% CI 6.8%-16.1%) in the control arm, adjusted OR 0.77 (0.51-1.15). Fewer girls in the peer-led arm self-reported a pregnancy by age 18 y (7.2% intervention versus 11.2% control, adjusted OR 0.62 [95% CI 0.42-0.91], weighted for non-response; response rate 61% intervention, 45% control). 
There were no significant differences for girls or boys in self-reported unprotected first sex, regretted or pressured sex, quality of current sexual relationship, diagnosed sexually transmitted diseases, or ability to identify local sexual health services. Compared with conventional school sex education at age 13-14 y, this form of peer-led sex education was not associated with change in teenage abortions, but may have led to fewer teenage births and was popular with pupils. It merits consideration within broader teenage pregnancy prevention strategies.
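As a sanity check on the self-reported pregnancy figures, a crude (unadjusted) odds ratio can be recomputed from the quoted proportions. Note the published OR of 0.62 was additionally adjusted for randomisation strata and weighted for non-response, so an exact match is not expected:

```python
def odds_ratio(p_intervention, p_control):
    """Unadjusted odds ratio from two proportions:
    odds = p / (1 - p); OR = odds_intervention / odds_control."""
    odds_i = p_intervention / (1 - p_intervention)
    odds_c = p_control / (1 - p_control)
    return odds_i / odds_c

# Self-reported pregnancy by age 18: 7.2% intervention vs 11.2% control.
crude_or = odds_ratio(0.072, 0.112)
print(round(crude_or, 2))  # 0.62, in line with the adjusted OR reported
```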
Bioinformatics sits at the crossroads of several scientific disciplines, in particular biology, mathematics, and computer science. While this interdisciplinary character undoubtedly contributes to the subject's attractiveness for researchers and students, it also demands mastery of heterogeneous skills. Preconceptions lead some biology students to believe that in silico approaches are for computer-savvy specialists only. Teaching bioinformatics therefore means not only helping students overcome perceived obstacles, such as mastering biostatistics or computational tools, but also making a concerted effort to drive home the message that applying bioinformatics tools and interpreting their results is an eminently biological endeavor.
In eukaryotes (and viruses), genes may be organized into coding and noncoding regions, called exons and (spliceosomal) introns, respectively (Box 1). Both types of sequence are transcribed into pre-mRNA, but whereas exons are used for protein synthesis, introns are spliced out during or immediately after transcription (Figure 1). Although spliceosomal introns are widespread across the eukaryotic tree, they are unequally distributed among species as a consequence of ongoing intron gain and loss [2,3]. For instance, only 287 spliceosomal introns populate the entire genome of the baker's yeast (Saccharomyces cerevisiae), but this number rises to ~4,760 in another yeast species (Schizosaccharomyces pombe) and reaches ~38,000 and ~140,000 in the genomes of the fruit fly Drosophila melanogaster and of Homo sapiens, respectively. Explaining the causes and functional implications of this uneven distribution requires understanding why spliceosomal introns exist in the first place and what the evolutionary origin(s) of these sequences are--a problem that has proved a conundrum for the past 30 years.
Thousands of human deaths from rabies occur annually despite the availability of effective vaccines for humans following exposure and for disease control in domestic dog populations. We established a 5-year contact-tracing study in northwest Tanzania to investigate risk factors associated with rabies exposure and to determine why human deaths from canine rabies still occur. We found that children were at greater risk of being bitten and of developing rabies than adults, and that the incidence of bites by suspected rabid animals was higher in areas with larger domestic dog populations. A large proportion (>20%) of those bitten by rabid animals are not recorded in official records because they do not seek post-exposure prophylaxis (PEP), which is crucial for preventing the onset of rabies. Of those who seek medical attention, a significant proportion do not receive PEP because of its expense or because of hospital shortages, and victims who are poorer and who live farther from medical facilities typically experience greater delays before obtaining PEP. Our work highlights the need to raise awareness about the dangers of rabies and its prevention, particularly prompt PEP but also wound management. We outline practical recommendations to prevent future deaths, stressing the importance of education, particularly in poor and marginalized communities, as well as for medical and veterinary workers.
Evidence that neuroscience improves our understanding of economic phenomena [1-4] comes from a broad array of novel experimental findings, including demonstrations of brain regions that guide responses to fair [5,6] and unfair social interactions, that resolve uncertainty during decision making, that track loss aversion and subjective value, and that encode willingness to pay [11,12] and reward error signals [13,14]. Yet, neuroeconomics has been characterized as a faddish juxtaposition, not an integration, of disparate domains. More damningly, critics have charged that neuroscience and economics are fundamentally incompatible, an argument that resonates with many social scientists. Economics thrived for centuries in the absence of neuroscience, and some economists argue that existing neuroeconomics research is not useful to mainstream economics [17,18].
Although randomized trials provide key guidance for how we practice medicine, trust in their published results has been eroded in recent years by several high-profile cases of alleged data suppression, misrepresentation, and manipulation [1-5, 39]. While most publicized cases have involved pharmaceutical industry trials, accumulating empirical evidence shows that selective reporting of results is a systemic problem afflicting all types of trials, including those with no commercial input. These examples highlight the potentially harmful impact of biased reporting on patient care, and the violation of researchers' and sponsors' ethical responsibility to disseminate results accurately and comprehensively.
Background to the debate: The global burden of disease falls disproportionately upon the world's low-income countries, which are often struggling with weak health systems. Both the public and private sector deliver health care in these countries, but the appropriate role for each of these sectors in health system strengthening remains controversial. This debate examines whether the private sector should step up its involvement in the health systems of low-income countries.
All health-care professionals want their patients to have the best available clinical care--but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine--the systematic review and appraisal of clinical research findings. For example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsor (usually a pharmaceutical company) must submit a "New Drug Application" (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials. These include "efficacy" trials, in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) "outcomes." FDA reviewers use this evidence to decide whether to approve a drug.