There are several things that cause extinction, but ultimately it is always the same: the last individual (or small number of individuals) of a species dies. That may sound like a trivial explanation for extinction, but consider what happens when you work backwards from that tragic moment in time. You have the remnant of a population that was once much larger but was somehow reduced in size, which then dwindled to the last few, then the last one, then zero. But how did that small population go from hundreds to a few and then to zero? Most likely for no particular reason other than this: the number of individuals in a population fluctuates, in absolute terms as well as relative terms. Say there is a population of a rare rabbit, Bugs bunnii, living in a region the size of Yellowstone National Park. If you introduce 1,000 coyotes to this area, they will eat rabbits indiscriminately, all species, and along the way they may consume almost all of the B. bunnii, leaving maybe three or four. Then, those wemaining wabbits die off because they are all males, or for some other totally dumb and tragic reason.
In other words, many extinction events probably involve plain old bad luck, which happens to come along when a population has already been depleted. But, still working backwards, how did the population become so vulnerable, so numerically small, to begin with? This is more complicated, and it is impossible to generalize. While it is reasonable to assign the final one or two stages of extinction to expected bad luck owing to normal fluctuations in numbers and the randomness of events, it is not so easy to list the reasons why a population of some animal would become unsustainably small in the first place. Because there are many.
Famine and pestilence are on the list of possible causes. Invasive species can do significant damage to previously healthy populations, causing numbers to decline to the dangerously low level that allows random bad luck to do them in. Or there can be an ecological shift owing to the demise of a keystone species. For instance, go back to our faux Yellowstone park. Normally, there would be a healthy population of wolves and a few coyotes straggling around the edges (the latter preferring plains and prairies, the former, woodlands). Where wolves dominate, coyotes are rare, because canids tend to compete through exclusion; i.e., the wolves beat up, kill, or chase off the coyotes. But wolves don't eat little furry or scurrying ground animals like bunnies, lizards, mouse-like creatures, and so on. (Well, not often, anyway.) So these smaller animals worry about hawks and snakes while the wolves run around above them eating big things like elk and moose and bison.
Now, hunt out the wolves and coyotes come in to take their place. Coyotes, for a period of time, will play the role of small-critter vacuum cleaners. In the absence of wolves, they will scour the landscape of bunnies, lizards, mouse-like creatures, and other small things. Locally, these animals would drop to very small numbers in some cases. If, previously, habitat loss due to ranching had decimated the numbers of some lagomorph, like B. bunnii, the coyotes might do them in.
As you might imagine, estimating the expected extinction of species is a difficult process, because so many things are involved. Added to this is the problem that there are certainly many, many species of animals and plants that have not been identified by science but are going to go extinct for one reason or another. So the problem of estimating species extinction rates even requires estimating the extinction of species that we don't know exist. One way to do that is to go looking for new species under certain conditions, use the rate at which you find them to estimate how many unknown species are out there, and then use information about known species loss to estimate how many of those estimated unknowns will go extinct. And so on. Rather complicated.
A paper just out in Nature (see reference below) makes the claim that the standard method of estimating species extinction rates is flawed, and that the actual rate of extinction is considerably less than what we have heard in recent years. However, the paper itself is flawed at two levels, as far as I can see, and checking with colleagues on the blogosphere (see below), I'm pretty sure other people agree with these critiques.
The paper makes the following point: species extinction rates are estimated by reversing a widely known method of estimating species discovery ... the species-area curve method ... and this reversal underestimates the amount of habitat loss required to make a species go extinct.
Species area curves work like this. Let's say that I want to know how many insects live in the canopy of a large rain forest preserve in the Amazon. One way to do this is to gas (using a harmless sleeping gas, of course) all the insects up in the canopy in 10 hectares so they fall on a big sheet where I can observe them. We then identify and count the number of insects. If the reserve has 10,000 hectares, and I found ten different species in 10 hectares, then there must be 10,000 species there, right?
Well, no, because the species I found in my sample will probably also occur elsewhere in the forest. We can't just multiply the number of species by the number of unsampled units of land! So, let's try a different estimate: the number of species is 10. I found ten, I assume this bit of forest is just like the rest of the forest, so my work here is done. There are 10 insect species living in the canopy over this 10,000 hectares.
Well, no, actually, that's not good enough either. There are likely to be species out there I've not found because they weren't where I was sampling! This is where species-area curves come in. I pick an area small enough that it will under-represent the number of species, so that if I look in other areas of the same size, I'm likely to find more species. As I add units to my study, I'm likely to keep finding new species over time. Say I look at 2 hectares and find 5 species. I then look in 2 more hectares and find a few of the same ones, plus three new ones. And so on. The way a species-area curve works is that you accumulate the number of species across an accumulating region until you run out of time or mostly stop finding species. So, 2 hectares got me 5, 4 hectares got me 7 (in total), perhaps 6 hectares gets me an accumulated total of 8, 8 hectares gets me (still) 8 because I didn't find any new ones that time, 10 gets me 9 because I found only one more, and then after that no matter how hard I look I don't get any new ones. Until, like, a week later after searching the entire forest I find one more on the last day. Typical.
What is happening is this: as the area over which I search gets larger, I add species, until at some point the rate of adding species goes to what seems to be (but almost certainly is not) zero. At this point, or some time just before it, I stop, and while I know there must be some species I've not found yet, I can estimate how many there are using the curve. Individual animals (or plants or whatever) are found at a reasonably constant rate (with some variation), but NEW species are found at a diminishing rate. So the curve rises steeply at first and then flattens out; a classic example comes from a study of fish.
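To make the shape concrete, here is a minimal sketch in Python of the accumulation process described above. The species pool, plot sizes, and detection probabilities are entirely made up for illustration; they are not data from any real survey.

```python
import random

# Hypothetical canopy-insect pool: 12 species, some common, some rare.
# Each value is the (invented) chance of detecting that species in a
# single 2-hectare plot.
random.seed(1)
species_pool = {f"sp{i:02d}": p for i, p in enumerate(
    [0.9, 0.8, 0.7, 0.6, 0.5, 0.3, 0.2, 0.1, 0.08, 0.05, 0.03, 0.01])}

found = set()          # species encountered so far
area = 0               # hectares surveyed so far
for plot in range(10):              # ten 2-hectare plots, surveyed in turn
    area += 2
    for sp, p in species_pool.items():
        if random.random() < p:     # species present and detected here
            found.add(sp)
    print(f"{area:3d} ha surveyed -> {len(found)} species accumulated")
```

Run it and the tally climbs quickly over the first few plots and then flattens: the common species all turn up early, the rare ones trickle in, and the rarest may never be seen at all, which is what the fitted curve is used to correct for.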
So the number of species is approximated by using a combination of known and inferred relationships between area and the frequency of occurrence of both individuals and categories of individuals. There's a whole bunch of math and methodology involved in this approach to counting things in the wild (or in your thought experiments). And, if you want to estimate how many species of animals would go extinct as habitat is destroyed, you can use similar models and math. Think about that. How many species might you wipe out if you destroyed 10 hectares of that Amazonian reserve? It's not easy to calculate: a given 10 ha area might have about 5 unique species in it (using the data from our thought experiment; in an actual rain forest the number would be much larger), but you can use the logic of more area = more impact, along with species-area curve estimates of species richness, to estimate extinction.
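The usual way to formalize this is the power-law species-area relationship, S = cA^z, fit to survey data like the accumulation above. The "backward" extinction estimate that the Nature paper criticizes amounts to running that formula in reverse: shrink the area and read off how many species the smaller area is predicted to hold. Here is a minimal sketch; the values of c and z are invented for illustration, not fitted to anything real.

```python
# Power-law species-area relationship: S = c * A**z
# c and z are illustrative placeholders, not fitted parameters.
c, z = 4.0, 0.25

def species_supported(area_ha: float) -> float:
    """Species richness the SAR predicts for a given area."""
    return c * area_ha ** z

original_area = 10_000.0                       # hectares in the reserve
for lost_fraction in (0.1, 0.5, 0.9):
    remaining = original_area * (1 - lost_fraction)
    s_before = species_supported(original_area)
    s_after = species_supported(remaining)
    print(f"lose {lost_fraction:.0%} of habitat: "
          f"{s_before:.1f} -> {s_after:.1f} predicted species "
          f"({s_before - s_after:.1f} 'committed to extinction')")
```

The asymmetry between fitting this curve forward from samples and running it backward to predict losses is exactly what the Nature paper is about, as described next.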
The paper in Nature makes the following observation, which I think is correct: while we can estimate, extrapolating from a species-area curve, how many species there are in a given habitat and region, reversing the numbers underestimates how much habitat would have to be destroyed in order to kill off every individual of a given species. Therefore, the estimate of species extinction based on habitat loss as a direct cause of extinction, using the reverse of a species-area curve, gives a pessimistic view of species extinction rates.
This is probably correct, but the overarching point, that we have overestimated the rate at which species are going extinct, is not correct at all, for at least two reasons. First, you don't need to kill off the last individual to make a species go extinct. All you need to do is kill off a certain number, and the rest will most likely go extinct on their own. Second, habitat loss is not the only way to make a species go extinct. You know this by now because we discussed it above, but let's hit the horse a few more times for good measure.
Say, for instance, you reduce habitat in our hypothetical wolf-inhabited Yellowstone-like park to the extent that the wolves become so rare that a local outbreak of rabies kills them all off. Notice that they went extinct even though you did not destroy the amount of habitat necessary to make them go extinct as a direct result of habitat loss. Now, coyotes move in and take over what is left of the habitat. As you know, coyotes are small-animal vacuum cleaners (see above) and pretty soon there is no more B. bunnii, and a lot of other small mammals are gone as well. Crashes caused by shifts in keystone species, extinction due to small population size, the effects of disease on populations that had crashed and are coming back but with insufficient genetic variation, and so on and so forth are not considered in the paper.
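To see why a population knocked down to a handful of individuals tends to finish the job on its own, here is a minimal sketch of demographic stochasticity. The survival and birth probabilities are invented for illustration and have nothing to do with the paper; the only point is that identical per-individual odds doom small populations far more often than large ones.

```python
import random

def simulate(start_pop: int, years: int = 200) -> int:
    """Each year every individual survives with probability 0.8; if at least
    two individuals survive, each survivor produces one offspring with
    probability 0.25. Returns the final population size (0 = extinct)."""
    n = start_pop
    for _ in range(years):
        if n == 0:
            break
        survivors = sum(1 for _ in range(n) if random.random() < 0.8)
        births = (sum(1 for _ in range(survivors) if random.random() < 0.25)
                  if survivors >= 2 else 0)
        n = survivors + births
    return n

random.seed(42)
for start in (4, 40, 400):
    extinct_runs = sum(1 for _ in range(1000) if simulate(start) == 0)
    print(f"starting from {start:3d} individuals: "
          f"extinct in {extinct_runs / 10:.1f}% of 1000 runs")
```

Roughly speaking, this population neither grows nor shrinks on average, yet the runs that start with a few individuals very often wink out while the large ones essentially never do within the simulated window. That is the "kill off a certain number and the rest go extinct on their own" point in miniature.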
Here is the abstract of the paper in question:
Extinction from habitat loss is the signature conservation problem of the twenty-first century [1]. Despite its importance, estimating extinction rates is still highly uncertain because no proven direct methods or reliable data exist for verifying extinctions. The most widely used indirect method is to estimate extinction rates by reversing the species-area accumulation curve, extrapolating backwards to smaller areas to calculate expected species loss. Estimates of extinction rates based on this method are almost always much higher than those actually observed [2-5]. This discrepancy gave rise to the concept of an 'extinction debt', referring to species 'committed to extinction' owing to habitat loss and reduced population size but not yet extinct during a non-equilibrium period [6,7]. Here we show that the extinction debt as currently defined is largely a sampling artefact due to an unrecognized difference between the underlying sampling problems when constructing a species-area relationship (SAR) and when extrapolating species extinction from habitat loss. The key mathematical result is that the area required to remove the last individual of a species (extinction) is larger, almost always much larger, than the sample area needed to encounter the first individual of a species, irrespective of species distribution and spatial scale. We illustrate these results with data from a global network of large, mapped forest plots and ranges of passerine bird species in the continental USA; and we show that overestimation can be greater than 160%. Although we conclude that extinctions caused by habitat loss require greater loss of habitat than previously thought, our results must not lead to complacency about extinction due to habitat loss, which is a real and growing threat.
Stuart Pimm, blogging at National Geographic, notes:
The paper's title was emphatic enough: "species-area relationships always overestimate extinction rates." With modesty, they told the media that it had taken eight years of hard work to come to that stunning conclusion. It took me eight seconds to know the paper was a sham -- and I am a slow reader.
Stuart uses the example of a forest near his home outside Washington, D.C. that includes many bird species. He asks what would happen if every forest in the east but this one were wiped out overnight ... how many of these species would be extinct? The answer is, obviously, none of them, because they are all in the one remaining patch of forest. But, this is ...
...not the relevant answer...
How many species would eventually become extinct? The answer is very much higher. The populations of the species that survived the initial deforestation elsewhere would eventually die out.
...
One pair of (say) pileated woodpeckers or great-horned owls may have plenty of food in the Park. But, if one pair produces just one pair of young that survive to adulthood then you don't need elegant mathematics to work out the answer. There's a 50-50 chance that both those young will be the same sex -- both female or both male.
Even if they are different sexes, they will be brother and sister.
Simply, small populations suffer from the vagaries of sex and death and, on top of that, inbreeding, which doom them in the long term.
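Pimm's 50-50 figure is just binomial arithmetic for a brood of two; here is the same calculation for a few small brood sizes (plain probability, nothing from the paper):

```python
# Chance that all surviving offspring are the same sex, assuming each is
# independently male or female with probability 0.5.
for brood in range(2, 7):
    p_same_sex = 2 * 0.5 ** brood      # all male, or all female
    print(f"{brood} surviving young: P(all same sex) = {p_same_sex:.3f}")
```

With two young the chance is exactly 0.5, and even the "lucky" half of the outcomes leaves a brother-sister pair, which is Pimm's inbreeding point.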
Sheril Kirshenbaum also discusses the story, and both she and Stuart (oh, and me too) are concerned that this paper, which may or may not have an interesting result regarding the statistics of one part of the process of understanding species-count dynamics, will be interpreted as saying what it can't say: That the problem of species extinction is not as big as we thought. In fact, the paper does say that, but on further prodding from colleagues, one of the main authors backs off that position. Stuart Pimm reports ...
In writing to me about the fuss his paper had caused, author Fangliang He, an ecologist at Sun Yat-sen University in China, said:
"I have followed up some of the media and felt there is a danger of misinterpreting our work, which I would like to clarify here. ... All we have said is that the backward SAR is flawed and overestimates extinction rates, not anything more than that."
Well, of course, that wasn't what the paper said and it wasn't what the authors said to the media. If the paper had had "backward SAR" in the title, the media wouldn't have commented. And one wonders whether Nature would have published it.
So, it is an interesting statistical study that will have the effect of clouding public and policy-maker understanding of science, not so much because of the study (though there is that too) but because of how it has been marketed and how it is being hawked by the science press. SNAFU.
ADDED: Here's a nice new blog post on the topic: Species-area relationships don't overestimate extinction rates from habitat loss
He, F., & Hubbell, S. P. (2011). Species-area relationships always overestimate extinction rates from habitat loss. Nature, 473(7347), 368-371. DOI: 10.1038/nature09985
I don't think the publication of a scientific paper should depend on the interpretation and/or misuse others could make of it. The only criterion should be its scientific validity. If it is true that using the reverse curve we can overestimate the rate of extinction, we just have to find a better method to do that.
About the misleading title, are you sure it is not Nature's fault?
Titles for scientific articles and comments are not arbitrarily assigned by editors as are titles for other pieces, so no, it was negotiated, most likely. In any event, if you read what I've said here, I've not taken issue with the key statistical findings.
This paper is already being used in the popular press to cast doubt on current thinking on species extinction rates, and there isn't such doubt to be cast. The authors misrepresent the relationship between their statistical finding and the species extinction estimates and the press that goes along with the paper does that as well.
And I can already hear the "facts are facts" rhetoric like a wake following a garbage scow ...
" and we show that overestimation can be greater than 160%."
Huh? A factor of 1.6 of overestimation is considered significant?
The media, and the corporations they represent, will seize on any scientific work that can be seen as in some way supporting their view that nothing ever needs to be done anywhere, anytime, in relation to conservation of the environment. This has been obvious for many years, and any scientist who isn't aware of this, and therefore doesn't take great care with their conclusions, how these might be represented, and the presence of sentences that can be plucked out of context, is either foolish or playing the corporation game.
This kind of stuff is no longer the subject of just happy little tea room or seminar room discussions where everyone uses the same language and knows the literature, and where the conclusions really don't matter much to anyone except appointment and promotion boards. Nowadays it matters; there is a war against the world we live in, and scientists must be very aware of not loading bullets (i.e., data that can be misrepresented) for those on the other side of the conservation barricades.
@daedalus2u,
I read "overestimation can be greater than 160%" as a factor of >2.6.
The title certainly is worded poorly. Nature may have negotiated or suggested the title, but normally it is just up to the authors. I've never had a journal suggest changing the title of a paper. Sadly, I'm pretty sure Nature is happy with the controversy though.
IMO, the reviewers messed up on this. They should have dinged the title at least. You're allowed some leeway for overarching/overreaching and speculative points in the discussion, but not the title.
travc, I'm equivocating on the title issue because Nature is different enough that they might do something with titles. I'll ask and get back on that.
By the way, I should add this comment regarding titles: It makes no difference what a title is. There is, in fact, an unspoken rule in academia: If you base your thinking about a paper on its title, you're doin' it rong. You've got to at least go with what the abstract says!
This may be bad policy when it comes to the interface of science and policy makers, or science and the press, or science and the public, but it isn't entirely unreasonable, mainly because a title is short and can't contain much information anyway.
Greg,
I just happened across this paper which discusses the wolf-coyote effect quite nicely.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1702374/?tool=pubmed
Nice one!
Excellent article Greg. You covered the key points nicely, and laid out the issues clearly. I have posted a response to He and Hubbell on the SavingSpecies blog, focusing more on the non-scientific implications of their Nature paper. (Stuart Pimm is one of the founders of SavingSpecies.) I'd love your feedback on those thoughts, since others in this thread have commented on this aspect of the paper.
Extinction isn't just a numbers game -- the trouble with He and Hubbell
Roger, I'll have a look, thanks.
Regarding titles, I checked with Nature (unofficially) and it is as I suspected: academic papers (like this one) are handled the same as at other journals ... there are a few minor rules (i.e., maximum length and no punctuation), but the authors make the titles.
This was a great read. So I'm assuming there is no single greatest cause behind the current extinction rate.