Big Numbers: Bad Anti-Evolution Crap from anncoulter.com

A reader sent me a copy of an article posted to "chat.anncoulter.com". I can't see the original article; anncoulter.com is a subscriber-only site, and I'll be damned before I *register* with that site.

Fortunately, the reader sent me the entire article. It's another one of those stupid attempts by creationists to assemble some *really big* numbers in order to "prove" that evolution is impossible.

>One More Calculation
>
>The following is a calculation, based entirely on numbers provided by
>Darwinists themselves, of the number of small selective steps evolution would
>have to make to evolve a new species from a previously existing one. The
>argument appears in physicist Lee Spetner's book "Not By Chance."
>
>At the end of this post -- by "popular demand" -- I will post a bibliography of
>suggested reading on evolution and ID.
>
>**********************************************
>
>Problem: Calculate the chances of a new species emerging from an earlier one.
>
>What We Need to Know:
>
>(1) the chance of getting a mutation;

>(2) the fraction of those mutations that provide a selective advantage (because
>many mutations are likely either to be injurious or irrelevant to the
>organism);

>(3) the number of replications in each step of the chain of cumulative
>selection;

>(4) the number of those steps needed to achieve a new species.
>
>If we get the values for the above parameters, we can calculate the chance of
>evolving a new species through Darwinian means.

Fairly typical so far. Not *good*, mind you, but typical. Of course, it's already going wrong. But since the interesting stuff is a bit later, I won't waste my time on the intro :-)

Right after this is where this version of this argument turns particularly sad. The author doesn't just make the usual big-numbers argument; they recognize that the argument is weak, so they need to go through some rather elaborate setup in order to stack things to produce an even more unreasonably large phony number.

It's not just a big-numbers argument; it's a big-numbers *strawman* argument.

>Assumptions:
>
>(1) we will reckon the odds of evolving a new horse species from an earlier
>horse species.
>
>(2) we assume only random copying errors as the source of Darwinian variation.
>Any other source of variation -- transposition, e.g., -- is non-random and
>therefore NON-DARWINIAN.

This is a reasonable assumption, you see, because we're not arguing against *evolution*; we're arguing against the *strawman* "Darwinism", which arbitrarily excludes real live observed sources of variation because, while it might be something that really happens, and it might be part of real evolution, it's not part of what we're going to call "Darwinism".

Really, there are a lot of different sources of variation/mutation. At a minimum, there are point mutations, deletions (a section getting lost while copying), insertions (something getting inserted into a sequence during copying), transpositions (something getting moved), reversals (something getting flipped so it appears in the reverse order), fusions (things that were separate getting merged - e.g., chromosomes in humans vs. in chimps), and fissions (things that were a single unit getting split).

In fact, this restriction *a priori* makes horse evolution impossible, because the modern species of horses have *different numbers of chromosomes*. Since the only change he allows is point mutation, there is no way that his strawman Darwinism can do the job. Which, of course, is the point: he *wants* to make it impossible.

>(3) the average mutation rate for animals is 1 error every 10^10 replications
>(Darnell, 1986, "Molecular Cell Biology")

Nice number, shame he doesn't understand what it *means*. That's what happens when you don't bother to actually look at the *units*.

So, let's double-check the number, and discover the unit. Wikipedia reports the human mutation rate as 1 in 10^8 mutations *per nucleotide* per generation.

He's going to build his argument on 1 mutation in every 10^10 reproductions *of an animal*, when the rate is *per nucleotide*, *per cell generation*.

So what does that tell us if we're looking at horses? Well, according to a research proposal to sequence the domestic horse genome, it consists of 3x10^9 nucleotides. So if we go by Wikipedia's estimate of the mutation rate, we'd expect somewhere around 30 mutations per individual *in the fertilized egg cell*. Even using the numbers given by the author of this wretched piece, we'd still expect about 1 out of every 3 horses to carry at least one unique mutation.
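To make the arithmetic concrete, here's the back-of-the-envelope version in Python. The genome size and both rates are just the rough figures cited above, not precise measurements:

```python
genome_size = 3e9      # approximate nucleotides in the horse genome (from the proposal above)

rate_per_nt = 1e-8     # Wikipedia's rough per-nucleotide, per-generation mutation rate
rate_author = 1e-10    # the article's misread rate, applied per nucleotide for comparison

print(genome_size * rate_per_nt)   # ~30 new mutations per fertilized egg
print(genome_size * rate_author)   # ~0.3 -- about 1 in 3 foals with a unique mutation
```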

The fact is, pretty damned nearly every living thing on earth - each and every human being, every animal, every plant - contains some unique mutations, some unique variations in its genetic code. Even when you start with a really big number - like one error in every 10^10 copies - it adds up.

>(4) To be part of a typical evolutionary step, the mutation must: (a) have a
>positive selective value; (b) add a little information to the genome ((b) is a
>new insight from information theory. A new species would be distinguished from
>the old one by reason of new abilities or new characteristics. New
>characteristics come from novel organs or novel proteins that didn't exist in
>the older organism; novel proteins come from additions to the original genetic
>code. Additions to the genetic code represent new information in the genome).

I've ripped apart enough bullshit information-theory arguments, so I won't spend much time on that, other than to point out that *deletion* is as much of a mutation, with as much potential for advantage, as *addition*.

A mutation also does not need to have an immediate positive selective value. It just needs to *not* have negative value, and it can propagate through a subset of the population. *Eventually*, you'd usually (but not always! drift *is* an observed phenomenon) expect to see some selective value. But that doesn't mean that *at the moment the mutation occurs*, it must represent an *immediate* advantage for the individual.
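If you want to see drift in action, here's a toy simulation - a minimal Wright-Fisher-style sketch, with an arbitrary made-up population size - showing that a brand-new, selectively *neutral* mutation can still spread, and occasionally take over, by pure chance:

```python
import random

def neutral_fixation_rate(pop_size=100, trials=2000):
    """Follow a single new neutral allele until it fixes or is lost."""
    fixed = 0
    for _ in range(trials):
        count = 1  # one copy of the new allele, no selective advantage at all
        while 0 < count < pop_size:
            # Each next-generation individual inherits the allele with
            # probability equal to its current frequency -- pure drift.
            freq = count / pop_size
            count = sum(1 for _ in range(pop_size) if random.random() < freq)
        if count == pop_size:
            fixed += 1
    return fixed / trials

# Theory says a neutral allele fixes with probability 1/pop_size;
# this hovers right around 0.01 for a population of 100.
print(neutral_fixation_rate())
```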

>(5) We will also assume that the minimum mutation -- a point mutation -- is
>sufficient to cause (a) and (b). We don't know if this is in fact true. We don't
>know if real mutations that presumably offer positive selective value and small
>information increases can always be of minimum size. But we shall assume so
>because it not only makes the calculation possible, but it also makes the
>calculation consistently Darwinian. Darwinians assume that change occurs over
>time through the accumulation of small mutations. That's what we shall assume,
>as well.

Note the continued use of the strawman. We're not talking about evolution here; we're talking about *Darwinism* as defined by the author. Reality be damned; if it doesn't fit his Darwinism strawman, then it's not worth thinking about.

>Q: How many small, selective steps would we need to make a new species?
>
>A: Clearly, the smaller the steps, the more of them we would need. A very
>famous Darwinian, G. Ledyard Stebbins, estimated that to get to a new species
>from an older species would take about 500 steps (1966, "Processes of Organic
>Evolution").
>
>So we will accept the opinion of G. Ledyard Stebbins: It will take about 500
>steps to get a new species.

Gotta love the up-to-date references, eh? Considering how much the study of genetics has advanced in the last *40 years*, it would be nice to cite a book younger than *me*.

But hey, no biggie. 500 selective steps between speciation events? Sounds reasonable. That's 500 generations. Sure, we've seen speciation in less than 500 generations, but it seems like a reasonable guesstimate. (But do notice the continued strawman; he reiterates the "small steps" gibberish.)

>Q: How many births would there be in a typical small step of evolution?
>
>A: About 50 million births / evolutionary step. Here's why:
>
>George Gaylord Simpson, a well known paleontologist and an authority on horse
>evolution estimated that the whole of horse evolution took about 65 million
>years. He also estimated there were about 1.5 trillion births in the horse
>line. How many of these 1.5 trillion births could we say represented 1 step in
>evolution? Experts claim the modern horse went through 10-15 genera. If we say
>the horse line went through about 5 species / genus, then the horse line went
>through about 60 species (that's about 1 million years per species). That would
>make about 25 billion births / species. If we take 25 billion and divide it by
>the 500 steps per species transition, we get 50 million births / evolutionary
>step.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)

>50 million births/evolutionary step (derived from numbers by G. G. Simpson)

Here we see some really stupid mathematical gibberish. This is really pure doubletalk - it's an attempt to generate *another* large number to add into the mix. There's no purpose in it: we've *already* worked out the mutation rate and the number of mutations per speciation. This gibberish is an alternate formulation of essentially the same thing; a way of gauging how long it will take to go through a sequence of changes leading to speciation. So we're adding a redundant (and meaningless) factor in order to inflate the numbers.

>Q: What's the chance that a mutation in a particular nucleotide will occur and
>take over the population in one evolutionary step?
>
>A: The chance of a mutation in a specific nucleotide in one birth is 10^-10.
>Since there are 50 million births / evolutionary step, the chance of getting at
>least one mutation in the whole step is 50 million x 10^-10, or 1-in-200
>(1/200). For the sake of argument we can assume that there is an equal chance
>that the base will change to any one of the other three (not exactly true in
>the real world, but we can assume to make the calculation easier - you'll see
>that this assumption won't influence things so much in the final calculation);
>so the chance of getting a specific change in a specific nucleotide is 1/3rd of
>1/200 or 1-in-600 (1/600).
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)

>50 million births/evolutionary step (derived from numbers by G. G. Simpson)

>1/600 chance of a point mutation taking over the population in 1 evolutionary
>step (derived from numbers by Darnell in his standard reference book)

This is pure gibberish. It's so far away from being a valid model of things that it's laughable. But worse, again, it's redundant. Because we've already introduced a factor based on the mutation rate; and then we've introduced a factor which was an alternative formulation of the mutation rate; and now, we're introducing a *third* factor which is an even *worse* alternative formulation of the mutation rate.

>Q: What would the "selective value" have to be of each mutation?
>
>A: According to the population-genetics work of Sir Ronald Fisher, the chances
>of survival for a mutant is about 2 x (selective value).
>"Selective Value" is a number that is ASSIGNED by a researcher to a species in
>order to be able to quantify in some way its apparent fitness. Selective Value
>is the fraction by which its average number of surviving offspring exceeds that
>of the population norm. For example, a mutant whose average number of surviving
>offspring is 0.1% higher than the rest of the population would have a Selective
>Value = 0.1% (or 0.001). If the norm in the population were such that 1000
>offspring usually survived from the original non-mutated organism, 1001
>offspring would usually survive from the mutated one. Of course, in real life,
>we have no idea how many offspring will, IN FACT, survive any particular
>organism - which is the reason that Survival Value is not something that you go
>into the jungle and "measure." It's a special number that is ASSIGNED to a
>species; not MEASURED in it (like a species' average height, weight, etc.,
>which are objective attributes that, indeed, we can measure).
>
>Fisher's statistical work showed that a mutant with a Selective Value of 1% has
>a 2% chance of survival in a large population. A chance of 2-in-100 is that
>same as a chance of 1-in-50. If the Selective Value were 1/10th of that, or
>0.1%, the chance would be 1/10th of 2%, or about 0.2%, or 1-in-500. If the
>Selective Value were 1/100th of 1%, the chance of survival would be 1/100th of
>2%, or 0.02%, or 1-in-5000.
>
>We need a Selection Value for our calculation because it tells us what the
>chances are that a mutated species will survive. What number should we use? In
>the opinion of George Gaylord Simpson, a frequent value is 0.1%. So we shall
>use that number for our calculation. Remember, that's a 1-in-500 chance of
>survival.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)

>50 million births/evolutionary step (derived from numbers by G. G. Simpson)

>1/600 chance of a point mutation taking over the population in 1 evolutionary
>step (derived from numbers by Darnell in his standard reference book)

>1/500 chance that a mutant will survive (as per G. G. Simpson)

And, once again, *another* meaningless, and partially redundant factor added in.

Why meaningless? Because this isn't how selection works. He's using his Darwinist strawman again: everything must have *immediate* *measurable* survival advantage. He also implicitly assumes that mutation is *rare*; that is, a "mutant" has a 1-in-500 chance of seeing its mutated genes propagate and "take over" the population. That's not at all how things work. *Every* individual is a mutant. In reality, *every* *single* *individual* possesses some number of unique mutations. If they reproduce, and the mutation doesn't *reduce* the likelihood of its offspring's survival, the mutation will propagate through the generations to some portion of the population. The odds of a mutation propagating to some reasonable portion of the population over a number of generations is not 1 in 500. It's quite a lot better.
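You can even turn the article's own numbers against it. Using the ~30 de novo mutations per birth estimated earlier (an assumption built from the rough figures above) together with the author's own 50 million births per step:

```python
births_per_step = 50e6        # the article's own figure
mutations_per_birth = 30      # rough estimate derived earlier -- an assumption

# New variants entering the population during a single "evolutionary step":
print(f"{births_per_step * mutations_per_birth:.1e}")  # ~1.5e9
```

Selection isn't waiting around for one lucky 1-in-500 mutant; by the author's own accounting, it gets on the order of a billion and a half fresh variants to work with at every single step.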

Why partially redundant? Because this, once again, factors in something which is based on the rate of mutation propagating through the population. We've already included that twice; this is a *third* variation on that.

>Already, however, the numbers don't crunch all that well for evolution.
>
>Remember, probabilities multiply. So the probability, for example, that a point
>mutation will BOTH occur AND allow the mutant to survive is the product of the
>probabilities of each, or 1/600 x 1/500 = 1/300,000. Not an impossible number,
>to be sure, but it's not encouraging either ... and it's going to get a LOT
>worse. Why? Because...

**Bzzt. Bad math alert!**

No, these numbers *do not multiply*. Probabilities multiply *when they are independent*. These are *not* independent factors.
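A trivial example of why that matters (a generic illustration, not the article's specific factors): draw one card from a deck, and let A be "the card is a heart" and B be "the card is red". Since every heart is red, the events are about as dependent as they can get:

```python
from fractions import Fraction

p_a = Fraction(13, 52)   # P(heart)
p_b = Fraction(26, 52)   # P(red)

print(p_a * p_b)         # 1/8 -- what blind multiplication would claim
print(Fraction(13, 52))  # 1/4 -- the real P(A and B): every heart is red
```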

>V.
>
>Q. What are the chances that (a) a point mutation will occur, (b) it will add
>to the survival of the mutant, and (c) the last two steps will occur at EACH of
>the 500 steps required by Stebbins' statement that the number of evolutionary
>steps between one species and another species is 500?

See, this is where he's been going all along.

* He created the Darwinian strawman to allow him to create bizarre requirements.
* Then he added a ton of redundant factors.
* Then he combined probabilities as if they were independent when they weren't.
* And *now* he adds a requirement for simultaneity which has no basis in reality.

>A: The chances are:
>
>The product of 1/600 x 1/500 multiplied by itself 500 times (because it has to
>happen at EACH evolutionary step). Or,
>
>Chances of Evolutionary Step 1: 1/300,000 x

>Chances of Evolutionary Step 2: 1/300,000 x

>Chances of Evolution Step 3: 1/300,000 x ...

>. . . Chances of Evolution Step 500: 1/300,000

>
>Or,
>
>1/300,000^500

*Giggle*, *snort*. I seriously wonder if he actually believes this gibberish. But this is just silly. For the reasons mentioned above: this is taking the redundant factors that he already pushed into each step, inflating them by adding the simultaneity requirement, and then *exponentiating* them.

>This is approximately equal to:
>
>2.79 x 10^-2,739
>
>A number that is effectively zero.

As I've said before: no one who understands math *ever* uses the phrase *effectively zero* in a mathematical argument. There is no such thing as effectively zero.
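Ironically, the one piece of this you *can* check is the raw exponentiation, and even that is slightly off. The number underflows ordinary floating point, so you have to work in logarithms; a quick sketch:

```python
import math

# (1/300000)^500 is far below float range, so compute its log10 instead.
log10_p = 500 * math.log10(1 / 300000)
exponent = math.floor(log10_p)
mantissa = 10 ** (log10_p - exponent)

# Prints ~2.75e-2739: the exponent matches the quote, but the
# quoted mantissa of 2.79 is a little off.
print(f"{mantissa:.2f}e{exponent}")
```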

On a closing note, this entire thing, in addition to being both an elaborate strawman *and* a sloppy big numbers argument, is also an example of another kind of mathematical error, which I call a *retrospective error*. A retrospective error is when you take the outcome of a randomized process *after* it's done, treat it as the *only possible outcome*, and compute the probability of it happening.

A simple example of this is: shuffle a deck of cards. What are the odds of the particular ordering of cards that you got from the shuffle? 1/52! ≈ 1/(8 x 10^67). If you then ask "What was the probability of a shuffling of cards resulting in *this order*?", you get that answer: 1 in 8 x 10^67 - an incredibly unlikely event. But it *wasn't* an unlikely event; viewed from the proper perspective, *some* ordering had to happen: any result of the shuffling process would have the same probability - but *one* of them had to happen. So the odds of getting a result whose *specific* probability is 1 in 8 x 10^67 were actually 1 in 1.
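The shuffle number is trivial to check, if you're curious:

```python
import math

orderings = math.factorial(52)
print(f"{orderings:.3e}")    # ~8.066e67 possible orderings of a 52-card deck

# Every specific ordering has probability 1/52! -- yet every shuffle produces one.
print(1 / orderings)         # ~1.24e-68
```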

The entire argument that our idiot friend made is based on this kind of an error. It assumes a single unique path - a single chain of specific mutations happening in a specific order - and asks about the likelihood of that *single chain* leading to a *specific result*.

But nothing ever said that the primitive ancestors of the modern horse *had* to evolve into the modern horse. If they weren't to just go extinct, they would have to evolve into *something*; but demanding that the particular observed outcome of the process be the *only possibility* is simply wrong.


It's amazing how many of these are variations on a theme. You'd think they would attempt to do something novel.

As I've said before, anybody who uses Wikipedia as an end unto itself in scholarly research is living in a state of sin. Other than that little quibble (only one part in 10^8 or so), this post was great! (-:

Talk of Ann Coulter and probability reminds me of a blog comment I wrote a while back --- I'm a prolific blog commenter but too lazy to get my own --- entitled "A Statistical Approach to Deranged Creationist Liars".

Damn, I'm glad I saw this. I've been arguing against retrospective-error arguments with people over and over without having the correct terminology to identify and deal with them. It's a really common error you encounter in arguing with fundies over just about everything.

It's also interesting to note that an intelligent creator would have been subject to the probability of a single chain of events leading to a specific result, so if they buy these arguments but are forced to do their math correctly, the creator probability becomes far less likely than the natural selection probability.

Most of these "proofs" are based on the assumption that you need to perform 500 "steps" in a _particular_ sequential order to go from current species A to new species B.

In fact, a sizeable animal population, say the horse population of a continent, is a continuous breeding experiment, and harbors much more than 500 "new" genes at any moment of time. If 500 of those "new" (current) genes were concentrated in a single animal, it would be of a new species.

If one had the leisure to follow that horse population for 1000 years, and take 1 individual as a sample, it would possibly differ from a random horse of the last millennium by 500 genes, or more, or less. It would possibly be unable to interbreed with the other horsey, now 1000 years dead. The changes would probably not show up in fossils, since the bone structure would not have noticeably changed. The new horse sports a new mottled coat, can digest several more plants more efficiently, can whinny better, etc...

It is a nice thought experiment. Pity we have so few chances to compare live specimens captured 100 years or 1 million years apart.

The idea is that one animal population is constantly mutating and testing new genes for fitness *in parallel* (which is the most efficient way). Also, many genes that were in the process of being drifted out of the gene pool can be salvaged by a second mutation. Genes do not need to be monotonically increasing in fitness; I expect it is commonplace for successful genes to have started as an impairment, either because they only work in combination with other genes, or because they needed at least 2 cumulative mutations from their original gene to acquire increased usefulness.

I also ripped this person's arguments apart over their biological assumptions on the Coulter site.

This statement is especially ridiculous to a molecular microbiology grad student such as myself:
(2) we assume only random copying errors as the source of Darwinian variation. Any other source of variation -- transposition, e.g., -- is non-random and therefore NON-DARWINIAN.

I pointed out that transposition is in most cases a very random process. He argued back that we don't know how transposition works. To which I replied that we use transposons as a tool to mutate genes at random in organisms to see phenotypic changes and detect genes for functions we wish to study (such as virulence factors in pathogenic bacteria). If we make a large enough mutant library we could mutate every gene in the genome (but we wouldn't expect everything to grow). If this process was not random we could not use transposons for this purpose. I went on to provide a link to commercial kits that have standardized this process!!!!!
There are many other hilariously non-biological arguments on this string. Anyways, great job with setting this "calculation" straight!

Very tangential, but I find it interesting how comp sci folks refer to inversions as reversals; I think it stems from compsci terminology. It shouldn't bother me, but for some reason it does. You can usually figure out someone's educational background by whether they use the term "inversion" or "reversal". Whenever I read "reversal" I think, this person isn't a biologist, and I find myself looking for misunderstandings of biology in their writing. It's probably not fair, but pet peeves aren't very logical either.

Great article from Mark again!

RPM:
It's easy to understand what "reversal" of information (or code) means. But how do you "invert" information and its code?

Dixit your test works. I'm not a biologist :-)

As I've said before: no one who understands math ever uses the phrase effectively zero in a mathematical argument. There is no such thing as effectively zero.

Nonsense. I have heard that phrase many, many times in seminars, and have read it innumerable times in peer-reviewed articles and in text books. It has, in fact, a rather precise meaning: for whatever quantity is being described as "effectively zero" the implication is that you cannot perform an experiment that could distinguish between the presumed or calculated value and an actual value of zero.

If something, anything, does in fact have a probability of, say, 10^-500, then you will never see it occurring. And "effectively zero" is a well-accepted term in such an instance. If you think no one who understands math ever uses the phrase "effectively zero" in a mathematical argument, then you need to get out of the house more.

As I've said before: no one who understands math ever uses the phrase effectively zero in a mathematical argument. There is no such thing as effectively zero.

Statisticians do, and we...

Oh, I see what you mean.

Bob

Probabilities multiply when they are independent. These are not independent factors.

Mark, when you're looking for your next topic, I know I'd appreciate a "common errors discussing probability" series.

If something, anything, does in fact have a probability of, say, 10^-500, then you will never see it occurring. And "effectively zero" is a well-accepted term in such an instance.

"Effectively zero" may be a well-accepted term in an empirical argument, or even a statistical argument, but certainly not in a mathematical one.

The statement "you will never see it occurring" is not a mathematical argument.

Um...excuse me, but who the hell believes in intelligent design anymore? Because all that was thrown out of the window after all the Drosophila experiments, wasn't it? Only Bible-belt morons believe all this. You only need to look at, say, the fact that bacteria become resistant to drugs, to believe in evolution. Because "God" wouldn't want drug-resistant bacteria, would he?

Davis,

Of course it appears in mathematical arguments--unless you narrowly define "mathematical argument" to the point where it is meaningless--"effectively" begging the question by arguing that one requirement for a mathematical argument is that it doesn't use the phrase in question.

Are theoretical physics arguments not, at least in part, mathematical arguments? I cannot imagine anybody making the claim that they are not mathematical. Yet the phrase "effectively zero" is quite common in theoretical physics papers--some of which are indeed so mathematical that they are inventing new math to solve their equations. And as far as I know the phrase "effectively zero" causes no editor or reviewer to go apoplectic.

As a mathematician, I consider mathematical arguments to be those that are rigorous. Physicists certainly engage in mathematical arguments, but they also mix in statements that pure mathematicians would consider hand-waving (even if they are based on empirical observation).

That doesn't make it wrong to use such a statement, it just means you're not doing math when you do. At least, not unless someone can give a mathematically rigorous definition of "effectively zero."

Something I've been thinking about is: what would constitute a "reasonable" argument for a universe created by God? At the risk of being flamed, I'm genuinely curious what some would answer to this question.

In other words, suppose (just for a second) that God really did create the universe; would we be able to tell this by empirical investigation? I don't have an answer, I'm just wondering.

I went into a little more detail on this question on my blog, but I didn't propose an answer, because I don't have one. I just thought it was an interesting question.

So by "mathematical argument" you mean an agument acceptable to some vaguely defined "pure" mathematics.

Not Statistics, which often uses the phrase "effectively zero."

Not Probability, which often uses the phrase "effectively zero."

Not Theoretical Physics, which often uses the phrase "effectively zero."

Not Non-linear Analysis/Chaos Theory, which often uses the phrase "effectively zero."

Not Applied Mathematics, which often uses the phrase "effectively zero."

David:

There is a clear line between applied math and pure math. In pure math, there is no effectively zero. In *probability*, there *is no effectively zero*. Ask *anyone* who does probability if there's *any* level of probability where a finite but small probability is equivalent to a probability of zero.

Further - in applied math, we don't just throw around the phrase "effectively zero": we use real measures that are meaningful in the context. Physicists only say "effectively zero" *in the context of an experiment*, where the value of "effectively zero" is determined by the limit of the precision of the instruments being used in that experiment. They'll often use a measure like 3 sigmas, or 5 sigmas. But it's always clear that they're *not* saying that any finite quantity is really the same as zero; they're saying that *to the limit of their ability to measure*, it's equivalent to zero.

For almost any probability bound ε, no matter how extreme, it's easy to formulate a simple experiment that produces some result whose a priori probability was smaller than ε.

For example, Dembski's "universal probability bound" of 10^-150 is *larger* than the probability of the result of two consecutive shufflings of a tarot deck (each shuffle has 78! ≈ 10^115 equally likely outcomes).

Try working out the a priori probability of my personal genome, given the genomes of my parents. It's something considerably smaller than 1 in 2^30,000.
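Both of those claims are easy to check numerically. A quick sketch (assuming the standard 78-card tarot deck and Dembski's published 10^-150 bound):

```python
import math

log10_upb = -150                                           # Dembski's universal probability bound
log10_shuffle = -sum(math.log10(k) for k in range(1, 79))  # log10 of 1/78!

print(log10_shuffle)      # ~ -115: one tarot shuffle is still *above* the bound
print(2 * log10_shuffle)  # ~ -230: two consecutive shuffles fall well below it
```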

Phil:

The answer to your question depends on what you're really asking.

You could be asking "What would the universe need to look like for it to be *reasonable* to believe that there might be a god who created it?" In that case, I'd answer "exactly what we see". I think a reasonable person can believe that the universe was created by a deity.

On the other hand, you could be asking "What would the universe need to look like for reasonable people to accept a *proof* of the existence of god?" And that's a very different question. A *proof* of the existence of a deity would require some verifiable, repeatable direct intervention by the deity demonstrating its existence.

Sagan once suggested what I think would be a reasonable proof, which could be related to another post on this blog. Suppose we measure π to a great degree of precision in the actual space-time topology of our universe. Suppose that way back, millions of digits in, there was a difference between the theoretical value of π in a plane, and the measured value of π in the actual topology of the universe. And suppose that if you measured that difference, it turned out to be a coded message.

*That* would be pretty compelling.

Mark,

Ask *anyone* who does probability if there's *any* level of probability where a finite but small probability is equivalent to a probability of zero.

That has nothing whatsoever to do with what I stated. In fact, all you are saying is something I and anyone else would readily admit: that "effectively zero" does not mean the same thing as "zero" or "exactly zero" or "identically zero." Nobody said they were the same. In fact, the adjective "effectively" presupposes that it is not exactly zero.

The question is this: would a mathematically competent person use and/or accept the phrase "effectively zero" as meaningful under some circumstances in a mthematical context? The answer is demonstrably yes.

But it's always clear that they're *not* saying that any finite quantity is really the same as zero; they're saying that *to the limit of their ability to measure*, it's equivalent to zero.

Precisely--hence it is effectively zero. The probability of a person quantum mechanically tunneling through a brick wall is not zero, and when a physicist argues that it is "effectively zero" or "vanishingly small" or "infinitesimal" he is not stating that it is exactly zero, but rather that you'll never see it happen. Yet such a person clearly understands mathematics and the argument is, at least at some level, mathematical.

Thanks for the reply, Mark. I'd agree, my question was a pretty broad one... maybe not the best way to phrase such a thing.

Anyways, good answer, interesting.

Another dunking debunking.

quitter:
I am in my turn damn glad I saw your commentary. I was recently in a discussion where I got pointed to Jefferys et al's likewise Bayesian estimate ("creator probability" - note that frequency probability is about the ratio of events in a repeated experiment) on finetuning ( http://www.ncseweb.org/resources/articles/784_review_of_emthe_privilege… ).

It reaches a similar conclusion to yours, but not with such devastating numbers as Mark illustrates are possible.

David Heddle:
"The question is this: would a mathematically competent person use and/or accept the phrase "effectively zero" as meaningful under some circumstances in a mthematical context?"

You are raising a straw man we need to burn. The question was not about "mthematical [sic] context", but about "mathematical argument". Totally different meanings, see Mark's comments.

On creationistic probability arguments, let me cite myself from a recent comment here:
"To quote Wikipedia: "The idea that events with fantastically small, but positive probabilities, are effectively negligible[3]was discussed by the French mathematician Ãmile Borel primarily in the context of cosmology and statistical mechanics.[4] . However, there is no widely accepted scientific basis for claiming that certain positive values are universal cutoff points for effective negligibility of events. Borel, in particular, was careful to point out that negligibility was relative to a model of probability for a specific physical system.[5]" ( http://en.wikipedia.org/wiki/Universal_probability_bound )"

Dembski is the only one mentioning (but failing to use meaningfully) a universal probability bound.

By Torbjörn Larsson (not verified) on 14 Aug 2006 #permalink

If something, anything, does in fact have a probability of, say, 10^-500, then you will never see it occurring.

Shuffle two decks of cards together.

Deal the cards face up on the table.

Calculate the a priori probability of the particular arrangement that you see.

By Jon Fleming (not verified) on 14 Aug 2006 #permalink

Larsson,

I don't understand your point, at least not all of it. It seems to be, in its substance, that I committed a logical fallacy by using the term "mathematical context" instead of "Mathematical argument." So it appears that you want to defend Mark's comment on some split-the-hairs distinction--whereas I assumed he was making a substantive, general claim.

However, let's assume you are correct. I would then ask you to define "mathematical argument" so that I know what you are talking about.

I would think that your definition will have to include the general statement:

The probability is so small, it is effectively zero.

As a form (right or wrong) of a mathematical argument. Because, after all, Mark applied his criticism to just that sort of analysis. If that is not a mathematical argument, then Mark's criticism is meaningless--it would mean he was arguing that something that was not a mathematical argument didn't meet the standards that are applied only to mathematical arguments.

Now, if you agree that the general argument of the type: The probability is so small, it is effectively zero is mathematical, I can provide any number of examples where scientists use exactly those words--and so provide proof that people who know about mathematics use "effectively zero" in a mathematical argument.

If The probability is so small, it is effectively zero is not a mathematical argument, then what is the purpose of Mark's criticism?

Also, have you thoroughly studied the Ikeda and Jefferys paper? (To me, it is an example of how anything can be demonstrated using Bayes' theorem and suitable, obscured, assumptions). In particular, I was wondering if you agree with their conclusion in their famous (but as far as I know unpublished in peer-reviewed literature) paper found here:

web.archive.org/web/20030821010714/quasar.as.utexas.edu/anthropic.html

namely, as they wrote:

In this article we will show that this [fine tuning] argument is wrong. Not only is it wrong, but in fact we will show that the observation that the universe is "fine-tuned" in this sense can only count against a supernatural origin of the universe. And we shall furthermore show that with certain theologies suggested by deities that are both inscrutable and very powerful, the more "finely-tuned" the universe is, the more a supernatural origin of the universe is undermined.

And, if so, do you view each new piece of fine-tuning evidence as an additional nail in the Cosmological ID coffin? That is, do you welcome fine-tuning as a way to rule out the supernatural?

Jon,

The correct way to phrase the problem is this: I can easily calculate the probability for you to shuffle the deck and deal yourself, in order, a 7c, 6s, 3h, Kc, Jd, 9d, Ac, and 5h. That number is so small compared to the number and speed at which a human can deal himself eight cards, reshuffle, and deal again that it is effectively zero. That is, you'll never shuffle a deck and deal yourself, in order, a 7c, 6s, 3h, Kc, Jd, 9d, Ac, and 5h.

David Heddle,

There's no need to argue about it -- just show a mathematical argument that makes use of the concept "effectively zero". A counter-example disproves the assertion.

By Canuckistani (not verified) on 14 Aug 2006 #permalink

David Heddle:
"So it appears that you want to defend Mark's comment on some split-the-hairs distinction--whereas I assumed he was making a substantive, general claim."

As usual you accuse your opponents of the very same thing that you do, and try to lead the discussion in circles.

A "mathematical argument" is _pure_ math, math *by itself and for itself*.

A "mathematical context" is *using* _applied_ math *in the context of an experiment*. That includes the probability argument that you are so fond of if you only read that Mark has been telling you already:
"Physicists only say "effectively zero" *in the context of an experiment*, where the value of "effectively zero" is determined by the limit of the precision of the instruments being used in that experiment."

Simple as that.

"To me, it is an example of how anything can be demonstrated using Bayes' theorem and suitable, obscured, assumptions."

As you can read above I'm somewhat sympathetic to that.

I said on http://scienceblogs.com/goodmath/2006/07/yet_another_crappy_bayesian_ar… :

"The idea of incompatibility with physics is not mine. I took it from a string physicist who is an active blogger:

"It is often said that there are two basic interpretations of probability: frequency probability (the ratio of events in a repeated experiment) and Bayesian probability (the amount of belief that a statement is correct). ...

While the text above makes it clear that I only consider the frequentist probabilities to be a subject of the scientific method including all of its sub-methods, it is equally clear that perfect enough theories may allow us to predict the probabilities whose values cannot be measured too accurately (or cannot be measured at all) by experiments."

But I also said:
"And contrary to my referenced physicist I think constraining sparse events can be useful. For example, in the SETI Drake equation estimates constrain the expected number of communicative civilisations and types of likely systems, which guides search."

Bayesian methods are used with good results in signal processing/filtering and parsimony analysis of models, and are claimed to be popular in decision theory. I note that Ikeda and Jefferys are in fact comparing two models as in a parsimony analysis, so I believe that there is a point in this case, as in the SETI case.

"Also, have you thoroughly studied the Ikeda and Jefferys paper?"
Now I have browsed it. :-)

"And, if so, do you view each new piece of fine-tuning evidence as an additional nail in the Cosmological ID coffin?"

No. And thank you for making me read the paper, it is interesting.

They say several things:
1) WAP implies a naturalistic universe.

"The inequality P(N|F&L)>=P(N|L) shows that the WAP supports (or at least does not undermine) the hypothesis that the universe is governed by naturalistic law."

2) This is independent on finetuning.

"If we remove the restriction that the inequalities be strict, then the only case where both inequalities can be true is if

P(N|~F&L)=P(N|L) and P(N|F&L)=P(N|L).

In other words, the only case where both can be true is if the information that the universe is "life-friendly" has no effect on the probability that it is naturalistic (given the existence of life); and this can only be the case if neither inequality is strict."

3) Finetuning supports chaotic inflation multiverse cosmology! (Which is a natural generalisation of our Lambda-CDM observed cosmology.)

"We have shown that the WAP tends to support N, and cannot undermine it. This observation is independent of whether P(F|N) is small or large, ... We believe that the real import of observing that P(F|N) is small (if indeed that is true) would be to strengthen Vilenkin/Linde/Smolin-type hypotheses that multiple universes with varying physical constants may exist."

4) A religiously empty hypothesis does best of the supernatural ideas.

"Put another way, to assume that P(F|~N&L)=1 is to concede that life in the world actually arose by the operation of an agent that is observationally indistinguishable from naturalistic law, insofar as the observation F is concerned."

(Note: Ockham razes this as a scientific hypothesis. But that analysis is outside the paper's models.)

5) All other supernatural hypotheses do worse, since they don't predict outcomes as sharply as those constrained by naturalistic law.

"The point is that N predicts outcomes much more sharply and narrowly than does ~N; it is, in Popperian language, more easily falsifiable than is ~N."

6) Finetuning analysis implies low probability of a particular god in a multigods scenario.

"In particular, with the "fine-tuning" argument in mind, we would have to specify P(F|Di&L) for every i (probably an infinite set of deities). ... In general, each of the individual prior probabilities P(Di|L) would be very small, since there are so many possible deities."

So it is each new piece of religious "evidence" (each new god-theory) that is an additional nail in the Cosmological ID coffin!!!

By Torbjörn Larsson (not verified) on 14 Aug 2006 #permalink

I just tried dealing out a few hands to see what I could get, and after a few tries I got 7c, 6s, 3h, Kc, Jd, 9d, Ac, and 4h [NOT the 5h], close but no luck. In what game is this a good hand?

A "mathematical argument" is _pure_ math, math *by itself and for itself*.

A "mathematical context" is *using* _applied_ math *in the context of an experiment*.

This is a much better statement of the distinction I was trying to make earlier.

Davis,

So is the probability analysis Mark is criticizing a "pure math argument", or not? And if not, why is he criticizing it for the offensive phrase, and if so, I can find many cases where people use the same phrase "effectively zero" in a non-controversial topic without having their math competence questioned.

It is the same question that Larsson side-stepped.

601,

Keep trying. You almost have a Royal Fizbin.

So is the probability analysis Mark is criticizing a "pure math argument", or not?

To me it looks more like a pure garbage argument, simply because (a) all of the earlier errors moot this argument, and (b) there's absolutely no justification given for claiming that this probability is "effectively zero."

To be honest, I don't really care if this is or is not intended to be a pure math argument anyway (I suspect the originator wouldn't know the difference). The author made it abundantly clear he/she is fairly ignorant of math, and used the term "effectively zero" in a clueless fashion.

Can someone else use the term in a clue-ful manner? Sure. Beyond that, I don't really consider this an especially interesting issue to hash out, as it doesn't really add anything substantive to the discussion, and isn't really about math per se.

David:

"It is the same question that Larsson side-stepped."

And so the circularity of your argumentation continues as per your usual procedure.

To write "The probability is so small, it is effectively zero" is not to do a mathematical argument.

It is to use applied math, and here wrongly, in a biological context.

Now I have stepped enough in this I hope.

By Torbjörn Larsson (not verified) on 15 Aug 2006 #permalink

Bah. We're letting a strawman be set up. >_<

Heddle - Let's take the particular hand you mentioned. It is 8 cards long, which gives us an a priori probability of 3.3e-14. If I've done my math rearranging correctly, the number of times you have to repeat the action before you have a >50% chance of getting the particular outcome is log(.5)/log(1-x), where x is the a priori probability. Plugging this in, we get about 1.26e21 trials, or about 2.4 quadrillion years at my once-a-minute rate. Obviously, this is a very long time. If every person on earth was trying, though, it would only take about 400 thousand years. If a single computer were to try, it would take much, much less. My computer can generate a billion numbers between 0 and 51 in about two minutes with garbage collecting, and that's on a low-end rig supporting two servers. A good computer should be able to throw out all that number of trials in a few days at most. If every computer on earth were to work on it, it would only take a few minutes at most.
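Here's that calculation as a quick Python sketch. For what it's worth, it comes out to roughly 2x10^13 trials for the 50% mark rather than 1.26e21 - about 40 million years of once-a-minute dealing - so the figures above may be off by some orders of magnitude, but the conclusion only changes in degree:

```python
import math

# Probability of dealing 8 specific cards in a specific order
# from a single well-shuffled 52-card deck.
p = 1 / math.perm(52, 8)
print(f"p = {p:.2e}")                  # ~3.3e-14, matching the figure above

# Trials needed before the chance of having seen that exact deal exceeds 50%.
n = math.log(0.5) / math.log1p(-p)
print(f"n = {n:.2e}")                  # ~2.1e13 trials
```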

So, yeah, 'effectively zero' does have meaning in some contexts. If I'm trying to achieve a particular 8-card hand by shuffling and dealing by myself, the chance of me succeeding is effectively zero in the time that I can spend. With enough resources, though, that 'effectively zero' becomes 'effectively 1'. This is what Mark is meaning when he says that 'effectively zero' has no meaning as it is being used. If it is greater than zero, then it *can* be achieved given enough time and resources. Calling it impossible is a lie, simple as that, unless you give a specific context. Which, of course, they don't, because combining billions of years with most of the surface of the planet earth tends to give you a lot of trials to work with, sort of like combining all the computers of the world did for the card shuffling problem.

Xanthir,

I did call it "effectively zero" in the context of a human doing the shuffling. I wrote:

That number is so small compared to the number and speed at which a human can deal himself eight cards, reshuffle, and deal again that it is effectively zero. That is, you'll never shuffle a deck and deal yourself, in order, a 7c, 6s, 3h, Kc, Jd, 9d, Ac, and 5h.

The term "effectively zero" is perfectly acceptable here--and yes it does not mean "identically zero"-- nor does it show that In(or anyone else who uses it in a similar manner) am a mathematical flunky.

Try to resist the temptation of arguing "logical fallacy!"

Yep. And I agree with you, which is why I said this in my post:
"So, yeah, 'effectively zero' does have meaning in some contexts. If I'm trying to achieve a particular 8-card hand by shuffling and dealing by myself, the chance of me succeeding is effectively zero in the time that I can spend."
In the appropriate context, effectively zero is meaningful. It's not impossible, but you'd be a fool to wait for it. If it *did* happen to come up, though, it wouldn't be a miracle, just a highly improbable event occurring.

My point was that you *can't* use the term 'effectively zero' without context. In the context of the world's computers all operating on the problem, it takes only a few moments, on average, to find that particular hand of cards. However, the people who like using the Big Number Argument are quite fond of omitting the context, and pretending that it's happening one by one, with each trial taking a noticeable amount of time. *This* is the dishonesty I'm talking about.

And, of course, all the other errors that they enjoy committing, such as assuming the current state of affairs is the only possible one, assuming independence of factors, etc.

Xanthir,

I forgot what the original probability was that launched this thread, and I don't feel like looking it up--because I don't pay much attention to those probability chains. But if it was something like it usually is, something like 10^-200, and if they were referring to abiogenesis--meaning there was ~billion years for it to happen (which is generous, but whether it was 100 million years or 10 billion years is not relevant, given such a number), then it is sensible to say "the chance is effectively zero." What you should attack is the probability calculation, not the "effectively zero" comment. Because if the probability of abiogenesis really is that small, then it never would have happened without supernatural intervention.

Or, to put it differently, no scientist could accept as fundamental a theory that argues that something that improbable, in the context of the finite history of the earth, really happened.

Hmm. I did some calculations, and while they may be off by several orders of magnitude, we're working with orders of magnitude here so that doesn't change it much. Using that, I get around 10^50 operations doable by all the appropriate molecules on the surface of the earth, over a billion years, with one second average time between operations. Since a probability of 10^-14 required 10^21 trials on average, I assume that 10^-200 will require a boggling amount more - more than my calculations make available.

So, under the understanding that all of my calculations were based off of numbers I pulled off of the internet and that I am not in any way well-versed in the conditions of young earth or abiogenetic theories, I concede your point. Given that probability, abiogenesis would be at least as unlikely as me drawing a particular hand of cards. Again, it's *not* impossible, and if it happened it would *not* require a supernatural explanation (any more than a lottery winner does), but any theory attempting to explain it must answer certain questions about its plausibility.

Also, remember, we're talking abiogenesis here, not evolution. The actual argument used in the post was against evolution - abiogenesis is a whole different beast. Evolution assumes that we've already got self-reproducing individuals. Even if we assume that theogenesis is true, it doesn't affect evolution one bit.

Given this, it is indeed more useful to attack the probability itself, since there are such massive and glaring errors in the calculation. As mentioned before, the largest error is, simply, the fact that we're using an a priori probability. Combine that with false independence, and we've probably knocked the *exponent* down by some orders of magnitude.

Others may feel free to correct me.

But if it was something like it usually is, something like 10^-200, and if they were referring to abiogenesis--meaning there was ~billion years for it to happen (which is generous, but whether it was 100 million years or 10 billion years is not relevant, given such a number), then it is sensible to say "the chance is effectively zero."

No, it's not. In the shuffling example it was sensible given the practical context that someone was about to start hand-shuffling cards and try for that particular hand. You're essentially telling them it's not worth their effort to do so. If human shuffling powers were greater, this might change. "Effectively zero" has nothing to do with the event itself and everything to do with whether and how one tries to cause it.

So what meaning can the phrase have in reference to abiogenesis? That if a human tried to create life by constructing a new proto-Earth every few seconds and fast-forwarding its natural evolution, they probably wouldn't succeed before dying of old age? So what?

Or, to put it differently, no scientist could accept as fundamental a theory that argues that something that improbable, in the context of the finite history of the earth, really happened.

Nonsense. Scientists accept far more improbable events, from poker games to polonium halo patterns to hurricanes. The a priori probability of an event doesn't determine whether we decide it happened.

By Anton Mates (not verified) on 16 Aug 2006 #permalink

In case the spam filter put the last copy in a holding pattern:

But if it was something like it usually is, something like 10^-200, and if they were referring to abiogenesis--meaning there was ~billion years for it to happen (which is generous, but whether it was 100 million years or 10 billion years is not relevant, given such a number), then it is sensible to say "the chance is effectively zero."

No, it's not. As Mark already explained, it's mathematically meaningless. It can have some pragmatic meaning depending on context: your physics example, where "effectively zero" means "You can't distinguish it from zero with your current measuring device," or the random-shuffle example, where it means "If you're going to try to randomly shuffle cards until you get this hand, you'll probably die of old age first so don't bother." It's not about the event/system in question at all; it's about you.

So what non-mathematical meaning attaches to "effectively zero" when talking about abiogenesis? "If you're trying to create life by duplicating the ancient Earth every few seconds and fast-forwarding its history, don't bother?"

Or, to put it differently, no scientist could accept as fundamental a theory that argues that something that improbable, in the context of the finite history of the earth, really happened.

Nonsense. Scientists accept theories involving all sorts of highly improbable events. Unless hurricanes, radioactivity, and poker games are off-limits to science?

By Anton Mates (not verified) on 17 Aug 2006 #permalink

It is fascinating that he should pick equines as the test species, since horses and donkeys illustrate evolution very well: the two species are similar enough that they can interbreed, yet dissimilar enough that the progeny are almost universally sterile (I seem to remember there being 3 cases of fertile mules). However, if a Jack (male donkey) breeds a mare (female horse), the result is a mule, and breeding success is about 90% or so. If a stud (male horse) breeds a Jenny (female donkey), then the result is a hinny, but the breeding success is something along the lines of 18%.
It is interesting that these two lines have almost diverged to the point where crossbreeding may one day not be possible (foxes and wolves, anyone?) but have already shown enough drift that the progeny are sterile hybrids.
So it appears the erstwhile mathematician has proven beyond a doubt that either donkeys, horses, or mules don't exist. (Or maybe it's zonkeys or zorses that don't exist, which is another story.)

Man, I do love reading your contributions to the Skeptics' Circle, but I need a day's rest afterwards! Math is NOT my strong suit...

Have to love the observation of a straw man argument in the post, followed by debate of what may be a straw man argument in the comments!