Good Math, Bad Math

I’m jumping into this late, and it’s at least somewhat off topic for this
blog, although I’ll try to pull a few mathematical metaphors into it. But Michael
Egnor, that paragon of creationist stupidity, is back babbling about evolution and
bacterial antibiotic resistance. This is a subject which is very personal to me:
my father died almost a year ago – basically from an antibiotic resistant infection.

Since Mike at the Questionable Authority and Mike at Mike the Mad Biologist already ripped Egnor to shreds, I’m not going to bother with the whole thing; I’m just going to focus on one particular part. (And per standard practice,
I won’t link to a DI site, since they feel free to arbitrarily change or delete posts and comments without any notice.)

There is another sense in which Darwinism is used in the debate about antibiotic resistance.
Darwinists claim that ‘natural selection’– the observation in biology that survivors survive– is
indispensible to medical research on antibiotic resistance. Of course, this mundane tautology is of
no value to actual research (‘I didn’t make the breakthrough until I realized that the bacteria
that survived exposure to the antibiotic were the survivors…’). Biochemistry, microbiology,
molecular biology and pharmacology do the heavy lifting in antibiotic research. Evolutionary
biologists’ inference to ‘natural selection’ is highly superfluous to the actual work. The
inference to natural selection is a rhetorical device, not a meaningful scientific heuristic.

Yet, remarkably, many Darwinists seem not even to understand natural selection. Dr. Dardel, the
study’s author, posted this comment on Panda’s Thumb:

Actually, we did indeed use darwinian (sic) evolution within this work (something unusual in structural biology). In order to obtain an enzyme with increased stability (a critical point for structural studies), we used selective pressure to obtain mutants of the enzyme. We selected for bateria (sic) with increased aminiglycoside (sic) resistance, by plating them on antibiotic containing medium. It turned out that some bacteria evolved such stabler (sic) enzymes variants which made this whole study possible!

Dr. Dardel is both candid and mistaken. His comment that the use of Darwin’s theory is “unusual in structural biology” is obviously true, and refreshingly candid. He is, however, mistaken about the application of Darwin’s theory to his recent work. His assertion that “…we selected bacteria…by plating…” is artificial selection, not natural selection. Artificial selection is breeding, in this case microbial breeding. The principles of breeding date back thousands of years, and owe nothing to Darwin. In fact, Darwin claimed that non-teleological processes in nature could produce changes in populations just as teleological processes like breeding could. Even Darwin didn’t claim that his theory explained the outcome of intentional breeding. It’s astonishing that a modern professional scientist like Dr. Dardel doesn’t recognize the difference between artificial selection and natural selection.

First, I’ve got to comment on the absolutely astonishing arrogance on
display there. Dr. Egnor believes that he knows more about an experiment than the
scientist who performed it. I know that surgeons are, by reputation,
very arrogant people (you sort of have to be extremely sure of yourself
to be willing to cut a person open and fix their innards) – but this is truly beyond the pale.

But let’s get to the meat of it.

Egnor’s argument is interesting in its own pathetic way. On the surface, it’s a word game – “artificial selection” versus “natural selection”. But when you peel away the
superfluous stuff, it’s fundamentally an argument against the validity of experimental
science!

Here’s where I go mathy for a bit, because I think it’s the clearest way of making my point.
What scientists do is look at natural phenomena, and try to understand them. They use their
understanding to create a model of what they’re observing. What’s a model? The simplest way of
looking at it is as a function: Experiment(precondition)=Prediction – the scientist looks
at some phenomena, and tries to prune down the relevant factors to a minimum – and that minimum is
the input to the model function – the precondition of the experiment. Preconditions are
always incomplete: you always try to do the experiment under controlled conditions – which really means conditions that eliminate some of the potential inputs
in order to allow you to focus on a particular relationship or phenomenon.

The output is what their understanding of the phenomena predicts.
For example, Newton looked at how things move. What he found was a very simple relationship: if I
take an object with a certain mass, and I push it, it will accelerate, and the acceleration is
related to both the mass of the object, and how hard I pushed it. To state that for an experiment,
we could say that the relevant factors are mass (m), force (F), and acceleration (a). So for an
experiment, I could let a be the dependent variable; and then the model would say that a is a
function of m and F, so a(m,F)=F/m.

Then I do my experiment, and compare the result to what I predicted. Does that equation
really accurately tell me how much something accelerates under a given force?
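To make that concrete, here’s a minimal sketch in Python. The model is literally just a function; the “experiment” is comparing what the function predicts against what we measure. (The measurement numbers below are made up purely for illustration.)

```python
def predicted_acceleration(mass_kg, force_n):
    """The model: a(m, F) = F / m."""
    return force_n / mass_kg

# Hypothetical measurements: (mass in kg, applied force in N, measured a in m/s^2)
trials = [(2.0, 10.0, 4.9), (5.0, 10.0, 2.05), (2.0, 25.0, 12.4)]

for m, F, measured in trials:
    predicted = predicted_acceleration(m, F)
    error = abs(predicted - measured)
    print(f"m={m} kg, F={F} N: predicted {predicted:.2f}, measured {measured}, error {error:.2f}")
```

If the errors stay small across many trials under controlled conditions, we gain confidence in the model; if they don’t, the model is wrong or incomplete.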

All scientific experiments ultimately come down to that: given this set A of conditions, if I do B to it, then I predict a particular result: B(A). A is never a complete set of all the possible inputs, because everything is affected by everything else; so the input to my experiment is always incomplete, and the output is always approximate. But the heart of the experimental process is this equation: Experiment(Preconditions)=Prediction.

Egnor’s argument is that that equation is not valid. By Egnor’s reasoning, you cannot
do an experiment that tells you anything about the world, because your experiment is done under
controlled conditions. Controlled conditions are artificial, and different from the real world in
which natural phenomena occur. Therefore experiments are different from
natural phenomena: experiments are artificial phenomena, and you can draw no conclusions about natural phenomena from those experiments.

Of course, that’s rubbish. We draw conclusions from experiments all the time where the
experiment is performed under controlled conditions. The experiments do tell us useful things. But we do consider the context in which they’re done, to understand what we might miss. For an extreme example, we could do a motion experiment in space far from a gravity well, so that we could
observe motion without friction from contact with a surface or from air resistance. But we’d
understand that the results wouldn’t be exactly the same as what we see on earth, because
we’re in an accelerated reference frame, with lots of friction.

In the case of the experiment that Egnor did, what the scientists did was
start with nearly sterile conditions, and take non-resistant strains of bacteria,
and expose them to a particular family of antibiotics over time, to see how they
would react. The end result was that some of the bacteria developed
a particular form of highly stable enzymes. They wanted to get bacteria with
these stable enzymes. They did it not by selecting bacteria with stable enzymes, but
by selecting bacteria that were resistant to antibiotics, because they predicted
that bacteria with the stable enzymes would be more resistant to antibiotics. So
the selection process wasn’t even selecting for the desired result! It was
a classic use of the experimental model: Prediction=Experiment(input);
StableEnzymes=Experiment(bacteria+antibiotics).
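As a toy illustration of that kind of indirect selection – this is my own deliberately simplified sketch, not a model of the actual biochemistry – imagine a population where enzyme stability influences survival under antibiotic exposure. Selecting only on survival still enriches the stable-enzyme variants:

```python
import random

random.seed(1)

def survives(stability, dose):
    # Assumption for the sketch: a more stable enzyme gives a higher
    # chance of surviving a given antibiotic dose.
    return random.random() < stability / (stability + dose)

# Start with a population of bacteria with random enzyme stabilities.
population = [random.uniform(0.1, 1.0) for _ in range(10_000)]

for generation in range(5):
    # Select only on survival under antibiotics -- never directly on stability.
    survivors = [s for s in population if survives(s, dose=1.0)]
    # Survivors reproduce (with a little mutational noise) back to full size.
    population = [max(0.01, random.choice(survivors) + random.gauss(0, 0.02))
                  for _ in range(10_000)]

avg = sum(population) / len(population)
print(f"mean enzyme stability after selection: {avg:.2f}")
```

The mean stability climbs well above its starting value even though the selection step never looks at stability directly – which is exactly the structure of the experiment: predict that a trait confers resistance, select on resistance, and check whether the trait shows up.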

Egnor comes in after the fact, and says “the experiment isn’t valid, because
the selection was artificial”. That makes about as much sense as coming into
Newton’s experiment, and saying “That experiment about motion isn’t valid, because
you did it under controlled conditions; if it was just a mule pulling a cart up the street,
it would have been different, because the mule pulling the cart is natural,
but your experiment is artificial.”

The experiment predicted that natural selection – that is,
bacteria surviving to reproduce in a particular environment – would
produce a particular kind of enzyme that they wanted. They didn’t pick the
bacteria based on what kinds of enzymes they produced – they just looked at
the end result, where they had bacteria that could survive in the presence of
lots of antibiotics. And the result was that those bacteria – as predicted – evolved
to survive in the presence of antibiotics, and in some of them, that resistance
came from producing the desired enzyme.

Classic experimentation. Prediction=Experiment(Precondition). It worked for
Newton; it works for physicists; it works for chemists; and it works for biologists.
You can’t reject the results of the experimental process and call yourself a scientist. If Egnor wants to give up pretending to talk about science, and admit that he’s nothing but a dark-age mystic, that’s fine. But he pretends to be a scientist, while rejecting
the results of experimentation.

So, Dr. Egnor. Tell me. How does one do experiments in your universe? Is it even possible for biologists to do experiments of any kind? Or do
you believe that biology is fundamentally immune to experimentation and therefore
not really science at all?

Comments

  1. #1 Pierre
    March 10, 2008

    Mark,

Was your post truncated somehow? I understand that the Scienceblogs servers are slightly buggy at the moment… maybe that’s why?

  2. #2 _Arthur
    March 10, 2008

I think Egnor’s “argument”, such as it is, was based on that old creationist canard that the gene for antibiotic resistance was already present in ONE of the bacteria subjected to antibiotics, at the beginning of the experiment. That gene has ALWAYS been present in bacteria since bacteria were Created on Day 3. So not only was it artificial selection, but no mutation actually took place during the experiment.

  3. #3 Jon McKenzie
    March 10, 2008

    The distinction between artificial selection and natural selection isn’t even a very fine one in the first place. Combine that with Egnor’s complete misunderstanding of experimental science and the purpose and scope of models and we pretty much have a typical case of scientific amateurism. This guy simply isn’t qualified to be talking about this stuff. He clearly doesn’t get it.

  4. #4 Torbjörn Larsson, OM
    March 10, 2008

    Egnor is also flogging other creationist horses, among them the tautological one:

    survivors survive

    This takes so little effort to be revealed as fallacious that people struggling with creationist indoctrination may be helped by it. If, say, ‘falling objects fall’, does that mean gravitation is a “mundane tautology” “of no value”? Egnor may have made a mistake, not his first.

  5. #5 Jefrir
    March 10, 2008

    Personally, I’m intrigued by the references to “breeding” bacteria. Artificial selection v. natural selection may make a difference to some experiments involving organisms that reproduce sexually – mostly just because you get the required result rather quicker. In the case of asexual reproduction I cannot see how it can possibly be significant.

  6. #6 sparc
    March 11, 2008

    I am looking forward to see Egnor discussing Biologic Institute’s microbiologist Ann Gauger’s not too ID-creationism friendly results presented at the Wistar retrospective symposium:

    She gave what amounted to a second presentation, during which she discussed “leaky growth,” in microbial colonies at high densities, leading to horizontal transfer of genetic information, and announced that under such conditions she had actually found a novel variant that seemed to lead to enhanced colony growth. Gunther Wagner said, “So, a beneficial mutation happened right in your lab?” at which point the moderator halted questioning. We shuffled off for a coffee break with the admission hanging in the air that natural processes could not only produce new information, they could produce beneficial new information.

  7. #7 Chris Noble
    March 11, 2008

    Egnor’s pseudoargument is also an example of begging the question.

    He assumes that you can distinguish between “natural” and “artificial” selection.

    This presupposes that we are categorically different from other forms of life such as penicillin mould.

    What is the difference (apart from magnitude) between the selective pressure from “natural” growing penicillin mould and that from isolated penicillin?

    Egnor tries to slip ideas about intelligent design into the premises of his pseudoargument.

  8. #8 Joe Kilner
    March 11, 2008

    Natural selection is basically a tautology, which is why denying it as a mechanism is “not even wrong”.

    The argument (although many on both sides seem to miss this) is whether the mechanism of natural selection is sufficient to describe the variety of living organisms on earth. Anyone who thinks that it isn’t hasn’t spent any time suffering, I mean studying, invertebrate palaeontology (apologies to all fossil bivalve lovers out there).

    Proper appreciation of the massive chains of development, the enormous time scales and the huge variety of life both past and present is hard and boring, which is why a lot of people don’t really do it and which so many see ID etc. as a viable alternative – because they can’t imagine how you get from bacteria to horse. Seeing in detail even one section of the journey is enough for you to say “Yes, OK, I give up, the evidence is overwhelming, please don’t ask me to identify another brachiopod”. But without the detail to make it real you are just left with the limits of your imagination, which is actually a pretty restrictive limit.

  9. #9 trrll
    March 11, 2008

The distinction between natural selection and artificial selection as the terms are commonly used is quite clear. Artificial selection is where you evaluate each individual in a population, choose the ones that have the traits that you want, and breed them. You know exactly where artificial selection is going and what you will end up with at each stage, because you are consciously choosing the traits that you want to reinforce. In other words, in artificial selection, the breeder says, “I want more animals like these.”

    Natural selection is where the environment does the choosing. In a natural selection experiment (or a genetic algorithm computer experiment), the experimenter chooses an environment, places a population of creatures (or simulated creatures) in it, and then lets it go. She does not know in advance or attempt to control what specific traits will arise as the result of the experiment–rather, she creates an environment and challenges the population to survive and thrive in that environment. The population itself discovers, by trial and error, what traits are needed to maximize reproductive success in that environment. Thus, while there are almost never surprises in artificial selection (and those that occur are almost always undesired and unwelcome), natural selection–either in the wild or in simulation–is capable of discovery, of providing novel information on what is needed to thrive in a particular type of environment.

  10. #10 Torbjörn Larsson, OM
    March 11, 2008

    Ooh, nitpick time!

    they can’t imagine how you get from bacteria to horse

    That may indeed be impossible, as recent research suggests that Bacteria may be a relatively recent split from Eukarya, with Archaea as root (and Viruses resulting from an old Archaean split).

    I guess Bacteria may be seen as streamlined sports cars compared to the old utility trucks of Archaeans and Eukaryans.

    Thus, while there are almost never surprises in artificial selection (and those that occur are almost always undesired and unwelcome), natural selection–either in the wild or in simulation–is capable of discovery, of providing novel information on what is needed to thrive in a particular type of environment.

    If selection is providing information on the environment by narrowing probability distributions (Dawkins’ analogy), I think both natural and artificial selection may provide information by way of fixation. IMHO it is just that the artificial environment is rather weird, it is a (beauty) standard of reinforced traits. Or am I missing something?

  11. #11 Torbjörn Larsson, OM
    March 11, 2008

    I guess Bacteria may be seen as streamlined sports cars compared to the old utility trucks of Archaeans and Eukaryans.

    And Viruses a trailer from scrounged up and newly made parts.

    Wonder how the original cart looked like though, before the engine of the genome was installed.

  12. #12 sirhcton
    March 11, 2008

    “In the case of the experiment that Egnor did, . . . .”

I presume you meant “In the case of the experiment Dardel did, . . . .” My guess is that Egnor limits his experiments to surgery on his patients, rather than studying bacterial resistance to antibiotics, although he and his patients benefit from Dardel’s work.

  13. #13 trrll
    March 11, 2008

    If selection is providing information on the environment by narrowing probability distributions (Dawkins’ analogy), I think both natural and artificial selection may provide information by way of fixation. IMHO it is just that the artificial environment is rather weird, it is a (beauty) standard of reinforced traits. Or am I missing something?

Yes, in artificial selection, the breeder is trying to enhance or combine specific traits that the breeder has observed in the population and would like to isolate in a specific line because of esthetic or practical value. The breeder knows in advance what traits he is seeking. The sort of surprises that might occur in selective breeding are, for example, reinforcing of recessive traits that the breeder did not know about or intend to breed for and usually does not desire–hip dysplasia in certain dog breeds, for example. No novel information is gained by selecting small dogs to breed and ending up with a line of dogs that breed true for small size, although the breeder may sometimes gain information about recessive traits that were present but unrecognized in the population.

In natural selection, or experimental simulations of it, no decision is made as to which traits to breed for. Rather the population is offered a problem in the form of an environment. The experimenter is in general unaware of what traits will in fact provide success in the environment. The information gained is knowledge regarding which traits and combinations of traits are beneficial in that particular environment.

  14. #14 Jud
    March 12, 2008

    Egnor wrote: …highly superfluous…

    This left my rhetorical device warning flashing (how Freudian that Egnor criticizes others’ rhetorical devices in his next sentence). How exactly is something “highly” superfluous? Just being plain old inessential or irrelevant isn’t enough?

    Reminds me of the ad I got from my alma mater about a “richly dimensional” mantelpiece clock, which immediately made me think “3 isn’t enough?” (OK, 4 with time.)

  15. #15 Kev
    March 12, 2008

    Mark,
    I am a new reader of your blog, and am thoroughly enjoying it so far. You’ve made me seriously consider getting back into college to get my (any) degree.

    I just wanted to drop a note to say sorry for the loss of your father. Judging from the words you write, I can only imagine he was proud of you.

    Keep up the good work.

  16. #16 Monado, FCD
    March 12, 2008

    Egnor shows his egnorance once again. Dardel’s experiment might be called “directed natural selection” but it certainly isn’t artificial selection, as readers above have pointed out.

    “Survivors survive” might be tautological, but “Why do some survive when others die?” is not and that’s the insight of natural selection, just as “some horse will win the next race” is trivial but “which horse will win?” is not. If your chosen horse is a mudder or hates mud, the environment makes a difference – and if the horse were a wild one being chased by a lion, it could be a vital difference.

  17. #17 Torbjörn Larsson, OM
    March 12, 2008

    No novel information is gained by selecting small dogs to breed and ending up with a line of dogs that breed true for small size, although the breeder may sometimes gain information about recessive traits that were present but unrecognized in the population.

    Ah, you consider the breeder as the observer. Sure, one can do that, what constitutes information is relative to the observer. Well, I actually don’t know what strictly constitutes information in that particular case.

    I OTOH was considering the information that the population, or rather its genome, gain by selection, i.e. life itself as the observer. Selection constitutes a narrowing of probability distributions (large dogs traits are weeded out, and eventually small dogs traits fixated), and thus learning by bayesian inference as I understand it. Dawkins makes that analogy (and excuses for the length of the quote):

    Let me turn, finally, to another way of looking at whether the information content of genomes increases in evolution. We now switch from the broad sweep of evolutionary history to the minutiae of natural selection. Natural selection itself, when you think about it, is a narrowing down from a wide initial field of possible alternatives, to the narrower field of the alternatives actually chosen. Random genetic error (mutation), sexual recombination and migratory mixing, all provide a wide field of genetic variation: the available alternatives. Mutation is not an increase in true information content, rather the reverse, for mutation, in the Shannon analogy, contributes to increasing the prior uncertainty. But now we come to natural selection, which reduces the “prior uncertainty” and therefore, in Shannon’s sense, contributes information to the gene pool. In every generation, natural selection removes the less successful genes from the gene pool, so the remaining gene pool is a narrower subset. The narrowing is nonrandom, in the direction of improvement, where improvement is defined, in the Darwinian way, as improvement in fitness to survive and reproduce. Of course the total range of variation is topped up again in every generation by new mutation and other kinds of variation. But it still remains true that natural selection is a narrowing down from an initially wider field of possibilities, including mostly unsuccessful ones, to a narrower field of successful ones. This is analogous to the definition of information with which we began: information is what enables the narrowing down from prior uncertainty (the initial range of possibilities) to later certainty (the “successful” choice among the prior probabilities). According to this analogy, natural selection is by definition a process whereby information is fed into the gene pool of the next generation.

    If natural selection feeds information into gene pools, what is the information about? It is about how to survive. Strictly it is about how to survive and reproduce, in the conditions that prevailed when previous generations were alive. To the extent that present day conditions are different from ancestral conditions, the ancestral genetic advice will be wrong. In extreme cases, the species may then go extinct. To the extent that conditions for the present generation are not too different from conditions for past generations, the information fed into present-day genomes from past generations is helpful information. Information from the ancestral past can be seen as a manual for surviving in the present: a family bible of ancestral “advice” on how to survive today. We need only a little poetic licence to say that the information fed into modern genomes by natural selection is actually information about ancient environments in which ancestors survived. [Dawkins emphasis removed, mine added.]

    So by this analogy I believe there is no difference in potential information gained between natural and artificial selection, as considered from the viewpoint of the evolved life itself.

  18. #18 Torbjörn Larsson, OM
    March 12, 2008

    Oh, and perhaps the bayesian inference analogy can be made somewhat stricter as AFAIU (by way of a categorical blog reference, btw) in population genetics there are genetics models of (asexual) populations that looks exactly like bayesian inference on relative allele frequencies.

    A cruder analogy would then be that alleles are hypotheses about the environment that are kept or rejected at fixation.
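To make the analogy concrete, here is a toy sketch (the frequencies and fitness values are illustrative only): one generation of selection on allele frequencies has exactly the form of a Bayesian update, with the frequencies as the prior and relative fitness playing the role of the likelihood.

```python
# Allele frequencies as a prior, relative fitnesses as likelihoods:
# one generation of selection is one Bayesian update.
prior = {"A": 0.5, "B": 0.3, "C": 0.2}      # allele frequencies
fitness = {"A": 1.0, "B": 1.5, "C": 0.5}    # relative fitness ~ likelihood

# Replicator update: p'_i = p_i * w_i / (mean fitness), i.e. posterior
# proportional to prior times likelihood, normalized.
mean_w = sum(prior[a] * fitness[a] for a in prior)
posterior = {a: prior[a] * fitness[a] / mean_w for a in prior}

print(posterior)  # frequencies shift toward the fitter "hypotheses"
```

The fitter alleles gain frequency and the less fit ones lose it, just as hypotheses gain or lose posterior probability with each round of evidence.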

  19. #19 Theron
    March 13, 2008

Egnor says that natural selection is a tautology – “survivors survive.” Ummm, no. If we are going to simplify natural selection in this way, something closer to a true definition would be “survivors survive because they are different from non-survivors.” Ergo, Egnor, this is not a tautology.

    This guy is a surgeon? Uff.

  20. #20 trrll
    March 13, 2008

    Ah, you consider the breeder as the observer. Sure, one can do that, what constitutes information is relative to the observer. Well, I actually don’t know what strictly constitutes information in that particular case

Frankly, I think that Dawkins gets Shannon wrong. When Shannon says “Let’s estimate…the receiver’s ignorance or uncertainty before receiving the message, and then compare it with the receiver’s remaining ignorance after receiving the message. The quantity of ignorance-reduction is the information content,” Shannon is talking about the receiver’s ignorance regarding the content of the message, not the sum total of the receiver’s knowledge. Restricting consideration to the message, rather than the totality of the observer’s knowledge, is what enables Shannon to define information as solely a property of the message, essentially its entropy. But if we define information in this sense, then it cannot be correct that “Mutation is not an increase in true information content, rather the reverse, for mutation, in the Shannon analogy, contributes to increasing the prior uncertainty.” Prior to receiving the message, the receiver has no knowledge of its content, and the mutated sequence decreases his ignorance by a greater number of bits than the nonmutated sequence. This is because DNA sequences are not random; they contain redundancies reflecting the presence of recurring protein coding and DNA regulatory motifs that enhance fitness, and mutation decreases that redundancy–i.e. the number of bits required to specify that message is increased. So the receiver’s ignorance will be reduced to a greater degree by the mutated sequence.

    So in the sense of Shannon, mutation increases, and selection decreases, the information content of the genome.

The problem here is that a Shannon approach is not adequate to considering the problem of meaningful information, which must in some way take into account what the receiver already knows. In this broader context, then, I think that Dawkins has the right idea. But then one cannot get away from the necessity of considering who the “receiver” of the message is, and what that receiver does and does not already know.

    And going back to the question of natural selection (real or simulated) vs artificial selection, in the case of artificial selection the breeder already knows what traits he is seeking, so there is very little uncertainty of the outcome, and the results decrease that uncertainty only slightly. On the other hand, with natural selection, the observer or experimenter knows only the environment, and has a great deal of prior uncertainty as to what combination of traits will maximize (or “satisfize”) an organism’s fitness in that environment.

  21. #21 Jonathan Vos Post
    March 15, 2008

    In a prior thread I discussed what Claude Shannon told me about the genuine issue of “meaningful” information.

    Here, on the other hand, is an interesting new paper.


    Title: Shannon Information Capacity of Discrete Synapses
    Authors: Adam B. Barrett, M.C.W. van Rossum
    Comments: 5 pages, 2 figures
    Subjects: Neurons and Cognition (q-bio.NC)

    There is evidence that biological synapses have only a fixed number of discrete weight states. Memory storage with such synapses behaves quite differently from synapses with unbounded, continuous weights as old memories are automatically overwritten by new memories. We calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. For optimal learning rules, we investigate how information storage depends on the number of synapses, the number of synaptic states and the coding sparseness.

  22. #22 Anonymous
    March 16, 2008

    I just wanted to say I’m very sorry about the loss of your father.

    Eileen
    Dedicated Elementary Teacher Overseas (in the Middle East)
    elementaryteacher.wordpress.com

  23. #23 Skeptigirl
    March 18, 2008

    In addition to the issue of models used in the scientific process, I wonder if Michael Egnor would consider the bulk of our food crops before the advent of human controlled gene transfer as being naturally selected or artificially selected? Is a nest built by a bird natural while a home built by a human unnatural?

    It is interesting how some people draw a line through a continuum and state with certitude that the line is a clear dividing point. And it is also interesting that people are so quick to divide artificial from natural.

    As for the idea of using models in science, it is a pretty consistent theme for theist believing science deniers to have a poor understanding of the science they are criticizing. After all, if you knew what you were talking about you’d know that the mythical beliefs one tries to find evidence for came about before we improved our collective ability to observe and assess the evidence around us. It stands to reason that conclusions drawn from careful scientific method will be right more often than conclusions drawn from unsystematic observations which pervade the magical thinking mentality of theist believing science deniers. They know that Thor and Zeus are not throwing lightning bolts at people, but they haven’t figured out the big man in the sky isn’t either.

  24. #24 Torbjörn Larsson, OM
    March 18, 2008

    @ trrl:

    I’m sorry I missed this reply, because it is interesting.

    This is because DNA sequences are not random; they contains redundancies reflecting the presence of recurring protein coding and DNA regulatory motifs that enhance fitness, and mutation decreases that redundancy–i.e. the number of bits required to specify that message is increased.

I think Dawkins’ point is that the receiver (the replicating genome of the population) can only judge “the message” on the fitness of the traits at selection. More alleles means more uncertainty about the fitness until after selection. The message isn’t the protein structure in isolation, it is the frequencies of all alleles and how their fitness depends on the environment, the whole system. But the receiver has no prior or detailed knowledge of this, only the operation of selection will confer information.

On the level of DNA coding a mutant SNP can indeed mean an increased message length to specify a structurally different protein, but the structure of the protein isn’t the message, the trait fitness is. I have the feeling we are discussing fine-graining (protein coding) vs coarse-graining (fitness of traits) and individuals (protein structures) vs populations (genomes with allele frequencies).

    So in the sense of Shannon, mutation increases, and selection decreases, the information content of the genome.

Granting your model, I don’t understand how selection decreases information. Selection would tend to establish those redundancies you discussed.

    in the case of artificial selection the breeder already knows what traits he is seeking,

    Now you jump from regarding protein coding as the message to traits. Plus several different protein sequences can give the same traits, which means the breeder won’t know which protein “message” he seeks.

In conclusion, to me Dawkins’ analogy seems more coherent and less contradictory, and also in line with the gist of population genetics.

  25. #25 Jonathan Vos Post
    July 14, 2008

    This one is a nice model indeed:

    New submissions for Tue, 15 Jul 08

    [1] arXiv:0807.1943 [pdf, other]
    Title: Failure of antibiotic treatment in microbial populations
    Authors: Patrick De Leenheer, Nick Cogan
    Comments: 11 pages, 6 figures
    Subjects: Populations and Evolution (q-bio.PE); Other (q-bio.OT)

    The tolerance of bacterial populations to biocidal or antibiotic treatment has been well documented in both biofilm and planktonic settings. However, there is still very little known about the mechanisms that produce this tolerance. Evidence that small, non-mutant subpopulations of bacteria are not affected by antibiotic challenge has been accumulating and provides an attractive explanation for the failure of typical dosing protocols. Although a dosing challenge can kill all the susceptible bacteria, the remaining persister cells can serve as a source of population regrowth. We give a robust condition for the failure of a periodic dosing protocol for a general chemostat model, which supports the mathematical conclusions and simulations of an earlier, more specialized batch model. Our condition implies that the treatment protocol fails globally, in the sense that a mixed bacterial population will ultimately persist above a level that is independent of the initial composition of the population. We also give a sufficient condition for treatment success, at least for initial population compositions near the steady state of interest, corresponding to bacterial washout. Finally, we investigate how the speed at which the bacteria are wiped out depends on the duration of administration of the antibiotic. We find that this dependence is not necessarily monotone, implying that optimal dosing does not necessarily correspond to continuous administration of the antibiotic. Thus, genuine periodic protocols can be more advantageous in treating a wide variety of bacterial infections.

  26. #26 Jonathan Vos Post
    September 16, 2008

    What information is, and what randomness is, as opposed to the lies of Intelligent Design:

    New submissions for Wed, 17 Sep 08

    [36] arXiv:0809.2754 [ps, pdf, other]
    Title: Algorithmic information theory
    Authors: Peter D. Grunwald (CWI), Paul M.B. Vitanyi (CWI and Univ. Amsterdam)
    Comments: 37 pages, 2 figures, pdf, in: Philosophy of Information, P. Adriaans and J. van Benthem, Eds., A volume in Handbook of the philosophy of science, D. Gabbay, P. Thagard, and J. Woods, Eds., Elsevier, 2008
    Subjects: Information Theory (cs.IT); Learning (cs.LG); Statistics (math.ST)

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information’. We discuss the extent to which Kolmogorov’s and Shannon’s information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural’ (meaningful) and `random’ information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam’s razor in inductive inference. We end by discussing some of the philosophical implications of the theory.