Mathematical Study of Drug Interactions in the Evolution of Antibiotic Resistance

Orac has posted a really good description of a recent paper discussing how interactions between different antibiotics affect the evolution of antibiotic resistance in bacterial populations.

It's a mathematical analysis of experimental results generated by combining drugs which normally interact poorly with one another, and analyzing the distribution of resistance in the resulting populations. It turns out that under the right conditions, you can create a situation in which the selective pressure of the combination of drugs - which are less effective when combined - can select in favor of the non-resistant variant of the bacteria!

Check out Orac's post for details; I may also try to get a copy of the paper and post a more detailed look at the math later this week.


It's a very interesting experiment, with many potential implications. It means that with a multi-drug treatment combining drugs A and B, bacteria can't develop resistance to A because they become more vulnerable to B, and they can't develop resistance to B because they become more vulnerable to A. The only mutation path that can save the bacteria is to develop resistance to both A and B at the same time, which is much more difficult to achieve.

It's an achievement for us, but we haven't won the war against nature yet. Instead of fighting evolution on a single axis (the evolutionary path of the resistance), we have to manage multiple dimensions at the same time, as many dimensions as there are drugs we know. I think that's what you call "going meta" :-)

It means that with a multi-drug treatment combining drugs A and B, bacteria can't develop resistance to A because they become more vulnerable to B, and they can't develop resistance to B because they become more vulnerable to A.

I can't download a sensible copy of the paper, but that isn't what I get from Orac's description or the abstract. Orac claims they look at hyper-antagonistic (suppressive) combinations in toto.

The abstract notes that "Used in such a combination, a drug can render the combined treatment selective against the drug's own resistance allele."

And Orac quotes the paper as "Our simple geometrical approximation anticipates a region of competitive selection against resistance in such suppressive drug combinations when the targeted resistance mechanism works specifically (uniaxially) on one of the drugs. Indeed, for doxycycline-ciprofloxacin, [...]"

It is exciting research though.

By Torbjörn Larsson (not verified) on 09 Apr 2007 #permalink

I did a brief writeup of the paper for the Boston Globe on Monday. What Kishony and Chait found is this:

- Start with a strain of bacteria that is resistant to drug A.
- Treat the bacteria with a suppressive combination of drugs A and B -- that is, combined, the two drugs are less effective than either is alone, in this case because A suppresses B.
- The resistant bacteria essentially "ignore" drug A, which leaves drug B free to attack them, because it's no longer being suppressed by A.

That's a major generalization, of course, but it's essentially what Kishony and Chait observed using E. coli and two common drugs.
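
A minimal toy sketch of that logic in Python (the growth and inhibition numbers below are purely illustrative assumptions of mine, not values from the paper):

    # Toy sketch of the selection reversal (illustrative numbers only, not the
    # paper's model). "Suppressive" means the A+B combination inhibits growth
    # less than B alone would, because A blunts B's effect.

    def growth(base_rate, inhibition):
        """Net growth rate after drug inhibition (sub-lethal, so it stays positive)."""
        return base_rate * (1.0 - inhibition)

    BASE = 1.0               # drug-free growth rate (arbitrary units)
    INHIBIT_B_ALONE = 0.6    # hypothetical inhibition from B acting at full strength
    INHIBIT_A_PLUS_B = 0.3   # hypothetical inhibition from the suppressive A+B mix

    # Sensitive cells feel the (mutually suppressed, hence weaker) A+B combination.
    sensitive = growth(BASE, INHIBIT_A_PLUS_B)

    # Cells resistant to A effectively "ignore" A, so B acts on them at full strength.
    resistant_to_A = growth(BASE, INHIBIT_B_ALONE)

    print(f"sensitive growth:      {sensitive:.2f}")        # 0.70
    print(f"resistant-to-A growth: {resistant_to_A:.2f}")   # 0.40
    # With these toy numbers the sensitive strain outgrows the resistant one,
    # i.e. the combination selects *against* the resistance allele.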

A few caveats:

1) This deals only with sub-lethal concentrations of the treatments. In other words, they're not trying to kill the bacteria, although their experiment did result in some of the resistant bacteria being killed. What they're going for is relative selection for "sensitive" bacteria over resistant strains.

2) They haven't yet found a drug combo that's reciprocally suppressive, which is important, because in the scenario above, the treatment would only select against strains resistant to drug A. If, instead, they could find a combo where drug A suppresses B and B also suppresses A, it could be used against strains resistant to either A or B.

Regardless though, it's really interesting stuff!

The resistant bacteria essentially "ignore" drug A, which leaves drug B free to attack them, because it's no longer being suppressed by A.

Now I think I get it. Thanks Bruce and Michelle for the link to the paper and the exposition!

By Torbjörn Larsson (not verified) on 10 Apr 2007 #permalink

There's a strange reference which has some numbers in it. Citation first, then numbers.

arXiv:0704.1169
Title: Holographic bound and protein linguistics
Authors: Dirson Jian Li, Shengli Zhang
Comments: 4 pages, 4 figures. A trial application of holographic bound in life science

(Submitted on 10 Apr 2007)

Abstract: The holographic bound in physics constrains the complexity of life. The finite storage capability of information in the observable universe requires the protein linguistics in the evolution of life. We find that the evolution of genetic code determines the variance of amino acid frequencies and genomic GC content among species. The elegant linguistic mechanism is confirmed by the experimental observations based on all known entire proteomes.

I excuse them the repeated use of "Plank length" [sic] because their English is better than my Chinese.

The holographic bound gives an upper limit on the information storage of the observable universe of ~10^122 bits. Okay.

They give the number of states of the set of all possible 20-amino-acid proteins of maximum length n as 2^(20^n); the log of that gives ~20^n bits. Okay.

They then claim that the ~10^122-bit information capacity of the observable universe limits having all possible proteins to a maximum length of 94 (since 20^94 ~ 10^122). Okay. They say "Interestingly, the most frequent protein length for the life on our planet is about" that same 94. Is it?
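
That back-of-envelope is easy to check; here is a quick sketch using only the numbers quoted above:

    # Back-of-envelope check of the "94" claim: if it takes ~20**n bits to record
    # which proteins of length <= n exist, and the holographic bound caps the
    # observable universe at ~10**122 bits, the maximum n solves 20**n = 10**122.
    import math

    BOUND_BITS = 1e122

    n_max = math.log(BOUND_BITS) / math.log(20)   # n = 122 * ln(10) / ln(20)
    print(f"n_max ~ {n_max:.1f}")                 # ~93.8, i.e. roughly 94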

Then they go off on a tangent which I'll not pursue here.

Anyone have a comment on Good Math, and/or Bad math, and/or Biology, in the paper?

I probably need to read the paper more thoroughly to really understand it. But so far I think these physicists claim:

1. The holographic bound, not the local contingency of evolutionary history and function, puts a global constraint on protein length.

2. A "linguistic model" is proposed that incorporates evolutionary history (assumed genetic code chronology) and biological function (genetic code redundancy) to account for protein length.

I'm sure biologists would have a lot to say on this. There are a lot of studies on the evolution of proteins out there. Most often they show how proteins, their lengths, and their compositions cluster due to history and function.

The perspective is different from the authors', since the function considered is most often the proteins' own function and their evolution. Naively, that ought to decide the distribution of lengths.

About claim #1, I don't see any support beyond the coincidence in numbers. In fact, the known proteins don't tell us about function and phylogeny. Nor do the authors consider the fact that life forms evolve, so a static bound should not be a problem. (I.e., if a species goes extinct, it liberates information.)

The formulation about "the complexity of life" is unfortunate IMHO, because it makes it seem as if the authors are discussing an absolute constraint. But the discussion of the bound doesn't cover the possibility of differently working life elsewhere, or even further evolution of the genetic code. (The latter isn't expected, of course, just possible, since there are many possible amino acids.)

By Torbjörn Larsson (not verified) on 11 Apr 2007 #permalink

Two more notes, though.

First, it is probably more accurate to say that the model describes protein length rather than accounts for it.

Second, the genetic code is indeed evolving. I was thinking of eukaryotes, where there are probably too many constraints to make further development likely. But there are reportedly taxa of unicellular life that use variants of, and amino acid additions to, the common code.

By Torbjörn Larsson (not verified) on 11 Apr 2007 #permalink

I discussed "Holographic bound and protein linguistics", by Dirson Jian Li and Shengli Zhang, while at the Dodgers-Rockies game last night, with Dr. George Hockney (ex-FermiLab, now JPL).

He also had no problem with Bekenstein's holographic bound giving an upper limit on the information storage of the observable universe of ~10^122 bits.

He didn't disagree with the arithmetic: the number of states of possible 20-amino-acid proteins of maximum length n is 2^(20^n), the log of which gives ~20^n bits. We did agree, though, that "20" is somewhat arbitrary, being the count for human DNA; there are other amino acids, and why not go straight to the 64 codons that map to the amino acids?

But he was even more skeptical than Torbjörn Larsson about the length ~ 94 argument.

"How much information is in each protein molecule?" he asked in various ways. The Shannon entropy presupposes a probability distribution over an ensemble of possible proteins. If there is one of each up to n = 94, then they are equinumerous, but what is the basis for considering any given molecule to be the message, in the channel of possibles over that ensemble?

"Suppose I fill the observable universe with oxygen atoms," he asked, "then what is the information content of that?"

When the paper gets grammatical about DNA, making proteins non-equiprobable, the amount of information in each one decreases.
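
As a rough illustration, here is the naive arithmetic under the equiprobable assumption (my own toy numbers for that assumption, not anything from the paper or from Dr. Hockney):

    # Naive Shannon arithmetic, assuming every residue (or codon) is equally likely.
    import math

    bits_per_residue = math.log2(20)          # ~4.32 bits if all 20 amino acids are equally likely
    bits_per_codon = math.log2(64)            # 6 bits if you count codons instead

    n = 94
    bits_per_protein = n * bits_per_residue   # ~406 bits to single out one length-94 sequence
    print(f"{bits_per_residue:.2f} bits/residue, {bits_per_codon:.0f} bits/codon, "
          f"{bits_per_protein:.0f} bits for n = {n}")
    # Any non-uniform distribution (e.g. the paper's "grammar") only lowers these
    # entropies, which is the point about non-equiprobable proteins above.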

We leaned towards the "94" being a back-of-envelope coincidence. Also, the proteomic database used by the authors was biased and incomplete, and there are other problems.

I'll re-read again, and see if there is anything else.

Okay, how about this coincidence, reported here for the first time:

Mean Molecular Weight of the 20 Standard Human Amino Acids = 136.90019

1/fine structure constant ~ 137.03599976

ratio 136.90019/137.0359997 = 0.999008949.

Coincidence? You be the judge!

http://scienceworld.wolfram.com/physics/FineStructureConstant.html

Below, molecular weights from the HMDB (Human Metabolome Database):

89.09318 + 174.20100 + 132.11792 + 133.10268 +
121.15800 + 147.12926 + 146.14500 + 75.06660 +
155.15456 + 131.17291 + 131.17291 + 146.18756 +
149.21100 + 165.18913 + 115.13046 + 105.09258 +
119.11916 + 204.22501+ 181.18854 + 117.14634 =
2738.0038

The 20 Standard Human Amino Acids

HMDB        Name (alphabetically)   Formula      Molecular Weight
HMDB00161   L-Alanine               C3H7NO2       89.09318
HMDB00517   L-Arginine              C6H14N4O2    174.20100
HMDB00168   L-Asparagine            C4H8N2O3     132.11792
HMDB00191   L-Aspartic acid         C4H7NO4      133.10268
HMDB00574   L-Cysteine              C3H7NO2S     121.15800
HMDB00148   L-Glutamic acid         C5H9NO4      147.12926
HMDB00641   L-Glutamine             C5H10N2O3    146.14500
HMDB00123   Glycine                 C2H5NO2       75.06660
HMDB00177   L-Histidine             C6H9N3O2     155.15456
HMDB00172   L-Isoleucine            C6H13NO2     131.17291
HMDB00687   L-Leucine               C6H13NO2     131.17291
HMDB00182   L-Lysine                C6H14N2O2    146.18756
HMDB00696   L-Methionine            C5H11NO2S    149.21100
HMDB00159   L-Phenylalanine         C9H11NO2     165.18913
HMDB00162   L-Proline               C5H9NO2      115.13046
HMDB00187   L-Serine                C3H7NO3      105.09258
HMDB00167   L-Threonine             C4H9NO3      119.11916
HMDB00929   L-Tryptophan            C11H12N2O2   204.22501
HMDB00158   L-Tyrosine              C9H11NO3     181.18854
HMDB00883   L-Valine                C5H11NO2     117.14634

Total 2738.0038
Mean 136.90019 = 2738.0038/20
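
And for anyone who wants to check the sums, the same arithmetic in a few lines of Python, reusing the HMDB numbers quoted above:

    # Mean molecular weight of the 20 standard amino acids (HMDB values above)
    # compared against 1/alpha, the inverse fine structure constant.
    weights = [
        89.09318, 174.20100, 132.11792, 133.10268, 121.15800,
        147.12926, 146.14500, 75.06660, 155.15456, 131.17291,
        131.17291, 146.18756, 149.21100, 165.18913, 115.13046,
        105.09258, 119.11916, 204.22501, 181.18854, 117.14634,
    ]

    total = sum(weights)
    mean_weight = total / len(weights)
    inv_alpha = 137.03599976

    print(f"total = {total:.4f}")                    # 2738.0038
    print(f"mean  = {mean_weight:.5f}")              # 136.90019
    print(f"ratio = {mean_weight / inv_alpha:.6f}")  # ~0.999009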