How is a synthetic bacterium like a black swan?

Edge.org has invited comments on Craig Venter's synthetic bacterium from thinkers like Freeman Dyson, George Dyson, and our very own PZ Myers. Nassim Taleb is particularly pessimistic:

If I understand this well, to the creationists, this should be an insult to God; but, further, to the evolutionist, this is certainly an insult to evolution. And to the risk manager/probabilist, like myself & my peers, this is an insult to human Prudence, the beginning of the mother-of-all exposure to Black Swans. Let me explain.

Evolution (in complex systems) proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous and repetitive small, near-harmless mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: concave interventions, i.e., the achievement of small certain gains through exposure to massive stochastic mistakes (coming from the natural incompleteness in our understanding of systems). Our record in understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of informational uncertainty (even more than markets), producing tail risks of unheard proportions.

I have an immense respect for Craig Venter, whom I consider one of the smartest men who ever breathed, but, giving fallible humans such powers is similar to giving a small child a bunch of explosives.

Boom. Read the rest at edge.org.


I totally agree, in that every biotech GMO out there has yet to be placed in a REAL LIFE lab of thousands to hundreds of thousands of years to prove its success and ability to fit into an ecosystem. Humans only test for the immediacy of profit, copyright, and getting to market before competitors. Any possible fatalities or mutations of normal eco-bio cycles at the macro or micro level are never found until a major catastrophe is uncovered. This "artificial life" is a play on the concept: the DNA segments were manufactured, but the genome was STILL a known, billion-year-developed living pattern with a few tweaks of known bits from outside, and the cell engine or body was already formed and in place naturally. It's more like a Frankenstein than artificial life. Shock something into active reproduction and replication from a human plan designed from scratch, then give me and the PR people a call, Venter.

When I saw this a few days ago, it didn't seem very scary. It seemed so difficult to make a synthetic organism that would successfully compete with natural ones that the danger did not seem big compared to other risks, e.g. from ordinary genetic engineering. I've now changed my mind.

If you can produce a version of a successful organism which kills people slowly (not hard using ordinary genetic engineering), and which can outcompete the wild type (the hard bit), you could kill millions or even billions of people. Being able to get genomes in and out of computers makes the hard bit possible. Two possible methods are (1) to remove the junk DNA from the E. coli genome in a computer, and make a junkless version of E. coli, which would be more efficient than the wild type, and (2) to make a version of E. coli which uses a non-standard genetic code, which would not be susceptible to existing viruses.

Some very rough calculations for the first method. E. coli is about 3% DNA by dry weight, and has about 20% junk. Making DNA takes about as much energy as making protein and stuff, so the junkless version is about 0.6% more efficient. That translates into a selection coefficient of about 0.006, and population genetics says about 4,000 generations is enough for Junkless to take over the world. Generation time in the mammalian gut is about an hour, so that's about six months. I guess the generation time in soil and water is a lot longer, but a few years could be enough.
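Spelled out, the arithmetic looks something like this (a rough sketch only: the deterministic logistic-sweep formula, the starting frequency of about 10^-10, the 0.99 endpoint, and the one-generation-per-hour figure are assumptions, and the function names are just for illustration):

```python
import math

def sweep_generations(s, p0=1e-10, pf=0.99):
    """Generations for a variant with selective advantage s to rise from
    frequency p0 to pf, using the deterministic logistic-sweep formula
    t = ln( pf(1-p0) / (p0(1-pf)) ) / s."""
    return math.log((pf * (1 - p0)) / (p0 * (1 - pf))) / s

def junkless_sweep(junk_fraction=0.20, dna_dry_weight=0.03, gen_hours=1.0):
    """Rough time for a 'junkless' strain to displace the wild type."""
    s = dna_dry_weight * junk_fraction        # energy saved, read as a selection coefficient
    gens = sweep_generations(s)
    days = gens * gen_hours / 24.0
    return s, gens, days

s, gens, days = junkless_sweep()
print(f"s = {s:.3f}, ~{gens:,.0f} generations, ~{days/30:.0f} months")
# With 20% junk: s = 0.006, a few thousand generations, roughly six months.
```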

I am not a biologist. I hope I've got something wrong...

@Graham: "to remove the junk DNA from the E. coli genome in a computer, and make a junkless version of E. coli, which would be more efficient than the wild type": I think there's an error there. E. coli don't contain any junk DNA. The genome of bacteria is much, much smaller than that of humans and other eukaryotes, and they simply don't have the room to collect redundant DNA. All the genes in E. coli are necessary and honed for survival; if you start taking them out, the E. coli starts getting pretty pathetic pretty quickly.

That's what most lab strains of E. coli are: bacteria with bits of DNA taken out so that they aren't dangerous if they do get out of the lab (they just die).

I just re-read your post and saw you have figures for the 'junk' DNA; I'd be very interested to know where you got those figures from, as I hadn't read that about E. coli. And even if you can classify 20% of the genome as 'junk', I haven't seen anything that would convince me that a single strain of E. coli will 'take over', especially not one with 20% of its genome knocked out.

A non-standard genetic code would also require the production and synthesis of a non-standard ribosome and non-standard tRNAs to read that code. And synthesising new proteins is a *lot* harder and has not (I think; again, I may be wrong!) been done yet, although Venter's been trying to make a synthetic normal ribosome for a while.

Hope that's useful! I would really appreciate the link to the E. coli junk DNA information.

Bioinformatics by Arthur Lesk, p. 82, says of the E. coli genome: "Approximately 89% of the sequence codes for proteins and structural RNAs." When I did my calcs, I misremembered that as 79%. Whoops: please replace 20% with 10%. It doesn't make that much difference to my main point. I know there are other things besides proteins and structural RNAs, but I don't think they occupy much space in the genome, and Lesk also specifically mentions "noncoding repeat sequences" and "prophage remnants", which sound like junk to me. Presumably GenBank has the full details.
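Plugging the corrected figure into the sketch above (same assumptions as before), the advantage halves and the sweep takes roughly twice as long, on the order of a year rather than six months:

```python
s, gens, days = junkless_sweep(junk_fraction=0.10)   # corrected junk estimate
print(f"s = {s:.3f}, ~{gens:,.0f} generations, ~{days/30:.0f} months")
# Roughly s = 0.003 and ~9,000 generations, i.e. about a year in the gut.
```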

New junk is always being added via random insertions and viruses. The only way natural selection can get rid of junk is by making random deletions and selecting for those which hit junk rather than a gene. We can outperform natural selection in this task.

I don't think you need a new ribosome to use a non-standard genetic code, only new tRNAs and those enzymes with long names (the aminoacyl-tRNA synthetases) which stick the amino acids onto them. And you might be able to borrow those from organisms which have a slightly non-standard coding, and from mitochondria. I think you'd need more than that to confer good resistance to viruses, but I don't think non-standard coding is very far away.