Deficiencies in modern evolutionary theory

This is a post originally made on the old Pharyngula website; I'll be reposting some of these now and then to bring them aboard the new site.

There are a number of reasons why the current theory of evolution should be regarded as incomplete. The central one is that while "nothing in biology makes sense except in the light of evolution", some important disciplines within biology, development and physiology, have only been weakly integrated into the theory.

Raff (1996) in his book The Shape of Life gives some of the historical reasons for the divorce of evolution and development. One is that embryologists were badly burned in the late 19th century by Haeckel, who led them along the long and unproductive detour of the false biogenetic "law". That was a negative reaction; there was also a positive stimulus, the incentive of Roux's new Entwicklungsmechanik, a model of experimental study of development that focused exclusively on immediate and proximate causes and effects. Embryology had a Golden Age of experimentation that discouraged any speculation about ultimate causes that just happened to coincide with the time that the Neo-Darwinian Synthesis was being formulated. Another unfortunate instance of focusing at cross-purposes was that the Neo-Darwinian Synthesis was fueled by the incorporation of genetics into evolutionary biology...and at the time, developmental biologists had only the vaguest ideas about how the phenomena they were studying were connected to the genome. It was going to be another 50 years before developmental biology fully embraced genetics.

The evolutionary biologists dismissed the embryologists as irrelevant to the field; furthermore, one of the rare embryologists who tried to address evolutionary concerns, Richard Goldschmidt, was derided as little more than a crackpot. It's an attitude that persists today. Of course, the problem is mutual: Raff mentions how often he sees talks in developmental genetics that end with a single slide to discuss evolutionary implications, which usually consist of nothing but a sequence comparison between a couple of species. Evolution is richer than that, just as development is much more complex and sophisticated than the irrelevant pigeonhole into which it is squeezed.

In her book, Developmental Plasticity and Evolution, West-Eberhard (2003) titles her first chapter "Gaps and Inconsistencies in Modern Evolutionary Thought". It summarizes the case she makes in the rest of her 700-page book in 20 pages; I'll summarize her summary here, and do it even less justice.

She lists six general problems in evolutionary biology that could be corrected with a better assimilation of modern developmental biology.

  1. The problem of unimodal adaptation. You can see this in any textbook of population genetics: the effect of selection is to impose a gradual shift in the mode of a pattern of continuous variation. Stabilizing selection chops off both tails of the distribution, directional selection works against one or the other tail, and disruptive selection favors the extremes. This is a useful, productive simplification, but it ignores too much. Organisms are capable of changing their specializations physiologically, in the form of different morphologies or functional states; developmentally, in the form of changing life histories; or behaviorally. Evolution is too often a "theory of adults", and rather unrealistically inflexible adults at that.
  2. Cohesiveness. There is a long-standing bias in the evolutionary view of development, that of development imposing constraints on the organism. We can see that in the early favor of ideas like canalization, and the later vision of the gene pool as being cohesive and coadapted, which limits the magnitude of change that can be permitted. It's a view of development as an exclusively conservative force. This is not how modern developmental biologists view the process. The emerging picture is one of flexibility, plasticity, and modularity, where development is an innovative force. Change in developmental genes isn't destructive—one of the properties of developmental systems is that they readily accommodate novelty.
  3. Proximate and ultimate causation. One of the radical secrets of Darwin's success was the abstraction of the process away from a remote and inaccessible ultimate teleological cause and toward a more approachable set of proximate causes. This has been a good strategy for science in general, and has long been one of the mantras of evolutionary biology. Organisms are selected in the here and now, and not for some advantage many generations down the line. Evolutionary biology seems to have a blind spot, though. The most proximate features that affect the fitness of an organism are a) behavioral, b) physiological, and c) developmental. These are the causes upon which selection can act, yet the focus in evolutionary biology has been on an abstraction at least once removed, genetics.
  4. Continuous vs. discrete change. This is an old problem, and one that had to be resolved in a somewhat unsatisfactory manner in order for Mendelian genetics (an inherently discrete process) to be incorporated into neo-Darwinian thought (where gradualism was all). Many traits can be dealt with effectively by quantitative genetics, and are expressed in a graded form within populations, and these are typically the subject of study by population geneticists. We often see studies of graded phenotypes where we blithely accept that these are driven by underlying sets of graded distributions of genes, such as the studies of Darwin's finches by the Grants, yet those genes are unidentified. Conversely, the characters that taxonomists use to distinguish species are usually qualitatively distinct, or at least abruptly discontinuous. There is a gap in our thinking about these things, a gap that really requires developmental biology. Wouldn't it be useful to know the molecular mechanisms that regulate beak size in Darwin's finches?
  5. Problematic metaphors. One painful thing for developmental biologists reading the literature of evolutionary biology is the way development is often reduced to a metaphor, and usually it is a metaphor that minimizes the role of development. West-Eberhard discusses the familiar model of the "epigenetic landscape" by Waddington, which portrays development as a set of grooves worn in a flat table, with the organism as a billiard ball rolling down the deepest series. This is a model that completely obliterates the dynamism inherent in development. Even worse, because it is the current metaphor that seems to have utterly conquered the imaginations of most molecular biologists, is the notion of the "genetic program". Again, this cripples our view of development by removing the dynamic and replacing it with instructions that are fixed in the genome. This is bad developmental biology, and as we get a clearer picture of the actual contents of the genome, it is becoming obviously bad molecular biology as well.
  6. The genotype-phenotype problem. Listen to how evolutionary explanations are given: they are almost always expressed in the language of genes. Genes, however, are a distant secondary or tertiary cause of evolutionary solutions. Fitness is a collective product of success at different stages of development, of different physiological adaptations, and of extremely labile interactions with the environment. Additionally, a gene alone is rarely traceable as the source of an adaptive state; epigenetic interactions are paramount. We are rarely going to be able to find that a specific allele has a discrete adaptive value. It's always going to be an allele in a particular genetic background in a particular environment with a particular pattern of expression at particular stages of the life history.

One unfortunate problem with discussing these issues in venues frequented by lay people is, you guessed it, creationism. Any criticism of a theory is seized upon as evidence that the theory is wrong, rather than as a sign of a healthy, growing theory. The Neo-Darwinian Synthesis is not wrong, but neither is it dogma. It was set up roughly 70 years ago with the knowledge that was available at the time, and it is not at all surprising that the explosion of new knowledge, especially in molecular biology, genetics, and developmental biology, means that there are radically different new ideas clamoring to be accommodated in the old framework. The theory is going to change. This isn't cause for creationists to rejoice, though, because the way it is changing is to become stronger.


One of the only "embryologists" I have ever heard mentioned favorably by the builders of the New Synthesis is De Beer. Any idea why they accepted him but not others?

Garstang and Bateson were well regarded, too. Goldschmidt is the one who had considerable animosity aimed his way, and part of that was due to personality, I'm sure.

Development wasn't incorporated because nobody (including the developmental biologists of the time) understood it very well. I think there was also a conscious reaction against evolutionary biology within the embryological community, and I think that was a consequence of the Haeckelian fluff that distracted so many for so long, and the success of Roux's program that focused exclusively on proximate mechanisms.

Evelyn Fox Keller has some excellent analysis of the historical roots of some of these issues from the point of view of language of scientific discourse (Making Sense of Life). There is such a widespread misconception, even (or especially?) among biologists, that genes are proximal causal agents with primacy over all other factors. The prime example, of course, being the absurd promises made by those pumping up genome biology in the 90s (I remember someone with an actual PhD saying gene sequencing would allow parents to hear the singing voice of their child before it was born).

"Gene action" and phrases like it from molecular biology dominate so many fields. It is interesting to see how movements like developmental systems theory (largely theoretical and unknown to most scientists) and systems biology (ceaselessly repeated buzzword that apparently now means "studying more than one gene") have tried to knock people's thinking out of this deep groove. It seems like more people are starting to see the limitations of explanations offered by the modern synthesis, and it is becoming respectable to think up models that work at levels other than differential gene regulation...What will the new kinds explanations look like? What will the metaphors be now that "blueprint" seems so fuddy-duddy? Why are "objective" scientists so metaphor-driven in the first place? Fun times...

Really interesting post.

And on a personal note, it's interesting to see that the dividing lines you mention are mirrored in current debates in the cognitive science community*, which I'm sure is because we just stole the talking points from biology.

I was curious if you could expand on what you mean by "modularity" when you wrote:

"The emerging picture is one of flexibility, plasticity, and modularity, where development is an innovative force"

In cog sci speak, 'modularity' has a very particular sort of meaning (and lots of baggage) which I'm guessing is different in biology.

*please note I am not talking about Evolutionary Psych here, so hopefully we don't get sidetracked. Just more general sorts of questions/issues that all kinds of people who study human behavior wonder about in terms of how different cognitive functions/brain areas develop.

Good post, nice list.

Print up 2,000 copies, please -- and when the IDists start flapping at the next standards or textbook hearing about "teaching the controversy," we can pass this around to tell just what the controversies really are, and insist they be given full play in the textbooks since the IDists want the controversy.

They won't expect the Spanish Inquisition, and they certainly won't expect somebody to propose more evolution in order to teach the real controversies.

By Ed Darrell on 16 Jan 2006

It is interesting to see how movements like developmental systems theory (largely theoretical and unknown to most scientists) and systems biology (ceaselessly repeated buzzword that apparently now means "studying more than one gene") have tried to knock people's thinking out of this deep groove.

Funny you mention that. I'm a big fan of developmental systems theory and think it is more right than wrong (although, unfortunately, it doesn't have much of a research program associated with it). DST is what you get when you have someone seriously question the assumptions of modern biology, someone with some real background in the ideas of the field and an open and creative mind.

What you see in developmental modularity is that there are subnetworks of gene interaction that can be instantiated relatively easily, and that because it is all about interactions, integration emerges rather naturally. For instance, dll is a gene that recruits a whole collection of processes that are associated with producing a protrusion from a sheet. Invoking that gene anywhere generates a morphological change that can be useful -- you see it in action in legs, wings, antennae, tails.
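To make that modular picture concrete, here is a minimal Python sketch. It is only an illustration of module reuse, not a model of the actual Dll network; dll_module and the region names are invented stand-ins.

```python
# Toy illustration of a developmental "module": a self-contained bundle of
# interacting processes that yields the same kind of output (a protrusion)
# wherever it is invoked. All names here are invented for the sketch.

def dll_module(tissue):
    """Stand-in for the subnetwork recruited by Dll: triggered in any
    epithelial sheet, its internal interactions produce an outgrowth."""
    tissue["proliferate"] = True                       # recruited processes...
    tissue["extend_distally"] = tissue["proliferate"]  # ...interact...
    tissue["shape"] = "protrusion"                     # ...integration emerges
    return tissue

# Invoking the same module in unrelated contexts: legs, wings, antennae, tails.
for region in ("leg field", "wing field", "antennal field", "tail field"):
    print(region, "->", dll_module({"region": region})["shape"])
```

The design point mirrors the biology: the module's internals travel with it, so invoking it anywhere is cheap, and the integration comes from the interactions rather than from the caller.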

Ah, systems theory.

I really need to study it more. I came across it originally in my control systems classes for electrical engineering. I recently discovered similar mathematics in a graduate level micro-economics course. And I've just started a graduate level corporate finance course which is, amazingly enough, introducing similar concepts.

I've found the concepts of systems theory a great help in my automotive engineering work, but it's a continuous surprise to me how few automotive engineers are familiar with the concepts.

It's amazing how some concepts translate across several fields.

-Flex

Of course, the message of evo-devo is that developmental biology and evolutionary biology need to pull themselves up by one another's bootstraps. Now that that's happening on a large scale, I think we're entering a golden age of evolutionary biology.

I wouldn't put much money on developmental systems theory myself. It's just the latest version of a kind of talk that's been going on under one guise or another for decades and has yet to produce much in the way of empirical content. Wake me up when there actually are experimental results that support it and distinguish it from old-fashioned "genism".

By the way, let's also not forget that the biosphere is dominated by unicellular prokaryotes, and they evolve too...

By Steve LaBonne on 17 Jan 2006

Why is "genetic program" a poor metaphor? The bootstrapping process is a common one in programs. The PC boot process is the best known example and is a hugely complex process, starting with simple ROM instructions in the BIOS setting up enough of the environment to load the master boot record (MBR) from the disk. The bootloader in the MBR is often a 3-stage process, where each stage sets up more and more of the environment until enough exists for the OS kernel to be loaded from disk. The OS kernel also goes through several stages of setting up environments, loading more of itself, getting rid of old pieces it needed during earlier boot stages but not any longer, detecting hardware, loading components to deal with that hardware, doing more configuration in response to that, etc.

During the process, the processor recapitulates its evolution too, initially functioning as a simple 16-bit 8086 type machine from 25 years ago with no memory protection and a segmented memory model, but ending up as a modern Pentium 4/Athlon in 32- or 64-bit protected mode.

Compilers deal with a lot of bootstrapping issues too, so it's a broad problem in CS, which I suspect makes a better metaphor than the author above thinks.
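As a minimal sketch of that staged-bootstrap idea (plain Python rather than real boot code; the stage names and capabilities are invented for illustration), each stage runs with only what the previous stage built, then discards scaffolding it no longer needs:

```python
# Toy multi-stage bootstrap: each stage depends on the environment assembled
# by the stage before it and hands off a richer one. All names are invented.

def stage_bios(env):
    env["can_read_disk"] = True           # just enough to fetch the next stage
    return env

def stage_bootloader(env):
    assert env["can_read_disk"]           # relies on what the BIOS set up
    env["kernel_loaded"] = True
    return env

def stage_kernel(env):
    assert env["kernel_loaded"]
    env.pop("can_read_disk", None)        # drop scaffolding no longer needed
    env["drivers"] = ["display", "net"]   # detect hardware, load components
    return env

env = {}
for stage in (stage_bios, stage_bootloader, stage_kernel):
    env = stage(env)
print(env)   # {'kernel_loaded': True, 'drivers': ['display', 'net']}
```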

Re Goldschmidt:

The concept of the hopeful monster had far greater resonance with the botany crowd--it fit well with the polyploidy and introgressive hybridization many were observing at the population level.

Mike

I remember someone with an actual PhD saying gene sequencing would allow parents to hear the singing voice of their child before it was born

This isn't entirely unreasonable, just flawed. The length of the vocal cords has the most to do with how one sounds, as well as other size factors. You could assume optimal growth, based on the genetics that imply final size, and make a decent extrapolation of what tonal range they would have. The rub comes from the fact that it's highly improbable that their development will be optimal, even under the most carefully controlled circumstances, and you would need to predict cognitive capacity for tonality, which might also be affected by development, etc. Basically, if you could grow someone in a tank to the age you wanted, and feed in the knowledge they needed to become a singer, you *might* be able to make such a prediction. Otherwise, no, it wouldn't work. But you might be able to come up with a decent enough approximation to tell if they even had a singing voice, or would end up sounding like a half dead frog. ;)

As for the genetic program issue... I tend to agree with JW, though only with the addition that I would class the hardware and any networks, etc. attached to it as the "environment", and go with a live CD type "genetic program". Why? Because a live CD is far more adaptive: it can function in any environment you put it in, so long as it meets minimal requirements and isn't actively hostile (like trying to boot one on a mainframe, when it's supposed to run on PC hardware), and its level of adaptability depends a lot on which "species" you drop into the CD tray before turning on its living environment. The only thing it doesn't do is randomly experiment with better ways to deal with the environments it finds itself in, then automatically burn more copies of those mutants, so that they get picked up by the wind to land in some other CD drive.

It's really only how much flexibility exists in the program, and whether it can mutate, that makes the analogy shaky. The premise isn't that bad. In both the "What will their voice be like?" and "Is it a program?" cases, the problem isn't the analogy; it's the assumption of strict linearity and absolute results that is the problem. And even our computer software is becoming more adaptive than what the original analogies imply.

This is probably the best critique of the "program" metaphor to date. It was also quoted in Fox-Keller's wonderful little book "Re-figuring Life".

"genetic program" is a poor metaphor because y'need cytoplasm and gizmos therein to read it and act on its directions. moreover, there are many cooperating processes and "programs", including the mitochondrial mechanisms. i think the focus upon genetics and the genetic code arises because (a) it's seen as "neater", being "anatomy free", and (b) the power of the allusion to it being this kind of computer program. computing has permeated all aspects of popular life with its phrases, lingo, metaphor, and concepts. yet most people really don't understand how computer programs work and why.

IMO, seen as some kind of means for "programming", the genetic code is horrible. but that's not what it is, so it shouldn't be judged that way.

on another matter related to the purpose of the post, is there any significant use made of computational models in developmental biology? that is, take some part of a critter's development, and model it as a computer program having a bunch of simulated cells each with rules governing when they reproduce and die. model signalling pathways. start small, build models of growth of different subsystems, and eventually hook 'em together.

these wouldn't be math models. they'd be simulations consisting of large numbers of automata interacting to try to "grow" a critter.
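Something like this bare-bones Python sketch, say (the division/death rules, thresholds, and the morphogen-like gradient are all invented for illustration, not drawn from any real system):

```python
# Minimal agent-based sketch: each simulated "cell" follows local rules for
# division and death, read off a toy signal gradient. Nothing here models
# a real organism; the point is the style of simulation.

def signal(pos):
    """Toy morphogen gradient: strongest at the source (pos 0), fading out."""
    return max(0.0, 1.0 - 0.1 * abs(pos))

cells = [{"pos": 0.0}]
for step in range(5):
    next_gen = []
    for c in cells:
        s = signal(c["pos"])
        if s > 0.5:                    # high signal: divide, daughters spread
            next_gen.append({"pos": c["pos"] - 0.5})
            next_gen.append({"pos": c["pos"] + 0.5})
        elif s > 0.2:                  # moderate signal: persist quietly
            next_gen.append(c)
        # low signal: the cell dies and is not carried forward
    cells = next_gen

print(len(cells), "cells, spanning",
      min(c["pos"] for c in cells), "to", max(c["pos"] for c in cells))
```

Hooking many such rule sets together, with signaling between them, is exactly the "start small, build subsystems, hook 'em together" program being suggested.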

I'm wondering, how does the Krebs cycle not make sense except in the light of evolution? Or Mendelian genetics?

Perhaps one day, that will be true, but we don't even understand the molecular mechanisms that regulate the size of finch beaks. It's not like people scratch their heads regarding photosynthesis until the evolution lesson comes up...

To me, evolution is a fascinating topic, but science has not shown that evolution and design (or creation) are mutually exclusive, so I propose that those of us who are truly interested in advancing knowledge and understanding drop the false dichotomy stuff. Yes, there are difficulties associated with detecting design, but there are similar difficulties with a strict methodological naturalism.

PZ, your quote of the day by Steve Allen tends to contradict your own statement shared below, regarding how to treat others who dissent from evolutionary dogma.

You said, "Our only problem is that we aren't martial enough, or vigorous enough, or loud enough, or angry enough," he wrote. "The only appropriate responses should involve some form of righteous fury, much butt-kicking, and the public firing and humiliation of some teachers, many school board members, and vast numbers of sleazy far-right politicians."

I agree with Allen 100%. Yes there are religious fanatics, which are the folks I presume you target with the above statement, but there are also evolution fanatics who make statements like "nothing makes sense in biology except in the light of evolution."

moreover, there are many cooperating processes and "programs",

As is also the case in the booting process once the kernel begins spawning kernel threads and system processes, and as there are on any modern computer system. This blog is hosted on a machine or set of machines running sets of cooperating web, application, and database processes (along with scores of operating system processes performing tangentially related tasks) to host this conversation.

on another matter related to the purpose of the post, is there any significant use made of computational models in developmental biology? that is, take some part of a critter's development, and model it as a computer program having a bunch of simulated cells each with rules governing when they reproduce and die. model signalling pathways. start small, build models of growth of different subsystems, and eventually hook 'em together.

I'm quite interested in this question too. While I suspect cellular automata will be too much of an oversimplification, they were my mental model for understanding Endless Forms Most Beautiful.

This discussion shows precisely why the "program" metaphor is ill-advised (and I'm someone who has used it in print...). Metaphors are supposed to illuminate by calling attention to similarities between something difficult / unfamiliar, and something simple / familiar. The analogy in question fails because it invokes an incorrect image in most people's minds (an actual series of instructions). Saying that those people don't really understand computer programming only reinforces the problem. If trained computer scientists are the only ones that will be able to understand the correct meaning of your analogy, then it's time to use a different analogy...

This article is great! And a million times more useful than talking about ID - which most readers of the blog already know is rubbish.

I agree that the reason I think the computer/program metaphor is a bad one is that I don't know that much about how computers actually work. Because of my personal background, I think cell differentiation is less like a computer loading a new program and more like a Javanese gamelan piece changing irama, because for me this type of shift better captures the distributed causality, reorganization of parts, and network dynamics of a state change. I think this reveals a general psychological pitfall of metaphors: the more you know about one complicated system, the more you tend to view other complicated systems as analogous or subject to similar underlying principles. Not necessarily a bad thing if it helps you to organize your understanding, but it only really functions if it guides you to testable predictions and you don't succumb to the fallacy that the metaphor tells you how something "really" works.

The question that fascinates me about these metaphor-models of genomes, cell signaling networks, chemotaxing E. coli, etc., is the extent to which the metaphors limit our questions and determine what kinds of explanations we find satisfying. For example, many feel like if all of the observations about, say, a growth cone can be accurately modelled in a computer, its behavior has somehow been explained. It always reminds me of (I think) Turing's computational model for Drosophila segmentation, which was elegant, fit the observed data, and was dead wrong. Mathematics is getting better at biological problems (e.g. inference from Bayesian network analysis works under certain conditions), but I still think computational models fail as explanations.

on another matter related to the purpose of the post, is there any significant use made of computational models in developmental biology?

I don't know of any that have made "significant" contributions to understanding a specific developmental process, but certainly there are models that, based on experimental data, demonstrate the robustness and describe the network properties of some developmental events. On a fuzzier level, there are beautiful mathematical concepts that are useful in thinking about pattern formation in development, like the reaction-diffusion model of Hans Meinhardt and Alfred Gierer.

Transcriptional networks governing Drosophila segmentation are maybe the most modelled developmental system, see Odell, Ingolia, and Barkai.

There are also interesting computational models of topographic mapping, the developmental process of axonal growth cones locating precise positions within concentration gradients. See O'Leary, Honda, Lemke.
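For the reaction-diffusion idea in particular, a bare-bones 1-D activator-inhibitor system in the spirit of Gierer-Meinhardt fits in a few lines of Python/NumPy. The parameters below are arbitrary, chosen only so the toy runs; the qualitative point is that a slowly diffusing self-activating activator plus a fast-diffusing inhibitor amplifies small noise into stable peaks:

```python
import numpy as np

# Toy Gierer-Meinhardt-style system on a 1-D ring (arbitrary parameters):
#   da/dt = a^2/h - a + Da * lap(a)      (self-activating activator)
#   dh/dt = a^2   - h + Dh * lap(h)      (fast-diffusing inhibitor)

n, steps, dt = 100, 5000, 0.01
Da, Dh = 0.5, 10.0                       # inhibitor spreads much faster
rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(n)  # near-uniform start plus noise
h = np.ones(n)

def lap(u):                              # discrete Laplacian, periodic bounds
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

for _ in range(steps):
    a += dt * (a * a / h - a + Da * lap(a))
    h += dt * (a * a - h + Dh * lap(h))

peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)))
print("stable activator peaks:", int(peaks))   # a spatial pattern from noise
```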

Hey Dave, we'd all love to see some experiments probing the nature of the Designer. Got some to propose? If not (and the DI folks don't either, alas), I'm afraid you've got a very major "difficulty" which very definitely has no parallel at all in "strict methodological naturalism". You see, Dave, science is about finding stuff out, not making shit up.

By Steve LaBonne on 18 Jan 2006

"The question that fascinates me about these metaphor-models of genomes, cell signaling networks, chemotaxing e. coli, etc., is the extent to which the metaphors limit our questions and determine what kinds of explanations we find satisfying. [...] Mathematics is getting better at biological problems (e.g. inference from Bayesian network analysis works under certain conditions), but I still think computational models fail as explanations."

No. I believe your opinion rests on a misunderstanding of the role of modeling in biology. One important use of models is as formalizations of our ideas about the world (there are other uses of course). In this role, they provide a way to be clear and explicit about our assumptions and mechanistic statements about the world, and check whether they can do what we say they do. For example, cell biologists are fond of ending their talks with statements like "here's my model of how [their favorite system] works" and then showing some gene names connected by arrows. They are usually using a very simple model of gene action, but they do so implicitly. The assumption is that it's all simple enough that everyone will get what they're talking about. Now, traditionally this worked because pathways had only a handful of genes. However, we have reached a stage where for the best understood systems (e.g., C. elegans vulval development, Drosophila segmentation) there are so many genes involved, that just displaying the pathway does not reveal what's going on. Suddenly most of us (i.e. the ones not working day to day with the system) need a computer model, however simple, just to understand what's going on. Modern genomic techniques are extending this to other systems.

In my opinion the goal of cell and developmental biology is to be able to write a simple program, add assumptions about and measurements from our favorite system, and show that it can explain existing and as yet undiscovered data. Just because it's hard to do and has failed in the past (although I disagree with your description of Turing's work) doesn't mean it shouldn't be done. If not this, what is your alternative? Surely not verbal pseudo-understanding of systems involving hundreds or thousands of genes and their products and even more cells. We already have that.

To use a parallel from evolutionary biology, after the publication of the "Origin", biologists spent decades thinking about verbal models of natural selection and debating whether it worked. Predictably, they didn't get very far. The resolution came first from the modeling efforts of the theoretical population geneticists (Fisher, Haldane, Wright, et al), not from any measurements or observations.

The computer "metaphor" isn't a metaphor; it's a literal truth: biology is mostly computation. We can't abandon the computer "metaphor" just because many people don't understand what "computation" actually is---there is no better "metaphor," because there is no substitute for understanding the literal fact that biology _is_ mostly _computational_.

If people are misled by preconceptions about what "computation" and "programs" are, the solution is to explain computation and programs, not to choose "a better metaphor"---there isn't one, and cannot be. We will make zero progress toward explaining the essence of biology if we look for "other metaphors."

People do often assume that a "program" is a serial von Neumann-machine program which takes serial inputs and produces serial outputs, etc., etc. They often assume that all "computer programs" are high-level and do not exhibit strong hardware-dependencies. Or that "computer programs" do rote calculations, rather than extremely flexible adaptation. They assume that computer programs are "algorithmic" in the sense that they produce _determinate_ outputs... as opposed to being algorithmic internally but "heuristic" in terms of what the computed "answers" _mean_.

It's all rather confusing, unfortunately.

Too bad. There is no substitute for the language of computation. If biologists choose a different vocabulary, they will simply _replicate_ the problem that computer scientists have in explaining these things to general audiences---not solve it.

Biologists need to know what computer scientists know about digital switching networks, device drivers & device-(in-)dependence, recursion, adaptation, reflection, reactive planning, the level-dependence of "heuristics vs. algorithms," etc.

Computation is _the_ central concept of computer science, and of biology. Evolution is algorithmic, as are the things that evolve. The hardware is important, but is "peripheral,"---the product is primarily computational. Evolution is an algorithmic search process that produces mostly-algorithmic solutions. (And thereby poses new problems, recursively, by shifting fitness landscapes.) These mostly-algorithmic solutions are primarily encoded in mostly-binary switching systems (genetic regulatory networks).

The evolved complexity of an organism consists mostly in a binary digital switching program, heuristically "designed" (a) to work, i.e., be adaptive and (b) to be amenable to further computational processing (i.e., be evolvable). It is literally an _algorithm_ generated by an _algorithm_ to be processed _by_an_algorithm_.
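A toy Boolean "switching network" in this sense takes only a few lines of Python. The three "genes" and their wiring below are invented; the point is just that regulation iterated as logic settles into attractors:

```python
# Invented three-gene Boolean network: the next state is a logical function
# of the current state. Iterating it lands in an attractor (here, a cycle).

def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,        # C represses A
        "B": a,            # A activates B
        "C": a and b,      # A and B jointly activate C
    }

state = {"A": True, "B": False, "C": False}
seen = []
while state not in seen:       # iterate until a state recurs
    seen.append(state)
    state = step(state)
print("attractor re-entered at:", state)
```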

When will biologists stop referring to computation as a "metaphor," and get serious about computation?

John Searle would beg to disagree, and would say that the description of a natural system as engaging in computation cannot be anything more than a metaphor, since computation is not an intrinsic feature of physical systems but is only a way in which humans choose to interpret the behavior of certain physical systems when it's convenient to do so. To quote Searle directly: "Absolutely essential, then, to understanding the nature of the natural sciences is the distinction between those features of reality that are intrinsic and those that are observer-relative. Gravitational attraction is intrinsic. Being a five dollar bill is observer-relative. Now, the really deep objection to computational theories of the mind can be stated quite clearly. Computation does not name an intrinsic feature of reality but is observer-relative and this is because computation is defined in terms of symbol manipulation, but the notion of a `symbol' is not a notion of physics or chemistry. Something is a symbol only if it is used, treated or regarded as a symbol." As indicated in that passage, this objection was originally posed in response to certain currents of thought in cognitive science, but it applies a fortiori to the rest of biology. I'm afraid I have never been impressed at all by the attempts to refute Searle on this point.

More generally, we should never lose sight of the fact that our scientific models of the natural world are just that- models, which, however sophisticated and lifelike, shouldn't be confused with the real thing.

By Steve LaBonne on 18 Jan 2006

Ricardo said: I believe your opinion rests on a misunderstanding of the role of modeling in biology.

The main issue I'm pursuing is not models, it's explanations, which models (arrow diagrams or computer simulations) are not. Presume we could design a precise computer model of a bacterium (or a conscious mind, or anything) by loading in everything it is possible to know about the mechanistic details of the fundamental molecular interactions involved. This model is undoubtedly useful; my point is that you haven't gotten much closer to a satisfying explanation of how a bacterium works. Too much happens between the molecular level and the behavior of the system, even if your simulation behaves like a perfect bug because you got all the details right.

If you could use observations of this model's behavior to extract higher-order principles of the system's function and compare those to what happens in a real bacterium, you might be getting somewhere. However, so far the huge difficulties in computationally modeling even the simplest signaling modules (e.g. the MAP kinase cascade, for which we could not hope to have more detailed kinetic info or experimental observations) in a way that makes sense and fits observations leads me to suspect that...well, we all have to choose what to spend time on. The basic issue is that although you need way too much information in order to accurately (or even adequately) model the MAP kinase cascade, you don't need it to have an understanding of what the signaling module is doing in a particular context.

Paul W. said The computer "metaphor" isn't a metaphor; it's a literal truth: biology is mostly computation.

I'm sure there is a generalized vocabulary of computation that can be used to talk about any possible computational system, and maybe it would be useful for biologists to use it sometimes, if they felt like it. But the reason "computer" is a bad metaphor is that it is not a mathematical abstraction; computers are human-designed machines. They are just one type of possible machine that performs computations in a very particular and historically contingent way, and there is no reason to suppose that any other computational system works in a way analogous to computers on Earth in the late 20th and early 21st century. So to use a "computer" as an explanatory framework about biology is potentially very misleading, therefore of limited use.

As to whether biology is or isn't mostly or kinda like or entirely computation, we could quote the literature on this argument ad infinitum, but one shouldn't suggest the issue is settled.

Steve---I have to say I'm not impressed when you quote Searle. Searle's view is that intentionality is a mysterious thing that he can't explain, which meat machines like us can have but "computers" can't. He can't explain where this mysterious, apparently irreducible property ---being _actually_ able to "interpret" things---comes from. None of the philosophers I know or regularly read agrees with Searle; I certainly don't. (Though he's very popular among the anti-"strong AI" crowd, most of whom don't seem to actually understand him.)

The ability of DNA to "encode" various aspects of a "design" is not just a subjective interpretation. It plays an important role in scientific explanation of actual phenomena, such as the continued existence of life on earth, recurrence of phenotypic feature X when gene Y is expressed, etc. These things happen whether there are humans around to use their quasi-magical "interpretive" abilities on them or not.

So whether or not you buy Searle's take when it comes to AI, and "mental" states---which I don't---you shouldn't when it comes to biological explanation.

There is something more fundamental going on, which Searle doesn't get. Irrespective of any subjective interpretation, there is such a thing as information---and I mean this in a computational sense, not just the brute physical "information theory" sense. There are certain loci where

(1) small differences in state have big downstream consequences, because there is machinery around those loci that copies and/or amplifies the information

(2) in those loci, what counts as a difference in state depends on the machinery that decodes and responds to those differences, but

(3) different machinery could do the same job, using _different_ state differences and _different_ machinery to decode them

(E.g., a single base-pair change in a gene may cause a freakish development of a macroscopic organism, or a cascade of effects in all of its descendants, in a way that no other molecular change can approach. Or consider the replacement of a "modular" set of several switching genes with an entirely different but _equivalent_ "modular" set of genes---i.e., which take the same inputs and yield the same outputs. The new set may have nothing in common except for the input and output binding sites, and may cause no phenotypic change whatsoever; the replacement _module_ is computationally _equivalent_ because it _computes_ the same function: given certain inputs, it gives certain outputs.)

What counts as a computation over a representation isn't, as Searle would tell you, a matter of subjective interpretation by a quasi-magical subject capable of "really interpreting" symbols. It's a matter of loci of control and formal equivalence.

As a simple example, consider a thermostat. A thermostat may be a simple spring that expands to make an electrical contact (when it's too hot) and contracts to break it (when it's not). Or it may involve an electronic sensor, an analog-to-digital converter, and a chip with a program in PROM memory that computes something equivalent, in an entirely different way. Either is a thermostat, because what counts as being a thermostat is its causal role in a feedback cycle. You can replace a spring thermostat with a digital-chip-based thermostat, or vice versa, and maintain the feedback loop, even if those two devices have no parts and no representations in common. (Except the inputs and outputs; even those you could change with adapters.) You can't substitute a fuse, or a rock, or a carburetor, and have the same equivalence---they simply don't compute the same function.

This is not just a matter of interpretation. There is an infinite number of ways for things to be thermostats, but an infinitely larger infinity of ways for things _not_ to be thermostats---if it can't play a certain role in the maintenance of a certain kind of feedback loop, it isn't a thermostat. This is an objective fact, not a subjective interpretation. (We may particularly _care_ about the distinction between thermostats and non-thermostats for human reasons---e.g., we like to be comfortable---but the distinction is nonetheless objectively real. Some things aren't thermostats.)
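A toy Python sketch makes the equivalence claim concrete (no real hardware is modeled; all names are invented): two internally unrelated implementations compute the same function, so either one sustains the feedback loop.

```python
# Two structurally different "thermostats" computing the same function:
# too cold -> heater on. Either is substitutable in the loop; a rock is not.

def spring_thermostat(temp_c, setpoint=20.0):
    # stands in for a bimetallic spring closing a contact
    return temp_c < setpoint

def digital_thermostat(temp_c, setpoint=20.0):
    # stands in for sensor -> ADC -> program in PROM
    reading = int(temp_c * 10)                 # crude analog-to-digital step
    return reading < int(setpoint * 10)

def run_feedback(thermostat, temp_c=15.0, steps=8):
    for _ in range(steps):
        heater_on = thermostat(temp_c)
        temp_c += 1.0 if heater_on else -0.5   # toy room dynamics
    return round(temp_c, 1)

print(run_feedback(spring_thermostat))    # settles near the setpoint
print(run_feedback(digital_thermostat))   # same behavior, different internals
print(run_feedback(lambda t: False))      # a "rock": the room just cools
```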

Searle is very fond of "intuition pump" arguments where he picks apparently radically different kinds of things, and pumps hard on the intuitive, qualitative differences---e.g., five dollar bills as opposed to, say, thermostats. It's actually very hard to understand a five dollar bill, consciously and rationally---the kind of information it conveys, and how it conveys it, in the context of an incredibly sophisticated information processing system. (A bunch of human minds engaged in quite sophisticated social cognition.) This makes it seem like an entirely different kind of thing in some deep but undefined way, which is exactly the response he wants from the reader. He doesn't really want to _clarify_ intentionality, because he has no clear ideas about it. He just thinks "computers can't do it," but "meat can."

(And he doesn't really want to explain the social psychology of money---which likely _does_ apply to many alien species on other planets, because it's a product of basic game theory among a broad class of possible alien species. It's an objectively recognizable engineering solution to an actual objective problem---which various human cultures hit on independently---for deep reasons that are not nearly as "subjective" as Searle would have you think, or nearly as "interpretive" in the _way_ he'd have you think.)

I think it's better to start with examples, like thermostats or genes, which are easy to see the physical realizations of, and easy to envision _alternative_ but _equivalent_ realizations of.

And in the case of genes, it's easy to see the information-bearing properties, in that genes induce a lot of fancy machinery to copy themselves, almost bit-for-bit, and do so by objectively observable means. (Like inducing their hosts to have sex, seek food, kill and/or eat other hosts of different DNA, etc.)

These things have to count as objective facts, not just interpretations. In normal genetic circumstances, a given codon _is_ a symbol for the amino acid it codes for, whether Searle wants to glorify it as a _true_ "symbol" in his mysterious mentalistic "intentional" terminology or not. It might not count as a "symbol" at a psychological level, but it's a fine _symbol_ in the brute computational sense. For most biological/computational purposes, it doesn't matter whether Searle would consider something a true _symbol_; intentionality is not the issue, and the "symbol" does the computational job. The egg gets to make another chicken, which makes another egg.

That's the standard lazy misreading of Searle, Paul. It's simply a fact that nobody understands intentionality- yet. Searle is not a mysterian like McGinn who despairs of ever doing so, he simply holds the sensible, and scientifically rational, view that we won't understand it until we understand a lot more about how the brain works.

There is an infinite number of ways for things to be thermostats, but an infinitely larger infinity of ways for things _not_ to be thermostats---if it can't play a certain role in the maintenance of a certain kind of feedback loop, it isn't a thermostat.

A physical feedback loop is not a computation, it's an intrinsically physical process, just like Searle's favorite example of digestion. The existence of a thermal feedback loop depends purely on its physical response to temperature, not on any observer-dependent interpretation of what it's doing. Computation, on the other hand, involves manipulation of symbols. This example reveals precisely where your thinking (and that of the computationalist AI types) goes awry.

Symbols are just that, symbols- things given an interpretation by human intentionality. They are not part of the laws of physics. And wishing by computationally-oriented people doesn't make it so. (When all you have is a hammer, everything looks like a nail.)

By Steve LaBonne on 19 Jan 2006

A physical feedback loop is not a computation, it's an intrinsically physical process,

This sounds like a failure to make a type/token distinction.

I could as well say that electrons flowing through wires and transistors in my laptop are not a computation---that's all "intrinsically physical" stuff---as indeed it is. Computing devices are intrinsically physical.

Whether something is physical or computational is not generally the question; the question is whether it's physical and computational.

just like Searle's favorite example of digestion.

Digestion makes a great intuition pump, because it's obviously a part of biology that's "actually doing something" rather than "mostly computing." But by the same token, the power supply in my computer isn't particularly computational. It's just transforming energy to a more usable form. Still, it's part of a mostly computational system. And in both cases there _is_ a fair bit of computation going on to regulate the process. Various "small" aspects of the physical state of the system are used to "represent" (reflect) other aspects in ways that enable feedback, switching, cascades, etc.

You or Searle might object that this isn't "real computation" because the representations aren't "real representations," because there is no intentional subject there to actually "interpret" them.

I just don't understand why we should care. I never claimed that the computation done by, say, genetic regulatory networks, was computation to a subject with intentionality. (Until we figure it out, anyway.) When codons are transcribed, there's no conscious scribe there, who is an intentional subject, to "really read" the codons and "really understand" them. That may be relevant to the consciousness-related stuff Searle likes to talk about, but it's irrelevant to most of biology. (It's fine by me if biology is mostly like Searle's "Chinese Room," with nobody understanding anything, so long as it works.... until we get to psychology, and then we can argue about intentionality. :-) )

There doesn't have to be an intentional subject for something to be computational.

In Searle's terms, the "computational" aspects of biology may not be _really_ computational, because there's no intentional subject to "interpret" the "symbols." Maybe we should put scare quotes around "transcribe," "switch," "regulate," and "signal" as well.

What would that accomplish? Genes do switch on and off, genes are transcribed and switch other genes on and off, and there is a whole lot of signaling going on... Most of the complexity in biology is in the regulation of other aspects of biology, e.g., cascades of such things. If there's no intentional subject around to justify calling those things "computational," as Searle would like, too bad. It still works, and it still _computes_, even if it's a "zombie" system, with no intentional subject around who has the mysterious power to imbue it all with "real meaning," top-down. (And turn meat-robot Pinocchio into a real boy.)

I think most of this talk about intentionality vs. "intrinsic" physicality is a great big red herring, which conceals the real scientific difficulties.

And there are real scientific difficulties. I understand people's resistance to the computer "metaphor" based on simplistic ideas of computation. (E.g., assuming that all computation is serial, discrete, etc.) I understand their fear that if we call the systems that regulate development a "program," people might jump to conclusions about what such a program would look like, based on very stereotyped impressions of "computer programs." And they might miss what's actually happening in a given biological system because they would have "written the program differently" in a different "programming paradigm," for "a different underlying architecture."

On the other hand, I don't think the right solution to this is to ignore computer science and reinvent it using "biological" jargon. That would be like confusing "mathematics" with arithmetic and trying to reinvent mathematics (e.g., topology or graph theory) under new names that were "not mathematical" because you couldn't make arithmetic do what you needed. Computer science isn't just a set of hacks for programming late-20th-century devices.

Computer science is like mathematics---it's a very broad and basic "field." At least in principle, it encompasses a lot of the kinds of things biologists are trying to figure out.

It is not just a historically contingent thing about certain devices built in the late 20th century. For example, if you look at a multitape Turing machine, it bears almost no resemblance to the von Neumann machines we normally program these days; still it is deeply illuminating with regard to digital computation in general. And if you look at issues of the representation of modularity in object-oriented and reflective systems, they're really all about evolvability---how to separate out aspects of modules so that they can be combined and/or changed and/or "exapted" without wrecking things. The actual languages people actually use now are typically serial, or nearly so. But the issues of factoring and combining aspects of a design are fundamental and should apply equally to massively parallel and distributed systems and other programming paradigms. This has essentially nothing to do with whether computers are von Neumann machines, or programming languages are procedural and sequential; it's at an entirely different level of analysis, which most biologists don't even know about when they dismiss "computation" as "a misleading metaphor."

That's why nobody should have been surprised when biologists found genes you could turn on to grow an eye, etc. Those are subroutine calls. The idea of subroutines with call sites distinct from the subroutines themselves is very basic software engineering, and it would be surprising if evolution could have succeeded without stumbling into this very basic and useful trick.
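In code, the distinction is simply that a subroutine is defined once and invoked from many call sites. A minimal sketch (grow_eye and the tissue names are invented stand-ins for a master control gene and its contexts, not actual biology):

```python
# One "subroutine" (the downstream eye-building cascade), many call sites.

def grow_eye(tissue):
    """Defined once: a fixed cascade triggered by the 'call'."""
    return f"eye assembled in {tissue}"

# Call sites are independent of the routine's body, which is why ectopic
# expression experiments (eyes induced on legs or wings) are unsurprising
# from a software point of view.
for call_site in ("head", "leg", "wing"):
    print(grow_eye(call_site))
```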

Presumably evolution discovered many of these kinds of things zillions of years ago, and computer scientists are reinventing a fair number of ancient "biological" wheels that biologists haven't gotten around to discovering yet.

Biology will only be a mature science when we understand a lot more of the software engineering principles that have been built-in by evolution. When we do, I'm sure we'll discover many parallels between things like developmental biology and bootstrapping operating systems, factoring software and biological evolvability, etc. I don't think there's any way evolution could have worked as well as it has without evolution stumbling into many of the same kinds of basic engineering techniques that computer scientists intelligently search for.

There certainly is a risk with any particular neat idea that you might be blinded to alternatives, and waste your time looking for nonexistent nails to hit with your hammer. But there's a bigger risk if you ignore neat and likely relevant ideas that are really basic problems and their possible solutions---you will miss the forest for the trees.

So, for example, in object-oriented software engineering, there's something called "the fragile base class problem." The general problem is that you may have a designed or evolved module that works as-is, because it processes inputs and generates outputs in a useful way. But when you try to "inherit" from it, and make a variant of it, it generally breaks, because any changes to the internals of the module must respect the internal logic of the module. So it's hard to "exapt"---unless it's a cleverly parameterized module, where you can systematically replace certain _related_ parts in correspondingly _related_ ways, which preserve the crucial internal logic. Software is generally not very evolvable if it's not designed or evolved _to_evolve_.

(You might think this is just an artifact of a circa-1990's "object-oriented programming" fad, but it's not. The same deep problem arises with any programming paradigm---logic programming, declarative knowledge representation systems, functional programming, procedural programming... it transcends most aspects of programming paradigms, and each paradigm must come up with analogous solutions to address it.)
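A minimal Python illustration of the fragile base class problem (the classes are invented for the demo): a plausible-looking variant breaks because the base class's internal logic calls back into the overridden method.

```python
# The base "module" works as-is; inheriting from it violates an invariant.

class Counter:
    def __init__(self):
        self.count = 0
    def add(self, item):
        self.count += 1
    def add_many(self, items):
        for item in items:
            self.add(item)        # internal detail: add_many reuses add()

class DoubleCounter(Counter):
    """Naive variant: intends to count every item exactly twice."""
    def add(self, item):
        self.count += 2
    def add_many(self, items):
        super().add_many(items)   # base calls OUR add(), counting twice...
        self.count += len(items)  # ...so this "extra" pass makes it 3x

c = DoubleCounter()
c.add_many(["a", "b"])
print(c.count)   # 6, not the intended 4: the variant broke on inheritance
```

The variant is only safe if it respects the base module's internal logic, which is exactly the sense in which unparameterized modules resist "exaptation".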

Presumably nature has encountered this problem millions of times, with unevolvable modules that lead to evolutionary dead-ends. And either it has "solved" the problem by brute force---eventually discarding the particular unevolvable modules and keeping the particular modules that purely by chance are properly modular---or it has stumbled into higher-level software engineering principles, which systematically promote not just the evolution of modules but of modules parameterized in certain kinds of ways.

I am pretty certain it's mostly the latter; when we understand genetics at a substantially higher level than the "machine code" we're decoding right now, we'll see many things remarkably similar to cleverly designed "evolvable" and "extensible" software. So, for example, seemingly "weird" things about recent discoveries in gene expression may well be the low-level mechanisms that support the high-level mechanisms of parameterization needed for good software engineering. But if biologists don't understand the fragile base class problem, they're unlikely to recognize what the low-level mechanisms are really for.

And if I'm wrong, that will have to be scientifically interesting, too. It will either indicate that evolution isn't very clever at all, and the problems it solves are somehow much easier than they look, or that there's a different set of software engineering principles that evolution has discovered and computer scientists have missed.

If so, that wouldn't be because biology isn't mostly computational, or because evolution isn't mostly the software engineering of that mostly computational stuff. It would be because computer science was looking in the wrong places, and missed important things about the nature of computation and software engineering.

And that would be very interesting to me, as a computer scientist whose hero is Darwin. :-) One way or another, biology and computer science have to connect up in a big way.

Whether something is physical or computational is not generally the question; the question is whether it's physical and computational.

Exactly, I never said otherwise. A thermostat is the former but not the latter. Purely physical, non-designed, not even biological thermal-feedback systems can exist (in some aspects of the earth's climate, for example). And temperature is not an observer-dependent phenomenon. If it were, you'd be immune to being burned to death unless you were conscious at the time. ;)

Computation is often a useful metaphor in biology, nobody denies that. But it's not the only useful metaphor and one should not lose sight of the fact that it is a metaphor. That's all I'm trying to say, and in some of what you write above you appear to be backtracking towards that position yourself- good for you.

By Steve LaBonne on 19 Jan 2006

Computation is often a useful metaphor in biology, nobody denies that. But it's not the only useful metaphor and one should not lose sight of the fact that it is a metaphor. That's all I'm trying to say, and in some of what you write above you appear to be backtracking towards that position yourself- good for you.

Sorry to disappoint you, but I'm not. I don't think you or Searle are correct to say that computation is X, where X necessarily involves a subject with intentionality.

I think it's more correct to say that the initial examples of computing, so called, involved intentional subjects. (People called "computers," before the advent of automatic digital computing.)

Then we discovered a more general phenomenon, of which that was just one example. Computation can be done by automatic machinery, and by machinery that is evolved rather than intelligently designed.

Here's an analogy. Consider a physicist talking about waves of electromagnetic radiation. You might say that's just a "metaphorical" usage---if you look at the history of the term, it turns out the original paradigmatic case of "waves" involved water.

But that doesn't matter. Propagating waves of (roughly circular) motion near the surface of water turn out to be an instance of a more general, more explanatory, and much broader class of things. When we recognized the more general phenomenon, we generalized the term "wave" accordingly, to capture this more fundamental kind of thing we'd only superficially understood before.

So it is, I claim, with computation. The concept of computation is more general than it was understood to be a few hundred years ago. We have come to realize that computation can be performed by mechanisms that are not intentional subjects, and are not designed by intentional subjects.

Here's a thought experiment to back that up. Suppose that by some admittedly very farfetched coincidence, some random motions of some stuff happened purely by chance to assemble a Universal Turing Machine, and I managed to recognize it as such.

I could then program this gadget and use it to compute any function that any Turing machine (or von Neumann computer) could compute.

That would be extremely remarkable. Now you might say that it wasn't a real computer, but something I found and used as one---something whose operations I chose to interpret as computation. Or maybe it becomes a computer by my intentional act of interpreting it as such.

But that misses one crucial fact. The assembly of such a device by random forces is extraordinarily, astronomically improbable. I can't use a rock, or an armchair, or the Sun as a universal Turing machine. Out of a googol of randomly assembled objects, I might not find even one such object-I-could-use-and-interpret-as-a-universal-computer.

So there's an objective fact here about this remarkably, astonishingly computer-like object, which isn't just a matter of subjective interpretation. It's a fact about what it can do---it can be programmed to compute arbitrarily complex functions---which the overwhelmingly vast majority of objects simply cannot do.

We need a name for this remarkably computer-like property of this object.

We could say it's an "accidental-computer" which "isn't really a computer but behaves exactly like one."

I think that would be absurd. The right thing to say is that it's a computer of bizarre, accidental provenance.

Similarly, an evolved computer-like thing, such as a genetic regulatory network, does real computation, and really is a computer of a sort, of rather odd provenance. (Or provenance that seems odd until you take Darwin seriously.) Even if it wasn't intentionally designed to compute. Evolution is a good enough substitute for design that we can talk about function---wings are for flying, feet are for walking---and the functions of genetic regulatory networks are largely computational.

It turns out that computation just isn't something that requires intelligent design, or even an intentional subject around to watch and interpret it.
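
To make the "program this gadget" step of the thought experiment concrete, here's a minimal (non-universal) Turing machine simulator in Python---a textbook-style sketch, nothing specific to this thread; a universal machine is the same idea with the program itself stored on the tape. The behavior is fixed entirely by a lookup table plus mechanical rules; no interpreting subject appears anywhere:

```python
def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until it halts (or we give up).

    `table` maps (state, symbol) -> (new_symbol, move, new_state), where
    move is -1 (left), 0 (stay), or +1 (right). The state "halt" stops it.
    """
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))


# A tiny "program" for the machine: flip every bit, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip_bits, "0110"))  # -> "1001_"
```

If random motions assembled hardware that behaved like run_turing_machine, the table and the tape would do all the work; nothing in the mechanics cares whether anyone is watching.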

I admit that the thermostat example is tricky. Asking whether a thermostat computes is a lot like asking whether a virus is alive; it's a marginal case in a gray area.

However, if you have something like a thermostatic feedback loop in an organism, and the organism exists because of that evolved mechanism, I'd say that is computational. It computes a successor state of the organism---a response to a stimulus---in a way that serves a definite function. (Even if not an intentional subject's "purpose.")

It may do so using a particular "intrinsically physical" mechanism, but the explanation of its basic function doesn't depend on the particular low-level physical details of that mechanism.

So, for example, it might have evolved a different kind of temperature sensor, a different signaling mechanism, and a different heat-generating process, but the formal structure could be identical and play the same role in explaining the continuing existence of the feature---it would have the same function (regulating heat) and the same consequences (survival and reproduction).
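
Here's a minimal sketch of that point in Python (the sensor functions are invented stand-ins): one pure function from stimulus and state to successor state, unchanged while the physical substrate varies:

```python
import random

def thermostat_step(temperature, set_point, heat_on):
    """Compute the successor state of a bang-bang feedback controller.

    A pure function: stimulus (temperature) in, response (heater command) out.
    The +/-1 degree dead band keeps the output from chattering.
    """
    if temperature < set_point - 1.0:
        return True        # too cold: heat on
    if temperature > set_point + 1.0:
        return False       # too warm: heat off
    return heat_on         # inside the dead band: no change


# Two different "substrates" driving the same formal loop:
def electronic_sensor():
    return 35.0 + 4.0 * random.random()    # a thermistor, say

def metabolic_sensor():
    return 36.0 + 2.0 * random.random()    # some evolved molecular readout


for read_temperature in (electronic_sensor, metabolic_sensor):
    heat_on = False
    for _ in range(3):
        t = read_temperature()
        heat_on = thermostat_step(t, set_point=37.0, heat_on=heat_on)
        print(f"{read_temperature.__name__}: {t:.1f} -> heat {'on' if heat_on else 'off'}")
```

The explanatory work is done by thermostat_step, which mentions no physics at all; swap in a different sensor or effector and the explanation doesn't change.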

That's good enough for me. I don't see any reason to say this isn't the computation of a response to an input, any more than I see a reason to say that waves aren't waves if they aren't motions near the surface of water. As with "waves," there's something more general and fundamental going on with "computation" than the old paradigm cases capture.

The main issue I'm pursuing is not models, it's explanations, which models (arrow diagrams or computer simulations) are not. Presume we could design a precise computer model of a bacterium (or a conscious mind, or anything) by loading everything it is possible to know about the mechanistics of the fundamental molecular interactions involved. This model is undoubtedly useful; my point is that you haven't gotten much closer to a satisfying explanation of how a bacterium works. [miko]

I'm not sure I understand your idea of what constitutes an explanation (I suspect this is similar to the other discussion on computation in these comments). I for one would be quite happy with the situation you describe, assuming it's even possible to achieve. As a scientist I have to hope it is.

On the molecular level of ribosomes and tRNA, a gene is remarkably like a tape for a Turing machine. What it isn't is a script for building an adult organism.
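
The tape analogy is easy to make literal. Here's a toy sketch in Python, using a few entries of the real codon table (the full standard code has 64 codons):

```python
# A few entries of the standard genetic code; the real table has 64 codons.
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "GAA": "Glu",   # glutamate
    "UAA": None,    # stop
    "UAG": None,    # stop
    "UGA": None,    # stop
}

def translate(mrna):
    """Read an mRNA 'tape' three symbols at a time, head moving right."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:   # stop codon (or a codon missing from this toy table)
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("AUGUUUGGCGAAUAA"))  # Met-Phe-Gly-Glu
```

A read head, one-way motion, a finite symbol table, a halting condition---the resemblance is close at this level. And, as the output shows, what comes out is a molecule, not a body plan.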

When I read PZ's posts about the process of embryonic development, I do think in terms of programs, but they're little, simple programs running semi-stochastically on a zillion parallel threads in a complex environment that interacts with them to produce an organism. It's not "line 50: put the right arm here".
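
In code, that picture looks something like the following toy sketch (not a model of any real pathway; the rule and its parameters are invented). Every cell runs the same tiny program; decisions are noisy; whatever pattern appears emerges from local interaction rather than from stored coordinates:

```python
import random

def step(cells, p_express=0.9):
    """One tick: every cell runs the same little rule, in parallel, noisily.

    Rule: keep (or switch on) expression if you or a neighbor expresses,
    but only with probability p_express---no cell knows its coordinates.
    """
    n = len(cells)
    nxt = []
    for i, state in enumerate(cells):
        active = state or cells[i - 1] or cells[(i + 1) % n]  # ring of neighbors
        nxt.append(1 if active and random.random() < p_express else 0)
    return nxt


random.seed(1)
cells = [0] * 30
cells[15] = 1                     # one signaling cell, set by some earlier event
for _ in range(8):
    cells = step(cells)
    print("".join(".#"[c] for c in cells))
```

Rerun it with a different seed and you get a slightly different but recognizably similar pattern. Nothing in the program says where the expressing domain ends up or how wide it gets.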

I think the really bad metaphor is the "genetic blueprint": the idea that in some sense there is a plan for an adult written down in there. I think this underlies a lot of folk reasoning about genetics, such as the endless arguments over whether this or that quality is down to nature or nurture. There's this idea that the genetic blueprint lays down the basic outlines of the adult, like the blueprint of a house, and upbringing is just like adding paint and furniture to the house; there's only so much it can do and the types of changes are qualitatively different. But that's not how it is at all.

In the less-sophisticated forms of science fiction, you occasionally see a further distortion of the blueprint metaphor that adds in a kind of sympathetic magic control of body form: something rejiggers an adult human being's DNA and, bam, he turns into a monster! (This is not universal in SF: stuff Samuel R. Delany wrote back in the sixties makes me think he had a more or less correct understanding of how it all works, but, then again, Delany is probably one of the smartest people ever to write science fiction.)

[Miko:] The main issue I'm pursuing is not models, it's explanations, which models (arrow diagrams or computer simulations) are not. Presume we could design a precise computer model of a bacterium (or a conscious mind, or anything) by loading everything it is possible to know about the mechanistics of the fundamental molecular interactions involved. This model is undoubtedly useful; my point is that you haven't gotten much closer to a satisfying explanation of how a bacterium works.

[Richard:] I'm not sure I understand your idea of what constitutes an explanation (I suspect this is similar to the other discussion on computation in these comments). I for one would be quite happy with the situation you describe, assuming it's even possible to achieve. As a scientist I have to hope it is.

I take Miko's basic point to be that a detailed simulation of something whose logic you don't understand effectively gives you a copy of that thing, which you also don't understand.

Suppose we had the scanning technology to take a snapshot of an organism at an atomic level, and simulate its basic physics---motion of molecules, making and breaking of chemical bonds, etc. This would also "simulate its biology," by brute force. We could feed the organism simulated food, let it wander around a simulated environment exhibiting its simulated behaviors, etc. But it wouldn't explain things like "how the organism decides when to look for food" or "why it keeps breathing" or "how faulty gene transcription is detected and corrected." It's a model at the wrong level to answer those kinds of questions.

Similarly, if we could simulate whole large environments in low-level detail, we could simulate biological evolution by brute force---we could watch individual organisms reproduce and die, etc. But we would be in the same position as Darwin was, trying to explain evolution; we'd still need to formulate the theory of evolution by natural selection and test it against particular data points out of the unimaginably vast sea of data produced by our simulator. (Actually, it would be a lot of work just to pick out individual organisms from the vast sea of data about molecular interactions. That alone would require some impressive theories.)

This sort of thing happens to computer engineers, trying to figure out what's wrong with a faulty chip that computes the wrong answers sometimes. If all of the individual gates and wires in the chip are working correctly, a gate-level simulator may exactly predict every result at every step of the computation, and agree with the actual results. This tells you that you have a design flaw, not a manufacturing problem, but it doesn't tell you what the design flaw is. For that, you need higher-level theories---e.g., about what constitutes a correct conditional jump or floating-point multiplication, and a correct hardware design to implement it---not detailed simulations of gates. (Without that kind of higher-level theory, you can't even say what it means for the chip to be "faulty.")
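
Here's a toy version of that situation in Python (the circuit and its bug are invented): a gate-level model of a half adder whose designer wired the carry wrong. The gate-level simulation agrees with the flawed hardware at every step; only the higher-level theory of what the circuit is for---addition---licenses the word "faulty":

```python
def xor_gate(a, b): return a ^ b
def and_gate(a, b): return a & b
def or_gate(a, b):  return a | b

def half_adder_as_built(a, b):
    """Gate-level model, faithful to the flawed design: the carry was
    wired through an OR gate where an AND gate belonged."""
    total = xor_gate(a, b)
    carry = or_gate(a, b)          # the design flaw (should be and_gate)
    return total, carry

def half_adder_spec(a, b):
    """The higher-level theory of what the circuit is for: adding."""
    s = a + b
    return s % 2, s // 2

for a in (0, 1):
    for b in (0, 1):
        built, spec = half_adder_as_built(a, b), half_adder_spec(a, b)
        note = "" if built == spec else "   <- only the spec reveals the flaw"
        print(f"a={a} b={b}: gates give {built}, spec says {spec}{note}")
```

Like the brute-force organism simulator, the gate-level model reproduces behavior perfectly and, by itself, explains nothing.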

By the way, in the other discussion about "computation," I wasn't disagreeing with Miko about this point, which I take to be entirely correct. I wasn't arguing for brute-force computational modeling as the salvation of biology, or anything remotely like that. (That just can't work.) I was arguing for computational theories and explanations, because I believe the biological phenomena are mostly computational. (And because I think the goal of science is explanation, not prediction per se; predictions are mostly useful for testing explanations.)

I wasn't disagreeing with Miko about this point, which I take to be entirely correct. I wasn't arguing for brute-force computational modeling as the salvation of biology, or anything remotely like that. [Paul W]

Neither was I. I understand that that approach would not explain anything. But I don't really know any actual modellers who are trying to do that. So I guess we are all in agreement...

Could someone please explain problem 1, "The problem of unimodal adaptation"? In particular, what is meant by "unimodal"?

Does this boil down to "biologists tend to concentrate on only one trait of the members of a population, ignoring the way that two or more traits can interact to affect fitness"? E.g., the researcher studying an arms race between frogs and flies in a pond will tend to ignore the subpopulation of frogs off to the side who have started eating mosquitos.

Maybe I'm dense, but I just don't get this problem. Is it something that isn't properly explained by modern theory? Is it that everyone knows that this is a simplification, but the full problem is too hard to study properly? What?