What Will Change Everything?

So far, my favorite response to the annual Brockman challenge - this year, the question was "What will change everything?" - comes from the physicist Stuart Kauffman:

Reductionism has reigned as our dominant world view for 350 years in Western society. Physicist Steven Weinberg states that when the science shall have been done, all the explanatory arrows will point downward, from societies to people, to organs, to cells, to biochemistry, to chemistry and ultimately to physics and the final theory.

I think he is wrong: the evolution of the biosphere, the economy, our human culture and perhaps aspects of the abiotic world, stand partially free of physical law and are not entailed by fundamental physics. The universe is open.

Many physicists now doubt the adequacy of reductionism, including Philip Anderson and Robert Laughlin. Laughlin argues for laws of organization that need not derive from the fundamental laws of physics. I give one example. Consider a sufficiently diverse collection of molecular species, such as peptides, RNA, or small molecules, that can undergo reactions and are also candidates to catalyze those very reactions. It can be shown analytically that at a sufficient diversity of molecular species and reactions, so many of these reactions are expected to be catalyzed by members of the system that a giant catalyzed reaction network arises that is collectively autocatalytic. It reproduces itself.
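Kauffman's threshold argument can be sketched in a few lines. The toy model below is my own illustration, not his actual calculation: the ligation-only reaction scheme and the catalysis probability are assumptions chosen for simplicity. The point it shows is combinatorial: candidate reactions grow roughly as the square of the number of species, so for any fixed per-reaction catalysis probability, each species eventually catalyzes at least one reaction on average, which is the regime where a giant, collectively autocatalytic network is expected.

```python
def expected_catalyzed_per_species(n_species, p_catalysis):
    """Expected number of reactions a single species catalyzes.

    Toy model (an assumption for illustration): any unordered pair of
    species can react (e.g. ligate), giving n*(n-1)/2 candidate
    reactions; each species catalyzes each reaction independently
    with probability p_catalysis.
    """
    n_reactions = n_species * (n_species - 1) // 2
    return p_catalysis * n_reactions

# Reactions grow ~n^2 while species grow ~n, so even a tiny catalysis
# probability eventually pushes the expectation past 1 per species.
threshold = next(n for n in range(2, 10**6)
                 if expected_catalyzed_per_species(n, 1e-6) > 1.0)
print(threshold)  # 1415 with p = 1e-6
```

With these toy numbers the crossover happens at about 1,400 species; the qualitative lesson, not the specific number, is what carries over to Kauffman's argument.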

SNIP

Can we have a natural law that describes the evolution of the swim bladder? If a natural law is a compact description available beforehand, the answer seems a clear No. But then it is not true that the unfolding of the universe is entirely describable by natural law. This contradicts our views since Descartes, Galileo and Newton. The unfolding of the universe seems to be partially lawless. In its place is a radically creative becoming.

Let me point to the Adjacent Possible of the biosphere. Once there were lung fish, swim bladders were in the Adjacent Possible of the biosphere. Before there were multicelled organisms, the swim bladder was not in the Adjacent Possible of the biosphere. Something wonderful is happening right in front of us: When the swim bladder arose it was of selective advantage in its context. It changed what was Actual in the biosphere, which in turn created a new Adjacent Possible of the biosphere. The biosphere self-consistently co-constructs itself into its ever changing, unstatable Adjacent Possible.

If the becoming of the swim bladder is partially lawless, it certainly is not entailed by the fundamental laws of physics, so cannot be deduced from physics. Then its existence in the non-ergodic universe requires an explanation that cannot be had by that missing entailment. The universe is open.

Karl Popper famously distinguished between two types of scientific problems: clocks and clouds. Clocks are neat, orderly systems that can be elegantly solved through reduction. (Think, for instance, of planetary orbits, which can be explained with gravitational equations.) A cloud, on the other hand, is an epistemic mess; as Popper put it, they are "highly irregular, disorderly, and more or less unpredictable." After all, clouds are carried and crafted by an infinity of currents; they seethe and tumble in the air, and are a little different with every moment in time.

The question, of course, is whether the universe (and all the life contained therein) is a clock or a cloud. The methodology of modern science is predicated on the assumption that everything is a clock, a wonderfully complex timepiece to be sure, but still a clock. But what if reality is a cloud? Is reductionism still valid? Or are clouds best solved through simple observation, induction and ad hoc theorizing? Of course, that's how science was done in the 19th century (e.g., Darwin), but maybe that's our future?

via kottke


Mu. The "unfolding of the Universe" is strictly governed by "natural law", but that does not mean that it is strictly deterministic. The fact that you can't predict the outcome does not mean that the mechanisms involved are unknowable.

The s->r, billiard-ball determinist model is not the only determinist model. A selectionist model is every bit as much "natural law", but as Dunc says, is unpredictable. Natural selection, operant learning, and cultural evolution are each examples of this model, which cannot reasonably be reduced to an s->r model.

The unfolding of the universe seems to be partially lawless. In its place is a radically creative becoming.

The biosphere self-consistently co-constructs itself into its ever changing, unstatable Adjacent Possible.

I'm sorry...this just sounds like New Age gobbledygook. People who attack reductionism are almost always attacking a straw man. They especially like jumping explanatory levels (e.g., How do you explain the beauty of a symphony in terms of acoustics? How do you explain the evolution of a swim bladder in terms of particle physics?)

Hierarchical reductionism is what makes the most sense, and how scientists actually make real progress:

In his book The Blind Watchmaker, Richard Dawkins introduced the term "hierarchical reductionism" to describe the view that complex systems can be described with a hierarchy of organizations, each of which can only be described in terms of objects one level down in the hierarchy. He provides the example of a computer, which under hierarchical reductionism can be explained well in terms of the operation of hard drives, processors, and memory, but not on the level of AND or NOR gates, or on the even lower level of electrons in a semiconductor medium.
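Dawkins's point can be made concrete in a few lines of code. This is my own illustration, not an example from the book: one-bit binary addition explained in terms of the level just below it (logic gates), which are themselves explained in terms of the level below that (a single universal NAND gate).

```python
def nand(a, b):
    """Universal NAND gate -- the bottom of this little hierarchy."""
    return not (a and b)

def xor(a, b):
    """XOR built only from NAND gates, one level up."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a, b):
    """One-bit addition, explained one level down:
    sum bit = a XOR b, carry bit = a AND b."""
    return xor(a, b), (a and b)

# 1 + 1 -> binary 10: sum bit 0, carry bit 1
print(half_adder(True, True))  # (False, True)
```

Explaining the half adder in terms of gates is tractable; explaining a spreadsheet directly in terms of NAND gates would be jumping levels, which is exactly the move hierarchical reductionism forbids.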

If we want to understand the behavior of clouds or the evolution of the swim bladder, the first thing to do is to try to identify a suitably lower level of description in which the problem is tractable. Then we start devising hypotheses/models.

How exactly do we try to understand complex systems under the Kauffman paradigm? Do we just meditate on Adjacent Possibles or creative becomings?

Neither clock nor cloud, but both at different times.

The folding of a protein may be explained by mathematics (though it is as yet impossible to predict the 3D structure), but the protein will always be able to exist in a large range of possible configurations.

The constraints on protein folding may be seen as analogous to the constraints on the cloud example you give above: outside influences such as air currents for the cloud, or chaperone proteins and pH for the protein.

This variation in configurations becomes exponential when you start to vary individual amino acids.

We may not be able to predict the shape of a cloud, but like the protein we should be able to make one the shape we wish, given enough information and control over the environment.

Patrick

I think that the downfall of reductionism as a practical philosophy lies in the probabilistic nature of the universe. Reductionism says that if we just know all of the contents of the universe, their properties and the forces acting upon them, then everything can be explained from the top down or the bottom up. But we know that many of the fundamental processes of the universe--the behavior of photons, for example--cannot be understood except in terms of probability. And the uncertainty that is part and parcel of any probabilistic system means that even if all of the starting materials and forces were known, we could never predict every unfolding of the universe. This is why reductionism fails, and I don't think that science as a field has really come to terms with the full implications of this. And this is I think what Kauffman is getting at. It's possible that reductionism could be true from a metaphysical standpoint, but from a practical standpoint of what we will ever actually be able to know scientifically about the universe, reductionism is fool's gold.

A word about hierarchical reductionism: this is the only practical application of reductionism. But what seems to be ignored or at least downplayed by its adherents is that by stating that we can learn and know things quite well within one or two levels, we have just placed huge limitations on what we can know. And hasn't the scientific project always seen itself as this limitless way of figuring out every inner working of the cosmos? Science hasn't been a mere attempt to figure out how birds evolved from reptiles. We wanna know how the big bang led to the human brain! And we can't! This is what we need to grapple with.

Dunc: The "unfolding of the Universe" is strictly governed by "natural law", but that does not mean that it is strictly deterministic.

Agreed, in that it appears there are "choices": you can have an equally valid universe whether the quantum coin lands heads or tails, and which outcome happens appears underspecified without reference to the result of that choice (e.g., the Conway-Kochen free will theorem).

However, that does not preclude a universe of Recursively Enumerable complexity, one whose unfolding still has a recognizable pattern. You just can't always tell whether some particular data is impossible under that pattern.

Jonah Lehrer: The methodology of modern science is predicated on the assumption that everything is a clock, a wonderfully complex timepiece to be sure, but still a clock.

Actually, no. A "clock" corresponds to a pattern of at most Recursive complexity: you can definitely tell "Yes" and "No". A "cloud" corresponds to a pattern that may be of Recursively Enumerable complexity: you can definitely tell "Yes", and you might sometimes tell "No", but you can't always be sure about "No".

Science is easier with clocks, but even assuming clouds it can be shown that the science can still handle them using the usual methodology: gathering evidence, forming conjectures about the evidence, developing a formal hypothesis which indicates how the current evidence may be described under the conjecture, and competitive testing of all candidate hypotheses (including that previously tested best and currently holding title as "Theory") under a formal criterion for probable correctness.

The math just gets more vicious.

Now, anything harder than RE (e.g., requiring a Halting Problem Oracle)... that looks to be insurmountable. Science appears to "solve" that problem by assuming, as a philosophical principle, that such patterns don't exist.
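The "certain Yes, no certain No" asymmetry of a Recursively Enumerable pattern can be shown with a semi-decision procedure. The Collatz conjecture here is my own stand-in example (no non-terminating input is actually known to exist): the search can confirm a "Yes" by finding a witness, but when bounded it can only give up, never deliver a definite "No".

```python
def reaches_one(n, max_steps=None):
    """Semi-decide whether the Collatz trajectory of n reaches 1.

    Returns True for a definite "Yes". With max_steps set, it returns
    None ("don't know") when the budget runs out; with max_steps=None
    it would simply run forever on a (hypothetical) bad input. That
    asymmetry is the hallmark of a Recursively Enumerable pattern.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if max_steps is not None and steps > max_steps:
            return None
    return True

print(reaches_one(27))                # True (after 111 steps)
print(reaches_one(27, max_steps=10))  # None -- gave up, not "No"
```

Running out of budget is not evidence of impossibility, which is exactly the "can't always be sure about No" situation described above.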

Reductionism has been the dominant approach for more than 300 years simply because the technology to track network behavior didn't exist yet.

Networked, unpredictable behavior is more interesting than traditional reductionist science (where the latter is even possible) because of the vast possibilities it can offer to human knowledge.

Statistics and mathematics can explain the dynamics of a very big network (like the human brain or the internet), but they can't forecast the emergence of a new trend or a new behavior in that network.

Even neurology and neurochemistry can explain in exact detail the way the nervous system works, but they cannot predict the particular thoughts, beliefs, feelings, and future of the person formed by this network of nerves and neurons. That ignorance is in fact great news, because it leaves open new ways of understanding the universe, including living beings.

Artificial intelligence, parallel computing, and a new kind of society are going to appear in the coming centuries thanks to a new kind of science that we are still in the process of understanding. I particularly believe that the application and mastering of this knowledge is going to CHANGE EVERYTHING we know.

This theme has been in the air for many years and has been called "emergence" (not "emergency") because it has been demonstrated that new and unexpected patterns "emerge" from big network systems even when the rules that govern them are very simple. Read about Hans Moravec, Kevin Kelly, and Ray Kurzweil.
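The emergence-from-simple-rules claim has a compact demonstration in elementary cellular automata. This example is my own addition, not the commenter's: Rule 110 is nothing but an eight-entry lookup table, yet its long-run patterns are rich enough that the rule is known to be Turing complete.

```python
# Rule 110: the number 110 in binary (01101110), read off against the
# eight possible three-cell neighborhoods from 111 down to 000.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply the local rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15    # a single live cell
for _ in range(8):                 # watch structure grow from one bit
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Nothing in the eight table entries hints at the triangles, gliders, and persistent structures that show up after a few dozen steps; that gap between the rule and its behavior is what "emergence" names.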
----
P.S. Sorry for my imperfect English, but I like your blog very much and I wanted to participate in it.

Best regards

Erick Zarate
VisionEZ
Mexico, City
http://visionez.blogspot.com

Interesting discussion. While reviewing the neuroscientific literature, two metaphors came to me: the cloud and the tree. I had planned to write an essay about this but have not yet gotten around to it. The cloud refers to the relationships of interleaved neurons, the tree to the organization of the modules that comprise the architecture of the brain. The two are linked by the ionic interaction of the ion channel, the gating mechanism that regulates the propagation of the neuron's action potential. Once again, interesting discussion.

By sericmarr (not verified) on 24 Jan 2009 #permalink

The sole limitations of reductionism are inadequate computing power and the halting problem. Perceiving the universe as hierarchical, probabilistic, stochastic, or chaotic are merely clever crutches for working around said inadequate computing power. Nothing can work around the halting problem.

Llewelly-- I can't see how the only problem with reductionism is a mere lack of adequate computing power. Certainly for us to try to model the universe from top to bottom we would need a hell of a lot more computing power than we currently have, but this is simply a practical concern. Even if we had infinite computing power, how are you going to write this hypothetical computer program that will model the entire universe (multiverse?) without knowing in sufficient detail all of the contents and forces driving the evolution of the cosmos? The main problem with reductionism doesn't have to do with computing power but with this lack of knowledge that is unavoidable when you come up against the uncertainty principle and the properties you cast aside: probability, hierarchy, chaos. Any way you slice it, reductionism fails. I can sympathize with wanting to cling to a reductionist world-view. It really feels like the universe should work in a reductionist way--and maybe it does! But we just can't ever know, because reductionism as a mode of inquiry will never get us there.

i mostly agree with Kurt. mechanism seems to be obviously wrong. as kurt pointed out, mechanistic notions cannot allow for the uncertainty principle. the world is chaotic and messy.

However, I think it's somewhat mistaken to think of reductionism as identical to mechanism. mechanistic theories -are- reductionist, but not all reductionist theories are mechanistic. it is true that an essential characteristic of reductionism is the idea that 'x' can be understood by looking at its components and how those components function and interact. however, the 'x' might not be a material thing (as mechanism necessitates). the 'x' might be a phenomenon, or a process, or a theory. many theories of mind and language are reductionist, and their reductionism occurs on the meta level and is not (necessarily) materialist. these are not mechanistic notions.

still...i think all reductionism suffers the same problem (again, i think i agree with kurt here): it can only reach so far. in the same way that a machine cannot be fully understood from its physics alone, something relevant and essential is lost when we break things/ideas/processes down to their parts and neglect the fact that all things are, at least a little bit, chaotic.

I often question whether certain things - like how we got from the Big Bang to the human mind, or the workings of quantum mechanics - are actually irreducible, or whether they are epistemologically indeterminate. If everything were reducible, it seems that things would be perfectly predictable (like the clock as opposed to the cloud). Whether or not the cloud is indeterminate for intrinsic reasons or epistemic reasons, we may never know; for those who want a reductionist science, this is deeply troubling.

Whenever I read about reductionism, I step back and look at the psychological processes underlying a reductionist movement. I am not up on the literature (if any has been done on the subject), but what do you think sparks the desire for reductionism? Perhaps it has to do with our limited cognitive systems - with heuristics, rules, and ways to flawlessly determine outcomes, we save cognitive resources that can be put toward overriding goals that we deem more important for evolutionary reasons.

Let me see if I can turn this discussion on its head.

Computer software developers are fond of the expression leaky abstraction. The term is motivated by the fact that the goal of most software design is to abstract away the details of implementation of a particular solution on an actual physical computer or computer network. Computers themselves are the first level of abstraction; digital circuit and chip designers try to build systems that reliably conform to a clean logical specification, despite the fact that they are made of physical devices that are in fact complicated systems; when observed closely they are full of non-linearities and stochastic behavior. At a deep level, their operation depends on properties of matter that can only be understood with the aid of quantum mechanics.
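The textbook small case of a leaky abstraction (my example, not the commenter's) is floating-point arithmetic: the "real number" abstraction leaks the underlying binary representation at the very first addition.

```python
# Decimal 0.1 and 0.2 have no exact binary representation, so the
# "real number" abstraction leaks immediately.
a = 0.1 + 0.2
print(a == 0.3)   # False
print(repr(a))    # 0.30000000000000004
```

The leak is exactly the commenter's point: to explain why the abstraction misbehaves, you have to drop down a level, from "numbers" to how IEEE 754 floats actually store them.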

The very possibility and potential fruitfulness of reductionism became apparent as we began to grapple seriously with the "leaks" in our abstract obliviousness to the structural depth of the physical world. (Remember that this really only began in earnest a few centuries ago.) This obliviousness was all the stronger because we didn't think of these abstractions as abstractions; we thought of them as immediate experience of outer reality. That's a key point; our sense experience is in fact a particular view of the world outside (and within) our bodies constructed by our nervous systems and brains.

Let me assert, then, that the relevance of reductionism is not the megalomaniacal dream of "explaining everything" on the basis of some hypothetical "ground substance" and associated fundamental laws. Rather, it is the hope, never fully or finally justifiable, that when our abstractions leak, we can explain why. The leaks are windows into a deeper level of reality, in terms of which our abstractions can come to be seen as intricate constructions.* Of course, the abstractions remain not just useful, but essential; the implicit and multi-layered complexity of the world can't be swallowed whole.

(* Not designed abstractions, mind you; let's not open that ultimately vacuous can of worms.)
