An interesting article by Ari Schulman is up at The New Atlantis, arguing that we will never be able to replicate the mind on a digital computer. Here I want to argue, briefly, that there are other reasons to think this.
Transhumanists are fond of claiming that one day we will be able to download a state vector description of our brain states onto a suitably fast and sophisticated computer, and thereafter run as an immortal being in software. I want to give two reasons why this will not happen, and neither of them relies on anything like the Chinese Room, which is just a bad argument in my opinion.
Reason 1: A computer program, or any state description simulated in a computer program, is a representation. Insofar as it represents some aspect of the real world, it is interpreted from outside the program, so that the variable Mass_vector represents the masses of some real bodies and is not just an unanchored variable in an arbitrary program. In short, we anchor these abstractions so they have intentionality – so that they are about something, rather than being an interesting but unreal mathematical transformation. The sole difference between, say, a Newtonian simulation of the solar system and an arbitrary program that happens to be isomorphic in structure to such a simulation (say, the accidental output of a heuristic programming machine) is that we anchor the former in this way.
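To make this concrete, here is a minimal sketch in Python (the names, numbers, and one-dimensional physics are all toy assumptions of my own). The two runs below compute exactly the same transformation; only the names we choose to anchor differ, and the anchoring does no computational work at all.

```python
# A toy one-dimensional Newtonian gravity step. Nothing inside the
# program ties any variable to the real solar system; that anchoring
# is supplied by us, from outside.

G = 6.674e-11  # gravitational constant, SI units

def gravity_step(pos_a, vel_a, pos_b, mass_b, dt):
    """Advance body A one Euler step under body B's gravity (1D toy)."""
    r = pos_b - pos_a
    accel = G * mass_b / (r * r) * (1 if r > 0 else -1)
    vel_a = vel_a + accel * dt
    pos_a = pos_a + vel_a * dt
    return pos_a, vel_a

# Anchored run: these names are "about" the Earth and Sun only
# because we say so.
mass_sun = 1.989e30                     # kg
earth_pos, earth_vel = 1.496e11, 0.0    # m, m/s (toy initial state)
earth_pos, earth_vel = gravity_step(earth_pos, earth_vel, 0.0, mass_sun, 60.0)

# The identical transformation with unanchored names: the same
# program, but about nothing at all.
x, v = gravity_step(1.496e11, 0.0, 0.0, 1.989e30, 60.0)
assert (x, v) == (earth_pos, earth_vel)  # indistinguishable from inside
```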
Consequently, any program that purports to be “me” running on a Cray 9000 is indistinguishable from an arbitrary program that does much the same thing. What makes it about me? That it is supposed to represent me by those who set it up. Now this is not sufficient to make that program not me, but consider this – according to Turing, any computable program (and the software “me” must be computable) can be “run” on any other universal system. Hence, I could instantiate that program on a Lego Turing Device. I would be very unhappy calling that “me”. It’s a representation of me, not a copy of me. Representations are, as it were, internal to cognitive systems, not aspects of the real world. That is, a representation, being a semantic entity, has to have some referential link to the thing it represents, and it is in that relationship that the representation gets its purchase as a representation. And that relationship is the outcome, I think, of our intentionality, not of the thing itself. So I would not say that the representation, however dynamic and convincing to observers, is me. At best it represents me in the abstract semantics of the observers.
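Substrate independence of this sort is easy to exhibit. Here is a minimal Turing machine interpreter as a Python sketch (the machine table is a toy of my own devising, not anything from Turing): the same abstract table could just as well be stepped through by a Lego device, precisely because nothing about the substrate enters the description.

```python
# A minimal Turing machine interpreter. The table below is the whole
# "program"; any universal system that can read it can run it.

def run_tm(table, tape, state="start", head=0, max_steps=1000):
    """table maps (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Toy machine: flip every bit, then halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flipper, "1011"))  # -> 0100_
```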
To put it another way, the simulation is no more “me” than a photograph that survives my death. It represents (some) aspects of me, but the object “me” is long gone once I die. So much for survival after death by downloading.
Reason 2: I call this the Inverse Eleatic Principle. Graham Oddie coined the name “Eleatic Principle” for Armstrong’s argument that for some class of entities to exist, they must make a causal difference. Abstractions lack causal power, and hence they do not exist. I want to invert this and say that physical differences inevitably make causal differences. If you take the structure of a mind as implemented in wetware and put it on electronic hardware, it will inevitably lack many of the properties of the biological mind, and gain many new ones. I’m not saying it can’t be done: just that if it were done, the end result would not be “the same” as the original.
A while back, some engineers “evolved” a circuit, using the reprogrammable gates of an FPGA and a genetic algorithm. The final result was in fact better, for the particular fitness function chosen, than anything designed, but it included four distinct circuits, unconnected to the main circuit, that could not be pruned away without degrading the main circuit’s performance. It turned out that they affected the main circuit’s behaviour by electrical induction. Physical differences from the logical structure of the circuit made a considerable difference to the behaviour of the overall system. In short, what counted as “logically equivalent” depended on the resolution of the description – if you excluded the inductive properties of these extrinsic circuits, you did not get the same results.
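For readers unfamiliar with the technique, here is a skeleton genetic algorithm in Python. It is not the experimenters’ actual code (they evolved configurations on physical hardware, and my fitness function is a pure stand-in); the point it illustrates is that selection rewards whatever raises the measured score, whether that comes from the intended logic or from physical side effects no schematic records.

```python
# Skeleton genetic algorithm. The "circuit" here is just a bitstring,
# and fitness() stands in for measuring a physical circuit's output:
# selection sees only the score, never the designer's logical schematic.
import random

GENOME_BITS = 64        # hypothetical configuration length
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    """Placeholder measurement: count of set bits. In the real
    experiment this was a measured behaviour of the live hardware."""
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))      # best score found
```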
The human brain, and in particular any individual brain, must be like this. It is not enough to describe a state vector of every neuron and represent it in a digital computer. To get something approaching the exact brain, you really need to describe every molecule in the brain, and its binding and electromagnetic properties as well. And even then, you have only simulated the brain – which is an organ in a larger system, all of it depending on physical properties to do things that we represent, and only represent, in a formal Turing system. Physical differences make a difference, and we do not know which ones are important. In short, we do not yet know what to represent, and when we do, it is likely that physical differences will still make significant differences.
The key term here is “significant”. What counts as the “natural” equivalence classes into which we divide the phenomena to be represented? When do the mathematical representations of minds, or of solar systems, become the things they are modelling? In the case of solar systems, we have no trouble – they never do. No computer orrery will ever include the actual masses of the objects in the solar system. But in the case of minds we seem to equivocate, since what we value about minds are properties that are not physical and that seem to be computable. I argue, however, that the equivalence classes themselves are just conventions, not natural properties of the mind, and so if you want to place my mind in some other entity, make sure it has all the physical properties I do, or else you merely have an old, faded photograph of me.