Why we'll never be downloaded

An interesting article is up at The New Atlantis by Ari Schulman, arguing that we will never be able to replicate the mind on a digital computer. Here I want to argue briefly that there are other reasons for thinking this.

Transhumanists are fond of claiming that one day we will be able to download a state-vector description of our brain states onto a suitably fast and sophisticated computer, and thereafter run as an immortal being in software. I want to give two reasons why this will not happen, and neither of them relies on anything like the Chinese Room, which is just a bad argument in my opinion.

Reason 1: A computer program, or any state description simulated in a computer program, is a representation. Insofar as it represents some aspect of the real world, it is interpreted outside the program, so that the variable Mass_vector represents the masses of some real bodies and is not just an unanchored variable in an arbitrary program. In short, we anchor these abstractions so they have intentionality - so that they are about something rather than being an interesting but unreal mathematical transformation. The sole difference between a program that is, say, a Newtonian simulation of the solar system and an arbitrary program that happens to be isomorphic in structure to that simulation, but is, say, the accidental outcome of a heuristic programming machine, is that we anchor the former in this way.
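
A minimal sketch may make the anchoring point concrete (the names and numbers here are my own illustration, not anything from the article): nothing inside the program ties these arrays to the Sun and the Earth; that anchoring lives entirely in the comments and in how we choose to read them.

```python
# A toy Newtonian integrator. The variable names suggest the Sun and Earth,
# but nothing in the program itself anchors them to those bodies: the same
# arithmetic could "be" any pair of masses, or nothing real at all.
G = 6.674e-11                         # gravitational constant (SI units)
mass_vector = [1.989e30, 5.972e24]    # we *read* these as Sun and Earth (kg)
pos = [[0.0, 0.0], [1.496e11, 0.0]]   # metres
vel = [[0.0, 0.0], [0.0, 2.978e4]]    # metres per second
dt = 3600.0                           # one hour per step

def step(pos, vel):
    # Mutual gravitational acceleration, then a symplectic Euler update.
    dx = pos[0][0] - pos[1][0]
    dy = pos[0][1] - pos[1][1]
    r3 = (dx * dx + dy * dy) ** 1.5
    for i, j, sign in ((1, 0, 1.0), (0, 1, -1.0)):
        a = sign * G * mass_vector[j] / r3
        vel[i][0] += a * dx * dt
        vel[i][1] += a * dy * dt
    for b in (0, 1):
        pos[b][0] += vel[b][0] * dt
        pos[b][1] += vel[b][1] * dt

for _ in range(24 * 365):             # roughly one simulated year
    step(pos, vel)
```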

Consequently, any program that purports to be "me" running on a Cray 9000 is indistinguishable from an arbitrary program that does much the same thing. What makes it about me? That it is supposed to represent me by those who set it up. Now this is not sufficient to make that program not me, but consider this - according to Turing, any program that is computable (and the software "me" must be) can be "run" on any other universal system. Hence, I could instantiate that program on a Lego Turing device. I would be very unhappy calling that "me". It's a representation of me, not a copy of me. Representations are, as it were, internal to cognitive systems, not aspects of the real world. That is, a representation, being a semantic entity, has to have some referential link to the thing it represents, and it is in that relationship that the representation gets its purchase as a representation. And that relationship is the outcome, I think, of our intentionality not of the thing itself. So I would not say that the representation, however dynamic and convincing to observers, is me. At best it represents me in the abstract semantics of the observers.

To put it another way, the simulation is no more "me" than a photograph that survives my death. It represents (some) aspects of me, but the object "me" is long gone once I die. So much for survival after death by downloading.

Reason 2: I call this the Inverse Eleatic Principle. Graham Oddie named the Eleatic Principle for Armstrong's argument that for some class of entities to exist, they had to make a causal difference. Abstractions lack causal power, and hence they don't exist. I want to invert this, and say that where there are physical differences, causal differences are inevitable. If you take the structure of a mind as implemented in wetware, and put it on electronic hardware, it will inevitably lack many of the properties, and gain many new ones, of the biological mind. I'm not saying it can't be done: just that if it were, the end result would not be "the same" as the original.

A while back, some engineers [PS] "evolved" a circuit board, using reprogrammable gates and a genetic algorithm. The final result was in fact better, for the particular fitness function chosen, than anything designed, but it had four distinct circuits that were unconnected to the main circuit and yet could not be pruned away without degrading the main circuit's performance. It turned out that they affected the main circuit's behaviour by electrical induction. Physical differences from the logical structure of the circuit made a considerable difference to the behaviour of the overall system. In short, what counted as "logically equivalent" depended on the resolution of the description - if you excluded the inductive properties of these extrinsic circuits, you didn't get the results.
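
To make the shape of that experiment concrete, here is a schematic of the evolutionary loop (a hedged sketch with made-up parameters and a stand-in objective, not the engineers' actual protocol). The detail that matters for the argument is the fitness function: in the real experiment it was measured on the physical board, which is exactly how physical quirks like inductive coupling could be recruited by selection.

```python
import random

# Schematic genetic-algorithm loop of the kind used in evolved-hardware
# experiments like the one described above. Parameters and the toy fitness
# function are illustrative only.
N_GATES, POP_SIZE, GENERATIONS = 64, 40, 200

def random_genome():
    return [random.randint(0, 1) for _ in range(N_GATES)]

def fitness(genome):
    # Stand-in objective so the sketch runs as-is (it just counts 1s).
    # In a hardware-in-the-loop experiment this function would instead
    # program the gate array and measure its analogue behaviour, which is
    # why effects absent from the wiring diagram can be selected for.
    return sum(genome)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]            # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", N_GATES)
```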

The human brain, and in particular any individual brain, must be like this. It is not enough to describe the state vector of every neuron and represent it in a digital computer. To get something approaching the exact brain, you really need to describe every molecule in the brain, and its binding and electromagnetic properties as well. And even then, you only simulate the brain - which is an organ in a larger system, all depending on physical properties to do things we represent, and only represent, on a formal Turing system. Physical differences make a difference, and we do not know what is important. In short, we do not know what to represent yet, and when we do, it is likely that physical differences will still make significant differences.
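
Some rough arithmetic, using commonly cited orders of magnitude and an arbitrary ten bytes of state per item, gives a feel for the gap between a neuron-level description and a molecular one (these are illustrative guesses, not measurements):

```python
# Back-of-envelope scale comparison. Counts are commonly cited orders of
# magnitude; the atom count and the bytes-per-item figure are rough
# assumptions chosen purely for illustration.
neurons  = 8.6e10   # ~86 billion neurons
synapses = 1.0e14   # ~100 trillion synapses
atoms    = 1.0e26   # order-of-magnitude guess for a ~1.4 kg brain

bytes_per_item = 10  # assume ten bytes of state recorded per item
for label, n in [("neuron-level", neurons),
                 ("synapse-level", synapses),
                 ("atom-level", atoms)]:
    print(f"{label:>13}: ~{n * bytes_per_item:.1e} bytes for one snapshot")
```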

The key term here is "significant". What counts as the "natural" equivalence class into which we divide the phenomena to be represented? When do the mathematical representations of mind, or solar systems, become the things they are modelling? In the case of solar systems, we have no trouble - they never do. No computer orrery will ever include the actual masses of the objects in the solar system. But in the case of minds, we seem to equivocate, as what we value about minds are properties that are not physical, and seem to be computable. But I argue the equivalence classes themselves are just conventions, not natural properties of the mind, and so if you want to place my mind in some other entity, make sure it has all the physical properties I do, or else you merely have an old, faded, photograph of me.


If I understand your reasons right, then:

Reason 1: Addition on an abacus and addition on a computer are different in the abstract sense (or, why assume we would transfer a model of the brain instead of a model of its processes? If the person would think 'x, y -> b', there's no necessity for transferring the neural structure, just that element [and anything that affects/would be affected by it] - which still retains a fair sense of identity between the platforms supporting it)

Actually, my last comment gives me another thought: it's like saying MS Word is not still MS Word if it's ported to another architecture.

Reason 2: That doesn't invalidate personal identity any more than a deaf person getting an aural implant, or a child learning something new, or someone discovering an 'unknown' skill they had - the 'me' is still continuous, despite changes in the qualities of it.

I don't really understand most of the philosophical terminology, so forgive me if this is a stupid question. But the equivalence classes have to be broad enough so that the "you" of today is the same person as the "you" when you were 5 years old, no? Or would you disagree with that? It seems to me that any equivalence class that's that broad ("you"'re running on a pretty much completely different physical system at the molecular level than you were then, after all) would probably include versions of "you" that run on digital computers as well.

My take is different. If my friends and family---the people who know me best---couldn't distinguish a programmed "me" from the organic "me" they know (and let's assume that they're trying really hard to make that distinction, and "I"'m doing what "I" can to help them succeed, i.e. it's a game set up entirely in their favor and yet they still can't make the distinction), then that program must be considered "me" for all intents and purposes.

Isn't there also a simple resolution argument to be made? Computers have limited resolution. Floating points can only float so far, so to speak. The brain's resolution limit is... what? The Planck distance? Who knows. I've also read some articles and books that suggest there are quantum effects involved with the mind. If that pans out, well, have fun building a precision model of effects that are probabilistic and/or defined as simply uncertain.

For some time now I've considered "downloading the mind" one of the silliest things to come out of the futurism community, along with flying cars and The Singularity (sorry, Vernor). I just finished Pandora's Star by Peter Hamilton, and the whole recording and making copies of the mind on tiny devices was an annoying part of an otherwise good novel. I bought into the wormhole-enhanced rail system much more easily. ;-)

By Quiet_Desperation (not verified) on 05 Mar 2009 #permalink

too much Kurzweil?
It is our future. We will learn to control the fundamental elements of matter and energy and eventually reconfigure ourselves to live forever within an alternate "platform".
Actually, I think you're basically asking what is consciousness? or why am I me?
Wanna hear the end? We go on to use all the matter/energy in the universe to form a giant conscious machine thingy called Jebus.

By Disillusioned … (not verified) on 05 Mar 2009 #permalink

And suppose the simulation is run on a computer linked by suitable peripherals to the environment (video camera providing -- through an appropriate filter -- input to the "optic cortex" subroutine, simulated motor neuron firings controlling "waldos"). Would it be possible for the system to attain suitable "anchoring" in this way? If the time-lapse between downloading and the running of the simulation was short enough, and the environment the computer interacts with close enough to my original environment, could it MAINTAIN my anchoring in a way that would make it me? (That's just a question about Reason #1.)

Let me go away and think about what you have said.

Anecdote: Hofstadter (of "Gödel Escher Bach" fame) was saying, either in a lecture or in conversation at Indiana University where he taught, that to "think about" something was just to contain an internal representation of it. The logician Mike Dunn (who told the story to me) raised what seemed like an objection: Zermelo-Fraenkel set theory contains representations of lots of things (recall the chapters in intro set theory books about defining numbers in set theory etc etc). Hofstadter: "Great! That means that Zermelo-Fraenkel set theory thinks about numbers etc etc!"
Dunn, commenting, later: Hofstadter just doesn't understand the concept of a reductio.
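
(For readers who haven't met the construction Dunn alludes to: the standard von Neumann definition of the natural numbers inside set theory identifies each number with the set of its predecessors, roughly:)

```latex
0 = \varnothing, \qquad 1 = \{\varnothing\} = \{0\}, \qquad 2 = \{\varnothing, \{\varnothing\}\} = \{0, 1\}, \qquad n + 1 = n \cup \{n\}
```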

By Allen Hazen (not verified) on 05 Mar 2009 #permalink

"A rose, by any other name, would still smell as sweet"
I hate to invoke "Moore's law", but to build our current knowledge and technological limits into the parameters of a philosophical debate is fighting dirty. Is there a time limit?
Man's goal has always been to become (one with) god. Why stop now?
Are you your father's son? Are you who you were when you were an infant, a child, an old person? Are you still you if you become a paraplegic, have a stroke or even a lobotomy? The only constant is that we are not...
What if a machine could learn, adapt/evolve and be programmed to survive... would it still be a machine?
We all change. Reading this blog changes us.
Whether you digitally download your mind or are meshed as a computer/human hybrid, you still might be you... however altered and irrevocably changed.

By Disillusioned … (not verified) on 05 Mar 2009 #permalink

Perhaps this is a layman's way of saying what you said in point 1, but:

I just figure that even if the technology existed to copy the state of my brain into a computer (even amazing technology that could copy molecule by molecule) it still wouldn't work, because the 'mind' in the computer, if it matched the mind in my brain, would be trying to send messages down physical nerves to my body, and looking for messages back from eyes and ears and skin and organs. And it wouldn't get those messages in the computer, (because only the brain is copied) so it would just, well, sh1t itself and FAIL.

In order to copy the brain-mind into a computer, you wouldn't just have to copy molecule by molecule in the brain, you would have to copy molecule by molecule in the whole body, so that the copied mind had a body to communicate with. And then you would have to simulate a world for this copied body-brain-mind to inhabit.

I think it is just as ignorant to say it is absolutely not possible as it is to say it absolutely is. Any scientist who makes such claims is clearly not a very observant one because, if history has taught us anything, it should be that what we do not know far surpasses that which we do. All we can really do is read the data and try to make our best interpretations. I personally lean toward the notion that eventually this sort of thing may happen, not likely in any of our lifetimes, but who knows? This is, of course, just my own opinion and I am well aware that the future may prove me wrong. Perhaps one day science will reach its end and plateau ~or~ maybe there is no end at all.

As far as the idea that a computer copy of you won't actually be you goes, you can throw that one out right now, because who are you anyway? If you look at a picture of yourself as a child and say, "that's me", how can you be so certain? The image in the mirror is not the same as the image in the picture. You may have similar features, but by now the atoms that composed you then are most likely no longer with you. The thoughts you had back then would seem absurd to you now. I do agree that if it were possible to download yourself to a computer you would not recognize it as yourself, but it might. Of course, the second the download was completed you and your computerized copy would have diverged in your thought processes and you would suddenly be different, but you would both still 'feel' like you. Don't be so quick to lay out definitives, my friend, because nature has a tendency toward disproving them.

And one final thing; you use a lot of fancy words to try to convince your readers that you have authority, no doubt to try and sway them to your personal beliefs, but this is not a wise person's way. It is far better to state your side clearly and ask for reasonable counter-arguments. Others will know the depth of your knowledge by your insights, not your vocabulary. Am I wrong? It is clear you are an intelligent person; I only say these things because you make the all-too-common mistake of setting your intellect toward confirming your own preconceived notions rather than maintaining an open mind and attempting to see both perspectives. Make this one simple change in your thinking and it will reward you every day of your life. Think about it.

By Eric Patton (not verified) on 05 Mar 2009 #permalink

Thank FSM nobody said the name 'Penrose' or I'll have to explain why "brain as a quantum computer" is complete bullshit.

By Alex Besogonov (not verified) on 05 Mar 2009 #permalink

Do it anyway.

I don't need quantum effects. I just need functional molecular tertiary structures, weak attraction and van der Waals forces. We know these are crucial in biological function at the molecular level. And this is too great to be simulated at real time anyway, although that's more of a practical than a theoretical limitation, arguably.

The neuron is NOT a simple switch, or little black box. It is at least a minicomputer, performing extremely complex nonlinear computations on hundreds or thousands of inputs in complicated format, under control of genetic, hormonal, neurotransmitter, and other factors.

I contend with basis that the neuron is, in fact, a nanocomputer, and the neural network is NOT a Hebbian McCulloch-Pitts kind of net, but merely the Local Area Network of a distributed molecular computer, where 90%+ of the computation is being done by the non-steady-state dynamics of protein molecules within the neurons (and glial cells), in a Laplace-transform domain quite different from the physical substrate (*thinks* Greg Egan's Diaspora), as determined by my peer-reviewed Mathematical Biology solutions to the Michaelis-Menten equations of the metabolism, as solved by Krohn-Rhodes decomposition of the semigroup of differential operators.
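
(For reference, and with no attempt to reproduce the commenter's own Krohn-Rhodes treatment, the textbook Michaelis-Menten rate law for a single enzyme-catalysed reaction is:)

```latex
v \;=\; \frac{d[\mathrm{P}]}{dt} \;=\; \frac{V_{\max}\,[\mathrm{S}]}{K_M + [\mathrm{S}]}
```

where [S] is the substrate concentration, V_max the maximal rate, and K_M the substrate concentration at which the rate is half-maximal.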

Whoops. That does already sound like gobbledegook, of the "reverse the dilithium crystals" variety. Suffice it to say that I agree with Ari N. Schulman for yet other reasons, that the Rapture of the Nerds is based on antique and reductionist toy-problem misunderstandings of what a cell and a brain are. I prefer to struggle with the current literature and the current Math and the current experimental data, rather than be stuck in the 1956 vision of AI, which has failed so badly that John McCarthy, who coined the very term "Artificial Intelligence", has confessed to me that he wishes he'd never invented the phrase.

Alex Besogonov (#9):

Thank FSM nobody said the name 'Penrose' or I'll have to explain why "brain as a quantum computer" is complete bullshit.

::ahem:: "Penrose"
(but only because I've heard rumblings of the argument, and I want to know more about the matter ;-) )

John S. Wilkins (#10):

And this is too great to be simulated at real time anyway, although that's more of a practical than a theoretical limitation, arguably

Not too long ago, it was too difficult to edit a simple 2D graphic in real time. Now I can manipulate 3D virus models on my laptop in real time...

And how are those complete cell simulation models going? I know some folk trying to do it with massive Beowulf clusters, and they are not doing it at the molecular level. Now, scale that up to include intercellular media, signalling molecules, at roughly ten billion instances, plus all the molecular structures of incoming food and air and water.

When you can manipulate that, or even just a single cell, on your laptop in real time, come and talk. A 3d model of a virus (but not of the physical properties of the atoms as such, just general rules) is nothing compared to that. Consider the folding problem - we can't do it properly for single large molecules. Now factor in hundreds of trillions of instances in a single cell.

Moore's Law is way too simple for the combinatorics to ever work out in our lifetimes. But here's a way to simulate all that in real time: build an actual copy. No computers are needed, and it works the right way every time!
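
A rough sketch of that point, under frankly made-up assumptions (a fixed two-year doubling time, and a starting capacity of a billion molecules simulated in detail in real time):

```python
import math

# Rough illustration of the "Moore's Law won't save you" point above.
# Both capacity figures below are arbitrary assumptions for the sake of argument.
current_capacity = 1e9   # molecules simulable in detail, in real time, today
target           = 1e26  # order-of-magnitude molecule/atom count for a brain
doubling_years   = 2.0   # assumed doubling time for compute

doublings = math.log2(target / current_capacity)
print(f"~{doublings:.0f} doublings needed, i.e. ~{doublings * doubling_years:.0f} years")
# And this treats cost as linear in the number of parts; once interactions
# among the parts are counted, the per-step cost grows much faster than
# linearly, pushing the date out further still.
```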

Oh, I meant to respond to Allen: Even if you have a perfect simulation with all the real-world interaction you like, it still isn't the same sort of intentionality as my being me in wetware. In the latter case there's no intentionality required - I just am me, and I don't simulate anything (unless I'm doing a Scrubs-like daydream). In the former case there is still that representation, that sign-signified dichotomy. It's anchored in the world, but it isn't anchored as me. It might fool my mum, but it wouldn't fool Leibniz' god/the real world.

I really don't think argument 1 has any force, and I believe that the crucial difficulty comes when you say:

"What makes it about me? That is it supposed to represent me by those who set it up."

This seems to me to be a quite problematic statement. I can think of all sorts of reasons why I might decide that a program was or wasn't (in some relevant way) "about me", but these would have much more to do with continuity with my memories, personality etc, and I don't think I would be at all persuaded (one way or the other) by the intentions of the programmers.

Then to Reason 2 - here I think you are on unassailable ground, but I would venture one comment to at least ameliorate somewhat the thrust of your conclusion. It would be my contention that the computer simulation (for exactly the reasons you give) would indeed diverge immediately from the path which a biologically embodied brain might take. But it seems to me that while it may then (sometime after time x=0) be impossible to say that the simulation was 'you' in any relevant sense, it would NOT necessarily be false to say that it was a simulation which HAD BEEN 'you' at time x=0.

By Lindsay Cullen (not verified) on 05 Mar 2009 #permalink

Actually, John, I pretty much agree with all that you said .. at least on an initial, quick reading. I'm quite a sceptic about uploading, although I don't totally rule it out in principle. It's one of the many things that I consider red herrings in debates about technology (though at least worth writing science fiction stories about ... and worth some speculation and thought experiments). Others include the whole Singularity schtick and the idea of "uplifting" non-human animals to human-level intelligence. I'm not a fan of any of those, and nor of cryonics or drastic calorie-reduction diets.

But for all that, as the decades have unfolded we've seen constant opposition to real technologies, such as the contraceptive pill (though that battle is pretty much won in Western societies), IVF (ditto), and various methods for carrying out research on embryos, including ideas of therapeutic cloning. I have no doubt that we will live to see some other technologies come along that will look as radical to us as the contraceptive pill looked in 1961 or Dolly looked in 1997.

I don't know what these technologies will be, but I doubt that they'll include uploading. Maybe, for example, we'll see technologies that make it much easier to control the timing of pregnancy and to separate it from sex ... for example, if egg extraction/freezing and IVF become much easier and cheaper. Maybe we'll have some real impact in extending human longevity (though not by the amounts Aubrey de Grey talks about). Maybe we'll overcome some of the problems that have made human reproductive cloning and genetic engineering for "designer babies" out of the question, though again the advances will be modest compared to what we can imagine if we let our minds freewheel.

Whatever technologies come through the pipeline may cause genuine problems, but they will also bring benefits, and they'll probably attract some plainly irrational and illiberal kinds of opposition. I'm in the battle over technology solely to advocate that our ordinary standards of rationality and our ordinary processes of policy making in a liberal society should prevail. I don't want to lose benefits because we jump at shadows or pander to some contentious religious or quasi-religious viewpoint.

That should be considered a very moderate view, but given the direction of public policy in most countries of the world, including most Western ones, it is actually looked on as a very radical pro-technology one.

It seems to me philosophers have somewhat been seduced
By the metaphor of storage, and conclusions it implies.
The self, itself, it promises, is something that's produced
Via information transfer in that blob behind our eyes.
All too often this assumption underlies their exploration;
The conclusions that it leads to seem a normal path to follow
But inherent in the metaphor is one sort of explanation;
By removing those assumptions, it's a tougher bite to swallow.
If the structure of the person helps to form what's introspected
(And the social and environmental atmosphere as well)
Then feelings, thoughts, or memories just cannot be dissected
From the person as a whole, as information one could tell.
"Ah, but that's just further information", I have seen in practice,
When I try this explanation - and I want to pull my hair -
You could stuff it in, of course, but it's like sitting on a cactus:
Just because it can be sat on, doesn't mean the thing's a chair.

(A few of us have been exploring a similar topic--in verse-- here:
http://digitalcuttlefish.blogspot.com/2009/02/daniel-dennetts-darwin-da… )

Your argument 1 seems to be based on the assumption that there is something in "you" which is more than just a whole lot of atoms in a certain configuration. Soul, intentionality, or whatever you want to call it.

If that's not the case then I assume you'd agree that if you could magically create another copy of all the atoms in you in the same configuration as you, you'd have for all practical purposes another copy of you and no one (including you, if you were out of it during the copying process) would be able to differentiate from that point on who the "real" you is?

Assuming that's all fine, then we can go to the next step and claim that we have some super-duper insane computer which, instead of copying, can simulate all the atoms in your body correctly. And then, if that computer can obtain extremely precise information about the real world, it can build a full simulated world for the simulations inside the computer - and voila, you have a full simulation of reality. The only catch is, of course, that unless this computer somehow also has the magical power to change the world based on its simulations, it will never have an actual effect on the real world - so other than being used to make predictions it would be pretty useless. And of course every time you make a prediction you'd have to reset it, since its prediction has changed the real world compared to the simulated one (since the computer obviously cannot simulate itself in the simulated world - that would end up being an infinite loop).

Still, I don't see why in principle that couldn't be done and you'd have a virtual version of you that acted exactly like you, so long as nobody in the real world actually looked at the computer (since the computer itself could never be simulated in the simulated world).

Oh and as a practical matter I happen to do fairly accurate calculations (simulations if you want to call them that) of atoms/molecules/solids in the real world, being a condensed matter physicist. And there's no chance of anything like this being remotely possible based on current technology or anything currently proposed in quantum computing. That's not to say it's completely impossible (although I'd guess that it is), but it's certainly not something that will happen if we just make incremental improvements to current computers.

some of the philosophical cream-puffery in the comments so far is pretty cringe-worthy (but I suppose I'm not going to help with my amateur chaos theory bs..)

Even a classical system, given enough self-interactions, will begin to behave chaotically if you run it long enough. This chaos just means that the result of any interaction is not deterministic. There is some set of possible results, and which one the system ends up at is determined probabilistically.

This fact in itself is not a problem. In the simulation, you don't have to get the exact same results given the exact same initial conditions (or as close to exact as you can possibly get) because not even a real brain could do this.

However, it seems to me that the structure of the results space is important. That is, how many end points does the system produce, and how probable are each of the results?

I'm not sure, but it seems like a real brain, made up of millions of actual self-interacting parts (neurons, neurotransmitters, hormones, etc.) would create a results space characteristically different from a simulation of millions of self-interacting parts. I guess the reason I think this is because the chaotic system depends so finely upon the initial conditions (and the conditions of each piece that contributes to the self-interactions) that the fundamental difference between real pieces interacting and a simulation of pieces interacting would be teased out, resulting in characteristically different-looking results spaces.

And, the real results space (I guess leading to your consciousness) is changing all the time due to these real interactions.

So, in my view, in order to simulate this system, you'd need a computer with precision to EXACTLY match the characteristics of neurons et al. (i.e. infinite precision), or, in order to maintain a characteristic results space using a computer with finite precision (i.e. a real one), you'd need to update the results space continuously by re-downloading it from the source every clock cycle.

Thus, all you'd be doing is live-streaming the output from a brain (not actually running one). And when the real brain dies, all you have is a computer simulation, starting with an exact copy at the moment of death, becoming less and less like a real brain with each clock cycle.

(Hopefully that doesn't sound as much like Timecube as I think it does... it's early still)
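
A tiny runnable illustration of the sensitivity being gestured at here (nothing neuron-specific, just the standard chaotic logistic map): two trajectories that start 10^-12 apart soon bear no resemblance to one another.

```python
# Logistic map x -> r*x*(1-x) with r = 4, a standard chaotic system.
# Two starting points differing by 1e-12 diverge to order-one differences
# within a few dozen steps, which is the "finite precision" worry in miniature.
r = 4.0
a, b = 0.3, 0.3 + 1e-12
for n in range(1, 61):
    a = r * a * (1.0 - a)
    b = r * b * (1.0 - b)
    if n % 10 == 0:
        print(f"step {n:2d}: |a - b| = {abs(a - b):.3e}")
```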

I think you are making three underlying mistakes.

First, "replicating the mind" is different than "replicating the brain". The later is a physical entity, like the solar system you mention. So no, the best a computer can do is contain an abstract representation of the thing. But the former is an abstraction, a process of some sort. And just as an arrangement of membranes and chemicals can execute this process, so can (in principal) an arrangement of wires and electrons.

Now, of course there may indeed be no way to get a completely faithful duplicate of the process, such that they never diverge, even (falsely) assuming that the two copies would get identical inputs forevermore. You argue that this matters, but I think this is your second mistake: it is self-evident that it does not. By your argument, if I get hit by a cosmic ray today, and that alters the state vector in my brain, then in order to count as "me", my computer-embodied duplicate would have to simulate that cosmic ray too. It follows that I should be sure glad I got hit by that cosmic ray, otherwise, had I not been hit, I would have ceased to be me.

Obviously, our minds are noisy things. Yet we seem to stumble along fine, despite the noise. I don't cease to be me just because of some perturbations now and then. Or all the time (van der Waals forces, weak attraction, or whatever). No, what defines "me" is obviously the equivalence class -- what is left over when you ignore all the meaningless little random variations, even when those variations affect the outcome, my thoughts and actions. (And obviously, it may be that van der Waals forces might turn out to be significant, in that they introduce some non-random bias into the functioning of our brains, in which case the computer would have to have some equivalent bias.)

And lastly, as another commenter pointed out: a computer can do the same thing an abacus does without simulating the sliding of beads on wires. The virtual you can take shortcuts. It does not necessarily need to do protein folding and cellular simulation. It just needs to get a close enough result.

And just for good measure, the whole argument reminds me of anthropocentrism. I know you don't want it to be the case. But reality doesn't care if it makes you uncomfortable. To be sure, I think a definitive positive or negative result would be exciting: a demonstration of mind duplication, or some interesting new physics showing why it can't be done.

Despite the claim that "this is not a Chinese Room", once the intuition that "if it is a Lego machine it is not me" was stated, it became exactly Searle's Chinese Room, and is subject to the same a priori faults as Searle's argument from personal incredulity.

Everything that follows is but handwaving in support of a longstanding, thoroughly debunked assertion. I have a truly marvellous proof of this proposition which this text box is too narrow to contain.

The key term here is "significant". What counts as the "natural" equivalence class into which we divide the phenomena to be represented?

That's part of the answer to your "Reason 2". Yes, there are a lot of interdependencies in the brain, and between the brain and the body. But you don't have to simulate "every molecule" in the brain to simulate its functioning in a real sense.

To get an exact simulation, sure - but even then, the difference starts to get abstract. Imagine we had a way to do a quantum clone of a whole human being - you'd have two people who were precise copies, down to quantum states. (According to QM, neither would actually have a better claim than the other to being the 'original'.) Even if you tried to put them in the same environment, microscopic differences and quantum fluctuations would cause them to diverge. But would that mean neither of them were that person?

Your example of the evolved circuit that had complex internal interdependencies is well-taken, but doesn't mean quite what you think, for this reason. The function they were going for was implemented very efficiently, and in a non-orthogonal way... but other, orthogonally-implemented circuits can carry out the same function.

A neuron is practically a small computer in its way, as Jonathan Vos Post points out. However, its overall function is not critically dependent on every single contingent fact of its makeup. It's for that reason that there can be a "Laplace-transform domain quite different from the physical substrate" - and if it's quite different from the physical substrate, then the same domain could be implemented by other substrates.

The neuron is really complex and nonlinear, but not infinitely complex. And if it were as hypersensitive to precise conditions as one might conclude from reading some of these responses, the slightest impact or loud noise would result in constant epileptic fits. Neurons combine both sensitivity and robustness. I don't see where the necessary sensitivities can't be replicated (though not by a trivial circuit or program) and the less essential incidentals be abstracted away.

I agree with the other objections to "Reason 1" above - like kevin and malachip.

A mind - a person - is not an object - a thing. It's a process, a pattern. Replicating an abstract process on another substrate is different from simulating a physical object.

Consider the MS Word example above. It's a process that requires a complex environment to run on - Windows, an x86 processor, etc. One way to run it on Linux is to have it run in an emulator - but there are different kinds of emulators.

If run on an x86 processor, you can use WINE - a program that (sort of) pretends to be Windows. Word 'sees' the same subsystems to call on, and works fine. Or one can take another route, and emulate the x86 processor, and run Windows on that emulated CPU, and Word on top of Windows.

Or... you can take the source code of MS Word, and recompile it to run on another CPU and OS - MS did this with Macintosh computers. The high-level behavior is the same, but the low-level subsystems can be implemented very differently.

Now, human minds and brains are not nearly as orthogonal and separable as even a Microsoft computer program - but I already talked about "Reason 2" above. An actual simulated human on a computer would probably need a mixture of strategies - vaguely analogous to emulating parts of both the CPU and the 'operating system', with some subsystems 'recompiled'. Probably the computer itself would need some special design to make it remotely efficient.

But unless neurology depends on radically different physics than we think it does, 'uploading' is possible in principle. Whether it'll ever be practical is still an unsolved problem in engineering, though, I'll admit.
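
A toy analogue of the "same behaviour, different substrate" point above (nothing here is specific to Word, WINE, or real emulators; it is just an illustration): the same computation run directly, and run on a little stack-machine "substrate", gives identical outputs from very different low-level machinery.

```python
# Toy analogue of "same high-level behaviour, different substrate".
def factorial_native(n):
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

def factorial_on_tiny_vm(n):
    # The same computation, but run on a little stack-machine "substrate":
    # a list of (opcode, argument) pairs interpreted one at a time.
    program = [("push", 1)] + [("mul", i) for i in range(2, n + 1)]
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "mul":
            stack.append(stack.pop() * arg)
    return stack.pop()

assert all(factorial_native(n) == factorial_on_tiny_vm(n) for n in range(1, 10))
```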

I find great solace in the fact that nature is not limited by the human imagination.

John:
The point wasn't that it could be done now, but that the argument "we can't do it now, therefore it can't be done" isn't very good in this case. And I seem to recall a BlueGene/L doing a simulation of a portion of a rat cortex not too long ago..

Sorry, not very convincing.

Reason 1: Basically boils down to metaphysics. You are asserting some property that is 'you' that is not defined by your physical properties. Take your picture example: you allow that this picture has a few physical properties in common with yourself. If ever there were a system that had all physical properties in common with yourself, the copy, in all physical ways, would be you. You have two soft outs here that I see. One, that to share all physical properties with you the copy would probably need to be either robotic, or at least have a simulated body, but that certainly isn't a show-stopper.

Reason 2: you are affirming the consequent. "If you take the structure of a mind as implemented in wetware, and put it on electronic hardware, it will inevitably lack many of the properties, and gain many new ones, of the biological mind. I'm not saying it can't be done: just that if it were, the end result would not be "the same" as the original." That sameness is exactly the argument in question.

Take your example of the circuit board. Those extra circuits may not have a meaning in a standard wiring diagram, but they do have a functional role. They could be removed and replaced with anything which occupies that same functional role (other kinds of inductors, maybe a superconductor). Likewise, as long as we know the critical functional roles of every aspect of the brain, we can theoretically replicate it in many mediums. You assume this can never happen before you make an argument why it can't.

I'd also like to point out that we are much more forgiving with the terms 'you' and 'me' than you seem to be. People go through extreme change all of the time which fundamentally alters how they think and behave, and yet we still call them the same person. Lost limbs, psychoactive drugs, love, and simple aging all change your physical properties just as dramatically as the changes envisioned in 'downloading' yourself to a robot. It seems that the de facto definition of selfhood is an entity that has a contiguous path through space and time (that we don't jump through space or time without filling in the spots in between), but I would argue that this is a pretty boring definition. It doesn't differentiate us from rocks, nor does it bear any relationship to the brain or mind. Under that definition, what makes you 'you' is not your thoughts or feelings, but merely your position. So maybe the most productive way is for people to start nailing down a definition of the terms 'you' and 'me' that allows for drastic physical and mental change while remaining an interesting and meaningful definition.

The map is not the territory.

By Kevin Clarke (not verified) on 06 Mar 2009 #permalink

Science Fiction award-winning ex-software guru with Pharmacology degree asks, after I pointed him and his blog readers to this thread:

http://www.antipope.org/charlie/blog-static/2009/02/the_21st_century_fa…

#199.

... [what] "about tunnelling nanotubes. Speculative, I know, but they've been observed in human tissue -- which suggests there's something we were unaware of until 2004 going on and I suspect if it occurs in kidney cells we'll probably find something similar going on in the brain."

"There's a lot of stuff we just don't understand in cytology: for example, what do vault organelles do?"

I don't know. Can you help, Prof. Wilkins?

J.S.W.:".. the end result would not be "the same" as the original."
Then.. J.S.W:s version yr2009 brain restored to version yr2001..(by biomolecules, or stroke ..)

Friends et al.: "That's not the J.S.W. we knew; he is like some old copy or robot continuously mumbling about WTC..."?

How much time tolerance is allowed for "the same"? Does it depend on friends and relatives?

Malachip: the Chinese Room argument is meant to show that intentionality of semantic language cannot be got from syntactical processing. That is not my argument here. Let's see if I can give you a better account: If I am coterminous with a description of me, then if you were able to list off all my properties, there I am in your description. According to the downloaders, that description is not a description of me, it is me. I am not arguing that intentionality in general cannot be got from some computable process (it can IMO - via teleosemantics). I am arguing that the description is not the thing described. This is a common error that I call the reification fallacy. As Kevin gnomically said, the map is not the territory (unless the territory is used as a map of itself).

Actually, I think you're basically asking what is consciousness? or why am I me?

That's what it all boils down to. In my opinion, "Why am I me" is the most profound question that can possibly be asked, and strikes at the heart and soul of the consciousness problem, if it doesn't actually define it. All knowledge depends on who and what 'you' are. If physics as we know it was all there is, and the universe truly had no preferred locations and times, then that question should not be mysterious at all. If you could make a theoretically perfect copy of yourself, which one of them is 'you', and why? What physically distinguishes the fact that reality is being experienced from the perspective of your body, rather than from the perspective of your perfect clone's body? Of anyone else's body, for that matter?

John, your response to Malachip doesn't seem very convincing to me. As I understand your Argument 1, you see the representation as essentially being without semantics, and thus mere syntactical processing...which degenerates into the Chinese Room (a situation that similarly lacks intentional grounding).

a representation, being a semantic entity, has to have some referential link to the thing it represents, and it is in that relationship that the representation gets its purchase as a representation. And that relationship is the outcome, I think, of our intentionality not of the thing itself. So I would not say that the representation, however dynamic and convincing to observers, is me. At best it represents me in the abstract semantics of the observers.

But I would argue that, at this point, that is all that representations are -- the perspective of some observer, and not a property that is inherent in the system. As you yourself point out, "we anchor these abstractions so they have intentionality" (emphasis added). It is our external assignment of meaning to the abstractions that gives them whatever intentionality they have. This assignment has to be external because, as you've just pointed out, the internal abstractions have no idea what they actually point to in the real world, or whether they point to anything at all. As you note, as long as one had the proper interface, the same abstractions could be referring to multiple physical situations that are nonetheless abstractly isomorphic (one example is that systems of masses, springs, and dampers can be described using exactly the same equations as those for capacitors, inductors and resistors). More to the point, one could actually switch between those different isomorphic states and the system would be none the wiser. You're not going to get intentionality in the system, but simply from the observation of the system ("Hey, that program is hooked up to springs and masses, so it must be representing those internally.") There is no intentionality inherent in the system, even when it has a causal connection to the physical world.

By making the claim you do that the abstraction itself doesn't actually represent anything, I think you've posed a greater challenge to your account of intentionality and mental representation than you want to. I think that challenge is right, but I get the sense you don't.

Since we have no objective method of measuring consciousness, we can dispense with Reason 1. Your unhappiness is not a reason.

As for Reason 2, we simply don't know yet what level of detail we need to properly simulate a neuron. I will be very surprised if it turns out to be the molecular level. I will be very surprised if the function of a neuron, at least for the purpose of building a human-like mind, can't be adequately simulated by something far simpler than the original cell, which is full of evolutionary baggage -- necessary to its function as a living cell, but probably not necessary to thought.

Rally all the arguments you wish, a copy is not the original. All any argument that a copy can be the original is saying is that the person presenting the argument wants the world to work in a manner that lets him have what he wants. No matter how faithful, how complete a copy is, it is still something, or someone, else.

Basic concept time: A single object cannot exist in two separate locations at the same time.

In order to upload ourselves to a computer we cannot copy ourselves, but must instead find a way to remove us from our bodies and then upload that. If we do that, we prove we can survive the death of our bodies, and so prove that life after death is possible. Now consider the implications and consequences of that.

Basic Concept Time: No matter how you rationalize your desires, reality wins.

Hmm, don't really agree with either of these arguments but not got time for lengthy discussion as to why.

They both seem to boil down to saying "it's not me because it's not made out of the same stuff". I think this misses the point about what we're talking about when discussing representations of ourselves. We're not the stuff the system is made of, we're the interactions and processes arising from the system. We can quite conceivably have the same interactions and processes arising from two completely physically different systems. In light of this your two objections don't make much sense.

But I'm busy, tired and too frazzled to argue the point even vaguely coherently :p

Mind is a rather odd thing actually. It would seem that it cannot be separated from the physical substrate that it is caused by or correlated with (take your pick), and copying may be practically impossible, even if you assume a complete materialist substrate.

It seems to be an identity problem. But physical identity and conscious identity are two very different things. Mind is only associated with a particular physical substrate in passing. All the atoms in your body are replaced something like every ten years, your neurochemistry changes, and neurons themselves are dying and changing. Some even grow new connections. An analogy might be an ocean wave, where the essential pattern does not change too much over time, but individual atoms do. Nevertheless, at any given time, the wave is always associated with specific atoms. But ocean waves are not conscious (as far as I can tell), and for mind, the why-am-I-me problem still remains: why this physical pattern and not another? A unique type of identity problem.

the answer is that the mind is a verb, not a noun.

By Sammy Brown (not verified) on 06 Mar 2009 #permalink

Suppose that it is possible to upload a "me" while "I" still exist ("me" being the uploaded copy and "I" being the original.) and that we do so. At the instant after uploading two possibilities present themselves:

1. Either "me" and "I" are the same "individual"; which implies that every thought, act, process, etc that occurs in "me" occurs simultaneously in "I". However, if "me" and "I" occupy different spacetime locations then for this condition to obtain would require that "me" and "I" can communicate instantaneously i.e faster than the speed of light; extremely unlikely to be possible within all known laws of physics. Alternatively "me" and "I" must be entangled - a state identical to quantum entanglement but on a macroscopic scale, i.e. each and every 'element' (quanta, particle, whatever) of "me" must be entangled with each and every corresponding 'element' of "I": also extremely unlikely to be possible (particularly if the 'substrata are different.).

or

2. Known physics is not violated and "me" and "I" occupy different spacetime locations. Therefore, at each and every instant, no matter how small a time interval involved, after the uploading then "me" and "I" experience 'external reality' differently - at least in time. If "me" and "I" experience 'external reality' differently then "me" and "I" are affected differently by it at all instants in time so "I" cannot be "me".

I conclude that uploading is not possible without violating known physics.

There are some core assumptions here that the debate rests upon. First, mind is held to be physical. How that is taken depends on the individual thinker, but I take it to be the case that the mind is solely the physical processes going on in human bodies, specifically the brain and central nervous system. If there is a soul or separable mind, then the game is over. So the issue is what physical attributes something has to have to be a mind, and in particular to be my mind.

Second, this is not really about personal identity. I do not care for the purposes of this argument whether or not the copy would be the "true" me or not. Assume that two identical copies in every respect save location are both me. The question is what one must do to copy me. In other words, how much difference between them doesn't make a difference?

Third, the issue arises of when a representation or simulation is the same thing as the thing it represents or simulates. We mention simulations of living cells above - this is no philosopher's conceit. It takes real computational power to simulate even molecular behaviours - if we need to do that for individual molecules rather than classes of them to make a decent simulation of a "real" cell, we are behind the eight ball, although it may not be a problem for a philosopher where it is a real problem for the computational biologist.

This last point is a wide one. I am interested in knowing, for example, when we can rely upon simulations for real-world answers to, say, conservation management. If we do an "individual-based model" for a given population or area, are our answers going to be accurate or just "in the ballpark"? This is not something we can answer a priori; and if it turns out the solution and the real-world outcome are different, then, and only then, do we know we failed to get the "right" equivalence classes.
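
For what it's worth, here is a deliberately crude sketch of the kind of individual-based model I have in mind (invented parameters, Bernoulli survival and birth draws per individual); the open question is precisely whether any such abstraction carves the population at the joints that matter, and that can only be settled against the real-world outcome.

```python
import random

# A minimal individual-based population model: each individual survives and
# reproduces with fixed probabilities each year. Parameters are invented and
# purely illustrative, not a real conservation model.
SURVIVAL, BIRTH, YEARS, START = 0.8, 0.3, 20, 100

def run_once():
    population = START
    for _ in range(YEARS):
        survivors = sum(1 for _ in range(population) if random.random() < SURVIVAL)
        births = sum(1 for _ in range(survivors) if random.random() < BIRTH)
        population = survivors + births
    return population

runs = [run_once() for _ in range(200)]
print("final population, mean over 200 runs:", sum(runs) / len(runs))
```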

But the transhumanist downloading issue presumes we would know the "right" equivalence classes, and that is just to get a simulation that behaves like the real-world me. What differences would make a difference in a downloaded me?

Cannonball Jones: Yes, systems can be superveniently realised. But if the Inverse Eleatic Principle is right, then all physical differences will in the end make a difference. There is a phrase used in the supervenience literature: "in all relevant respects the same". What counts as "relevant" depends on how you set up the descriptive categories in the first place, which gets back to Reason 1.

It seems to me we are still somewhat object-orientated in our perceptions of ourselves. We are also Heinlein's "pink worms", events that follow a unique trajectory through space and time. I claim, for no particularly good reason, that no one but me can occupy my space or follow my course exactly through spacetime. A copy of my consciousness 'downloaded' into a supercomputer or something like Mr Data's "positronic brain" would still be a copy of me not me. When I die, if the copy of me had been downloaded into the android body, I do not believe I would suddenly find myself transposed into an android. I would simply stop. What continued would be very similar but it would not be me.

By Ian H Spedding FCD (not verified) on 07 Mar 2009 #permalink

But the question isn't whether it will be the "real" me on that computer, but whether it would be a good replicate of me. I assume it won't be the real me, whatever that is, since either the me that now exists will continue to do so or would cease upon being sufficiently well scanned to replicate. Either way it would be a copy. My concern is whether it would be a good copy.

My concern is whether it would be a good copy.

I suppose the best place to look for an answer would be identical twins, who are about as close as you can find to being copies of one another. I'm only one observer, but my experience has been that even though they share many traits, they can be very different people. If a copy of "you" exists at another location, and it is identical to you in every respect except location, then its location is still a property that distinguishes it from you. At the moment of replication, it will instantaneously become "not like you" due to different interactions with its altered environment, and it will diverge more and more from you over time.

Gosh, John, you've stirred up the transhumanists with this one.

I'm always confused by people who talk about uploading as a cure for disease. If we had such a complete knowledge of neurobiology as to be able to create a complete, "good" copy of our brains... how many diseases would still be incurable?