Extropians, Kurzweil, Libertarians, and the deluded immortality scam

The story should begin with the victim. This is Kim Suozzi, 23 years old, and diagnosed with a terminal brain cancer that was going to kill her within a few months. She's doomed and she knows it, so she has gone to Alcor, signed over her life insurance money, and asked to have her head frozen after death in the unlikely hope that someday, someone will be able to revive her. I feel a deep sadness for her; for someone so young, for anyone, to be confronted with an awful mortality is tragic.

She died all too soon after this video was made. And now we learn about the bumbling corpse mutilation that occurred afterwards.

You might want to stop reading right here. It's a hard story, especially after seeing the young woman alive.

Within minutes of taking custody of the body, the bumbling Alcor team began experiencing a series of equipment failures. A temperature monitor didn’t work because, as it turned out, the batteries were dead. Shortly thereafter, their expensive mechanical chest-compression device stopped functioning. Then, having moved Suozzi’s body into a tub of ice, the Alcor team realized they’d forgotten to bring along a key piece of cooling equipment. Alcor’s after-action report, compiled from the haphazard “free-form” observations of an unnamed but “experienced” observer, determined that such mistakes could in the future be remedied by “the use of a checklist.” Now there’s a thought.

Forty-five minutes after Suozzi was declared dead on the morning of January 17, 2013, her corpse arrived at Alcor headquarters, where a crack team of quacks shaved her head and drilled a number of sizable holes into her skull. Microphones were then inserted in order to detect the cracking sound of tissue-destroying ice crystals—a freezer-burned brain being even less useful to the imaginary reincarnators of the future than an otherwise undamaged one.

At 9:33 a.m., Suozzi’s body was moved to an operating table. Ten minutes later, Alcor’s technophilic necromancers completed “cephalic isolation”—a euphemistic neologism that means they cut off her head. Such bloodless jargon obscures the macabre slapstick of the antics in the morgue—er, “operating room.” As the magazine account went on to relate:

9:45 a.m.: Cephalon placed in holding ring of cephalic enclosure.

[Translation: They put Suozzi’s head in a box.]

9:51 a.m.: Cephalon fell out of holding ring.

[Translation: Her head fell out.]

9:52 a.m.: Cephalon repositioned.
[Translation: It’s a good thing that, as far as anyone knows, none of these people have been operating on live human bodies.]

Suozzi’s bodily fluids were flushed and replaced with a specially formulated and questionably effective “cryoprotectant”—antifreeze. The official recap alludes to a certain amount of rubbernecking and bickering consistent with past insider accounts of Alcor operations. That wasn’t all. “Unfortunately,” the Cryonics report notes, “there was some confusion and disagreement regarding the ideal temperature at which to perform surgery.” One might assume a forty-four-year-old organization devoted to storing body parts on ice would have reached some working consensus on this question by now.

In the months ahead of the procedure, Alcor boasted of the important research data it would glean thanks to Suozzi’s corporeal donation. But afterward, the official notetaker lamented that the only information collected during the procedure came from the thermometer crammed into her nose.

In Alcor’s account, “the actual success of perfusion in this case appears negligible.” (Perfusion is the term for pumping fluids through blood vessels.) A CT scan later confirmed that “cryoprotective perfusion was not generally successful”—meaning that Suozzi’s brain would not be well preserved. (Or, in Alcor jargon, “cortical cryoprotection” was “minimal.”) In other words, the procedure was a failure. The Times glossed over this and other facts that undermined its bizarrely credulous narrative, which tacitly endorsed Alcor’s ongoing con job—and, by extension, the agenda of its Ayn Rand–worshiping techno-fetishist leadership.

Read the whole thing. These guys are running a scam, motivated by extreme libertarianism, a misbegotten transhumanism, and an ugly combination of ignorance and wishful thinking. They're also bankrolled by a lot of young, stupid, filthy rich Silicon Valley entrepreneurs.

I've complained before about the bogus 'science' behind these vultures, but this story exposes the outright venal stupidity driving it all. It isn't mere incompetence, it's malicious ineptitude.


And, of course, the Ph.D. neurobiologist can't come up with any constructive criticism to try to turn cryonics into a feasible technology.

I suspect that reflects his awareness that he has already exhausted whatever contributions he can make to his field.

By Mark Plus (not verified) on 08 Mar 2016 #permalink

I have to take personal exception to Dr. Myers’s characterization of cryonics as a “scam.”

One, I helped to raise the bulk of the money for Miss Suozzi’s cryopreservation, on the order of $48,000. The New York Times article overlooked my personal role in this, but I don’t care because at least I pulled it off. And not a single dollar of that money wound up in my pocket. I worked on her fundraising in addition to my full-time job, and without compensation.

And two, for the past 25 years I have worked for the Arizona businessman and cryonicist David Pizer. Mr. Pizer served on Alcor’s Board of Directors for several years in the 1990s, also without compensation. This time came at the expense of working at his other business ventures. Fortunately he could depend on me to run a motel he owned at the time near a ski run in Southern California. I know from long personal acquaintance with many people in the cryonics movement that no one has gotten rich off of cryonics.

Dr. Myers would do better to become personally acquainted with cryonicists himself instead of ignorantly accusing us of taking financial advantage of people.

By Mark Plus (not verified) on 08 Mar 2016 #permalink

@Mark Plus
Was or was not a life insurance policy and/or its proceeds signed over to these people, and if so what was its value?

By Warren McIntosh (not verified) on 09 Mar 2016 #permalink

About "They’re also bankrolled by a lot of young, stupid, filthy rich Silicon Valley entrepreneurs." How did they get to be filthy rich so young, if they're stupid? Is it possible they are seeing something that you're not, because they're not stupid, but actually more intelligent than you?

By robin helweg-larsen (not verified) on 09 Mar 2016 #permalink

That's not a description of a technology, more of a vague hope. Given the omissions and failures it's difficult to think they even believe it.

PZ, thanks for publishing this, I'm with you, and sorry to see a bunch of defenders of it come rushing in with comments.

I'd suggest that you get in touch with Orac, whose ScienceBlog "Respectful Insolence" specializes in taking down quacks of all kinds. He's a successful cancer surgeon and no doubt he'll have some things to say about this from a position of real expertise in surgery.

The Temple of Alcor is only one "denomination" of Transhumanism. Another is The Singularity, which is, if anything, even more quacky.

The Singularity story goes: At some point (within the lifetime of Ray Kurzweil), artificial intelligence will achieve parity with humans, achieve consciousness, and rapidly bootstrap itself to godlike omniscience. This will Change Everything and usher in a new age. It will also make it possible for humans to "upload" their souls, er uh, minds, to the God-box, to achieve immortality.

At root it's a syncretism between Christian eschatology and Hindu reincarnation beliefs, wrapped up in high tech to make it seem "sciencey."

There's another version I refer to as "gradualist upload", whereby Singularitarians hope to replace biological neurons with silicon neurons, one at a time, thereby allowing a more "organic" (that word again) "transfer" (that word again) of mind to immortal machine.

About which I put together a spreadsheet to do the really simple arithmetic:

An average brain has about 86 billion neurons. If you can replace one biological neuron (and its average of 7,000 connections) with a silicon neuron every second, replacing all the neurons will take about 2,727 years. Plug in any replacement rate you like, to see how many years it will take. Or choose how many days you want the procedure to take, to find the replacement rate needed.

If you can replace neurons at the rate of 2,727 per second (163,623 per minute, along with more than 1.1 billion connections per minute), you can do it in one year. During which time there's also a hole in your skull that has to be kept surgically sterile, and by the way, you can't get out of the hospital bed during that year. To quote someone famous, "it ain't gonna happen."
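The spreadsheet arithmetic above is easy to reproduce; here is a minimal sketch in Python (the 86-billion-neuron count and 7,000-connections-per-neuron figure are the comment's own assumptions, not measured values):

```python
# Back-of-envelope check of the neuron-replacement arithmetic.
# Assumed figures from the comment: 86 billion neurons, ~7,000 connections each.
NEURONS = 86_000_000_000
CONNECTIONS_PER_NEURON = 7_000
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def years_at_rate(neurons_per_second):
    """Years needed to replace every neuron at a given replacement rate."""
    return NEURONS / neurons_per_second / SECONDS_PER_YEAR

def rate_for_days(days):
    """Neurons per second needed to finish the job in a given number of days."""
    return NEURONS / (days * 24 * 3600)

print(years_at_rate(1))    # one neuron per second: about 2,727 years
print(rate_for_days(365))  # finish in one year: about 2,727 neurons per second
```

At one year, that per-second rate also implies rewiring roughly 19 million connections every second, around the clock.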

This stuff is the latest celebrity religious cult making the rounds, as with Scientology in Hollywood. But it's more pernicious than Scientology because at least for now, the media and others take it seriously, and oooh & aaah over the high tech, including new consumer baubles that come from the true believers.

Rationalists need to expose this crap for what it is.

Replying to other comments:

1) Nor does PZ or anyone else around here feel like trying to "come up with any constructive criticism to try to turn" homeopathy "into a feasible technology."

2) Not getting rich on this does not make it any more true. It means you're not acting with a criminal intent to defraud (which is good). It also supports my hypothesis that the underlying motive is essentially religious despite looking sciencey. That's OK, you have lots of company, for example Deepak Chopra.

3) They got filthy rich at a young age because they were brilliant in a narrow area that just happened to hit an economic boom. That happened in earlier eras with the steam engine, the automobile, and so on. But that does not make them comparably intelligent in a broader sense, much less wise. Henry Ford was a smart guy too, but he believed in Nazism, and it took the Ford family decades of philanthropy to atone for that.

Please explain to us the neurobiology and physics involved in transplanting a mind from a brain into a computer, and tell us how that's any different to traditional reincarnation beliefs.

And, please explain to us how creating conscious AIs to do all of our Earthly chores for us isn't a new form of slavery. (Hint: "they'll be programmed to enjoy it" is no different to dosing up human slaves with happy-drugs to make them "enjoy it" too.)

I still don't understand how this is supposed to work. By the time the head is severed and frozen, the mental state we call "mind" has long since broken down.

Does anyone seriously expect that, once the brain is thawed and infused with new "blood" (whether natural or artificial), that the *mind* would spontaneously pick up where it left off just before death?

If not, then what's the point of all this?

Basically they _believe_ that in the far future humanity will be able to thaw brains/cure cancer/reconstruct mind state. It doesn't matter how it might be possible, because Future Science.

How is this woman a victim? From the NYTimes article, it was clear that she knew about the inadequacies of Alcor's procedures and practices.

The Church of Perpetual Life* people who made the youtube video maybe need some help with spelling. They meant "Cryonaut", not "Cryonaught". "Naught" means "not", or "nothing". Well, in this case it means "big surprise".

In any reasonable future, these people will be awarded the Philip J. Fry "I never thought I'd die like this" Award

*Isn't the name kind of a dead (!) giveaway for the problems with this whole operation? Yes, Mark Plus?

Greg @ 11: Thanks for the heads-up about the Church of Perpetual Life, about which I've not heard before. They are now officially on my list of Transhumanist denominations. At least they have the honesty to describe themselves as a religion, something that Alcor and The Singularity should also do.

Simon @ 10: On one hand, if she was fully informed, she was a consenting adult. But on the other hand, if she was given unrealistic expectations and forked over $48K for this, then it's arguably fraud in a manner analogous to someone telling her they'd cure her cancer using homeopathy. And on the third hand, they apparently botched their own procedure so badly that if any of them are doctors, their screwups add up to malpractice or arguably so.

Fullmoon @ 9: Yes, exactly. They believe, but there is no empirical evidence to demonstrate that their beliefs are correct. Therefore what they have is faith (affirmative belief in a proposition, in the absence of empirical evidence), which supports the case that this is religion rather than science.

JFB @ 8: According to their beliefs, "at some point in the future" it will become possible to reconstruct the brain, cure the cancer, and then revive the person (presumably with a replacement for the rest of the body). One of the means that the cryo-faithful believe will accomplish this is nanotechnology. All of this is completely without any empirical backing, and based entirely on faith that certain things will become possible in the future. By analogy, I'm building an FTL (faster-than-light) spacecraft, but I'm leaving a big open space in back where the FTL Drive will go when it's invented in a century or two: would you like to invest? ;-)

In theory, if their beliefs were correct, the restart of the brain would bring about restart of the mind, and the subjective sensation would be similar to what occurs when someone is brought back from having been under general anaesthesia. But once again, we may as well be talking about investing money today in an FTL Drive for a spacecraft that "might" be possible to complete "some day."

If people want to believe that stuff as a matter of faith, they're welcome to do so as long as it's a free country. And those of us who are complete skeptics about all of it will point out that their beliefs are silly and pernicious. Same as with conventional religion.

"Dr. Myers would do better to become personally acquainted with cryonicists himself instead of ignorantly accusing us of taking financial advantage of people."

He won't do that, he's a dick. A smart dick, but still a dick. I don't think cryonics is likely to work, but Pzed wouldn't miss an opportunity to make an ass out of himself.

By Christopher M (not verified) on 09 Mar 2016 #permalink

I don't understand the "no evidence" and "faith" argument against cryonics, botched procedures aside. It's not faith, it's simply extrapolation. Your personality is almost entirely encoded by your brain. Your brain has limited volume and thus can be described via a limited amount of information. Preservation technology preserves some of that information and keeps improving, presumably preserving more. Does the current tech preserve enough information to preserve "you" to a degree that would be sufficient? We don't know; I would say more likely not, but it is possible that it does. Will it get there eventually? There's no fundamental reason why it can't.

Then there's the question of restoration. I belong to the, um, upload school of thought: to restore the brain in any form from the primitive preservation procedure, we'd need to understand the mechanics of how minds work, at which point it seems pointless to keep the old brain and restart it, instead of constructing (or uploading) a new one. Will we ever have advanced enough tech? I don't know, but there's no fundamental reason why we won't. Will anyone bother with those old brains? As far as I am concerned, that's actually the biggest question here; the rest is just a matter of continuing technological progress (and that it actually continues, i.e., a stable human civilization).

None of this is "faith" or "wishful thinking" like an FTL drive, it's simple extrapolation. Sure, it's a very long shot. I am not sure yet if it's worth it for me, for instance, given my current income/savings and my estimate of how much of a long shot it is. Still, if I happen to live to my expected age with my expected income and progress continuing apace, and/or I otherwise get more money than I need and/or there's faster-than-expected progress in the field, why not? The bottom line is, it's one hell of a better shot compared to the alternatives.

Re. SES @ 14:

One of two things is true:

1) If, per current neuroscience, the mind is produced by the brain, then the two are identical: the mind is not separable from the brain. A mind is not software, it is not merely information; it is the product of the interaction between the information and the processing on carbon-based wetware. Further, neurons are not merely binary switches, they are also nodes in the overall network, and most of our conscious experience is the product of neurochemistry (emotions) rather than electrical events.

The problem with upload is the same as the problem with cloning. You clone yourself, your clone lives, you die, and you're still dead. You do not continue to live on in your clone any more than in your hypothetical identical twin.

That's the fundamental reason why it isn't possible. The practical reasons include:

a) The issue of getting fine enough conductive pathways into the neurons to actually record their individual electrical states (which may include quantum state information that is altered by the process of measurement; this is standard QM not Deepak Chopra nonsense).

b) The scaling problem whereby fine-enough conductors may themselves be subject to QM interference (as the computer chip manufacturers discovered re. ultra-miniaturization).

c) The size of the conductor bundle relative to brain structures that have not yet been measured as the process proceeds.

d) The problem of measuring the chemical states that are present in neurons, that will require sensors at the end of those conductors, that in turn will make the tips of the conductors larger, compounding problems (a) through (c).

e) The fact that each neuron isn't merely a relay, but a network node in and of itself, with internal processing occurring that affects other neurons downstream (typically 7,000 such connections per neuron).

f) You forgot about the glial cells, which are not merely structural tissue but also have a role in information-processing. Now instead of 86 billion neurons only, you have another 84 billion cells to worry about.

g) Something else I can't remember at the moment, dammit, I need to recharge the battery up there;-)

2) If, per traditional religion, mind is the product of the interaction between the brain and some other fundamental state of existence conventionally referred to as the soul, then yes you can upload that to any platform you choose. It's called reincarnation. If you can reincarnate into a computer, you can also reincarnate into a cat. Or a cow, sacred or otherwise.

And if, at death, your hypothetical soul is liberated into a hypothetical hereafter (in other words "uploaded" to a new body, or to the mind of God, or whatever), then why on Earth would you want to grab it and shackle it to a silicon prosthesis?

3) As the old song goes, "don't fear the reaper."

If (1) is true, then there's nothingness, which means there's nothing to fear. I would seriously and earnestly suggest you look up the Buddhist meditation practices that are geared toward reaching a state of nothingness, and practice them until you become sufficiently familiar with nothingness that it loses its capacity to inspire fear. This is not easy but it is well-known, understood, highly practical, and costs nothing. (There's a "nothing" pun in there somewhere;-)

If (2) is true, be a good scientist and prepare to explore a whole new world.

But interestingly, if (1) is true, you won't even know it! You won't exist to say "Aw darn, no heaven!" The only answer to the question of death that you will ever be able to experience after you're dead, is "Yes" (2). Look on the bright side and cheer up!

4) Re. the "gradualist" approach, you don't have an answer to my elementary arithmetic (2,727 years at 1 neuron per second, or one year at 2,727 neurons per second). You can't afford a year-long stay in the hospital.

5) It is not simple extrapolation.

Consider relativistic space travel (there's a reason I used this as an example;-) Simple extrapolation gets you "even though we can't achieve 1.0c (100% of light speed), some day we'll achieve 0.5c, and there's no reason we can't achieve 0.99c."

But the problem is that the energy required for acceleration increases without bound as velocity approaches the speed of light, per Einstein's equations, so the amount of energy required to accelerate to 0.99c is completely unachievable in any realistic sense, now or later. (For this reason, my interstellar plans are based on a mean-average speed of 0.01c (1% of light speed), resulting in the need for many-generational ships, which in turn are entirely practical.)
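The energy claim can be made concrete with the relativistic kinetic-energy formula, KE = (gamma - 1) * m * c^2; a minimal sketch (the three velocities are just illustrative choices):

```python
# Relativistic kinetic energy per kilogram of ship: KE = (gamma - 1) * c^2.
# Shows why extrapolating "0.5c today, so 0.99c eventually" fails:
# the energy cost diverges as v approaches c.
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy in joules for 1 kg moving at beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * C ** 2

for beta in (0.01, 0.5, 0.99):
    print(f"{beta:.2f}c: {kinetic_energy_per_kg(beta):.3e} J/kg")
```

Going from 0.5c to 0.99c, roughly a doubling of speed, costs nearly forty times the energy per kilogram, and the ratio only gets worse closer to c.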

Simple extrapolation does not work outside the everyday classical realm of macroscopic objects and sub-relativistic speeds, or when dealing with high levels of complexity. Simple extrapolation does not even predict the next genetic mutation of the yearly flu virus, hence each year's vaccine is a "best guess" and occasionally (such as last year) they don't guess correctly. Getting the flu shot is still a good idea, since partial immunity is better than none, but "partial" is not a word that applies to death, any more than it applies to pregnancy. There's no such thing as "a little bit dead" any more than "a little bit pregnant."

6) It's a long shot alright.

A long shot like curing cancer with homeopathy. Or flapping your arms to fly after jumping off a bridge.

7) But what if I'm wrong?

What if the Silly Con Valley elite find the magical magic that lets them foist their abundant egos upon the world until the day the Sun explodes and takes out the inner planets?

How would you feel knowing that you're still headed for a one way ticket to Oblivion, while those preening overweening narcissists with enormous bank accounts are heading for a ride on the Eternity Express? How would you feel if you were presently starving in (name of country here) whilst the men in the high castle are enjoying feasts fit for kings?

Would you just say, "oh, I'm just not good enough to deserve what my betters have"...? Would you accept that with equanimity and peace and good will?

Because if you would accept it, then the result as far as your trajectory is concerned, is no different to what it is now.

And if you wouldn't, then you should seriously consider voting for Bernie, because it's going to take democratic socialism to ensure that everyone gets an equal shot at Forever. (Don't count on winning the lottery; playing the lottery is also called "paying the Math Ignorance Tax.")

So, one of two more things is true:

Either a) under laissez-faire capitalism, you get 70-some-odd years (maybe) while Billy Billionaire gets Forever, or b) under democratic socialism, you get your shot at Forever, no matter how silly I (and the consensus of neuroscientists) happen to think it might be.

Sorry if it sounds like checkmate, but there are two ways out, as I've described: get familiar enough with nothingness that it ceases to scare you, or work your derriere off to get the democratic socialist elected.

I'm trying to figure out if people are objecting to the horrific ineptitude/unpreparedness described in the article, or if they are just dismissing out of hand the idea that we can and should cure the mechanical failings that lead to death.

By ZiggyBoomBox (not verified) on 10 Mar 2016 #permalink

@Christopher M (comment 13): You're saying "dick" like it's a bad thing. Sometimes it's good to be a dick.

@G (15)

There's another aspect of this to consider.

Suppose these guys are right, and in, say, 250 years or so, they can bring you back to life and "upload" you (the original "you", not a copy) into a new body (organic or otherwise).

Your family and friends are dead (unless you got a group rate). Whatever skills you had will likely no longer be relevant in this time period. You may not even understand the language. Social mores will likely have changed. Popular culture will likely bear no resemblance to what you remember.

Think about how well an 18th century aristocrat would fare in modern society, and how disorienting it would be. Not just in terms of technology, but also in the social changes that have occurred. I'm sure a few would do just fine and rise to the challenge, but on balance I'm willing to bet most would be miserable.

Which brings up yet another question - why should any future society bother bringing you back at all? What's in it for them?

Any immortality will come from the manipulation of DNA, or perhaps we can someday download our brains into a computer program and live forever in a perfect matrix.

Ziggy @ 16: That phrase "the mechanical failings that lead to death" is a classic line of Extropian catechism. It demarcates the difference between the progress of medical science and faith in miracles. Rational people seek cures for diseases, not miracles; and faith is not science regardless of how it's dressed up.

But if we're going to argue philosophy, I would suggest concentrating on the quality of life rather than the mere quantity of it.

Chris @ 17: What I noticed about Christopher M's comment was the curious proximity of "dick" and "ass." In terms of practical applications, one needs to choose whose which is doing what;-)

JFB @ 18: Good points. Though, the True Believers will say that any existence is better than nonexistence, and up to a point that's not entirely as crazy as most of what they believe. It becomes crazy when "up to a point" is equated to an immeasurable quantity such as "eternity."

They will also say that joining the Freezers for Geezers club is a matter of personal choice, like smoking marijuana. Framed that way, who are we to spoil their fun with a sobering dose of reality? Isn't it merely a matter of personal taste, whether someone does or doesn't want to emerge into 24th Century civilization as a brain-damaged relic from the barbarian past?

Nonetheless, I do agree with you that this angle of attack might discourage some fence-sitters from joining the cult, so I'll happily adopt your memes into the tool kit and use them early & often.

Then there's this:

The Temple of Alcor charges what?, $120K for a whole corpsicle and $80K for a head only? (Which proves that if you can't get a life, at least you can get a head;-)

OK, so now assume that Extropianism or Singularitology become as popular as, say, Catholicism or Methodism. There will be waiting lines, but more to the point, there will be numerous families clinging tenuously to a middle class existence, when the parents decide to join the cult. In doing so, they will have to spend what meager savings they might otherwise have passed along to their children as inheritance.

Aside from the obvious narcissism of putting Me before Thee, this will have the further effect of destroying what's left of the middle class. Or, not to put too fine a point on it, selfish parents will screw their own children out of a sizable measure of their own economic security. That's morally comparable to spending food money on crack.

In the 1970s, parents worried about their children ditching out of college to join the Moonies or Scientology. Hardly half a century later, those children grow up and have to worry about their aging parents joining the Freezies or Singularitology. Perverse symmetry, that.

Re. Clay @ 19: The manipulation of DNA might get us another 20 years, which is not to be sneezed at, but is a poor excuse for the cultish attitude of Transhumanists.

They call it "upload," not "download," but in any case it's not possible. Per current science, mind is the product of brain and can't be transferred at all, full stop. Per traditional religion, mind is soul, and can be transferred to a hereafter.

"Uploading" to the Matrix is nothing more or less than a reincarnation belief. Once you accept reincarnation, why stop at silicon? I'd rather be a kangaroo in my next life, wouldn't you? If nothing else, somebody's failure to pay an electric bill wouldn't spell the end of me.

Don't these people, or some of their donors/associates, believe in some weird "Quantum Theory" of the mind? That if you create an exact duplicate in silicon, it is still, essentially, "you," and so the death of your body is meaningless and you continue to exist.

This makes it more understandable for people who fear death to want to do everything they can to continue their existence, but, really, it does also show an impressive degree of narcissism that you believe yourself to be so important and valuable that people hundreds or thousands of years from now will have need of your mind.

Come on guys, these people are job creators. Let's extropiate!

Chuck @ 22:

Extropians etc. not only don't believe in "quantum theories of mind," they explicitly oppose them for reasons I'll try to make clear.

Quantum theories of mind come in two flavors: (1) real science, and (2) unscientific twaddle.

1) The real science is primarily the work of Roger Penrose (yes, that Penrose, who collaborated with Stephen Hawking on the theory of black holes), Stuart Hameroff, and their associates.

Penrose reasoned that it is logically necessary that consciousness is not purely algorithmic and cannot be replicated in a Turing machine. He further reasoned that quantum wavefunction collapse likely plays a role in the activity of neurons.

Hameroff, an anaesthesiologist, was interested in the question of how surgical anaesthetics act to temporarily shut down consciousness. He reasoned that consciousness must originate with processes that occur within neurons, as well as between neurons. Those processes would occur on a scale that straddled the border between classical and quantum phenomena. To put it differently, Hameroff was interested in the mechanisms by which neurons could function as nodes in neural networks.

Hameroff already had a track record with another of his hypotheses: that the glial cells ("white matter") in the brain played a role in information processing. Up to that point, conventional wisdom was that the glial cells were only structural tissue, whose function was to hold together or glue together the "gray matter" of neurons: glial cells were thought to have no further activity related to mental functioning.

Hameroff's hypothesis about glial cells was strongly supported by empirical findings, and became part of the canon of current neuroscience. Probably as a result of this, he looked at other "purely structural" matter, inside the neurons, that might be a candidate for information processing, and decided that the "microtubules" that form the cytoskeletons of neurons, were a likely candidate, due to characteristics of the proteins from which they are built up.

Penrose & Hameroff met at a conference and each found the other's ideas to be the missing pieces they needed for their own. A series of empirical tests were undertaken, some of which have been supportive, others not, so the evidence thus far is equivocal. This work is ongoing.

Their theory has attracted quite a bit of controversy. One of the first objections was that QM phenomena can't occur at biological temperatures and scales. This was falsified by unrelated findings of QM activity in plant photosynthesis and in avian optics. The argument continued in other forms, and will hopefully be resolved one way or another by unequivocal empirical findings.

2) The twaddle comes primarily from proponents of New Age religious beliefs, who latched onto QM and then subsequently latched onto Penrose & Hameroff, as somehow providing a basis for the existence of the soul. Logically that's a complete non-sequitur, but logic never stopped that sort of thing.

The chief culprit here is none other than Deepak Chopra, who probably needs no further introduction in these pages. Chopra has a track record of befriending various scientists, medical experts, and so on, and recruiting their ideas into support for his own. Lo and behold, he made friends with Hameroff and started babbling about "quantum mind," much to the dismay of those of us who would prefer that science & religion remain as separate as church & state.

I once heard Chopra on public radio talking about Christianity and Islam. I didn't know who was speaking until a station break announced his name, and, knowing his reputation for intellectual sloppiness, I was surprised at how well he knew his stuff. On the other hand, I also followed a link from one of the skeptic blogs I read to something on Chopra's website, and found it so incoherent that I couldn't be bothered to spend more than five minutes on that site.

3) Now we get back to the Extropians: if Penrose & Hameroff are right, the actual complexity of the brain is about six decimal places (orders of magnitude) greater than conventional estimates. Also, if they're right, duplicating only the classical functions of the brain will not suffice to produce consciousness. Lastly, if they're right, the quantum state of the proteins in the microtubules in the neurons will also have to be read in order to obtain an effective picture of the functioning of the mind.
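To make the "decimal places" language concrete, here is a toy back-of-envelope sketch comparing the conventional neuron-as-node count of processing elements against a microtubule-level count. The round figures used (~10^11 neurons, ~10^7 tubulin dimers per neuron) are commonly quoted rough estimates, included here purely as illustrative assumptions; they do not come from this post.

```python
import math

# Commonly quoted rough figures -- illustrative assumptions, not measurements.
NEURONS = 1e11              # neurons in a human brain
TUBULINS_PER_NEURON = 1e7   # tubulin dimers per neuron (a rough Hameroff-style figure)

# Conventional picture: each neuron is one processing element (a network node).
conventional_elements = NEURONS

# Penrose-Hameroff picture: every tubulin dimer is a potential processing element.
microtubule_elements = NEURONS * TUBULINS_PER_NEURON

# Extra "decimal places" (orders of magnitude) in the complexity estimate.
extra_decimal_places = math.log10(microtubule_elements / conventional_elements)
print(f"extra decimal places: {extra_decimal_places:.0f}")
```

With these round numbers the sketch gives about seven extra orders of magnitude, in the same ballpark as the "about six" above; varying the assumed tubulin count between 10^6 and 10^8 per neuron moves the answer accordingly.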

These propositions spell _utter doom_ for the entire Extropian / Singularitarian / Etc. program of immortality. In short, Penrose & Hameroff are the atheists to the Extropian faith. For this reason, Extropians despise them and seek to discredit them.

Very often the people who do this don't disclose their own beliefs. The way to spot them is in the debate over "strong AI": the idea that a sufficiently sophisticated classical computer running sufficiently sophisticated software can become conscious. If Penrose & Hameroff are correct, "strong AI" is not possible, and thus "uploading" minds to machines is not possible.

The idea that one could produce an entangled state between the activity in neurons and the activity in quantum computers is "interesting" in theory (though it's fringe stuff), but hits a brick wall over the practicalities of quantum computing and the theoretical properties of quantum states. It's like arguing that "strong determinism" could be proved if one had a sufficiently sophisticated computer and sensors at the moment of the Big Bang.

The fear of death has always been a strong motivator for the vast majority of humans, even those who hold religious beliefs in an afterlife.

This presents a problem to atheists, some of whom have latched on to the Extropian paradigm. But fantasies of computer gods and immortality in silicon are no more or less rational than any other theistic and afterlife beliefs.

One way or another, people end up making peace with their own death. And again I'll recommend certain Buddhist meditation practices as a means of achieving some degree of objectivity and detachment, from which that sort of peace can follow. Those practices are fully compatible with whatever one does or does not believe about deities and hereafters, so atheists should be comfortable with them, and members of conventional religions should as well.

Yes, I agree with you that it's incredibly narcissistic to think that "people hundreds or thousands of years from now will have need of your mind" (very well said).

There's another component to that narcissism, which is the idea that one's _self_ is so important that it deserves to continue consuming a very large quantity of energy continuously for some period of hundreds or thousands of years, while the whole of humanity faces the limits to growth on a finite planet with finite resources. This is not the same thing as keeping a living person on life support in a hospital, which in any case has the finite boundary of a normal life span. It's the entitled mentality of people such as Larry Ellison, he of the 300-foot yachts, who believe they are superior to the rest of the human race.

For more about that mentality, read this:

And yes, those people are serious about it. So serious that they have become a very large subculture with wealthy friends in high places, much as with certain political groupings such as the Tea Party.

What I think is the case here is that Penrose & Hameroff are probably basically correct and will be supported when all the relevant findings are in. Artificial Intelligence will not achieve consciousness. Serious AI research will make progress using models of brain functioning to improve computer hardware and software for conventional purposes such as medical and climate modeling.

But there is a serious risk that Extropian / Singularitarian / Transhumanist / Etc. beliefs will spread in the culture, feeding off narcissism and reinforcing it, and having an impact on public policy. Already we see the "self-driving car" memes being much hyped, such as this week's cover story in Time magazine.

The situation is very similar to the 1970s when then-new cults such as the Moonies, Scientology, etc., were on the rise. The difference is that the new cults of today have something the old cults didn't: a feast of expensive consumer baubles, to tempt not only the proverbial masses, but entrepreneurs and policymakers.

For which reason I believe that rational people should spread the word that all this Extropian stuff is so much pseudoscience and nonsense, and is tied in with a type and degree of amorality that reasonable people should find repellent.

We would not want the Moonies taking over Congress, and we should not want the Extropians gaining comparable influence in the culture.

Point of clarification: where I said that Chopra "knew his stuff," I was referring solely to his talk about religion, and most emphatically not to anything science-related. Chopra would do well to stick to comparative religion & philosophy, where he could actually do some good. But he should stay away from science other than as an interested layperson.

To the extent that he (or other laypeople with audiences) want to comment on science, they should be willing to clearly mark off the difference between accepted theories and findings on one hand, and controversial ones on the other hand, and their own speculations on the third hand. This is a serious problem in science writing by laypeople in general, and particularly so with those who have large audiences and also have ideological or philosophical biases.


Lastly, re. GregH @ 23: Thanks for the verb "Extropiate", that's a clever meme and I'll use it.

@ jfb

Your family and friends are dead (unless you got a group rate). Whatever skills you had will likely no longer be relevant in this time period. You may not even understand the language. Social mores will likely have changed. Popular culture will likely bear no resemblance to what you remember.

I would heartily recommend reading "Cryoburn" by Lois McMaster Bujold on the topic of cryonics and extending the human lifespan.
It's sci-fi, and the author usually takes an optimistic (and often humorous) view of technological innovations, especially regarding human health. (Key word: "usually".)

She initially imagined cryonics as a battlefield emergency procedure, standard stuff in science fiction nowadays. But then, in Cryoburn, she went on to imagine a whole planet where freezing oneself is the usual approach to end-of-life.

To avoid spoilers, I will just say the setting is not as optimistic as usual, and the main heroes do not see deep-freezing their loved ones, or themselves, as a solution. Repeatedly. One heroine, who has been the moral compass of the whole Vorkosigan series, is even shown by the end of the book to have reversed her previously negative opinion of euthanasia.

All the issues you mentioned, the author worked into her story. And then some.

By Helianthus (not verified) on 13 Mar 2016 #permalink

Re. G @ 15:

Wrt brain-is-you objection.
First of all, as far as I can see, the mind is a (very complex) algorithm that runs on the brain. I don't think hardware/software distinctions apply; the brain is both. I don't see a good reason to identify "myself" with the brain. Many people in the community, including popular bloggers, actually do (e.g. the Fight Aging blogger often says that uploads are unacceptable), but I don't see why.
The gradual-replacement example is a straightforward objection: if you can (hypothetically) replace neurons one at a time and still be you, why not two at a time? Why not four, eight, half the brain at a time?
Discontinuity is not a concern either: when you fall asleep you are not conscious and have no volition, yet "you" wake up. Moreover, if someone hits you over the head with a stick and you go into a coma for 10 years, barring serious damage you'd still say it is "you" that wakes up.
I haven't fully formalized this in my head, but I basically think that an algorithm that reproduces "me" with high enough fidelity, and genuinely recognizes itself as a "descendant state" of me-now, the same way me-this-morning recognized me-yesterday as such upon waking, would be "me". The problem is that this is extremely counterintuitive, for obvious reasons; but that is just because our intuitions cannot work outside certain constraints, rather like the question "what was before the Big Bang/inflation?" in theories that postulate no time before the initial state. So I don't believe that a clone living would be me dying. Again: if we had a hypothetical destructive atom-precision scanner and printer, would you pay to "teleport" from New York to Beijing by means of destruction and re-creation, rather than fly? Assuming the technology had been proven to work over many years, etc. I don't really see what makes such a copy not-you, other than intuitions that cannot be explained (and are wrong, as far as I'm concerned). So there's no fundamental obstacle.

Now, as for the technical challenges: yes, they are enormous, and as I said, I am skeptical so far about the current preservation methods. Yet the physical object (the brain) clearly exists and is produced somehow, which means it can be produced again. Moreover, consider yourself with brain damage, or with some other brain alteration, or having taken some drugs: that is still "you". If you came out of that coma with some of your episodic memories erased, or slightly more impulsive, or even, say, unable to see the left side of the page, it would be less you, but up to a certain limit you'd still say that's you. So perfect fidelity is not the goal. For example, the current transmitter state inside the neurons might be erased, but that might (might) just put your brain in a reset state where your working memory is wiped and you're in a state of shock while your brain recombobulates; or maybe it scrambles all your memories and makes the whole process meaningless. I don't know. For some perturbations like this, from what I've read, the outcome seems closer to the first (losing mood/working memory/...); for others, closer to the latter.

Wrt the reaper.
Also, I am actually fairly content about death. It's quite obvious from my other views that I don't think there's any meaning in life, and yes, I have read the Stoics... However, living is fun (so far), and I don't see why I should die. The argument about friends and family is actually another strong one. Nothing (I hope) would prevent you from committing suicide upon waking up, if you were so inclined; however, I don't think I would. Not being able to apply yourself in a vastly more advanced society is another strong objection, and it's essentially what I said in the first post: "who would bother with these old heads anyway?" That, as far as I am concerned, is the biggest fundamental objection to cryonics at this stage, not the crude tech. Maybe people will eventually recognize that Keynes was mostly right all along, and it will be a post-scarcity utopia (see, I'm a socialist, not a libertarian, except I'm a heartless socialist, kind of like a bleeding-heart libertarian in reverse), so you'd apply yourself until you got tired or bored. Then again, maybe the superior future humans will let your head rot. Maybe global warming, a gamma-ray burst, or whatever gets us. Maybe civilization will collapse, locally or entirely. Maybe you'd wake up without friends, family, or things to do, and kill yourself. It's impossible to predict any of this to a reasonable extent.

Wrt relativistic space travel example.
The precise point here is that we know exactly (well, to the best of our knowledge of physics) what the limitations are. A better analogy from physics is actually nuclear fusion: it's hard and may be impractical for a long time, given the funding, alternatives, and politics, but we see it happen in the sun, so we assume we can make it happen in a reactor. We also know, however, that a human brain does get built: you get a working human brain from, crudely speaking, one cell in a specific environment plus a lot of extra nutrients, in roughly nine months. It's like Columbus watching a container ship being built: he now knows you can build huge ships, and control and propel them. He doesn't have the faintest idea how the thing is held together, or about the materials or the necessary equipment; even having seen it done, it's impossible for Columbus to learn to build a container ship, but over ~500 years it apparently can be done. The process exists. We just need to figure out how it works, or devise an equivalent.

> There’s no such thing as “a little bit dead” any more than “a little bit pregnant.”
Yes, there is, as I have explained above. You can remove a lot of information and abilities from a brain before you would stop considering it "you".

> A long shot like curing cancer with homeopathy. Or flapping your arms to fly after jumping off a bridge.
Again, bad examples, because we know precisely why it doesn't work in those cases. It's more like taking an early-stage trial treatment for cancer that has only ever worked in a dish, because otherwise you'd surely die; or trying to regroup and enter the water properly after falling off a bridge. (The funny thing is, I love jumping off cliffs into the water, so I know people dive off 85-foot cliffs and emerge healthy (and red), although I prefer safer heights ;).)

And yeah, I'd vote for Bernie ;)

Sorry for the typos; this took a long time to put together and there's no edit button. I want to elaborate on my philosophical position, since "why die?" might be too succinct... If you read Marcus Aurelius, he keeps repeating again and again that change and dissolution are inevitable. Yet he also keeps repeating that such-and-such attitude should be adopted because that's what the gods intended. Now, I say that if the gods don't exist, this is neither well founded nor even convincing. Similarly with death. If I have to die, I will die. If I don't have to die... why die? There's no value in dying.

Helianthus, you might also enjoy "Accelerando" by Charles Stross. It's available as a free download, originally from him, so...

ses: There’s no value in dying.

No? It seems like a pretty good strategy for preventing evolutionary dead ends or having various life forms become permanently insensitive to changes in their environment. I think it's better if fossilization happens after death.