Ray Kurzweil is a genius. One of the greatest hucksters of the age. That's the only way I can explain how his nonsense gets so much press and has such a following. Now he has the cover of Time magazine, and an article called "2045: The Year Man Becomes Immortal." It certainly couldn't be taken seriously anywhere else; once again, Kurzweil wiggles his fingers, mumbles a few catchphrases, and upchucks a remarkable prediction: that in 35 years (a number dredged out of his compendium of biased estimates), Man (one, a few, many? How? He doesn't know) will finally achieve immortality (seems to me you'd need to wait a few years beyond that goal to know if it was true). Now we've even got a name for the Kurzweil delusion: Singularitarianism.
There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
Wow. Sounds just like the Raelians, or Hercolubians, or Scientologists, or any of the modern New Age pseudosciences that appropriate a bit of jargon and blow it up into a huge mythology. Nice hyperbole there, though. Too bad the whole movement is empty of evidence.
One of the things I really do despise about the Kurzweil approach is its dishonest management of critics, and Kurzweil is the master. He loves to tell everyone what's wrong with his critics, but he doesn't actually address the criticisms.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This is wrong. For instance, I think reverse-engineering the general principles of a human brain might well be doable in a few or several decades, and I do suspect that we'll be able to do things in ten years, 20 years, a century that I can't even imagine. I don't find Kurzweil silly because I'm blind to the power of exponential growth, but because:
Kurzweil hasn't demonstrated that there is exponential growth at play here. I've read his absurd book, and his "data" is phony and fudged to fit his conclusion. He cheerfully makes stuff up or drops data that goes against his desires to invent these ridiculous charts.
I'm not claiming he underestimates the complexity of the brain, I'm saying he doesn't understand biology, period. Handwaving is not enough — if he's going to make fairly specific claims of "immortality in 35 years", there had better be some understanding of the path that will be taken.
There is a vast difference between grasping a principle and implementing the specifics. If we understand how the brain works, if we can create a computer simulation that replicates and improves upon the function of our brain, that does not in any way imply that my identity and experiences can be translated into the digital realm. Again, Kurzweil doesn't have even a hint of a path that can be taken to do that, so he has no basis for making the prediction.
Smooth curves that climb upward into infinity can exist in mathematics (although Kurzweil's predictions don't live in a state of rigor that would justify calling them "mathematical"), but they don't work in the real world. There are limits. We've been building better and more powerful power plants for aircraft for a century, but they haven't gotten to a size and efficiency that would allow me to fly off with a personal jetpack. I have no reason to expect that they will, either.
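The point about limits can be made concrete with a toy model (my illustration, not anything from Kurzweil's charts; the growth rate and carrying capacity here are arbitrary). An exponential curve and a logistic curve, one with a hard ceiling, look nearly identical early on, which is exactly why extrapolating a trend into infinity from its early data is unjustified:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth: x(t) = x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: same early behavior, but saturating
    at an arbitrary carrying capacity K (a real-world limit)."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable;
# later, one diverges to infinity while the other flattens out.
for t in (0, 1, 10, 40):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```

Fit only the early points and you cannot tell which curve you are on; the ceiling only reveals itself later, which is the jetpack problem in miniature.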
While I don't doubt that science will advance rapidly, I also expect that the directions it takes will be unpredictable. Kurzweil confuses engineering, where you build something to fit a predetermined set of specifications, with science, in which you follow the evidence wherever it leads. Look at the so-called war on cancer: it isn't won, no one expects that it will be, but what it has accomplished is to provide limited success in improving health and quality of life, extending survival times, and developing new tools for earlier diagnosis — that's reality, and understanding reality is achieved incrementally, not by sudden surges in technology independent of human effort. It also generates unexpected spinoffs in deeper knowledge about cell cycles, signaling, gene regulation, etc. The problems get more interesting and diverse, and it's awfully silly of one non-biologist in 2011 to try to predict what surprises will pop out.
Kurzweil is a typical technocrat with limited breadth of knowledge. Imagine what happens IF we actually converge on some kind of immortality. Who gets it? If it's restricted, what makes Kurzweil think he, and not Senator Dumbbum who controls federal spending on health, or Tycoon Greedo the trillionaire, gets it? How would the world react if such a capability were available, and they (or their dying mother, or their sick child) didn't have access? What if it's cheap and easy, and everyone gets it? Kurzweil is talking about a technology that would almost certainly destroy every human society on the planet, and he treats it as blithely as the prospect of getting new options for his cell phone. In case he hadn't noticed, human sociology and politics show no sign of being on an exponential trend towards greater wisdom. Yeah, "expect turbulence."
He's guilty of a very weird form of reductionism that considers a human life reducible to patterns in a computer. I have no stock in spiritualism or dualism, but we are very much a product of our crude and messy biology — we perceive the world through imprecise chemical reactions, our brains send signals by shuffling ions in salt water, our attitudes and reactions are shaped by chemicals secreted by glands in our guts. Replicating the lightning while ignoring the clouds and rain and pressure changes will not give you a copy of the storm. It will give you something different, which would still be interesting, but it's not the same.
Kurzweil shows other signs of kookery. Two hundred pills a day? Weekly intravenous transfusions? Drinking alkalized water because he's afraid of acidosis? The man is an intelligent engineer, but he's also an obsessive crackpot.
Oh, well. I'll make my own predictions. Magazines will continue to praise Kurzweil's techno-religion in sporadic bursts, and followers will continue to gullibly accept what he says because it is what they wish would happen. Kurzweil will die while brain-uploading and immortality are still vague dreams; he will be frozen in liquid nitrogen, which will so thoroughly disrupt his cells that even if we discover how to cure whatever kills him, there will be no hope of recovering the mind and personality of Kurzweil from the scrambled chaos of his dead brain. 2045 will come, and those of us who are alive to see it will look back and realize it is very, very different from what life was like in 2011, and also very different from what we expected life to be like. At some point, I expect artificial intelligences to be part of our culture, if we persist; they'll work in radically different ways than human brains, and they will revolutionize society, but I have no way of guessing how. Ray Kurzweil will be forgotten, mostly, but records of the existence of a strange shaman of the circuitry from the late 20th and early 21st century will be tucked away in whatever the future databases are like, and people and machines will sometimes stumble across them and laugh or zotigrate and say, "How quaint and amusing!", or whatever the equivalent in the frangitwidian language of the trans-entity circumsolar ansible network might be.
And that'll be kinda cool. I wish I could live to see it.