Anybody heard of the idea of The Singularity? Roughly, it goes like this: technological progress builds on itself, and this self-reinforcing feedback loop will eventually come to a head, with humanity making a quantum leap into an unknowable and godlike transhuman technological future - possibly as early as the middle of this century. The name of the idea comes from mathematics, where approximately speaking a singularity is a place where a function rockets off to infinity. Alternatively, some adherents of this type of thinking believe that progress is merely exponential; formally that doesn't produce a true singularity, but in this type of scenario it makes little difference.
I'm (probably) not going to take sides today. Instead we'll just look at some math that's relevant. Say you have a function P(t) that measures technological Progress as a function of time. This is admittedly hard to quantify, but as a thought experiment you get the drift. We might say that the rate of increase of progress is proportional to the progress we've already made. This is itself just a statement of a differential equation:
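$$\frac{dP}{dt} = kP$$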
The solution to that equation, assuming suitable initial conditions, is just the exponential function:
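$$P(t) = e^{kt}$$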
Ok, nice enough. But is the rate of progress really just a function of previous progress? It can't always be. Take a heat engine, for instance. Throughout history people have turned heat differences into useful work in various ways, and today many heat engines are pretty efficient. They're getting more efficient all the time. But we know as a matter of basic physical law that there is a limit: no engine can beat the Carnot limit for efficiency, any more than a vehicle can beat the speed of light. Progress there can't rocket off to a singularity; physics turns the path of progress into a deepening swamp terminating at an impenetrable brick wall.
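For the record, the Carnot bound says that an engine running between a hot reservoir at absolute temperature T_h and a cold one at T_c has efficiency at most

$$\eta_{\max} = 1 - \frac{T_c}{T_h},$$

no matter how cleverly it's engineered.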
So let's try a different model. What if progress is proportional to previous progress multiplied by a factor that shrinks as progress approaches the physical limit? For convenience, scale things such that the limit is 1:
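$$\frac{dP}{dt} = kP\,(1 - P)$$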
When there are physical limits, this is a much more realistic equation than unlimited exponential growth. It's solvable, and the resulting P(t), called the logistic function, is our Sunday Function:
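$$P(t) = \frac{1}{1 + e^{-t}}$$

(taking k = 1 and centering the curve at t = 0, which is the form analyzed below).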
Graphed, it's an S-shaped curve: nearly flat at zero in the distant past, rising steeply through P = 1/2 at the origin, and leveling off just below the limit of 1 in the far future.
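If you'd like to reproduce the graph yourself, here's a minimal Python sketch (numpy and matplotlib assumed; the plotting window is an arbitrary choice of mine):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(-6, 6, 400)            # a window around the origin
    P = 1.0 / (1.0 + np.exp(-t))           # the logistic function

    plt.plot(t, P)
    plt.axhline(1.0, color="gray", linestyle="--")  # the physical limit P = 1
    plt.xlabel("t")
    plt.ylabel("P(t)")
    plt.show()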
Now here's a key thing to realize. If you're looking at this function well to the left of the origin, it looks pretty darn exponential. This is because when t is large and negative, e^-t is huge (negative times negative is positive). That means the denominator 1 + e^-t is essentially just e^-t, since that term dwarfs the 1. And that leaves us with effectively 1/e^-t, which is just e^t. So if you're in the past relative to the origin, you might well think you're looking at true exponential growth that will explode into infinite progress in the future. But you'd be wrong - the logistic function doesn't stay exponential as time keeps marching on.
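You can see that approximation numerically with a quick Python check (the sample points are arbitrary):

    import math

    for t in (-2.0, -5.0, -10.0):
        logistic = 1.0 / (1.0 + math.exp(-t))
        # The ratio tends to 1 as t becomes more negative.
        print(t, logistic / math.exp(t))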
And since physics does set limits, be they light speed or Carnot efficiency or Heisenberg uncertainty or whatever, I suspect that technological progress is logistic. The remaining question is the height of the limit. If it's much, much higher than progress as of 2009, then maybe we will get what amounts to a singularity. I rather doubt it, though: many types of technological improvement have already come near their limits. CPU clock speed, for instance, has been stalled around 3 GHz* for a while now. On the other hand, some things, like our understanding of the human genome, are not even within a tiny fraction of their probable maximum.
But maximums they have, and most of them don't look like they're completely out of sight. So count me out of the singularity boosting. The future will be amazing, but probably not impossibly so.
Did I say I wasn't going to take sides? Sorry. ;)
*For clarity, clock speed is not a measurement of performance as such. A modern processor can do much more per clock cycle than previous generations. But it is a good example of one branch of computational advance that's not terribly likely to continue increasing indefinitely.
Without weighing in on the shape of things to come, I'll point out that economic progress is not a physical dimension, but measured instead by billions of subjective evaluations. There has been a lot of nonsense written about the limits to economic growth that conflates that with some kind of physical measure, such as oil consumption, energy usage, or tons of trash produced.
The flip side of that is that economic comparisons spanning many decades or even a couple of centuries need to be viewed with a jaundiced eye. Yes, we are much richer now than we were in the 18th century. It is less clear that that difference can be firmly quantified in a meaningful way, because the subjective evaluations that would do so necessarily come from generations quite a bit removed from each other. Most of the quantifiable comparisons come from simply chaining together the comparisons used from year to year. That may make sense for some purposes, but it likely gives a distorted picture if taken as some intergenerational absolute.
The logistic function comes in four equivalent forms. Each may be seen in forecasts of world oil production. The algebra connecting the four forms would keep you busy for a while, but you can find it at this link. Enjoy!
http://sep.stanford.edu/sep/jon/hubbert.pdf
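(For what it's worth, one equivalence of this kind - I can't say whether it's among the PDF's four forms - is

$$\frac{1}{1+e^{-t}} = \frac{1}{2}\left(1 + \tanh\frac{t}{2}\right).)$$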
I imagine Ray Kurzweil staring at your logistic function to the left of the origin and furiously writing his article on the Singularity. You come along and drag his nose to the right of the origin and furious erasures commence.
I love it when someone brings "altitude" into a debate. No heavy oratory, just a new point of view, can make a big difference.
Good job -- one of your best.
Hmm, you've just addressed the 'Accelerating Change' school of Singularity thought. And IMHO it's not a very cogent school of thought.
See http://yudkowsky.net/singularity/schools for more information.
The 'Intelligence Explosion' appeals a lot more to my taste: "Intelligence has always been the source of technology. If technology can significantly improve on human intelligence - create minds smarter than the smartest existing humans - then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter."
It doesn't seem terribly likely to me that intelligence can increase without limit either. The AI field is far too young to say much about what might be the maximum limit to intelligence, but I would not be shocked if there were one. If nothing else it's going to be limited by the (eventually) logistic growth of fundamental computing power.
And infinite intelligence - whatever that might mean - can't break the laws of physics or mathematics anyway. As I explored in the post somewhat, those laws can only be exploited so far before reaching the physical limits. It would be funny if we finally invented a near-omniscient self-improving AI only to be told "Well, looks like you pretty much figured everything else out already. Good job."
I have to admit that I'm having philosophical trouble with the P(t) portion of the equation, as well as the dt. Surely it has to be d(values) or d(effort) or something.
You've just buggered economics by proving a nation $100 billion in debt (bupkis) is quantitatively different from a nation $100 trillion in debt (guess who).
Rather than fostering brilliance we allocated for its suppression. The fruits of our efforts have arrived! Be diverse in your contempt.
I recall someone opining that the exponential nature of Moore's Law would transition to a quadratic one. He seemed like a very bright guy, but why a quadratic function? The logistic function seems like it would be a more accurate and elegant description.
I discovered this function a completely different way.
I was interested in the question "what does it mean to double a probability?". Eventually, I decided that a nice way to handle it was that doubling p meant to double the ratio p/(1-p). That ratio goes from zero at p=0 to infinity at p=1.
I took the log of that ratio, and whaddaya know, it turned out beautiful and symmetrical: -inf at p=0, +inf at p=1, and 0 at p=.5.
I then flipped it around the diagonal (solving p for the ratio), and wound up with e^x / (e^x + 1), which is the same as your function (except that I did not divide through by e^x).
By summing a number of these (scaled and shifted, of course), you can approximate a set of steps to any degree of accuracy.
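To make that last point concrete, here's a quick Python sketch (the step heights, positions, and the sharpness factor k are arbitrary choices of mine):

    import numpy as np
    import matplotlib.pyplot as plt

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.linspace(-1.0, 7.0, 800)
    k = 20.0  # large k makes each sigmoid nearly a step
    # Three scaled, shifted sigmoids summed into a staircase.
    y = sigmoid(k * (x - 1)) + 2.0 * sigmoid(k * (x - 3)) + 0.5 * sigmoid(k * (x - 5))

    plt.plot(x, y)
    plt.xlabel("x")
    plt.ylabel("sum of sigmoids")
    plt.show()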
d(power consumption)
Computing, thinking, building - these things all require energy. Therefore, the rate at which they can be done is limited by available power. Not all computing and building contributes to technological progress - most of it is consumed maintaining the current state of technology. Assuming there are no other limits (such as building materials...), technology could increase until all available power is being used. When power used is small compared to power available, very rapid technological progress should be possible, but as power used approaches power available, technological progress should become slower and slower - akin to the logistic function.
Please note many of the sources of power we presently rely most heavily on - coal, oil, gas, nuclear - are all guaranteed to run out quickly when subjected to exponentially increasing use. When these sources run out, if we fail to replace them, power available will decline, and technology available will decline as well. Of course, solar already works, it won't run out on reasonable time scales, and 87 petawatts of it reaches the Earth's surface.
d(consumption of least available limiting resource)
The energy-only analysis is nice, but technological progress requires many other resources, and the least available of them will play the strongest role in limiting its extent. What are those resources? Well, I don't know, but modern computing relies heavily on various rare earths, and the trend over the last 30 years has been toward more and more reliance on them. Most rare earths are poorly recovered by current recycling processes. At some point replacements will be required - or technological progress will at the very least stall, and possibly reverse.
Singularities are actually quite common in human history. Usually we call them "collapses".
Matt, another great posting. Isn't that curve called an ogive in statistics?
You and your readers might enjoy a new book that was sent to me for review because I write a natural history column. It is "A Mathematical Nature Walk" by John A. Adam, an Old Dominion University math professor. The book is published by Princeton. It has some nice worked examples.
"It doesn't seem terribly likely to me that intelligence can increase without limit either." - Matt
Nobody with any credibility believes that. But I think it's quite likely that there's room for lots of improvement on the current state of general intelligence (i.e. human minds).
Just look at the difference between apes and humans. The substrate is nearly the same: it's the wiring that is different (not to mention the increased volume - although that difference is quite small compared to the intelligence delta).
And you know that we're still very far from the physical limits on computation, so I don't think that physical constraints would stop the kind of singularity predicted by Yudkowsky (although of course I agree with your larger point that such constraints will cap progress at some point).
the "singularity" idea always seemed suspect to me, because technology really doesn't build on itself, nor fuel itself. "technology" isn't a thing with nay separate existence, it's a pigeonhole label for what humans do with the accumulated human knowledge. people build technology ever more advanced, by building on what's been learned before, but fundamentally it can't move faster than however fast people can work with it.
yeah, granted, there might come some point at which we start meddling with the maximum speed at which we can assimilate and work with knowledge --- direct neural interfaces to computers, maybe, i dunno --- but pretty much by definition, we couldn't make it move faster than it/we could keep up with.
human-level artificial intelligence is another story, of course. it's a story i don't expect to see realized in my lifetime (and i should have another half-century in me, easily), but even if i did --- why should we build (or help support) any AI we couldn't easily work with, as either coworkers or as very advanced tools? there'd be no point in it for us.
A progress graph for technology has a function which is exponentiated by the data density factor. The processing of chemical and electronic system data refines those sciences, the method of making technical progress. That all depends on the atomic model equation, since that is the data horizon of nanostructural features which holds the solutions to molecular or material quantum effects.
The rate will go up swiftly by accepting the RQT (relative quantum topological) function approach to physics. It is the informative combination of the relativistic Einstein-Lorenz transform functions for time, mass, and energy with the workon quantized electromagnetic wave equations for frequency and wavelength.
This system builds the atomic model as a picoyoctometric 3D animated video data point image by taking the nucleus as radiating forcons with valid joule values by {e=m(c^2)} transform of the nucleoplastic surface layer's mass to a spectrum of force fields. The equation is written as the series expansion differential of possible nucleoplastic transformation rates, with quantum symmetry numbers assigned along the progression to give topology to the solutions. Psi pulsates by cycles of nuclear emission and absorption of force at the frequency {Nhu=e/h}. This process is limited only by spacetime boundaries of {Gravity <-> Time}, making the GT integral atomic topological function.
Next, when the atom's internal momentum function is rearranged to the photon gain rule and integrated for GT limits a series of 26 waveparticle functions is found. Each is the picoyoctometric 3D topological function of a type of energy intermedon of the 5/2 kT J internal heat capacity energy cloud, accounting for all of them. Those 26 energy values intersect the sizes of the fundamental physical constants: h, h-bar, delta, nuclear magneton, beta magneton, k (series). The result is the exact picoyoctometric, 3D, interactive video atomic model imaging function, responsive to keyboard input of virtual photon gain events by quantized, relativistic shifts of electron, energy, and force fields.
The RQT method certainly leads the way toward nanotechnical progress, and beyond, with its clear electron images and abundant RQT modes of intense, accurate data productivity.
Images of the h-bar magnetic energy waveparticle of ~175 picoyoctometers are available online with the complete RQT atomic modeling manual titled The Crystalon Door. TCD conforms to the unopposed motion of disclosure in U.S. District (NM) Court of 04/02/2001 titled The Solution to the Equation of Schrodinger, U.S. copyright TXu1-266-788.
The RQT functions, and that atomic modeling manual The Crystalon Door, are online at http://www.symmecon.com, with details of new, picoyocto technical video modeling physics at http://www.symmecon.com/GUPPP.aspx.
Wow Matt, you must be becoming famous, as cranks like Dale B. Ritter, B.A. are posting to your site! ;)
Have you started getting crank letters in the mail?
I still have my first official one framed and on my desk. The rest are landfill.
It appears that neither the blog author nor commenters have read Kurzweil's book.