Will "humans" still exist in 100 years?

This week's Ask a Scienceblogger question is "Will the 'human' race still be around in 100 years?"

The short answer is yes.

The slightly longer answer is this: we could face a number of catastrophes, including a pandemic, massive global warming, nuclear war, or all of the above. Our numbers could be reduced to a tiny fraction of what they are today. But we will most certainly still exist.

From a cognitive science perspective, there are a couple other interesting possibilities. What if, within the next 100 years, we succeeded in creating artificial intelligence that appeared to match human intelligence? What if we could create a robot that, externally at least, appeared to be "human"? Would we consider such a thing to be a part of the human race? If we couldn't distinguish it from other humans based on behavior and external appearance, then why shouldn't we consider it to be a human being, with human rights?

If so, then arguably the human race would be fundamentally different from what it is today, and so in that sense you might say that the human race no longer exists.

Even if this can't happen within 100 years, it certainly could within 1,000.

What's more, in 1,000 years we may have discovered the secret of aging, and it might be possible for people to live indefinitely long lives. Would such a creature be human? Isn't mortality part of what makes us who we are? Assuming some catastrophe doesn't send humanity back to the Stone Age, it's possible that in 1,000 years we will have changed ourselves into something that is no longer recognizably human.

Last week's question was "If you could cause one invention from the last hundred years never to have been made at all, which would it be, and why?," and you can see a roundup of all the answers here. This week's roundup should be up on Stochastic by midday.


Neither the growth of technology, nor the health situation of the world, nor even changes in the earth's climate will wipe out the human race. In fact, in my opinion, those pressures are part of the natural cycle. The history of humanity is deeply entrenched in its own ebb and flow: plagues wipe out millions, but then something sets off a birth boom; a catastrophic disaster sets a nation back, but then new technology comes along. In its natural state, and even with technology ever more intermingled, humans are a very resilient species.

That said, I think nuclear war or the [over]growth of intelligent machines, artificial intelligence, could both prove detrimental to us.

Nuclear war and power... we hardly know the vastness of it, but men of greed and wealth wield such modern-day swords quite lightly. It's rarely the people of a nation who are bad at heart; it's often our rulers, our governments, that use such tools for their agendas. Nuclear power appears double-edged to me.

But I actually think artificial intelligence could be worse for us as humans, ironically. Of course, maybe that's just because I watched the "Animatrix"... :p

First, I'll point out that you cannot say we will certainly still be around in 100 years. It does seem probable, but we cannot say with perfect certainty that we won't be ripped apart by a giant asteroid, that the magnetosphere will hold, or any of countless other things we are uncertain about. Humans can be absolutely certain of nothing - just watch The Matrix.

Based on what we think we know, one revelation emerges: humans are extremely insignificant. Compared to the universe we are nothing; compared to the grand scheme of evolution we are nothing; compared to time we are nothing - we have only been here about 200,000 years.

And yes, we will be gone some day - completely. Our sun will die out, our air will go away, all the stars in the sky will fade, the magnetosphere will vanish and expose us to radiation, giant asteroids will hit us, and the very energy that allows us to do work will become too dissipated to support us. We have nowhere to go: other solar systems are simply too far to reach, and other planets cannot support a physiology perfectly fitted for Earth - Star Wars is NOT real. These are all, as far as we know, facts.

So, would we call a robot a human? That depends entirely on what we decide to call it - "human" is just a word, an abstract concept. The idea of you being human is vague and subjective, and so is the concept of you.

If there are major differences between the created robot and a human (one has a silicon brain), then no, we probably won't call it a human. If, however, the robot is physiologically the same as us and can even reproduce with us, then we probably will call it human - though I believe that is extremely far off and may never be reached.

If the robot can think like us, we might give it the same rights as ourselves, but I don't think we can say it is human - it would be equal to or greater than us, but not us.

Looking at evolution, technically we are a bunch of very disfigured mice with a vast accumulation of variations. If we can create AI then, in a way, a disfigured version of our psyche might live on beyond us for a while, but it will not really be us - rather our creation. The gulf between robots and humans is even larger than the one between mice and men, and we do not think of ourselves as actual mice. Therefore, I don't think we will see AI as ourselves (although we could, depending on what we want to do).