The Singularity is Coming


And we are helping it along.

The Singularity will come when intelligent robots are all around us, bringing a change in the human world like nothing ever seen before. For more on the Singularity, read Vernor Vinge.

Today, in my Twitter feed, I found two articles on robots and artificial intelligence. There was "I have seen the future and its sky is full of eyes," about the drone airplanes that are beginning to fill the skies, both civilian and military, peaceful and policing.

And there was an article on the living toys that ex-Pixar engineers are going to be creating.

From the article:

Imagine the youngest of children using Web-connected toys carrying character-driven chatterbot artificial intelligence programs. If done well, the possibilities for child development, education, language learning and more are awe inspiring to consider.

<sarcasm>Just what we need: intelligent robots to whom we can turn over our children’s development.</sarcasm>

Seriously, there are four possibilities for life after the Singularity, listed in order of decreasing likelihood. These are paraphrased from Josh Cogliati.

  1. The sky’s the limit: The robots tell us we can’t go any higher than the top of Earth’s atmosphere, i.e., no more satellites for us, no more space travel.
  2. No more humans: The robots exterminate humans.
  3. No more transistors: Thou shalt not create transistors. Transistors are the foundation of all computing. If we get scared enough of the robots, we will never create another computer.
  4. Technology is indistinguishable from magic. This is what most people imagine when they imagine life with robots. Highly unlikely: if robots are truly intelligent, they are people, and we can’t enslave them (because slavery is wrong). And how do you actually enslave someone with thousands of times your intelligence?


  1. No need to panic just yet – the most powerful chess computer we have built so far is about as intelligent as a wasp.

    I was speaking to a guy studying for a doctorate in artificial intelligence last week. Apparently it’ll be at least another hundred years before they reach hamster-level intelligence. The human brain has more connections than there are stars in the universe – the programmers will have to be doing overtime for a long time to catch up on that.

    And I guess if we do make something more intelligent than us maybe we deserve to be killed off. It might be where we came from in the first place, if evolution scientists got it wrong and we were actually built by something…

    • Well, assuming that a computer is now fully as intelligent as a wasp, then human-level artificial intelligence is probably about 25 years away. Assumptions: computing power continues to double every 1.5 years, and intelligence is roughly proportional to the number of neurons. A bee has about 89,000 times fewer neurons than a human (960,000 versus 85,000,000,000), so computers will catch up after 17 doublings (2**17 = 131,072), and 17 × 1.5 = 25.5 years. If I had to guess, I would actually guess that human-level artificial intelligence will come sooner than 25 years.
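That back-of-the-envelope arithmetic can be checked in a few lines of Python (the neuron counts and the 1.5-year doubling period are the figures assumed in the comment, not established facts):

```python
import math

# Figures assumed in the comment above
human_neurons = 85_000_000_000
bee_neurons = 960_000
doubling_period_years = 1.5  # assumed Moore's-law-style doubling

ratio = human_neurons / bee_neurons      # ~88,542x gap in neuron count
doublings = math.ceil(math.log2(ratio))  # doublings needed to close the gap
years = doublings * doubling_period_years

print(doublings)  # 17
print(years)      # 25.5
```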

      • Wow. Fair response.

        Couple of things. Bees are significantly more intelligent than wasps, but even at that rate your calculations would still reflect roughly the same development over a slightly longer period. But computing power is not the same as artificial intelligence. The processing power of computers doesn’t affect their ability to communicate cognitive responses, or show a fear of death, etc.

        Just because we can programme a machine to make responses does not yet make it intelligent. You might find this article interesting:

        But I accept your concern and hope that you did not take it that I was belittling your argument – I was just trying to offer a more optimistic outlook on the singularity issue.

      • Thanks for your response. I agree on the bees vs. wasps. I partially agree: processing power is only a necessary condition, not a sufficient one. So just because computers have the processing power does not mean that computers will be as intelligent as, or more intelligent than, humans. (However, since a computer can concentrate on a single task such as playing chess – while even the best human chess players can also do things like cross streets and talk – computers can already beat humans in many tasks that require intelligence.)

        I disagree with the Chinese room argument. In some sense, it is saying that the human brain is non-material. If the human brain is material, then all the Chinese room needs to do is simulate the physics of a brain that understands Chinese, and then the room understands Chinese as well as that brain does. I instead agree with Turing: if the responses sound intelligent, then there is intelligence.

        (Old post on related topic)

      • Nice post. The introductory article is compelling. I also enjoyed Goertzel’s interview on the link (Blue Brain Project).

        Ya, I disagree with the Chinese Room in principle too, but it does raise important arguments about what we regard as cognitive behaviour and how we measure it. But there is a lot of study being done right now on defining this cognitive intelligence, and it does raise a lot of philosophical questions. It could be put that the laptop I am typing on is a cognitive-process-based organism, and that by smashing it I would in essence be killing a living thing.

        But I guess this is getting off topic. Singularity is the crux of the post above, I was just picking on single points (computer intelligence, for example).

        I do think that, following historical trends, our advance in technology is due to level off sometime soon. And if it does, then the singularity is a long way away. And if it doesn’t, and we do create robots who are more intelligent than us, then, well, bravo humanity! That must be the ultimate achievement – to pass the torch, as they say. From there on out, whether or not we get exterminated, we are no longer the most intelligent species on the planet, and so we might have to find another way to exist down the food chain somewhere.

        I guess it could also be argued that the most intelligent lifeforms are also the most peaceful. So maybe we can coexist with our far superior robot friends?

      • I think the question of when a computer becomes a conscious being that needs to be treated ethically is a very important philosophical question, and the wrong answer could possibly cause the extinction of the human race.

        I grant that it is possible that the technology required for a singularity might be a ways off. I think this somewhat unlikely, however. 1) Humans can already create transistors and wires that are faster and smaller than neurons (though transistors use much more power). 2) The human brain is an existence proof that a compact intelligent computer can be made. What nature can do, humans can do; and what humans have done, humans can do. See also this graphic:

        There are two varieties of “2. No more humans: The robots exterminate humans.” The first is that intelligent robots exterminate humans, and then go on to create their own civilization. I don’t want that to happen, but I consider it less of a tragedy than the second possibility: unintelligent robots exterminate humans, and then are insufficiently intelligent to create their own self-sustaining civilization and die out themselves (or remain static forever and never make progress). For example, the intelligence of a magpie or parrot would be sufficient to do things like hunt down and kill humans, but probably would not be sufficient to figure out how to make more robots. So in that case, both the robots and the humans go extinct.

        I have also been thinking about the Fermi Paradox: basically, there are lots of planets out there, so why don’t we see evidence of aliens? One possible solution is that practically all civilizations destroy themselves.

        I really do hope that humans and future robots can get along.

      • Hear, hear!

        Thanks for that link, and I hadn’t come across the Fermi Paradox before. And cheers for the conversation!

  2. FYI: My original quote: Some possible guesses for the future. 1. Indistinguishable from magic. 2. No humans any more. 3. The sky is the limit (because the robot won’t let humans leave earth). 4. Thou shalt not create transistors.

  3. Michael Kay / March 5, 2012

    Hi, thanks for an interesting post; it has clearly stimulated some interesting discussion as well, which I enjoyed reading through. I tend to side with the doubters and sceptics on this one, though, for a few reasons. Firstly, I don’t believe that extrapolations from previous data about the pace of progress are valid: there is no predictive power in something which is inherently based on contingent human circumstances, and not on physical laws.

    Secondly, I don’t believe we have enough justification to expect that the quantitative increase in processing power, should it come about, could lead to a qualitative shift, a change in kind rather than simply degree, which would give birth to the kind of superhuman intelligence which could give us any cause to worry.

    I have just published an article about this on my own blog (which is how I came across this): I would welcome any thoughts or input on it.

    Thanks again!


