Below I’ve sketched a few routes by which a Singularity might occur in this century. These are mostly negative, dystopian scenarios, since my mind seems better able to imagine catastrophic changes than a smooth upward transition to immortality and infinite transhuman potential. Besides, Ray Kurzweil, Hans Moravec, Eric Drexler and others have already described those futures in much more detail than I could hope to.
My thinking is that for a catastrophic Singularity to occur, there may have to be some kind of massive disruption of civilization, on the scale of the last Ice Age, World War II or the Black Plague, to bring about the kind of breakdown of ethical restraint that a negative Singularity would require. Some of these scenarios may seem science-fictional or beyond the pale at the moment, but so did the idea of dropping atomic bombs on cities in 1909. If order again breaks down and human survival is threatened on a large scale, we should expect every option to be explored, just as it was during previous periods of crisis.
When you look at the changes that accompanied the end of the last Ice Age — the onset of agriculture and the birth of civilization — or the new technological world that was born out of World War II in the form of nuclear energy, jet aircraft, rockets and computers, it’s pretty clear that technological change can be rapid and revolutionary in times of great stress. And since there is no shortage of looming catastrophes awaiting us in this century, on a time frame similar to that of our potentially disruptive technologies, we appear to have an almost ideal confluence of factors for producing some kind of Singularity in the near future.
Scenario #1: The Bottleneck / War Against Humanity
Will a coming bottleneck enable the next evolutionary leap?
Global Malthusian chaos from catastrophic climate change, famine, disease, energy shortages, cascading systems failure, etc. leads to desperate survival measures in the technologically advanced enclaves. These measures could include the unleashing of autonomous killing machines, designer viruses or nanotechnology to cull hostile populations. Unethical scientists may take advantage of the chaos to treat millions of human beings as guinea pigs; among them, perhaps, a radical group of transhumanists succeeds in creating a new cybernetically enhanced species or superhuman AI. The besieged elites may decide that transforming themselves into superhumans is the only way forward in a world that has lost all sense of restraint or equality. No longer bound by the notion that "all men are created equal", this new species proceeds to take control of the planet and enslave or exterminate the remaining humans.
Scenarios like this have played out countless times in the evolutionary history of our planet. Biologists tell us that genetic drift during a population bottleneck can lead to the emergence of a new species in just a few generations, even without the benefit of transhuman technology. We ourselves likely drove the Neanderthals to extinction and hunted numerous species of large mammals to extinction during the late Pleistocene and Holocene, with technology no more advanced than spears and arrows. Now that we have vanquished all other competition, the only remaining threats are ourselves and our machines. As George Dyson observed: “In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature, but nature, I suspect, is on the side of the machines.”
Scenario #2: Skynet / World War III
The global robotics/cyberwar arms race between leading industrial nations turns hot, leading to massive funding of sophisticated military AI and robotics systems. The rapid technological advancements that ensue culminate in autonomous robots and computer control systems that somehow develop their own agenda, decide that humans are the problem and start wiping us out. I'm sure you’ve all seen the Terminator movies, so I shouldn’t have to provide too much detail here.
Even if you reject the "Skynet spontaneously becomes self-aware" scenario, the technological acceleration that a World War would produce might lead to a singularity in the von Neumann sense ("the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."). I would consider World War II such a singularity, in that life as it was known before 1939 certainly could not continue after 1945. World War III promises to be even more disruptive.
Scenario #3: Brave New World Order
A global technocratic government uses computers, drugs, cybernetic implants, genetic engineering, etc. to control the population, automating most labor and enlisting only the creative elite to do productive work. The masses of humanity are allowed to live on welfare in exchange for behaving docilely and practicing strict birth control. As the economic need for humans approaches zero and their dependence on ever more intelligent machines approaches 100%, humanity quietly undergoes a "frog in a pot" Singularity. Ted Kaczynski wrote about this possibility at length in his Industrial Society and Its Future, as did Jay Hanson in Society of Sloth. This scenario might not make for a very entertaining movie, but I consider it the most likely non-catastrophic near future. Rising superpower China may point the way to this type of technocratic world order.
Scenario #4: Dr. Evil / "Oops, I Blew Up the World"
A renegade group of terrorists, industrialists, scientists, cultists, or just some bored blogger unleashes a newly developed self-replicating/self-improving GNR (genetics, nanotechnology, robotics) technology in a bid for world domination, depopulation, apocalypse, entertainment or some other nefarious purpose. Or there may be no evil intent, just a laboratory experiment that gets out of control. If the planet isn’t reduced to “gray goo”, it is transformed into computronium, overrun by a self-replicating robot army or super-virus, or in some other way transformed beyond recognition. This scenario might sound a little comic-bookish, but the larger point is that the disruptive technologies of the 21st century probably won’t require the massive resources of a Manhattan Project to be unleashed. All that may be necessary is technical knowledge that is freely available, and the will to use it in destructive ways — in which case almost anyone has the potential to be a Singularity-starter. You might call this the “15 minutes of Singularity” problem, which could also explain why we haven’t found a universe teeming with signs of other civilizations. Technology may simply be self-extincting, and this could be the century that we learn this rather depressing fact of life.
Scenario #5: The Global Brain Awakens
An image of the internet: a new global brain forming?
I consider this scenario the most speculative and difficult to imagine of all. On an intuitive level, the idea that a super-organism is emerging from the billions of networked humans and their computers seems compelling (beautifully described by Kevin Kelly as the "One Machine"). Ants, bees and other swarming species provide a clear precedent in nature for the phenomenon of emergent intelligence. Some argue that collective intelligence enhancers like Google’s PageRank provide a function analogous to the pheromone trails of ants or the neuronal learning mechanisms of the human brain. But just as an individual ant is a simple agent with no awareness of the larger computational functions of the colony, we might never be able to comprehend what the global brain is thinking or what its goals are. So you might call this the “what if a Singularity happened and no one noticed?” scenario. Still, I find this one of the most fascinating (and least frightening) possible routes to a Singularity.
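For readers curious about the PageRank analogy, the core idea is that a page's importance emerges from many simple local "votes" (links), much as a pheromone trail emerges from many simple ant movements. Here is a toy power-iteration sketch in Python; the four-node graph and all variable names are my own invention for illustration, not any real dataset or Google's actual implementation.

```python
# Toy PageRank via power iteration (illustrative sketch, hypothetical graph).
# links maps each node to the nodes it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85          # standard damping factor from the PageRank literature
nodes = list(links)
n = len(nodes)
rank = {node: 1.0 / n for node in nodes}  # start with uniform rank

for _ in range(50):  # iterate until the ranks settle
    new_rank = {node: (1 - damping) / n for node in nodes}
    for node, outgoing in links.items():
        # each node splits its rank evenly among the pages it links to
        share = damping * rank[node] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

# Ranks sum to 1; "c" ends up highest because every other node links to it.
```

No single link "knows" which page is important, yet a stable global ordering emerges from the iteration, which is the sense in which PageRank is analogous to stigmergic systems like ant trails.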
New home: thecosmist.org