In futurology, a technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict. The Singularity can more specifically refer to the advent of smarter-than-human intelligence, and the cascading technological progress assumed to follow. Whether a singularity will actually occur is a matter of debate.
Overview of the Singularity
Although commonly believed to have originated within the last two decades, the concept of a technological singularity actually dates back to the 1950s. In a 1958 tribute to John von Neumann, mathematician Stanislaw Ulam recalled:

"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
This quote has often been taken out of context and attributed to von Neumann himself, likely owing to von Neumann's widespread fame and influence. In 1965, statistician I. J. Good described a concept even closer to today's meaning of the Singularity, in that it included the advent of superhuman intelligence:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
The Vingean Singularity
The concept of a technological singularity as it is known today is credited to mathematician and author Vernor Vinge. Vinge began speaking on the Singularity in the 1980s and collected his thoughts in the first article on the topic, the 1993 essay "Technological Singularity". Since then it has been the subject of many futurist and science fiction writings. Vinge's essay contains the oft-quoted statement that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge's singularity is commonly misunderstood to mean that technological progress will rise to infinity, as happens in a mathematical singularity. In fact, the term was chosen as a metaphor from physics rather than mathematics: as one approaches the Singularity, models of the future become less reliable, just as conventional models of physics break down as one approaches a gravitational singularity. The Singularity is often seen as the end of human civilization and the birth of a new one. In his essay, Vinge asks why the human era should end, and argues that humans will be transformed during the Singularity into a higher form of intelligence. After the creation of a superhuman intelligence, according to Vinge, people will necessarily be a lower life form by comparison.
Kurzweil's Law of Accelerating Returns
In his essay The Law of Accelerating Returns (http://www.kurzweilai.net/articles/art0134.php?printable=1), Ray Kurzweil proposes a generalization of Moore's law that forms the basis of many people's beliefs regarding the Singularity. Moore's law describes an exponential growth pattern in the complexity of integrated semiconductor circuits. Kurzweil extends this to include technologies from far before the integrated circuit to future forms of computation, and believes that the exponential growth of Moore's law will continue beyond the use of integrated circuits, into technologies that will lead to the Singularity. The law described by Kurzweil has in many ways altered the public's perception of Moore's law: it is a common (but mistaken) belief that Moore's law makes predictions regarding all forms of technology, when in reality it concerns only semiconductor circuits. Many futurologists nevertheless use the term "Moore's law" to describe ideas like those put forth by Kurzweil.
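The exponential growth pattern at issue can be made concrete with a small sketch. The function below projects a quantity forward under a fixed doubling period; the starting figure of roughly 2,300 transistors corresponds to Intel's 4004 chip of 1971, and the two-year doubling period is the commonly quoted form of Moore's law -- both are illustrative inputs, not claims from the essay itself.

```python
# A minimal sketch of pure exponential growth with a fixed doubling period,
# as described by Moore's law. Starting count and doubling period are
# illustrative assumptions, not data from Kurzweil's essay.

def projected_count(initial: int, years: float, doubling_years: float = 2.0) -> int:
    """Project a quantity forward assuming it doubles every `doubling_years`."""
    return int(initial * 2 ** (years / doubling_years))

# Intel 4004 (1971): ~2,300 transistors. A decade of doubling every two
# years multiplies the count by 2**5 = 32.
print(projected_count(2_300, 10))  # -> 73600
```

The same function works for any exponentially growing quantity, which is precisely Kurzweil's move: he applies the growth pattern, not the specific subject matter of semiconductor circuits.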
Asymptotic Growth Curves
Some speculate that an even more rapid increase in technological sophistication will come with the development of superhuman intelligence, either by directly enhancing existing human minds (perhaps with cybernetics), or by building artificial intelligences. These superhuman intelligences would presumably be capable of inventing ways to enhance themselves even faster, leading to a feedback effect that would surpass preexisting intelligences. Simply having a human-equivalent artificial intelligence, some speculate, may yield this effect, if Kurzweil's law continues indefinitely. At first, supposedly, such an intelligence would be equal to a human. Eighteen months later, it would be twice as fast; three years later, four times as fast; and so on. But because the accelerated AIs would now be designing the computers, each next step would take roughly eighteen subjective months and proportionally less real time. If Kurzweil's law continues to apply unchanged, each step would take half as much real time as the one before. In three years (36 months = 18 + 9 + 4.5 + 2.25 + ...), computer speed would, theoretically, reach infinity. This example is largely illustrative, however, and most futurologists would agree that one cannot assume Kurzweil's law will hold even up to the Singularity, let alone literally forever, as would be required to produce truly infinite intelligence in this way.
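The arithmetic of this thought experiment is a geometric series, and a few lines of code show why the sum converges to 36 months rather than growing without bound. The 18-month first step and the halving rule are the assumptions stated above.

```python
# Illustrative sketch of the series in the thought experiment above.
# Assumptions: the first speed doubling takes 18 months of real time,
# and each subsequent doubling takes half as long as the one before.

def elapsed_real_time(doublings: int, first_step: float = 18.0) -> float:
    """Total real time (in months) consumed by the first `doublings` doublings."""
    return sum(first_step / 2**n for n in range(doublings))

print(elapsed_real_time(2))   # 18 + 9 = 27.0 months
print(elapsed_real_time(10))  # already within ~0.04 months of the 36-month limit
```

Infinitely many doublings fit inside a finite 36 months (the geometric series 18 + 9 + 4.5 + ... sums to 18 x 2 = 36), which is why the scenario describes speed reaching infinity at a definite date rather than merely growing quickly.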
Types of Singularity Technologies
Futurologists have speculated on a wide range of possible technologies that might play a role in bringing about the Singularity. The order in which these technologies would arrive is often disputed; some will expedite the invention of others, and some are dependent on the invention of others. The predictions of various futurologists differ in many respects, but the following are some of the most common themes among them.
An artificial intelligence capable of recursively improving itself beyond human intelligence, known as a seed AI, would, if possible, likely cause a technological singularity. Only one such AI, many believe, would be needed to bring about the Singularity. Most Singularitarians believe the creation of seed AI is the most likely means by which humanity will reach the Singularity, and much of the work of the Singularity Institute is built upon this belief.
The potential dangers of molecular nanotechnology are widely known even outside of futurologist and transhumanist communities, and many Singularitarians consider human-controlled nanotechnology to be one of the most significant existential risks facing humanity (refer to the concept of Grey Goo for an example of how this risk could become reality). For this reason, they often believe that nanotechnology should be preceded by seed AI, and that nanotechnology should remain unavailable to pre-Singularity society. Others advocate efforts to create molecular nanotechnology, believing that nanotechnology can be made safe for pre-Singularity use or can expedite the arrival of a beneficial singularity.
Other technologies, such as a globally connected, high-bandwidth wireless communication fabric or the enormous population of networked (if often virus- and worm-infected) computers, while not likely to cause the Singularity themselves, are regarded as signs of the level of technological advancement assumed to precipitate the Singularity. One of the most anticipated of these technologies, for some, is the possibility of human intelligence enhancement:
Intelligence enhancement through novel chemical drugs and genetic engineering may also become a possibility for existing humans, beyond that which is provided by modern nootropics. Newborn babies may be given genetic intelligence enhancements as well.
Although seed AI and nanotechnology are widely regarded as the technologies most likely to bring about the Singularity, others have speculated about the possibility of other advanced technologies arriving before the Singularity. These technologies, while unlikely, are often used by some futurologists (such as Ray Kurzweil) as a "proof" of the Singularity -- even if seed AI and molecular nanotechnology are not invented within the 21st century, other technologies may potentially bring about the Singularity. Mind uploading, for example, is a proposed alternative means of creating artificial intelligence -- instead of programming an intelligence, it would be bootstrapped from an existing human intelligence. The level of technology needed to scan a human brain at the resolution required for a mind upload makes mind uploading in the pre-Singularity world seem unlikely, however, and the amount of raw computer processing power and understanding of cognitive science required is also substantial. Others, such as George Dyson in Darwin Among the Machines, have speculated that a sufficiently complex computer network may produce "swarm intelligence". AI researchers may also use the improved computing resources of the future to create artificial neural networks so large and powerful that they become generally intelligent. Advocates of Friendly artificial intelligence see this as "brute-forcing" the problem of creating AI, and as likely to produce unacceptably dangerous forms of artificial intelligence.
Singularity speculations often concern post-Singularity supercomputers. Some researchers claim that even without quantum computing, using advanced nanotechnology, matter could be engineered to have unimaginably vast computational capacities. Such material is referred to as computronium among futurologists. Some speculate that entire planets or stars may be converted into computronium, creating "Jupiter brains" and "Matrioshka Brains" respectively.
Criticisms of the Singularity
There exist two main types of criticism of Singularity speculation: those questioning whether the Singularity is likely or even possible, and those questioning whether it is safe or desirable.
The Likelihood and Possibility of the Singularity
Some do not believe a technological singularity is likely to occur; some detractors of the idea have referred to it as "the Rapture of the nerds". Most Singularity speculation assumes the possibility of human-equivalent artificial intelligence, and whether creating such an AI is possible remains controversial; many believe that practical advances in artificial intelligence research have not yet empirically demonstrated it. See the article artificial intelligence for further debate. Some also dispute that the rate of technological progress is increasing: the exponential growth of technological progress may become linear or inflected, or may begin to flatten into a limited growth curve. What would cause such an event is, of course, unclear at this time.
The Desirability and Safety of the Singularity
It has often been speculated, in science fiction and elsewhere, that advanced AI is likely to have goals inconsistent with those of humanity and may threaten humanity's existence. It is conceivable, if not likely, that a superintelligent AI would simply eliminate the intellectually inferior human race, with humans powerless to stop it. This is a major issue for both Singularity advocates and critics, and was the subject of an article by Bill Joy in Wired Magazine, ominously titled Why the future doesn't need us (http://www.wired.com/wired/archive/8.04/joy.php). Some critics argue that advanced technologies are simply too dangerous for us to morally allow the Singularity to occur, and advocate efforts to stop its arrival. Perhaps the most famous activist for this viewpoint is Theodore Kaczynski, the Unabomber, who wrote in his "manifesto" that AI might enable the upper classes of society to "simply decide to exterminate the mass of humanity". Alternatively, if AI is not created, Kaczynski argues that humans "will have been reduced to the status of domestic animals" after sufficient technological progress has been made. Portions of Kaczynski's writings have been included in both Bill Joy's article and a recent book by Ray Kurzweil. It should be noted that Kaczynski opposes not only the Singularity but present-day technology in general, as a Luddite; many people oppose the Singularity without opposing present-day technology as Luddites do. Naturally, scenarios such as those described by Kaczynski are regarded as undesirable by advocates of the Singularity as well. Many Singularity advocates, however, consider such scenarios unlikely and are more optimistic about the future of technology. Others believe that, regardless of the dangers the Singularity poses, it is simply unavoidable -- we must progress technologically because there is no other path to take.
Advocates of Friendly artificial intelligence, and specifically the SIAI, acknowledge that the Singularity is potentially very dangerous, and work to make it safer by creating seed AI that will act benevolently towards humans and eliminate existential risks. This idea is also embodied in Asimov's Three Laws of Robotics, which are intended to prevent an artificially intelligent robot from acting malevolently towards humans. However, in one of Asimov's novels, despite these laws, robots end up causing harm to individual human beings as a result of the formulation of the Zeroth Law. The theoretical framework of Friendly AI is currently being developed by Singularitarian Eliezer Yudkowsky. Another viewpoint, though a much less common one, is that AI will eventually dominate or destroy the human race, and that this scenario is desirable. Hugo de Garis is most notable for his support of this opinion.