If machines are capable of doing almost any work humans can do, what will humans do?
The question of what happens when machines become as intelligent as, or even more intelligent than, people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy's main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly "Singularitarians" who think about the day when machines will surpass humans in intelligence. The term "singularity" as a description of a phenomenon of technological acceleration leading to a "machine-intelligence explosion" was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of the singularity. Kurzweil has also co-founded Singularity University and the annual Singularity Summit.
It is fair to say, I believe, that Singularitarians are not quite in the mainstream. Perhaps that is due to their belief that by 2045 humans will also become immortal and be able to download their consciousness to computers. It was, therefore, quite surprising when in 2000, Bill Joy, a decidedly mainstream technologist as co-founder of Sun Microsystems, wrote an article entitled "Why the Future Doesn't Need Us" for Wired magazine. "Our most powerful 21st-century technologies -- robotics, genetic engineering, and nanotech -- are threatening to make humans an endangered species," he wrote. Joy's article was widely noted when it appeared, but it seems to have had little lasting impact.