Machine intelligence lessons from science fiction: Don Pittis
In business, one technique for preparing for an unpredictable future is to imagine a series of possible scenarios. Using those scenarios as a guide, businesses can then steer away from the paths that end in failure.
When it comes to imagining a human future that includes superintelligence, much of the work has already been done. Movies like Arnold Schwarzenegger's Terminator and The Matrix with Keanu Reeves descend quickly into gun battles and chase scenes.
But science fiction literature has a long history of imagining complex outcomes for human-artificial intelligence interaction. Here are a few examples.
1. I, Robot (1950) by Isaac Asimov
One of the pioneers of science fiction artificial intelligence, Asimov imagined a future of thinking robots that skipped the computer phase entirely: his "positronic brain" simply exists, with no examination of how it might arise. Asimov might also be considered the father of benign superintelligence, having invented the "three laws of robotics" that keep thinking machines from harming humans.
2. Dune (1965) by Frank Herbert
Herbert's popular novel of the distant future is interesting because it contains no artificial intelligence at all, and for a significant reason: according to the story line, sophisticated computers were banned after AI staged a takeover attempt that proved disastrous for humans.
3. Do Androids Dream of Electric Sheep? (1968) by Philip K. Dick
Made into the movie Blade Runner, Dick's stimulant-charged imaginings present a darker view of artificial intelligence. Humans nonetheless retain the upper hand, partly because Dick's thinking androids are programmed with shortened lifespans.
4. 2001: A Space Odyssey (1968) by Arthur C. Clarke
Published just after the release of the movie of the same name, the book's title is an example of how science fiction overestimates the speed of scientific progress. As part of a much more complex plot, the computer HAL (each letter one place in the alphabet before IBM) decides the spacecraft's mission to Jupiter is too important for humans and starts bumping off the crew. The only surviving human, Dave, learns the advantage of having an off switch before going on to an enigmatic encounter with aliens.
5. Use of Weapons (1990) by Iain M. Banks
Though not the first of Banks's science fiction books to be published, Use of Weapons was where he first conceived the superintelligent "Minds" of "The Culture." The story is set so far in the future that it offers few lessons about how the Minds became so wise and benevolent. The twist is that, though their purposes are benign, they need humans to do their dirty work.
5. A Fire Upon the Deep (1992) by Vernor Vinge
Mathematics and computer science professor Vinge developed a sideline in writing science fiction. But he is perhaps most famous for popularizing the concept of "The Singularity" in a 1993 essay, the year after he published a science fiction novel exploring it. The Singularity is the moment when a computer becomes more intelligent than humans, then gains access to ever more computing power and quickly becomes more intelligent than humans can comprehend.
7. Singularity Sky (2004) by Charles Stross
Computer scientist, pharmacist and general polymath, Stross has since expressed the opinion that The Singularity is either impossible or very far away. In this book, however, the superintelligence is relatively benign, though it gets very, very angry if anyone tries to kill it.
8. Wake (2009) by Robert J. Sawyer
Sawyer's book examines the case where a superintelligence wakes up by accident. Unintentionally trained by a blind girl learning to use computer software to see, the World Wide Web bursts into spontaneous intelligence.
Follow Don on Twitter @don_pittis