True, computers keep getting faster and smarter. But the day when intelligent machines take over the planet and enslave the human race is a staple of science fiction, and it will likely never arrive. Michael Littman’s essay first appeared at <a href="http://www.livescience.com/49625-robots-will-not-conquer-humanity.html">Live Science</a> on Wednesday, Jan. 28, 2015.

Every new technology brings its own nightmare scenarios. Artificial intelligence (AI) and robotics are no exceptions. Indeed, the word “robot” was coined for a 1920 play that dramatized just such a doomsday for humanity.

Earlier this month, an open letter about the future of AI, signed by a number of high-profile scientists and entrepreneurs, spurred a new round of harrowing headlines like “Top Scientists Have an Ominous Warning About Artificial Intelligence,” and “Artificial Intelligence Experts Pledge to Protect Humanity from Machines.” The implication is that the machines will one day displace humanity.

Let’s get one thing straight: A world in which humans are enslaved or destroyed by superintelligent machines of our own creation is purely science fiction. Like every other technology, AI has risks and benefits, but we cannot let fear dominate the conversation or guide AI research.

Nevertheless, the idea of dramatically changing the AI research agenda to focus on AI “safety” is the primary message of a group calling itself the Future of Life Institute (FLI). FLI includes a handful of deep thinkers and public figures such as Elon Musk and Stephen Hawking, and it worries about the day when humanity is steamrolled by powerful programs run amok.

As eloquently described in the book Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) by FLI advisory board member and Oxford-based philosopher Nick Bostrom, the plot unfolds in three parts. In the first part — roughly where we are now — computational power and intelligent software develop at an increasing pace through the toil of scientists and engineers. Next, a breakthrough is made: Programs are created that possess intelligence on par with humans. These programs, running on increasingly fast computers, improve themselves extremely rapidly, resulting in a runaway “intelligence explosion.” In the third and final act, a singular super-intelligence takes hold — outsmarting, outmaneuvering, and ultimately outcompeting the entirety of humanity and perhaps life itself. End scene.

Let’s take a closer look at this apocalyptic storyline. Of the three parts, the first is indeed happening now, and Bostrom provides cogent and illuminating glimpses into current and near-future technology. The third part is a philosophical romp exploring the consequences of supersmart machines. It’s that second part — the intelligence explosion — that demonstrably violates what we know of computer science and natural intelligence.

Runaway intelligence?

The notion of the intelligence explosion arises from Moore’s Law, the observation that the speed of computers has been increasing exponentially since the 1950s. Project this trend forward and we’ll see computers with the computational power of the entire human race within the next few decades. It’s a leap to go from this idea to unchecked growth of machine intelligence, however.
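For readers who like to see the arithmetic, here is a minimal sketch of that kind of extrapolation. The starting figure of 10^16 operations per second (roughly a large 2015-era supercomputer) and the two-year doubling time are placeholder assumptions for illustration, not measurements:

```python
# A toy illustration of the extrapolation behind the "intelligence explosion"
# argument: take today's performance, assume it doubles on a fixed schedule,
# and project forward. Both constants below are assumed, illustrative values.

START_OPS_PER_SEC = 1e16   # assumed present-day starting point
DOUBLING_YEARS = 2.0       # assumed Moore's-Law-style doubling time

for decades in (1, 2, 3, 4, 5):
    years = decades * 10
    projected = START_OPS_PER_SEC * 2 ** (years / DOUBLING_YEARS)
    print(f"after {years:2d} years: ~{projected:.1e} ops/sec")

# The exponential curve is the easy part of the story; the essay's point is
# that physics, economics, and computational complexity are what it ignores.
```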

First, ingenuity is not the sole bottleneck to developing faster computers. The machines need to actually be built, which requires real-world resources. Indeed, Moore’s Law comes with exponentially increasing production costs as well — mass production of precision electronics does not come cheap. Further, there are fundamental physical laws — quantum limits — that bound how quickly a transistor can do its work. Non-silicon technologies may overcome those limits, but such devices remain highly speculative.

In addition to physical laws, we know a lot about the fundamental nature of computation and its limits. For example, some computational puzzles, like figuring out how to factor a large number and thereby crack online cryptography schemes, are generally believed to be unsolvable by any fast program. They belong to a class of mathematically defined problems, the hardest of which are called “NP-complete,” meaning that they are exactly as hard as any problem whose solution can be found non-deterministically (N) in polynomial time (P), and they have resisted every attempt at a scalable solution. As it turns out, most computational problems that we associate with human intelligence are known to be at least this hard.
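To make that scaling concrete, here is a toy sketch of the brute-force approach to factoring. The specific semiprimes are arbitrary illustrative choices; the point is only how quickly the work grows with the length of the number:

```python
# A small sketch of why "just add more compute" doesn't tame problems like
# factoring: the obvious brute-force approach does work that grows
# exponentially in the *length* of the input.

def trial_division_steps(n: int) -> int:
    """Count the divisions a naive factoring attempt makes before finding a factor."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return steps
        d += 1
    return steps  # n had no nontrivial factor (it is prime)

# Semiprimes (products of two similar-sized primes) of increasing bit-length.
for p, q in [(101, 103), (1009, 1013), (100003, 100019), (1000003, 1000033)]:
    n = p * q
    print(f"{n.bit_length():3d}-bit number: {trial_division_steps(n):>9,} divisions")

# Roughly every two extra bits double the work for this naive strategy.
# Cleverer classical algorithms exist, but none known runs in time
# polynomial in the number of bits.
```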

Wait a second, you might say. How does the human mind manage to solve mathematical problems that computer scientists believe can’t be solved? We don’t. By and large, we cheat. We build a cartoonish mental model of the elements of the world that we’re interested in and then probe the behavior of this invented miniworld. There’s a trade-off between completeness and tractability in these imagined microcosms. Our ability to propose and ponder and project credible futures comes at the cost of accuracy. Even allowing for the possibility of the existence of considerably faster computers than we have today, it is a logical impossibility that these computers would be able to accurately simulate reality faster than reality itself.
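As a rough illustration of that trade-off, consider a model that tracks every joint configuration of N yes/no features versus a “cartoon” model that treats each feature independently. The feature counts below are arbitrary, and the example is only a sketch of the completeness-versus-tractability tension, not a claim about how minds actually work:

```python
# A toy illustration of the completeness-vs-tractability trade-off: an
# "exact" model that tracks every joint configuration of N binary features
# versus a simplified model that tracks each feature on its own.

for n_features in (10, 20, 40, 80):
    exact_states = 2 ** n_features   # one entry per joint configuration
    cartoon_params = n_features      # one independent estimate per feature
    print(f"{n_features:3d} features: exact model {exact_states:.1e} states, "
          f"cartoon model {cartoon_params} parameters")

# The exact model is complete but quickly becomes astronomically large; the
# cartoon model stays tiny but throws information away. Practical reasoning,
# human or machine, lives on the cheap side of that trade-off.
```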

Countering the anti-AI cause

In the face of general skepticism in the AI and computer science communities about the possibility of an intelligence explosion, FLI still wants to win support for its cause. The group’s letter calls for increased attention to maximizing the societal benefits of developing AI. Many of my esteemed colleagues signed the letter to show their support for the importance of avoiding potential pitfalls of the technology. But a few key phrases in the letter, such as “our AI systems must do what we want them to do,” are taken by the press as an admission that AI researchers believe they might be creating something that “cannot be controlled.” The phrasing also implies that AI researchers are asleep at the wheel, oblivious to the ominous possibilities, which is simply untrue.

To be clear, there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. There’s also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating and helping to steer the growth of information technology. These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

I welcome an open discussion about how AI can be made robust and beneficial and how we can engineer intelligent machines and systems that make society better. But let’s please keep the discussion firmly within the realm of reason and leave the robot uprisings to Hollywood screenwriters.

Michael L. Littman is a professor of computer science and one of the faculty leaders of Brown University’s Humanity-Centered Robotics Initiative. His research includes an interest in user-programmable devices.