Friday, December 12, 2014

Time to Talk Intelligently About AI: “Watson doesn't know it won Jeopardy!”


Tesla CEO Elon Musk worries that AI is “potentially more dangerous than nukes.” Physicist Stephen Hawking warns, “AI could be a big danger in the not-too-distant future.” Fear-mongering about AI has also hit the box office in recent films such as Her and Transcendence.

So as an active researcher in the field for over 20 years, and now the CEO of the Allen Institute for Artificial Intelligence, why am I not afraid?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, develop its own will, and use its faster processing abilities and vast databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations. A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research. Like calculators, AI tools require human input and human direction.

Now, autonomous computer programs exist and some are scary — such as viruses or cyber-weapons. But they are not intelligent. And most intelligent software is highly specialized; the program that can beat humans in narrow tasks, such as playing Jeopardy, has zero autonomy. IBM’s Watson is not champing at the bit to take on Wheel of Fortune next. Moreover, AI software is not conscious. As the philosopher John Searle put it, “Watson doesn't know it won Jeopardy!”

Anti-AI sentiment is often couched in hypothetical terms, as in Hawking’s recent comment that “The development of full artificial intelligence could spell the end of the human race.” The problem with hypothetical statements is that they ignore reality—the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

So where does this confusion between autonomy and intelligence come from? From our fears of becoming irrelevant in the world. If AI (and its cousin, automation) takes over our jobs, then what meaning (to say nothing of income) will we have as a species? Since Mary Shelley’s Frankenstein, we have been afraid of mechanical men, and according to Isaac Asimov’s Robot novels, we will probably become even more afraid as mechanical men become closer to us, a phenomenon he called the Frankenstein Complex.
