Wednesday, 13 July 2016

Robots could hack Turing test by keeping silent

The Turing test can't determine whether a person is talking to another human being or to a machine if the subject being interrogated simply chooses to stay silent, new research shows.

While it's not news that the Turing test has flaws, the new study highlights just how limited the test is for answering deeper questions about artificial intelligence, said study co-author Kevin Warwick, a computer scientist at Coventry University in England.

"As machines are getting more and more intelligent, whether they're actually thinking and whether we need to give them responsibilities are starting to become very serious questions," Warwick told Live Science. "Obviously, the Turing test is not the one which can tease them out."

Imitation game

The now-famous Turing test was first described by British computer scientist Alan Turing in 1950 to address the question of how to determine whether machines can think. Asking that question directly, he argued, is the wrong approach: if a machine can pass as human in what he termed the imitation game, that is good enough.

The test is simple: Put a machine in one room, a human interrogator in another, and have them talk to each other through a text-based conversation. If the interrogator can identify the machine as nonhuman, the device fails; otherwise, it passes.

The simple and intuitive test has become hugely influential in the philosophy of artificial intelligence. But from the beginning, researchers found flaws in it. For one, the game rewards deception and treats conversation as the sole metric of intelligence.

For instance, in the 1960s, an early language-processing program called ELIZA gave Turing test judges a run for their money by imitating a psychotherapist's trick of reflecting questions back to the questioner. And in 2014, researchers fooled a human interrogator with a chatbot named Eugene Goostman that was designed to pose as a 13-year-old Ukrainian boy.

Right to remain silent

Warwick was organizing Turing tests for the 60th anniversary of Turing's death when he and his colleague Huma Shah, also a computer scientist at Coventry University, noticed something curious: Occasionally, some of the AI chatbots malfunctioned and fell silent, confusing the interrogators.

"When they did so, the judge, on every occasion, was not able to say it was a machine," Warwick told Live Science. [The 6 Strangest Robots Ever Created]

By the rules of the test, if the judge can't definitively identify the machine, then the machine passes. By this measure, then, a silent bot, or even a rock, could pass the Turing test, Warwick said.

On the flip side, many humans are unfairly classified as machines, Warwick said.

"Very often, humans do get classified as being a machine, because some humans say silly things," Warwick said. In that scenario, if the machine competitor simply stayed silent, it would win by default, he added.

Better tests

The findings point to the need for an alternative to the Turing test, said Hector Levesque, an emeritus computer science professor at the University of Toronto in Canada, who was not involved with the new research.

"Most people recognize that, really, it's a test to see if you can fool an interrogator," Levesque told Live Science. "It's not too surprising that there are different ways of fooling interrogators that don't have much to do with AI or intelligence."

Levesque has developed an alternative, which he dubbed the Winograd Schema Challenge, named after computer science researcher Terry Winograd, who first came up with some of the questions it uses.
