Wednesday, 16 February 2011 04:23

Watson Supercomputer Terminates Humans in First Jeopardy Round

Jeopardy champions Brad Rutter and Ken Jennings listen as IBM Watson chief researcher David Ferrucci explains how the supercomputer works following an exhibition round in January. After the first game concluded on Tuesday night, Watson led its human competitors by $25,334. Photo: Sam Gustin/Wired.com

IBM supercomputer Watson closed the pod-bay doors on its human competition Tuesday night in the first round of a two-game Jeopardy match designed to showcase the latest advances in artificial intelligence. The contest concludes Wednesday.

By the end of Tuesday’s shellacking, Jeopardy’s greatest champions, Ken Jennings and Brad Rutter, were sporting decidedly sour looks.

Watson had a near-miss at the end of the game, when it incorrectly answered the Final Jeopardy clue, but when the dust settled, the supercomputer had earned $35,734, blowing out Rutter and Jennings, who had earned $10,400 and $4,800, respectively.

That final missed clue puzzled IBM scientists. The category was US Cities, and the clue was: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.”

Rutter and Jennings both correctly wrote “What is Chicago?” for O’Hare and Midway, but Watson’s response was a baffling “What is Toronto???”, complete with the extra question marks that signaled the machine’s low confidence in its own answer.

How could the machine have been so wrong? David Ferrucci, the manager of the Watson project at IBM Research, explained on the company’s blog that several things probably confused Watson, as reported by Steve Hamm:

First, the category names on Jeopardy! are tricky. The answers often do not exactly fit the category. Watson, in his training phase, learned that categories only weakly suggest the kind of answer that is expected, and, therefore, the machine downgrades their significance. The way the language was parsed provided an advantage for the humans and a disadvantage for Watson, as well. “What US city” wasn’t in the question. If it had been, Watson would have given US cities much more weight as it searched for the answer. Adding to the confusion for Watson, there are cities named Toronto in the United States and the Toronto in Canada has an American League baseball team. It probably picked up those facts from the written material it has digested. Also, the machine didn’t find much evidence to connect either city’s airport to World War II. (Chicago was a very close second on Watson’s list of possible answers.) So this is just one of those situations that’s a snap for a reasonably knowledgeable human but a true brain teaser for the machine.
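
In other words, it comes down to weighting: the category contributed only weakly to Watson’s confidence, while textual evidence dominated. The toy sketch below (a hypothetical illustration in Python with invented weights and scores, not IBM’s code) shows how that trade-off can push a wrong answer to the top:

    # Hypothetical sketch (not IBM's actual system): weighted candidate scoring
    # in which the category is treated as only a weak hint. All weights and
    # evidence scores are invented for illustration.

    CATEGORY_WEIGHT = 0.2   # assumed: category only weakly suggests the answer type
    EVIDENCE_WEIGHT = 0.8   # assumed: textual evidence dominates the score

    def score(candidate):
        # Combine how well a candidate fits the category with its evidence score.
        return (CATEGORY_WEIGHT * candidate["fits_category"]
                + EVIDENCE_WEIGHT * candidate["evidence"])

    candidates = [
        # Chicago fits "US Cities" perfectly, but suppose the airport/WWII
        # evidence the system found was thin.
        {"name": "Chicago", "fits_category": 1.0, "evidence": 0.50},
        # Toronto fits the category poorly, but stray signals (US towns named
        # Toronto, an American League team) nudge its evidence score up.
        {"name": "Toronto", "fits_category": 0.3, "evidence": 0.70},
    ]

    for c in sorted(candidates, key=score, reverse=True):
        print(c["name"], round(score(c), 2))
    # Prints Toronto 0.62, then Chicago 0.6 -- the right answer is a very close second.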

Although the botched Chicago airport clue was a bit of a let-down after an otherwise commanding performance, the researchers were heartened by the fact that Watson bet “intelligently” at the end of the game.

Watson bet a mere $947 on the final clue and still won the game by a blowout. (Thanks to readers who corrected me that Final Jeopardy contestants bet after they see the category but before they see the clue.)

In other words, Watson made its super-low bet based on the category and its wide lead, but without knowledge of the final clue, which it got wrong.

“That’s smart,” Ferrucci said in the company blog posting. “You’re in the middle of the contest. Hold onto your money. Why take a risk?”
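
The reasoning can be illustrated with standard Jeopardy wagering math: when a player’s score is more than double the nearest rival’s, no missed Final Jeopardy clue can cost them the lead, so a small bet suffices. The sketch below is a hypothetical rule of thumb in that spirit, not IBM’s actual wagering model; Watson’s pre-Final total of $36,681 follows from the article’s figures, while Rutter’s end-of-game $10,400 merely stands in for his pre-Final score:

    # Hypothetical sketch of a conservative Final Jeopardy wager ("hold onto
    # your money"). The rule and scaling factor are assumptions for
    # illustration, not IBM's actual wagering strategy.

    def conservative_wager(my_score, best_rival_score, category_confidence):
        # Classic "lock game" check: if the nearest rival cannot catch up even
        # by doubling their score, never wager more than the surplus.
        surplus = my_score - 2 * best_rival_score
        if surplus <= 0:
            # Not a runaway: wager just enough to cover a rival who doubles up.
            return 2 * best_rival_score - my_score + 1
        # Runaway game: risk only a small slice of the surplus, scaled by how
        # confident the player is in the category.
        return int(surplus * 0.1 * category_confidence)

    # $35,734 final + the $947 lost on the missed clue = $36,681 going in;
    # Rutter's $10,400 end-of-game total stands in for his pre-Final score.
    print(conservative_wager(36_681, 10_400, 0.5))   # 794 -- a small, safe bet

With those numbers the lock-game surplus is $15,881, so even a wager an order of magnitude larger than Watson’s $947 would have left its lead intact.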

