As I predicted, IBM's Watson pretty much trounced the humans. It was not a complete romp, however. The final score was $77,147 for Watson, $21,600 for Brad Rutter, and $24,000 for Ken Jennings. As it was an exhibition match, the $1,000,000 first prize goes to IBM (and then on to charity). Jennings earns $300,000, and Rutter comes away with $200,000. Both Jennings and Rutter said they will donate half their winnings to charity. My assumption is that IBM funded much of the event (it took place at their facility).

The third and final day was pretty much all game play. There was no background or build-up. I assume that if you had not figured out what was going on by now, no amount of background was going to help. The second game was much closer than the first one. The human players actually got a chance to answer some questions.

The speed issue is of some concern: when Watson "knows" an answer (its confidence exceeds a certain threshold), he is very quick to press the answer button (a mechanical device allows him to physically press it). As I understand it, contestants cannot press the answer button until a light goes on behind the counter. Watson clearly has a speed advantage here; he almost always seems to beat the humans to the punch. I'm not sure this is really fair. Is the real measure of the contest who can push the button more quickly? You could almost see the frustration on the faces of Rutter and Jennings, as they both seemed to know the questions (i.e., answers) but could not respond quickly enough. At one point, it looked like they just gave up. The only time they seemed to have a chance at answering was when Watson was not sure of his top answer. They each walked away with a nice amount of prize money, so I'm sure they were not too disappointed.

Thinking about the whole contest, I found there are two kinds of "unknowns" for Watson (and for humans). There are things Watson "knows," candidate answers whose confidence falls above the threshold, and things whose confidence falls below it. As shown in Watson's top-three answer panel at the bottom of the screen, these low-confidence answers were usually wrong, although I do recall seeing the right answer appear with low confidence several times.

There are also things Watson thinks he "knows" that are wrong. These were rare, like when he answered "chemise" (with 96% confidence) for "A loose-fitting dress hanging straight from the shoulders to below the waist." (What is a shift?) What he thinks he "knows" is determined by the confidence level of the answer, and as we have seen, it is not always correct. What I find interesting is that when Watson thinks he is right, he usually is, and when he thinks he is wrong, he almost always is. This is a rather subtle point, and I believe it is the best indication of the AI in Watson: he learns from his mistakes. Like Watson, we humans are "usually" right about most day-to-day things (survival issues). We "understand" that we could be wrong and often learn from our mistakes (it may take more than once!). More importantly, we learn all the time (consciously and unconsciously), yet, like Watson, we still face the Anosognosic's Dilemma (how do we deal with not knowing what we do not know?).
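To make the threshold idea concrete, here is a minimal Python sketch of a confidence-gated buzz decision. The `Candidate` type, the `should_buzz` function, and the 0.50 cut-off are my own illustration, not Watson's actual design or parameters; the point is simply that a highly confident answer triggers a buzz whether or not it happens to be correct.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # estimated probability the answer is correct (0.0 to 1.0)

def should_buzz(candidates: list[Candidate], threshold: float = 0.50) -> bool:
    """Buzz only if the top-ranked candidate clears the confidence threshold.

    This is a hypothetical sketch of the behavior described above, not
    Watson's real scoring pipeline.
    """
    if not candidates:
        return False
    best = max(candidates, key=lambda c: c.confidence)
    return best.confidence >= threshold

# Example: the "chemise" case. A confidently held but wrong answer still buzzes.
candidates = [
    Candidate("chemise", 0.96),
    Candidate("shift", 0.02),
    Candidate("tunic", 0.01),
]
print(should_buzz(candidates))  # True: confidence is high, yet the answer is wrong
```

The sketch also shows why the display of the top three candidates was so revealing: when the best candidate sits below the threshold, Watson stays silent, and that is exactly when the humans had a chance to buzz in.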

Overall, the Watson Jeopardy challenge sets another milestone for computer-human contests. The bar is now higher. Should we, as Ken Jennings wrote on his Final Jeopardy answer, "welcome our new computer overlords"? Some would argue we already have. If you think about life without computers, or how much we rely on things like Watson's cousin Google, it may invite you to wonder.
