Apparently we read different things. I read an article a while back that emphasized that reaction time will be a problem because of all the database sifting needed to come up with the right answer, and that human language is only one of the factors making it hard for a computer like Watson to find that answer. Humans link concepts and context naturally; a computer like Watson doesn't have that ability, so to compensate the designers run multiple algorithms over the clue to narrow down and corroborate candidate answers.
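Roughly how I picture that "multiple algorithms" part, as a toy sketch in Python (the evidence passages and scorer functions below are completely made up by me, nothing like Watson's actual pipeline): several independent scorers each rate every candidate answer against some evidence, and the totals get combined so a candidate backed by more than one signal floats to the top.

# Toy sketch of "multiple algorithms corroborating answers" -- NOT Watson's real code.
# The evidence passages and scorers below are invented just to show the idea.
from collections import defaultdict

EVIDENCE = [
    "The Leaning Tower of Pisa is a freestanding bell tower in Pisa, Italy.",
    "Pisa is a city in Tuscany known for its tilted campanile.",
    "Rome is the capital city of Italy.",
]

def mention_scorer(clue, candidate):
    # Count how many evidence passages mention the candidate at all.
    return sum(candidate.lower() in passage.lower() for passage in EVIDENCE)

def overlap_scorer(clue, candidate):
    # Count clue words that co-occur with the candidate in the same passage.
    clue_words = set(clue.lower().split())
    hits = 0
    for passage in EVIDENCE:
        if candidate.lower() in passage.lower():
            hits += len(clue_words & set(passage.lower().split()))
    return hits

SCORERS = [mention_scorer, overlap_scorer]

def rank(clue, candidates):
    # Sum every scorer's vote per candidate and rank by the combined total.
    totals = defaultdict(float)
    for candidate in candidates:
        for scorer in SCORERS:
            totals[candidate] += scorer(clue, candidate)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank("This Italian city is famous for its leaning tower",
           ["Pisa", "Rome", "Florence"]))

Running that ranks "Pisa" first because both scorers corroborate it, which is (very loosely) the kind of cross-checking I meant.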
Err, you know, never mind. I think we're both looking at this the same way, only we're using different words lol
I think this is the article I read:
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?_r=1
Quote from the article:
it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
I figure the hardest questions will be the ones with ambiguous answers, where the same words mean different things in different contexts.