IBM's Watson And Its Implications For Smart Computing

Like many people connected with IBM as employees, customers, or analysts, I watched IBM's Watson beat two smart humans in three games of Jeopardy.  However, I was able to do so under more privileged conditions than sitting on my couch.  Along with my colleague John Rymer, I attended an IBM event in San Francisco at which two of the IBM scientists who had developed Watson provided background on Watson before, during commercial breaks in, and after the broadcast of the third and final Jeopardy game.  We learned a lot about the time, effort, and approaches that went into making Watson competitive at Jeopardy (including, in answer to John's question, that its code base was a combination of Java and C++).  This background made clear how impressive Watson is as a milestone in the development of artificial intelligence.  But it also made clear how much work remains to take the Watson technology and deploy it against the business problems IBM has identified in healthcare, customer service and call centers, and security.

The IBM scientists showed a scattergram plotting the percentage of Jeopardy questions that winning human contestants attempted against the percentage of those attempts they got right: these champions typically attempted 60% to 70% of the questions and answered 80% or more of those correctly.  They then showed line charts of Watson's performance on the same two variables over time.  Watson began well below this champions' zone, but month by month it moved higher, until by the time of the contest it was winning more than two-thirds of its test matches against past Jeopardy winners.  What struck me, though, was how long that training process took before Watson became competitive, not to mention the amount of computing and human resources IBM put behind the project.
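To make the chart's two axes concrete: the trade-off the scientists plotted is the one between how many questions a system attempts and how many of those attempts are right, which a confidence threshold controls.  Below is a minimal, hypothetical sketch of that mechanism in Java (the article notes Watson's code base included Java); the BuzzThresholdSketch class, the Candidate record, and all the numbers are invented for illustration and are not IBM's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy illustration of the precision-vs-percent-attempted trade-off.
// Everything here is hypothetical; it is not IBM's scoring code.
public class BuzzThresholdSketch {

    // One candidate answer: a confidence score and whether it is correct.
    record Candidate(double confidence, boolean correct) {}

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Candidate> clues = new ArrayList<>();

        // Simulate 1,000 clues where higher confidence makes a correct
        // answer more likely (a stand-in for a real scoring model).
        for (int i = 0; i < 1000; i++) {
            double confidence = rng.nextDouble();
            clues.add(new Candidate(confidence, rng.nextDouble() < confidence));
        }

        // Sweep the "buzz" threshold: the system answers only when its
        // confidence clears the bar.  Raising the bar means attempting
        // fewer questions but getting a higher share of them right.
        for (int t = 0; t <= 9; t++) {
            double threshold = t / 10.0;
            int attempted = 0, right = 0;
            for (Candidate c : clues) {
                if (c.confidence() >= threshold) {
                    attempted++;
                    if (c.correct()) right++;
                }
            }
            double pctAttempted = 100.0 * attempted / clues.size();
            double precision = attempted == 0 ? 0.0 : 100.0 * right / attempted;
            System.out.printf("threshold %.1f -> attempted %5.1f%%, precision %5.1f%%%n",
                    threshold, pctAttempted, precision);
        }
    }
}
```

Running the sweep shows precision climbing as the attempted percentage falls.  Watson's months of training, in effect, had to push the whole curve upward until it sat above the champions' zone of 80% right on 60% to 70% attempted.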
