IBM’s Watson and Analytics: Less Than It Seems, Maybe More Than It Will Seem

Updated: February 10, 2011

Deep Analysis of Deep Analysis

First, let's pierce through the hype to understand what, from my viewpoint, Watson is doing. It appears that Watson builds on a huge amount of "domain knowledge" amassed in the past at such research centers as GTE Labs, plus the enormous amount of text that the Internet has placed in the public domain - that's its data. On top of these, it layers well-established natural-language processing, AI (rules-based and computer-learning-based), querying, and analytics capabilities, with its own "special sauce" being to fine-tune these for a Jeopardy-type answer-question interaction. Note that sometimes Watson must combine two or more different knowledge domains in order to provide its question: "We call the first version of this an abacus (history). What is a calculator (electronics)?"
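To make that layering concrete, here is a deliberately toy sketch - not Watson's actual pipeline, and the mini knowledge base is entirely hypothetical - of the general idea: generate candidate answers from several knowledge domains, score each by how well the stored evidence matches the clue, and phrase the winner as a Jeopardy-style question.

```python
# Illustrative toy only (NOT Watson's code): cross-domain candidate
# generation and evidence scoring for a Jeopardy-style clue.

CLUE = "We call the first version of this an abacus"

# Hypothetical mini knowledge base: domain -> list of (entity, evidence terms)
KNOWLEDGE = {
    "history":     [("abacus", {"first", "version", "abacus"})],
    "electronics": [("calculator", {"abacus", "first", "calculator", "version"})],
}

def answer(clue, knowledge):
    clue_terms = set(clue.lower().split())
    best, best_score = None, -1
    for domain, facts in knowledge.items():
        for entity, evidence in facts:
            # Naive evidence score: term overlap between clue and stored
            # evidence, minus a penalty for merely echoing the clue itself.
            score = len(clue_terms & evidence) - (2 if entity in clue_terms else 0)
            if score > best_score:
                best, best_score = entity, score
    return f"What is a {best}?"

print(answer(CLUE, KNOWLEDGE))  # the "electronics" domain wins here
```

The point of the sketch is only the shape of the computation: no single domain "knows" the answer; the win comes from scoring evidence across domains.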

Nothing in this design suggests that Watson has made a giant leap in AI (or natural-language processing, or analytics). For 40 years and more, researchers have been building up AI rules, domains, natural-language translators, and learning algorithms - but progress toward passing a true Turing test, in which the human side of the interaction can never tell that a computer is on the other side, has been achingly slow. All that the Jeopardy challenge shows is that the computer can now provide one-word answers to a particular type of tricky question - using beyond-human amounts of data and processing parallelism.

Nor should we expect this situation to change soon. The key and fundamental insight of AI is that when faced with a shallow layer of knowledge above a vast sea of ignorance, the most effective learning strategy is to make mistakes and adjust your model accordingly. As a result, brute-force computations without good models don't get you to intelligence, models that attempt to approximate human learning fall far short of reality, and models that try to invent a new way of learning have turned out to be very inefficient. To get as far as it does, Watson uses 40 years of mistake-driven improvements in all three approaches, which suggests that it will take many years of further improvement - not just letting the present approach "learn" more - before we can seriously compare human and computer intelligence as apples to apples.
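The "make mistakes and adjust your model" strategy has a classic minimal embodiment: the perceptron, which updates its weights only when it misclassifies an example - that is, it learns exclusively from its mistakes. A sketch (illustrative of the principle, not of anything in Watson):

```python
# Minimal mistake-driven learner: the classic perceptron.
# Weights change ONLY on a misclassification.

def train_perceptron(examples, epochs=20):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        mistakes = 0
        for features, label in examples:
            score = bias + sum(w * x for w, x in zip(weights, features))
            predicted = 1 if score > 0 else -1
            if predicted != label:          # only a mistake triggers an update
                mistakes += 1
                weights = [w + label * x for w, x in zip(weights, features)]
                bias += label
        if mistakes == 0:                   # no mistakes left to learn from
            break
    return weights, bias

# Toy example: learn logical OR, which is linearly separable.
data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The limitation the paragraph describes shows up immediately: this kind of model converges only on problems its shape can represent, and scaling the same loop up does not by itself produce anything like human learning.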

The next point is that Jeopardy is all about text data: not numbers, true, but also not video, audio, or graphics (so-called "unstructured" data). The amount of text on Web sites is enormous, but it is dwarfed by the amount of other data from our senses inside and outside the business, and in our heads. In fact, even in the "semi-structured data" category to which Watson's Jeopardy data belongs, other types of information such as e-mails, text messages, and perhaps spreadsheets are now comparable in amount - although Watson could to some extent be extended to these with little effort. In any case, the name of the game in BI/analytics these days is to tap into not only the text on Facebook and Twitter, but also the information inherent in the videos and pictures provided via Facebook, GPS locators, and cell phones. As a result, Watson is still a ways away from providing good unstructured "context" to analytics - rendering it far less useful to BI/analytics. And bear in mind that analysis of visual information in AI, as evidenced in such areas as robotics, is still in its infancy, used primarily in small doses to direct an individual robot.

As noted above, I see the immediate value of Watson's capabilities to the large enterprise (although I suppose the cloud can make it available to the SMB as well) as lying more in the area of cross-domain correlation in existing text databases, including archived e-mails. There, Watson could be used in historical and legal querying to do preliminary context analysis - to avoid having eDiscovery treat every reference to nuking one's competitors as a terrorist threat. Ex post facto analysis of help desk interactions (one example that IBM cites) may improve understanding of what the caller wants, but Watson will likely do nothing for user irritation at the language or dialect barriers that come with offshoring, not to mention encouraging the kind of "interaction speedup" that the most recent Sloan Management Review suggests actually loses customers.
