IBM’s Watson and Analytics: Less Than It Seems, Maybe More Than It Will Seem

Updated: February 10, 2011

Deep Analysis of Deep Analysis

First, let's pierce through the hype to understand what, from my viewpoint, Watson is doing. Watson appears to build on a huge amount of "domain knowledge" amassed over the years at research centers such as GTE Labs, plus the enormous amount of text the Internet has placed in the public domain - that's its data. On top of these, it layers well-established natural-language processing, AI (both rules-based and machine-learning-based), querying, and analytics capabilities; its own "special sauce" is fine-tuning these for Jeopardy's answer-then-question interaction. Note that Watson must sometimes combine two or more knowledge domains to arrive at its response: given the clue "We call the first version of this an abacus" (history), it must respond "What is a calculator?" (electronics).
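To make the idea concrete, here is a toy sketch of that kind of pipeline - candidate answers generated from multiple knowledge domains, scored by how much evidence supports them, and phrased Jeopardy-style as a question. The domains, facts, and scoring rule are invented for illustration; this is emphatically not IBM's actual Watson architecture.

```python
# Toy Jeopardy-style responder: candidate generation across knowledge
# domains, simple evidence scoring, and a question-form response.
# All "knowledge" below is hypothetical illustration, not Watson's data.

DOMAINS = {
    "history": {"abacus": "calculator", "first": "calculator"},
    "electronics": {"device": "calculator", "transistor": "radio"},
}

def generate_candidates(clue):
    """Collect candidate answers from every domain that matches a clue word."""
    words = clue.lower().replace(".", "").split()
    candidates = {}
    for domain, facts in DOMAINS.items():
        for word in words:
            if word in facts:
                entity = facts[word]
                candidates.setdefault(entity, set()).add(domain)
    return candidates

def respond(clue):
    """Pick the candidate supported by the most domains; phrase it as a question."""
    candidates = generate_candidates(clue)
    if not candidates:
        return None
    best = max(candidates, key=lambda e: len(candidates[e]))
    return f"What is a {best}?"
```

Note that the winning candidate here is the one that two different domains (history and electronics) both support - a miniature version of the cross-domain combination described above.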

Nothing in this design suggests that Watson has made a giant leap in AI (or natural-language processing, or analytics). For 40 years and more, researchers have been building up AI rules, domains, natural-language translators, and learning algorithms - but progress toward passing a true Turing test, in which a human interlocutor can never tell that a computer is on the other side of the conversation, has been achingly slow. All that the Jeopardy challenge shows is that the computer can now provide short answers to a particular type of tricky question - using beyond-human amounts of data and processing parallelism.

Nor should we expect this situation to change soon. A key and fundamental insight of AI is that, faced with a shallow layer of knowledge above a vast sea of ignorance, the most effective learning strategy is to make mistakes and adjust your model accordingly. As a result, brute-force computation without good models doesn't get you to intelligence, models that attempt to approximate human learning fall far short of reality, and models that try to invent a new way of learning have turned out to be very inefficient. To get as far as it does, Watson draws on 40 years of mistake-driven improvements in all three approaches - which suggests it will take many years of further improvement, not just letting the present approach "learn" more, before we can seriously compare human and computer intelligence apples to apples.
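The mistake-driven strategy described above has a classic, minimal embodiment in the perceptron update rule: the model's weights are adjusted only when it answers wrong. This is a generic textbook sketch of that idea, not Watson's actual training procedure.

```python
# Minimal sketch of mistake-driven learning: a perceptron adjusts its
# weights only on misclassified examples. Generic illustration only.

def train_perceptron(examples, epochs=20, lr=1.0):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if score > 0 else -1
            if prediction != y:          # learn from the mistake...
                mistakes += 1
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
        if mistakes == 0:                # ...and stop once none remain
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable data (say, the logical AND of two inputs) this converges in a handful of epochs - and the point of the paragraph above is precisely that such simple error-correction, scaled up, is still a long way from anything resembling human learning.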

The next point is that Jeopardy is all about text data: not numbers, true, but not video, audio, or graphics (so-called "unstructured" data), either. The amount of text on Web sites is enormous, but it is dwarfed by the amount of other data from our senses inside and outside the business, and in our heads. In fact, even within the "semi-structured data" category to which Watson's Jeopardy data belongs, other types of information such as e-mails, text messages, and perhaps spreadsheets are now comparable in volume - although Watson could, to some extent, be extended to these with little additional effort. In any case, the name of the game in BI/analytics these days is to tap not only the text on Facebook and Twitter, but also the information inherent in the videos and pictures provided via Facebook, GPS locators, and cell phones. As a result, Watson is still a ways away from providing good unstructured "context" to analytics - which renders it far less useful to BI/analytics. And bear in mind that analysis of visual information in AI, as evidenced in areas such as robotics, is still in its infancy, used primarily in small doses to direct an individual robot.

As noted above, I see the immediate value of Watson's capabilities to the large enterprise (although I suppose the cloud could make it available to the SMB as well) as lying more in the area of cross-domain correlation in existing text databases, including archived e-mails. There, Watson could be used in historical and legal querying to do preliminary context analysis - to keep eDiscovery from flagging every reference to "nuking" one's competitors as a terrorist threat. Ex post facto analysis of help-desk interactions (one example IBM cites) may improve understanding of what the caller wants, but Watson will likely do nothing for user irritation at language or dialect barriers from offshoring, not to mention that it may encourage the kind of "interaction speedup" that the most recent Sloan Management Review suggests actually loses customers.
