The big buzz in the world of game shows this year was not figuring out who was smarter than a middle-schooler, or which shiny briefcase held the most money, or even which contestant could look the silliest running, jumping and falling over an obstacle course. No, the real buzz was summed up in one word: Watson. And if you happened to catch an episode of Jeopardy! in front of your TV lift cabinet during February 14-16, you saw an IBM-built artificial intelligence system capable of answering questions (in the form of a question, of course) posed in natural human language.
Where IBM’s Deep Blue trounced world chess champion Garry Kasparov in 1997 at a game of finite possibilities (staggering as those possibilities were), Watson had to process the nuances of human language, weigh possible meanings, rate its confidence in its answer and buzz in – all in less than three seconds.
IBM and the Jeopardy! producers joined forces to pit Watson against the game show’s two winningest and best-known champions: Ken Jennings and Brad Rutter. Jennings holds the record for the longest championship streak, winning 74 straight games. Rutter is the all-time money champion, having earned $3.25 million without ever losing a Jeopardy! match – until now.
Watson broke that streak, besting both former champions in a two-game match that aired over three days: the first game was split across the first two broadcasts, and the second game filled the third. The final results were Watson in first (winning $1 million), Ken Jennings in second ($300,000) and Brad Rutter in third ($200,000).
More than five years in the making, Watson is a machine built from 90 servers with 2,880 POWER7 processor cores and 16 terabytes of RAM. It can understand slang, plays on words, double meanings and phrases previously thought to be understood only intuitively. Watson did stumble on some clues (short clues gave it particular trouble), and it made a few missteps in game play, such as repeating an incorrect answer an opponent had already given. It also thought Toronto was a U.S. city.
Mostly, though, it dissected questions (well, answers – this being Jeopardy!) into keywords and sentence fragments until it could formulate the most likely correct response. Programmers “fed” Watson millions of documents to prepare it, including dictionaries, encyclopedias and other reference material it could use to build its knowledge base. Watson was not connected to the Internet during the game, so it could rely only on what it already “knew”.
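To make that idea a little more concrete, here is a toy sketch in Python – with an invented two-entry knowledge base, and in no way a reflection of IBM’s actual DeepQA system – of the basic loop: extract keywords from a clue, score each candidate answer by overlap with its reference text, and report a confidence alongside the guess.

```python
# Toy question-answering sketch (illustrative only, not IBM's DeepQA).
# The "knowledge base" below is a hypothetical stand-in for the millions
# of documents Watson was fed before the match.
KNOWLEDGE_BASE = {
    "Toronto": "Toronto is the largest city in Canada, on Lake Ontario.",
    "Chicago": "Chicago is a U.S. city in Illinois; its largest airport is O'Hare.",
}

STOPWORDS = {"the", "a", "an", "is", "its", "in", "of", "for", "on", "this"}

def keywords(text):
    """Lowercase the text, strip punctuation, and drop stopwords."""
    words = (w.strip(".,;:!?'\"").lower() for w in text.split())
    return {w for w in words if w and w not in STOPWORDS}

def answer(clue):
    """Return (best_answer, confidence) for a clue.

    Confidence is the fraction of clue keywords found in the candidate's
    reference text -- a crude stand-in for the many scoring algorithms
    Watson actually combined before deciding whether to buzz in.
    """
    clue_kw = keywords(clue)
    scored = []
    for candidate, text in KNOWLEDGE_BASE.items():
        overlap = clue_kw & keywords(text)
        scored.append((len(overlap) / len(clue_kw), candidate))
    confidence, best = max(scored)
    return best, confidence

# Echoing the famous Final Jeopardy clue about U.S. cities:
guess, conf = answer("Its largest airport is named for a World War II hero")
```

With this tiny knowledge base the clue matches “Chicago” best, with a low confidence – a real system would use that confidence to decide whether buzzing in is worth the risk.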
IBM has said that Watson’s future could unleash a world of potential good, especially in medicine, where doctors could consult Watson to help diagnose ailments and suggest treatments. Until then, we may just be watching Watson on our flat-screen TVs in our pop-up TV cabinets as it makes its way through the game show circuit.