I am just back from the first ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you were called a SW Knowledge Engineer, and you used symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in building applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, non-linear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
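To make the idea concrete, here is a minimal sketch of the kind of heuristic, rule-based reasoning those early systems encoded. The rules, thresholds, and product names below are invented for illustration only; real knowledge bases were far richer and were typically written in Prolog or LISP rather than Python.

```python
def recommend_product(risk_tolerance: str, amount: float, objective: str) -> str:
    """Apply simple heuristic rules to suggest a financial product.

    All rules here are hypothetical, illustrating rule-based heuristics,
    not actual financial advice.
    """
    if risk_tolerance == "low":
        # Conservative investors: prioritize capital preservation
        if objective == "preserve capital":
            return "certificate of deposit"
        return "bond fund"
    if risk_tolerance == "high" and amount >= 10_000:
        # Aggressive investors with enough capital to diversify
        return "equity fund"
    # Fallback heuristic when no specific rule fires
    return "balanced fund"

print(recommend_product("low", 5_000, "preserve capital"))  # certificate of deposit
print(recommend_product("high", 25_000, "growth"))          # equity fund
```

The point of the sketch is the style of reasoning: explicit, human-authored rules that fire on partial, qualitative information rather than a statistical model trained on data.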
Fast forward 25 years, and AI is back with a new name: cognitive computing. An old friend of mine, who has never left the field, says, “AI has never really gone away, but it has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is largely about context: the hardware and software scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
The recent news of a computer program passing the Turing Test is a great achievement for artificial intelligence (AI). Pulling down the barrier between human and machine has been a decades-long holy-grail pursuit. Right now, it is a novelty. In the near future, the implications are immense.
Which brings us to why you should care.
Earlier this week the House majority leader, Eric Cantor, suffered an enormous defeat in Virginia's Republican primary to Tea Party candidate David Brat. No one predicted this; the polls were wrong, by a long shot. Frank Luntz, a Republican pollster and communication advisor, offered his opinion in a New York Times Op-Ed piece on what was missing: face-to-face discussions and interviews with voters. He asserts that because data collection was limited to discrete survey questions, it lacked context. Information such as voter mood, perceptions, motives, and overall mindset was missing. Even if you collected quantitative data across a variety of sources, you would not get to these leading indicators.
The new wave of AI (the next two to five years) makes capturing this insight possible, and at scale. Marketing organizations are already using such capabilities to test advertising messages and positioning in focus-group settings. But take this a step further: allow pollsters to ingest full discussions from in-person research interviews or transcripts, street polls, social media, news discussions and interviews, and other sources where citizen points of view relate directly and indirectly to voting, and that rich content translates into more accurate and insightful information.
IBM's acquisition of Cognea, a startup that creates virtual assistants with multiple personalities, further reinforces that voice is not enough for artificial intelligence. You need personality.
I for one cheer IBM's investment, because, to be honest, IBM Watson's Jeopardy voice was a bit creepy. What has made Apple's Siri intriguing and personable, even when not especially effective, is the sultry sound of her voice and the occasional hilarity of her responses. However, if, like me, you switched from the female to the male voice out of curiosity, the personality of male Siri was disturbing (the first time I heard it, I jumped). Personality is what you relate to.
The impression of intelligence is a function of what is said and how it is delivered. Think about how accents influence our perception of people; it is why news media personalities work hard to refine and master a Midwestern accent. And how people present themselves in professional situations says a lot about whether you can trust their judgment. As much as I love my home town of Boston, our native accent and sometimes cold personalities leave much to be desired by the rest of the country. And we have Harvard and MIT! Oh so smart, maybe, but some feel we are not always easy to connect with.
It looks like the beginning of a new technology hype cycle for artificial intelligence (AI). The media has started flooding the news with product announcements, acquisitions, and investments. The story is how AI is capturing the attention of tech and investor giants such as Google, Microsoft, and IBM. Add to that the release of the movie ‘Her’, about a man falling for his virtual assistant modeled after Apple’s Siri (I think they got the idea from The Big Bang Theory, when Raj falls in love with Siri), and you know geek-dom has begun its journey toward mainstream and cool. The buzzwords are great too: cognitive computing, deep learning, AI2.
For those who started their careers in AI and left in disillusionment (Andrew Ng confessed to this, yet jumped back in), and for many data scientists today, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to Apple’s Siri, whose ability to listen and respond to requests is adequate at best and more often frustrating. Or to IBM Watson’s win on Jeopardy as data loading and brute-force programming. From their perspective, the real value is the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
First, let’s set aside what you heard about financial puts and takes. Don’t try to decipher the geek speak of what new AI is compared to old AI. Let’s talk about what is on the horizon that will impact your business.
New AI breaks the current rule that machines must be better than humans: smarter, faster analysts, or able to manufacture things better and cheaper.
On January 9, 2014, IBM launched its first new business unit in 19 years to bring Watson, the machine that beat two Jeopardy champions in 2011, to the rest of us. IBM posits that Watson marks the start of a third era in computing, one that began with manual tabulation, progressed to programmable systems, and has now become cognitive. Cognitive computing listens, learns, converses, and makes recommendations based on evidence.
IBM is placing big bets and big money, $1 billion, on transforming computer interaction from tabulation and programming to deep engagement. If they succeed, our interaction with technology will become truly personal, through natural conversations that are suggestive, supportive, and, as Terry Jones of Kayak explained, "make you feel good" about the experience.
There are still hurdles for IBM and for organizations: expense, complexity, information access, coping with ambiguity and context, the supervision of learning, and implications of the technology's suggestions that are unrecognized today. To work, the ecosystem has to be open and communal. Investment is needed beyond the platform, in applications and devices that deliver on Watson's value. IBM's commitment and leadership are in place. The question is whether IBM and its partners can scale Watson beyond a complex custom solution into a truly transformative approach to business and our way of life.
Forrester believes that cognitive computing has the potential to address important problems that are unmet by today’s advanced analytics solutions. Though the road ahead is unmapped, IBM has now elevated its commitment to bringing cognitive computing to life through this new business unit, with the help of one third of its research organization, an ecosystem of partners, and pioneer companies willing to teach their private Watsons.