What’s taken artificial intelligence (AI) so long? We invented AI capabilities such as first-order logical reasoning, natural-language processing, speech and vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it assumes we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain. We simply don’t know how it works.
What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.
What is machine learning? Simply put, it’s sophisticated software: algorithms that learn to do something on their own through repeated training on big data. In fact, big data is what’s making the difference in machine learning, along with great improvements in many of the above AI disciplines (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and consumable today). As a result, AI is undergoing a renaissance, developing new “cognitive” capabilities to help in our daily lives.
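To make “learning by repeated training” concrete, here is a toy sketch in Python: instead of hand-coding the rule y = 3x + 1, the program discovers the parameters itself by repeatedly adjusting them against example data (plain gradient descent on a linear model). The data, learning rate, and epoch count are illustrative choices, not from the original text; real systems use far larger data and far more sophisticated models.

```python
# Toy machine-learning sketch: fit y = w*x + b by repeated training
# (gradient descent) on example data, rather than by hand-coding the rule.

def train(data, epochs=1000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        # Gradients of mean squared error, averaged over the training data
        dw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        db = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Training examples generated from the hidden rule y = 3x + 1
data = [(x, 3 * x + 1) for x in range(10)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # learned parameters approach 3 and 1
```

The point is the shift in mindset: given enough examples, the program converges on behavior nobody explicitly programmed, which is exactly why the availability of big data changes what these decades-old algorithms can do.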
I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you were called a SW Knowledge Engineer, and you used symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, non-linear, and complex. Deciding which financial product you should buy, based on your risk tolerance, the amount you are willing to invest, and your personal objectives, is a typical problem we used to solve with AI.
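The financial-product example above is the classic shape of an expert-system problem: encode an expert’s heuristics as rules over a client’s profile. A minimal sketch of that style follows; the specific rules, thresholds, and product names are hypothetical, invented here purely for illustration.

```python
# Tiny rule-based sketch in the spirit of 1980s expert systems:
# recommend a financial product from a client's profile.
# All rules and product names below are hypothetical examples.

def recommend(risk_tolerance, amount, objective):
    """risk_tolerance: 'low'|'medium'|'high'; amount: dollars to invest;
    objective: 'preserve'|'income'|'growth'."""
    if risk_tolerance == "low" or objective == "preserve":
        return "government bonds"
    if objective == "income":
        return "dividend fund" if amount >= 10_000 else "bond fund"
    if risk_tolerance == "high" and objective == "growth":
        return "equity fund"
    return "balanced fund"  # fallback when no specific rule fires

print(recommend("low", 5_000, "growth"))    # government bonds
print(recommend("high", 50_000, "growth"))  # equity fund
```

Real systems of the era used inference engines (forward or backward chaining over rule bases) rather than nested conditionals, but the knowledge-encoding idea is the same: the “intelligence” lives in heuristic rules elicited from human experts.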
Fast-forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who’s never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and a few very niche business areas. The change, however, is largely about context: hardware- and software-scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
For those of us who write and think about the future of healthcare, the story of rapid and systemic change rocking the healthcare system is a recurrent theme. We usually point to the regulatory environment as the source of change. Laws like the Affordable Care Act and the HITECH Act are glaring disruptive forces, but what empowers these regulations to succeed? Perhaps the deepest cause of change affecting healthcare, and the most disruptive force, is the digitization of our clinical records. As we continue the switch to electronic charts, the force of the vast data being collected becomes increasingly obvious. One-fifth of the world’s data is purported to be administrative and clinical medical records. Recording medical observations, lab results, diagnoses, and care professionals’ orders in digital form is a game-changer.
Workflows are dramatically altered: caregivers spend much of their time using the system to record clinical facts and must balance these record-keeping responsibilities with more traditional bedside skills. They have easier access to more facts than before, which allows them to make better judgments. The increasing ability of caregivers to see what their colleagues are doing, or have done, across institutional boundaries is allowing for better coordination of care. The use of clinical data for research into what works and what is efficient is becoming pervasive. This research is conducted by combining records from several institutions, and by having individual institutions’ quality committees examine their own history of care to improve their institutional standards of care. The data represents a vast resource of evidence that allows great innovation.
Day one of the first Cognitive Computing Forum in San Jose, hosted by DATAVERSITY, gave a great perspective on the state of cognitive computing: promising, but early. I am here this week with my research director Leslie Owens and analyst colleague Diego LoGudice. Gathering research for a series of reports in our cognitive engagement coverage, we were able to debrief tonight on what we heard and the questions these insights raise. Here are some key takeaways:
1) The big data mindset of exploring and accepting failure is a heightened principle. Chris Welty, formerly at IBM and a key developer of Watson and its Jeopardy-winning solution, preached restraint. The analytic pursuit of perfect answers delivers no business value. Keep your eye on the prize and move the needle on what matters, even if your batting average is only .300 (30%). The objective is a holistic pursuit of optimization.
2) The algorithms aren't new; the platform capabilities and greater access to data are what allow us to put cognitive computing to production use. Every speaker, whether academic, vendor, or independent expert, agreed that the algorithms created decades ago are the same. Hardware and the volume of available data have made neural networks and other machine-learning algorithms both practical and more effective.
Recent news of a computer program that passed the Turing Test is a great achievement for artificial intelligence (AI). Pulling down the barrier between human and machine has been a decades-long holy grail pursuit. Right now, it is a novelty. In the near future, the implications are immense.
Which brings us to why you should care.
Earlier this week, House Majority Leader Eric Cantor suffered an enormous defeat in Virginia's Republican primary at the hands of Tea Party candidate David Brat. No one predicted this; the polls were wrong, by a long shot. Frank Luntz, a Republican pollster and communication advisor, offered his opinion in a New York Times op-ed on what was missing: face-to-face discussions and interviews with voters. He asserts that because data collection was limited to discrete survey questions, it lacked context. Information such as voter mood, perceptions, motives, and overall mindset was missing. Even if you collected quantitative data across a variety of sources, you wouldn't get to these predictive indicators.
The new wave of AI (the next two to five years) makes capturing this insight possible, and at scale. Marketing organizations are already using such capabilities to test advertising messages and positioning in focus group settings. Take it a step further: if pollsters could ingest full discussions from in-person research interviews and transcripts, street polls, social media, news discussions and interviews, and other sources where citizens' points of view bear directly or indirectly on voting, that rich content would translate into more accurate and insightful information.
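As a toy sketch of what "ingesting full discussions" for voter mood might look like at its most basic, here is a tiny lexicon-based sentiment scorer in Python. The lexicon and the sample transcripts are entirely hypothetical; production systems would use trained NLP models over far richer sources, but the principle of turning free text into a quantifiable mood signal is the same.

```python
# Toy sketch: score voter mood in free-text transcripts against a tiny
# hand-built sentiment lexicon. Lexicon and transcripts are hypothetical,
# for illustration only; real systems use trained NLP models.

POSITIVE = {"hope", "support", "trust", "optimistic"}
NEGATIVE = {"angry", "frustrated", "distrust", "worried"}

def mood_score(text):
    """Return (positive hits - negative hits) for one transcript."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

transcripts = [
    "I'm frustrated and angry with Washington.",
    "I support him and I'm optimistic about the district.",
    "Honestly, I distrust the polls and I'm worried.",
]
scores = [mood_score(t) for t in transcripts]
print(scores)  # [-2, 2, -2]
```

Aggregated over thousands of such sources, even a crude signal like this surfaces the mood and mindset indicators that discrete survey questions miss, which is precisely the gap Luntz identified.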