What’s taken artificial intelligence (AI) so long? We invented AI capabilities such as first-order logical reasoning, natural-language processing, speech, voice, and vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it thinks we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain. We simply don’t know how it works.
What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.
What is machine learning? Simply put, it is sophisticated software algorithms that learn to perform a task on their own through repeated training on big data. In fact, big data is what’s making the difference in machine learning, along with great improvements in many of the above AI disciplines (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and consumable today). As a result, AI is undergoing a renaissance, developing new “cognitive” capabilities to help in our daily lives.
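To make “learning by repeated training” concrete, here is a minimal sketch of one of the oldest machine-learning algorithms, a perceptron, trained on a toy dataset. The data, feature values, and parameters below are illustrative assumptions, not drawn from any system discussed in the article; real systems learn from far larger datasets, which is exactly where big data comes in.

```python
# Minimal machine-learning sketch: a perceptron learns a linear decision
# boundary by making repeated passes over labeled training examples and
# nudging its weights whenever it misclassifies one.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (feature vector, 0/1 label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):               # repeated training passes
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify a new point with the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy data: two small, clearly separable clusters (illustrative only).
X = [[1.0, 1.0], [1.5, 2.0], [4.0, 5.0], [5.0, 4.5]]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
```

Nobody wrote an explicit rule separating the two clusters; the separating boundary was induced from the examples, which is the essential shift from the hand-coded knowledge engineering of 25 years ago.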
I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence: I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you were called a SW knowledge engineer, and you used symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in building applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, nonlinear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
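The financial-product problem above gives a feel for how those expert systems worked: knowledge engineers encoded an expert’s heuristics as ordered if-then rules. Here is a hedged sketch in Python of the style of rule base one might have written in Prolog or LISP; the thresholds, product names, and rule ordering are purely illustrative assumptions, not taken from any real advisory system.

```python
# Expert-system-style heuristics: ordered if-then rules, most specific
# first, with a fallback when no specific rule fires. All rules and
# product names here are hypothetical examples.

def recommend_product(risk_tolerance, amount, objective):
    """Recommend a financial product by applying heuristic rules in order."""
    if risk_tolerance == "low" and objective == "preserve capital":
        return "government bonds"
    if risk_tolerance == "low":
        return "money-market fund"
    if risk_tolerance == "high" and amount >= 50_000 and objective == "growth":
        return "equity portfolio"
    if risk_tolerance == "high":
        return "index fund"
    return "balanced mutual fund"  # default when no specific rule applies
```

The contrast with machine learning is the point: here every rule is hand-authored by a knowledge engineer interviewing experts, so the system is only as good, and only as maintainable, as its rule base.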
Fast-forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who’s never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and a few very niche business areas. The change, however, is largely about the context: the hardware and software scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
For those of us who write and think about the future of healthcare, the story of rapid and systemic change rocking the healthcare system is a recurrent theme. We usually point to the regulatory environment as the source of change. Laws like the Affordable Care Act and the HITECH Act are glaring disruptive forces, but what empowers these regulations to succeed? Perhaps the deepest cause of change affecting healthcare, and the most disruptive force, is the digitization of our clinical records. As we continue to switch to electronic charts, the force of the vast data being collected becomes increasingly obvious. One-fifth of the world’s data is purported to be administrative and clinical medical records. Recording medical observations, lab results, diagnoses, and care professionals’ orders in digital form is a game-changer.
Workflows are dramatically altered: caregivers spend much of their time using the system to record clinical facts and must balance these record-keeping responsibilities with the more traditional bedside skills. They have easier access to more facts than before, which allows them to make better judgments. Caregivers’ growing ability to see what their colleagues are doing, or have done, across institutional boundaries allows for better coordination of care. The use of clinical data for research into what works and what is efficient is becoming pervasive: records are combined across several institutions, and the quality committees of individual institutions examine their own history of care to refine their institutional standards of care. The data represents a vast resource of evidence that enables great innovation.