This Time, AI Is Truly Here To Help Build Intelligent Applications

Diego Lo Giudice

What’s taken artificial intelligence (AI) so long? We invented AI capabilities like first-order logical reasoning, natural-language processing, speech/voice/vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it thinks we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain — we simply don’t know how it works.

What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.

What is machine learning? Simply put, sophisticated software algorithms that learn to do something on their own through repeated training on big data. In fact, big data is what’s making the difference in machine learning, along with great improvements in many of the AI disciplines listed above (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and more consumable today). As a result, AI is undergoing a renaissance, developing new “cognitive” capabilities to help in our daily lives.
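As a rough, minimal sketch of that idea (mine, not from the post): the program is never handed explicit rules; it infers them from labeled examples and then generalizes to examples it has never seen. The dataset, library (scikit-learn), and model choice below are illustrative assumptions only.

```python
# Minimal sketch of "learning from data" rather than being explicitly programmed.
# Requires scikit-learn (pip install scikit-learn); the digits dataset and the
# random forest model are illustrative choices, not from the original post.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled examples: 8x8 images of handwritten digits plus their true labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Training" means fitting the model to the examples it has seen.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The learned model is then evaluated on examples it was never trained on.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```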

Read more

My Three Assumptions For Why The Next Generation Of SW Innovation Will Be Cognitive!

Diego Lo Giudice

I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a software knowledge engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are undefined, nonlinear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
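To make that kind of problem concrete, here is a toy, rule-based sketch in the spirit of those early knowledge-based systems, written in Python rather than LISP or Prolog. The rules, thresholds, and product names are hypothetical illustrations, not taken from any real advisory system or from the original post.

```python
# Toy heuristic advisor in the expert-system style described above.
# Every rule, threshold, and product name here is a hypothetical example.

def recommend_product(risk_tolerance: str, amount: float, objective: str) -> str:
    """Apply hand-written heuristics, the way early knowledge-based systems did."""
    if risk_tolerance == "low":
        # Capital preservation dominates when the investor dislikes risk.
        return "government bonds" if objective == "preserve capital" else "balanced fund"
    if risk_tolerance == "high" and amount >= 50_000 and objective == "growth":
        return "equity portfolio"
    # Fallback heuristic when no specific rule fires.
    return "diversified index fund"

print(recommend_product("low", 20_000, "preserve capital"))  # government bonds
print(recommend_product("high", 100_000, "growth"))          # equity portfolio
```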

Fast-forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who has never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and a few very niche business areas. The change, however, is mostly about context: the hardware and software scale constraints are gone, and there are tons of data and knowledge available digitally (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.

Read more

Make no mistake: IBM’s Watson (and others) provides the *illusion* of cognitive computing

IBM has just announced that one of Australia’s “big four” banks, the ANZ, will adopt the IBM Watson technology in its wealth management division for customer service and engagement. Australia has always been an early adopter of new technologies, but I’d also like to think that we’re a little smarter and savvier than your average geek back in high school in 1982.

IBM’s Watson announcement is significant, not necessarily because of the sophistication of the Watson technology, but because of IBM's ability to successfully market the Watson concept.   

To take us all back a little, the term ‘cognitive computing’ emerged in response to the failings of what was once termed ‘artificial intelligence’. Though the underlying concepts have been around for 50 years or more, AI remains a niche, specialist market with limited applications and a significant trail of failed or aborted projects. That’s not to say we haven’t seen some sophisticated algorithmic systems evolve. There is already a good portfolio of large-scale, deep analytic systems developed in the areas of fraud, risk, forensics, medicine, physics, and more.

Read more