Much has been written about how artificial intelligence (AI) will eventually put white-collar workers out of a job. Will robots soon be able to do what programmers do best, namely write software programs? Actually, if you are or were a developer, you have probably already written or used software that can generate other software. That's called code generation. In the past, it was done through "next"-generation programming languages (second-, third-, fourth-, or even fifth-generation languages); today, the same idea lives on in low-code IDEs. Java, C, and C++ geeks have also long been turning high-level graphical models like UML or BPML into code. But that's not what I am talking about. I am talking about a robot (or bot), an AI software system that, given a business requirement in natural language, can write the code to implement it, or even come up with its own idea and write a program for it.
Pure AI is true intelligence that can mimic or exceed the intelligence of human beings. It is still a long way off, if it can ever be achieved at all. But what if AI became pure, able to perceive, think, act, and even replicate as we do? Look to humanity for the answer. Humanity has been both beautiful and brutal:
The beauty of ingenuity, survival, exploration, art, and kindness.
Artificial Intelligence (AI) is not one big, specific technology. Rather, it comprises one or more building-block technologies. So, to understand AI, you have to understand each of these nine building-block technologies. Now, you could argue that there are more technologies than the ones listed here, but any additional technology can fit under one of these building blocks. This is a follow-on to my post Artificial Intelligence: Fact, Fiction, How Enterprises Can Crush It.
Here are the nine pragmatic AI technology building blocks that enterprises can leverage now:
■ Knowledge engineering. Knowledge engineering is a process to understand and then represent human knowledge in data structures, semantic models, and heuristics (rules). AD&D pros can embed this engineered knowledge in applications to solve complex problems that are generally associated with human expertise. For example, large insurers have used knowledge engineering to represent and embed the expertise of claims adjusters to automate the adjudication process. IBM Watson Health uses engineered knowledge in combination with a corpus of information that includes over 290 medical journals, textbooks, and drug databases to help oncologists choose the best treatment for their patients.
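To make the claims-adjudication example concrete, here is a minimal sketch of how engineered knowledge can be represented as explicit, inspectable heuristics rather than buried in application logic. The rules, field names, and thresholds below are invented for illustration; they are not IBM's or any insurer's actual policy:

```python
# Hypothetical sketch: adjuster expertise captured as an ordered list of
# (condition, outcome) rules that a program evaluates against a claim.

def adjudicate(claim):
    """Return a (decision, reason) pair for a claim record (a dict)."""
    rules = [
        (lambda c: not c["policy_active"],
         ("deny", "policy lapsed")),
        (lambda c: c["amount"] > c["coverage_limit"],
         ("refer", "amount exceeds coverage limit")),
        (lambda c: c["amount"] <= 1_000 and c["claims_this_year"] == 0,
         ("approve", "low amount, clean history")),
    ]
    # Fire the first rule whose condition matches, mimicking how a
    # human expert applies heuristics in priority order.
    for condition, outcome in rules:
        if condition(claim):
            return outcome
    return ("refer", "no rule matched; route to human adjuster")

claim = {"policy_active": True, "amount": 450,
         "coverage_limit": 10_000, "claims_this_year": 0}
print(adjudicate(claim))  # ('approve', 'low amount, clean history')
```

The design point is that the knowledge lives in a data structure the business can review and change, which is what distinguishes knowledge engineering from ordinary hard-coded logic.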
Forrester surveyed business and technology professionals and found that 58% of them are researching AI, but only 12% are using AI systems. This gap reflects growing interest in AI, but little actual use at this time. We expect enterprise interest in, and use of, AI to increase as software vendors roll out AI platforms and build AI capabilities into applications. Enterprises that plan to invest in AI expect to improve customer experiences, improve products and services, and disrupt their industry with new business models.
But the burning question is: how can your enterprise use AI today to crush it? To answer this question, we first must bring clarity to the nebulous definition of AI. Let's break it down further:
■ “Artificial” is the opposite of organic. Artificial simply means person-made versus occurring naturally in the universe. Computer scientists, engineers, and developers research, design, and create a combination of software, computers, and machines to manifest AI technology.
■ “Intelligence” is in the eye of the beholder. Philosophers will have job security for a very long time trying to define intelligence precisely. It is tough to define because we humans routinely assign intelligence to all manner of things, including well-trained dachshunds, self-driving cars, and “intelligent” assistants such as Amazon Echo. Intelligence is relative. For AI purists, intelligence is more akin to human abilities: the ability to perceive one's environment, take actions that satisfy a set of goals, and learn from both successes and failures. Intelligence varies greatly among humans, and so too does it vary among AI systems.
That is exactly what Forrester wants to find out: is there something behind the AI and Cognitive Computing hype? As my research directors ask, "Is there a there there?"
AI and Cognitive Computing have captured the imagination and interest of organizations large and small, but does anyone really know how to bring this new capability in and get value from it? Will AI and Cognitive Computing really change businesses and consumer experiences? And the bigger question: WHEN will this happen?
It is time to roll up our sleeves and look beyond conversations, vendor pitches, and media coverage to really define what AI and Cognitive Computing mean for businesses: are businesses ready, where will they invest, who will they turn to to build these innovative solutions, and what benefits will result? As such, Forrester launched its Global Artificial Intelligence Survey and is reaching out to you, the executives, data scientists, data analysts, developers, architects, and researchers, to put a finger on the pulse. We would appreciate your taking a little time out of your day to tell us your point of view.
As a thank you, you will receive a complimentary summary report of the findings.
If you have a great story to share that provides a perspective on what AI and Cognitive Computing can do and what benefits they have provided your company, and you can share your learnings and best practices, we are also recruiting for interviews.
Simply contact our rock star researcher, Elizabeth Cullen, to schedule 30 minutes. email@example.com
I am just back from the first ever Cognitive Computing Forum organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a software knowledge engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research was done on knowledge representation and on tools to support knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, nonlinear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
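The financial-product problem above can be sketched as a small set of heuristic rules. This is a hypothetical Python rendering of the kind of logic that was once written in LISP or Prolog; the product names and thresholds are invented for illustration:

```python
# Hypothetical sketch of heuristic product selection: map an investor
# profile to a product using simple, human-readable rules.

def recommend(risk_tolerance, amount, horizon_years):
    """Pick a product from risk tolerance, amount, and time horizon."""
    # Conservative investors and short horizons favor capital preservation.
    if risk_tolerance == "low" or horizon_years < 3:
        return "money_market_fund"
    # Moderate risk: split on how much is being invested.
    if risk_tolerance == "medium":
        return "balanced_fund" if amount < 50_000 else "bond_ladder"
    # High tolerance with a long horizon favors equities.
    return "equity_index_fund"

print(recommend("medium", 20_000, 10))  # balanced_fund
```

Real systems of that era chained many such rules with explanation facilities ("why did you recommend this?"), which is exactly what made them feel intelligent to users.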
Fast forward 25 years, and AI is back with a new name: it is now called cognitive computing. An old friend of mine, who has never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is heavily about the context: hardware- and software-scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
At a CIO roundtable that Forrester held recently in Sydney, I presented one of my favourite slides (originally seen in a deck from my colleague Ted Schadler) about what has happened regarding technology since January 2007, a little over five years ago. The slide goes like this:
Source: Forrester Research, 2012
This makes me wonder: what will the next five years hold for us? Forecasts tend to assume that most things remain the same, and I bet in 2007 few people saw all of these changes coming. What unforeseen changes might we see?
Will the whole concept of the enterprise disappear as barriers to entry disappear across many market segments?
Will the next generation reject the “public persona” that is typical in the Facebook generation and perhaps return to “traditional values”?
How will markets respond to the aging consumer in nearly every economy?
How will environmental concerns play out in consumer and business technology purchases and deployments?
How will the changing face of cities change consumer behaviors and demands?
Will artificial intelligence (AI) technologies and capabilities completely redefine business?
OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format. But rather than try to itemize all the news and come up with a list of a dozen or more interesting things, I decided instead to pick the best and the worst: events and developments that show the amazing range of the technology business, its potential and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:
The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep computing project, Watson, up against some of the best players in the world in a game of Jeopardy. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural-language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should: seeing the IBM system fluidly answer very tricky questions is amazing. There is no sense that it is parsing the question and then sorting through 200 to 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as the oldest and strongest technology company and showed us the potential for integrating computers into untapped new solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals in diagnostic procedures.