You can't turn anywhere these days without artificial intelligence, machine learning, or cognitive computing jumping out at you. Our cars brake for us, park for us, and some are even driving us. Our movie lists are filled with Ex Machina, Her, and Lucy. The news covers the latest vendors and cool uses of the technology, minute by minute. Vendors are filling our voicemail and email with enticements. It's all so very cool!
But cool doesn't build a business. Results do.
Which brings me to the biggest barrier companies have in adopting artificial intelligence. Companies are asking the wrong questions:
What is artificial intelligence (or insert: machine learning or cognitive computing)?
In November, Forrester released its mobile predictions for 2016, highlighting how mobile will act as a catalyst for business transformation and explaining why the battle for mobile moments will redefine the vendor landscape.
Let’s now take a closer look at how mobile will impact marketing in 2016.
A year ago, Forrester argued that most brands would underinvest in mobile in 2015. This is likely to remain the case this year, since too many marketers still have a narrow view of mobile as a “sub-digital” medium and channel. This is good news for the 20% of marketers who told us they have the budget they need and for the 33% who said they know how to measure mobile ROI. In 2016, this growing minority of leading marketers will start to fully integrate mobile into their marketing strategies. These mature mobile marketers will measure the impact of mobile across channels, see a clear opportunity to differentiate their brands, and increase their investments in mobile initiatives. Here’s what else we expect to happen:
Integrating mobile into your marketing strategy will become a key differentiator. While most brands are trying to mobilize their ads, few are going the extra mile: serving their customers in their mobile moments by transforming the entire customer experience. Only those that do go that extra mile will differentiate their brands via mobile. Leaders will also start measuring the impact of mobile on offline channels and will end up allocating up to 20% of their marketing budgets to mobile.
You can't bring up semantics without someone inserting an apology for the geekiness of the discussion. If you're a data person like me, geek away! But for everyone else, it's a topic best left alone. Well, like every geek before them, the semantic geeks are now having their day — and may just rule the data world.
It begins with a seemingly innocent set of questions:
"Is there a better way to master my data?"
"Is there a better way to understand the data I have?"
"Is there a better way to bring data and content together?"
"Is there a better way to personalize data and insight to be relevant?"
Semantics discussions today are born out of the data chaos that our traditional data management and governance capabilities are struggling under. They're born out of the fact that even with the best big data technology and analytics being adopted, business stakeholder satisfaction with analytics has decreased by 21% from 2014 to 2015, according to Forrester's Global Business Technographics® Data And Analytics Survey, 2015. Innovative data architects and vendors realize that semantics is the key to bringing context and meaning to our information so we can extract those much-needed business insights, at scale, and more importantly, personalized.
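To make "bringing context and meaning to information" a little more concrete, here is a minimal sketch of the semantic approach: representing facts as subject-predicate-object triples so that context travels with the data and can be queried. All entity and predicate names here are hypothetical illustrations, not any vendor's actual model.

```python
# A minimal sketch of semantic triples: every fact carries its own context,
# so "meaning" becomes queryable rather than buried in column names.
# All names below are hypothetical illustrations.

triples = [
    ("customer:42", "hasName", "Ada Lovelace"),
    ("customer:42", "purchased", "product:SKU-9"),
    ("product:SKU-9", "isA", "RunningShoe"),
    ("RunningShoe", "subClassOf", "Footwear"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# What did customer 42 buy, and what kind of thing is it?
for _, _, item in query("customer:42", "purchased"):
    categories = [o for _, _, o in query(item, "isA")]
    print(item, "->", categories)
```

The point of the pattern is that "product:SKU-9 is a running shoe, which is footwear" is data, not application logic, so new relationships can be added without rewriting code.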
As companies get serious about digital transformation, we see investments shifting toward extensible software platforms used to build and manage a differentiated customer experience. My colleague John McCarthy has an excellent slide describing what's happening:
Before, tech management spent most of its time and budget managing a set of monolithic enterprise applications and databases. With an addressable market of a finite number of networked PCs, spending on the front end was largely an afterthought.
Today, applications must scale to millions, if not billions of connected devices while retaining a rich and seamless user experience. Infrastructure, in turn, must flex to meet these new specs. Since complete overhauls of the back end are a nonstarter for large enterprises with 30-plus years of investments in mainframes and legacy server systems, new investments gear toward the intermediary software platforms that connect digital touchpoints with enterprise applications and transaction systems.
At Forrester, we’ve been working to quantify some of the most viable software categories that exemplify this shift. A shortlist below:
· API management solutions: US CAGR 2015-2020: 22%.
· Public cloud platforms: Global CAGR 2015-2020: 30%. (Note: We have a forecast update in the works that segments the market into subcategories.)
What’s taken artificial intelligence (AI) so long? We invented AI capabilities like first-order logical reasoning, natural-language processing, speech/voice/vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it thinks we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain — we simply don’t know how it works.
What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.
What is machine learning? Simply put, sophisticated software algorithms that learn to do something on their own by repeated training using big data. In fact, big data is what’s making the difference in machine learning, along with great improvements in many of the above AI disciplines (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and consumable today). As a result, AI is undergoing a renaissance, developing new “cognitive” capabilities to help in our daily lives.
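To illustrate "learning by repeated training" in the simplest possible terms, here is a toy sketch: instead of programming the rule y = 2x + 1 explicitly, the program discovers it from example data via gradient descent. The numbers and variable names are illustrative only.

```python
# A toy illustration of machine learning: fit y ≈ w*x + b from example
# data by repeated training, rather than programming the rule directly.

data = [(x, 2.0 * x + 1.0) for x in range(10)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # start knowing nothing
lr = 0.01                # learning rate: how big each correction is

for _ in range(2000):    # repeated training passes over the data
    for x, y in data:
        err = (w * x + b) - y     # how wrong is the current guess?
        w -= lr * err * x         # nudge the parameters to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Real systems use far richer models and far more data, but the loop is the same: guess, measure the error, adjust, repeat.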
Software is getting smarter, thanks to predictive analytics, machine learning, and artificial intelligence (AI). Whereas the current generation of software is about enabling smarter decision-making for humans, we’re starting to see “invisible software” capable of performing tasks without human intervention.
One such example is x.ai, a software-based personal assistant that schedules meetings for you. With no user interface, you simply cc “Amy” on an email thread and she goes to work engaging with the recipient to find a date and optimal place to meet.
It’s not a perfectly automated system. AI trainers oversee Amy’s interactions and make adjustments on the fly. But over time, she becomes a great personal assistant who is sensitive to your meeting and communication preferences.
One can imagine Amy extending into new domains — taking on parts of sales/customer service operations or business processes like expense management and DevOps. Indeed, we’ll see a new generation of AI-powered apps, as predicted here.
I sat down with Steve Cowley, General Manager for IBM Watson, on Tuesday at IBM Insights to talk about Watson successes, challenges since the January launch, and what is in store. While the potential has always intrigued me, the initial use cases and message gave me more than a bit of pause: the daunting task of developing and training the corpus, the narrowness of the use cases, and the question of what this would actually cost. Jump ahead nine months and the IBM Watson world is in a very different place.
IBM is clearly in its market-building phase. It is as much about what IBM Watson is and how IBM overall is repositioning itself as it is about changing the business model for selling technology. However, it is easy to get negative very fast on this strategy, as seen in the tremors on Wall Street as IBM's stock has gone from a 52-week high of $199 to $164 at close on Friday 10/31, much of that happening in the past month since the earnings release. Wall Street may not like company uncertainty during transitional periods, but enterprise architects care about what will make their organizations successful, what will make development and management of technology easier, and how to keep costs from skyrocketing when new bright shiny objects come in. And that is where IBM is headed, with an eye toward changing the game.
IBM Watson delivers on information over technology.
Steve surprised me with this statement: "[With] traditional programmed systems, the system is at its best when it is deployed, because it is closest to the business need it was written for. Over time these systems get further and further away from the shifting business need, and so they either fall in effectiveness or require a great deal of maintenance." Steve pointed out that data is what is changing the game.*
I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a SW Knowledge Engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research was done on knowledge representation and on tools to support knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are undefined, non-linear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
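To make that expert-system style concrete, here is a minimal sketch of the kind of heuristic rule base those knowledge-engineering tools encoded, rendered in Python rather than Prolog or LISP. The rules, thresholds, and products are entirely hypothetical, not actual financial advice.

```python
# A toy expert-system sketch: if-then heuristics map an investor profile
# to a product suggestion, the way Prolog/LISP-era knowledge bases encoded
# human expertise. All rules and products are hypothetical.

def recommend(risk_tolerance, amount, objective):
    """Apply rules in order; the first matching rule fires, like a rule base."""
    rules = [
        (lambda r, a, o: r == "low" and o == "preserve capital",
         "government bonds"),
        (lambda r, a, o: r == "low",
         "money-market fund"),
        (lambda r, a, o: r == "high" and a >= 50_000,
         "growth-stock portfolio"),
        (lambda r, a, o: r == "high",
         "equity index fund"),
    ]
    for condition, product in rules:
        if condition(risk_tolerance, amount, objective):
            return product
    return "balanced mutual fund"  # default when no heuristic fires

print(recommend("low", 10_000, "preserve capital"))   # government bonds
print(recommend("high", 80_000, "grow aggressively")) # growth-stock portfolio
```

The brittleness the post goes on to describe is visible even here: every rule is hand-written, so the system only knows what its knowledge engineers thought to encode.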
Fast forward 25 years, and AI is back with a new name: it is now called cognitive computing. An old friend of mine, who’s never left the field, says, “AI has never really gone away, but it has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is largely about the context: hardware and software scale-related constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
Recent news of a computer program that passed the Turing Test is a great achievement for artificial intelligence (AI). Pulling down the barrier between human and machine has been a decades-long holy grail pursuit. Right now, it is a novelty. In the near future, the implications are immense.
Which brings us to why you should care.
Earlier this week the House majority leader, Eric Cantor, suffered an enormous defeat in Virginia's Republican primary at the hands of Tea Party candidate David Brat. No one predicted this - the polls were wrong, by a long shot. Frank Luntz, a Republican pollster and communication advisor, offered his opinion on what was missing in a New York Times op-ed piece: a lack of face-to-face discussions and interviews with voters. He asserts that while data collection was limited to discrete survey questions, what it lacked was context. Information such as voter mood, perceptions, motives, and overall mindset was missing. Even if you collected quantitative data across a variety of sources, you wouldn't get to these prescient indicators.
The new wave of AI (the next two to five years) makes capturing this insight possible, and at scale. Marketing organizations are already using such capabilities to test advertising messages and positioning in focus group settings. But if you took this a step further and allowed pollsters to ingest full discussions in person or through transcripts of research interviews, street polls, social media, news discussions and interviews, and other sources where citizen points of view manifest directly and indirectly in voting, that rich content would translate into more accurate and insightful information.
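The simplest version of mining mood from free-form voter commentary can be sketched in a few lines: score each comment against a sentiment lexicon. Real systems use trained language models rather than word lists, and the lexicon and comments below are purely illustrative.

```python
# A toy sketch of extracting "mood" signals from free-form comments at
# scale -- the kind of context Luntz says discrete survey questions miss.
# The lexicon and example comments are purely illustrative.

MOOD_LEXICON = {
    "angry": -2, "frustrated": -2, "worried": -1,
    "hopeful": 1, "excited": 2, "proud": 2,
}

def mood_score(text):
    """Sum lexicon weights for the mood words present in a comment."""
    words = text.lower().split()
    return sum(MOOD_LEXICON.get(w, 0) for w in words)

comments = [
    "I am frustrated and worried about the economy",
    "feeling hopeful and excited about the new candidate",
]

for c in comments:
    print(mood_score(c), c)  # negative = sour mood, positive = upbeat
```

Even a crude score like this, aggregated over thousands of transcripts and posts, surfaces a mood trend that a fixed-choice survey never asks about.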
IBM's acquisition of Cognea, a startup that creates virtual assistants with multiple personalities, further reinforces that voice is not enough for artificial intelligence. You need personality.
I for one cheer IBM's investment, because, to be honest, IBM Watson's Jeopardy voice was a bit creepy. What has made Apple's Siri intriguing and personable, even if not always an effective capability, is the sultry sound of her voice and, at times, the hilarity of Siri's responses. However, if you were like me and changed from the female to the male voice out of curiosity, the personality of the male Siri was disturbing (the first time I heard it I jumped). Personality is what you relate to.
The impression of intelligence is a function of what is said and how it is delivered. Think about how accents influence our perception of people. It is why news media personalities work hard to refine and master a Midwestern accent. And how people present themselves in professional situations says a lot about whether you can trust their judgment. As much as I love my hometown of Boston, our native accent and sometimes cold personalities leave much to be desired for the rest of the country. And we have Harvard and MIT! Oh so smart, maybe, but some feel we are not always easy to connect with.