To be blunt, if you miss this event, you’ll be sorry. Sure, there are loads of marketing conferences out there, but Forrester’s Forums clear the clutter and help you focus on the issues that matter most to your success. Last year, we told you that we're in a post-digital world now and that marketing must adapt to new rules. This year, on April 5-7, we'll show you exactly how to do that and more. Whether you’re developing and refining your marketing strategy to engage today’s empowered consumer or you’re planning the next investment in your Martech application portfolio, Forrester’s Consumer Marketing Forum will be the smartest investment of time that you’ll make this year. Here are just a few highlights:
Learn exactly how consumers’ behaviors are changing. Analyst Anjali Lai will share Forrester's Empowered Customer segmentation.
Discover how to avoid the illusion of insights. VP and Research Director Sri Sridharan will show you how to avoid potential pitfalls in your quest to become an insights-driven business.
Reveal what really matters in Martech and Adtech. VP and Principal Analyst Joe Stanhope will bring clarity to the chaos of an unhealthy technology ecosystem.
Recently, the largest annual get-together of the mobile industry, Mobile World Congress (MWC), took place in Barcelona. In my opinion, the biggest themes at MWC 2017 that are relevant for enterprise customers were the internet of things (IoT), artificial intelligence (AI), platforms, collaboration, and connectivity. These themes underline how mobility is becoming part of the broader digital transformation initiative; I discuss this shift in a separate blog and report. MWC provided several valuable insights for business and technology leaders looking to align their mobile strategies with their digital strategies:
-> Not everything that claims to be AI is true AI. Many vendors that claimed during MWC to be AI-proficient are in fact unable to deliver true machine-learning solutions that generate transformative customer and operational insights. Most solutions branded as AI at MWC rely on preprogrammed responses and statistics rather than machine learning.
IBM hosted an artificial intelligence (AI) event at its Munich Watson IoT HQ, where it underlined its claim as a leading global AI and internet-of-things (IoT) platform provider in the enterprise context. AI and the IoT are both very important topics for enterprise users. However, there remains some uncertainty among enterprises regarding the exact benefits that both AI and IoT can generate and how businesses should prepare for the deployment of AI and IoT in their organizations.
One year into the launch of its Munich-based Watson IoT headquarters, IBM invited about one thousand customers to share an update of its AI and IoT activities to date. The IBM “Genius of Things” Summit presented interesting insights for both AI and IoT deployments. It underlined that IBM is clearly one of the leading global AI and IoT platform providers in the enterprise context. Some of the most important insights for me were that:
AI solutions require a partner ecosystem. IBM is well aware that it cannot provide IoT services on its own. For this reason, IBM is tapping into its existing partner ecosystem. Those partners are not only other vendors; IBM’s ecosystem partnership approach also embraces customers such as Schäffler, Airbus, Vaillant, and Tesco. The event demonstrated how far IBM has matured in living and breathing customer partnerships in the IoT solutions space. For instance, IBM’s cooperation with Visa on secure payment experiences for any device connected to the IoT is an example of a new quality of ecosystem partnership.
Artificial intelligence (AI) is real, albeit maturing slowly. You experience it when you talk to Alexa, when you see a creepily targeted online ad, and when Netflix turns you on to Stranger Things. Oh yeah, and that self-driving car over there is AI super-powered! AI is indeed cool, but many are scared about how it may ultimately impact society. Stephen Hawking, Elon Musk, and even the Woz warned that "...artificial intelligence can potentially be more dangerous than nuclear war." In a nutshell, they are concerned about AI that may evolve to outsmart humans and kill people - a valid concern. But I have another, more terrifying concern that would likely be an insidious precursor to runaway, killer AI.
Much has been written about how artificial intelligence (AI) will eventually put white-collar workers out of a job. Will robots soon be able to do what programmers do best — i.e., write software programs? Actually, if you are or were a developer, you’ve probably already written or used software programs that can generate other software programs. That’s called code generation; in the past, it was done through “next”-generation programming languages (second-, third-, fourth-, or even fifth-generation languages), which today are called low-code IDEs. Java, C, and C++ geeks have also been turning high-level graphical models like UML or BPML into code. But that’s not what I am talking about: I am talking about a robot (or bot) or AI software system that, given a business requirement in natural language, can write the code to implement it — or even come up with its own idea and write a program for it.
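Classic code generation, the kind developers have been doing for decades, can be illustrated with a toy sketch in Python. The spec format, function name, and fields below are invented for illustration; real low-code IDEs and model-to-code tools are far more sophisticated:

```python
# Toy code generator: turns a tiny declarative "spec" into a runnable
# Python function. A hypothetical sketch of classic code generation,
# not the natural-language-to-code AI discussed above.

SPEC = {"name": "total_price", "fields": ["net", "tax"], "op": "+"}

def generate(spec):
    args = ", ".join(spec["fields"])
    body = f" {spec['op']} ".join(spec["fields"])
    source = f"def {spec['name']}({args}):\n    return {body}\n"
    namespace = {}
    exec(source, namespace)  # compile and load the generated source
    return source, namespace[spec["name"]]

source, total_price = generate(SPEC)
print(source)
print(total_price(100, 19))  # -> 119
```

The generated function behaves like hand-written code; the interesting leap the post describes is replacing the rigid spec with a natural-language requirement.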
Pure AI is true intelligence that can mimic or exceed the intelligence of human beings. It is still a long way off, if it can ever be achieved at all. But what if AI became pure — could perceive, think, act, and even replicate as we do? Look to humanity for the answer. Humanity has been both beautiful and brutal:
The beauty of ingenuity, survival, exploration, art, and kindness.
Artificial intelligence (AI) is not one big, specific technology. Rather, it comprises one or more building block technologies. So, to understand AI, you have to understand each of these nine building block technologies. Now, you could argue that there are more technologies than the ones listed here, but any additional technology can fit under one of these building blocks. This is a follow-on to my post Artificial Intelligence: Fact, Fiction, How Enterprises Can Crush It.
Here are the nine pragmatic AI technology building blocks that enterprises can leverage now:
■ Knowledge engineering. Knowledge engineering is a process to understand and then represent human knowledge in data structures, semantic models, and heuristics (rules). AD&D pros can embed this engineered knowledge in applications to solve complex problems that are generally associated with human expertise. For example, large insurers have used knowledge engineering to represent and embed the expertise of claims adjusters to automate the adjudication process. IBM Watson Health uses engineered knowledge in combination with a corpus of information that includes over 290 medical journals, textbooks, and drug databases to help oncologists choose the best treatment for their patients.
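To make the idea concrete, here is a minimal sketch of engineered knowledge expressed as heuristic rules, in the spirit of the claims-adjudication example above. The rules, thresholds, and field names are my own illustrative assumptions, not any insurer's actual logic:

```python
# Engineered knowledge as ordered heuristic rules: each rule pairs a
# condition on a claim with an adjudication action. All thresholds and
# actions here are hypothetical.

RULES = [
    (lambda c: c["amount"] <= 500 and not c["prior_fraud"], "auto-approve"),
    (lambda c: c["amount"] > 10_000, "escalate to senior adjuster"),
    (lambda c: c["prior_fraud"], "manual fraud review"),
]

def adjudicate(claim):
    # First matching rule wins, mirroring how an expert applies heuristics.
    for condition, action in RULES:
        if condition(claim):
            return action
    return "standard manual review"

print(adjudicate({"amount": 300, "prior_fraud": False}))     # -> auto-approve
print(adjudicate({"amount": 15_000, "prior_fraud": False}))  # -> escalate to senior adjuster
```

Real knowledge-engineered systems add semantic models and much richer rule bases, but the core pattern of encoding expert judgment as explicit, inspectable rules is the same.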
Forrester surveyed business and technology professionals and found that 58% of them are researching AI, but only 12% are using AI systems. This gap reflects growing interest in AI, but little actual use at this time. We expect enterprise interest in, and use of, AI to increase as software vendors roll out AI platforms and build AI capabilities into applications. Enterprises that plan to invest in AI expect to improve customer experiences, improve products and services, and disrupt their industry with new business models.
But the burning question is: How can your enterprise use AI today to crush it? To answer this question, we first must bring clarity to the nebulous definition of AI. Let’s break it down further:
■ “Artificial” is the opposite of organic. Artificial simply means person-made versus occurring naturally in the universe. Computer scientists, engineers, and developers research, design, and create a combination of software, computers, and machines to manifest AI technology.
■ “Intelligence” is in the eye of the beholder. Philosophers will have job security for a very long time trying to define intelligence precisely. That’s because we humans routinely assign intelligence to all manner of things, including well-trained dachshunds, self-driving cars, and “intelligent” assistants such as Amazon Echo. Intelligence is relative. For AI purists, intelligence is more akin to human abilities: the ability to perceive one’s environment, take actions that satisfy a set of goals, and learn from both successes and failures. Intelligence varies greatly among humans, and so too does it vary among AI systems.
That is exactly what Forrester wants to find out - is there something behind the AI and Cognitive Computing hype? Or, as my research directors ask, "Is there a there there?"
AI and Cognitive Computing have captured the imagination and interest of organizations large and small, but does anyone really know how to bring this new capability in and get value from it? Will AI and Cognitive Computing really change businesses and consumer experiences? And the bigger question - WHEN will this happen?
It is time to roll up our sleeves and look beyond conversations, vendor pitches, and media coverage to really define what AI and Cognitive Computing mean for businesses: Are businesses ready? Where will they invest? Who will they turn to to build these innovative solutions? And what benefits will result? As such, Forrester launched its Global Artificial Intelligence Survey and is reaching out to you - executives, data scientists, data analysts, developers, architects, and researchers - to put a finger on the pulse. We would appreciate your taking a little time out of your day to tell us your point of view.
As a thank you, you will receive a complimentary summary report of the findings.
If you have a great story to share that provides a perspective on what AI and Cognitive Computing can do and what benefits they have provided your company, and you can share your learnings and best practices, we are also recruiting for interviews.
Simply contact our rock star researcher, Elizabeth Cullen, to schedule 30 minutes. firstname.lastname@example.org
I am just back from the first ever Cognitive Computing Forum organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI): I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a SW knowledge engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research was done on knowledge representation and on tools to support knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are undefined, nonlinear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
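The financial-product question is a classic heuristic problem, and the flavor of those old rule-based systems can be sketched in a few lines of modern Python. The products, thresholds, and rule order below are invented for illustration, not actual investment advice:

```python
# Heuristic product selection: ordered rules encoding (hypothetical)
# expert judgment over risk tolerance, investment amount, and objective,
# much as an early expert system would have done in Prolog or LISP.

def recommend(risk_tolerance, amount, objective):
    if risk_tolerance == "low":
        return "government bond fund"
    if objective == "income" and amount >= 50_000:
        return "dividend equity fund"
    if risk_tolerance == "high" and objective == "growth":
        return "emerging-markets equity fund"
    return "balanced mixed fund"  # fallback when no heuristic fires

print(recommend("low", 10_000, "growth"))   # -> government bond fund
print(recommend("high", 20_000, "growth"))  # -> emerging-markets equity fund
```

The point is not the specific rules but the shape of the solution: when a problem is undefined and nonlinear, explicit heuristics stand in for an algorithmic answer.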
Fast forward 25 years, and AI is back with a new name: it is now called cognitive computing. An old friend of mine, who’s never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is heavily about the context: hardware- and software-scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.