We've been talking about Adaptive Intelligence (AI) for a while now. As a refresher, AI is the real-time, multidirectional sharing of data to derive contextually appropriate, authoritative knowledge that helps maximize business value.
Increasingly in inquiries, workshops, FLB sessions, and advisories, we hear from our customer insights (CI) clients that developing the capabilities required for adaptive intelligence would actually help them solve a lot of other problems, too. For example:
A systematic data innovation approach encourages knowledge sharing throughout the organization, reduces data acquisition redundancies, and brings energy and creativity to the CI practice.
A good handle on data origin kickstarts your marketing organization's big data process by providing a well-audited foundation to build upon.
Better data governance and data controls improve your privacy and security practices by ensuring cross-functional adoption of the same set of standards and processes.
Better data structure puts more data in the hands of analysts and decision-makers, in the moment and within the systems of need (e.g., campaign management tools, content management systems, customer service portals, and more).
More data interoperability enables channel-agnostic customer recognition and the ability to ingest novel forms of data -- such as preference and wearables data -- that can vastly improve your ability to deliver great customer experiences.
Management consultants and business intelligence, analytics, and big data system integrators often use the terms accelerators, blueprints, solutions, frameworks, and products to show off their industry and business domain (sales, marketing, finance, HR, etc.) expertise, experience, and specialization. Unfortunately, they often use these terms synonymously, while in reality the meanings vary quite widely. Here’s our pragmatic take on the tangible reality behind the terms, in increasing order of comprehensiveness:
Frameworks. Often little more than a collection of best practices and lessons learned from multiple client engagements. These can sometimes shave 5% to 10% off a project’s time and effort, mainly by enabling buyers to learn from the mistakes others have already made rather than repeating them.
Solution Accelerators. Also known as blueprints, these are usually a collection of deliverables, content, and other artifacts from prior client engagements. Such artifacts could take the form of data connectors, transformation logic, data models, metrics, reports, and dashboards, but they are often little more than existing deliverables that can be cut and pasted, or otherwise leveraged, in a new client engagement. Like Frameworks, Solution Accelerators often come with a set of best practices. They can help you hit the ground running: rather than starting from scratch, you find yourself 10% to 20% into a project.
Solutions. A step above Solution Accelerators, Solutions prepackage artifacts from prior client engagements, by cleansing and stripping them of proprietary content and/or irrelevant info. Count on shaving 20% to 30% off the effort.
So you need some work done that you’ve never had done before or you need to buy something you’ve never bought before. What should you pay? That can be a tough question. What seems reasonable? Sometimes we set arbitrary rules. It’s OK if it’s under $50 or under $100. But that’s just a reassurance that you’re not getting ripped off too badly. Certainly the best way to avoid that outcome is to know how much that service or thing is worth, or at least know what others have paid for the same thing.
Fortunately now, in the age of the customer, that’s easier to find out. Price information for most consumer goods is easier to come by, making the buying process more efficient. But what about governments? We’ve all heard about the $600 toilet seat or the $400 hammer. Stories of government spending excess and mismanagement abound. Some are urban legends or misrepresentations. Others have legs — such as the recent reports of Boeing overcharging the US Army. While these incidents are likely not things of the past, open data initiatives have made significant progress in exposing spending data and improving transparency. Citizens can visit sites such as USAspending.gov for US federal government spending or "Where Does My Money Go?" for details on UK national government spending, and most large cities publish spending as well.
To jump on this R feeding frenzy, most leading BI vendors claim that they “integrate with R.” But what does that claim really mean? Our take: not all BI/R integration is created equal. When evaluating BI platforms for R integration, Forrester recommends considering the following integration capabilities:
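To make the question concrete, here is a minimal sketch of the loosest integration pattern: the BI server hands data off to an external R process via `Rscript`. Everything here (file names, the `lm()` model call, the `job.R` path) is an illustrative assumption, not any vendor's actual API.

```python
# A sketch of out-of-process BI/R integration: the BI server writes a
# dataset to CSV, generates an R script to score it, and shells out to
# Rscript. The analytic itself (a linear model) is a stand-in.
def build_r_job(input_csv: str, output_csv: str) -> dict:
    """Return the generated R script and the shell command a BI server
    would use to execute it."""
    r_script = (
        f'data <- read.csv("{input_csv}")\n'
        "# Stand-in analytic: fit a linear model and score the data.\n"
        "fit <- lm(revenue ~ spend, data = data)\n"
        "data$predicted <- predict(fit, data)\n"
        f'write.csv(data, "{output_csv}", row.names = FALSE)\n'
    )
    return {"script": r_script, "command": "Rscript job.R"}

job = build_r_job("sales.csv", "scored.csv")
print(job["command"])  # Rscript job.R
```

Tighter integrations push R code into the database or execute it in-process; this file-handoff pattern is the baseline against which those richer capabilities can be judged.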
A journalist called and asked me today about the market size for wearables. I replied, “That’s not the big story.”
So what is? It's data, and what you can do with it.
First you have to collect the data and have the permission to do so. Most of these relationships are one-to-one. I have these relationships with Nike, Jawbone, Basis, RunKeeper, MyFitnessPal and a few others. I have an app for each on my phone that harvests the data and shows it to me in a way I can understand. Many of these devices have open APIs, so I can import my Fitbit or Jawbone data into MyFitnessPal, for example.
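As a toy illustration of the aggregation described above, the sketch below merges per-day step counts from two devices into a single view. The data is hard-coded stand-ins for what a tracker export might contain, not real API responses, and the dedup rule (take the daily max) is my assumption.

```python
# Merge daily step counts from multiple trackers into one timeline.
# Keeping the max per day avoids double-counting the same walk that
# two devices both recorded.
from collections import defaultdict

def merge_daily_steps(*sources: dict) -> dict:
    merged = defaultdict(int)
    for source in sources:
        for day, steps in source.items():
            merged[day] = max(merged[day], steps)
    return dict(merged)

fitbit = {"2014-03-01": 8200, "2014-03-02": 10400}   # illustrative data
jawbone = {"2014-03-01": 7900, "2014-03-03": 6100}   # illustrative data
print(merge_daily_steps(fitbit, jawbone))
# {'2014-03-01': 8200, '2014-03-02': 10400, '2014-03-03': 6100}
```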
From the story on 9to5mac.com, it is clear that Apple (as with Passbook) is creating a single place for consumers to store a wide range of healthcare and fitness information. From the screenshots, it also appears that one can trend this information over time. The phone is capable of collecting some of this information, and is increasingly doing so with less battery burn thanks to efficiencies in how the sensor data is crunched, so to speak. Wearables – perhaps one from Apple – will collect more information. Other data will certainly come from third-party wearables – such as fitness wearables, patches, bandages, socks, and shirts – and attachments, such as the Smartphone Physical. There will always be tradeoffs between the amount of information you collect and the form factor. While I don't want to wear a chubby, clunky device 24x7, the hardware gets better every day.
IBM recently kicked off its big data market planning for 2014 and released a white paper that discusses how analytics create new business value for end user organizations. The major differences compared with last year’s event:
Organizational change. IBM has assigned a new big data practice leader for China, similar to what it’s done for other new technologies, including mobile, social, and cloud. The practice can integrate resources from IBM’s infrastructure (STG), software (SWG), and services (GBS/GTS) teams, although those team members do not report directly to the practice leader.
A new analytics platform powered by Watson technology. The Watson Foundation platform has three new functions. It can be deployed on SoftLayer; it extends IBM’s big data analysis capabilities to social, mobile, and cloud; and it offers enterprises the power and ease of use of Watson analysis.
Measurable benefits from customer insights analysis. Chinese organizations have started to buy into the value of analytics and would like to invest in technology tools to optimize customer insights. AmorePacific, a South Korean skin care and cosmetics company, is using IBM’s SPSS predictive analytics solution to craft tailored messages to its customers and has improved its response rate by more than 30%. It primarily analyzes point-of-sale data, demographic information from its loyalty program, and market data such as property values in the neighborhoods where customers live.
It’s been a long wait, about four years if memory serves, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, spanning high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.
So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its architectural successor in the Intel “Tick-Tock” cycle of new process, then new architecture.
What was announced?
The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:
Up to 15 cores per socket
24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
Approximately 4X I/O bandwidth improvement
New RAS features, including low-level memory controller modes optimized for either high-availability or performance mode (BIOS option), enhanced error recovery and soft-error reporting
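The 1.5 TB memory figure follows directly from the slot count and DIMM size quoted above:

```python
# Memory capacity implied by the spec: 24 DIMM slots, 64 GB per DIMM.
dimm_slots = 24
dimm_size_gb = 64
total_gb = dimm_slots * dimm_size_gb
print(f"{total_gb} GB = {total_gb / 1024} TB")  # 1536 GB = 1.5 TB
```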
It looks like the beginning of a new technology hype cycle for artificial intelligence (AI). The media has started flooding the news with product announcements, acquisitions, and investments. The story is how AI is capturing the attention of tech-firm and investor giants such as Google, Microsoft, and IBM. Add to that the release of the movie ‘Her,’ about a man falling for his virtual assistant modeled after Apple’s Siri (think they got the idea from The Big Bang Theory episode in which Raj falls in love with Siri), and you know we have begun the journey of geekdom going mainstream and cool. The buzzwords are great, too: cognitive computing, deep learning, AI2.
For those who started their careers in AI and left in disillusionment (Andrew Ng has confessed to this, yet jumped back in), and for many of today’s data scientists, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to the reality of Apple’s Siri, whose ability to listen and respond to requests is adequate but more often frustrating, or dismiss IBM Watson’s win on Jeopardy as data loading and brute-force programming. From their perspective, the real value lies in the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
First, let’s set aside what you heard about financial puts and takes. Don’t try to decipher the geek speak of what new AI is compared to old AI. Let’s talk about what is on the horizon that will impact your business.
New AI breaks the current rule that machines must be better than humans to be valuable: smarter, faster analysts, or able to manufacture things better and cheaper.
Many of us have spent the past 10 years focusing on business intelligence solutions in order to help our businesses make better fact-based decisions. In fact, BI has been among CIOs’ top 10 priorities for more than a decade. These solutions have, for the most part, been successful — and we continue to improve our BI capabilities as the demand for fact-based decision-making goes deeper, wider, and further into the business.
This whole time, we’ve also been aware of the significant amount of unstructured data that resides within our business, and the fact that we struggle to use it to make better decisions. To begin to get value from this data, we have made our organizations more collaborative and implemented tools and platforms to support that collaboration — with varying degrees of success.
The fact remains that there’s a huge amount of unstructured information and data that we do not get value from. However, a growing number of solutions are beginning to mine elements of this data: product information, software code, legal case files, medical literature, messaging data, and other unstructured business data.
I’ve recently been working with TrustSphere, a messaging intelligence provider. TrustSphere has an interesting solution that mines your messaging data to extract real insights and information from the mountains of emails and messages that bounce into, out of, and around your organization every day. It’s an interesting concept, and TrustSphere has developed a number of use cases for its solution. I’ll be presenting at a webinar hosted by TrustSphere on February 25 — feel free to register here.
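As a toy illustration (not TrustSphere's actual method) of the kind of signal hiding in messaging data, the sketch below uses message metadata alone: simply counting who emails whom surfaces an organization's strongest working relationships without reading a single message body.

```python
# Count sender/recipient pairs from mail headers to rank relationships.
# Edges are sorted so that a->b and b->a count as the same tie.
from collections import Counter

def strongest_ties(log, top=3):
    """log is a list of (sender, recipient) pairs from message headers."""
    pairs = Counter(tuple(sorted(edge)) for edge in log)
    return pairs.most_common(top)

log = [  # illustrative metadata, no message content needed
    ("ana@corp", "bo@corp"), ("bo@corp", "ana@corp"),
    ("ana@corp", "bo@corp"), ("cy@corp", "ana@corp"),
]
print(strongest_ties(log))
# [(('ana@corp', 'bo@corp'), 3), (('ana@corp', 'cy@corp'), 1)]
```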
During 2014, we’ll pass a key milestone: an installed base of 2 billion smartphones globally. Mobile is becoming not only the new digital hub but also the bridge to the physical world. That’s why mobile will affect more than just your digital operations — it will transform your entire business. 2014 will be the year that companies increase investments to transform their businesses, with mobile as a focal point.
Let’s highlight a few of the mobile trends that we predict for 2014:
Competitive advantage in mobile will shift from experience design to big data and analytics. Mobile is transformative but only if you can engage your consumers in their exact moment of need with the right services, content, or information. Not only do you need to understand their context in that moment but you also need insights gleaned from data over time to know how to best serve them in that moment.
Mobile contextual data will offer deep customer insights — beyond mobile. Mobile is a key driver of big data. The most advanced marketers will recognize that mobile’s value as a marketing tool is measured by more than just the effectiveness of marketing to people on mobile websites or apps; they will start evaluating mobile’s impact on other channels.