The Obama 2012 campaign famously used big data predictive analytics to influence individual voters. They hired more than 50 analytics experts, including data scientists, to predict which voters would be positively persuaded by political campaign contact such as a call, door knock, flyer, or TV ad. Uplift modeling (aka persuasion modeling) is one of the hottest forms of predictive analytics, for obvious reasons: most organizations wish to persuade people to do something, such as buy! In this special episode of Forrester TechnoPolitics, Mike interviews Eric Siegel, Ph.D., author of Predictive Analytics, to find out: 1) What exactly is uplift modeling? and 2) How did the Obama 2012 campaign use it to persuade voters? (< 4 minutes)
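As a rough illustration of the core idea (and not a claim about the campaign's actual method), one common approach is the "two-model" style uplift estimate: compare favorable-response rates between contacted and uncontacted groups within a segment, and target the segments where contact actually moves the needle. The toy data, segment names, and rates below are invented:

```python
# Two-model-style uplift sketch over a toy dataset. The "models" here are
# just group response-rate lookups; real uplift modeling would fit
# predictive models to the treatment and control groups separately.

# Toy records: (was_contacted, segment, responded_favorably)
data = [
    (True,  "undecided", True),  (True,  "undecided", True),
    (True,  "undecided", False), (False, "undecided", False),
    (False, "undecided", False), (False, "undecided", True),
    (True,  "loyal", True),      (False, "loyal", True),
]

def response_rate(records, contacted, segment):
    """Favorable-response rate for one segment, contacted or not."""
    rows = [r for c, s, r in records if c == contacted and s == segment]
    return sum(rows) / len(rows) if rows else 0.0

def uplift(records, segment):
    """Uplift = P(respond | contacted) - P(respond | not contacted)."""
    return (response_rate(records, True, segment)
            - response_rate(records, False, segment))

print(uplift(data, "undecided"))  # positive -> contact is persuasive here
print(uplift(data, "loyal"))      # zero -> contact adds nothing
```

The point of the technique is exactly this subtraction: a plain response model would happily target the "loyal" segment (high response rate), even though contacting it changes nothing.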
Where customer experience and analytics meet, in real time
For a while now, I’ve been using Hailo as a European poster child for innovation in the context of big data analytics. Due to the level of interest generated by this example, and the number of questions I’ve received along the way about Hailo, its technology and business model, etc., I decided to put together this blog post rather than write loads of separate emails.
Ironically, I’ve not actually been able to use Hailo myself (much as I would like to), as I have neither an iOS nor an Android-based smartphone. I have, however, met lots of people who use Hailo as customers, and I’ve also spoken to taxi drivers about it. I have yet to meet anybody who isn’t a fan.
For those of you who don’t know Hailo, it’s an app that allows you to hail a registered cab from your smartphone; as it was started in London, it’s often also called “the black cab app.” With the company founders being three London cabbies (black cab drivers), the entire service has been uniquely focused around the needs of the two main participants in a taxi ride: the customer and the driver.
Notes from the TechAmerica Europe seminar in Brussels, March 27, 2013
This may not be the most timely event write-up ever produced, but in light of all the discussions I’ve had on the same themes during the past few weeks, I thought I’d share my notes anyway.
The purpose of the event was to peel away some of the hype around the “big data” discussion and, from a European perspective, take a look at the opportunities as well as the challenges brought by the increasing amounts of data available and the technologies that enable their exploitation. As was to be expected, an ever-present subtext was the potential for laws and regulations which, while well-intentioned, can ultimately stifle innovation and even act against consumer interests. And speaking of innovation: another theme running through several of the discussions was the seeming lack of technology-driven innovation in Europe, particularly in the context of an economic environment in dire need of every stimulus it can get.
The scene was set by John Boswell, senior VP, chief legal officer, and corporate secretary at SAS, who provided a neat summary of the technology developments (cheap storage, unprecedented access to compute power, pervasive connectivity) giving rise to countless opportunities related to the availability, sharing, and exploitation of ever-increasing amounts of data. He also outlined the threats posed to companies, governments, and individuals by those with more sinister intent when it comes to data exploitation, be it for ideological, financial, or political reasons. Clearly, those threats require mitigation, but John made the point that “regulatory overlays” can also hinder progress by limiting, or even preventing altogether, the free flow of data.
Why all the fervor about big data? The answer is that it provides deep insights and predictive models that can dramatically improve business outcomes. But you need a data scientist to get there. There’s a lot of mythology about what a data scientist is and isn’t. In this episode of TechnoPolitics, Mike Gualtieri explains what a data scientist is, what skills they need, and how to hire one. You may also be interested in What Is Hadoop.
About Forrester Instant Insight
Navigating the fast-changing world of business technology is a constant challenge. Forrester Instant Insight aims to provide simple, complete answers to some popular questions. Our goal: you will watch the video and be enlightened in five minutes or less.
This Forrester Instant Insight was produced by Mike Gualtieri and edited by Lindsay Gualtieri.
In advance of next week’s Forrester European Business Technology Forums in London on June 10 and 11, we had an opportunity to speak with Greg Swimer about information management and how Unilever delivers real-time data to its employees. Greg Swimer is a global IT leader at Unilever, responsible for delivering new information management, business intelligence, reporting, consolidation, analytics, and master data solutions to more than 20,000 users across all of Unilever’s businesses globally.
1) What are the two forces you and the Unilever team are balancing with your “Data At Your Fingertips” vision?
Putting the data at Unilever’s fingertips means working on two complementary aspects of information management. One aspect is to build an analytics powerhouse with the capacity to handle big data, providing users with the technological power to analyse that data in order to gain greater insight and drive better decision-making. The other aspect is the importance of simplifying and standardizing that data so that it’s accessible enough to understand and act upon. We want to create a simplified landscape, one that allows better decisions, in real time, where there is a common language and a great experience for users.
2) What keys to success have you uncovered in your efforts?
How many of you suffer from at least mild “cyberchondria”? Do you run to the computer to Google your latest ailments? Are you often convinced that the headache you have is the first sign of some terminal illness you’ve been reading about?
Well, Symcat takes a new approach to Internet-assisted self-diagnosis. It provides not only the symptoms but the probability of getting the disease, using CDC data to rank results by the likelihood of the different conditions. It then allows users to further filter results by typing in information such as their gender, the duration of their symptoms and medical history. No, that headache you’ve had all week is likely not spinal stenosis or even viral pharyngitis. But if you’ve had a fall or a blow to the head you might want to consider a concussion.
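The ranking idea described above can be sketched as a simple prevalence-weighted score: start from each condition's base rate and weight it by how typical the reported symptoms are for that condition. This is a minimal illustration of the approach, not Symcat's actual algorithm; the condition names, prevalences, and symptom likelihoods below are invented, not real CDC figures:

```python
# Toy prevalence-weighted symptom ranking. Each condition has a prior
# (base-rate prevalence) and per-symptom likelihoods; the score is
# prior * product of likelihoods, normalized into a probability.

CONDITIONS = {
    # condition: (prior prevalence, {symptom: P(symptom | condition)})
    "tension headache":  (0.20, {"headache": 0.95, "nausea": 0.10}),
    "concussion":        (0.01, {"headache": 0.80, "nausea": 0.50}),
    "viral pharyngitis": (0.05, {"headache": 0.30, "sore throat": 0.90}),
}

def rank_conditions(symptoms):
    """Rank conditions by normalized prior-times-likelihood score."""
    scores = {}
    for name, (prior, likelihoods) in CONDITIONS.items():
        score = prior
        for symptom in symptoms:
            # Small default so an unlisted symptom penalizes, not zeroes out.
            score *= likelihoods.get(symptom, 0.01)
        scores[name] = score
    total = sum(scores.values())
    return sorted(((n, s / total) for n, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for name, prob in rank_conditions(["headache"]):
    print(f"{name}: {prob:.2f}")
```

With only "headache" reported, the common, benign condition dominates the ranking, which is exactly the reassurance effect the post describes: the rare, scary diagnosis sits near the bottom until extra evidence (a fall, a blow to the head) shifts the weights.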
As Symcat puts it, they “use data to help you feel better.” Never underestimate the palliative effects of peace of mind.
I had the chance to ask Craig Monsen, MD, co-founder and CEO of Symcat, a few questions about how they got their start with the business and their innovation with open data.
What was the genesis of Symcat? Can you describe the "ah-ha" moment of determining the need for Symcat?
There are multiple maturity models and associated assessments for data governance on the market. Some are from software vendors or consulting companies, which use them as the basis for selling services. Others are from professional groups, like the one from the Data Governance Council.
They are all good, but frankly not adequate for the data economy many companies are entering. I think it is useful to shake up some well-established ideas...
Maturity models in general are attractive because:
- Using a maturity model is nearly a ‘no-brainer’ exercise. You run an assessment and determine your current maturity level. Then you can make a list of the actions that will drive you to the next level. You do not need to ask your business for advice, nor involve too many people in interviews.
- Most data governance maturity models are modeled on the well-known CMMI. That means they are similar, at least in terms of structure and levels, so the debate over the advantages of one versus another comes down to level of detail.
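The "no-brainer" mechanics of such an assessment can be caricatured in a few lines. This is a deliberately simplified sketch, not any vendor's actual model; the dimension names and the staged-CMMI convention (your weakest dimension caps your overall level) are illustrative assumptions:

```python
# Toy CMMI-style maturity assessment: self-rate each governance dimension
# from 1 to 5, let the weakest dimension cap the overall level, and list
# every dimension at or below that level as the "actions to next level".

LEVELS = ["Initial", "Managed", "Defined",
          "Quantitatively Managed", "Optimizing"]

def assess(scores):
    """scores: dict of governance dimension -> self-rating from 1 to 5."""
    level = min(scores.values())  # staged view: weakest dimension caps you
    actions = [dim for dim, s in scores.items() if s <= level]
    return LEVELS[level - 1], actions

level, actions = assess({
    "data quality": 3,
    "metadata management": 2,
    "stewardship": 4,
})
print(level)    # prints "Managed" -- capped by metadata management
print(actions)  # the dimensions to improve to reach the next level
```

That the whole exercise fits in a dozen lines is rather the point of the critique that follows: the mechanics are easy precisely because they never ask what the business actually needs from its data.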
But as firms move into the data economy, with all that this means for how they source, analyze, and leverage data, I think that today’s maturity models for data governance are becoming less relevant, and even an impediment:
As an analyst on Forrester's Customer Insights team, I spend a lot of time counseling clients on best-practice customer data usage strategies. And if there's one thing I've learned, it's that there is no such thing as a 360-degree view of the customer.
Here's the cold, hard truth: you can't possibly expect to know your customer, no matter how much data you have, if all of that data 1) is about her transactions with YOU and 2) is hoarded away from your partners. And this isn't just about customer data either -- it's about product data, operational data, and even cultural-environmental data. As our customers become more sophisticated and collaborative with each other ("perpetually connected"), so organizations must do the same. That means sharing data, creating collaborative insight, and becoming willing participants in open data marketplaces.
Now, why should you care? Isn't it kind of risky to share your hard-won data? And isn't the data you have enough to delight your customers today? Sure, it might be. But I'd put money on the fact that it won't be for long, because digital disruptors are out there shaking up the foundations of insight and analytics, customer experience, and process improvement in big ways. Let me give you a couple of examples:
Every company generates data that would be of significant value to its customers, partners and potential partners; information that could be combined with insights from this ecosystem, public data and other sources to generate significant new discoveries, products and business values. But making our data available, easily consumable and getting payback for sharing it are significant hurdles.
Over many years we have built up an ever-more complex web of security, legal and data management practices that make it nearly impossible to share valuable info between companies in an open marketplace style – which is exactly what is needed to open up this value.
But it doesn’t have to be this way. There is a new approach that leading enterprises and governments are taking today that is significantly simpler, more manageable and empowers companies to share their key data more freely, opening up massive new market opportunities for all. Here's how a few Forrester clients are taking advantage of this new model:
I met with a group of clients recently on the evolution of data management and big data. One retailer asked, “Are you seeing the business going to external sources to do Big Data?”
My first reaction was, “NO!” Yet, as I thought about it more and went back to my own roots as an analyst, the answer is most likely, “YES!”
Ignoring nomenclature, the reality is that the business is not only going to external sources for big data; it has been doing so for years. Think about it: organizations that have considered data a strategic tool have invested heavily in big data going back to when mainframes came into vogue. More recently, banking, retail, consumer packaged goods, and logistics have produced marquee case studies on what sophisticated data use can do.
Before Hadoop, before massive parallel processing, where did the business turn? Many have had relationships with market research organizations, consultancies, and agencies to get them the sophisticated analysis that they need.
Think about the fact, too, that at the beginning of social media, it was PR agencies that developed the first big data analyses and visualizations of Twitter, LinkedIn, and Facebook influence. In a past life, I worked at ComScore Networks, an aggregator and market research firm analyzing and trending online behavior. When I joined, it had the largest and fastest-growing private cloud for collecting web traffic globally. Now, that was big data.
Today, the data paints a split picture. Across various surveys of IT, social media and online analysis account for only a small percentage of the business intelligence and analytics that IT supports. When we look at Forrester's marketing and strategy clients, however, the picture is completely the opposite.