Notes from the TechAmerica Europe seminar in Brussels, March 27, 2013
This may not be the most timely event write-up ever produced, but in light of all the discussions I’ve had on the same themes during the past few weeks, I thought I’d share my notes anyway.
The purpose of the event was to peel away some of the hype layers around the “big data” discussion, and — from a European perspective — take a look at the opportunities as well as challenges brought by the increasing amounts of data that is available, and the technologies that enable its exploitation. As was to be expected, an ever-present subtext was the potential of having laws and regulations put in place which — while well-intentioned — can ultimately stifle innovation and even act against consumer interests. And speaking of innovation: Another theme running through several of the discussions was the seeming lack of technology-driven innovation in Europe, in particular when considered in the context of an economic environment in dire need of every stimulus it can get.
The scene was set by John Boswell, senior VP, chief legal officer, and corporate secretary at SAS, who provided a neat summary of the technology developments (cheap storage, unprecedented access to compute power, pervasive connectivity) giving rise to countless opportunities related to the availability, sharing, and exploitation of ever-increasing amounts of data. He also outlined the threats posed to companies, governments, and individuals by those with more sinister intent when it comes to data exploitation, be it for ideological, financial, or political reasons. Clearly, those threats require mitigation, but John made the point that “regulatory overlays” can also hinder progress by limiting, or even preventing altogether, the free flow of data.
Why all the fervor about big data? The answer is that it provides deep insights and predictive models that can dramatically improve business outcomes. But you need a data scientist to get there. There’s a lot of mythology about what a data scientist is and isn’t. In this episode of TechnoPolitics, Mike Gualtieri explains what a data scientist is, what skills they need, and how to hire one. You may also be interested in What Is Hadoop.
About Forrester Instant Insight
Navigating the fast-changing world of business technology is a constant challenge. Forrester Instant Insight aims to provide simple, complete answers to some popular questions. Our goal: you will watch the video and be enlightened in 5 minutes or less.
This Forrester Instant Insight was produced by Mike Gualtieri and edited by Lindsay Gualtieri.
In advance of next week’s Forrester European Business Technology Forums in London on June 10 and 11, we had an opportunity to speak with Greg Swimer about information management and how Unilever delivers real-time data to its employees. Greg Swimer is a global IT leader at Unilever, responsible for delivering new information management, business intelligence, reporting, consolidation, analytics, and master data solutions to more than 20,000 users across all of Unilever’s businesses globally.
1) What are the two forces you and the Unilever team are balancing with your “Data At Your Fingertips” vision?
Putting the data at Unilever’s fingertips means working on two complementary aspects of information management. One aspect is to build an analytics powerhouse with the capacity to handle big data, providing users with the technological power to analyze that data in order to gain greater insight and drive better decision-making. The other aspect is the importance of simplifying and standardizing that data so that it’s accessible enough to understand and act upon. We want to create a simplified landscape, one that allows better decisions, in real time, where there is a common language and a great experience for users.
2) What keys to success have you uncovered in your efforts?
How many of you suffer from at least mild “cyberchondria”? Do you run to the computer to Google your latest ailments? Are you often convinced that the headache you have is the first sign of some terminal illness you’ve been reading about?
Well, Symcat takes a new approach to Internet-assisted self-diagnosis. It provides not only the symptoms but also the probability of each condition, using CDC data to rank results by likelihood. It then allows users to further filter results by typing in information such as their gender, the duration of their symptoms, and their medical history. No, that headache you’ve had all week is likely not spinal stenosis or even viral pharyngitis. But if you’ve had a fall or a blow to the head, you might want to consider a concussion.
As Symcat puts it, they “use data to help you feel better.” Never underestimate the palliative effects of peace of mind.
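To make the ranking idea concrete, here is a minimal sketch of likelihood-based condition ranking. Everything in it is an assumption for illustration: the condition names, prevalence priors, and symptom likelihoods are invented, whereas Symcat itself draws on real CDC data.

```python
# Toy naive-Bayes-style symptom ranking. All numbers are made up for
# illustration; a real service like Symcat uses CDC prevalence data.

# Hypothetical prior probability (prevalence) of each condition.
PREVALENCE = {
    "tension headache": 0.30,
    "concussion": 0.01,
    "viral pharyngitis": 0.05,
}

# Hypothetical P(symptom | condition).
SYMPTOM_LIKELIHOOD = {
    "tension headache": {"headache": 0.9, "head injury": 0.01},
    "concussion": {"headache": 0.8, "head injury": 0.9},
    "viral pharyngitis": {"headache": 0.3, "head injury": 0.01},
}

def rank_conditions(symptoms):
    """Score each condition as prior * product of symptom likelihoods,
    then sort descending -- most likely condition first."""
    scores = {}
    for condition, prior in PREVALENCE.items():
        score = prior
        for s in symptoms:
            score *= SYMPTOM_LIKELIHOOD[condition].get(s, 0.001)
        scores[condition] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A lone headache ranks the common, benign condition first...
print(rank_conditions(["headache"]))
# ...but adding a head injury pushes concussion to the top.
print(rank_conditions(["headache", "head injury"]))
```

This is the intuition behind "that headache is likely not spinal stenosis": a rare condition needs strongly matching symptoms before it can outrank a common one.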
I had the chance to ask Craig Monsen, MD, co-founder and CEO of Symcat, a few questions about how they got their start with the business and their innovation with open data.
What was the genesis of Symcat? Can you describe the "ah-ha" moment of determining the need for Symcat?
There are multiple maturity models and associated assessments for data governance on the market. Some come from software vendors or from consulting companies, which use them as the basis for selling services. Others come from professional groups, like the one from the Data Governance Council.
They are all good, but frankly not adequate for the data economy many companies are entering. I think it is useful to reshuffle some too-well-established ideas...
Maturity models in general are attractive because:
- Using a maturity model is nearly a ‘no-brainer’ exercise: you run an assessment and determine your current maturity level, then make a list of the actions that will drive you to the next level. You do not need to ask your business for advice, nor involve many people in interviews.
- Most data governance maturity models are modeled on the well-known CMMI, which means they are similar at least in structure and levels, so the debate over the advantages of one versus another comes down to its level of detail.
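The "no-brainer" assessment mechanics described above can be sketched in a few lines. This is purely illustrative: the dimensions, scores, and level names below are hypothetical, not taken from any particular maturity model.

```python
# Toy CMMI-style maturity assessment. Dimensions and scores are
# hypothetical; real data governance models define their own criteria.

LEVELS = ["Initial", "Managed", "Defined",
          "Quantitatively Managed", "Optimizing"]

# Hypothetical self-assessment: each dimension scored 1-5.
assessment = {
    "data quality": 3,
    "metadata management": 2,
    "stewardship": 3,
    "policy & compliance": 2,
}

def maturity_level(scores):
    """Overall maturity is the weakest dimension (staged-model style)."""
    return min(scores.values())

def next_level_actions(scores):
    """List the dimensions holding the organization at its current level."""
    level = maturity_level(scores)
    return [dim for dim, s in scores.items() if s == level]

level = maturity_level(assessment)
print(f"Current maturity: level {level} ({LEVELS[level - 1]})")
print("To reach the next level, improve:", next_level_actions(assessment))
```

The simplicity is exactly the point being made: the model mechanically produces a level and an action list without ever consulting the business about what actually matters.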
But as firms move into the data economy – with all that this means for how they source, analyze, and leverage data – I think that today’s maturity models for data governance are becoming less relevant, and even an impediment:
As an analyst on Forrester's Customer Insights team, I spend a lot of time counseling clients on best-practice customer data usage strategies. And if there's one thing I've learned, it's that there is no such thing as a 360-degree view of the customer.
Here's the cold, hard truth: you can't possibly expect to know your customer, no matter how much data you have, if all of that data 1) is about her transactions with YOU and 2) is hoarded away from your partners. And this isn't just about customer data either -- it's about product data, operational data, and even cultural-environmental data. As our customers become more sophisticated and collaborative with each other ("perpetually connected"), organizations must do the same. That means sharing data, creating collaborative insight, and becoming willing participants in open data marketplaces.
Now, why should you care? Isn't it kind of risky to share your hard-won data? And isn't the data you have enough to delight your customers today? Sure, it might be. But I'd put money on the fact that it won't be for long, because digital disruptors are out there shaking up the foundations of insight and analytics, customer experience, and process improvement in big ways. Let me give you a couple of examples:
I met with a group of clients recently on the evolution of data management and big data. One retailer asked, “Are you seeing the business going to external sources to do Big Data?”
My first reaction was, “NO!” Yet, as I thought about it more and went back to my own roots as an analyst, the answer is most likely, “YES!”
Ignoring nomenclature, the reality is that the business is not only going to external sources for big data but has been doing so for years. Think about it: organizations that have considered data a strategic tool have invested heavily in big data going back to when mainframes came into vogue. More recently, banking, retail, consumer packaged goods, and logistics have produced marquee case studies on what sophisticated data use can do.
Before Hadoop, before massive parallel processing, where did the business turn? Many have had relationships with market research organizations, consultancies, and agencies to get them the sophisticated analysis that they need.
Think about the fact, too, that at the beginning of social media, it was PR agencies that developed the first big data analysis and visualization of Twitter, LinkedIn, and Facebook influence. In a past life, I worked at ComScore Networks, an aggregator and market research firm analyzing and trending online behavior. When I joined, it had the largest and fastest-growing private cloud for collecting web traffic globally. Now, that was big data.
Today, the data paints a split picture. Across our surveys of IT, social media and online analysis make up only a small percentage of the business intelligence and analytics being supported. When we look at Forrester's marketing and strategy clients, however, the picture is completely the opposite.
Whether you are just starting on your BI journey or are continuing to improve on past successes, a shortage of skilled and experienced BI resources is going to be one of your top challenges. You are definitely not alone in this quest. Here are some scary statistics:
“By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills, as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” (Source: May 2012 McKinsey Global Institute report on Big Data)
“… trigger a talent shortage, with up to 190,000 skilled professionals needed to cope with demand in the US alone over the next five years.” (Source: 2012 Deloitte report on technology trends)
“Fewer than 25% of the survey respondents worldwide said they have the skills and resources to analyze unstructured data, such as text, voice, and sensor data.” (Source: 2012 research report by IBM and the Saïd Business School at the University of Oxford)
Data management history has shown that it is not what you buy but how you are able to use it that makes the difference. According to results from the Q4 2012 Forrsights BI/Big Data Survey, this story is ringing true again as big data changes the data management landscape.
Overall . . .
Big data technology adoption across various capabilities ranges from 8% to just over 25%.
Plans to implement big data technology across various capabilities run as high as 31%.
Pilot projects are the preferred method to get started.
However . . .
High-performing organizations (15%-plus annual growth) are expanding big data investments by one to two times in many big data areas compared with other organizations.
The key takeaway . . .
For most organizations, big data projects aren't leaving the pilot stage and are failing to attain strong return on investment (ROI).
Why? What organization couldn’t benefit from making better decisions? Just ask the Obama campaign, which used sophisticated uplift modeling to target and influence swing voters. Or telecom firms that use predictive analytics to help prevent customer churn. Or police departments that use it to reduce crime. The list goes on and on and on. Virtually every organization could benefit from predictive analytics.

Don’t confuse traditional business intelligence (BI) with predictive analytics. BI is about reports, dashboards, and advanced visualizations (which are still essential to every organization). Predictive is different. Predictive analytics uses machine learning algorithms on large and small data sets alike to predict outcomes. But predictive is not about absolutes; it doesn’t guarantee an outcome. Rather, it’s about probabilities. For example, there is a 76% chance that this person will click on this display ad. Or there is a 63% chance that this customer will buy at a certain price. Or there is an 89% chance that this part will fail. Good stuff, but it’s hard to understand and harder to do. It’s worth it, though: organizations that employ predictive analytics can dramatically reduce risk, disrupt competitors, and save tons of dough. Many are doing it now. More want to.
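The probability framing above can be illustrated with a toy scoring function. This is a hedged sketch, not any vendor's implementation: the churn-model coefficients and feature names are invented, standing in for a logistic regression that would in practice be fitted on real historical data.

```python
# Minimal sketch of how a predictive model yields probabilities rather
# than absolutes. Coefficients are invented for illustration; a real
# churn model would be fit (e.g., with scikit-learn) on historical data.
import math

INTERCEPT = -2.0
COEFFS = {
    "support_calls_last_90d": 0.45,  # more support calls -> higher risk
    "months_as_customer": -0.02,     # longer tenure -> lower risk
    "late_payments": 0.60,
}

def churn_probability(customer):
    """Logistic regression scoring: P(churn) = 1 / (1 + e^-z)."""
    z = INTERCEPT + sum(COEFFS[k] * customer.get(k, 0.0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

new_customer = {"support_calls_last_90d": 6,
                "months_as_customer": 3,
                "late_payments": 2}
loyal_customer = {"support_calls_last_90d": 0,
                  "months_as_customer": 48,
                  "late_payments": 0}

print(f"{churn_probability(new_customer):.0%} chance of churn")    # high risk
print(f"{churn_probability(loyal_customer):.0%} chance of churn")  # low risk
```

Note that neither customer gets a yes/no answer; the model ranks risk so the retention team can decide where to act, which is the difference between prediction and a report.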
Few understand the what, why, and how of predictive analytics. Here’s a short, ordered reading list designed to get you up to speed super fast: