Initial business intelligence (BI) deployment efforts are often difficult to predict and may dwarf the investment you made in BI platform software. The effort and costs associated with professional services, whether you use internal staff or hire contractors, depend not only on the complexity of business requirements like metrics, measures, reports, dashboards, and alerts, but also on the number of data sources you are integrating, the complexity of your data integration processes, and logical and physical data modeling. At the very least, Forrester recommends considering the following components and their complexity to estimate development, system integration, and deployment effort:
The Obama 2012 campaign famously used big data predictive analytics to influence individual voters. They hired more than 50 analytics experts, including data scientists, to predict which voters would be positively persuaded by political campaign contact such as a call, door knock, flyer, or TV ad. Uplift modeling (aka persuasion modeling) is one of the hottest forms of predictive analytics, for obvious reasons — most organizations wish to persuade people to do something, such as buy! In this special episode of Forrester TechnoPolitics, Mike interviews Eric Siegel, Ph.D., author of Predictive Analytics, to find out: 1) What exactly is uplift modeling? and 2) How did the Obama 2012 campaign use it to persuade voters? (< 4 minutes)
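To make the idea concrete, here is a minimal sketch of the common "two-model" approach to uplift modeling, built on synthetic data with scikit-learn. The voter features, outcome rule, and variable names are all illustrative assumptions, not anything from the actual campaign: train one response model on contacted people and one on the untouched control group, then target those with the largest predicted difference.

```python
# Two-model uplift sketch on synthetic data. Everything here is an
# illustrative assumption: features, the outcome rule, and the scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))            # voter features (e.g., age, engagement)
treated = rng.integers(0, 2, size=n)   # 1 = contacted (call, door knock), 0 = not
# Synthetic outcome: contact helps only voters with a high first feature
p = 1 / (1 + np.exp(-(0.5 * X[:, 0] * treated - 0.2)))
y = rng.random(n) < p                  # 1 = voted favorably

# Fit separate response models on the treated and control groups
m_treat = LogisticRegression().fit(X[treated == 1], y[treated == 1])
m_ctrl = LogisticRegression().fit(X[treated == 0], y[treated == 0])

# Uplift = predicted response if contacted minus predicted response if
# left alone; spend outreach budget on the highest-uplift voters
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
top_targets = np.argsort(uplift)[::-1][:100]
```

The point of the two-model split is that a single "who will vote for us?" model would also flag loyal supporters who need no contact; modeling treated and control separately isolates the persuadable middle.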
In advance of next week’s Forrester European Business Technology Forums in London on June 10 and 11, we had an opportunity to speak with Greg Swimer about information management and how Unilever delivers real-time data to its employees. Greg Swimer is a global IT leader at Unilever, responsible for delivering new information management, business intelligence, reporting, consolidation, analytics, and master data solutions to more than 20,000 users across all of Unilever’s businesses globally.
1) What are the two forces you and the Unilever team are balancing with your “Data At Your Fingertips” vision?
Putting the data at Unilever’s fingertips means working on two complementary aspects of information management. One aspect is to build an analytics powerhouse with the capacity to handle big data, providing users with the technological power to analyse that data in order to gain greater insight and drive better decision-making. The other aspect is the importance of simplifying and standardizing that data so that it’s accessible enough to understand and act upon. We want to create a simplified landscape, one that allows better decisions, in real time, where there is a common language and a great experience for users.
2) What keys to success have you uncovered in your efforts?
As an analyst on Forrester's Customer Insights team, I spend a lot of time counseling clients on best-practice customer data usage strategies. And if there's one thing I've learned, it's that there is no such thing as a 360-degree view of the customer.
Here's the cold, hard truth: you can't possibly expect to know your customer, no matter how much data you have, if all of that data 1) is about her transactions with YOU and 2) is hoarded away from your partners. And this isn't just about customer data either -- it's about product data, operational data, and even cultural-environmental data. As our customers become more sophisticated and collaborative with each other ("perpetually connected"), organizations must do the same. That means sharing data, creating collaborative insight, and becoming willing participants in open data marketplaces.
Now, why should you care? Isn't it kind of risky to share your hard-won data? And isn't the data you have enough to delight your customers today? Sure, it might be. But I'd put money on the fact that it won't be for long, because digital disruptors are out there shaking up the foundations of insight and analytics, customer experience, and process improvement in big ways. Let me give you a couple of examples:
BI professionals spend a significant portion of their time trying to instill the discipline of data-driven performance management into their business partners. However, isn’t there something wrong with teaching someone else to fly when you’re still learning to walk? Few BI pros have a way to measure their BI performance quantitatively (46% do not measure BI performance efficiencies and 55% do not measure effectiveness). Everyone collects statistics on database and BI application server performance, and many conduct periodic surveys to gauge business users’ level of satisfaction. But how do you really know whether you have a high-performing, widely used, popular BI environment? For example, you should track:
Efficiency metrics, such as the number of times a report is used or the number of duplicate/similar reports
Effectiveness metrics, such as the average number of clicks to find a report and the number of clicks within a report to find an answer to a question, among many others
Metric attributes/dimensions such as users, roles, departments, LOBs, regions and others
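The metrics above can usually be derived from a BI platform's audit or usage log. Here is a hedged sketch in pandas; the column names (`user`, `department`, `report`, `clicks_to_answer`) are assumptions about what your log schema might look like, not any specific vendor's format.

```python
# Deriving BI efficiency/effectiveness metrics from a (mock) usage log.
# Column names are illustrative assumptions about your audit-log schema.
import pandas as pd

log = pd.DataFrame({
    "user": ["ann", "bob", "ann", "cid", "bob", "ann"],
    "department": ["sales", "sales", "ops", "ops", "sales", "ops"],
    "report": ["rev_daily", "rev_daily", "inv_aging",
               "inv_aging", "rev_dup", "rev_daily"],
    "clicks_to_answer": [2, 3, 5, 4, 3, 2],
})

# Efficiency: how often each report is actually used
usage = log.groupby("report").size().rename("times_used")

# Effectiveness: average clicks needed to reach an answer, per report
clicks = log.groupby("report")["clicks_to_answer"].mean().rename("avg_clicks")

# Slice the same metrics along a dimension, such as department
by_dept = log.groupby("department").size().rename("events")
```

The same `groupby` pattern extends to roles, LOBs, and regions once those attributes are joined onto the log.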
Whether you are just starting on your BI journey or are continuing to improve on past successes, a shortage of skilled and experienced BI resources is going to be one of your top challenges. You are definitely not alone in this quest. Here are some scary statistics:
“By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills, as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” (Source: May 2012 McKinsey Global Institute report on Big Data)
“… trigger a talent shortage, with up to 190,000 skilled professionals needed to cope with demand in the US alone over the next five years.” (Source: 2012 Deloitte report on technology trends)
“Fewer than 25% of the survey respondents worldwide said they have the skills and resources to analyze unstructured data, such as text, voice, and sensor data.” (Source: 2012 research report by IBM and the Saïd Business School at the University of Oxford)
Mobile BI and cloud BI are among the top trends that we track in the industry. Our upcoming Enterprise BI Platforms Wave™ will dedicate a significant portion of its vendor evaluation to these two capabilities. These capabilities are far from yes/no checkmarks. Just asking vague questions like “Can you deliver your BI functionality on mobile devices?” and “Is your BI platform available in the cloud as software-as-a-service?” will lead to incomplete vendor answers, which in turn may lead you to make the wrong vendor selections. Instead, we plan to evaluate these two critical BI platform capabilities along the following parameters:
Animations. Does the product support animations? For example, if a particular dimension, such as time, has hundreds or thousands of values (as in daily values over multiple years), manually clicking through every day is not practical. Launching an automated, animated scroll up and down such a dimension is a more practical approach.
BI is used to build, report, and analyze business performance metrics and indicators. What about measuring the performance of BI itself? How do you know if you have a high-performing, widely used BI environment? Is your opinion based on qualitative “pulse checks” or is it based on quantitative metrics? BI practitioners who preach to their business counterparts to run their business by the numbers need to eat their own dog food: run their BI environment, platforms, and apps by the numbers. For example, do you know:
How many reports and queries do end users create by themselves versus how many IT creates? That's a great efficiency metric.
How many clicks within a dashboard does it take to find an answer to a question? That’s another great efficiency metric.
How long does each user stay within each report? Do they just run and print the reports, or export the data to Excel, or do they really slice, dice, and analyze the information? That’s a good example of how effective your BI environment is.
Do you see any patterns in BI usage? User by user, department by department, or line of business by line of business?
How many reports, queries, and other objects are being used, and how many are shelfware (not being used)? How often are people using the ones that are?
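The shelfware question in particular is easy to answer mechanically once you can compare your report catalog against the usage log. A minimal sketch, assuming illustrative table and column names:

```python
# Shelfware check: which cataloged reports are never opened, and how
# heavily the rest are used. Names and data are illustrative assumptions.
import pandas as pd

catalog = pd.Series(["rev_daily", "inv_aging", "churn_q", "old_kpi"],
                    name="report")
usage_log = pd.DataFrame({
    "report": ["rev_daily", "rev_daily", "inv_aging", "rev_daily"],
    "exported_to_excel": [False, True, False, False],
})

used_counts = usage_log["report"].value_counts()
shelfware = catalog[~catalog.isin(used_counts.index)]   # never used at all

# Crude effectiveness signal: share of runs that just export the data
# to Excel rather than slicing and dicing in the BI tool itself
export_rate = usage_log["exported_to_excel"].mean()
```

A high export rate hints that users treat the BI tool as a data extract, not an analysis environment, which feeds directly into the effectiveness discussion above.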
I often see two opposite extremes when I talk to clients who are trying to deal with data confidence challenges. One group typically sees it as a problem that IT has to address, while business users continue to use spreadsheets and other home-grown apps for BI. At the other extreme, there's a strong, take-no-prisoners, top-down mandate for using only enterprise BI apps. In this case, a CEO may impose a rule that says you can't walk into my office, ask me to make a decision, ask for a budget, etc., based on anything other than data coming from an enterprise BI application. This may sound great, but it's often not very practical; the world is not that simple, and there are many shades of grey between these two extremes. No large, global, heterogeneous, multi-business- and product-line enterprise can ever hope to clean up all of its data - it's always a continuous journey. The key is knowing what data sources feed your BI applications and how confident you are about the accuracy of data coming from each source.
For example, here's one approach that I often see work very well. In this approach, IT assigns a data confidence index (an extra column attached to each transactional record in your data warehouse, data mart, etc.) during ETL processes. It may look something like this:
If data is coming from a system of record, the index = 100%.
If data is coming from nonfinancial systems and it reconciles with your G/L, the index = 100%. If not, it's < 100%.
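Such an index could be computed during the ETL load with a simple scoring rule. The sketch below simulates the G/L reconciliation check; the function name, field names, and the 80% penalty are illustrative assumptions, not a specific ETL tool's API.

```python
# Assigning a data confidence index during ETL, per the rules above.
# Field names, the reconciliation tolerance, and the 80 penalty value
# are illustrative assumptions.
def confidence_index(record, gl_totals):
    """Return a 0-100 confidence score for one transactional record."""
    if record["source"] == "system_of_record":
        return 100
    # Nonfinancial source: full confidence only if it reconciles with the G/L
    expected = gl_totals.get(record["account"])
    if expected is not None and abs(record["amount"] - expected) < 0.01:
        return 100
    return 80  # < 100%: nonfinancial data that does not reconcile

gl_totals = {"4000": 1250.00}
records = [
    {"source": "system_of_record", "account": "4000", "amount": 1250.00},
    {"source": "crm", "account": "4000", "amount": 1250.00},
    {"source": "crm", "account": "4000", "amount": 1190.00},
]
scored = [dict(r, confidence=confidence_index(r, gl_totals)) for r in records]
```

Persisting the score as an extra column on each warehouse record, as described above, lets downstream reports filter or flag low-confidence data instead of pretending everything is equally trustworthy.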