Whether you are just starting on your BI journey or are continuing to improve on past successes, a shortage of skilled and experienced BI resources is going to be one of your top challenges. You are definitely not alone in this quest. Here are some scary statistics:
“By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills, as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” (Source: May 2012 McKinsey Global Institute report on Big Data)
“… trigger a talent shortage, with up to 190,000 skilled professionals needed to cope with demand in the US alone over the next five years.” (Source: 2012 Deloitte report on technology trends)
“Fewer than 25% of the survey respondents worldwide said they have the skills and resources to analyze unstructured data, such as text, voice, and sensor data.” (Source: 2012 research report by IBM and the Saïd Business School at the University of Oxford)
Mobile BI and cloud BI are among the top trends that we track in the industry. Our upcoming Enterprise BI Platforms Wave™ will dedicate a significant portion of the vendor evaluation to these two capabilities. These capabilities are far from yes/no checkmarks. Just asking vague questions like “Can you deliver your BI functionality on mobile devices?” and “Is your BI platform available in the cloud as software-as-a-service?” will lead to incomplete vendor answers, which in turn may lead you to make the wrong vendor selections. Instead, we plan to evaluate these two critical BI platform capabilities along the following parameters:
Animations. Does the product support animations? For example, if a particular dimension, such as time, has hundreds or thousands of values (as in daily values over multiple years), manually clicking through every day is not practical. Launching an automated, animated scroll up and down such a dimension is a more practical approach.
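The animated-scroll idea above can be sketched in a few lines: a generator that steps through every value of a dense time dimension so a chart can redraw itself frame by frame instead of forcing the user to click through each day by hand. The date range and step size here are illustrative assumptions, not tied to any particular BI product:

```python
from datetime import date, timedelta

def animate_dimension(start, end, step_days=1):
    """Yield successive values of a dense time dimension so a chart
    can redraw itself frame by frame (an automated, animated scroll)
    instead of requiring a manual click for every single day."""
    current = start
    while current <= end:
        yield current
        current += timedelta(days=step_days)

# Play back three years of daily values as an automated scroll.
frames = list(animate_dimension(date(2010, 1, 1), date(2012, 12, 31)))
```

A real dashboard would hand each yielded value to its rendering layer; the point is that the traversal is automated, not manual.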
BI is used to build, report, and analyze business performance metrics and indicators. What about measuring the performance of BI itself? How do you know if you have a high-performing, widely used BI environment? Is your opinion based on qualitative “pulse checks” or is it based on quantitative metrics? BI practitioners who preach to their business counterparts to run their business by the numbers need to eat their own dog food: run their BI environment, platforms, and apps by the numbers. For example, do you know:
How many reports and queries do end users create by themselves versus how many IT creates? That's a great efficiency metric.
How many clicks within a dashboard does it take to find an answer to a question? That's another great efficiency metric.
How long does each user stay within each report? Do they just run and print the reports, or export the data to Excel, or do they really slice, dice, and analyze the information? That’s a good example of how effective your BI environment is.
Do you see any patterns in BI usage? User by user, department by department, or line of business by line of business?
How many reports, queries, and other objects are actually being used, and how many are shelfware (never used)? How often are people using the ones that are?
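As a minimal sketch of how such metrics might be computed from a BI platform's audit log — the log fields, roles, and thresholds below are illustrative assumptions, not any vendor's actual schema:

```python
from collections import Counter

# Hypothetical audit-log records: (object_name, author_role, run_count)
audit_log = [
    ("sales_dashboard", "business_user", 420),
    ("churn_report",    "it",            15),
    ("budget_query",    "business_user", 97),
    ("legacy_extract",  "it",            0),   # shelfware candidate
]

def self_service_ratio(log):
    """Efficiency metric: share of objects created by end users vs. IT."""
    authors = Counter(role for _, role, _ in log)
    return authors["business_user"] / len(log)

def shelfware(log, min_runs=1):
    """Objects that exist but are never (or almost never) run."""
    return [name for name, _, runs in log if runs < min_runs]

ratio = self_service_ratio(audit_log)  # 0.5
dead = shelfware(audit_log)            # ['legacy_extract']
```

The same log would also support the per-user and per-department usage patterns mentioned above, simply by grouping on additional columns.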
I often see two extremes when I talk to clients who are trying to deal with data confidence challenges. One group typically sees it as a problem that IT has to address, while business users continue to use spreadsheets and other home-grown apps for BI. At the other extreme, there's a strong, take-no-prisoners, top-down mandate for using only enterprise BI apps. In this case, a CEO may impose a rule that says you can't walk into my office, ask me to make a decision, ask for a budget, etc., based on anything other than data coming from an enterprise BI application. This may sound great, but it's often not very practical; the world is not that simple, and there are many shades of grey in between these two extremes. No large, global, heterogeneous, multi-business- and product-line enterprise can ever hope to clean up all of its data - it's always a continuous journey. The key is knowing what data sources feed your BI applications and how confident you are about the accuracy of data coming from each source.
For example, here's one approach that I often see work very well. In this approach, IT assigns a data confidence index (an extra column attached to each transactional record in your data warehouse, data mart, etc.) during ETL processes. It may look something like this:
If data is coming from a system of record, the index = 100%.
If data is coming from nonfinancial systems and it reconciles with your G/L, the index = 100%. If not, it's < 100%.
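One way the rules above might be sketched in ETL code — the source-system names, field names, and the proportional-discrepancy scoring for non-reconciling data are all illustrative assumptions, since the post only says the index drops below 100%:

```python
def data_confidence_index(record, systems_of_record, gl_totals):
    """Assign a confidence index (0.0-1.0, i.e., 0-100%) to a
    transactional record during ETL, following the rules above."""
    if record["source"] in systems_of_record:
        return 1.0  # system of record: fully trusted
    # Nonfinancial source: trust it only as far as it reconciles
    # with the general ledger balance for the same account.
    gl_amount = gl_totals.get(record["account"])
    if not gl_amount:
        return 0.0  # nothing to reconcile against
    discrepancy = abs(record["amount"] - gl_amount) / abs(gl_amount)
    return max(0.0, 1.0 - discrepancy)

systems_of_record = {"oracle_financials"}
gl_totals = {"4000": 1000.0}

rec_sor = {"source": "oracle_financials", "account": "4000", "amount": 1000.0}
rec_crm = {"source": "crm", "account": "4000", "amount": 950.0}

data_confidence_index(rec_sor, systems_of_record, gl_totals)  # 1.0
data_confidence_index(rec_crm, systems_of_record, gl_totals)  # 0.95
```

In practice the returned value would be written to the extra confidence column on each warehouse record, so downstream reports can display or filter on it.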
In the face of rising data volume and complexity and increased need for self-service, enterprises need an effective business intelligence (BI) reference architecture to utilize BI as a key corporate asset for competitive differentiation. BI stakeholders — such as project managers, developers, data architects, enterprise architects, database administrators, and data quality specialists — may find the myriad choices and constant influx of new business requirements overwhelming. Forrester's BI reference architecture provides a framework with architectural patterns and building blocks to guide these BI stakeholders in managing BI strategy and architecture.
Enterprise information management (EIM) is complex — from a technical, organizational, and operational standpoint. But to business users, all that complexity is behind the scenes. What they need is BI, an interface to enterprise data — whether it's structured, semistructured, or unstructured. Our June 2011 Global Technology Trends Online Survey showed that BI topped even mobility — the frontrunner in recent years — as the technology most likely to provide business value over the next three years.
As John Brand and I recently wrote, business intelligence (BI) adoption drivers, technology understanding, and organizational process maturity continue to vary widely across Asia Pacific (AP). But there is one constant in this market: the regularity with which BI appears at or near the top of CIOs’ priority lists.
While the gap between global best practices and regional implementations is closing, social, cultural, economic, and underlying technology trends will continue to affect BI adoption in the region for the foreseeable future:
Social. The adoption of social computing is expanding rapidly across all AP markets, but is particularly strong in growth markets like China, Indonesia, and the Philippines. As in North America and Western Europe, this adoption is already having profound effects on how organizations identify, understand, and engage with customers and other market influencers. But the lack of significant BI investments means that organizations in these growth markets are far more likely to consider issues like sentiment analysis, predictive analytics, and near real-time data access when sourcing initial BI projects.
There's certainly a lot of hype out there about big data. As I previously wrote, some of it is indeed hype, but there are still many legitimate big data cases - I saw a great example during my last business trip. Hadoop certainly plays a key role in the big data revolution, so all business intelligence (BI) vendors are jumping on the bandwagon and saying that they integrate with Hadoop. But what does that really mean? First of all, Hadoop is not a single entity; it's a conglomeration of multiple projects, each addressing a certain niche within the Hadoop ecosystem, such as data access, data integration, DBMS, system management, reporting, analytics, data exploration, and much, much more. To lift the veil of hype, I recommend that you ask your BI vendors the following questions:
Which specific Hadoop projects do you integrate with (HDFS, Hive, HBase, Pig, Sqoop, and many others)?
Do you work with the community edition software or with commercial distributions from MapR, EMC/Greenplum, Hortonworks, or Cloudera? Have these vendors certified your Hadoop implementations?
Do you have tools or utilities to help clients load data into Hadoop in the first place (see comment from Birst)?
Are you querying Hadoop data directly from your BI tools (reports, dashboards) or are you ingesting Hadoop data into your own DBMS? If the latter:
Are you selecting Hadoop result sets using Hive?
Are you ingesting Hadoop data using Sqoop?
Is your ETL tool generating MapReduce jobs and pushing them down to Hadoop? Are you generating Pig scripts?
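To make the Sqoop ingestion question concrete, here is a hedged sketch of what that path might look like: a small helper that assembles a `sqoop export` command to push an HDFS directory into a relational table. The JDBC URL, table, and HDFS path are placeholders, not a tested configuration:

```python
def sqoop_export_cmd(jdbc_url, table, export_dir, num_mappers=4):
    """Build the argument list for a Sqoop job that moves data
    out of HDFS into a relational DBMS table."""
    return [
        "sqoop", "export",
        "--connect", jdbc_url,       # JDBC URL of the target DBMS
        "--table", table,            # destination table
        "--export-dir", export_dir,  # HDFS directory holding the data
        "--num-mappers", str(num_mappers),
    ]

# Placeholder values for illustration only.
cmd = sqoop_export_cmd("jdbc:mysql://dw-host/sales",
                       "fact_orders",
                       "/user/etl/orders")
```

A vendor answering "yes, via Sqoop" should be able to show you the equivalent of this command in their tooling; the alternative pattern - querying Hadoop directly through Hive - skips the export step entirely.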
I recently had both the privilege and pleasure to do a deep dive into the cold and warm BI waters in Russia and Israel. Cold - because some of my experiences were sobering. Warm - because the reception could not have been more pleasant. My presentations were well attended (sponsored by www.in4media.ru in Russia and www.matrix.co.il in Israel), showing high levels of BI interest, adoption, experience, and expertise. Challenges remain the same, as Russian and Israeli businesses struggle with BI governance, ownership, SDLC and PMO methodologies, data, and app integration just like the rest of the world. I spent long evening hours with a large global company in Israel that grew rapidly by M&A and is struggling with multiple strategic challenges: centralize or localize BI, vendor selection, end user empowerment, etc. Sound familiar?
But it was not all business as usual. A few interesting regional peculiarities did come out. For example, the "BI as a key competitive differentiator" message fell on mostly deaf ears in Russia, as Russian companies don't really compete against each other. Territories, brands, markets, and spheres of influence are handed top down from the government or negotiated in high-level deals behind closed doors. That is not to say, however, that BI in Russia is only used for reporting - multiple businesses are pushing BI to the limits such as advanced customer segmentation for better upsell/cross-sell rates.
I was also pleasantly surprised and impressed a few times (and for those of you who know me well, you know that it's pretty hard to impress the old veteran):
In a recent media interview, I was asked whether the requirements for data visualization had changed. The questions focused on whether users are still satisfied with dashboards, graphs, and charts, or whether they have new needs, demands, and expectations.
Arguably, ancient Egyptian hieroglyphics were the first real "commercial" examples of data visualization (though many people before the Egyptians used the same approach - more often as a general communications tool). Since then, visualization of data has always been both a popular and important topic. For example, Florence Nightingale changed the course of healthcare with a single compelling polar area chart on the causes of death during the Crimean War.
In looking at this question of how and why data visualization might be changing, I identified at least five major triggers:
Increasing volumes of data. It's no surprise that we now have to process much larger volumes of data. But this also impacts the ways we need to represent it: the volume of data stimulates new forms of visualization tools. While not all of these tools are new (strictly speaking), they have at least begun to find a much broader audience as we find the need to communicate much more information much more rapidly. Time walls and infographics are just two approaches that are not necessarily all that new, but they have attracted much greater usage as a direct result of the increasing volume of data.