CIO pushback is a typical growing pain for business intelligence (BI) startups. It means your land-and-expand strategy is working. Once you start expanding beyond a single department, CIOs will notice. As a general rule, the earlier the CIO is brought on board, the better: CIOs who feel left out are likely to raise more objections than those who are involved in the early stages. A number of BI vendors that started out purposely avoiding the CIO found over time that they had to change their strategies; ultimately, there's no way round the CIO. Forrester has also noticed that the more a vendor gets a reputation for "going round" the CIO, the greater the resistance from CIOs once they do get involved.
There is, of course, also the situation where the business side doesn't want the CIO involved, sometimes for very good reason. That notwithstanding, if there's a dependency on the CIO for sign-off, Forrester strongly recommends encouraging the business to bring him or her to the table.
The two key aspects to bear in mind in this context are:
CIOs look for transparency. Have architecture diagrams to hand out, be prepared to explain your solution in as much technical detail as required, and have answers ready regarding the enterprise IT capabilities listed below.
Delivering broad access to data and analytics to a diverse base of users is an intimidating task, yet it is an essential foundation of becoming an insights-driven organization. To win and keep customers in an increasingly competitive world, firms need to take advantage of the huge swaths of data available and put them into the hands of more users. To do this, business intelligence (BI) pros must evolve disjointed and convoluted data and analytics practices into well-orchestrated systems of insight that deliver actionable information. But implementing digital insights is just the first step with these systems, and few hit the bull's-eye the first time. Continuously learning from previous insights and their results makes future efforts more efficient and effective. This is a key capability of next-generation BI, what Forrester calls systems of insight.
"It's 10 o'clock! Do you know if your insights support actual, verifiable facts?" This is a real challenge, as measuring report and dashboard effectiveness today involves mostly discipline and processes, not technology. For example, if a data mining analysis predicted a certain number of fraudulent transactions, do you have the discipline and processes to go back and verify whether the prediction came true? Or if a metrics dashboard was flashing red, telling you that inventory levels were too low for the current business environment, and the signal caused you to order more widgets, do you verify whether this was a good or a bad decision? Did you make or lose money on the extra inventory you ordered? Organizations are still struggling with this ultimate measure of BI effectiveness: only 8% of Forrester clients report robust capabilities for such continuous improvement, and 39% report just a few basic capabilities.
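The closed loop described above can start as something very simple: joining past predictions to observed outcomes and scoring them. A minimal sketch in Python (all transaction data here is invented for illustration):

```python
# Minimal closed-loop check: did last quarter's fraud predictions come true?
# All data is invented for illustration.
predictions = {"tx1": True, "tx2": True, "tx3": False, "tx4": True}  # model flags
outcomes = {"tx1": True, "tx2": False, "tx3": False, "tx4": True}    # confirmed fraud

# Which flagged transactions turned out to actually be fraudulent?
flagged = [tx for tx, was_flagged in predictions.items() if was_flagged]
hits = [tx for tx in flagged if outcomes[tx]]
precision = len(hits) / len(flagged)

print(f"{len(hits)} of {len(flagged)} flagged transactions were fraud "
      f"(precision {precision:.2f})")
```

The hard part in practice is not this arithmetic but the discipline of capturing outcomes (`outcomes` above) after the fact, which is exactly the gap the survey numbers point to.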
Are you lost in a confusing soup of vendor-speak about what data analytics stacks actually offer? Big data, data platforms, advanced analytics, data lakes, real-time everything, streaming, the IoT, customer analytics, digital intelligence, real-time interaction, customer decision hubs, new-stuff-as-a-service; the list goes on.
Recognize the convergence happening as vendors evolve their technologies from doing just one thing, like predictive analytics or search, to many things together. For example, data integration, data warehouse, and BI tools are typically sold separately, but breakout vendor Looker combines data integration, model governance, basic BI, and a runtime for data applications in one software layer that sits on your data lake. As another example, consider predictive analytics vendor Alpine Data Labs or SAS Viya from SAS. These vendors have built a lot of data management and insight delivery tooling into their platforms because without it, users struggle to maximize value. Another trend is big data search vendors like Maana that now also include hooks for predictive model execution as well as more data management functions. Lastly, systems integrators are packaging their IP and offering it as an integrated data management and analytics product — for example, Saama's Fluid Analytics Engine or Infosys' Information Platform.
In fact, the list of innovative vendors blending data management, analytics, and insight execution technology is growing by leaps and bounds. To address this trend, I just published a report, Insight Platforms Accelerate Digital Transformation, in which I created a broad definition for this emerging category:
One of the reasons that only a portion of enterprise and external data (about a third of structured and a quarter of unstructured) is available for insights is the restrictive architecture of SQL databases. In SQL databases, data and metadata (data models, aka schemas) are tightly bound and inseparable (aka early binding, or schema-on-write). Changing the model requires, at best, just rebuilding an index or an aggregate; at worst, reloading entire columns and tables. Therefore, many analysts start their work from data sets based on these tightly bound models, into which DBAs and data architects have already built business requirements (which may be outdated or incomplete). Thus the data delivered to end users already contains inherent biases, which are opaque to the user and can strongly influence their analysis. As part of the natural evolution of business intelligence (BI) platforms, data exploration now addresses this challenge. How? BI pros can now take advantage of ALL raw data available in their enterprises by:
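The early-binding problem described above can be made concrete with a toy sketch: SQLite stands in for schema-on-write, raw JSON records for schema-on-read. All table names and data here are invented for illustration:

```python
# Toy contrast of schema-on-write vs. schema-on-read. All names/data invented.
import json
import sqlite3

# Schema-on-write: the model is fixed before any data is loaded.
# Attributes that don't fit the declared columns are dropped or rejected.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, product TEXT, units INTEGER)")
db.execute("INSERT INTO orders VALUES (1, 'widgets', 3)")

# Raw records as they arrive; the second one carries an attribute ("channel")
# that the schema above has no column for, so it would be silently lost.
raw_records = [
    '{"id": 1, "product": "widgets", "units": 3}',
    '{"id": 2, "product": "gadgets", "units": 2, "channel": "mobile"}',
]

# Schema-on-read: keep the raw records and apply a model at query time,
# so questions asked later (e.g., about "channel") are still answerable.
parsed = [json.loads(r) for r in raw_records]
mobile_orders = [r for r in parsed if r.get("channel") == "mobile"]
print(len(mobile_orders))  # 1
```

The point is not that SQL is bad, but that the model is chosen by whoever loads the data; schema-on-read defers that choice to the analyst asking the question.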
When I read articles like today's WSJ piece on mutual funds exiting high-tech startups and triangulate the content with Forrester client interactions over the last 12 to 18 months (and some rumors), I am becoming convinced that there will be some business intelligence (BI) and analytics vendor shake-ups in 2016. Even though, according to our research, enterprises still leverage only 20% to 40% of their entire universe of data for insights and decisions, and 50% to 80% of all BI/analytics apps are still done in spreadsheets, the market is oversaturated with vendors. Just take a look at the 50-plus vendors we track in our BI Vendor Landscape. IMHO, we are nearing a saturation point where the buy side of the market cannot sustain so many sellers. Indeed, we are already seeing a trend where large enterprises, which a couple of years ago had 10 or more different BI platforms, today usually deploy somewhere between three and five. And, in case you missed it, we have already seen what is surely the start of a much bigger BI/analytics M&A trend: SAP acquiring mobile BI vendor Roambi. Start hedging your BI vendor bets!
Rule #1. Don't just jump into creating a hefty enterprise-wide Business Intelligence (BI)
Business intelligence and its next iteration, systems of insight (SOI), have moved to the top of BI pros' agendas for enterprise software adoption. Investment in BI tools and applications can have a number of drivers, both external (such as regulatory requirements or technology obsolescence) and internal (such as the desire to improve processes or speed up decision-making). However, putting together a BI business case is not always a straightforward process. Before embarking on a BI business case endeavor, consider that:
You may not actually need a business case. Determining whether a BI business case is necessary hinges on three main considerations: Is this an investment that the organization must make to stay in business, should consider because other investments are changing the organization's IT landscape, or wants to make because of expected business benefits?
A business sponsor does not obviate the need for a business case. It may be tempting to conclude that you can skip making a business case for BI whenever there is a strong push for investment from the business side, in particular when budget holders are prepared to commit money. Resist this impulse whenever possible: a project without a business case will likely suffer from a lack of focus, and recriminations are likely to follow sooner or later.
Major conferences are often the occasion for key vendor announcements, and SAP didn’t disappoint. At the 2016 SAP Insider event on BI/Hana in Las Vegas, SAP announced the acquisition of independent mobile BI specialist Roambi’s solution portfolio and key assets. With this acquisition, SAP underlines its commitment not only to mobile and cloud but also to getting the right data into the hands of the right people at the right time. The Roambi acquisition adds the following to SAP’s mobile BI portfolio:
An attractive set of prebuilt visualizations for fast creation of mobile dashboards.
A cloud-based back end that can connect to a variety of data and BI sources.
The capability to create data-rich, interactive, eBook-like publications.
There are both tactical and strategic aspects to SAP’s acquisition of Roambi, which:
Adds attractive capabilities to SAP’s mobile BI portfolio, even for customers who may already be using BusinessObjects Mobile.
Provides an instant cloud option for mobile BI to customers running on-premises BI environments, but who can’t, or don’t want to, support a mobile BI solution.
Can be leveraged as an important building block for the mobile capabilities of SAP Cloud for Analytics.
Brings more than software to the SAP stable. In one fell swoop, SAP gains a team of professionals who’ve been living and breathing mobile BI for a long time.
Do you ever feel like you’re facing a moving target? Whether it’s the latest customer requirements, how to improve operations, how to retain your best employees, or how to price your products, the context in which you are doing business is increasingly dynamic. And so are the tools you need to better understand that context. Everyone is talking about the promise of big data and advanced analytics, but we all know that companies struggle to reach the Holy Grail.
Data and analytics tools and the skills required to use them are changing faster than ever. Technologies that were university research projects just last year are now part of a wide range of products and services. How can firms keep up with the accelerated pace of innovation? Alas, many cannot. According to Forrester's Q3 2015 Global State Of Strategic Planning, Enterprise Architecture, And PMO Online Survey, 73% of companies understand the business value of data and aspire to be data-driven but just 29% confirm that they are actually turning data into action. Many firms report having mature data management, governance, and analytics practices, but yesterday's skills are not necessarily what they will need tomorrow — or even today.
The same goes for data sources. We all know that using external data sources enhances the insights from our business intelligence. But which data and where to get it?
With the incredible popularity of big data and Hadoop, every Business Intelligence (BI) vendor also wants to be known as a "BI on Hadoop" vendor. But what most can really do is limited to a) querying HDFS data organized in Hive tables using HiveQL or b) ingesting a flat file into memory and analyzing the data there. Basically, to most BI vendors, Hadoop is just another data source. Let's now see what qualifies a BI vendor as a "Native Hadoop BI Platform". If we assume that all BI platforms have to have data extraction/integration, persistence, analytics, and visualization layers, then "Native Hadoop/Spark BI Platforms" should be able to (OK, yes, I just had to add Spark):
Use Hadoop/Spark as the primary processing platform for MOST of the aforementioned functionality. The only exception is the visualization layer, which is not what Hadoop/Spark are for.
Use distributed processing frameworks natively, such as:
Generation of MapReduce and/or Spark jobs
Management of distributed processing framework jobs by YARN, etc.
Note: generating Hive or SparkSQL queries does not qualify
Do declarative work in the product’s main user interface, interpreted and executed on Hadoop/Spark directly, not via a "pass-through" mode.
Natively support Apache Sentry and Apache Ranger for security.
I am kicking off a research stream that will result in a "Text Analytics Roles & Responsibilities" doc. Before I finalize an RFI to our clients to see whom they employ for these projects and applications, and how, when, and where, I'd like to explore what the actual roles and responsibilities are. So far, we've come up with the following roles and their respective responsibilities:
Business owner. The ultimate recipient of text analytics process results. So far I have:
Customer intelligence analyst
Customer service/call center analyst
Competitive intelligence analyst
Product R&D analyst
Linguist/Data Scientist. Builds language and statistical rules for text mining (or modifies those from an off-the-shelf product). Works with business owners to:
Create "golden copies" of documents/content which will be used as base for text analytics
Work with data stewards and business owners to define corporate taxonomies and lexicons
Data Steward. Owns corporate lexicon and taxonomies
Architect. Owns big data strategy and architecture (including data hubs, data warehouses, BI, etc.), where unstructured data is one of the components
Developer/integrator. Develops custom-built text analytics apps or embeds text analytics functionality into other applications (ERP, CRM, BI, etc.)