How does an enterprise — especially a large, global one with multiple product lines and multiple enterprise resource planning (ERP) applications — make sense of operations, logistics, and finances? There’s just too much information for any one person to process. It’s business intelligence (BI) to the rescue! But what is BI, and how does BI differ from reporting and management information systems (MIS)? What is the business impact, and what are the costs versus the benefits? What is the appropriate strategy for implementing BI and achieving continued BI success? Our new report will give business and IT executives an understanding of the four critical phases of strategizing around BI to achieve business goals — or “everything you wanted to know but were afraid to ask” about BI. Here’s a sneak preview of the kinds of topics the report covers and the kinds of BI questions one needs to ask in order to build an effective and efficient enterprise BI environment:
Prepare For Your BI Program
The future of BI is all about agility. IT no longer has exclusive control of BI platforms, tools, and applications; business users demand more empowerment (or make empowered changes without IT involvement), and previously unshakable pillars of the BI foundation such as relational databases are quickly being supplemented with alternative BI platforms. It’s no longer business as usual. Ask yourself:
What are the main business and IT trends driving BI?
What are the latest BI technologies that I need to know about?
Today IBM announced its plans to acquire Vivisimo, an enterprise search vendor with big data capabilities. Our research shows that only 1% to 5% of all enterprise data is in a structured, modeled format that fits neatly into enterprise data warehouses (EDWs) and data marts. The rest of enterprise data (and we are not even talking about external data, such as social media data) may not be organized into structures that easily fit into relational or multidimensional databases. There’s also a chicken-and-egg syndrome going on here. Before you can put your data into a structure, such as a database, you need to understand what’s out there and what structures do or may exist. But in order to explore the data in the first place, traditional data integration technologies require some structure to even start the exploration (tables, columns, etc.). So how do you explore something without a structure, without a model, and without preconceived notions? That’s where big data exploration and discovery technologies such as Hadoop and Vivisimo come into play. (There are many other vendors in this space as well, including Oracle Endeca, Attivio, and Saffron Technology. While these vendors may not directly compete with Vivisimo and all use different approaches and architectures, the final objective, data discovery, is often the same.) Data exploration and discovery was one of our top 2012 business intelligence predictions. However, it’s only a first step in the full cycle of business intelligence.
Join us at Forrester’s CIO Forum in Las Vegas on May 3 and 4 for “The New Age Of Business Intelligence.”
The amount of data is growing at tremendous speed — inside and outside of companies’ firewalls. Last year the public Web hit approximately 1 zettabyte (1 trillion gigabytes) of data, and the speed at which new data is created continues to accelerate, including unstructured data in the form of text, semistructured data from M2M communication, and structured data in transactional business applications.
Fortunately, our technical capabilities to collect, store, analyze, and distribute data have also been growing at a tremendous speed. Reports that used to run for many hours now complete within seconds using new solutions like SAP’s HANA or other tailored appliances. Suddenly, a whole new world of data has become available to the CIO and his business peers, and the question is no longer if companies should expand their data/information management footprint and capabilities but rather how and where to start. Forrester’s recent Strategic Planning Forrsights For CIOs data shows that 42% of all companies are planning an information/data project in 2012, more than for any other application segment — including collaboration tools, CRM, or ERP.
My colleagues and I have just completed yet another engagement with a large client — one of dozens recently — who was facing a “to be or not to be” decision: whether to move its BI platform and applications to the cloud. It’s a very typical question that our clients are asking these days, mainly for the following two reasons:
In many cases, their current on-premises BI solutions are too inflexible to support the business now, much less in the future.
The relative success of cloud-based CRM (SFDC and others) solutions may indicate that cloud offers a better alternative.
These clients put these two statements together and make the reasonable assumption that cloud BI will solve many of the current BI challenges that cloud-based CRM solved. Reasonable? Yes. Correct? Not so fast — the only correct answer is “It depends.”
Let’s take a couple of steps back. First, let’s define applications or packaged solutions vs. platforms (because BI requires both).
Applications or packaged solutions
Subscribe to a solution like CRM
Provide standard business functions to all customers (which makes it different from “hosting;” see below)
Difficult to tailor to specific needs
Are usually labeled (incorrectly; see below) as software-as-a-service (SaaS)
Platforms for building solutions
Subscribe to tools and resources to build solutions like CRM
Provide standard technical functions to developers
Contain limited, if any, business application functionality
Usually labeled either as platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS).
As Edward Tufte, one of the industry’s renowned data visualization experts, once said, “The world is complex, dynamic, multidimensional; the paper is static, flat. How are we to represent the rich visual world of experience and measurement on mere flatland?” There’s indeed just too much information out there to be effectively analyzed by all categories of knowledge workers. More often than not, traditional tabular row-and-column reports do not paint the whole picture or — even worse — can lead an analyst to a wrong conclusion. There are multiple reasons to use data visualization; the three main ones are that one:
Cannot see a pattern without data visualization. Simply seeing numbers on a grid often does not tell the whole story; in the worst case, it can even lead one to a wrong conclusion. This is best demonstrated by Anscombe’s quartet, where four seemingly similar groups of x and y coordinates reveal very different patterns when represented in a graph.
Cannot fit all of the necessary data points onto a single screen. Even with the smallest reasonably readable font, single line spacing, and no grid, one cannot realistically fit more than a few thousand data points using numerical information only. When using advanced data visualization techniques, one can fit tens of thousands of data points onto a single screen — a difference of an order of magnitude. In The Visual Display of Quantitative Information, Edward Tufte gives an example of more than 21,000 data points effectively displayed on a US map that fits onto a single screen.
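The first point is easy to demonstrate in a few lines of code. The minimal sketch below uses the published Anscombe’s quartet values (well known from Anscombe’s 1973 paper) to show that all four datasets share nearly identical summary statistics even though they look completely different when graphed:

```python
from statistics import mean, stdev

# Anscombe's quartet: four datasets with near-identical summary
# statistics but radically different shapes when plotted.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8] * 7 + [19] + [8] * 3,
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

# On a numbers-only grid, all four sets look like "the same data."
for i, (x, y) in enumerate(quartet, 1):
    print(f"set {i}: mean(x)={mean(x):.2f}  mean(y)={mean(y):.2f}  stdev(y)={stdev(y):.2f}")
```

Run the statistics and the four sets are indistinguishable; plot them and one reveals a curve, one an outlier-driven line, and so on — exactly the trap a tabular report sets for an analyst.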
Demands by users of business intelligence (BI) applications to "just get it done" are turning typical BI relationships, such as business/IT alignment and the roles that traditional and next-generation BI technologies play, upside down. As business users demand more control over BI applications, IT is losing its once-exclusive control over BI platforms, tools, and applications. It's no longer business as usual: For example, organizations are supplementing previously unshakable pillars of BI, such as tightly controlled relational databases, with alternative platforms. Forrester recommends that business and IT professionals responsible for BI understand and start embracing some of the latest BI trends — or risk falling behind.
Traditional BI approaches often fall short for the two following reasons (among many others):
BI hasn't fully empowered information workers, who still largely depend on IT
BI platforms, tools, and applications aren't agile enough
“… and they lived happily ever after.” This is the typical ending of most Hollywood movies, which is why I am not a big fan. I much prefer European or independent movies that leave it up to the viewer to draw their own conclusions. It’s just so much more realistic. Keep this in mind, please, as you read this blog, because its only purpose is to present my point of view on what’s happening in the cloud BI market, not to predict where it’s going. I’ll leave that up to your comments — just like your own thoughts and feelings after a good, thoughtful European or indie movie.
First of all, let’s define the market. Unfortunately, the terms SaaS and cloud are often used synonymously and therefore, alas, incorrectly.
SaaS is just a licensing structure. Many vendors (open source, for example) offer SaaS software subscription models, which have nothing to do with cloud-based hosting.
Cloud, in my humble opinion, is all about multitenant software hosted on public or private clouds. It’s not about cloud hosting of traditional software innately architected for single tenancy.
This is a very smart move by Oracle. Until the Siebel and Hyperion acquisitions, Oracle was not a leader in the BI and analytics space. Those acquisitions put it squarely in the top three together with IBM and SAP. However, until this morning, Oracle played mostly in the traditional BI space: reporting, querying, and analytics based on relational databases. But these mainstream relational databases are an awkward fit for BI. You can use them, but it requires lots of tuning, customization, and constant optimization — which is difficult, time-consuming, and costly. Unfortunately, row-based RDBMSes like IBM DB2, Microsoft SQL Server, Oracle, and Sybase ASE were originally designed and architected for transaction processing, not reporting and analysis. In order to tune such an RDBMS for BI usage, specifically data warehousing, architects usually:
Denormalize data models to optimize reporting and analysis.
Build indexes to optimize queries.
Build aggregate tables to optimize summary queries.
Build OLAP cubes to further optimize analytic queries.
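As a toy illustration of the index and aggregate-table steps above, here is a minimal sketch using SQLite from Python. The schema and names (`sales`, `region`, `amount`, `sales_by_region`) are hypothetical, purely for illustration — a real warehouse would do this at far larger scale with a dedicated engine:

```python
import sqlite3

# A hypothetical mini fact table; names are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "A", 100.0), ("East", "B", 50.0), ("West", "A", 75.0)],
)

# Build an index to optimize filtered queries.
cur.execute("CREATE INDEX idx_sales_region ON sales (region)")

# Build an aggregate table so summary queries avoid rescanning the fact table.
cur.execute(
    "CREATE TABLE sales_by_region AS "
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)

totals = dict(cur.execute("SELECT region, total FROM sales_by_region"))
print(totals)
```

Every one of these steps is manual maintenance work that must be repeated as data and query patterns change — which is precisely why the transactional RDBMS is an awkward, costly fit for BI.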
Whoa! Hold your horses. If this is indeed a key challenge that you’ve tried to address in the past without much success, consider switching jobs. This is not a joke. Business intelligence (BI) is an employee market right now; a key challenge for most BI employers is finding, recruiting, and retaining top — or actually any, for that matter — BI talent. Consider that IBM BAO alone added more than 4,000 (!) BI positions in just over a year! Every other major, midsize, and boutique BI consultancy I talk to is struggling to find BI resources. So if you’ve been fighting this uphill Sisyphean battle for a while, consider new channels for your noble efforts.
Now, some more practical advice — albeit not as exciting. Start from the top down. In a few minutes I am getting ready to talk to yet another large client whose CEO does not “get” BI. Can you rightfully blame him/her? Yes and no. Yes, because how can you manage any business without measurement and insight into your internal and external processes? So if your CEO didn’t learn that in his/her MBA 101, suggest that he/she look for another job. And if you’re still standing after that and have suffered only a mild concussion, consider that many BI projects have been less than successful, and ROI on BI — one of the most expensive enterprise apps — is extremely difficult to show. So can you really blame your CEO?
I need your help. I am conducting research into business intelligence (BI) software prices: averages, differences between license and subscription deals, differences between small and large vendor offerings, etc. In order to help our clients look beyond just the software prices and consider the fully loaded total cost of ownership, I also want to throw in service and hardware costs (I already have data on annual maintenance and initial training costs). I’ve been in this market long enough to understand that the only correct answer is “It depends” — on the levels of data complexity, data cleanliness, use cases, and many other factors. But if I could pin you down to a ballpark formula for budgeting and estimation purposes, what would that be? Here are my initial thoughts — based on experience, other relevant research, etc.
Initial hardware as a percentage of software cost = 33% to 50%
Ongoing hardware maintenance = 20% of the initial hardware cost
Initial design, build, and implementation services. Our rule of thumb has always been 300% to 700% of the software cost, but that obviously varies by deal size. So here’s what I came up with:
Less than $100,000 in software = 100% in services
$100,000 to $500,000 in software = 300% in services
$500,000 to $2 million in software = 200% in services
$2 million to $10 million in software = 50% in services
More than $10 million in software = 25% in services
Then 20% of the initial software cost for ongoing maintenance, enhancements, and support
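For what it’s worth, the draft ranges above can be sketched as a rough budgeting helper. The function name is mine, the 40% hardware figure is just the midpoint of the 33% to 50% range, and treating both 20% maintenance figures as annual is my assumption — none of this is a validated benchmark:

```python
def estimate_bi_tco(software_cost, years=3):
    """Ballpark BI total cost of ownership, per the draft rules of thumb.

    Hypothetical sketch: tier breakpoints and percentages come straight
    from the draft above; annualizing the 20% figures is an assumption.
    """
    # Tiered services multiplier (initial design, build, implementation).
    if software_cost < 100_000:
        services_pct = 1.00
    elif software_cost < 500_000:
        services_pct = 3.00
    elif software_cost < 2_000_000:
        services_pct = 2.00
    elif software_cost < 10_000_000:
        services_pct = 0.50
    else:
        services_pct = 0.25

    hardware = 0.40 * software_cost            # midpoint of 33%-50%
    hw_maint = 0.20 * hardware * years         # 20% of initial hardware, per year
    services = services_pct * software_cost
    svc_maint = 0.20 * software_cost * years   # ongoing maintenance/support

    return software_cost + hardware + hw_maint + services + svc_maint

# Example: a $300,000 software deal over three years.
print(f"${estimate_bi_tco(300_000):,.0f}")
```

Even this crude sketch makes the point I want readers to absorb: for midsize deals, software is nowhere near the biggest line item.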
Thoughts? Again, I am not looking for “it depends” answers, but rather for some numbers and ranges based on your experience.