Wanted to run the following two questions and my answers by the community:
Q. What is the average age of reporting applications at large enterprises?
A. Reporting apps typically involve source data integration, data models, metrics, reports, dashboards, and queries. I'd rank the longevity of these components in that descending order: data source integrations are the most stable, while queries change all the time.
Q. What is the percentage of reporting applications that are homegrown versus custom built?
A. These are by no means solid data points, but rather my off-the-cuff (albeit educated) guesses:
The majority (let's say >50%) of reports are still being built in Excel and Access.
Very few (let's say <10%) are done in non-BI-specific environments (programming languages).
The other 40% I'd split 50/50 between:
off-the-shelf reports and dashboards built into ERP or BI apps,
and custom-coded in BI tools
Needless to say, this differs greatly by industry and business domain. Thoughts?
As Edward Tufte, one of the industry's renowned data visualization experts, once said, “The world is complex, dynamic, multidimensional; the paper is static, flat. How are we to represent the rich visual world of experience and measurement on mere flatland?” Indeed, there is just too much information out there for knowledge workers of any category to visualize effectively. More often than not, traditional tabular row-and-column reports do not paint the whole picture or, even worse, lead an analyst to a wrong conclusion. Firms need to use data visualization because information workers:
Cannot see a pattern without data visualization. Simply seeing numbers on a grid often does not convey the whole story; in the worst case, it can even lead to a wrong conclusion. This is best demonstrated by Anscombe’s quartet, where four seemingly similar groups of x/y coordinates reveal very different patterns when represented in a graph (see the short sketch after this list).
Cannot fit all of the necessary data points onto a single screen. Even with the smallest reasonably readable font, single-line spacing, and no grid, one cannot realistically fit more than a few thousand data points on a single page or screen using numerical information only. With advanced data visualization techniques, one can fit tens of thousands of data points onto a single screen (an order-of-magnitude difference). In his book The Visual Display of Quantitative Information, Edward Tufte gives an example of more than 21,000 data points effectively displayed on a US map that fits onto a single screen.
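To make the Anscombe point concrete, here is a minimal sketch in plain Python (using the commonly published quartet values) that computes summary statistics for all four sets; the numbers come out nearly identical even though plots of the same data look radically different.

```python
# Minimal sketch: Anscombe's quartet has near-identical summary statistics,
# yet the four data sets look completely different once plotted.
# Values below are the commonly published Anscombe figures.
from statistics import mean, pvariance

x_123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]        # x for sets I-III
x_4   = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]            # x for set IV
ys = {
    "I":   [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    "II":  [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    "III": [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    "IV":  [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
}

def correlation(x, y):
    """Population correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pvariance(x) ** 0.5 * pvariance(y) ** 0.5)

for name, y in ys.items():
    x = x_4 if name == "IV" else x_123
    print(f"Set {name}: mean(y)={mean(y):.2f}  var(y)={pvariance(y):.2f}  "
          f"corr(x, y)={correlation(x, y):.3f}")
# All four sets print near-identical numbers (~7.5, ~3.75, ~0.816); the
# differences only become obvious when the points are actually graphed.
```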
I get the following question very often: what are the best practices for creating an enterprise reporting policy that defines when to use which reporting tool or application? Alas, as with everything else in business intelligence, the answer is not that easy. The old days of developers versus power users versus casual users are gone; the world is far more complex these days. In order to create such a policy, you need to consider the following dimensions (one illustrative way a policy could encode them is sketched after the list):
Type of analysis:
Historical (what happened)
Operational (what is happening now)
Analytical (why did it happen)
Predictive (what might happen)
Prescriptive (what should I do about it)
Exploratory (what's out there that I don't know about)
Level of interactivity:
Looking at static report output only
Lightly interacting with canned reports (sorting, filtering)
Fully interacting with canned reports (pivoting, drilling)
Assembling existing reports, visualizations, and metrics into customized dashboards
Full report authoring capabilities
Type of user:
External (customers, partners)
Report latency, as in how soon you need the report:
In a few days
In a few weeks
Type of decision:
Strategic (a few complex decisions/reports per month)
Tactical (many less-complex decisions/reports per month)
Operational (many complex/simple decisions/reports per day)
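To illustrate how such a policy might be operationalized, here is a small, purely hypothetical sketch in Python that encodes a few of the dimensions above as a lookup from request characteristics to a recommended tool category; the specific dimension values, rules, and tool categories are invented examples, not a prescribed mapping.

```python
# Illustrative only: a toy decision matrix mapping a few of the policy
# dimensions discussed above to a recommended class of reporting tool.
# Dimension values and recommendations are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReportingRequest:
    analysis: str        # "historical", "operational", "predictive", "exploratory", ...
    interactivity: str   # "static", "light", "full", "assemble", "author"
    audience: str        # "internal" or "external"
    latency: str         # "real_time", "days", "weeks"

def recommend_tool(req: ReportingRequest) -> str:
    """Very small rule set; a real policy would cover every dimension combination."""
    if req.audience == "external":
        return "embedded/extranet reporting with locked-down, canned reports"
    if req.analysis in ("predictive", "exploratory"):
        return "advanced analytics / data discovery workbench"
    if req.interactivity in ("assemble", "author"):
        return "self-service BI platform"
    if req.latency == "real_time":
        return "operational dashboards fed from the source systems"
    return "standard enterprise reporting environment"

print(recommend_tool(ReportingRequest("historical", "author", "internal", "days")))
# -> self-service BI platform
```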
Traditional BI approaches and technologies — even when using the latest technology, best practices, and architectures — almost always have a serious side effect: a constant backlog of BI requests. Enterprises where IT addresses more than 20% of BI requirements will continue to see the snowball effect of an ever-growing BI requests backlog. Why? Because:
BI requirements change faster than an IT-centric support model can keep up. Even with by-the-book BI applications, firms still struggle to turn BI applications on a dime to meet frequently changing business requirements. Enterprises can expect a life span of at least several years out of enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), and financial applications, but a BI application can become outdated the day it is rolled out. Even within implementation times of just a few weeks, the world may have changed completely due to a sudden mergers and acquisitions (M&A) event, a new competitive threat, new management structure, or new regulatory reporting requirements.
Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, is intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that it begins to take off.
Dell’s platform is based on a four-core Marvell ARMv7 SoC, which Dell claims delivers somewhat higher performance than the Calxeda part while drawing more power, at 15W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each carrying four server nodes, for a total of 48 servers/192 cores per enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated Layer 2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.
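For perspective, the density figures quoted above multiply out as follows; the per-enclosure power number is simply derived from Dell's per-node figure, not a published spec.

```python
# Back-of-the-envelope math for the "Copper" configuration described above.
modules_per_enclosure = 12   # vertically mounted server modules per 3U chassis
nodes_per_module = 4         # server nodes per module
cores_per_node = 4           # four-core Marvell ARMv7 SoC
watts_per_node = 15          # per Dell, including RAM and local disk

nodes = modules_per_enclosure * nodes_per_module   # 48 servers per 3U
cores = nodes * cores_per_node                     # 192 cores per 3U
watts = nodes * watts_per_node                     # ~720 W per enclosure (derived, not quoted)

print(f"{nodes} nodes, {cores} cores, roughly {watts} W in 3U")
```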
Dell is offering this server to selected customers, not as a generally available (GA) product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Currently Canonical is supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.
How does an enterprise — especially a large, global one with multiple product lines and multiple enterprise resource planning (ERP) applications — make sense of operations, logistics, and finances? There’s just too much information for any one person to process. It’s business intelligence (BI) to the rescue! But what is BI, and how does BI differ from reporting and management information systems (MIS)? What is the business impact, and what are the costs versus the benefits? What is the appropriate strategy for implementing BI and achieving continued BI success? Our new report will give business and IT executives an understanding of the four critical phases of strategizing around BI to achieve business goals — or “everything you wanted to know but were afraid to ask” about BI. Here’s a sneak preview of the kinds of topics the report covers and the kinds of BI questions one needs to ask in order to build an effective and efficient enterprise BI environment:
Prepare For Your BI Program
The future of BI is all about agility. IT no longer has exclusive control of BI platforms, tools, and applications; business users demand more empowerment (or make empowered changes without IT involvement), and previously unshakable pillars of the BI foundation such as relational databases are quickly being supplemented with alternative BI platforms. It’s no longer business as usual. Ask yourself:
What are the main business and IT trends driving BI?
What are the latest BI technologies that I need to know about?
Today IBM announced its plans to acquire Vivisimo, an enterprise search vendor with big data capabilities. Our research shows that only 1% to 5% of all enterprise data is in a structured, modeled format that fits neatly into enterprise data warehouses (EDWs) and data marts. The rest of enterprise data (and we are not even talking about external data, such as social media data) may not be organized into structures that easily fit into relational or multidimensional databases.

There’s also a chicken-and-egg syndrome at work here. Before you can put your data into a structure, such as a database, you need to understand what’s out there and what structures do or may exist. But in order to explore the data in the first place, traditional data integration technologies require some structure (tables, columns, etc.) to even start the exploration. So how do you explore something without a structure, without a model, and without preconceived notions? That’s where big data exploration and discovery technologies such as Hadoop and Vivisimo come into play. (There are many other vendors in this space as well, including Oracle Endeca, Attivio, and Saffron Technology. While these vendors may not directly compete with Vivisimo and all use different approaches and architectures, the final objective, data discovery, is often the same.)

Data exploration and discovery was one of our top 2012 business intelligence predictions. However, it’s only a first step in the full cycle of business intelligence and
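To make the "explore before you impose a structure" idea concrete, here is a minimal schema-on-read sketch in Python: rather than defining tables up front, it scans raw records as-is and reports which fields actually occur, which is the kind of discovery step these technologies automate at enterprise scale. The sample records and field names are invented for illustration.

```python
# A minimal schema-on-read sketch: discover the structure of raw records
# instead of forcing them into a predefined table first.
# The sample records below are invented for illustration.
import json
from collections import Counter

raw_records = [
    '{"customer": "acme", "amount": 120.0, "channel": "web"}',
    '{"customer": "globex", "amount": 75.5}',
    '{"customer": "acme", "comment": "called support twice", "sentiment": "negative"}',
]

field_counts = Counter()
for line in raw_records:
    record = json.loads(line)          # parse each record as it is, no schema required
    field_counts.update(record.keys()) # tally which fields actually show up

for field, count in field_counts.most_common():
    print(f"{field}: present in {count} of {len(raw_records)} records")
# Only after this kind of exploration do you know which fields are common
# enough to justify modeling them into a warehouse or mart.
```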
Join us at Forrester’s CIO Forum in Las Vegas on May 3 and 4 for “The New Age Of Business Intelligence.”
The amount of data is growing at tremendous speed, both inside and outside companies’ firewalls. Last year the public Web hit approximately 1 zettabyte (1 trillion gigabytes) of data, and the speed at which new data is created continues to accelerate, including unstructured data in the form of text, semistructured data from machine-to-machine (M2M) communication, and structured data in transactional business applications.
Fortunately, our technical capabilities to collect, store, analyze, and distribute data have also been growing at a tremendous speed. Reports that used to run for many hours now complete within seconds using new solutions like SAP’s HANA or other tailored appliances. Suddenly, a whole new world of data has become available to the CIO and his business peers, and the question is no longer whether companies should expand their data/information management footprint and capabilities but rather how and where to start. Forrester’s recent Strategic Planning Forrsights For CIOs data shows that 42% of all companies are planning an information/data project in 2012, more than for any other application segment, including collaboration tools, CRM, or ERP.
My colleagues and I have just completed yet another engagement with a large client (one of dozens recently) that was facing a “to be or not to be” decision: whether to move its BI platform and applications to the cloud. It’s a very typical question that our clients are asking these days, mainly for the following two reasons:
In many cases, their current on-premises BI solutions are too inflexible to support the business now, much less in the future.
The relative success of cloud-based CRM (SFDC and others) solutions may indicate that cloud offers a better alternative.
These clients put the two statements together and make the reasonable assumption that cloud BI will address many of their current BI challenges, just as cloud-based CRM addressed CRM challenges. Reasonable? Yes. Correct? Not so fast: the only correct answer is “It depends.”
Let’s take a couple of steps back. First, let’s define applications or packaged solutions vs. platforms (because BI requires both).
Applications or packaged solutions:
Subscribe to a complete solution, like CRM
Provide standard business functions to all customers (which makes them different from “hosting”; see below)
Are difficult to tailor to specific needs
Are usually referred to, synonymously but incorrectly (see below), as software-as-a-service (SaaS)
Platforms for building solutions:
Subscribe to tools and resources to build solutions like CRM
Provide standard technical functions to developers
Contain limited, if any, business application functionality
Are usually labeled either as platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS)