I just received yet another call from a reporter asking me to comment on yet another BI vendor announcing R integration. All leading BI vendors are embedding/integrating with R these days, so I was not sure what was really new in the announcement. I guess the real question is the level of integration. For example:
Since R is a scripting language, does the BI vendor provide a point-and-click GUI to generate R code?
Can R routines leverage all of the BI metadata (data structures, definitions, etc.) without having to redefine it just for R?
How easily can the output from R calculations (scores, rankings) be embedded in the BI reports and dashboards? Do the new scores just become automagically available for BI reports, or does somebody need to add them to BI data stores and metadata?
Can the BI vendor import/export R models based on PMML?
Is it a general R integration, or are there prebuilt vertical (industry-specific) or domain (finance, HR, supply chain, risk, etc.) metrics as part of a solution?
Where are R models executed? On the reporting server? The database server? A dedicated R server?
Then there's the whole business of model design, management, and execution, which is usually the realm of advanced analytics platforms. How many of these capabilities does the BI vendor provide?
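On the PMML question above: PMML is an XML standard for exchanging models, so a BI tool that supports it can score a model exported from R (e.g., via R's pmml package) without re-implementing it. Here's a minimal Python sketch of the idea; the element names follow the PMML specification, but the tiny regression model itself is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hand-written PMML fragment for y = 1.5 + 2.0 * x -- the kind of model
# R's pmml package can export from an lm() fit.
PMML_DOC = """<?xml version="1.0"?>
<PMML version="4.2" xmlns="http://www.dmg.org/PMML-4_2">
  <DataDictionary numberOfFields="2">
    <DataField name="x" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <RegressionTable intercept="1.5">
      <NumericPredictor name="x" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>"""

NS = {"p": "http://www.dmg.org/PMML-4_2"}

def score(pmml_text, row):
    """Score one input row against a single-table PMML regression model."""
    root = ET.fromstring(pmml_text)
    table = root.find(".//p:RegressionTable", NS)
    y = float(table.get("intercept"))
    for pred in table.findall("p:NumericPredictor", NS):
        y += float(pred.get("coefficient")) * row[pred.get("name")]
    return y

print(score(PMML_DOC, {"x": 3.0}))  # 1.5 + 2.0 * 3.0
```

The point is that the scoring side needs no R runtime at all, which is exactly what makes PMML import/export a meaningful integration checkbox.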
Did I get that right? Any other features/capabilities that really distinguish one BI/R integration from another? Really interested in hearing your comments.
Cloud Services Offer New Opportunities For Big Data Solutions
What’s better than writing about one hot topic? Well, writing about two hot topics in one blog post — and here you go:
The State Of BI In The Cloud
Over the past few years, business intelligence (BI) was the overlooked stepchild of cloud solutions and market adoption. Sure, some BI software-as-a-service (SaaS) vendors have been pretty successful in this space, but it was success in a niche compared with the four main SaaS applications: customer relationship management (CRM), collaboration, human capital management (HCM), and eProcurement. While those four applications each reached cloud adoption of 25% or more in North America and Western Europe, BI led the field of second-tier SaaS solutions, used by 17% of all companies in our Forrester Software Survey, Q4 2011. Considering that the main challenges of cloud computing are data security and integration efforts (yes, the story of simply swiping your credit card to get a fully operational cloud solution in place is a fairy tale), 17% cloud adoption is actually not bad at all; BI is all about data integration, data analysis, and security. With BI, there is of course the flexibility to choose which data a company is willing to run in a cloud deployment and which data sources to integrate — a choice that is very limited when implementing, say, a CRM or eProcurement cloud solution.
“38% of all companies are planning a BI SaaS project before the end of 2013.”
I wanted to run the following two questions and my answers by the community:
Q. What is the average age of reporting applications at large enterprises?
A. Reporting apps typically comprise source data integration, data models, metrics, reports, dashboards, and queries. I'd rank their longevity in that descending order: data sources are the most stable, while queries change all the time.
Q. What percentage of reporting applications are homegrown versus off-the-shelf?
A. These are by no means solid data points but rather my off-the-cuff, albeit educated, guesses:
The majority (let's say >50%) of reports are still being built in Excel and Access.
Very few (let's say <10%) are done in non-BI-specific environments (programming languages).
The other 40% I'd split 50/50 between:
Off-the-shelf reports and dashboards built into ERP or BI apps.
Reports custom-coded in BI tools.
Needless to say, this differs greatly by industry and business domain. Thoughts?
As the renowned data visualization expert Edward Tufte once said, “The world is complex, dynamic, multidimensional; the paper is static, flat. How are we to represent the rich visual world of experience and measurement on mere flatland?” Indeed, there’s just too much information out there for all categories of knowledge workers to visualize it effectively. More often than not, traditional reports using tabs, rows, and columns do not paint the whole picture or, even worse, lead an analyst to a wrong conclusion. Firms need to use data visualization because information workers:
Cannot see a pattern without data visualization. Simply seeing numbers on a grid often does not convey the whole story — and in the worst case, it can even lead to a wrong conclusion. This is best demonstrated by Anscombe’s quartet, where four groups of x/y coordinates with nearly identical summary statistics reveal very different patterns when represented in a graph.
Cannot fit all of the necessary data points onto a single screen. Even with the smallest reasonably readable font, single-line spacing, and no grid, one cannot realistically fit more than a few thousand data points on a single page or screen using numerical information only. When using advanced data visualization techniques, one can fit tens of thousands (an order-of-magnitude difference) of data points onto a single screen. In his book The Visual Display of Quantitative Information, Edward Tufte gives an example of more than 21,000 data points effectively displayed on a US map that fits onto a single screen.
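The Anscombe's quartet point is easy to verify numerically. A short Python sketch using Anscombe's published values shows that all four data sets share essentially the same means and correlation, even though plotting them reveals four completely different shapes:

```python
# Anscombe's quartet: four x/y data sets with near-identical summary
# statistics that look nothing alike when plotted.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    (x4, [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

for i, (x, y) in enumerate(quartet, 1):
    # Every set prints mean x = 9.00, mean y ~ 7.50, r ~ 0.816.
    print(f"set {i}: mean x={mean(x):.2f}  mean y={mean(y):.2f}  r={corr(x, y):.3f}")
```

On a grid of numbers the four sets are statistically indistinguishable; only a graph exposes the line, the curve, the outlier, and the vertical cluster.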
I get the following question very often: what are the best practices for creating an enterprise reporting policy on when to use which reporting tool or application? Alas, as with everything else in business intelligence, the answer is not that easy. The old days of developers versus power users versus casual users are gone; the world is far more complex these days. To create such a policy, you need to consider the following dimensions:
Historical (what happened)
Operational (what is happening now)
Analytical (why did it happen)
Predictive (what might happen)
Prescriptive (what should I do about it)
Exploratory (what's out there that I don't know about)
Looking at static report output only
Lightly interacting with canned reports (sorting, filtering)
Fully interacting with canned reports (pivoting, drilling)
Assembling existing reports, visualizations, and metrics into customized dashboards
Full report authoring capabilities
External (customers, partners)
Report latency, i.e., how soon the report is needed:
In a few days
In a few weeks
Strategic (a few complex decisions/reports per month)
Tactical (many less-complex decisions/reports per month)
Operational (many simple decisions/reports per day)
Traditional BI approaches and technologies — even when using the latest technology, best practices, and architectures — almost always have a serious side effect: a constant backlog of BI requests. Enterprises where IT addresses more than 20% of BI requirements will continue to see the snowball effect of an ever-growing BI requests backlog. Why? Because:
BI requirements change faster than an IT-centric support model can keep up. Even with by-the-book BI applications, firms still struggle to turn BI applications on a dime to meet frequently changing business requirements. Enterprises can expect a life span of at least several years out of enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), and financial applications, but a BI application can become outdated the day it is rolled out. Even with implementation times of just a few weeks, the world may have changed completely due to a sudden mergers and acquisitions (M&A) event, a new competitive threat, a new management structure, or new regulatory reporting requirements.
Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, is intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that the ARM server market begins to take off.
Dell’s platform is based on a four-core Marvell ARM V7 SOC implementation, which Dell claims delivers somewhat higher performance than the Calxeda part, although it draws more power: 15W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each with four server nodes, for a total of 48 servers/192 cores per enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated Layer 2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.
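The density figures above can be sanity-checked with simple arithmetic. A quick sketch (the per-node wattage is Dell's quoted figure; the rack extrapolation is my own assumption of a 42U rack filled with these 3U enclosures):

```python
# Density math for the PowerEdge-C "Copper" enclosure described above.
modules_per_3u = 12      # vertically mounted server modules per 3U enclosure
nodes_per_module = 4     # server nodes on each module
cores_per_node = 4       # four-core Marvell ARM SOC per node
watts_per_node = 15      # Dell's quoted figure, incl. RAM and local disk

nodes = modules_per_3u * nodes_per_module   # 48 servers per 3U
cores = nodes * cores_per_node              # 192 cores per 3U
power = nodes * watts_per_node              # 720 W per 3U enclosure

print(nodes, cores, power)  # 48 192 720

# Hypothetical extrapolation: a 42U rack holds 14 such 3U enclosures.
print(14 * nodes, 14 * cores)  # 672 nodes, 2688 cores per rack
```

At roughly 720 W per 48 nodes, the appeal for scale-out workloads is the watts-per-server figure rather than raw per-core performance.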
Dell is offering this server to selected customers, not as a generally available (GA) product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Currently Canonical is supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.
How does an enterprise — especially a large, global one with multiple product lines and multiple enterprise resource planning (ERP) applications — make sense of operations, logistics, and finances? There’s just too much information for any one person to process. It’s business intelligence (BI) to the rescue! But what is BI, and how does BI differ from reporting and management information systems (MIS)? What is the business impact, and what are the costs versus the benefits? What is the appropriate strategy for implementing BI and achieving continued BI success? Our new report will give business and IT executives an understanding of the four critical phases of strategizing around BI to achieve business goals — or “everything you wanted to know but were afraid to ask” about BI. Here’s a sneak preview of the kinds of topics the report covers and the kinds of BI questions one needs to ask in order to build an effective and efficient enterprise BI environment:
Prepare For Your BI Program
The future of BI is all about agility. IT no longer has exclusive control of BI platforms, tools, and applications; business users demand more empowerment (or make empowered changes without IT involvement), and previously unshakable pillars of the BI foundation such as relational databases are quickly being supplemented with alternative BI platforms. It’s no longer business as usual. Ask yourself:
What are the main business and IT trends driving BI?
What are the latest BI technologies that I need to know about?
Today IBM announced its plans to acquire Vivisimo, an enterprise search vendor with big data capabilities. Our research shows that only 1% to 5% of all enterprise data is in a structured, modeled format that fits neatly into enterprise data warehouses (EDWs) and data marts. The rest of enterprise data (and we are not even talking about external data, such as social media data) may not be organized into structures that easily fit into relational or multidimensional databases. There’s also a chicken-and-egg problem going on here. Before you can put your data into a structure, such as a database, you need to understand what’s out there and what structures do or may exist. But in order for you to explore the data in the first place, traditional data integration technologies require some structures to even start the exploration (tables, columns, etc.). So how do you explore something without a structure, without a model, and without preconceived notions? That’s where big data exploration and discovery technologies such as Hadoop and Vivisimo come into play. (There are many other vendors in this space as well, including Oracle Endeca, Attivio, and Saffron Technology. While these vendors may not directly compete with Vivisimo and all use different approaches and architectures, the final objective, data discovery, is often the same.) Data exploration and discovery was one of our top 2012 business intelligence predictions. However, it’s only a first step in the full cycle of business intelligence and analytics.
Join us at Forrester’s CIO Forum in Las Vegas on May 3 and 4 for “The New Age Of Business Intelligence.”
The amount of data is growing at tremendous speed — inside and outside of companies’ firewalls. Last year the public Web hit approximately 1 zettabyte (1 trillion gigabytes) of data, and the speed at which new data is created continues to accelerate, including unstructured data in the form of text, semistructured data from machine-to-machine (M2M) communication, and structured data in transactional business applications.
Fortunately, our technical capabilities to collect, store, analyze, and distribute data have also been growing at a tremendous speed. Reports that used to run for many hours now complete within seconds using new solutions like SAP’s HANA or other tailored appliances. Suddenly, a whole new world of data has become available to CIOs and their business peers, and the question is no longer whether companies should expand their data/information management footprint and capabilities but rather how and where to start. Forrester’s recent Strategic Planning Forrsights For CIOs data shows that 42% of all companies are planning an information/data project in 2012, more than for any other application segment — including collaboration tools, CRM, or ERP.