I get the following question very often: what are the best practices for creating an enterprise reporting policy that defines when to use which reporting tool or application? Alas, as with everything else in business intelligence, the answer is not that easy. The old days of developers versus power users versus casual users are gone; the world is far more complex these days. To create such a policy, you need to consider the following dimensions (a toy sketch of how a policy might encode them follows the list):
Type of analysis:
- Historical (what happened)
- Operational (what is happening now)
- Analytical (why did it happen)
- Predictive (what might happen)
- Prescriptive (what should I do about it)
- Exploratory (what's out there that I don't know about)

Level of interactivity:
- Looking at static report output only
- Lightly interacting with canned reports (sorting, filtering)
- Fully interacting with canned reports (pivoting, drilling)
- Assembling existing reports, visualizations, and metrics into customized dashboards
- Full report-authoring capabilities

User community:
- External (customers, partners)

Report latency (how soon you need the report):
- In a few days
- In a few weeks

Decision type:
- Strategic (a few complex decisions/reports per month)
- Tactical (many less-complex decisions/reports per month)
- Operational (many complex or simple decisions/reports per day)
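To make this concrete, here is a minimal, purely illustrative sketch of how these dimensions might be encoded and used to route a reporting request to a tool category. The categories and routing rules below are hypothetical placeholders for whatever your own policy defines, not a recommendation from the post.

```python
# Hypothetical sketch only: neither the dimension encoding nor the routing
# rules below come from the post; they are placeholders for a real policy.
from dataclasses import dataclass

@dataclass
class ReportRequirement:
    analysis: str        # e.g., "historical", "predictive", "exploratory"
    interactivity: str   # "static", "light", "full", "dashboard", "authoring"
    audience: str        # "internal" or "external"
    latency_days: float  # how soon the report is needed, in days
    decision_type: str   # "strategic", "tactical", "operational"

def suggest_tool(req: ReportRequirement) -> str:
    """Toy policy: route a requirement profile to a broad tool category."""
    if req.interactivity == "authoring":
        return "self-service BI / report-authoring platform"
    if req.analysis in ("predictive", "prescriptive", "exploratory"):
        return "advanced-analytics / data-discovery tool"
    if req.decision_type == "operational" and req.latency_days < 1:
        return "operational dashboard / embedded reporting"
    return "standard enterprise reporting platform"

print(suggest_tool(ReportRequirement(
    analysis="exploratory", interactivity="full",
    audience="external", latency_days=2, decision_type="tactical")))
# -> advanced-analytics / data-discovery tool
```

The point of such a sketch is simply that a usable policy must evaluate all of these dimensions together, not any one of them in isolation.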
Traditional BI approaches and technologies, even when using the latest technology, best practices, and architectures, almost always have a serious side effect: a constant backlog of BI requests. Enterprises where IT addresses more than 20% of BI requirements will continue to see the snowball effect of an ever-growing BI request backlog. Why? Because:
BI requirements change faster than an IT-centric support model can keep up with. Even with by-the-book implementations, firms still struggle to turn BI applications on a dime to meet frequently changing business requirements. Enterprises can expect a life span of at least several years out of enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), and financial applications, but a BI application can become outdated the day it is rolled out. Even within an implementation time of just a few weeks, the world may have changed completely due to a sudden mergers and acquisitions (M&A) event, a new competitive threat, a new management structure, or new regulatory reporting requirements.
If you think the term "Big Data" is wishy-washy waste, then you are not alone. Many struggle to find a definition of Big Data that is anything more than awe-inspiring hugeness. But Big Data is real if you have an actionable definition that you can use to answer the question: "Does my organization have Big Data?" Proposed here is a definition that takes into account both the measure of the data and the activities performed with it. Be sure to scroll down to calculate your Big Data Score.
Big Data Can Be Measured
Big Data exhibits extremity across one or many of these three alliterative measures: volume, velocity, and variety.
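As a toy illustration of how such a score might work (this is a sketch, not the post's actual calculator): rate each measure from 1 to 5 and combine the ratings. The scale, the simple sum, and the thresholds below are all assumptions made for the example.

```python
# Toy "Big Data Score": rate each of the three measures from 1 (ordinary)
# to 5 (extreme) and combine them. The scale, the simple sum, and the
# thresholds are assumptions for this sketch, not the post's actual calculator.
def big_data_score(volume: int, velocity: int, variety: int) -> str:
    for name, rating in (("volume", volume), ("velocity", velocity), ("variety", variety)):
        if not 1 <= rating <= 5:
            raise ValueError(f"{name} must be rated 1-5")
    total = volume + velocity + variety   # ranges from 3 (low) to 15 (extreme)
    if total >= 12:
        return f"score {total}/15: you likely have Big Data"
    if total >= 8:
        return f"score {total}/15: borderline; watch whichever measure is growing"
    return f"score {total}/15: conventional data management probably suffices"

print(big_data_score(volume=4, velocity=5, variety=3))
# -> score 12/15: you likely have Big Data
```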
I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?
To copy from Calxeda’s most recent blog post:
“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”
This is a significant milestone for many reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture; with HP having signed up as a partner this was essentially a non-issue, but proof is still good. It also establishes that at least one Linux distribution provider, in this case Ubuntu, is willing to provide a real supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.
Most importantly, we can get on with the important work of characterizing real benchmarks on real systems with real OS support. HP's discovery centers will certainly play a part in this process as well, and I am willing to bet that by the end of the summer we will have some compelling data on whether the ARM server will deliver on its performance and energy-efficiency promises. It's not a guaranteed slam dunk: Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than its predecessors. Add to that the demonstration by SeaMicro (ironically, now owned by AMD) of a Xeon-based system that delivered Xeon CPUs at a 10 W per CPU power overhead, an unheard-of efficiency.
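The benchmark characterization described above ultimately reduces to comparisons like the one sketched here. Every number in this snippet is a made-up placeholder, not a measured Calxeda, Intel, or SeaMicro result.

```python
# Back-of-the-envelope throughput-per-watt comparison of the kind the
# benchmarking work above would formalize. Every number here is a made-up
# placeholder, not a measured Calxeda, Intel, or SeaMicro result.
systems = {
    "ARM microserver (hypothetical)": {"throughput": 400.0, "watts": 5.0},
    "x86 server node (hypothetical)": {"throughput": 2000.0, "watts": 90.0},
    "dense Xeon node (hypothetical)": {"throughput": 1200.0, "watts": 35.0},
}

for name, s in systems.items():
    print(f"{name}: {s['throughput'] / s['watts']:.1f} units of work per watt")
```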
In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that run only Linux. "Hah!" you say. "Power already runs Linux, and quite well, according to IBM." This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much, if not all, of the performance advantage.
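To see why the cost offset matters, consider a quick illustration with made-up, normalized numbers (these are placeholders, not IBM or x86 list prices):

```python
# Illustrating the price/performance point with made-up, normalized numbers:
# a faster but pricier system can net out to similar (or worse) performance
# per dollar. These figures are placeholders, not IBM or x86 list prices.
power_perf, power_price = 1.4, 1.5  # hypothetical: 40% faster, 50% pricier
x86_perf, x86_price = 1.0, 1.0      # x86 baseline (normalized)

print(f"Power perf/$: {power_perf / power_price:.2f}")  # -> 0.93
print(f"x86 perf/$:   {x86_perf / x86_price:.2f}")      # -> 1.00
```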
Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (6 or 8 cores each, four threads per core) and ship with unlimited licenses for IBM's PowerVM hypervisor. Most importantly, in exchange for the limitation that they run only Linux, these systems are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting that the performance improvement shown by IBM-supplied benchmarks will overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition than Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer: IBM typically sells IFLs to customers with existing mainframes, whereas with Power Linux it will be attempting to sell to net-new customers as well as established accounts.
Today IBM announced its plans to acquire Vivisimo, an enterprise search vendor with big data capabilities. Our research shows that only 1% to 5% of all enterprise data is in a structured, modeled format that fits neatly into enterprise data warehouses (EDWs) and data marts. The rest of enterprise data (and we are not even talking about external data, such as social media data) may not be organized into structures that fit easily into relational or multidimensional databases.

There's also a chicken-and-egg problem here. Before you can put your data into a structure, such as a database, you need to understand what's out there and what structures do or may exist. But in order to explore the data in the first place, traditional data integration technologies require some structure (tables, columns, etc.) to even start the exploration. So how do you explore something without a structure, without a model, and without preconceived notions? That's where big data exploration and discovery technologies such as Hadoop and Vivisimo come into play. (There are many other vendors in this space as well, including Oracle Endeca, Attivio, and Saffron Technology. While these vendors may not directly compete with Vivisimo, and all use different approaches and architectures, the final objective, data discovery, is often the same.)

Data exploration and discovery was one of our top 2012 business intelligence predictions. However, it's only a first step in the full cycle of business intelligence.
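To illustrate the underlying idea (a toy sketch, not Vivisimo's or Hadoop's actual machinery): search-style discovery builds an index directly over raw text, so you can explore data before any schema or model exists.

```python
# Toy sketch of schema-free exploration (not Vivisimo's or Hadoop's actual
# machinery): build an inverted index straight over raw text, then explore
# by keyword, with no tables or columns modeled in advance.
from collections import defaultdict

documents = {
    "memo1.txt": "merger announcement delayed pending regulatory review",
    "note2.txt": "customer complaint about delayed shipment, refund issued",
    "mail3.txt": "regulatory filing submitted, review expected next quarter",
}

index = defaultdict(set)  # term -> set of documents containing it
for doc_id, text in documents.items():
    for token in text.lower().split():
        index[token.strip(",.")].add(doc_id)

print(sorted(index["regulatory"]))  # -> ['mail3.txt', 'memo1.txt']
print(sorted(index["delayed"]))     # -> ['memo1.txt', 'note2.txt']
```

Only after this kind of keyword-level exploration reveals what structures exist does it make sense to model the data for a warehouse or mart.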
Nowadays, there are two topics that I'm very passionate about. The first is the fact that spring is finally here and it's time to dust off my clubs and take in my first few rounds of golf. The second is the research I've been doing on the connection between big data and big process.
While most enterprise architects are familiar with the promise — and, unfortunately, the hype — of big data, very few are familiar with the newer concept of “big process.” Forrester first coined this term back in August of 2011 to describe the shift we see in organizations moving from siloed approaches to BPM and process improvement to more holistic approaches that stitch all the pieces together to drive business transformation.
Our working definition for big process is:
“Methods and techniques that provide a more holistic approach to process improvement and process transformation initiatives.”
The US economy continues to show improvement: for example, today's news that new jobless claims were near a four-year low. As the economic outlook has improved, so, too, have prospects for the US tech market. In our updated Forrester forecast for US tech purchases, "US Tech Market Outlook For 2012 To 2013: Improving Economic Prospects Create Upside Potential," we now project growth of 7.5% in 2012 and 8.3% in 2013 for business and government purchases of information technology goods and services (excluding telecom services). Including telecom services, business and government spending on information and communications technology (ICT) will increase by 7.1% in 2012 and 7.4% in 2013.
The lead tech growth category will shift from computer equipment in 2011 to software in 2012 and 2013, with IT consulting and systems integration services playing a strong supporting role. Following strong growth of 9.6% in 2011, computer equipment purchases will slow to 4.5% growth in 2012, as the lingering effects of Thailand's 2011 floods hurt parts supply in the first half and the prospect of Windows 8 dampens Wintel PC sales until the fall. Apple Macs and iPad tablets will post strong growth in the corporate market, though, and servers and storage should grow in the mid-single digits.
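Projecting spending from these rates is simple compounding. In the sketch below, only the 7.5% and 8.3% growth rates come from the forecast; the 2011 base figure is a made-up placeholder.

```python
# Compounding the forecast growth rates: only the 7.5% (2012) and 8.3% (2013)
# rates come from the forecast; the 2011 base below is a made-up placeholder.
base_2011 = 1000.0              # hypothetical 2011 spend, in $ billions
spend_2012 = base_2011 * 1.075  # +7.5% in 2012
spend_2013 = spend_2012 * 1.083 # +8.3% in 2013
print(f"2012: {spend_2012:.1f}, 2013: {spend_2013:.1f}")
# -> 2012: 1075.0, 2013: 1164.2
```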
At IBM's Smarter Analytics event this week, clients and partners presented success stories about how organizations are driving business value out of big data, analytics, and IBM Watson technology.
- The City of Dublin, Ireland, using thousands of data points from local transportation systems and traffic signals to optimize public transit and deliver information to riders.
- Seton Healthcare mining vast amounts of unstructured data captured in notes and dictation to get a more complete view of patients. Seton currently uses this information to construct programs that target treatments to the right patients, with the goal of minimizing hospitalizations in the way that most efficiently balances costs and benefits. The ability to mine unstructured data gives a much more complete view of each patient, including factors such as their support system, whether they have transportation to and from appointments, and whether or not they have a primary care physician.
- WellPoint using Watson technology to improve real-time decision-making by mining through millions of pages of medical information while doctors and nurses are face-to-face with patients.
But clients warned that, as much as the technology is advancing, the biggest hurdles remain internal ones. Clients stressed that their critical challenge is changing the organizational mindset to work in a new way that takes advantage of these great advances in technology. What did they suggest?
1) Executive sponsorship from the top (C-level)
2) Hiring or retraining for new roles like data scientists (schools like Syracuse are introducing and promoting new programs out of their iSchool, which can help with reskilling experienced talent from other areas)
The Indian government announced its 2012-2013 budget on March 16, 2012. While the announced budget does not contain direct incentives to promote the domestic ICT industry, there will be adequate indirect opportunities for vendors to explore. The excise duty will increase from 10% to 12%; this will have a marginal impact on the sale of PCs (desktops, laptops, and tablets), but the government’s focus on improving infrastructure, creating efficient delivery mechanisms, and improving e-governance will provide substantial indirect opportunities to IT vendors.
The latest budget aims to achieve long-term and inclusive growth for the economy and is in sync with my upcoming report, “India’s 12th National Five-Year Plan (2012-2017) Provides Massive ICT Opportunities.” The report answers questions such as why and how technology will act as a key enabler for the Indian government to achieve its growth target.
The 2012-2013 budget will provide adequate ICT opportunities for vendors, such as:
Packaged and industry-specific applications, e-governance, mobile apps, and analytics will support the strong need for sustainable revenue sources to fund investments. A common problem India faces today is the significant imbalance between expenditures and revenues. The budget categorically highlights the need to deliver more with existing resources, so we will witness increased demand for packaged and industry-specific applications, e-governance, and mobile apps that help generate sustainable revenue to fund investments. Also, the outlay for e-governance projects will increase by 210%, from the equivalent of US$62 million to US$192 million; applications from software vendors for e-governance initiatives will present some of the most exciting opportunities in India. And the government will use various analytical tools to improve revenue sources and take corrective action by identifying gaps.
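As a quick sanity check on the outlay figures cited above, growth from US$62 million to US$192 million does indeed work out to roughly a 210% increase:

```python
# Sanity check on the e-governance outlay figures cited above: growth from
# US$62 million to US$192 million is indeed roughly a 210% increase.
old_outlay, new_outlay = 62.0, 192.0  # US$ millions, from the text
increase_pct = (new_outlay - old_outlay) / old_outlay * 100
print(f"{increase_pct:.0f}% increase")  # -> 210% increase
```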