Data Quality Reboot Series For Big Data, Part 1: Master Data

Michele Goetz

What data do you trust? Increasingly, business stakeholders and data scientists trust the information hidden in the bowels of big data. Yet the way that data is mined mostly circumvents existing data governance and data architecture, because of the speed of insight required and the need to support data discovery over repeatable reporting.

The key to this challenge is a data quality reboot: rethink what matters, and rethink data governance.

Part 1 of our Data Quality Reboot Series rethinks master data management (MDM) in a big data world.

Current thinking: Master data as a single data entity. A common theme I hear from clients is that master data is about the linked data elements for a single record: duplication and variation are eliminated to drive consistency and uniqueness. In the current thinking, master data represents a defined, named entity (customer, supplier, product, etc.). This is a very static view of master data, and it does not account for the dimensions that matter within a particular use case. We typically see this approach tied tightly to an application (customer relationship management, enterprise resource planning) for a particular business unit (marketing, finance, product management, etc.). It may have been the entry point for MDM initiatives, and it allowed for smaller-scope, tangible wins. But it is difficult to expand that master data to other processes, analyses, and distribution points. Master data as a static entity only takes you so far, whether or not big data enters the discussion.
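To make the "single data entity" view concrete, here is a minimal sketch of golden-record consolidation in Python. The field names, the email-based match rule, and the most-recent-wins survivorship rule are illustrative assumptions on my part, not a reference to any particular MDM product.

    from datetime import date

    # Toy source records for one customer, arriving from different applications.
    # Field names and the survivorship rule below are illustrative assumptions.
    records = [
        {"source": "crm", "name": "Jon Smith", "email": "JSMITH@EXAMPLE.COM",
         "phone": None, "updated": date(2012, 1, 15)},
        {"source": "erp", "name": "John Smith", "email": "jsmith@example.com",
         "phone": "555-0100", "updated": date(2012, 3, 2)},
    ]

    def match_key(rec):
        """Match duplicates on a normalized email; real MDM uses fuzzier rules."""
        return rec["email"].strip().lower()

    def golden_record(dupes):
        """Survivorship: for each field, keep the most recently updated non-null value."""
        merged = {}
        for rec in sorted(dupes, key=lambda r: r["updated"]):
            for field, value in rec.items():
                if value is not None:
                    merged[field] = value
        return merged

    groups = {}
    for rec in records:
        groups.setdefault(match_key(rec), []).append(rec)

    masters = [golden_record(g) for g in groups.values()]
    print(masters)  # one consolidated record survives: a single, static master entity

The limitation described above is visible even in this toy: the merged record is a snapshot, with no notion of which fields matter for which use case.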

Read more

Big Data Meets Cloud

Holger Kisker

Cloud Services Offer New Opportunities For Big Data Solutions

What’s better than writing about one hot topic? Well, writing about two hot topics in one blog post — and here you go:

The State Of BI In The Cloud

Over the past few years, business intelligence (BI) has been the overlooked stepchild of cloud solutions and market adoption. Sure, some BI software-as-a-service (SaaS) vendors have been pretty successful in this space, but it was niche success compared with the four main SaaS applications: customer relationship management (CRM), collaboration, human capital management (HCM), and eProcurement. While those four applications each reached cloud adoption of 25% or more in North America and Western Europe, BI led the field of second-tier SaaS solutions, used by 17% of all companies in our Forrester Software Survey, Q4 2011. Considering that the main challenges of cloud computing are data security and integration effort (yes, the story of simply swiping your credit card to get a fully operational cloud solution in place is a fairy tale), 17% cloud adoption is actually not bad at all: BI is all about data integration, data analysis, and security. With BI, a company retains the flexibility to choose which data it runs in a cloud deployment and which data sources to integrate, a choice that is very limited when implementing, say, a CRM or eProcurement cloud solution.

“38% of all companies are planning a BI SaaS project before the end of 2013.”

Read more

The First Rule Of Big Data — Don't Talk About Big Data

Brian Hopkins

I’ll be chairing Big Data World Europe on September 19 in London; in advance of that event, here are a few thoughts.

Since late 2011, we’ve seen the big data noise level eclipse cloud and even BYOD, and we are seeing the backlash too (see Death By Big Data, to which I tweeted, “Yes, I suppose, ‘too much of anything is a bad thing’”). The number one thing clients want to know is, “What is my competition doing? Give me examples I can talk to my business about.” These questions reflect a curiosity on the part of IT and a “peeking under the hood to see what’s there” attitude.

My advice is to start the big data journey with your feet on the ground and your head around what it really is. Here are some “rules” I’ve been using with folks I talk to:

First rule of big data: don't talk about big data. The old adage holds true here: those that can do big data do it; those that can't, talk (yup, I see the irony :-)). I was on the phone with a VP of analytics who reflected that her IT people were constantly bringing her new technologies, like a dog with a bone. Her general reaction is: show me the bottom-line value. So what to do? Instead of talking to your business about big data, find ways to solve problems more affordably with data at greater scale. Now that's "doing big data."

Read more

Forrester Wave For Master Data Management — Enterprise, Big Data, Data Governance

Michele Goetz

As the new analyst on the block at Forrester, I keep hearing the same first question: "What research do you have planned?" Just to show that I'm up for the task, rather than keeping it simple with a thoughtful report on data quality best practices or a maturity assessment on data management, I thought I'd go for broke and dive into the master data management (MDM) landscape. Some might call me crazy, but this is about more than the adrenaline rush that comes from such a project. In more than 20 client inquiries over the past month, the questions show increasing sophistication in how managing master data can strategically contribute to the business.

What do I mean by this?

Number 1: Clients want to know how to bring together transactional data (structured) and content (semi-structured and unstructured) to understand the customer experience, improve customer engagement, and maximize customer value. Understanding customer touchpoints across social media, eCommerce, customer service, and content consumption provides a single customer view that lets you customize your interactions and stay highly relevant to your customers. MDM is at the heart of bringing this view together (a minimal sketch of that stitching follows after Number 2).

Number 2: Clients have begun to analyze big data in side projects as a way to identify opportunities for the business. This intelligence has matured to the point that clients are now exploring how to distribute and operationalize these insights throughout the organization. MDM is where those discoveries get aligned with governed master data for context and use.
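To make the single customer view from Number 1 concrete, here is a minimal sketch, assuming a simple cross-reference table that maps channel-local identifiers to a master customer ID. The channels, identifiers, and events are hypothetical; a real MDM hub maintains this mapping through matching and data stewardship.

    from collections import defaultdict

    # Hypothetical cross-reference table: (channel, local ID) -> master customer ID.
    xref = {
        ("twitter", "@jsmith"): "CUST-001",
        ("ecommerce", "u-48213"): "CUST-001",
        ("callcenter", "case-7731"): "CUST-001",
    }

    # Touchpoint events as they arrive from each channel (illustrative data).
    events = [
        {"channel": "twitter", "id": "@jsmith", "event": "complained about shipping"},
        {"channel": "ecommerce", "id": "u-48213", "event": "bought running shoes"},
        {"channel": "callcenter", "id": "case-7731", "event": "requested a refund"},
    ]

    # Resolve every event to its master ID to assemble the single customer view.
    view = defaultdict(list)
    for e in events:
        master_id = xref.get((e["channel"], e["id"]), "UNMATCHED")
        view[master_id].append((e["channel"], e["event"]))

    for customer, touchpoints in view.items():
        print(customer, touchpoints)
    # All three interactions roll up to CUST-001: one customer, one view.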

Read more

Let Big Data Predictive Analytics Rock Your World

Mike Gualtieri

I love predictive analytics. I mean, who wouldn't want to develop an application that could help you make smart business decisions, sell more stuff, make customers happy, and avert disasters? Predictive analytics can do all that, but it is not easy. In fact, it can range from impossible to merely hard, depending on:

  • Causative data. The lifeblood of predictive analytics is data. Data can come from internal systems, such as customer transactions or manufacturing defect data, and it is often appropriate to include data from external sources such as industry market data, social networks, or statistics. Contrary to popular technology belief, it does not always need to be big data. It is far more important that the data contain variables that can be used to predict an effect. That said, the more data you have, the better your chance of finding cause and effect; big data alone is no guarantee of success (see the sketch below).
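To illustrate the point about causative variables, here is a small sketch using scikit-learn (my choice of tool, not one the post names): two models are trained, one on a small dataset with a single causative feature and one on a dataset ten times larger containing only noise.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    def accuracy(X, y):
        """Fit a simple classifier and report held-out accuracy."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        return LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

    # Small dataset with a causative variable: churn depends on support calls.
    n_small = 1_000
    calls = rng.poisson(2.0, n_small)
    churn = (calls + rng.normal(0, 1, n_small) > 3).astype(int)
    print("small + causative:", accuracy(calls.reshape(-1, 1), churn))

    # Ten times more rows, but only noise features unrelated to the outcome.
    n_big = 10_000
    noise = rng.normal(size=(n_big, 5))
    churn_big = rng.integers(0, 2, n_big)
    print("big + noise:      ", accuracy(noise, churn_big))
    # Typical result: the small causative model scores far above chance,
    # while the ten-times-bigger noisy one hovers near 0.5.

More rows of the wrong variables buy you nothing; one variable that actually drives the effect buys a lot.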
Read more

Dell Joins The ARMs Race, Announces ARM-Based 'Copper' Server

Richard Fichera

Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, is intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that it begins to take off.

Dell’s platform is based on a four-core Marvell ARM v7 SoC, which Dell claims delivers somewhat higher performance than the Calxeda part, although it draws more power: 15 W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each module carrying four server nodes, for a total of 48 servers/192 cores per enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated Layer 2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.

Dell is offering this server to selected customers, not as a GA product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Canonical is currently supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.

Read more

What's Your Big Data Score?

Mike Gualtieri

If you think the term "Big Data" is wishy-washy waste, then you are not alone. Many struggle to find a definition of Big Data that is anything more than awe-inspiring hugeness. But Big Data is real if you have an actionable definition that you can use to answer the question: "Does my organization have Big Data?" Here is a proposed definition that takes into account both the measures of the data and the activities performed with it. Be sure to scroll down to calculate your Big Data Score.

Big Data Can Be Measured

Big Data exhibits extremity across one or more of these three alliterative measures:

Read more

ARM Arrives – Calxeda Shows Real Hardware Running Linux

Richard Fichera

I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?

To quote from Calxeda’s most recent blog post:

“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”

This is a significant milestone for many reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture; having HP sign up as a partner meant this was essentially a non-issue, but still, proof is good. It also establishes that at least one Linux distribution provider, in this case Canonical with Ubuntu, is willing to provide a real, supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.

Most importantly, we can get on with the important work of characterizing real benchmarks on real systems with real OS support. HP’s discovery centers will certainly play a part in this process, and I am willing to bet that by the end of the summer we will have some compelling data on whether the ARM server will deliver on its performance and energy efficiency promises. It’s not a slam-dunk guaranteed win: Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than its predecessors. Add to that the demonstration of a Xeon-based system by SeaMicro (ironically, now owned by AMD) that delivered Xeon CPUs at a 10 W per-CPU power overhead, an unheard-of efficiency.

Read more

IBM Rounds Out Its Linux Offerings With Power Linux

Richard Fichera

In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that only run Linux. “Hah!” you say. “Power already runs Linux, and quite well according to IBM.” This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much if not all of the performance advantage.

Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (6 or 8 cores each, 4 threads per core), and both ship with unlimited licenses for IBM’s PowerVM hypervisor. Most importantly, in exchange for the limitation that they will run only Linux, these systems are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting on the performance improvement shown by IBM-supplied benchmarks to overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition than Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer: IBM typically sells IFLs to customers with an existing mainframe, whereas with Power Linux it will be attempting to sell to net-new customers as well as established accounts.

Read more

Data Discovery And Exploration - IBM Acquires Vivisimo

Boris Evelson

Today IBM announced its plans to acquire Vivisimo, an enterprise search vendor with big data capabilities. Our research shows that only 1% to 5% of all enterprise data is in a structured, modeled format that fits neatly into enterprise data warehouses (EDWs) and data marts. The rest of enterprise data (and we are not even talking about external data, such as social media data) may not be organized into structures that easily fit into relational or multidimensional databases.

There’s also a chicken-and-egg problem here. Before you can put your data into a structure, such as a database, you need to understand what’s out there and what structures do or may exist. But in order to explore the data in the first place, traditional data integration technologies require some structure to even start the exploration (tables, columns, etc.). So how do you explore something without a structure, without a model, and without preconceived notions? That’s where big data exploration and discovery technologies such as Hadoop and Vivisimo come into play. (There are many other vendors in this space as well, including Oracle Endeca, Attivio, and Saffron Technology. While these vendors may not directly compete with Vivisimo, and all use different approaches and architectures, the final objective, data discovery, is often the same.)

Data exploration and discovery was one of our top 2012 business intelligence predictions. However, it’s only a first step in the full cycle of business intelligence and …
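Before the jump: to make "exploration without a model" concrete, here is a minimal profiling sketch in Python. The sample lines and the key=value heuristic are my own illustrative assumptions, not Vivisimo's or IBM's approach; the idea is simply to scan raw, unmodeled text and count candidate fields, so you discover what structure might exist before designing any tables.

    import re
    from collections import Counter

    # Raw, unmodeled lines, as you might find them before any schema exists.
    raw_lines = [
        "ts=2012-04-25T10:01 user=jsmith action=search q='arm servers'",
        "ts=2012-04-25T10:02 user=mlee action=view doc=whitepaper-17",
        "April 25: customer called about invoice 4471, escalated to tier 2",
        "ts=2012-04-25T10:05 user=jsmith action=download doc=whitepaper-17",
    ]

    # Discovery pass: which key=value fields appear, and how often?
    field_counts = Counter()
    free_text_lines = 0
    for line in raw_lines:
        keys = re.findall(r"(\w+)=", line)
        if keys:
            field_counts.update(keys)
        else:
            free_text_lines += 1  # no candidate structure found in this line

    print("candidate fields:", dict(field_counts))
    print("free-text lines: ", free_text_lines)
    # The counts suggest a schema (ts, user, action, q, doc) covering three of
    # the four lines; the leftover free text is what search-driven discovery
    # tools such as Vivisimo target.

At cluster scale the same profiling pass would run as a MapReduce job over raw files, which is the role Hadoop plays in the discovery scenario Boris describes.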

Read more