Do Not Confuse Data Governance With Data Management

Henry Peyret

Last week, I participated in a roundtable at a conference in Paris organized by the French branch of DAMA, the international data management organization. During the question-and-answer portion, it became clear that most of the audience was confusing data management with data governance (DG). This is a challenge my Forrester colleague Michele Goetz identified early on in the DG tooling space. Because data quality and master data management tools embed governance features, many view them as data governance tooling. In reality, they remain data management tooling — their goal is to improve data quality by executing rules. This tooling confusion is just one consequence of how widely the word governance is misused and misunderstood, and that misunderstanding leads to struggling data governance efforts.

So what is “governance”? Governance is the collaboration, organization, and metrics that facilitate a decision path between at least two conflicting objectives; in other words, it is about finding an acceptable balance between the interests of two or more parties. For example, IT governance is needed when you would like to support all possible business projects but have limited budget, skills, or resources. Governance is needed when different stakeholders have different objectives, and the outcome of governance is that those objectives do not all get the same priority. If everyone shares the same objective, then what you are doing is data management, not governance.

Read more

Information Fabric 3.0 Delivers The Next Generation Of Data Virtualization

Noel Yuhanna

For decades, firms have deployed applications and BI on independent databases and warehouses, supporting custom data models, scalability, and performance while speeding delivery. It has become a nightmare to integrate the data proliferating across these sources in order to deliver the unified view of business data required to support new business applications, analytics, and real-time insights. The explosion of new sources, driven by the triple-threat trends of mobile, social, and the cloud and amplified by partner data, market feeds, and machine-generated data, further aggravates the problem. Poorly integrated business data often leads to poor business decisions, erodes customer satisfaction and competitive advantage, and slows product innovation — ultimately limiting revenue.

Forrester’s latest research reveals how leading firms are coping with this explosion using data virtualization, leading us to release a major new version of our reference architecture, Information Fabric 3.0. Since Forrester invented the category of data virtualization eight years ago with the first version of information fabric, these solutions have continued to evolve. In this update, we reflect new business requirements and new technology options, including big data, cloud, mobile, distributed in-memory caching, and dynamic services. Use information fabric 3.0 to inform and guide your data virtualization and integration strategy, especially where you require real-time data sharing, complex business transactions, more self-service access to data, integration of all types of data, and increased support for analytics, including predictive analytics.
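To make the core idea concrete, here is a minimal sketch of query-time federation, the pattern underneath data virtualization. This is not Forrester's reference architecture; the sources, tables, and field names are hypothetical, and a real fabric adds caching, security, and transformation layers on top:

    import sqlite3

    # Source 1: an operational database (a stand-in for a warehouse or app DB).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer_id TEXT, total REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [("c1", 120.0), ("c1", 80.0), ("c2", 35.0)])

    # Source 2: a cloud/SaaS service (stubbed as a dict so the sketch runs).
    crm_api = {"c1": {"name": "Acme Corp"}, "c2": {"name": "Globex"}}

    def virtual_customer_view(customer_id):
        """Federate both sources at request time; nothing is copied or staged."""
        (order_total,) = db.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (customer_id,)).fetchone()
        profile = crm_api.get(customer_id, {})
        return {"customer_id": customer_id,
                "name": profile.get("name"),
                "lifetime_order_total": order_total}

    print(virtual_customer_view("c1"))

The point is that the unified view is computed when it is requested, so consumers see current data from every source without another physical copy to build and keep in sync.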

Information fabric 3.0 reflects significant innovation in data virtualization solutions, including:

Read more

Make A Data Confidence Index Part Of Your BI Architecture

Boris Evelson

I often see two opposite extremes when I talk to clients who are trying to deal with data confidence challenges. One group typically sees it as a problem that IT has to address, while business users continue to use spreadsheets and other homegrown apps for BI. At the other extreme, there's a strong, take-no-prisoners, top-down mandate to use only enterprise BI apps. In this case, a CEO may impose a rule that says: you can't walk into my office to ask me for a decision or a budget based on anything other than data coming from an enterprise BI application. This may sound great, but it's often not very practical; the world is not that simple, and there are many shades of grey between these two extremes. No large, global, heterogeneous, multi-business-line and multi-product-line enterprise can ever hope to clean up all of its data; it's a continuous journey. The key is knowing which data sources feed your BI applications and how confident you are in the accuracy of the data coming from each source.

For example, here's one approach that I often see work very well. In this approach, IT assigns a data confidence index (an extra column attached to each transactional record in your data warehouse, data mart, etc.) during ETL processes. The rules may look something like this (a short code sketch follows the list):

  • If data is coming from a system of record, the index = 100%.
  • If data is coming from nonfinancial systems and it reconciles with your general ledger (G/L), the index = 100%. If not, it's < 100%.
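
In code, that ETL-time assignment might look like the following minimal sketch; the source labels, the reconciliation flag, and the 0.8 fallback value are illustrative assumptions, not prescribed values:

    SYSTEMS_OF_RECORD = {"general_ledger", "erp"}  # hypothetical source labels

    def confidence_index(record, reconciles_with_gl):
        """Return the confidence index stored as an extra column on the record."""
        if record["source_system"] in SYSTEMS_OF_RECORD:
            return 1.0   # system of record: index = 100%
        if reconciles_with_gl:
            return 1.0   # nonfinancial source that reconciles with the G/L: 100%
        return 0.8       # unreconciled source: < 100%; tune this per source

    # During the ETL load, attach the index to each transactional record.
    record = {"source_system": "marketing_app", "amount": 500.0}
    record["data_confidence_index"] = confidence_index(record,
                                                       reconciles_with_gl=False)

BI applications can then surface the index alongside each report, or aggregate it across sources, so decision-makers see not just the numbers but how much to trust them.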
Read more

How To Partner With Data Quality Pros To Deliver Better Customer Service Experiences

Kate Leggett

Customer service leaders know that a good customer experience has a quantifiable impact on revenue, as measured by increased rates of repurchase, increased recommendations, and decreased willingness to defect from a brand. They also understand, at least conceptually, that clean data is important, but many can't connect master data management and data quality investments to direct improvements in customer service metrics. As a result, IT initiates data projects more than two-thirds of the time, while data projects that directly affect customer service processes rarely get funded.

Customer service leaders need to partner with data management pros — often working within IT — to reframe the conversation. Historically, IT organizations have attempted to drive technology investments with the ambiguous goal of “cleaning dirty customer data” within CRM, customer service, and other applications. Instead, this team must articulate the impact that poor-quality data has on critical business and customer-facing processes.

To do this, start by taking an inventory of the quality of the data that is currently available (a simple profiling sketch follows the list):

  • Chart the customer service processes that your customer service agents follow. Typically, 80% of customer calls can be attributed to 20% of the issues handled.
  • Understand what customer, product, order, and past customer interaction data are needed to support these processes.
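
As a starting point for that inventory, a simple completeness profile over the fields those processes depend on can quantify the gap. The field names below are hypothetical; substitute the attributes your own processes require:

    REQUIRED_FIELDS = ["customer_id", "product_id", "order_id", "last_interaction"]

    def completeness_report(records):
        """Share of records with a usable value for each required field."""
        records = list(records)
        total = max(len(records), 1)  # guard against an empty extract
        return {field: sum(1 for r in records
                           if r.get(field) not in (None, "")) / total
                for field in REQUIRED_FIELDS}

    sample = [
        {"customer_id": "c1", "product_id": "p9", "order_id": "o1",
         "last_interaction": "2013-05-02"},
        {"customer_id": "c2", "product_id": None, "order_id": "o2",
         "last_interaction": ""},
    ]
    print(completeness_report(sample))  # e.g. {'customer_id': 1.0, 'product_id': 0.5, ...}

Scores like these turn “dirty customer data” into a concrete statement, such as “product data is missing on half of the records behind our top call driver,” which customer service leaders can tie to handle time and first-contact resolution.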
Read more