You can't bring up semantics without someone apologizing for the geekiness of the discussion. If you're a data person like me, geek away! But for everyone else, it's a topic best left alone. Well, like every geek before them, the semantic geeks are now having their day — and may just rule the data world.
It begins with a seemingly innocent set of questions:
"Is there a better way to master my data?"
"Is there a better way to understand the data I have?"
"Is there a better way to bring data and content together?"
"Is there a better way to personalize data and insight to be relevant?"
Semantics discussions today are born of the data chaos that traditional data management and governance capabilities are struggling to contain. They're born of the fact that even as the best big data technology and analytics are adopted, business stakeholder satisfaction with analytics decreased by 21% from 2014 to 2015, according to Forrester's Global Business Technographics® Data And Analytics Survey, 2015. Innovative data architects and vendors realize that semantics is the key to bringing context and meaning to our information so we can extract those much-needed business insights at scale and, more importantly, personalized.
First there was Hadoop. Then there were data scientists. Then came Agile BI on big data. Drum roll, please . . . bum, bum, bum, bum . . .
Now we have data preparation!
If you are as passionate about data quality and governance as I am, then the five-plus-year wait for a scalable capability to take on data trust is amazingly validating. The era of "good enough" when it comes to big data is giving way to an understanding that the way analysts got away with "good enough" was through a significant amount of manual data wrangling. As an analyst, it must have felt like your parents saying you can't see your friends and play outside until you clean your room (and if it's anything like my kids' rooms, that's a tall order).
There is no denying that analysts are the first to benefit from data preparation tools such as Alteryx, Paxata, and Trifacta. It's a matter of time to value for insight. What is still unrecognized in the broader data management and governance strategy is that these early forays are laying the foundation for data citizenry and the cultural shift toward a truly data-driven organization.
Today's data reality is that consumers of data are like any other consumers; they want to shop for what they need. This data consumer journey begins by looking in their own spreadsheets, databases, and warehouses. When they can't find what they want there, data consumers turn to external sources such as partners, third parties, and the Web. Data preparation tools are what help them define the value of data and ultimately decide whether to procure it and possibly pay for it. The other outcome of this data-shopping experience is that they take on the risk and accountability for the value of the data as it is introduced into analysis, decision-making, and automation.
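Under the hood, that value assessment usually starts with a quick profile of the candidate data. Here is a minimal sketch in pandas — the column names and values are hypothetical — of the completeness and uniqueness checks a data preparation tool automates before a consumer commits to a source:

```python
import pandas as pd

# Hypothetical external data set a data consumer is "shopping" for.
candidate = pd.DataFrame({
    "email":   ["a@x.com", None, "c@x.com", "c@x.com"],
    "country": ["US", "US", None, "DE"],
})

# Profile completeness (percent of non-null values) and uniqueness
# (distinct values) per column -- the basics of deciding whether the
# data is worth procuring.
profile = pd.DataFrame({
    "pct_complete": (1 - candidate.isna().mean()) * 100,
    "distinct":     candidate.nunique(),
})
print(profile)
```

A real tool adds pattern checks, outlier detection, and sampling on top, but even this two-column profile is often enough to reject a source before paying for it.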
Since when did data management and data governance become interchangeable?
This is a question that has both confounded and frustrated me. As data management vendors pursued connections with business stakeholders (because business units play an increasing role in decision making and hold the purse strings for technology purchases), the term "data governance" was hijacked to snuff out the bad taste of IT data projects gone sour.
The funny thing is, vendors actually began drinking their own marketing Kool-Aid and now think of their MDM, quality, security, and lifecycle management products as data governance tools and solutions. Storage and virtualization vendors are even starting to grok onto this, claiming they govern data. Big data vendors jumped over data management altogether and simply call their catalog, security, and lineage capabilities data governance.
Yes, this is a pet peeve of mine - just as data integration is now called blending, and data cleansing and transformation are now called wrangling or data preparation. But more on that in another blog...
First, you (vendor or data professional) cannot simply sweep the history of legacy data investments that were limited in results and painful to implement under the Mad Men carpet. Own it, and address the challenges through technology innovation rather than words.
You’ve heard it before but we said it again – this time in our recent webinar. There's a new kid in town: the chief data officer. Why the new role? Because of an increasing awareness of the value of data and the painful recognition of an inability to take advantage of the opportunities that it provides — due to technology, business, or basic cultural barriers. That was the topic of our webinar presented to a full house a few days ago; we discussed our recent report, Top Performers Appoint Chief Data Officers. Fortunately for those who weren’t there, the presentation – Chief Data Officers Cross The Chasm – is available (to clients) for download.
As the title suggests, chief data officers are no longer just for the early adopters — those enthusiasts and visionaries at the forefront of new technology trends. With 45% of global companies having appointed a chief data officer (not to be confused with a chief digital officer, as we specifically asked about "data") and another 16% planning to make an appointment in the next 12 months, according to Forrester's Business Technographics surveys, the role of the chief data officer really has moved into the mainstream.
However, many companies remain unsure whether they need a CDO. Many of those in our audience fell into that category. We asked two questions of the audience to gauge their interest and their actions to improve their data maturity:
Are you making organizational changes specifically to improve your data capabilities?
Gene Leganza and I just published a report on the role of the Chief Data Officer that we’re hearing so much about these days – Top Performers Appoint Chief Data Officers. To introduce the report, we sat down with our press team at Forrester to talk about the findings, and the implications for our clients.
Forrester PR: There's a ton of fantastic data in the report around the CDO. If you had to call out the most surprising finding, what would top your list?
Gene: No question it's the high correlation between high-performing companies and those with CDOs. Jennifer and I both feel that strong data capabilities are critical for organizations today and that the data agenda is quite complex and in need of strong leadership. That all means that it's quite logical to expect a correlation between strong data leadership and company performance - but given the relative newness of the CDO role it was surprising to see firm performance so closely linked to the role.
Of course, you can't infer cause and effect from correlation – the data could mean that execs in high-performing companies think having a CDO role is a good idea as much as it could mean CDOs are materially contributing to high performance. Either way that single statistic should make one take a serious look at the role in organizations without clear data leadership.
When I think about data, I can't help but think about hockey. As a passionate hockey mom, it's hard to separate my conversations about data all week with clients from the practices and games I sit through, screaming encouragement to my son and his team (sometimes to the embarrassment of my husband!). So when I recently saw a documentary on the building of the Russian hockey team that our miracle US hockey team beat at the 1980 Olympics, the story of Anatoli Tarasov stuck with me.
Before the 1960s, Russia didn't have a hockey team. Then the Communist party determined that it was critical that Russia build one — and compete on the world stage. They selected Anatoli Tarasov to build and coach the team. He couldn't see film of hockey games. He couldn't watch teams play. There was no reference for how to play the game. And yet, he built a world-class hockey club that not only beat the great Nordic teams but went on to crush the Canadian teams that were the standard for hockey excellence.
This is a lesson for us all when it comes to data. Do we stick with our standards and recipes from Inmon and Kimball? Do we follow check-box assessments from CMMI, DM-BOK, or TOGAF's information architecture framework? Do we rely on governance compliance to police our data?
Or do we break the rules and create our own that are based on outcomes and results? This might be the scarier path. This might be the riskier path. But do you want data to be where your business needs it, or do you want to predefine, constrain, and bias the insight?
It is easy to get ahead of ourselves with all the innovation happening with data and analytics. I wouldn't call it hype, as that would imply no value or competency has been achieved. But I would say that what is bright, shiny, and new is always more interesting than the ordinary.
And, to be frank, there is still a lot of ordinary in our data management world.
In fact, over the past couple of weeks, my discussions with companies have uncommonly focused on the ordinary. This seemed unusual: the questions focused on basic, foundational aspects of data management and governance, and they came from companies I have seen talk publicly about their data management successes.
"Where do I clean the data?"
"How do I get the business to invest in data?"
"How do I get a single customer view of my customer for marketing?"
What this tells me is that companies are under siege by zombie data.
Data is living in our business under outdated data policies and rules. Data processes and systems are persisting single-purpose data. As data pros turn over application rocks and navigate through the database bogs to centralize data for analytics and virtualize views for new data capabilities, zombie data is lurching out to consume more of the environment, blocking other potential insight to keep the status quo.
The questions you and your data professional cohorts are asking, as illustrated above, are anything but basic. The fact that these foundational building blocks have to be assessed once again demonstrates that organizations are on a path to crush the zombie data siege, democratize data and insight, and advance the business.
Keep asking basic questions — if you aren't, zombie data will eventually take over, and you and your organization will become part of the walking dead.
Big data and Hadoop (Yellow Elephants) are so synonymous that you can easily overlook the vast landscape of architecture that goes into delivering on big data value. Data scientists (Pink Unicorns) are also raised to god status as the only role that can truly harness the power of big data, making insights from big data seem as far away as a manned journey to Mars. However, this week, as I participated in the DGIQ conference in San Diego and colleagues and friends attended the Hadoop Summit in Belgium, it became apparent that organizations are waking up to the fact that there is more to big data than a "cool" playground for the privileged few.
The perspective that the insight supply chain is the driver and catalyst of actions from big data is starting to take hold. Capital One, for example, illustrated that if insights from analytics and data from Hadoop are going to influence operational decisions and actions, you need the same degree of governance as you established in traditional systems. In a conversation, Amit Satoor of SAP Global Marketing described a performance apparel company linking big data to operational and transactional systems at the edge of customer engagement, noting that it had to be easy for application developers to implement.
Hadoop distribution, NoSQL, and analytics vendors need to step up the value proposition to be about more than where the data sits and how sophisticated you can get with the analytics. In the end, if you can't govern quality, security, and privacy at the scale of edge end-user and customer engagement scenarios, the effort to migrate data to Hadoop and the investment in analytic tools cost more than dollars; they cost you your business.
The business has an insatiable appetite for data and insights. Even in the age of big data, the number one issue of business stakeholders and analysts is getting access to the data. If access is achieved, the next step is "wrangling" the data into a usable data set for analysis. The term "wrangling" itself creates a nervous twitch, unless you enjoy the rodeo. But the goal of the business isn't to be an adrenaline junkie. The goal is to get insight that helps it smartly navigate increasingly complex business landscapes and customer interactions. Those that get this have introduced a softer term, "blending," another term dreamed up by data vendor marketers to avoid the dreaded conversation about data integration and data governance.
The reality is that you can't market-message your way out of the fundamental problem that big data is creating data swamps, even in the best-intentioned efforts. (This is the reality of big data's first principle of schema-less data.) Data governance for big data is primarily relegated to cataloging data and its lineage, which serves the data management team but creates a new kind of nightmare for analysts and data scientists: working with a card catalog that rivals the Library of Congress. Dropping in a self-service business intelligence tool or advanced analytics solution doesn't solve the problem of familiarizing the analyst with the data. Analysts will still spend up to 80% of their time just trying to create the data set they need to draw insights.
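That 80% goes into steps like the following. Here is a minimal sketch in pandas — the tables and column names are hypothetical — of the deduplication, type fixing, standardization, and blending an analyst repeats for every new data set before any analysis can start:

```python
import pandas as pd

# Hypothetical raw extracts pulled from two different systems.
sales = pd.DataFrame({
    "cust_id": [1, 2, 2, 3],
    "amount":  ["100", "250", "250", None],  # numbers stored as strings, one missing
})
crm = pd.DataFrame({
    "cust_id": [1, 2, 3],
    "region":  ["east", "WEST", "East "],    # inconsistent casing and whitespace
})

# Typical "wrangling": dedupe, fix types, fill gaps, standardize values.
sales = sales.drop_duplicates()
sales["amount"] = pd.to_numeric(sales["amount"]).fillna(0)
crm["region"] = crm["region"].str.strip().str.lower()

# Then "blend" the cleaned sources into one analysis-ready data set.
blended = sales.merge(crm, on="cust_id", how="left")
```

Every one of these steps encodes a judgment call (is a missing amount really zero?), which is exactly the risk and accountability the analyst quietly takes on when governance stops at the catalog.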
Early this year, a host of inquiries came in about data quality challenges in CRM systems. This led to a number of joint inquiries between myself and CRM expert Kate Leggett, VP and Principal Analyst on our application development and delivery team. It seems the expectation that CRM systems could provide a single trusted view of the customer was starting to hit a reality check. There is more to it than collecting customer data and activities: you need validation, cleansing, standardization, consolidation, enrichment, and hierarchies. CRM applications only get you so far, even with more and more functionality being added to reduce duplicate records and enforce classifications and groups. So, what should companies do?
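The consolidation step in particular goes beyond the duplicate checks built into CRM applications. Here is a minimal sketch in pandas — the records and field names are hypothetical — of standardizing names into a match key and applying a simple survivorship rule to produce one golden record per customer:

```python
import pandas as pd

# Hypothetical CRM extract with the usual duplicate-record problems.
records = pd.DataFrame({
    "name":  ["Acme Corp", "ACME CORP.", "Globex Inc"],
    "email": ["info@acme.com", "info@acme.com", "sales@globex.com"],
    "phone": ["555-0100", None, "555-0199"],
})

# Standardize before matching: uppercase and strip punctuation so that
# "Acme Corp" and "ACME CORP." produce the same match key.
records["name_key"] = (records["name"].str.upper()
                       .str.replace(r"[^A-Z0-9 ]", "", regex=True)
                       .str.strip())

# Consolidate: group likely duplicates on the match key plus email, and
# keep the most complete value for each field ("survivorship").
golden = (records
          .groupby(["name_key", "email"], as_index=False)
          .agg({"name": "first",
                "phone": lambda s: s.dropna().iloc[0] if s.notna().any() else None}))
```

Production MDM tools add fuzzy matching, enrichment from third-party sources, and hierarchy management on top, but this is the core mechanic CRM deduplication alone doesn't deliver.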