We last spoke about how to reboot our thinking on master data to provide a more flexible and useful structure when working with big data. In the structured data world, having a model to work from provides comfort. However, there is an element of comfort and control that has to be given up with big data, and that is our definition and the underlying premise for data quality.
Current thinking: Persistence of cleansed data. For years, data quality efforts have focused on finding and correcting bad data. We used the word "cleansing" to represent the removal of what we didn't want, exterminating it like an infestation of bugs or rats. Knowing what your data is, what it should look like, and how to transform it into submission defined the data quality handbook. Whole practices were stood up to track data quality issues, establish workflows and teams to clean the data, and produce reports showing what was done. Accomplishment was measured by progress on counts of duplicates, complete records, last-update recency, conformance to standards, and so on. Those reports were often tied to our personal goals as well. Now comes big data: how do we cleanse and tame that beast?
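To make that traditional scorecard concrete, here is a minimal, hypothetical sketch in Python with pandas; the dataset and column names are invented purely for illustration and are not drawn from any particular practice:

```python
import pandas as pd

# Hypothetical customer extract; columns and values are invented for illustration.
records = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "email":       ["a@x.com", "b@x.com", "b@x.com", None, "d@x.com"],
    "postal_code": ["02139", "10001", "10001", "94105", None],
})

# The classic "cleanse and report" metrics: duplicates, completeness, conformance.
duplicate_rate = records.duplicated(subset=["customer_id"]).mean()
complete_rate  = records.notna().all(axis=1).mean()
zip_ok         = records["postal_code"].str.fullmatch(r"\d{5}").fillna(False).astype(bool)

print(f"duplicate rate:       {duplicate_rate:.0%}")
print(f"complete records:     {complete_rate:.0%}")
print(f"ZIP conformance rate: {zip_ok.mean():.0%}")
```

The point is the persistence: the cleansed result and its scorecard become artifacts that someone has to maintain over time.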
Reboot: Disposability of data quality transformation. The answer to the above question is: maybe you don't. The nature of big data doesn't lend itself to traditional data quality practices. The volume may be too large to process. The volatility and velocity of the data make it change too frequently to manage. The variety of data, in both scale and visibility, is ambiguous.
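One way to read that "disposability" is to leave the raw data untouched and apply whatever transformation a given analysis needs at read time, then throw that result away rather than persisting a cleansed copy. A minimal sketch of the idea, again with hypothetical file and column names and pandas assumed:

```python
import pandas as pd

def read_events(path: str) -> pd.DataFrame:
    """Load the raw events exactly as they landed; nothing is cleansed in place."""
    return pd.read_json(path, lines=True)

def for_revenue_analysis(raw: pd.DataFrame) -> pd.DataFrame:
    """A disposable, analysis-specific transformation applied at read time.
    The rules live in code and can be rewritten or discarded; no cleansed
    copy of the data is written back."""
    out = raw.copy()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    out = out.dropna(subset=["amount"])                       # skip rows this analysis can't use
    out["country"] = out["country"].str.strip().str.upper()   # normalize only what this analysis needs
    return out

# Usage: the raw file stays raw; the transformation is recomputed per analysis.
# revenue_by_country = (for_revenue_analysis(read_events("events.jsonl"))
#                       .groupby("country")["amount"].sum())
```

The transformation is cheap to rewrite when the data or the question changes, which is the trade the reboot makes against maintaining a cleansed master copy.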
I recently finished reading Moneyball, the Michael Lewis bestseller and slightly above-average Hollywood movie. It struck me how badly even great baseball minds could miss the right metrics for winning baseball games. By now you know the story: teams paid too much for high batting averages and paid too little attention to where it counts, the metrics that correlate with scoring runs, like on-base percentage. Not nearly as dramatic, but business is having its own "Moneyball" experience, with far too much focus on traditional metrics like productivity and quality and not enough on customer experience and, most importantly, agility.
Agility is the ability to execute change without sacrificing customer experience, quality, or productivity. It is "the" struggle for mature enterprises and what makes them most vulnerable to digital disruption. Enterprises routinely cite the incredible length of time it takes to get almost any change made. I've worked at large companies, and it's just assumed that things move slowly, bureaucratically, and inefficiently. But why do so many just accept this? For one thing, poor agility undermines the value of other collected BPM metrics. Strong customer experience metrics are useless if you can't respond to them in a timely manner, and enhanced productivity is useless if it only results in producing out-of-date products or services faster.
In a recent media interview I was asked whether the requirements for data visualization had changed. The questions focused on whether users are still satisfied with dashboards, graphs, and charts, or whether they have new needs, demands, and expectations.
Arguably, ancient Egyptian hieroglyphics were the first real "commercial" examples of data visualization (many people before the Egyptians used the same approach, though more often as a general communications tool). Since then, visualization of data has remained both a popular and an important topic. For example, Florence Nightingale changed the course of healthcare with a single compelling polar area chart on the causes of death during the Crimean War.
In looking at this question of how and why data visualization might be changing, I identified at least 5 major triggers. Namely:
Increasing volumes of data. It's no surprise that we now have to process much larger volumes of data, but this also changes the ways we need to represent it. The growing volume of data has stimulated new forms of visualization and new tools. While not all of these tools are new, strictly speaking, they have at least begun to find a much broader audience as we need to communicate much more information much more rapidly. Time walling and infographics are just two approaches that are not especially new but have attracted much greater usage as a direct result of increasing data volumes.
I'm going to tell you a story of opportunity. I will warn you in advance that it paints the art of the possible, but ultimately it's a cautionary tale.
I have a 17-year-old son. He's a high school senior and attends a private high school in our city. In Forrester terminology, you could call him "empowered." So much so that over his first three years he rarely wore the required school uniform. Now, the school uniform is far from draconian. It's a polo, color of your choice, with a school logo. I actually think they look good, but he says they itch. To get around it he simply wore the polo of his choice under a sweater. It would seem all polo collars look the same. This worked well until a new principal came in last year and figured out what was going on. A new dress code was instituted requiring students to wear school-approved outerwear so that a school logo was always visible.
"Innovate or die" is not just a catchy slogan. It’s the way that businesses need to operate in this market-driven world. And, as technology underpins more and more products, services, processes, and go-to-market strategies, the CIO must be involved in driving business-impacting innovations. This involvement ranges from supporting internal R&D to unearthing and vetting new technologies out in the market that can be internalized to disrupt the status quo and propel the organization forward.
Most organizations are cognizant of this reality. However, few have mastered making innovation into a sustainable practice with defined processes that take into account the differences between incremental change and true innovation. What is needed is less hyperbole and more practical information and examples of how the CIO can and should support an innovation process to drive business value.
To deliver, you'll need to understand and internalize the trends, understand the business capabilities required for sustainable innovation, and assess how prepared you actually are to deliver. Based on this insight, you then need to plot out a strategy and carefully plan your people, process, and technology. From there you have to implement: building out your innovation network and developing a governance model to enforce the right behaviors. And to continually improve, you need to focus on metrics, peer comparison, and change management.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake: this is a major restructuring of the OS and a step-function improvement in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft's high-level message centers on the cloud, and on the Windows Server 2012 features that make it a productive platform on which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer's guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features worth noting, so a real exploration of this OS is well beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of the cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
Chances are that you have employees using Apple Macs at your firm today, and they’re doing this without the support and guidance of the infrastructure and operations (I&O) organization. IT consumerization has put an end to the days of one operating system (OS) to support. For I&O pros, this change carries new concerns about security, potential information loss, and unexpected support needs, to name a few. Forrester has found that IT organizations struggle in building a support and management strategy for Macs that works.
Fortunately, many firms have blazed the trail and figured out how to support both employee-owned and company-owned Macs for their employees, and we've assembled our findings in the latest document on managing Macs. Hint: leave the Windows PC management tools and techniques in the toolbox. It's easy to understand why I&O professionals sometimes apply the familiar techniques and tools from the Windows world to managing Macs, but the reality is that they are different animals; what is a best practice for one is irrelevant for the other and can even cripple worker productivity.
I had a great time, as always, at VMworld last week. My seventh time was busier than ever. If I had to summarize my gut feeling about this year, it was VMware’s return to the future of the datacenter. Yes, there was plenty of cloud-ness, but the main thrust of VMware’s message was: there’s a lot left to virtualize, encapsulate, and mobilize in the datacenter, and we’re the best company to help you do it…whether or not you’re heading for the clouds. The cloud isn’t everything, nor should it be. It’s one of many paths to a more efficient, responsive, and available IT infrastructure. Companies aren’t going from datacenters and managed services to the cloud in one monolithic transition. They’re looking at everything from their virtualized workloads to their big databases to their productivity apps and asking two questions: Can I run them cheaper, faster, and better in-house first? And, when will it make more sense to run them in my or someone else’s cloud? Part of that decision is cost — will cloud save money?
A bigger question, though, is: Who decides? Will application teams and app developers go to the cloud themselves, without waiting for IT? In many cases, they already are. Or will today’s virtualization admins lead the way? VMware’s betting on both, and it used VMworld this year to arm its core audience —VMware admins — with a strategy. My colleague Glenn O’Donnell calls VMware’s core audience the Illuminati (heh), and VMworld is certainly designed for them.
Last year Netflix attempted to shift its business strategy to focus mainly on streaming video. Although I wasn’t present in the boardroom discussions, it’s a reasonable bet that Reed Hastings and his team had decided the future was online streaming and that physical discs were a dinosaur. Since the war for content would be fought over streaming, Netflix would focus on adding value to its streaming customers and spin off the disc customers. On the surface this seemed to many a reasonable strategy, especially since Netflix reported that its digital streaming customers and the disc-in-the-mail customers were mostly not one and the same. So Netflix execs crunched the numbers and decided this was the right move for them. Perhaps they had hoped to spin off the disc side of the business to raise some capital. Whatever their thinking, their strategy choices left some gaping unanswered questions for observers like me:
The most notable news to come out of the VMworld conference last week was the coronation of Pat Gelsinger as the new CEO of VMware. His tenure officially started over the weekend, on September 1, to be exact.
For those who don't know Pat's career, he gained fame at Intel as the personification of the x86 processor family. It's unfair to credit a single person as the father of the modern x86 architecture, but if you had to pick just one, it would probably be Pat. He went on to become CTO and eventually ran the Digital Enterprise Group. This group accounted for 55% of Intel's US$37.586B in revenue according to its 2008 annual report, the last full year of Pat's tenure. EMC poached him from Intel in 2009, naming him president of the Information Infrastructure Products group. EMC's performance since then has been very strong, with a 17.5% YoY revenue increase in its latest annual report; Pat's group contributed 53.7% of that revenue. While he's a geek at heart (his early work), he proved without a doubt that he also has the business execution chops (his later work). Both will serve him well at VMware, especially the latter.