This week I have been travelling to see Forrester’s I&O Leadership Board (FLB) members in Paris and working on my I&O FLB workshop session for Orlando and London, happening in October, titled ‘An Outside-In Approach To Your IT Strategy’. During my conversations I have been discussing Forrester’s excellent new book, ‘Outside In: The Power of Putting Customers at the Center of Your Business’. It contains great insight and examples on how successful companies are adapting to the “age of the customer” by ensuring experience-rich relationships.
So what does 'putting the customers at the center of your business' mean to I&O Professionals?
Firstly, we need to ditch the word ‘users’. It’s a dirty word in my vocabulary as it conjures up images of employees being ‘addicted’ to our IT services. Our employees are not going to go ‘cold turkey’ on us if they don’t get their corporate IT fix. They are our internal IT customers who have feelings, needs, and wants, and who are increasingly able to source their own technology services to increase their productivity.
Regardless of what our minds conjure up when we think of airline travel, one thing we can readily observe is that while the weather, the experience of the flight crew, the mechanical condition of the aircraft, and the destination of the flight are all variables, the system of getting an aircraft from one place to another, in one piece, is extraordinarily reliable. Herb Kelleher of Southwest Airlines once joked that the airline business is the only place where the capital assets travel at 500 miles per hour.
Every commercial flight starts with a flight plan, a flight crew, an aircraft, and a destination. The dispatcher creates the plan based on the expected conditions for the flight, the limitations of the pilot and passengers, and the capabilities of the aircraft. Time is built into the plan to climb to cruise altitude and to descend again to reach the destination safely. How much fuel will be required is built into the plan and pumped into the tanks. Every activity is done to achieve a singular purpose: getting the aircraft and its passengers safely to the destination, and everyone involved knows where the destination is. Aviation is a study in viable systems design.
How strange it seems, then, that thousands of IT projects begin every day, but more than one-third of them crash en route. Why? I would argue that it's because there is seldom a clear destination in mind, a rational plan to get there, or a viable system approach in place to execute the plan. Most of the time, the destination and the means to get there are only vague estimates, and the elements of the strategy are rooted in hope.
[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]
At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products advantaged in niche markets, servers among them, and to generally shake up the trench warfare that has kept it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.
SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture that dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market with a tight alignment with Intel, who at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently SeaMicro and Intel announced a new model that featured Xeon CPUs to address the more mainstream segments that were not a part of SeaMicro’s original Atom-based offering.
This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12% of the energy powering their servers to deliver useful computation.” As with a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically they are correct in claiming that of the electricity going to a server, only a very small fraction is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency for the servers is not representative of the real operational efficiency of the complete data center.
On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:
A new data center with a PUE of 1.2 (very efficient), with 83% of the power going to IT workloads.
Then assume that 60% of the remaining power goes to servers (storage and network get the rest), for a net of almost 50% of the power going into servers. If the servers are running at an average utilization of 10%, then only 10% of 50%, or 5% of the power, is actually going to real IT processing. Of course, the real "IT number" is server + storage + network, so depending on how you account for them, the IT usage could be as high as 38% (.83 x .4 + .05).
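The back-of-the-envelope arithmetic above can be sketched in a few lines of code. Note that the percentages here are the illustrative figures from this example (a PUE of 1.2, 60% of IT power to servers, 10% utilization), not measured data:

```python
# Illustrative data center efficiency math, using the example figures above.

pue = 1.2                   # total facility power / IT power
it_fraction = 1 / pue       # ~83% of facility power reaches IT equipment
server_share = 0.60         # share of IT power going to servers
utilization = 0.10          # average server utilization

server_power = it_fraction * server_share      # ~50% of facility power
useful_compute = server_power * utilization    # ~5% "real IT processing"

# Generous accounting: count storage and network (the other 40% of IT
# power) as fully useful, plus the utilized server fraction.
generous_it_usage = it_fraction * (1 - server_share) + useful_compute  # ~38%

print(f"Power reaching servers: {server_power:.0%}")
print(f"Useful compute share:   {useful_compute:.1%}")
print(f"Generous IT usage:      {generous_it_usage:.0%}")
```

The two bounds (5% and 38%) bracket the honest answer: far better than the Times' 6% framing suggests, but far worse than "well-run data center" marketing implies.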
Every culture has its coming of age rituals — Confirmation, Bar Mitzvah, being hunted by tribal elders, surviving in the wilderness, driving at high speed while texting — all of which mark the progress from childhood to adulthood. In the high-tech world, one of the rituals marking the maturation of a company is the user group. When a company has a strategy it wants to communicate, a critical mass of customers, and prospects bright enough that it wants to highlight them rather than obscure them, it is time for a user group meeting.
This year, having passed a year since the acquisition of Novell by Attachmate and its subsequent instantiation as a standalone division, and marking its 20th anniversary, SUSE had its first user group meeting. All in all, the portents were good, and SUSE got its core messages across to an audience of about 500 of its users as well as a cadre of the more sophisticated (IMHO) industry analysts.
Among My Key Takeaways:
SUSE is a stable company with rational management — With profitable revenues of over $200M and a publicly stated plan to hit $234M for the next fiscal year, SUSE is a reasonably sized company (technically a division of $1.3B Attachmate, but it looks and acts like an independent company), with growth rates that look to be a couple of points higher than its segment.
SUSE’s management has done an excellent job of focusing the company — SUSE, acknowledging its size disadvantage relative to competitor Red Hat, has chosen to focus heavily on enterprise Linux, publicly disavowing desktop and mobile device directions. SUSE’s claim is that its share of the core enterprise segment is larger than its overall market share relative to Red Hat would suggest. This is a hard number to even begin to tease out, but it feels like a reasonable claim.
In today’s world of 24x7x365 global operations and competition, downtime results not only in immediate lost revenue and productivity, but also in lasting damage to corporate reputation that erodes customer confidence in your brand. No organization is immune: with ever-increasing risks and more dependence on technology, major outages are becoming more common and more costly. We've reached a juncture where resiliency is more critical than ever because:
There is less tolerance for downtime — of any kind. BC/DR historically focused on events such as natural disasters, extreme weather, major IT failures, critical infrastructure failures, pandemics/epidemics, and other events that have a low probability of occurring but have a very high impact on the business. However, in today’s world of global, 24x7x365 operations and intense competition, downtime, regardless of whether it’s a natural disaster, a simple hard drive failure, or a security breach, is unacceptable. The business doesn’t care what caused the downtime; instead, it wants service restored as quickly as possible with as little data loss as possible, regardless of which groups are responsible for the execution.
I hate looking at my AT&T Wireless bill each month, because it tallies up all my unused rollover minutes. Sure, it might be nice to know I have them just in case I decide to have a marathon long-distance conversation, but realistically, it's a reminder that I am overspending on talk time. Even worse is when it reminds me of the expiration date for those minutes. They are basically throwing my inefficiencies in my face. Thanks, AT&T. :(
Wouldn't it be nice if your employees actually asked you this question before they went off and signed up for a cloud service or deployed a new app to a cloud platform? If they did ask, would you know what to tell them? Forrester's research shows that most enterprises wouldn't have a clear response or know where to point the employee for better guidance. Oops.
The answer to this question is actually pretty simple in concept but more difficult than you think in execution — you need a cloud use policy. What should be in this policy? What form should it take? What tone should it carry? Where do I start? All these questions are answered in my latest research report on writing an effective cloud policy. And some of its guidance may feel very counterintuitive. First of all, this will probably be a much softer and more malleable policy than others you have in your company. The cloud is still evolving, and thus your policy will need to do the same. What you might not allow today may be perfectly ok tomorrow. And unlike other IT policies, it's highly likely that IT isn't the most knowledgeable team about cloud within your company. Be prepared to work with the true leaders in crafting this policy — fail this and you shouldn't even try.
It's too late for your policy to say, "The use of cloud services is not allowed," so you need to start from an assumption that it is already happening — and that more of it is happening behind your back than in front of your nose. In fact, any policy that takes a draconian, negative tone probably won't go over very well (it might just be blatantly ignored).
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a really major restructuring of the OS, and a major step-function in capabilities aligned with several major strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer's guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful to an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
Chances are that you have employees using Apple Macs at your firm today, and they’re doing this without the support and guidance of the infrastructure and operations (I&O) organization. IT consumerization has put an end to the days of one operating system (OS) to support. For I&O pros, this change carries new concerns about security, potential information loss, and unexpected support needs, to name a few. Forrester has found that IT organizations struggle in building a support and management strategy for Macs that works.
Fortunately, there are many firms that have blazed the trail and figured out how to support both employee-owned and company-owned Macs for their employees, and we've assembled our findings in the latest document on managing Macs. Hint: Leave the Windows PC management tools and techniques in the toolbox. It’s easy to understand why I&O professionals sometimes apply the same techniques and tools they are familiar with in the Windows world to managing Macs, but the reality is that they are different animals, and what is a best practice for one is irrelevant for the other — and can even cripple worker productivity.