Carrying on from my thoughts in Part 1: it’s time to start deploying purely standards-based infrastructure outside the data center; data center protocols are only just being created for a converged and virtualized world. With the number of tested and deployed standards-based protocols available, there’s no excuse for networks to be locked in to a single vendor’s proprietary protocols when standards-based solutions provide compelling volume economics, the flexibility to adopt a much wider array of solutions, and relief from hiring specialized talent to run a science project. Although many organizations understand that standards-based networking gives them the flexibility to choose from the best available solutions at a lower cost of ownership, they still feel trapped. Below are the three top shackles and the keys to unlock them:
Look for the new "Community" tab on the Forrester site. This is your access to a community of like-minded peers. You can use the community to start and participate in discussions, share ideas and experiences, and help guide Forrester Research for your role. The success or failure of this community effort depends largely on you. The analysts will participate, but in this forum they carry less weight than you, the Forrester I&O user. So help us, help your peers, and help yourself by making this an active and thriving online community. Some thoughts to get you going: Have you had any particularly good or bad experiences with products, solutions, or technologies? What key enablers are you looking at as you transform your data centers and operations? What does "cloud" mean to you? Any thoughts on vendor management and negotiations? That’s just a stream-of-consciousness selection. Make the community yours by adding your own topics.
A funny thing happened while we in IT were focused on ITIL, data center consolidation and standardization. The business went shopping for better technology solutions. We’ve been their go-to department for technology since the mainframe days and have been doing what they asked. When they wanted higher SLAs we invested in high availability solutions. When they asked for greater flexibility we empowered them with client-server, application servers and now virtual machines. All the while they have relentlessly badgered us for lower costs and greater efficiencies. And we’ve given it to them. And until recently, they seemed satisfied. Or so we thought.
Sure, we’ve tolerated the occasional SaaS application here and there. We’ve let them bring in Macs (just not too many of them), and we’ve even supported their pesky smartphones. But each time, they came running back to us when the technical support burden grew too great.
I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, and revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion) (insert boos and hisses). Vendor A tries to evoke sympathy for its plight: it can’t offer the latest features because it doesn’t have the same powers as the villain and has chosen the morally correct path, one filled with prolonged and undeserved suffering in support of standards-based functions. What poppycock! There is no good and evil in networking. If the vendors’ positions were reversed, vendor A would be doing exactly what its competitor does today. Every vendor has some type of special sauce to differentiate itself. Anyway, it’s business, plain and simple; networking fundamentally needs both proprietary and standards-based features. However, there’s a time and place for each.
With that in mind, I want to let you know that I’m a big proponent of standards-based networking. Open standards expand your choices, helping you reduce risk, implement durable solutions, gain flexibility, and benefit from quality. Ninety-plus percent of networking should leverage standard protocols, but to get to that point, features need to go through three stages:
I attended the Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and, of course, Intel itself. Among the many wonders on display, including high-end servers, desktops and laptops, and presentations on everything cloud, my attention was caught by a pair of small wonders: very compact, low-power servers paradoxically targeted at some of the largest hyper-scale web-facing workloads. Despite being nominally aimed at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies for scaling infrastructure to handle high-throughput web workloads. At one end of the spectrum is adherence to an emerging standardized design, using Intel’s reference architectures as a starting point; at the other, a complete refactoring of the constituent parts of a server to maximize performance per watt and physical density.
The other day I was reminiscing with a friend who works at HP about all the good times I had there with my ProCurve family. When I left for a once-in-a-lifetime opportunity, I had high hopes for HP’s networking division. Like the many global customers who call me with inquiries about Cisco alternatives, I’m concerned about the division and its long-term viability. I’m not worried about whether HP will continue to exist without Mark Hurd; companies are more than a single leader, and there is plenty of research, plenty of books, and plenty of online debate about the effect of a single person: Jack Welch, Steve Jobs, John Chambers, etc. The issue at hand is the viability of product lines within enormous companies, like networking within HP. One of my mentors always said, “If you look at networking over the last twenty years, no major IT company or voice vendor has been able to pull off being a serious networking vendor if networking wasn’t its first priority.” Fundamentally, networking is one of the few technologies where a vendor has to be all in. The networking graveyard is full of headstones: Nortel fell off the face of the earth, IBM sold off its assets, and Dell hobbles along.
Ah, you might say, what about HP? That brings me to my three observations that every IT manager should consider when including HP in their network architecture:
It was reported that sometime over the past weekend the number of tweets and blogs about VMworld exceeded Plankk’s limit (postulated by blogger Marvin Plankk, now confined to an obscure institution in an unidentified state with more moose than people), and quietly coalesced into an undifferentiated blob of digital entropy as a result of too many semantically identical postings online at the same time. So this leaves the field clear for me to write the first VMworld post in the new cycle.
This year was my first time at VMworld, and it left a profound impression. While the energy and activity among the 17,000 attendees, the exhibitors, and VMware itself would have been impressive in any context, the underlying evidence of a fundamental transformation of the IT landscape was even more so. The theme this year was “clouds,” but to some extent I think the themes of major shows like this are largely irrelevant. The theme serves as an organizing principle for the communications and promotion of the show, but the technology content, particularly as embodied by its exhibitors and attendees, is based on what is actually being done in the real world. If the technology were not already there, the show would have to find another label. Keeping the cart firmly behind the horse, this activity is being driven by real IT problems, real investments in solutions, and real technology being brought to market. So to me the revelation of the show was not that VMware called it “cloud,” but that the world is really thinking “cloud.”
But Empowered isn’t only about employees. It also lays out a strategy for engaging your most influential customers. Consumer product strategy professionals should wield Empowered concepts for exactly that reason: to energize your best customers. In the mobile space, product strategists are looking for ideas to help them develop innovative, leading-edge applications for smartphone users on platforms like the iPhone or Android. So we’ve just released a report to help product strategists do just that, called “Designing A Mobile Empowered Product Strategy.” It applies ideas from Empowered to product strategy and includes numerous case studies of mobile applications that exemplify Empowered approaches.
Despite its networking roots, Interop has evolved to address an expansive range of IT roles, responsibilities, and topics. While networking managers will still feel at home in the networking track, the event covers themes highly relevant to the broader interests of IT Infrastructure & Operations (I&O) professionals, including cloud computing, virtualization, storage, wireless and mobility, and IT management.
IT professionals responsible for the “I” (or Infrastructure) in I&O will find the event particularly relevant. So much so that Forrester has partnered with Interop to develop track agendas, identify speakers, moderate panels, and even present. For the last two years, I have chaired the Data Center and Green IT tracks at Interop’s Las Vegas and New York events. And I am doing the same this year at Interop New York 2010 from October 18th to 22nd.