I was part of a Forrester team that recently completed a multi-country rollout tour with Emerson Network Power as it formally released Trellis, a comprehensive DCIM product many years in the making. One key takeaway was an affirmation of our fundamental assertions about DCIM, along with hints about its appeal to potential customers that in some ways expand on the value proposition we originally envisioned. Our audiences totaled approximately 500 selected data center users, most of them current Emerson customers of some sort, plus various partners.
The audiences uniformly supported the fundamental thesis around DCIM – there exists a strong underlying demand for integrated DCIM products, with a strong proximal emphasis on optimizing power and cooling to save opex and avoid the major disruption and capex of new data center capacity. Additionally, the composition of the audiences supported our contention that these tools would have multiple stakeholders in the enterprise. As expected, the groups were heavy with core Infrastructure & Operations types – the people who have to plan, provision and operate the data center infrastructure to deliver the services needed for their company’s operations. What was heartening was the strong minority presence of facilities people, ranging from 10% to 30% of the attendees, along with a sprinkling of corporate finance and real-estate executives. Informal conversations with a number of these people gave us consistent input that they understood the need, and in some cases had been formally tasked by their executives, to work more closely with the I&O group. All expressed the desire for an integrated tool to help with this.
I’ve participated in cloud events in four different countries over the past two weeks. Attendees were primarily senior and mid-level IT decision-makers seeking guidance and best practices for implementing private clouds within their organizations. Regardless of the country of origin, industry focus or level of cloud-related experience, one common theme stood out above all others during both formal and informal discussions – the importance of effective communication.
The key takeaway – don’t get dogmatic about terminology. In fact, when it comes to cloud-related initiatives, choose your words carefully and be prepared for the reaction you’re likely to get.
‘Cloud computing’ as a term remains over-hyped, over-used, and still often poorly understood – because of this, typical reactions to the term are likely to range from cynicism and doubt to defensiveness and derision and all the way to outright hostility. Ironically, the fact that it’s not a technical term actually creates more confusion in many instances since its meaning is so general as to apply to practically anything (or nothing, depending on your point of view or perhaps your level of cynicism).
At all four events over the past two weeks – and in fact in nearly all discussions of IT priorities I’ve had over the past six months – CIOs and other senior IT decision-makers have consistently made clear that ‘cloud computing’ as a general objective or direction isn’t a top priority per se. However, they are unanimous in their belief that data center transformation is essential to supporting business requirements and expectations.
Data centers, like any other piece of real estate, follow the age-old adage of “location, location, location.” If you want to build one that is truly efficient in its energy consumption, as well as sound on the basics of reliability, you have to be really picky about ambient temperatures, power availability and, if your business is hosting for others rather than just serving your own needs, room for expansion. If you want to achieve a seeming impossibility – a zero carbon footprint to satisfy increasingly draconian regulatory pressures – you need to be pickier still. In the end, what you need is:
Low ambient temperature to reduce your power requirements for cooling.
Someplace where you can get cheap “green” energy, and lots of it.
A location with adequate network connectivity, both in terms of latency as well as bandwidth, for global business.
A cooperative regulatory environment in a politically stable venue.
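One way to reason about trading these criteria off against one another is a simple weighted scoring model. The sketch below is purely illustrative: the criteria come from the list above, but the weights, the 0–10 ratings, and the site names are hypothetical placeholders, not a published methodology.

```python
# Hypothetical weighted scoring for data center site selection.
# Weights and ratings are illustrative assumptions only.

CRITERIA = {
    "ambient_temperature": 0.30,   # cooler climates cut cooling power
    "green_energy_cost": 0.30,     # cheap "green" energy, and lots of it
    "network_connectivity": 0.25,  # latency and bandwidth for global business
    "regulatory_climate": 0.15,    # cooperative rules, political stability
}

def site_score(ratings):
    """Weighted sum of per-criterion ratings (each rated 0-10)."""
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

candidates = {
    "Site A": {"ambient_temperature": 9, "green_energy_cost": 8,
               "network_connectivity": 6, "regulatory_climate": 7},
    "Site B": {"ambient_temperature": 5, "green_energy_cost": 6,
               "network_connectivity": 9, "regulatory_climate": 8},
}

# Rank candidate sites from best to worst overall score.
for name, ratings in sorted(candidates.items(),
                            key=lambda kv: site_score(kv[1]), reverse=True):
    print(f"{name}: {site_score(ratings):.2f}")
```

In practice the weights would be tuned to the business model – a hosting provider might weight expansion room and connectivity far more heavily than a single-tenant enterprise would.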
The world of hyperscale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represents critical competitive IP. In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend. It has simultaneously disclosed a wealth of information about the design of its new data center in rural Oregon and contributed much of the IP involving racks, servers, and power architecture to an open forum, in the hope of fostering an ecosystem of suppliers to provide future equipment to Facebook and other growing web companies.
The Data Center
By approaching the design of the data center as an integrated combination of servers for known workloads and the facilities themselves, Facebook has broken new ground in data center architecture.
At a high level, a traditional enterprise DC has a utility transformer that feeds power to a centralized UPS, and then power is subsequently distributed through multiple levels of PDUs to the equipment racks. This is a reliable and flexible architecture, and one that has proven its worth in generations of commercial data centers. Unfortunately, in exchange for this flexibility and protection, it exacts a penalty of 6% to 7% of the power even before it reaches the IT equipment.
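To see where a figure in that range comes from, you can multiply per-stage efficiencies through the chain. The efficiency numbers below are illustrative assumptions for a transformer, double-conversion UPS, and PDU transformers, not measurements from any particular vendor or facility:

```python
# Back-of-the-envelope sketch of power lost in a traditional distribution
# chain (utility transformer -> central UPS -> PDU levels).
# Per-stage efficiencies are hypothetical, order-of-magnitude assumptions.

stage_efficiency = {
    "utility_transformer": 0.99,
    "central_ups": 0.96,    # double-conversion UPS
    "pdu_levels": 0.985,    # transformer-based PDUs
}

# Fraction of utility power that survives each conversion stage.
delivered = 1.0
for stage, eff in stage_efficiency.items():
    delivered *= eff

loss_pct = (1.0 - delivered) * 100
print(f"Power reaching IT equipment: {delivered:.1%}")
print(f"Distribution loss: {loss_pct:.1f}%")
```

With these assumed figures, the compounded loss lands at roughly 6.4% – squarely in the 6% to 7% range described above, before any cooling overhead is counted.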
I am off to the annual itSMF USA conference in Dallas, TX, better known as Fusion 09. This is expected to be the biggest and best IT Service Management conference yet and the pinnacle of the itSMF USA organization's progress to date. I hope these predictions come true because I am an avid supporter of itSMF and its mission to promote service management excellence.
As one element of a new partnership between Forrester Research and itSMF USA, we will be holding one-on-one meetings between conference attendees and Forrester analysts. Both my delightful and brilliant colleague Evelyn Hubbert and I will be there, and we look forward to one-on-one meetings with as many people as we can fit in!
With all the wonderful sessions that will be happening at the conference, it is tough to pick favorites. Still, here are the sessions I hope to catch while I'm there.
Storage-as-a-Service is relatively new. Today the main value proposition is as a cloud target for on-premise deployments of backup and archiving software. If you need to retain data for extended periods of time (a year or more in most cases), tape is still the more cost-effective option given its low capital acquisition cost and removability. If you have long-term data retention needs and you want to eliminate tape, that's where a cloud storage target comes in. Electronically vault that data to a storage-as-a-service provider who can store it at cents per GB. You just can't beat the economies of scale these providers are able to achieve.
If you're a small business without the staff to implement and manage a backup solution, or an enterprise looking for a PC backup or remote-office backup solution, I think it's worthwhile to compare the three-year total cost of ownership of an on-premise solution versus backup-as-a-service.
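A comparison like that can be sketched in a few lines. Every number below – capex, maintenance, admin hours, the per-GB-per-month subscription rate, the 10 TB of protected data – is a hypothetical placeholder; the point is the shape of the model, and you would substitute real quotes before drawing any conclusion.

```python
# Illustrative 3-year TCO comparison: on-premise backup vs. backup-as-a-service.
# All prices and sizing are hypothetical assumptions, not vendor figures.

TB_PROTECTED = 10
YEARS = 3
ADMIN_HOURLY_RATE = 60

# On-premise: capex up front, plus annual maintenance and admin time.
onprem_capex = 40_000            # backup server, disk target, licenses
onprem_annual = 8_000            # maintenance, media, power/cooling
onprem_admin_hours_per_year = 120

onprem_tco = (onprem_capex
              + YEARS * onprem_annual
              + YEARS * onprem_admin_hours_per_year * ADMIN_HOURLY_RATE)

# Backup-as-a-service: per-GB-per-month subscription, far less admin time.
baas_price_per_gb_month = 0.15
baas_admin_hours_per_year = 20

baas_tco = (YEARS * 12 * TB_PROTECTED * 1_000 * baas_price_per_gb_month
            + YEARS * baas_admin_hours_per_year * ADMIN_HOURLY_RATE)

print(f"On-premise 3-year TCO: ${onprem_tco:,.0f}")
print(f"BaaS 3-year TCO:       ${baas_tco:,.0f}")
```

The interesting part is which inputs dominate: for a small shop, the admin-time and capex lines tend to swing the answer, while at larger scale the per-GB subscription rate does.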