Emerson Network Power today announced a significant partnership with IBM, both to integrate Emerson's new Trellis DCIM suite into IBM's ITSM products and to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:
Connection to enterprise IT — Emerson has sold a lot of chillers, UPS and PDU equipment and has tremendous cachet with facilities types, but it doesn't have many people who know how to talk IT. IBM has these people in spades.
IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product is more of a dressed-up asset management product, and this partnership is an acknowledgement that building a full-fledged DCIM product from scratch would have been both expensive and time-consuming.
IBM adds sales bandwidth — My belief is that the development of the DCIM market has been constrained by delivery bandwidth. Market leaders Nlyte, Emerson and Schneider do not have enough people to address the emerging total demand, and the host of smaller players are even further behind. IBM has the potential to massively multiply Emerson's ability to deliver to the market.
I was part of a Forrester team that recently completed a multi-country rollout tour with Emerson Network Power as it formally released its Trellis DCIM product, a comprehensive DCIM environment many years in the making. The key takeaways were both an affirmation of our fundamental assertions about DCIM and hints about its appeal to potential customers that in some ways expand on the original value proposition we envisioned. Our audiences totaled approximately 500 selected data center users, most of them current Emerson customers of some sort, plus various partners.
The audiences uniformly supported the fundamental thesis around DCIM – there exists a strong underlying demand for integrated DCIM products, with an immediate emphasis on optimizing power and cooling to save opex and to avoid the major disruption and capex of new data center capacity. Additionally, the composition of the audiences supported our contention that these tools would have multiple stakeholders in the enterprise. As expected, the groups were heavy with core Infrastructure & Operations types – the people who have to plan, provision and operate the data center infrastructure to deliver the services needed for their company's operations. What was heartening was the strong minority presence of facilities people, ranging from 10% to 30% of the attendees, along with a sprinkling of corporate finance and real-estate executives. Informal conversations with a number of these people gave us consistent input that they understood the need to work more closely with the I&O group, and in some cases had been formally tasked by their executives to do so. All expressed the desire for an integrated tool to help with this.
Data centers, like any other aspect of real estate, follow the age-old adage of "location, location, location." If you want to build one that is really efficient in terms of energy consumption, as well as possessing all the basics of reliability, you have to be really picky about ambient temperatures, power availability and, if your business is hosting for others rather than just meeting your own needs, room for expansion. If you want to achieve a seeming impossibility – a zero carbon footprint to satisfy increasingly draconian regulatory pressures – you need to be even pickier. In the end, what you need is:
Low ambient temperature to reduce your power requirements for cooling.
Someplace where you can get cheap “green” energy, and lots of it.
A location with adequate network connectivity, in terms of both latency and bandwidth, for global business.
A cooperative regulatory environment in a politically stable venue.