DCIM And The New Reality Of Infrastructure & Operations

Richard Fichera

I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old: the evolution of semiconductor technology, ever more elegant attempts to design systems and components that can be throttled incrementally, and ever more sophisticated construction of the data centers themselves, with greater modularity and physical efficiency of power and cooling.

The new is the incredible momentum I see behind Data Center Infrastructure Management (DCIM) software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that tracks tens of thousands of components; collects, filters, and analyzes data from thousands of sensors in a data center (a single CRAC may have more than 20 sensors, a server over a dozen); and understands the relationships between components well enough to raise alarms proactively, model potential workload placement, and make recommendations about prospective changes.
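To make that collect/filter/analyze/alarm loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the sensor names, component IDs, and thresholds are invented for illustration, and no real DCIM product exposes exactly this API.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical reading from one of the thousands of sensors a DCIM
# platform polls (CRAC temperature probes, server inlet sensors, etc.).
@dataclass
class Reading:
    sensor_id: str   # e.g., "crac-04/return-air"
    component: str   # the component the sensor belongs to
    value: float     # degrees C in this toy example

# Illustrative thresholds; a real DCIM tool would derive these from
# manufacturer specs and its model of inter-component relationships.
THRESHOLDS_C = {"crac": 29.0, "server": 35.0}

def filter_readings(readings, lo=-10.0, hi=80.0):
    """Drop obviously bad data (failed or miswired sensors)."""
    return [r for r in readings if lo <= r.value <= hi]

def analyze(readings):
    """Average readings per component, then raise alarms proactively."""
    by_component = {}
    for r in filter_readings(readings):
        by_component.setdefault(r.component, []).append(r.value)
    alarms = []
    for component, values in by_component.items():
        kind = component.split("-")[0]      # "crac-04" -> "crac"
        limit = THRESHOLDS_C.get(kind)
        if limit is not None and mean(values) > limit:
            alarms.append(f"ALARM: {component} avg {mean(values):.1f}C > {limit}C")
    return alarms

readings = [
    Reading("crac-04/return-air", "crac-04", 30.5),
    Reading("crac-04/supply-air", "crac-04", 29.8),
    Reading("web-17/inlet", "server-web-17", 24.1),
]
print("\n".join(analyze(readings)))  # flags crac-04 as running hot
```

A production DCIM suite layers workload-placement modeling and what-if recommendations on top of exactly this kind of pipeline; the sketch shows only the bottom of the stack.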

Of all the technologies reviewed in the document, DCIM offers one of the highest potentials for improving overall efficiency without sacrificing the reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration, or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect major competitive action among the current suppliers, along with strong potential for further consolidation.

Is IT Infrastructure & Operations Still Relevant In “The Age Of The Customer”?

Doug Washburn

Yes, but you must adapt by demonstrating your ability to drive business growth and differentiation, not just cost savings and uptime. Here’s a personal example of a much broader trend that shows why this matters to your business and to your role as an I&O professional:

It’s a cool autumn day, which reminds me I need a new jacket. I walk into Patagonia. I evaluate several models and then buy one – but not from Patagonia. It turns out a competitor located two miles away is offering the jacket at a discount. How did I know this? I scanned the product’s barcode using the RedLaser app on my iPhone, which displayed several local retailers with lower prices. If I had been willing to wait three days for shipping, I could have purchased that same jacket, while standing in Patagonia, from an online retailer with an even better deal. [Truth be told: I actually bought the jacket from Patagonia’s store after validating that no better deals existed… but The Home Depot wasn’t so lucky this summer when I bought the same air conditioner, for less, from Amazon while standing in aisle 4.]

This is a prime example of what Forrester calls “The Age Of The Customer,” in which empowered buyers have information at their fingertips to check a price, read a product review, or ask a friend for advice right from the screen of their smartphone. This type of technology-led disruption is eroding traditional competitive barriers across all industries; manufacturing strength, distribution power, and information mastery can’t save you.

Read more

Pulling Off A Razor-Razorblade Product Strategy, Like Amazon's Product Strategists

JP Gownder

Amazon’s product strategists shocked some constituencies with the $199 price point for the Amazon Kindle Fire tablet announced today. But there’s a fundamental product strategy lesson in this pricing, and it’s an old one: the so-called razor-razorblade pricing model.

We all know this model well as consumers: your initial purchase of a razor is relatively cheap, but the cost of replacement razorblades really adds up over time. If you don’t buy razors, perhaps you’re familiar with this scenario from your inkjet printer. Remember how cheap that scanner/printer was? Have you seen the price of refill inkjet cartridges?

The razor-razorblade model works when “dependent goods” – the refills, the stuff you need to keep buying to use the product – are closely related to the anchor product. In the case of the Kindle Fire, the dependent goods are content and services: MP3s, streaming videos, and of course books, magazines, and newspapers, plus cloud services that let you store and synchronize your content across devices. Amazon’s product strategists can afford to charge a low entry price to drive adoption of the device, and then (they hope) deliver an experience attractive enough that Kindle Fire owners keep paying for it as a service.
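The model’s logic reduces to simple break-even arithmetic. Here is a back-of-the-envelope sketch; every number in it is hypothetical, since Amazon has disclosed neither its hardware cost nor its content margins.

```python
# Hypothetical razor-razorblade economics for a $199 device.
# None of these inputs are Amazon's real figures.
device_price = 199.00
device_cost = 210.00           # assume the hardware is sold at a small loss
hardware_margin = device_price - device_cost    # -$11.00 per unit

monthly_content_spend = 10.00  # books, MP3s, streaming video, apps
content_margin_rate = 0.30     # assumed margin on content sales
monthly_content_margin = monthly_content_spend * content_margin_rate

# Months of ownership until content margin recoups the hardware subsidy.
months_to_break_even = -hardware_margin / monthly_content_margin
print(f"Hardware margin per unit:  ${hardware_margin:.2f}")
print(f"Content margin per month:  ${monthly_content_margin:.2f}")
print(f"Break-even after {months_to_break_even:.1f} months of ownership")
# -> roughly 3.7 months; everything after that is profit on the "blades"
```

The point is not the specific numbers but the shape of the curve: the anchor product can be sold at or below cost as long as the dependent goods carry margin and owners keep buying them.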

Hence Amazon CEO Jeff Bezos’ portrayal of the Kindle Fire product strategy: “What we are doing is offering premium products at non-premium prices,” Bezos says. Other tablet contenders “have not been competitive on price” and “have just sold a piece of hardware. We don’t think of the Kindle Fire as a tablet. We think of it as a service.”

Read more

Now This Is How To Do The App Internet Right — Autodesk Cloud Shows The Way

James Staten

Much of the discussion around integrating applications with the Internet has centered on mobile applications connected to web back ends that deliver better customer experiences than mobile apps or websites could by themselves. But the real power of this concept comes when a full ecosystem can be delivered, one that leverages the true power and appropriateness of mobile, desktop, and cloud-based compute. And if you want to see this in action, just look to Autodesk. The company, which we highlighted in this blog last year for its early experimentation with cloud-based rendering, has moved that work substantially forward; it aims to change the way architects, engineers, and designers get their jobs done and to dramatically improve how they interact with clients.

Read more

Oracle Delivers On SPARC Promises With New T4 Processors And Systems

Richard Fichera

Background – Promises And Potential

Last year I wrote about Oracle’s new plans for SPARC, anchored by a new line of SPARC CPUs engineered in conjunction with Fujitsu (Does SPARC Have A Future?), and commented that the first deliveries of this new technology would probably arrive in early 2012, and that until we saw tangible evidence of Oracle executing on this road map, we could not predict the future viability of SPARC with any confidence.

The T4 CPU

Fast-forward a year, and Oracle has delivered the first of the new CPUs ahead of schedule, with performance gains impressive enough to make it look like SPARC will remain a viable platform for years. Specifically, Oracle has introduced the T4 CPU and systems based on it. The T4, an evolution of Oracle’s highly threaded T-Series architecture, is implemented with an entirely new core that will form the basis, with variations in the number of threads versus cores and in cache design, of the future M- and T-Series systems. The M-Series will have fewer threads and more performance per thread, while the T-Series CPUs will, like their predecessors, emphasize throughput for highly threaded workloads. The new T4 has 8 cores, and each core has 8 threads, for 64 hardware threads per socket. While the T4 emphasizes highly threaded workload performance, it is important to note that Oracle has radically improved single-thread performance as well, claiming a 5X performance-per-thread improvement over the previous generation, which greatly improves the T4’s utility for less thread-intensive workloads.
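The arithmetic behind those claims is worth making explicit. The toy model below uses only the figures cited above plus an illustrative prior-generation baseline (the T3’s 16 cores x 8 threads, normalized to 1.0 per thread); it is a sketch of the throughput-versus-latency trade-off, not a benchmark.

```python
# A crude saturation model of why per-thread speed matters.
HW_THREADS = 8 * 8       # 8 cores x 8 threads = 64 threads per T4 socket
PER_THREAD_GAIN = 5.0    # Oracle's claimed per-thread speedup

def delivered_perf(runnable_threads, hw_threads, per_thread_perf):
    """Performance scales with runnable threads until hardware runs out."""
    return min(runnable_threads, hw_threads) * per_thread_perf

for n in (1, 8, 64, 256):
    t4 = delivered_perf(n, HW_THREADS, PER_THREAD_GAIN)
    baseline = delivered_perf(n, 128, 1.0)   # illustrative T3-like part
    print(f"{n:>3} runnable threads: T4 ~{t4:6.1f} vs. baseline ~{baseline:6.1f}")
```

Even with half the hardware threads of its predecessor, the 5X per-thread gain lets the T4 win at every point on the curve, which is exactly why the single-thread improvement broadens the workloads SPARC can credibly serve.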

The SPARC SuperCluster

Read more

New Study Yields Eye-Opening IT Service Management Benefits

Glenn O'Donnell

In April and May of this year, Forrester and the IT Service Management Forum’s US chapter (itSMF-USA) conducted a joint study to assess the state of ITSM. We collected data from 491 qualified subjects who are heavily involved in ITSM efforts (69% have two or more years of ITSM experience, and 95% hold some level of ITIL certification, 50% of them at an advanced level). Since the study was conducted in conjunction with the US chapter, the responses were heavily US-centric.

The results offer empirical evidence of something ITSM professionals already know: ITSM offers significant benefits to the organization and to the professionals themselves. The full report is now in the final editing stages and will be available soon to all Forrester clients, all itSMF-USA members, and all participants who do not already fall into one of those groups. Forrester clients and itSMF USA members will receive email notifications when it is ready. Others will be contacted directly by itSMF.

This morning (Monday, September 26, 2011), I presented the results at the itSMF-USA’s national conference known as Fusion 11. Here are a few key insights from the study:

  • 51% of ITSM efforts are driven primarily by IT or business executives
  • ITIL has had an overwhelmingly positive impact on:
    • Organizational productivity: 85% positive and 2% negative
    • Service quality: 83% positive and 1% negative
    • IT’s reputation with the business: 65% positive and 3% negative
    • Operational costs: 41% positive and 4% negative
Read more

Security & Risk And Infrastructure & Operations Pros: Drive Customer Growth And Business Differentiation

Laura Koetzle

Security & Risk (S&R) chiefs and Infrastructure & Operations (I&O) leaders have a lot in common, and in great companies, we work in concert to run an efficient, reliable technology infrastructure that keeps critical business assets safe. Much has changed in the world of technology since I pulled my first all-nighter in a data center (falling asleep next to the EMC Symmetrix array was not one of my better ideas – those corners were sharp!), but that partnership is still the same – it takes security engineers and network/server engineers working together to solve really thorny problems.

We have our frictions, of course – I&O pros prioritize operational stability and continuity of service, while S&R pros must occasionally interrupt that continuity to contain security breaches. But when a serious incident (whether security breach or system failure) threatens to sideline our business systems, it falls to us to find and fix the problems – together. We may be organizationally separate now, with I&O reporting into the CIO and the CISO reporting into a COO or Head of Operational Risk, but we share a set of fundamental challenges.  We must excel in our own domains (not exactly a cakewalk) but also anticipate and deliver on what our businesses need (much harder).

And what our businesses seek today is growth – in Forrester’s most recent survey of business decision-makers, the top two priorities were growing overall company revenue and acquiring and retaining customers. S&R pros have already worked hard to escape their “Department of No” reputations, and I&O pros have labored tirelessly to get out of the data center and into the business.

But that’s not enough. 

Read more

The Revenge Of The Politburo! One Company's Quest For Soviet-esque Virtual Desktop Infrastructure

David Johnson

The Politburo is making a comeback
Winston Churchill described Soviet-era politics as a riddle, wrapped in a mystery, inside an enigma. The line came to mind recently during a conversation with an I&O professional who works for a US-based company, and who needed help. It seems his executives had decided that, after two data breaches over the past year from stolen hard drives, the new Central Committee policy would be for everyone to use a locked-down virtual desktop, no matter their role or workstyle. It wasn’t hard to conjure up a picture of the profound lack of understanding that led to such a misguided policy; images of nondescript buildings, row after row of undifferentiated cubicles, and Gulag-style productivity quotas came quickly to mind. Had he not been on the other end of a telephone line, he could’ve knocked me over with a feather.

Big vendors are using top party relationships to push huge pork-barrel deals under the banner of security and mobility

Read more

Brocade Offers I&O An Opportunity To Control Costs With Its Subscription Program

Andre Kindness

Brocade isn’t the loudest networking vendor on the block, but a little more than two weeks ago it released a subscription switching service that should have sent a shockwave through the industry. With Brocade Network Subscription, customers pay for their network infrastructure on a monthly basis. Sadly, because the new service was not some new fabric or newfangled technology, the industry was quick to dismiss the news as just another cloud announcement, and Brocade’s subscription program registered only a murmur. What was missed is that the service helps solidify I&O as a business unit on the same level as manufacturing, services, energy, and other lines of business.

I’ve written extensively about how networking solutions need to support two business realities: 1) enterprises are embedding themselves in their customers’ lives, and 2) businesses are forming symbiotic relationships with their vendors. In regard to the latter, businesses want to ensure that a vendor is creating products and solutions in the company’s best interest, so there is an expectation that partners will carry some of the financial risk and burden, ensuring that they stay committed. On the vendor side, the reasoning for embedding themselves is twofold. First, Wall Street rewards recurring revenue streams, which are more likely if the vendor can create something customers can only get from that particular source. Second, vendors know it costs ten times as much to win a new customer as to keep an existing one, so they would prefer that customers keep coming back, keeping operating costs as low as possible.
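Why does a monthly model appeal to the I&O budget owner? The sketch below compares cumulative cash out under an up-front purchase versus a subscription. All of the prices are hypothetical; Brocade has not published these exact terms.

```python
# Hypothetical comparison: buy switches up front vs. pay per month.
# None of these figures are Brocade's actual pricing.
purchase_price = 120_000.00       # up-front capex for a row of switches
annual_maintenance = 18_000.00    # support contract, ~15% of list
monthly_subscription = 4_000.00   # opex-only alternative

def cumulative_cost(months):
    """Total cash out under each model after `months` of use."""
    capex_total = purchase_price + annual_maintenance * (months / 12)
    opex_total = monthly_subscription * months
    return capex_total, opex_total

for months in (6, 12, 24, 36, 60):
    capex, opex = cumulative_cost(months)
    cheaper = "subscription" if opex < capex else "purchase"
    print(f"{months:>2} mo: purchase ${capex:>9,.0f} vs. "
          f"subscription ${opex:>9,.0f} -> {cheaper}")
```

The crossover point is what matters: a business expecting to shrink, migrate, or re-platform within a few years pays only for what it actually uses, while a stable, long-lived deployment still favors ownership.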

As a result, there has been a shift to a subscription service model. Take, for example, three distinct markets that support this strategy:

Read more

Intel Developer Forum (IDF) - Cloud. And Cloud, Cloud, Cloud. Oh, Yes, Did I Mention “Cloud”?

Richard Fichera

I just attended IDF, and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything was centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for plain old data center and I&O professionals. Some highlights:

Chips and processors and low-level hardware

Intel is, after all, a semiconductor manufacturer, and despite its expertise in design, its true core competitive advantage is its fabs: even competitors grudgingly acknowledge that Intel can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the 32nm “tock” that brought a significantly new microarchitecture to the same process node as Westmere. This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle: a shrink of Sandy Bridge onto the new 22nm process that nonetheless inherits Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process itself, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While Intel did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which the company is obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I had to guess, I would expect more cores and larger caches, along with increased support for I/O virtualization beyond what Intel currently offers.
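For readers who want to see the OS-visible face of P-states on hardware they already own, here is a small sketch that reads the Linux cpufreq sysfs interface. Treat it as illustrative: which files exist varies with the kernel version and the active scaling driver.

```python
# Peek at CPU frequency scaling (P-state) data on a Linux host.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name):
    """Return the file's contents, or None if the driver omits it."""
    try:
        return (CPUFREQ / name).read_text().strip()
    except (FileNotFoundError, PermissionError):
        return None

for f in ("scaling_driver", "scaling_governor",
          "scaling_min_freq", "scaling_cur_freq", "scaling_max_freq"):
    print(f"{f}: {read(f)}")
# Frequencies are reported in kHz; a governor such as "ondemand"
# drops the clock toward scaling_min_freq when the core is idle,
# which is the software-visible effect of the deeper P-states above.
```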

Read more