Systems of Engagement vs Systems of Record – Core Concept for Infrastructure Architecture

Richard Fichera

My Forrester colleagues Ted Schadler and John McCarthy have written about the differences between Systems of Record (SoR) and Systems of Engagement (SoE) in the context of customer-facing systems and mobility, but after further conversations with some very smart people at IBM, I think there are also important reasons for infrastructure architects to understand this dichotomy. Scalable and flexible systems of engagement, built with the latest in dynamic web technology, and the back-end systems of record, highly stateful and usually transactional systems designed to keep track of the “true” state of corporate assets, are very different animals from an infrastructure standpoint in two fundamental areas:

Suitability to cloud (private or public) deployment – SoE environments, by their nature, are generally constructed using horizontally scalable technologies, typically based on some level of standards including web standards, a Linux or Windows OS, and scalable middleware that hides the messy details of horizontally scaling a complex application. In addition, the workloads are generally highly parallel, with each individual interaction being of low value. These characteristics lead to very different requirements for consistency and resiliency.
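To make that contrast concrete, here is a minimal, purely illustrative sketch in Python; the product cache, inventory table, and function names are invented for this example and do not describe any particular system. The SoE-style page render is a stateless read of replicated data that any of N identical instances behind a load balancer could serve, while the SoR-style order update must run as a transaction against the single authoritative store.

    # Hypothetical contrast between an SoE-style request and an SoR-style update.
    # Names and data stores are invented for illustration only.
    import sqlite3

    PRODUCT_CACHE = {"sku-123": {"name": "Widget", "price": 9.99}}  # replicated, possibly stale

    def render_product_page(sku: str) -> str:
        """SoE: stateless read of replicated data; any replica can answer."""
        product = PRODUCT_CACHE.get(sku, {"name": "unknown", "price": 0.0})
        return f"<h1>{product['name']}</h1><p>${product['price']:.2f}</p>"

    def record_order(conn: sqlite3.Connection, sku: str, qty: int) -> None:
        """SoR: transactional update of the authoritative inventory record."""
        with conn:  # commits on success, rolls back on exception
            (stock,) = conn.execute(
                "SELECT stock FROM inventory WHERE sku = ?", (sku,)
            ).fetchone()
            if stock < qty:
                raise ValueError("insufficient stock")
            conn.execute(
                "UPDATE inventory SET stock = stock - ? WHERE sku = ?", (qty, sku)
            )

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, stock INTEGER)")
    conn.execute("INSERT INTO inventory VALUES ('sku-123', 10)")
    print(render_product_page("sku-123"))
    record_order(conn, "sku-123", 2)

The point of the contrast is that the first function can be replicated horizontally and tolerate slightly stale data, while the second is exactly the kind of stateful, consistency-critical work that resists that treatment.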

Read more

Dell Grabs Enstratius in Cloud Management Land Grab

Dave Bartoletti

Dell just picked up Enstratius for an undisclosed amount today, making the cloud management vendor the latest well-known cloud controller to get snapped up by a big infrastructure or OS vendor. Dell will add Enstratius cloud management capabilities to its existing management suite for converged and cloudy infrastructure, which includes element manager and configuration automator Active System Manager (ASM, the re-named assets acquired with Gale Technologies in November), Quest Foglight performance monitoring, and (maybe) what’s still around from Scalent and DynamicOps.

This is a good move for Dell, but it doesn’t exactly clarify how all these management capabilities will shake out. The current ASM product seems to be a combo of code from the original Scalent acquisition upgraded with the GaleForce product; regardless of what’s in it, though, what it does is discover, configure and deploy physical and virtual converged infrastructure components. A private cloud automation platform, basically. Like all private cloud management stacks, it does rapid template-based provisioning and workflow orchestration. But it doesn’t provision apps or provision to public or open-source cloud stacks. That’s where Enstratius comes in.

Read more

Ericsson's Biggest Challenge Is Complacency

Dan Bieler

At its recent analyst event, Ericsson outlined its strategy, product, and service ambitions. Ericsson remains the overall benchmark for network infrastructure vendors. The company has a leading market position in the growth segments of mobile broadband and network services and delivers a solid financial performance — despite the disappointing Q3 2012 results. Still, in my view, Ericsson has several challenges that it needs to address:

  • The cloud strategy is built on a questionable assumption. Clearly, network infrastructure is becoming more, not less, important for cloud-based solutions. Ericsson therefore assumes that carriers are well positioned to be cloud providers. But CIO perceptions suggest otherwise. CIOs tell us that carriers are far from the preferred choice for cloud solutions (see Figure 9 in the “Prepare For The Connected Enterprise Now” Forrester report). Carriers therefore need help in addressing the potential of cloud computing. For instance, Ericsson’s cloud solutions ought to help carriers cooperate with cloud partners regarding embedded connectivity in devices and applications.

Read more

Data Center Power And Efficiency – Public Enemy #1 Or The Latest Media Punching Bag?

Richard Fichera

This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12% of the energy powering their servers to deliver useful computation.” As with a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically, they are correct in claiming that only a very small fraction of the electricity going to a server is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency for the servers is not representative of the real operational efficiency of the complete data center.

On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:

  • A new data center with a PUE of 1.2 (very efficient), with 83% of the power going to IT workloads.
  • Then assume that 60% of that IT power goes to servers (storage and network get the rest), for a net of almost 50% of total power going into servers. If the servers are running at an average utilization of 10%, then only 10% of 50%, or 5%, of the power is actually going to real IT processing. Of course, the real "IT number" is servers plus storage plus network, so depending on how you account for them, the IT usage could be as high as 38% (0.83 × 0.4 + 0.05).
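For readers who want to check the numbers, here is the same back-of-the-envelope arithmetic as a small Python sketch; the PUE, allocation, and utilization figures are the illustrative assumptions from the bullets above, not measured values.

    # Back-of-the-envelope data center efficiency estimate, using the
    # illustrative assumptions from the bullets above (not measured values).
    pue = 1.2                    # total facility power / IT power
    it_fraction = 1 / pue        # ~83% of facility power reaches IT gear
    server_share = 0.6           # assumed share of IT power going to servers
    server_utilization = 0.10    # assumed average server utilization

    # Fraction of total facility power doing "useful" server work: ~5%.
    useful_server = it_fraction * server_share * server_utilization

    # Generous upper bound that counts all storage and network power as useful: ~38%.
    useful_it = it_fraction * (1 - server_share) + useful_server

    print(f"useful server work: {useful_server:.1%}")   # 5.0%
    print(f"generous IT usage:  {useful_it:.1%}")       # 38.3%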
Read more

Impressions From Google Enterprise’s Road Show

Dan Bieler

Recently I attended one of the day-long events in Munich that Google offers as part of its Atmosphere on Tour road show, which visits 24 cities globally in 2012. The event series targets enterprise customers and aims to get them interested in Google’s enterprise solutions, including Google Apps, search, analytics, and mapping services, as well as the Chromebook and Chromebox devices.

Google Enterprise as a division has been around for some time, but it is only fairly recently that Google started to push its enterprise solutions more actively into the market through marketing initiatives. The cloud-delivery model clearly plays a central role in Google’s enterprise pitch (my colleague Stefan Ried also gave a presentation on the potential of cloud computing at the event).

Still, the event itself was a touch light on details and remained pretty high level throughout. Whilst nobody expects Google to communicate a detailed five-year plan, it would have been useful to get more insight into Google’s vision for the enterprise and how it intends to cater to enterprise needs. Thankfully, prior to the official event, Google shared some valuable details of this vision with us. The four main themes that stuck out for us are:

Read more

Rethink Your IT Strategy If You’re Serious About Cloud

Brian Hopkins

Cloud – people can’t agree on exactly what it is, but everyone can agree that they want some piece of it. I have not talked to a single client who isn’t doing something proactive to pursue cloud in some form or fashion. This cloud obsession was really evident in our 2011 technology tweet jam as well, which is why this year’s business technology and technology trends reports cover cloud extensively. Our research further supports this – for example, 29% of infrastructure and operations executives surveyed stated that building a private cloud was a critical priority for 2011, while 28% planned to use public offerings, and these numbers are rising every year.

So what should EAs think about cloud? My suggestion is that you think about how your current IT strategy supports taking advantage of what cloud is offering (and what it’s not). Here are our cloud-related technology trends along with some food for thought:

  • The next phase of IT industrialization begins. This trend points out how unprepared our current IT delivery model is for the coming pace of technology change, which is why cloud is appealing. It offers potentially faster ways to acquire technology services. Ask yourself – is my firm’s current IT model and strategy good enough to meet technology demands of the future?
Read more

AMD Releases Interlagos And Valencia – Bulldozers In The Cloud

Richard Fichera

This week AMD finally released its Opteron 6200 and 4200 series CPUs. These are the long-awaited server-oriented Interlagos and Valencia CPUs, based on the new “Bulldozer” core and offering up to 16 x86 cores in a single socket. The announcement was targeted at (drum roll, one guess per customer only) … “The Cloud.” AMD appears to be positioning its new architectures as the platform of choice for cloud-oriented workloads, focusing on highly threaded, throughput-oriented benchmarks that take full advantage of its high core count and unique floating-point architecture, along with what look like excellent throughput-per-watt metrics.

At the same time it is pushing the now seemingly mandatory “cloud” message, AMD is not ignoring the meat-and-potatoes enterprise workloads that have been the mainstay of server CPU sales – virtualization, database, and HPC – where the combination of many cores, excellent memory bandwidth, and large memory configurations should yield strong results. In its competitive comparisons, AMD targets Intel’s 5640 CPU, which it claims is Intel’s most widely used Xeon CPU, and shows very favorable comparisons with regard to performance, price, and power consumption. Among the features that AMD cites as contributing to these results are:

  • Advanced power and thermal management, including the ability to power off inactive cores, contributing to an idle power of less than 4.4W per core. Interlagos also offers a TDP power-capping capability that allows I&O groups to set the total power threshold of the CPU in 1W increments, enabling fine-grained tailoring of power in the server racks.
  • Turbo CORE, which allows boosting the clock speed of cores by up to 1 GHz for half the cores or 500 MHz for all the cores, depending on workload.
Read more

Silk Browser, The BIG Leap For Amazon’s Fire, Shows Innovative Use Of App Internet

Richard Fichera

My colleague James Staten recently wrote about AutoDesk Cloud as an exemplar of the move toward App Internet, the concept of implementing applications that are distributed between local and cloud resources in a fashion that is transparent to the user except for the improved experience. His analysis is 100% correct, and AutoDesk Cloud represents a major leap in CAD functionality, intelligently offloading the inherently parallel and intensive rendering tasks and facilitating some aspects of collaboration.

But (and there’s always a “but”), having been involved in graphics technology on and off since the '80s, I would say that “cloud” implementation of rendering and analysis has been incrementally evolving for decades. There have been hundreds of well-documented distributed environments in which desktops fluidly shipped their renderings to local rendering and analysis farms that would today be called private clouds, with the results shipped back to the originating workstations. This work was largely developed and paid for either by universities or by media companies as part of major movie production projects. Some of these installations were of significant scale, such as “Massive,” the rendering and animation farm for "Lord of the Rings" that had approximately 1,500 compute nodes, and a subsequent installation at Weta that may have up to 7,000 nodes. In my admittedly arguable opinion, the move to AutoDesk Cloud, while representing a major jump in capabilities by making the cloud accessible to a huge number of users, does not represent a major architectural innovation, but rather an incremental step.

Read more

DCIM And The New Reality Of Infrastructure & Operations

Richard Fichera

I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old – the evolution of semiconductor technology, the increasingly elegant attempts to design systems and components that can be incrementally throttled, and the ever more sophisticated construction of the data centers themselves, with greater modularity and physical efficiency of power and cooling.

The new is the incredible momentum I see behind data center infrastructure management (DCIM) software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that models tens of thousands of components; collects, filters, and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen); and understands the relationships between components well enough to proactively raise alarms, model potential workload placement, and make recommendations about prospective changes.
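As a rough illustration of the data-collection and alarming side of that description, here is a minimal sketch in Python; the sensor names, thresholds, and readings are invented for the example and do not represent any particular DCIM product’s data model.

    # Minimal sketch of DCIM-style sensor aggregation and alarming.
    # Sensor names, thresholds, and readings are invented for illustration.
    from statistics import mean

    THRESHOLDS = {
        "inlet_temp_c": 27.0,   # assumed inlet temperature alarm limit (degrees C)
        "rack_power_kw": 8.0,   # assumed per-rack power budget (kW)
    }

    readings = {  # hypothetical recent samples keyed by (rack, sensor type)
        ("rack-12", "inlet_temp_c"): [26.1, 27.8, 28.3],
        ("rack-12", "rack_power_kw"): [6.2, 6.4, 6.1],
    }

    def check_alarms(samples):
        """Average recent samples and flag any sensor over its threshold."""
        alarms = []
        for (rack, sensor), values in samples.items():
            avg = mean(values)
            limit = THRESHOLDS.get(sensor)
            if limit is not None and avg > limit:
                alarms.append(f"{rack}/{sensor}: average {avg:.1f} exceeds limit {limit}")
        return alarms

    for alarm in check_alarms(readings):
        print(alarm)   # e.g. rack-12/inlet_temp_c: average 27.4 exceeds limit 27.0

Real DCIM suites do this across thousands of sensors and add the relationship modeling described above; the sketch shows only the basic collect-and-compare loop.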

Of all the technologies reviewed in the document, DCIM offers one of the highest potentials for improving overall efficiency without sacrificing reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think that it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect to see major competitive action among the current suppliers, and there is a strong potential for additional consolidation.

Intel Developer Forum (IDF) - Cloud. And Cloud, Cloud, Cloud. Oh, Yes, Did I Mention “Cloud”?

Richard Fichera

I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything is centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for general old-fashioned data center and I&O professionals. Some highlights:

Chips and processors and low-level hardware

Intel is, after all, a semiconductor manufacturer, and despite their expertise in design, their true core competitive advantage is their foundry operations – even their competitors grudgingly acknowledge that they can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the new 32nm architecture that succeeded Westmere (and incorporated some significant design improvements). This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle: a shrink of Sandy Bridge onto the 22nm process that continues Intel’s recent focus on power efficiency, with major improvements beyond the inherent advantages of the new process, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While they did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which they are obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I were to guess, I would expect more cores and larger caches, along with increased support for virtualization of I/O beyond what they currently have.

Read more