ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

What was until recently nothing more than outlandish speculation, the prospect of a new entrant in the volume Linux and Windows server space, has suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. If they can’t get all three, the combination of cheaper and more energy-efficient appears attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now the debate was Intel versus AMD, and “low power” meant a CPU with four cores and a power dissipation of 35 to 65 watts.

The Promised Land

The performance trajectory of what were formerly purely mobile-device processors, notably the ARM Cortex line, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what will it take to become a reality?

Our first item of business is to figure out whether it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently shipping dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2W, far less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
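
To make the power argument concrete, here is a minimal back-of-envelope sketch using only the rough numbers cited above (a roughly 2W dual-core Cortex A9 versus a 35 to 65W quad-core x86 part). It counts CPU silicon only and ignores memory, I/O, and platform power, which will narrow the gap considerably in practice:

    /* Back-of-envelope comparison of CPU cores per kilowatt. The wattages and
     * core counts are the rough figures cited in this post; they cover the CPU
     * silicon only, not whole-server power, so treat the ratio as an upper bound. */
    #include <stdio.h>

    int main(void) {
        const double arm_watts = 2.0,  arm_cores = 2.0;   /* dual-core Cortex A9 */
        const double x86_watts = 50.0, x86_cores = 4.0;   /* midpoint of 35-65W  */

        double arm_cores_per_kw = arm_cores / arm_watts * 1000.0;   /* 1000 */
        double x86_cores_per_kw = x86_cores / x86_watts * 1000.0;   /* 80   */

        printf("ARM: %.0f cores per kW of CPU power\n", arm_cores_per_kw);
        printf("x86: %.0f cores per kW of CPU power\n", x86_cores_per_kw);
        printf("Ratio: %.1fx in ARM's favor (CPU silicon only)\n",
               arm_cores_per_kw / x86_cores_per_kw);
        return 0;
    }

Even if platform overhead cuts that ratio substantially, the density argument remains interesting, which is exactly why the “But…” above deserves a closer look.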

Read more

NetApp Acquires Akorri – Moving Up The Virtualization Stack

Richard Fichera

NetApp recently announced that it was acquiring Akorri, a small but highly regarded provider of management solutions for virtualized storage environments. All in all, this is yet another sign of the increasingly strategic importance of virtualized infrastructure and the need for existing players, regardless of how strong their positions are in their respective silos, to acquire additional tools and capabilities for management of an extended virtualized environment.

NetApp, while one of the strongest suppliers in the storage industry, faces not only continued pressure from EMC, which owns VMware and has been on a management software acquisition binge for years, but also renewed pressure from IBM and HP, which are increasingly tying their captive storage offerings into their own integrated virtualized infrastructure offerings. This tighter coupling of proprietary technology, while not explicitly disenfranchising external storage vendors, will still tighten the screws slightly and reduce the number of opportunities for NetApp to partner with them. Even Dell, long regarded as the laggard in high-end enterprise presence, has been ramping up its investment in management and its ability to deliver integrated infrastructure, including the purchase of storage technology, its run at 3Par, and recent investments in companies such as Scalent (see my previous blog on Dell as an enterprise player and my colleague Andrew Reichman’s discussion of the 3Par acquisition), all very clear signals that it wants to go even further as a supplier of integrated infrastructure.

Read more

How Complexity Spilled The Oil

Jean-Pierre Garbani

The Gulf oil spill of April 2010 was an unprecedented disaster. The National Oil Spill Commission’s report summary shows that this could have been prevented with the use of better technology. For example, while the Commission agrees that the monitoring systems used on the platform provided the right data, it points out that the solution used relied on engineers to make sense of that data and correlate the right elements to detect anomalies. “More sophisticated, automated alarms and algorithms” could have been used to create meaningful alerts and maybe prevent the explosion. The Commission’s report shows that the reporting systems used have not kept pace with the increased complexity of drilling platforms. Another conclusion is even more disturbing, as it points out that these deficiencies are not uncommon and that other drilling platforms in the Gulf of Mexico face similar challenges.

If we substitute “drilling platform” with “data center,” this sound awfully familiar. How many IT organizations are relying on relatively simple data collection coming from point monitoring such as network, server, or application while trying to manage the performance and availability of increasingly complex applications? IT operations engineers sift through mountains of data from different sources trying to make sense of what is happening and usually fall short of finding meaningful alerts. The consequences may not be as dire as the Gulf oil spill, but they can still translate into lost productivity and revenue.
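
As a purely illustrative sketch of what “more sophisticated, automated alarms and algorithms” can mean in a data center, the fragment below (with invented metric names, baselines, and thresholds, not any particular product’s logic) raises an alert only when several independent point metrics deviate from their own baselines at the same time, rather than asking an engineer to correlate the feeds by eye:

    /* Toy correlated-alerting sketch: compute a z-score for each metric against
     * its own recent baseline, and alert only when multiple metrics breach at
     * once. All data, names, and thresholds are hypothetical illustrations. */
    #include <stdio.h>
    #include <math.h>

    #define WINDOW 8   /* number of baseline samples per metric */

    static double zscore(const double *hist, int n, double latest) {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++) mean += hist[i];
        mean /= n;
        for (int i = 0; i < n; i++) var += (hist[i] - mean) * (hist[i] - mean);
        double sd = sqrt(var / n);
        return sd > 0.0 ? (latest - mean) / sd : 0.0;
    }

    int main(void) {
        /* Baseline history and latest reading for three point monitors. */
        double cpu[WINDOW]  = {40, 42, 41, 39, 43, 40, 41, 42},          cpu_now  = 78;
        double disk[WINDOW] = {5, 6, 5, 7, 6, 5, 6, 6},                  disk_now = 19;
        double resp[WINDOW] = {120, 118, 125, 122, 119, 121, 117, 123},  resp_now = 310;

        int breaches = 0;
        breaches += fabs(zscore(cpu,  WINDOW, cpu_now))  > 3.0;
        breaches += fabs(zscore(disk, WINDOW, disk_now)) > 3.0;
        breaches += fabs(zscore(resp, WINDOW, resp_now)) > 3.0;

        /* A single breach is treated as noise; correlated breaches become an alert. */
        if (breaches >= 2)
            printf("ALERT: %d metrics deviating together -- investigate the service\n", breaches);
        else
            printf("No correlated anomaly (%d isolated deviations)\n", breaches);
        return 0;
    }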

The fact that many IT operations have not (yet) faced a meltdown is not a valid counterargument: There is, for example, a good reason to purchase hurricane insurance when one lives in Florida, even though destructive storms are not that common. As with the weather, there are so many variables at play in today’s business services that mere humans can’t be expected to make sense of them all.

Read more

Intel Announces Sandy Bridge. A Big Deal? You Bet!

Richard Fichera

Intel today officially announced the first products based on the much-discussed Sandy Bridge CPU architecture, and first impressions are highly favorable. My take is that Sandy Bridge represents the first step in a very aggressive product road map for Intel in 2011.

Sandy Bridge is the next architectural spin (the “tock”) after Intel’s Westmere die shrink (the “tick”) of the predecessor Nehalem architecture, in Intel’s famous “tick-tock” cadence of alternating architectural changes and process shrinks, and it incorporates some major innovations compared to the previous architecture:

  • Minor but collectively significant changes to many aspects of the low-level microarchitecture: more registers, better prefetch, and changes to the way instructions and operands are decoded, cached, and written back to registers and cache.
  • Major changes in the integration of functions on the CPU die: almost all major subsystems, including the CPU cores, memory controller, graphics controller, and PCIe controller, are now integrated onto the same die and can share data with much lower latency than in previous generations. In addition to more efficient data sharing, this level of integration allows for better power efficiency.
  • Improvements to media processing: a dedicated video transcoding engine and an extended vector instruction set for media and floating-point calculations improve Sandy Bridge’s capabilities in several major application domains (see the brief sketch below).
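
As a quick illustration of that extended vector instruction set, here is a minimal sketch of the 256-bit AVX extensions that arrive with Sandy Bridge; a single instruction operates on eight single-precision floats at once. The array contents are arbitrary, and the example assumes a compiler with AVX support (for example, gcc with -mavx):

    /* Minimal AVX sketch: add two arrays of eight floats using 256-bit vectors. */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[8];

        __m256 va = _mm256_loadu_ps(a);      /* load 8 floats (unaligned) */
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_add_ps(va, vb);   /* 8 additions in one instruction */
        _mm256_storeu_ps(c, vc);

        for (int i = 0; i < 8; i++)
            printf("%.0f ", c[i]);           /* prints "9" eight times */
        printf("\n");
        return 0;
    }
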
Read more

Networks Are About The Users, Not The Apps!

Andre Kindness

Virtualization and cloud talk have just woken the sleeping giant: networking. For too long, we were isolated in our L2-L4 world, soundly sleeping while VMs were created and a distant cousin, the vSwitch, was born. Sure, we can do a little of this and a little of that in this virtual world, but the reality is that everything is very manually driven and a one-off process. For example, vendors talk about moving policies from one port to another when a VM moves, but they don’t discuss policies moving automatically across the links from edge switches to distribution switches. Even management tools are scrambling to solve issues within the data center. In this game of catch-up, I’m hearing people banter the word “app” around. Everyone from server personnel to network administrators is trying to relate to the app. Network management tools, traffic sensors, switches, and WAN optimization products are being developed to measure, monitor, or report on the performance of apps in some form or another.

Why is “app” the common language? Why are networks relating to “apps”? With everything coming down the pike, we are designing for yesterday instead of tomorrow. Infrastructure and operations professionals will have to deal with:

  • Web 2.0 tools. Traditional apps can alienate users when their languages and customs aren’t designed into the enterprise apps, yet no one app can deal with the sheer magnitude of languages. Web 2.0 technologies, such as social networking sites, blogs, wikis, video-sharing sites, hosted services, web applications, mashups, and folksonomies, connect people with each other globally to collaborate and share information, but in a way that is easily customized and localized. For example, mashups allow apps to be easily created in any language, with data sourced from a variety of locations.
Read more

What Is The Cost Of Being Blind, Insecure, And Unmanaged?

Andre Kindness

With the increased presence of business principles within the IT arena, I get a lot of inquiries from Infrastructure & Operations Professionals who are trying to figure out how to justify their investment in a particular product or solution in the security, monitoring, and management areas. Since most marketing personnel view this either as a waste of resources on a futile quest or as too intimidating to even begin to tackle, IT vendors have not provided their customers with more than marketing words: lower TCO, more efficient, higher value, more secure, or more reliable. It’s a bummer, since the request is a valid concern for any IT organization. Consider that other industries with complex products and services (nuclear power plants, medical delivery systems, air traffic control) look at risk and reward all the time to justify their investments. They all use some form of probabilistic risk assessment (PRA) tool to figure out technological, financial, and programmatic risk by combining it with disaster costs: revenue losses, productivity losses, compliance and/or reporting penalties, penalties and loss of discounts, impact to customers and strategic partners, and impact to cash flow.

PRA teams use fault tree analysis (FTA) for top-down assessment and failure mode and effects analysis (FMEA) for bottom-up assessment.
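
For readers unfamiliar with FTA, the arithmetic at its core is simple. The toy example below combines invented component failure probabilities through AND and OR gates to estimate a top-level event, the kind of number that can then be multiplied by the disaster costs above to justify an investment:

    /* Toy fault-tree calculation: basic failure probabilities combine through
     * OR gates (any input fails) and AND gates (all inputs fail). The component
     * names and probabilities are invented purely for illustration. */
    #include <stdio.h>

    /* OR gate: P(at least one input fails) = 1 - prod(1 - p_i) */
    static double gate_or(const double *p, int n) {
        double none_fail = 1.0;
        for (int i = 0; i < n; i++) none_fail *= (1.0 - p[i]);
        return 1.0 - none_fail;
    }

    /* AND gate: P(all inputs fail) = prod(p_i) */
    static double gate_and(const double *p, int n) {
        double all_fail = 1.0;
        for (int i = 0; i < n; i++) all_fail *= p[i];
        return all_fail;
    }

    int main(void) {
        /* Hypothetical annual failure probabilities. */
        double sensors[] = {0.02, 0.02};        /* redundant sensors: AND gate   */
        double causes[3];

        causes[0] = gate_and(sensors, 2);       /* both sensors fail             */
        causes[1] = 0.01;                       /* alerting software defect      */
        causes[2] = 0.005;                      /* operator misses the alarm     */

        double top_event = gate_or(causes, 3);  /* any cause => missed condition */
        printf("P(missed critical condition) = %.4f per year\n", top_event);
        return 0;
    }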

Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RH 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that had previously maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL 980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.

Read more

Evaluating Complexity

Jean-Pierre Garbani

We’re starting to get inquiries about complexity. Key questions are how to evaluate complexity in an IT organization and consequently how to evaluate its impact on availability and performance of applications. Evaluating complexity wouldn’t be like evaluating the maturity of IT processes, which is like fixing what’s broken, but more like preventive maintenance: understanding what’s going to break soon and taking action to prevent the failure.

The volume of applications and services certainly has something to do with complexity. Watts Humphrey observed that code size (in KLOC, thousands of lines of code) doubles every two years, largely due to increases in hardware capacity and speed, and this is easily validated by the evolution of operating systems over the past several years. It stands to reason that, as a consequence, the total number of errors in the code also doubles every two years.
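
The arithmetic behind that claim is worth making explicit. Assuming an arbitrary 500 KLOC starting point, doubling every two years gives an eightfold increase over six years, and, by the same reasoning, an eightfold increase in the raw opportunity for latent defects:

    /* Quick illustration of the doubling observation: size(t) = size(0) * 2^(t/2).
     * The 500 KLOC starting point is an arbitrary, assumed figure. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double kloc = 500.0;                   /* assumed starting code size */
        for (int years = 0; years <= 6; years += 2)
            printf("year %d: %.0f KLOC\n", years, kloc * pow(2.0, years / 2.0));
        return 0;                              /* prints 500, 1000, 2000, 4000 */
    }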

But code is not the only cause of error: Change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes. Capacity issues would also be dependent on these parameters.

There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.

Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?

Read more

Benchmark Your Job Responsibilities Against Peers

JP Gownder

Consumer product strategists hold a wide variety of job titles: product manager, product development manager, services manager, or a variation of general manager, vice president, or even, sometimes, CEO or other C-level title. Despite these varying titles, many of you share a great number of job responsibilities with one another.

We recently fielded our Q4 Global Consumer Product Strategy Research Panel Online Survey to 256 consumer product strategy professionals from a wide variety of industries. Why do this? One reason was to better understand the job responsibilities that you, in your role, take on every day. But the other reason was to help you succeed: By benchmarking yourself against peers, you can identify new job responsibilities for growth, improve your effectiveness, and ultimately advance your career.

What did we find out? The bottom line is that consumer product strategy jobs are pretty tough. We found that a wide range of skills is required to do the job well, since consumer product strategists are expected to:

  • Drive innovation. Consumer product strategists are front-and-center in driving innovation, which ideally suffuses the entire product life cycle. Being innovative is a tall task, but all of you are expected to be leaders here.
     
  • Think strategically... You've got to have a strategic view of your markets, identifying new concepts and business models and taking a long-term view of tomorrow's products.
     
  • ...but execute as a business person. While thinking strategically, you generally have to execute tactically as well. You're business unit owners. At the senior-most levels, you hold the P&L for the product or portfolio of products.
     
Read more

Apply A “Startup” Mentality To Your IT Infrastructure And Operations

Doug Washburn

Cash-starved. Fast-paced. Understaffed. Late nights. T-shirts. Jeans.

These descriptors are just as relevant to emerging tech startups as they are to the typical enterprise IT infrastructure and operations (I&O) department. And to improve customer focus and develop new skills, I&O professionals should apply a “startup” mentality.

A few weeks ago, I had the opportunity to spend time with Locately, a four-person Boston-based startup putting a unique spin on customer insights and analytics: location. By having consumers opt in to Locately’s mobile application, media companies and brands can understand how their customers spend their time and where they go. When that location data is layered with other contextual information, such as purchases, time, and property identifiers (e.g., store names, train stops), marketers and strategists can drive revenues and awareness, for example by optimizing their marketing and advertising tactics or retail store placement.

The purpose of my visit to Locately was not to write this blog post, at least not initially. It was to give the team of five Research Associates I manage exposure to a different type of technology organization than they usually have access to: the emerging tech startup. Throughout our discussion with Locately, it struck me that I&O organizations share a number of similarities with startups. In particular, here are two entrepreneurial characteristics that I&O professionals should embody in their own organizations:

Read more