Intel Announces Sandy Bridge. A Big Deal? You Bet!

Richard Fichera

Intel today officially announced the first products based on the much-discussed Sandy Bridge CPU architecture, and first impressions are highly favorable. My take: Sandy Bridge represents the first step in a very aggressive product road map for Intel in 2011.

Sandy Bridge is the next architectural spin after Intel’s Westmere shrink of the predecessor Nehalem architecture (Westmere was the “tick,” the process shrink, in Intel’s famous “tick-tock” cadence, making Sandy Bridge the following “tock,” a new microarchitecture) and incorporates some major innovations compared to the previous generation:

  • Minor but collectively significant changes to many aspects of the low-level microarchitecture – more registers, better prefetch, and changes to the way instructions and operands are decoded, cached, and written back to registers and cache.
  • Major changes in integration of functions on the CPU die – Almost all major subsystems, including CPU, memory controller, graphics controller, and PCIe controller, are now integrated onto the same die and can share data with much lower latency than in previous generations. In addition to more efficient data sharing, this level of integration allows for better power efficiency.
  • Improvements to media processing – A dedicated video transcoding engine and an extended vector instruction set for media and floating-point calculations improve Sandy Bridge’s capabilities in several major application domains (see the sketch after this list).
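
To make the vector-extension bullet concrete, here is a minimal sketch using AVX, the 256-bit vector instruction set that debuts with Sandy Bridge. The intrinsics are Intel’s published ones; the program itself is my own toy example, not Intel sample code.

```c
/* Toy AVX example: add two vectors of eight floats at once.
 * Build with an AVX-aware compiler, e.g.: gcc -mavx avx_add.c
 */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);    /* load eight floats (unaligned) */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb); /* eight additions in one instruction */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);         /* prints: 9 9 9 9 9 9 9 9 */
    printf("\n");
    return 0;
}
```

Each 256-bit AVX register holds twice as many single-precision operands as the older 128-bit SSE registers, which is where much of the claimed media and floating-point gain comes from.
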
Read more

Networks Are About The Users, Not The Apps!

Andre Kindness

Virtualization and cloud talk just woke the sleeping giant, networking. For too long, we networking professionals were isolated in our L2-L4 world, soundly sleeping as VMs were created and a distant cousin, the vSwitch, was born. Sure, we can do a little of this and a little of that in this virtual world, but the reality is that everything is manually driven, a one-off process. For example, vendors talk about moving policies from one port to another when a VM moves, but they don’t discuss policies moving automatically on the links from edge switches to distribution switches. Even management tools are scrambling to solve issues within the data center. In this game of catch-up, I’m hearing people banter the word “app” around. Everyone from server personnel to networking administrators is trying to relate to an app. Network management tools, traffic sensors, switches, and WAN optimization products are being developed to measure, monitor, or report on the performance of apps in some form or another.

Why is “app” the common language? Why are networks relating to “apps”? With everything coming down the pike, we are designing for yesterday instead of tomorrow. Infrastructure and operations professionals will have to deal with:

  • Web 2.0 tools. Traditional apps can alienate users when language and customs aren’t designed into the enterprise apps, yet no one app can deal with the sheer magnitude of languages. Web 2.0 technologies — such as social networking sites, blogs, wikis, video-sharing sites, hosted services, web applications, mashups, and folksonomies — connect people with each other globally to collaborate and share information, but in a way that is easily customized and localized. For example, mashups allow apps to be easily created in any language, with data sourced from a variety of locations.
Read more

What Is The Cost Of Being Blind, Insecure, And Unmanaged?

Andre Kindness

With the increased presence of business principles within the IT arena, I get a lot of inquiries from Infrastructure & Operations Professionals who are trying to figure out how to justify their investment in a particular product or solution in the security, monitoring, and management areas. Since most marketing personnel view this either as a waste of resources in a futile quest or as too intimidating to even begin to tackle, IT vendors have not provided their customers with more than marketing words: lower TCO, more efficient, higher value, more secure, or more reliable. It’s a bummer, since the request is a valid concern for any IT organization. Consider that other industries with complex products and services -- nuclear power plants, medical delivery systems, or air traffic control -- look at risk and reward all the time to justify their investments. They all use some form of probabilistic risk assessment (PRA) tools to quantify technological, financial, and programmatic risk and combine it with disaster costs: revenue losses, productivity losses, compliance and/or reporting penalties, penalties and loss of discounts, impact to customers and strategic partners, and impact to cash flow.

PRA teams use fault tree analysis (FTA) for top-down assessment and failure mode and effects analysis (FMEA) for bottom-up assessment.
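
For readers unfamiliar with the mechanics, the fault tree math itself is simple. The sketch below is my own toy example, not any PRA product’s output, and it assumes the basic events are independent: an AND gate fails only if all inputs fail; an OR gate fails if any input does.

```c
/* Toy fault tree: P(monitoring outage) if the collector fails OR both
 * redundant network paths fail together. All events assumed independent;
 * component names and probabilities are hypothetical.
 */
#include <stdio.h>

double and_gate(const double *p, int n) {   /* all inputs must fail */
    double out = 1.0;
    for (int i = 0; i < n; i++) out *= p[i];
    return out;
}

double or_gate(const double *p, int n) {    /* any input failing suffices */
    double out = 1.0;
    for (int i = 0; i < n; i++) out *= (1.0 - p[i]);
    return 1.0 - out;
}

int main(void) {
    double paths[] = {0.05, 0.05};            /* per-path failure probability */
    double top[2];
    top[0] = 0.01;                            /* collector failure            */
    top[1] = and_gate(paths, 2);              /* both paths down              */
    printf("P(top event) = %.4f\n", or_gate(top, 2));  /* ~0.0125 */
    return 0;
}
```

Real PRA tooling layers common-cause failures, uncertainty distributions, and the disaster-cost figures above onto this arithmetic, but the gate math is the foundation.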

Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.
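
As a practical aside, here is a small sketch, my own and not from these users, showing how an administrator might confirm a volume’s capacity with the POSIX statvfs call, for instance after moving a large file system from ext4 to XFS. The /data mount point is hypothetical.

```c
/* Report the total capacity of a mounted file system via statvfs. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    const char *mount = "/data";   /* hypothetical XFS mount point */
    struct statvfs vfs;

    if (statvfs(mount, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    unsigned long long total =
        (unsigned long long)vfs.f_blocks * vfs.f_frsize;
    printf("%s: %.1f TB total\n", mount, total / 1e12);
    return 0;
}
```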

Read more

Benchmark Your Job Responsibilities Against Peers

JP Gownder

Consumer product strategists hold a wide variety of job titles: product manager, product development manager, services manager, or a variation of general manager, vice president, or even, sometimes, CEO or other C-level title. Despite these varying titles, many of you share a great number of job responsibilities with one another.

We recently fielded our Q4 Global Consumer Product Strategy Research Panel Online Survey to 256 consumer product strategy professionals from a wide variety of industries. Why do this? One reason was to better understand the job responsibilities that you, in your role, take on every day. But the other reason was to help you succeed: By benchmarking yourself against peers, you can identify new job responsibilities for growth, improve your effectiveness, and ultimately advance your career.

What did we find out? The bottom line is that consumer product strategy jobs are pretty tough. We found that a wide range of skills is required to do the job well, since consumer product strategists are expected to:

  • Drive innovation. Consumer product strategists are front-and-center in driving innovation, which ideally suffuses the entire product life cycle. Being innovative is a tall task, but all of you are expected to be leaders here.
  • Think strategically... You've got to have a strategic view of your markets, identifying new concepts and business models and taking a long-term view of tomorrow's products.
  • ...but execute as a business person. While thinking strategically, you generally have to execute tactically as well. You're business unit owners. At the senior-most levels, you hold the P&L for the product or portfolio of products.
Read more

Apply A “Startup” Mentality To Your IT Infrastructure And Operations

Doug Washburn

Cash-starved. Fast-paced. Understaffed. Late nights. T-shirts. Jeans.

These descriptors are just as relevant to emerging tech startups as they are to the typical enterprise IT infrastructure and operations (I&O) department. And to improve customer focus and develop new skills, I&O professionals should apply a “startup” mentality.

A few weeks ago, I had the opportunity to spend time with Locately, a four-person Boston-based startup putting a unique spin on customer insights and analytics: Location. By having consumers opt in to Locately’s mobile application, media companies and brands can understand how their customers spend their time and where they go. Layered with other contextual information – such as purchases, time, and property identifiers (e.g., store names, train stops) – marketers and strategists can drive revenues and awareness, for example, by optimizing their marketing and advertising tactics or retail store placement.

The purpose of my visit to Locately was not to write this blog post, at least not initially. It was to give the team of five Research Associates that I manage exposure to a different type of technology organization than they usually have access to – the emerging tech startup. Throughout our discussion with Locately, it struck me that I&O organizations share a number of similarities with startups. In particular, here are two entrepreneurial characteristics that I&O professionals should embody in their own organizations:

Read more

ScaleMP – Interesting Twist On Systems Scalability And Virtualization

Richard Fichera

I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, it offers the only successful solution (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.

Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.

Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.

Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, with a couple of dozen publicly referenced. All in all, that is probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4 – 8 systems, well short of their theoretical limits.
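
The point about memory reference patterns deserves a concrete illustration. The sketch below is mine, not ScaleMP’s: a sequential scan rewards exactly the caching and prefetching that a software-aggregated SMP relies on to hide remote-memory latency, while a randomized visiting order defeats them.

```c
/* Compare sequential vs. random traversal of the same array. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)                    /* 16M elements */

int main(void) {
    int *data = malloc(N * sizeof *data);
    size_t *idx = malloc(N * sizeof *idx);
    if (!data || !idx) return 1;

    for (size_t i = 0; i < N; i++) { data[i] = 1; idx[i] = i; }
    for (size_t i = N - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[i];      /* sequential scan */
    double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[idx[i]]; /* random order */
    double rnd = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential: %.3fs  random: %.3fs  (sum=%ld)\n", seq, rnd, sum);
    return 0;
}
```

On a single box the random pass is typically several times slower; on a cluster glued together as one SMP, the same effect is amplified by network latency, which is why understanding your application’s reference patterns (point 2 below) matters so much.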

My quick take is that the software works, and bears investigation if you have an application that:

  1. Either is certified to run with ScaleMP (not many are), or is one whose code you control.
  2. You understand the memory reference patterns of the application, and
Read more

Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance

Richard Fichera

On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets with the introduction of the Oracle Private Cloud Architecture: an integrated infrastructure stack with InfiniBand and/or 10G Ethernet fabric; integrated virtualization, management, and servers; and software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations, Exadata, Exalogic, and the new Sunrise Supercluster, as proof points for the architecture.

Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.

Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:

  1. Oracle x86 or T3-based servers and storage.
  2. Oracle quad data rate (QDR) InfiniBand switches and the Oracle Solaris gateway, which makes the InfiniBand network look like an extension of the enterprise 10G Ethernet environment.
  3. Oracle Linux or Solaris.
  4. Oracle Enterprise Manager Ops Center for management.
Read more

IPv6: Drive Innovation With Rewards, Not Fear

Andre Kindness

I’m a sucker for good, biting humor, and in the spirit of Stephen Colbert’s Medals of Fear that he gave to a few distinguished souls (the press, Mark Zuckerberg, Anderson Cooper) at the rally in Washington D.C., I would like to hand a medal to the U.S. State Department for its 1999 publication of a country-by-country set of "Y2K" warnings — “End of Days” scenarios and solutions — for Americans doing business in 194 nations. I would give another medal to IPv6, the most drawn-out killer technology to date — and one that has had the longest run at trying to scare everyone about the end of IPv4. At Forrester, we are starting to see the adoption freighter slowly turning via the number of inquiries rolling in; governments accelerating their adoption with new mandates; vendors including IPv6 in their solutions; and the Number Resource Organization escalating its announcements about the depletion of IPv4 addresses (only 5% left!). To add to the drama, vendors are in the process of creating IPv4 address countdown clocks to generate buzz and differentiation. These scare tactics haven’t worked because technology pundits haven’t spoken about IPv6 in business terms. There is enormous business value in IPv6; those who embrace it will be the new leaders in their space.
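
To put the depletion drama into plain arithmetic, here is a back-of-the-envelope sketch, mine rather than Forrester research; only the “5% of IPv4 left” figure comes from the post above.

```c
/* IPv4 (32-bit) vs. IPv6 (128-bit) address space, order-of-magnitude view.
 * Build with: gcc ipv6_math.c -lm
 */
#include <stdio.h>
#include <math.h>

int main(void) {
    double ipv4 = pow(2.0, 32);    /* ~4.3e9 addresses  */
    double ipv6 = pow(2.0, 128);   /* ~3.4e38 addresses */

    printf("IPv4 total:            %.2e addresses\n", ipv4);
    printf("IPv4 at 5%% remaining:  %.2e addresses left\n", ipv4 * 0.05);
    printf("IPv6 total:            %.2e addresses\n", ipv6);
    printf("IPv6 per person (7e9): %.2e addresses\n", ipv6 / 7e9);
    return 0;
}
```

Roughly 200 million IPv4 addresses remained at the 5% mark, versus about 5 x 10^28 IPv6 addresses per human being; the business conversation should be about what that abundance enables, not about scarcity.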

Read more

Open Data Center Alliance – Lap Dog Or Watch Dog?

Richard Fichera

In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as technology advisor to the group. The ODCA believes that it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.

Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.

First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully ODCA member requirements will correlate with those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.

Read more