Part 1: Standards And Proprietary Technology: A Time And Place For Both

Andre Kindness

I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion) (insert boo and hiss). Vendor A tries to evoke sympathy for his plight of not offering the latest features: he doesn’t have the same powers as the villain and has chosen to follow the morally correct path, filled with prolonged and undeserved suffering in support of standards-based functions. What poppycock! There is no such thing as good and evil in networking. If their positions were reversed, vendor A would be doing the same thing as its competitors. Every vendor has some type of special sauce to differentiate itself. Anyway, it’s business, plain and simple; networking fundamentally needs both proprietary and standards-based features. However, there’s a time and place for each.

With that in mind, I want to let you know that I’m a big proponent of standards-based networking. The use of open standards broadens the choices that help you reduce risk, implement durable solutions, obtain flexibility, and benefit from quality. More than 90% of networking should leverage standard protocols, but to get to that point, features need to go through three stages:

Read more

Little Servers For Big Applications At Intel Developer Forum

Richard Fichera

I attended Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and of course, Intel itself. Among the many wonders on display, including high-end servers, desktops and laptops, and presentations related to everything Cloud, my attention was caught by a pair of small wonders – very compact, low power servers paradoxically targeted at some of the largest hyper-scale web-facing workloads. Despite being nominally targeted at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies on how to address the problem of scaling infrastructure to address high-throughput web workloads. In this case, the two ends of the spectrum are adherence to an emerging standardized design and utilization of Intel’s reference architectures as a starting point versus a complete refactoring of the constituent parts of a server to maximize performance per watt and physical density.

Read more

Rating: Hold - Stormy Waters Continue To Toss The Minnow: HP Networking

Andre Kindness

The other day I was reminiscing with a friend who works at HP about all the good times I had there with my ProCurve family.  When I left for a once-in-a-lifetime opportunity, I had so much hope for HP’s networking division.  Like many of the global customers behind my inquiries who are looking for a Cisco alternative, I’m concerned about the division and its long-term viability.  I’m not worried about whether HP will continue to exist without Mark Hurd.  Companies are more than a single leader.  There is plenty of research, along with books and online debates, about the effect of a single person: Jack Welch, Steve Jobs, John Chambers, etc.  The issue at hand is the survival of product lines within enormous companies, like networking within HP.  One of my mentors always said, “If you look at networking over the last twenty years, no major IT company or voice vendor has been able to pull off being a serious networking vendor if networking wasn’t its first priority.”  Fundamentally, networking is one of the few technologies where a vendor has to be all in.  The networking graveyard is full of headstones: Nortel fell off the face of the earth, IBM sold off its assets, and Dell hobbles along.

Ah, you might say, what about HP?  That brings me to my three observations that every IT manager should consider when including HP in their network architecture:

Read more

Interesting Hardware Products At VMworld: NEC FT x86 Servers And Xsigo I/O Virtualization

Richard Fichera

Not a big surprise, but VMworld was overwhelmingly focused on software. I was concerned that great hardware might get lost in the software noise, so I looked for hardware standouts using two criteria:

  1. They had to have some relevance to virtualization. Most offerings could check this box – we were at VMworld, after all.
  2. They had to have some potential impact on Infrastructure & Operations (I&O) for a wide range of companies.
Read more

Oh No! Not Another VMworld Blog Post!

Richard Fichera

It was reported that sometime over the past weekend the number of tweets and blogs about VMworld exceeded Plankk’s limit (postulated by blogger Marvin Plankk, now confined to an obscure institution in an unidentified state with more moose than people), and quietly coalesced into an undifferentiated blob of digital entropy as a result of too many semantically identical postings online at the same time. So this leaves the field clear for me to write the first VMworld post in the new cycle.

This year was my first time at VMworld, and it left a profound impression – while the energy and activity among the 17,000 attendees, exhibitors and VMware itself would have been impressive in any context, the underlying evidence of a fundamental transformation of the IT landscape was even more so. The theme this year was “clouds,” but to some extent I think themes of major shows like this are largely irrelevant. The theme serves as an organizing principle for the communications and promotion of the show, but the technology content of the show, particularly as embodied by its exhibitors and attendees, is based on what is actually being done in the real world. If the technology was not already there, the show might have to find another label. Keeping the cart firmly behind the horse, this activity is being driven by real IT problems, real investments in solutions, and real technology being brought to market. So to me the revelation of the show was not in the fact that VMware called it “cloud,” but that the world is really thinking “cloud.”

Read more

New Report: Designing An Empowered Mobile Product Strategy

JP Gownder

Forrester’s new book, Empowered (which is free for U.S.-based Amazon Kindle owners from September 7 to 10!), helps companies thrive in the new era of disruptive technologies like social media and mobility. Authored by two of my amazing Forrester colleagues, Josh Bernoff and Ted Schadler, Empowered tells companies to give their most innovative employees – their highly engaged and resourceful operatives, or HEROes – the permission and tools to serve customers using these same emerging technologies.

But Empowered isn’t only about employees. It also lays out a strategy for engaging your most influential customers. Consumer product strategy professionals should wield Empowered concepts for exactly that reason – to energize your best customers. In the mobile space, product strategists are looking for ideas to help them develop innovative, leading-edge applications for smartphone users on platforms like the iPhone or Android. So we’ve just released a report to help product strategists do just that, called “Designing An Empowered Mobile Product Strategy.” It applies ideas from Empowered to product strategy and includes numerous case studies of mobile applications that exemplify Empowered approaches.

Read more

Are You Attending Interop New York 2010? If So, Forrester Will See You There

Doug Washburn

Despite its networking roots, today’s Interop events have evolved to address an expansive range of IT roles, responsibilities, and topics. While networking managers will still feel at home in the networking track, Interop addresses a variety of themes very relevant to the broader interests of IT Infrastructure & Operations (I&O) professionals, like cloud computing, virtualization, storage, wireless and mobility, and IT management.

IT professionals responsible for the “I” (or Infrastructure) in I&O will find the event particularly relevant. So much so that Forrester has partnered with Interop to develop track agendas, identify speakers, moderate panels, and even present. For the last two years, I have chaired the Data Center and Green IT tracks at Interop’s Las Vegas and New York events. And I am doing the same this year at Interop New York 2010, from October 18th to 22nd.

Read more

Wall Street Pilgrimage – Infrastructure & Operations At The Top

Richard Fichera

A Wall Street infrastructure & operations day

I recently had an opportunity to spend a day in three separate meetings with infrastructure & operations professionals from three of the top six financial services firms in the country, discussing topics ranging from long-term business and infrastructure strategy to specific likes and dislikes regarding their Tier-1 vendors and their challengers. The day’s meetings were neither classic consulting nor classic briefings, but rather free-form discussions, guided only loosely by an agenda and, despite possible Federal regulations to the contrary, completely devoid of PowerPoint presentations. As in the past, these in-depth meetings provided a wealth of food for thought, with interesting and sometimes contradictory indicators from the three groups. There was a lot of material to ponder, but I’ll try to summarize some of the high-level takeaways in this post.

Servers and Vendors

Between them, these companies own in the neighborhood of 180,000 servers and probably purchase 30,000 - 50,000 servers per year in various cycles of procurement. In short, these are heavyweight users. One thing that struck me in the course of the conversations was their Machiavellian view of their Tier-1 server vendors. Although they regard these vendors as key partners, the majority of this group devoted a substantial amount of time to keeping them at arm’s length through aggressive vendor management techniques, such as deliberately splitting procurements between competitors. They understand their suppliers' margins and cost structures well, and are committed to driving hardware supplier margins to “as close to zero as we can,” in the words of one participant.

Read more

When Consumers Want To Share Products

JP Gownder

Product strategists should check out this article in today’s New York Times about online borrowing.  Think of it as a Web-empowered peer-to-peer product rental program. The article describes how Web sites like SnapGoods allow private owners of products to rent them out for temporary periods of time to consumers who want to use – but do not (or cannot) own – those same products. It’s a product rental marketplace, smaller than but resembling a product sales marketplace (like eBay).

This peer-to-peer product rental approach to sharing complements another sharing technique that has been around for a while: timesharing. Vacationers who own 1/8 of a condominium in the Bahamas get to use it part of the time, as do their fellow timeshare partners. More recently, the Web enabled Zipcar to grow to over 275,000 users by 2009. Zipcar users make reservations to use vehicles in their neighborhoods on an hourly basis.

Read more

Oracle Cancels OpenSolaris – What’s The Big Deal?

Richard Fichera

There has been turmoil and angst of late in the open source community over Oracle’s decision to cancel OpenSolaris. Since this community can be expected to react violently anytime something is taken out of open source, the real question is whether this action has any impact on real-world IT and operations professionals. The short answer is no.

Enterprise Solaris users, be they small, medium, or large, are using it to run critical applications; and as far as we can tell, the uptake of OpenSolaris, as opposed to Solaris supplied and sold by Sun, was very low in commercial accounts, other than possibly a surge in test and dev environments. The decision to take Solaris into the open source arena was, in my opinion, fundamentally flawed, and Oracle’s subsequent decision to reverse it is eminently rational. Oracle’s customers almost certainly are not going to run their companies on an OS that is built and maintained by an open source community (even the vast majority of corporate Linux use is via a distribution supported by a major vendor under a paid subscription model), and Oracle cannot continue to develop Solaris unless it has absolute control over it, just as is the case with every other enterprise OS. In the same vein, unless Oracle can also expect to be compensated for its investments in future Solaris development, there is little motivation for it to continue to invest heavily in Solaris.