Suddenly, Dell Is A Software Company!

Glenn O'Donnell

The Dell brand is one of the most recognizable in technology. It was born a hardware company in 1984 and deservedly rocketed to fame, but it has always been about the hardware. In 2009, its big Perot Systems acquisition marked the first real departure from this hardware heritage. While it has made numerous software acquisitions, including some good ones like Scalent, Boomi, and KACE, it remains a marginal player in software. That is about to change.

Read more

Dell World – New Image. New Company?

Richard Fichera

I just spent several days at Dell World and came away with the impression of a company that is really trying to change its image. Old Dell was boxes, discounts, and a low-cost supply chain. New Dell is applications, solutions, cloud (now there’s a surprise!), and investments in software and integration. OK, good image, but what’s the reality? All in all, I think they are telling the truth about their intentions, and their investments continue to be aligned with those intentions.

As I wrote about a year ago, Dell seems intent on climbing up the enterprise food chain. Its investment in several major acquisitions, including Perot Systems for services and a string of advanced storage, network, and virtual infrastructure solution providers, has kept the momentum going, and the products have been following to market. At the same time, I see solid signs of continued investment in underlying hardware, and their status as the #1 x86 server vendor in North America and #2 worldwide remains an indication of their ongoing success in their traditional niches. While Dell is not a household name in vertical solutions, they have competent offerings in health care, education, and trading, and several of the initiatives I mentioned last year are definitely further along and more mature, including continued refinement of their VIS offerings and deep integration of their much-improved DRAC systems management software into mainstream management consoles from VMware and Microsoft.

Read more

DCIM And The New Reality Of Infrastructure & Operations

Richard Fichera

I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old: the evolution of semiconductor technology, the increasingly elegant attempts to design systems and components that can be incrementally throttled, and the increasingly sophisticated construction of the data centers themselves, with greater modularity and physical efficiency of power and cooling.

The new is the incredible momentum I see behind data center infrastructure management (DCIM) software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that tracks tens of thousands of components; collects, filters, and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen); and understands the relationships between components well enough to proactively raise alarms, model potential workload placement, and make recommendations about prospective changes.
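
To make that pattern concrete, below is a minimal sketch in Python of the telemetry pipeline described above: many sensors per component, simple filtering, and threshold alarms. The class names, thresholds, and readings are hypothetical illustrations, not the API or data model of any actual DCIM product.

```python
# A minimal sketch of the DCIM pattern described above: aggregate readings
# from many sensors per component, filter the raw telemetry, and raise
# proactive alarms. All names here are hypothetical illustrations.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Sensor:
    name: str          # e.g., "inlet_temp_C" or "fan_rpm"
    threshold: float   # alarm if the smoothed reading exceeds this
    readings: list[float] = field(default_factory=list)

    def smoothed(self, window: int = 5) -> float:
        """Filter raw telemetry with a simple moving average."""
        recent = self.readings[-window:]
        return mean(recent) if recent else 0.0


@dataclass
class Component:
    """One managed asset: a server, a CRAC unit, a PDU, etc."""
    name: str
    sensors: list[Sensor]

    def alarms(self) -> list[str]:
        return [
            f"{self.name}/{s.name}: {s.smoothed():.1f} exceeds {s.threshold}"
            for s in self.sensors
            if s.smoothed() > s.threshold
        ]


# A CRAC may carry 20+ sensors and a server over a dozen; one each suffices here.
crac = Component("CRAC-07", [Sensor("supply_air_C", threshold=18.0)])
server = Component("rack12-u07", [Sensor("inlet_temp_C", threshold=27.0)])

server.sensors[0].readings += [25.0, 26.5, 28.0, 29.5, 30.0]  # warming trend
crac.sensors[0].readings += [16.0, 16.2, 16.1, 16.3, 16.0]    # nominal

for component in (crac, server):
    for alarm in component.alarms():
        print("ALARM:", alarm)
```

A production DCIM suite layers topology awareness, workload placement modeling, and change recommendations on top of exactly this kind of aggregation.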

Of all the technologies reviewed in the document, DCIM offers some of the highest potential for improving overall efficiency without sacrificing the reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration, or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect to see major competitive action among the current suppliers, and there is a strong potential for additional consolidation.

A Rift At The High-End For Server Requirements?

Richard Fichera

We have been repeatedly reminded that the requirements of hyper-scale cloud properties are different from those of the mainstream enterprise, but I am now beginning to suspect that the top strata of the traditional enterprise may be leaning in the same direction. This suspicion has been triggered by the combination of a recent day in NY visiting I&O groups in a handful of very large companies and a number of unrelated client interactions.

The pattern that I see developing is one of “haves” versus “have nots” in terms of their ability to execute on their technology vision with internal resources. The “haves” are the traditional large, sophisticated corporations, with a high concentration in financial services. They have sophisticated IT groups, are capable of writing extremely complex systems management and operations software, and typically own and manage 10,000 servers or more. The “have nots” are the ones with more modest skills and abilities, who may own thousands of servers but tend to be less advanced than the core FSI companies in terms of their ability to integrate and optimize their infrastructure.

The divergence in requirements comes from what they expect and want from their primary system vendors. The “have nots” are companies that understand their limitations and are looking for help from their vendors in the form of converged infrastructures, new virtualization management tools, and deeper integration of management software to automate operational tasks. These are the companies that buy HP c-Class or Cisco UCS, for example, and then add vendor-supplied and ISV management and automation tools on top in an attempt to control complexity and costs. They are willing to accept deeper vendor lock-in in exchange for the benefits of the advanced capabilities.

Read more

Egenera Lands HP As A Partner – A Win For Both

Richard Fichera

Egenera, arguably THE pioneer in what the industry is now calling converged infrastructure, has had a hard life. Early to market in 2000 with a solution that was approximately a decade ahead of its time, it offered an elegant abstraction of physical servers into what chief architect Maxim Smith described as “fungible and anonymous” resources connected by software-defined virtual networks. Its interface was easy to use, allowing the definition of virtualized networks, NICs, and servers, with optional failover and pools of spare resources, with a fluidity that has taken the rest of the industry almost 10 years to catch up to.

Unfortunately, this elegant presentation was chained to a completely proprietary hardware architecture, which encumbered the economics of x86 servers with an obsolete network fabric and an expensive system controller and physical architecture (but it was the first vendor to include blue lights on its servers). The power of the PAN Manager software was enough to keep the company alive, but not enough to overcome the economics of the solution and put it on a fast revenue path, especially as emerging competitors began to offer partial equivalents at lower cost. The company is privately held and does not disclose revenues, but Forrester estimates it is still at less than $100M in annual revenue.
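
As a rough illustration of that abstraction, the sketch below models a pool of anonymous, interchangeable machines from which named logical servers draw their hardware, with failover as a simple swap against the spare pool. All names and behavior here are assumptions for illustration; this is not Egenera’s PAN Manager API.

```python
# A minimal sketch of the "fungible and anonymous" resource model described
# above: logical servers draw anonymous physical machines from a shared pool,
# and failover simply swaps in a spare. Illustrative only, not a vendor API.
from collections import deque


class ResourcePool:
    """Anonymous, interchangeable physical servers awaiting assignment."""

    def __init__(self, machine_ids: list[str]) -> None:
        self.spares = deque(machine_ids)

    def claim(self) -> str:
        if not self.spares:
            raise RuntimeError("pool exhausted")
        return self.spares.popleft()

    def release(self, machine_id: str) -> None:
        """A repaired machine rejoins the anonymous pool."""
        self.spares.append(machine_id)


class LogicalServer:
    """A named workload bound to whichever physical machine is available."""

    def __init__(self, name: str, pool: ResourcePool) -> None:
        self.name = name
        self.pool = pool
        self.machine = pool.claim()

    def fail_over(self) -> None:
        """Replace the failed machine with a spare; the identity stays logical."""
        failed, self.machine = self.machine, self.pool.claim()
        print(f"{self.name}: {failed} failed, now on {self.machine}")


pool = ResourcePool(["blade-01", "blade-02", "blade-03"])
web = LogicalServer("web-tier", pool)
web.fail_over()            # web-tier: blade-01 failed, now on blade-02
pool.release("blade-01")   # once repaired, blade-01 becomes a spare again
```

The point of the design is that a workload’s identity lives entirely in software, so any machine in the pool can back it; this is the fluidity the rest of the industry spent a decade catching up to.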

In approximately 2006, Egenera began converting its product to a pure software offering capable of running on commodity server hardware and standard Ethernet switches. In subsequent years it announced distribution arrangements with Fujitsu (an existing partner for its earlier products) and an OEM partnership with Dell, which apparently was not successful, since Dell subsequently purchased Scalent, an emerging software competitor. Despite this, Egenera claims that its software business is growing and has been a factor in the company’s first full year of profitability.

Read more