I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the company projected an air of indifference toward the category. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM's mainstream.
Increased investment – It looks like IBM significantly ramped up its investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other advances followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM’s iDataplex, while representing a small fraction of its total volume, gave the company immediate visibility as an innovator in the rapidly growing niche of hyperscale dense deployments. Along the way, IBM has apparently become the leader in GPU deployments as well, another low-volume but high-visibility niche.
Despite its networking roots, today’s Interop events have evolved to address an expansive range of IT roles, responsibilities and topics. While networking managers will still feel at home in the networking track, Interop addresses a variety of themes very relevant to the broader interests of IT Infrastructure & Operations (I&O) professionals, like cloud computing, virtualization, storage, wireless and mobility, and IT management.
IT professionals responsible for the “I” (or Infrastructure) in I&O will find the event particularly relevant. So much so that Forrester has partnered with Interop to develop track agendas, identify speakers, moderate panels, and even present. For the last two years, I have chaired the Data Center and Green IT tracks at Interop’s Las Vegas and New York events. And I am doing the same this year at Interop New York 2010 from October 18th to 22nd.
Historically, Dell has been positioned as something of an also-ran against its two major competitors for high-value enterprise business, particularly where that business involved complex services and the ability to deliver deeply integrated infrastructure and management stacks. Competitors looked at Dell as a price spoiler and as a channel for its partners’ standard storage and networking offerings, not as a potential threat to the high ground of delivering complex integrated infrastructure solutions.
This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it made me take another look at Dell’s recent pattern of investments and its series of announcements around delivering integrated infrastructure, with a message and solution offering that looks aimed squarely at HP and IBM.
I’ve been getting a number of inquiries recently regarding benchmarking potential savings from consolidating multiple physical servers onto a smaller number of servers using VMs, usually VMware. The variations in the complexity of the existing versus new infrastructures, operating environments, and applications under consideration make it impossible to come up with consistent rules of thumb, and in most cases, also make it very difficult to predict with any accuracy what the final outcome will be absent a very tedious modeling exercise.
However, the major variables that influence the puzzle remain relatively constant, giving us the ability to at least set out a framework to help analyze potential consolidation projects. This list usually includes:
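To make such a framework concrete, here is a back-of-the-envelope sketch in Python. Every function name, utilization figure, and cost in it is a hypothetical assumption chosen purely for illustration, not data from any actual consolidation project:

```python
# Hypothetical sketch of a server-consolidation savings estimate.
# All names, utilization figures, and prices below are illustrative assumptions.

def consolidation_ratio(avg_utilization, target_utilization, headroom=0.2):
    """Estimate how many existing servers one new host can absorb.

    avg_utilization: average CPU utilization of the servers being consolidated.
    target_utilization: utilization we are willing to drive the new host to.
    headroom: fraction of the new host reserved for failover and load spikes.
    """
    usable = target_utilization * (1 - headroom)
    return usable / avg_utilization

def annual_savings(old_servers, ratio, cost_per_old_server, cost_per_new_server):
    """Net annual operating savings from replacing old servers with fewer hosts."""
    new_servers = -(-old_servers // int(ratio))  # ceiling division
    return old_servers * cost_per_old_server - new_servers * cost_per_new_server

ratio = consolidation_ratio(avg_utilization=0.10, target_utilization=0.60)
print(round(ratio, 1))  # 4.8 old servers per new host under these assumptions
print(annual_savings(100, ratio, cost_per_old_server=2500, cost_per_new_server=6000))  # 100000
```

A real analysis would, of course, layer in many more variables – software licensing, power and cooling, migration labor, and the like – which is exactly why simple rules of thumb vary so widely from project to project.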
Yesterday I attended Computacenter’s analyst event. Computacenter is a major independent provider of IT infrastructure services in Europe, ranging from reselling hardware and software to managing data centers and providing outsourced desktop management. My main interest was in how it manages the potential conflict between properly advising the client and maximizing revenue from selling software. For instance, clients often ask me whether it is dangerous to employ their value-added reseller (VAR) to advise them on license management, in case the reseller tips off its vendors about a potential source of license revenue.
An excellent customer case study at the event provided another example. A UK water company engaged Computacenter to implement a new desktop strategy involving 90% fully virtualized thin clients. Such a project creates major licensing challenges on both the desktop and server sides, because the software companies haven’t adapted their licensing models to properly cope with this scenario. The VAR’s dilemma is whether to design the solution that will be cheapest for the customer or the one that will be most lucrative for itself.
As we said in our recent report “Refresher Course: Hiring VARs,” sourcing managers should decide whether they want their VARs to provide design and integration services like these or merely process orders at a minimum margin.
Computacenter will do either, but they clearly want to do more of the VA part and less (proportionately) of the R. So, according to their executives, they have no hesitation doing what is best for the customer even if it reduces their commission in the short term. But they didn’t think many of their competitors would take the same view.
The rise and rise of cloud has dominated the headlines for the past few years, but for CIOs it has become a serious priority only recently. People like cloud computing – or at least they like the concept of it. It is fast to implement, affordable, and scales easily with business requirements. On closer inspection, though, cloud poses many challenges for organizations. For CIOs, there is the considerable challenge of restructuring the IT department and IT services to cope with the new demands that cloud computing will place on the business. And these demands often come from the business itself: once business leaders realize that cloud computing lowers costs and hastens time to value, they start to see how many more business cases for new capabilities, products, or services they can get over the line.
Every spring I’m faced with the wonderful opportunity – and challenge – of choosing the best questions for Forrester's annual 20-minute Web survey of commercial buyers of IT infrastructure and hardware across North America and Europe.
As technology industry strategists, what themes or hypotheses in IT infrastructure do you think we should focus on? What emerging topics with the potential for large, long-term consequences, such as cloud computing, would you like to see survey data on? Please offer your suggestions in the comments below by May 21!
This year, I’m proposing the following focus areas for the survey:
New client system deployment strategies – virtual desktops, bring-your-own-PC, Win 7, smartphones, and tablets
Hypothesis: Early adopters are embracing virtual desktops and bring-your-own-PC, but the mainstream will proceed with standard Win 7 deployments
Like many movements before it, IT is rapidly evolving toward an industrial model. A process or profession becomes industrialized when it matures from an art form into a widespread, repeatable function that delivers predictable results and is accelerated by technology to achieve far higher levels of productivity. Results must be deterministic (trustworthy) and execution must be fast and nimble – two related but different qualities. Customer satisfaction need not be addressed directly, because reliability and speed lead to lower costs and higher satisfaction.
IT should learn from agriculture and manufacturing, which have perfected industrialization. Agricultural productivity is orders of magnitude better than it once was: genetic engineering made crops resistant to pests and to environmental extremes such as drought while simultaneously improving consistency. The industrialized evolution of farming means we can feed an expanding population with fewer farmers, and it has brought benefits to nearly every facet of agricultural production.
Manufacturing process improvements such as the assembly line and just-in-time production, combined with automation and statistical quality control, ensure that we can make products faster, more consistently, and at lower cost. Most of the products we use could not exist without an industrialized model.
Smoke and fire are all around you, the sound of the alarm makes you dizzy, and people are running in panic to escape the inferno while you have to find your way to safety. This is not a scene from the latest video game but actual training for field engineers, conducted in an exact virtual copy of a real-world environment such as an oil platform or a manufacturing plant.
In a recent discussion with VRcontext, a Brussels-based company that has specialized in asset virtualization for 10 years, I was fascinated by the possibility of creating virtual copies of large, extremely complex real-world assets simply from existing CAD plans or on-site laser scans. It’s not just the 3D virtualization but the integration of the virtual world with Enterprise Asset Management (EAM), ERP, LIMS, P&ID and other systems that allows users to track, identify, and locate every single piece of equipment in both the real and the virtual world.
These solutions are used today for safety training simulations as well as to increase operational efficiency, for example in asset maintenance processes. There are still areas for further improvement, such as the integration of RFID tags or sensor readings. However, as the technology matures, I can see future use cases all over the place – from any kind of location that is difficult or dangerous to enter, down to simple office buildings for a ‘company campus tour’ or a ‘virtual meeting’. And it doesn’t require supercomputing power: it all runs on low-spec, ‘standard’ PCs, and the models take up only a few gigabytes of storage.
So if you are bored with running around in Second Life or World of Warcraft and you ever have the chance, exchange your virtual sword for a wrench and visit the ‘real’ virtual world of a fascinating oil rig or refinery.
CA is a vendor that already enjoys a leading position in overall network management. Its 2005 acquisition of Concord, which brought along the assets of the previously acquired Aprisma, instantly moved CA from an also-ran to one of the clear leaders. Concord was good, and CA has an impressive track record of growing that business since the acquisition. Still, there were some weaknesses with regard to more advanced performance analysis.
On September 14, 2009, CA finally addressed these performance gaps by announcing its intent to acquire NetQoS for $200 million. Based in Austin, TX, NetQoS is one of those exciting small companies that proved there is a better approach to many of the challenges we face. It is one of the true innovators in performance management of both infrastructure and applications.