Earlier this week at its Discover customer event, HP announced a significant set of improvements to its already successful c-Class BladeSystem product line, which, despite continuing competitive pressure from IBM and the entry of Cisco into the market three years ago, still commands approximately 50% of the blade market. The significant components of this announcement fall into four major functional buckets – improved hardware, simplified and expanded storage features, new interconnects and I/O options, and serviceability enhancements. Among the highlights are:
Direct connection of HP 3PAR storage – One of the major drawbacks for block-mode storage with blades has always been the cost of the SAN to connect it to the blade enclosure. With the ability to connect an HP 3PAR storage array directly to the c-Class enclosure without any SAN components, HP has reduced both the cost and the complexity of storage for a wide class of applications that have storage requirements within the scope of a single storage array.
New blades – With this announcement, HP fills in the gaps in its blade portfolio, announcing a new Intel Xeon EN-based BL-420 for entry requirements, an upgrade to the BL-465 to support the latest 16-core AMD Interlagos CPU, and the BL-660, a new single-width, Xeon E5-based 4-socket blade. In addition, HP has expanded the capacity of the sidecar storage blade to 1.5 TB, enabling a chassis configuration with 8 servers and 12+ TB of storage.
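As a quick sanity check on that configuration claim, the arithmetic works out if you assume a 16-bay, c7000-style enclosure with each half-height server blade paired with a sidecar storage blade (the bay count and the one-to-one pairing are my assumptions for illustration, not details from HP's announcement):

```python
# Back-of-the-envelope check of the "8 servers and 12+ TB" claim.
# Assumes a 16-bay enclosure with server blades paired 1:1 with
# 1.5 TB sidecar storage blades -- an illustrative assumption.

HALF_HEIGHT_BAYS = 16   # assumed bay count of the enclosure
SIDECAR_TB = 1.5        # capacity of the new sidecar storage blade

servers = HALF_HEIGHT_BAYS // 2              # half the bays hold servers
storage_blades = HALF_HEIGHT_BAYS - servers  # the rest hold sidecars
total_storage_tb = storage_blades * SIDECAR_TB
```

Under those assumptions, 8 sidecars at 1.5 TB each account for the 12 TB figure exactly.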
Huawei hosted about 160 industry and financial analysts at its ninth annual analyst summit in Shenzhen, China in April 2012. The event showed us that Huawei’s carrier network activities are becoming increasingly software-focused: Huawei is building up its network software and professional services capabilities. This drive is reflected in its SoftCom solution, which brings the cloud computing delivery model to the network space. Huawei is well aware of the role software will play in future distributed and virtualized network infrastructure and network-centric solutions, where the data center effectively becomes the phone switch for ICT solutions. In fact, Huawei goes as far as to say that hardware will become fairly commoditized and that differentiation will be based on software.

Huawei is a member of more than 130 industry standards bodies and, as such, influences the development of industry standards. It maintains its own silicon chip design capabilities (HiSilicon), which help deliver opex reductions and greater energy efficiency in its networking solutions for wired and wireless (WiFi, WiMAX, and LTE) environments. Huawei has been designing and assembling servers for a decade and offers blade and rack configurations designed to support cloud and virtualization environments. Its security solutions – greatly enhanced by Huawei’s recent purchase of the remaining 49% stake in the Huawei Symantec joint venture – include firewalls, VPNs, intrusion detection, application gateways, and unified threat management. Huawei also works with other leading ICT vendors to deliver solutions according to customer requirements.

Huawei’s GalaX Cloud operating system delivers large-scale virtualization capability for compute and storage resources in a cloud deployment. Huawei assists carriers and enterprise customers with the design, implementation, and operation of deployments through its SmartCare Services solution, which monitors and ensures the
At its 2011 Analyst Event in Boston, Nokia Siemens Networks (NSN) outlined more details of its recently announced strategy review. In our view, the new focus NSN is taking is right. NSN is focusing on growth segments of the infrastructure market and will generate large savings from operating expenses and production overheads. In addition to its focus on providing the most efficient mobile broadband network infrastructure, NSN also highlighted the importance of customer experience management (CEM) as an integral part of its strategy.
NSN also provided more guidance on which market segments it no longer considers core. These include wireline, microwave, WiMAX, perfect voice, and business support systems. Some of these – microwave and WiMAX – it has already spun off. NSN estimates that the overall revenue impact of its non-core disposals will total 10% of its current revenue base. The impact on profit will be less than 10%, as these non-core disposals are low-margin operations.
NSN believes that telcos increasingly demand end-to-end solutions from their equipment vendor partners. No equipment vendors can credibly deliver such end-to-end solutions on their own. Hence, NSN is positioning itself as an ecosystems manager for end-to-end solutions. As part of its innovation drive, NSN increasingly focuses on devising concepts for solutions rather than simply focusing on product upgrades. For instance, Liquid Net is a concept for network infrastructure design that supports a more efficient usage of underutilized infrastructure capacity based on a range of NSN products. Similarly, NSN places great emphasis on its CEM solution, which helps telcos to transform their services offerings by enhancing network-related features that affect customer experience and satisfaction.
Nokia Siemens Networks’ top management has finally pulled the emergency brake after months of unsuccessful attempts to find a buyer. Going forward, NSN will focus on mobile network infrastructure and the services market. All other areas are non-core and subject to disposal. We estimate that about two-thirds of NSN’s current portfolio will remain in this new focus area. NSN will retain an attractive product and services portfolio and innovative solutions, such as its Liquid Net offering. However, some elements, like convergence offerings, will be difficult to pursue credibly in the future.
In our view, the new focus NSN is taking is right:
NSN is focusing on growth segments of the infrastructure market. NSN aims to provide the most efficient mobile networks (including network outsourcing and sharing), to extract maximum value from telcos’ operations by developing intelligent network solutions, and to boost customer experience management.
NSN will generate large savings from operating expenses and production overheads. NSN targets savings of €1 billion annually by the end of 2013. It aims to achieve this goal by focusing on organizational streamlining, real estate, information technology, product and service procurement costs, G&A, and supplier consolidation. Despite good revenue growth in recent quarters, NSN’s revenues per employee remain well below Ericsson’s 2010 level and even lag Huawei’s. NSN’s plans to reduce its global workforce by 17,000, or 23%, will go some way toward addressing this imbalance.
As a former investment analyst, I remember the feeling when stock market screens turn deep red. Such days turn one’s stomach upside down on a dealing floor. But even from the outside, such days are unnerving. The big question in the telecoms markets making the rounds at present is how the current market turmoil will affect the telcos. The 2008 financial crisis might provide some clues to what we could expect in 2011 and 2012, albeit in a less-pronounced fashion:
Consumer spending on communications will remain pretty stable. During the last financial crisis, consumer spending on communications remained largely untouched. We do expect a slight migration towards flat rates among customers who want greater cost certainty, and towards prepaid among customers who want to lower their communication expenditure. One obvious danger in times of turmoil is price wars between service providers. These can offer only short-term growth relief, but at a high cost: the resulting poor margins will be felt for a long time.
Businesses will put nonessential IT projects on hold or water them down. We have not yet seen evidence that COOs and IT departments have tapped the brakes on their tech buying, but they certainly have become more cautious. If the economies of the US or Europe go into recession — a possibility, but not our baseline forecast — that will hit IT budgets, as happened in 2008 and 2009. I am hearing from telecoms providers that their enterprise sales pipelines are already under pressure as customers slow their IT investments and look for ways to reduce their telecom services spending. Projects that support end-users with their sales efforts, e.g., sales force automation projects, are likely to be less affected than others.
Not to be left out of the announcement fever that has gripped vendors recently, Cisco today announced several updates to its UCS product line aimed at easing potential system bottlenecks by improving the whole I/O chain between the network and the servers, as well as improving management, including:
Improved Fabric Interconnect (FI) – The FI is the top of the UCS hardware hierarchy, a thinly disguised Nexus 5xxx series switch that connects the UCS hierarchy to the enterprise network and runs the UCS Manager (UCSM) software. Previously, the highest-end FI had 40 ports, each of which had to be specifically configured as Ethernet, FCoE, or FC. The new FI, the model 6248UP, has 48 ports, each of which can be flexibly assigned, at up to 10 Gb, to any of the supported protocols. In addition to modestly raising the bandwidth, the 6248UP brings increased flexibility and a claimed 40% reduction in latency.
New Fabric Extender (FEX) – The FEX connects the individual UCS chassis with the FI. With the new 2208 FEX, Cisco doubles the bandwidth between the chassis and the FI.
VIC1280 Virtual Interface Card (VIC) – At the bottom of the management hierarchy, the new VIC1280 quadruples the bandwidth to each individual server, to a total of 80 Gb. The 80 Gb can be presented as up to eight 10 Gb physical NICs or teamed into a pair of 40 Gb NICs, with up to 256 virtual devices (vNICs, vHBAs, etc.) presented to the software running on the servers.
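The per-server bandwidth arithmetic in the items above is simple enough to sketch. The function names below are illustrative only, not real Cisco UCS APIs:

```python
# Illustrative back-of-the-envelope check of the per-server I/O numbers
# described above; nothing here is a real Cisco API or configuration tool.

PORT_SPEED_GB = 10  # each physical NIC the VIC1280 presents runs at 10 Gb

def vic1280_aggregate_gb(nics: int = 8) -> int:
    """Aggregate per-server bandwidth with all presented NICs active."""
    return nics * PORT_SPEED_GB

def teamed_speed_gb(total_gb: int = 80, team_size: int = 2) -> int:
    """Per-NIC speed when the aggregate is teamed into fewer, fatter NICs."""
    return total_gb // team_size
```

Eight 10 Gb NICs give the 80 Gb aggregate, and teaming that aggregate into a pair yields the two 40 Gb NICs the announcement describes.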
One evening in 1972, I was hanging out in the computer science department at UC Berkeley with a couple of equally socially backward friends, waiting for our batch programs to run. To kill some time, we dropped in on a nearby physics lab that was analyzing photographs of particle tracks from one of the various accelerators that littered the Lawrence Radiation Laboratory. Analyzing these tracks was real scut work – the overworked grad student had to measure angles between tracks and lengths of tracks, and apply a number of calculations to determine if they were of interest. To our surprise, this lab had something we had never seen before – a computer-assisted screening device that scanned the photos and in a matter of seconds determined whether they had any formations that were of interest. It had a big light table, a fancy scanner, whirring arms and levers and gears, and off in the corner, the computer, “a PDP from Digital Equipment.” It was a 19” rack-mount box with an impressive array of lights and switches on the front. As a programmer of the immense 1 MFLOP CDC 6400 in the Rad Lab computer center, I was properly dismissive…
This was a snapshot of the dawn of the personal computer era, almost a decade before IBM introduced the PC and blew it wide open. The PDP (Programmed Data Processor) systems from MIT-trained engineer Ken Olsen’s Digital Equipment Corporation were the beginning of a fundamental change in the relationship between man and computer, putting a person in the computing loop instead of keeping them standing outside the temple.
There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.
Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy, leaving IBM’s plans heavily dependent on a vendor with whom it had an excellent but non-exclusive relationship – and one whose acquisition by another player could severely compromise those plans. Hence the acquisition. Now that it owns BNT, IBM can build on BNT’s excellent edge network technology to further develop its converged infrastructure strategy without hesitation.
I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the message was hard to read any other way. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other investments followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small footprint in terms of IBM’s total volume, gave the company immediate visibility as an innovator in the rapidly growing niche of hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.