There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation play out. But if we turn our lens toward the major server vendors (who are now storage and networking as well as software vendors), we see that relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and they reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle for storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:
In this playbook, we do not predict the future of technology; rather, we try to understand how, in the age of the customer, I&O must transform to support businesses by accelerating the speed of service delivery, enabling capacity when and where it is needed, and improving customer and employee experience.
All industries mature toward commoditization and abstraction of the underlying technology because knowledge and expertise are cumulative. Our industry will follow the same trajectory, resulting in technology that is ubiquitous and easier to implement, manage, and change.
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to new Xeon E5 V3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, makes assigned disks appear to each server as locally attached DASD, so no changes are needed in any software that expects to access local storage. A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
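To make the disk-mapping idea concrete, here is a toy sketch of the concept in Python. This is purely illustrative and is not Dell’s actual management interface; the class and method names are hypothetical. It models a chassis-level controller assigning drives from shared modules to server nodes, each of which then sees only its own "local" drives:

```python
class FXChassisSketch:
    """Toy model of flexible disk mapping (illustrative only; not
    Dell's real management API). Drives from shared disk modules
    are assigned to server nodes, which then see them as locally
    attached storage."""

    def __init__(self, modules=3, drives_per_module=16):
        # Pool of unassigned drives across all optional disk modules.
        self.free = [f"mod{m}/drive{d}"
                     for m in range(modules)
                     for d in range(drives_per_module)]
        self.node_disks = {}  # node name -> list of its "local" drives

    def assign(self, node, count):
        # The management controller maps shared drives to the node;
        # the node's OS sees them as directly attached disks, so no
        # software changes are needed.
        granted, self.free = self.free[:count], self.free[count:]
        self.node_disks.setdefault(node, []).extend(granted)
        return granted

    def local_view(self, node):
        # What the server node "sees" as its local storage.
        return self.node_disks.get(node, [])


chassis = FXChassisSketch()           # 3 modules x 16 drives = 48 drives
chassis.assign("node1", 4)            # small node: 4 drives
chassis.assign("node2", 8)            # storage-heavy node: 8 drives
```

The appeal of the scheme is exactly what the sketch shows: capacity is pooled at the enclosure level but consumed as if it were per-server DASD, so provisioning becomes a management operation rather than a hardware change.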
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.
Forrester’s Infrastructure and Operations research team has been on the leading edge of infrastructure technology and its proper operational aspects for years. We pushed the industry on both the supply side (vendors) and the demand side (enterprises) toward new models and we pushed hard. I’m proud to say we’ve been instrumental in changing the world of infrastructure and we’re about to change it again!
As the entire technology management profession evolves into the Age of the Customer, the whole notion of infrastructure is morphing in dramatic ways. The long-criticized silos are finally collapsing, cloud computing has quickly become mainstream, and you now face a dizzying variety of infrastructure options. Some are outside your traditional borders – like new outsourcing, hosting, and colocation services, as well as more cloud forms than we can count. Some remain inside and will for years to come. More of these options will come from the outside, though, and even those “legacy” technologies remaining inside will be created and managed differently.
Your future lies not in managing pockets of infrastructure, but in how you assemble the many options into the services your customers need. Our profession has been locally brilliant but globally stupid. We’re now helping you become globally brilliant. We call this service design, a much broader design philosophy rooted in systems thinking. The new approach packages technology into a finished “product” that is much more relevant and useful than any of the parts alone.
On Monday Microsoft officially announced the launch of two Azure Data Centers in Australia. This is big news for the many Australia-based organizations concerned about data sovereignty, as well as those who simply equate on-shore data residency with increased security and control.
Announced as part of TechEd 2014 in Sydney, Microsoft specifically called out Amazon Web Services (AWS) and Google as its key competition. In fact, Microsoft has gone to great lengths over the past year and more to consistently position these two companies as the only other viable long-term cloud providers. This positioning is based on three cloud provider capabilities that Microsoft has identified as critical: hyper-scale, enterprise-grade, and hybrid.
Overall, it’s a good angle for Microsoft. All three players operate at hyper-scale as public cloud providers. All three also offer enterprise-grade services (although this definition varies by workload). Most importantly for Microsoft, neither AWS nor Google has a primary focus on enabling hybrid cloud services.
In contrast, all the traditional large infrastructure vendors (Fujitsu, HP, IBM, VMware, etc.), system integrators (Dimension Data, NTT, etc.), and telcos (Telstra) focus squarely on enterprise-grade services and hybrid cloud enablement. Rackspace, IBM, and HP also have Australia-based data centers. But all of these providers lack hyper-scale.
Early next month, Forrester will publish a report on the dynamics of China’s private cloud market. This research demonstrates that Chinese I&O pros have started to leverage the benefits of private cloud — including highly standardized and automated virtual pooling and metered pay-per-use chargeback — to support the digital transformation of traditional business. By using private cloud, Chinese I&O pros not only support their business units’ digital transformation, but also provide the cost transparency that the CFO’s office demands. In practical business terms, Chinese organizations use private cloud to:
Improve business agility. Market competition to give Chinese consumers more choices is fierce. To compete, Chinese organizations must adapt their business operations to expand their product portfolios to win new customers and provide a better customer experience to serve and retain existing ones. Chinese I&O pros need to provide a cloud platform that also supports business units’ requirements to lower their capital and operating expenditures.
Avoid disruption by Internet companies. Chinese web-based companies have started to use high-quality services to disrupt traditional businesses. Chinese I&O pros need to provide more flexible computing to help application development teams shorten the development cycle and respond to customers more quickly, flexibly, and effectively.
Develop new business without adding redundancy. Chinese organizations want to scale up new business to offset declines in revenue. However, the existing IT infrastructure at these firms often cannot support new business models — and can even become a liability. Chinese I&O pros need to find a new way, such as private cloud, to support business development while reusing existing infrastructure.
I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum – the very small and the very large as they apply to x86 servers. The range and the variation in intended workloads are pretty spectacular as we diverge from the mean, which for the enterprise is a 2-socket Xeon server, usually in a 1U or 2U form factor.
At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.
Slightly bigger than a credit card, its BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload is in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster with up to 4,000 of them mounted in a standard rack, it’s likely that these will remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility — a set that nonetheless ranges from military systems to portable desktop modules.
The last few days have been eventful in the cloud gateway space and should give I&O organizations more incentive to start evaluating gateways. Yesterday, EMC announced its acquisition of cloud gateway startup TwinStrata, which will allow EMC customers to move on-premises data from EMC arrays to public cloud storage providers. Today, Panzura launched a free cloud gateway, and its partner Google is adding 2 TB of free cloud storage for a year to entice companies to kick the tires on a gateway. Innovation and investment in this area does not appear to be slowing down: CTERA locked in an additional $25 million in VC funding last week to accelerate sales and marketing efforts for its cloud gateway and file sync & share products.
Though the cloud gateway market has grown slowly so far, this technology category is about to become mainstream. Cloud gateways are disruptive because they can migrate data from on-premises storage to a public cloud storage service, creating a true hybrid cloud storage environment. Basically, a cloud gateway is a virtual or physical storage appliance that looks like a NAS or block storage device to on-premises users and applications but writes data back to a public cloud storage service using that cloud’s native APIs.
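The core pattern is simple enough to sketch. The Python below is a minimal, purely illustrative model of a write-through gateway: the names are hypothetical, and the `StubObjectStore` stands in for a provider’s native object API (a real gateway would call something like S3’s `put_object`/`get_object` at that layer). The application sees a simple local read/write interface; the gateway keeps hot data in an on-premises cache and writes everything through to the cloud tier:

```python
class StubObjectStore:
    """Stand-in for a public cloud object storage API (hypothetical;
    a real gateway would use the cloud's native SDK here)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]


class CloudGateway:
    """Looks like local storage to applications, but writes data
    through to a cloud object store while caching hot data locally."""

    def __init__(self, cloud):
        self.cloud = cloud
        self.cache = {}  # hot data stays on-premises for low latency

    def write(self, path, data):
        self.cache[path] = data        # local copy for fast access
        self.cloud.put(path, data)     # write-through to the cloud tier

    def read(self, path):
        if path in self.cache:         # cache hit: served on-premises
            return self.cache[path]
        data = self.cloud.get(path)    # cache miss: fetch from cloud
        self.cache[path] = data        # repopulate the local cache
        return data

    def evict(self, path):
        # Cold data ages out of the cache and lives only in the cloud.
        self.cache.pop(path, None)


gw = CloudGateway(StubObjectStore())
gw.write("backups/db.dump", b"nightly backup bytes")
gw.evict("backups/db.dump")            # simulate cold data aging out
restored = gw.read("backups/db.dump")  # transparently fetched from cloud
```

This write-through-plus-cache design is why gateways appeal to I&O teams: applications keep their local-storage semantics while cold data transparently lands in cheaper cloud capacity.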
A number of use cases have emerged for cloud gateways including:
Last week I presented an overview of cloud adoption trends in the banking sector in Asia to a panel of financial services regulators in Hong Kong. The presentation showcased a few cloud case studies including CBA, ING Direct, and NAB in Australia. I focused on the business value that these banks have realized through the adoption of cloud concepts, while remaining compliant with the local regulatory environments. These banks have also developed a strong competitive advantage: They know how to do cloud. Ultimately, I believe that cloud is a capability that banks will have to master in order to build an agility advantage. For instance, cloud is a key enabler of Yuebao, Alibaba’s new Internet finance business. 80 million users in less than 10 months? Only cloud architecture can enable that type of agility and scale (an idea that Hong Kong regulators clearly overlooked).
The business press has come alive over the past few weeks as companies as diverse as Delta, Facebook, and Tesla have publicly declared that they want to own software development for key applications. What should catch your attention about these announcements is the types of software these firms want to control. Delta is acquiring the software IP and data associated with an application that affects 180 of its customer and flight operations systems. Facebook is building proprietary software to simplify interactions between its sales teams and the advertisers posting ads on the social networking site. And Tesla has developed its own enterprise resource planning (ERP) and commerce platform that links the manufacturing history of a vehicle with important sales and customer support systems. Tesla's CIO Jay Vijayan, in describing his organization's system, sums up the sentiment behind many of these business decisions: "It helps the company move really fast."