Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to new Xeon E5 V3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each holding 16 drives. The mapping, controlled by the onboard management, makes each disk appear to its server as locally attached DASD, so no changes are needed in software that assumes it is accessing local storage. A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations. It is probably an ideal platform for the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.
Forrester’s Infrastructure and Operations research team has been on the leading edge of infrastructure technology and its operational practices for years. We pushed the industry, on both the supply side (vendors) and the demand side (enterprises), toward new models, and we pushed hard. I’m proud to say we’ve been instrumental in changing the world of infrastructure, and we’re about to change it again!
As the entire technology management profession evolves into the Age of the Customer, the whole notion of infrastructure is morphing in dramatic ways. The long-criticized silos are finally collapsing, cloud computing has quickly become mainstream, and you now face a dizzying variety of infrastructure options. Some are outside your traditional borders, such as new outsourcing, hosting, and colocation services, as well as too many cloud forms to count. Some remain inside and will for years to come. More of these options will come from the outside, though, and even the “legacy” technologies remaining inside will be created and managed differently.
Your future lies not in managing pockets of infrastructure, but in how you assemble the many options into the services your customers need. Our profession has been locally brilliant but globally stupid. We’re now helping you become globally brilliant. We call this service design, a much broader design philosophy rooted in systems thinking. The new approach packages technology into a finished “product” that is far more relevant and useful than any of the parts alone.
On Monday Microsoft officially announced the launch of two Azure Data Centers in Australia. This is big news for the many Australia-based organizations concerned about data sovereignty, as well as those who simply equate on-shore data residency with increased security and control.
Announced as part of TechEd 2014 in Sydney, Microsoft specifically called out Amazon Web Services (AWS) and Google as its key competition. In fact, Microsoft has gone to great lengths for more than a year to consistently position these two companies as the only other viable long-term cloud providers. This positioning rests on three cloud provider capabilities that Microsoft has identified as critical: hyper-scale, enterprise-grade, and hybrid.
Overall, it’s a good angle for Microsoft. All three players operate at hyper-scale as public cloud providers. All three also offer enterprise-grade services (although the definition of “enterprise-grade” varies by workload). Most importantly for Microsoft, neither AWS nor Google has a primary focus on enabling hybrid cloud services.
In contrast, all the traditional large infrastructure vendors (Fujitsu, HP, IBM, VMware, etc.), system integrators (Dimension Data, NTT, etc.), and telcos (Telstra) focus squarely on enterprise-grade services and hybrid cloud enablement. Rackspace, IBM, and HP also have Australia-based data centers. But all of these providers lack hyper-scale.
Early next month, Forrester will publish a report on the dynamics of China’s private cloud market. This research demonstrates that Chinese I&O pros have started to leverage the benefits of private cloud — including highly standardized and automated virtual pooling and metered pay-per-use chargeback — to support the digital transformation of traditional business. By using private cloud, Chinese I&O pros not only support their business units’ digital transformation, but also provide the cost transparency that the CFO’s office demands. In practical business terms, Chinese organizations use private cloud to:
Improve business agility. Fierce market competition is giving Chinese consumers more choices. To compete, Chinese organizations must shift their business operations to expand their product portfolios to win new customers and to provide a better customer experience that serves and retains existing ones. Chinese I&O pros need to provide a cloud platform that also supports business units’ requirement to lower capital and operating expenditures.
Avoid disruption by Internet companies. Chinese web-based companies have started to use high-quality services to disrupt traditional businesses. Chinese I&O pros need to provide more flexible computing resources to help application development teams shorten their development cycles and respond to customers more quickly, flexibly, and effectively.
Develop new business without adding redundancy. Chinese organizations want to scale up new businesses to offset declines in revenue. However, the existing IT infrastructure at these firms often cannot support new business models, and can even become a liability. Chinese I&O pros need to find a new approach, such as private cloud, to support business development while reusing existing infrastructure.
I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum: the very small and the very large as they apply to x86 servers. The range, and the variation in intended workloads, is pretty spectacular as we diverge from the mean, which for the enterprise means a 2-socket Xeon server, usually in a 1U or 2U form factor.
At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.
Slightly bigger than a credit card, their BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload lives in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster with up to 4,000 of them mounted in a standard rack, these will likely remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility, a range that runs from military systems to portable desktop modules.
Last week I presented an overview of cloud adoption trends in the banking sector in Asia to a panel of financial services regulators in Hong Kong. The presentation showcased a few cloud case studies including CBA, ING Direct, and NAB in Australia. I focused on the business value that these banks have realized through the adoption of cloud concepts, while remaining compliant with the local regulatory environments. These banks have also developed a strong competitive advantage: They know how to do cloud. Ultimately, I believe that cloud is a capability that banks will have to master in order to build an agility advantage. For instance, cloud is a key enabler of Yuebao, Alibaba’s new Internet finance business. 80 million users in less than 10 months? Only cloud architecture can enable that type of agility and scale (an idea that Hong Kong regulators clearly overlooked).
The business press has come alive over the past few weeks as companies as diverse as Delta, Facebook, and Tesla have publicly declared that they want to own software development for key applications. What should catch your attention about these announcements is the types of software these firms want to control. Delta is acquiring the software IP and data associated with an application that affects 180 of its customer and flight operations systems. Facebook is building proprietary software to simplify interactions between its sales teams and the advertisers posting ads on the social networking site. And Tesla has developed its own enterprise resource planning (ERP) and commerce platform that links the manufacturing history of a vehicle with important sales and customer support systems. Tesla's CIO Jay Vijayan, in describing his organization's system, sums up the sentiment behind many of these business decisions: "It helps the company move really fast."
Yesterday HP announced that it will enter into a “non-equity joint venture” (think: a big strategic contract of some kind, with a lot of details still in flight) to address large-scale web services providers. Under the agreement, Foxconn will design and manufacture the new servers, targeted at hyper-scale web service providers, and HP will be the primary sales channel. The new servers will be branded HP but will not be part of the current ProLiant line of enterprise servers, and HP will deliver additional services along with the hardware.
The motivation, underneath all the rhetoric, is simple. HP has been hard-pressed to make decent margins selling high-volume, low-cost, no-frills servers to web service providers and has been increasingly squeezed by low-cost competitors. Add to that the issue of customization, which these high-volume customers can easily get from smaller and more agile Asian ODMs, and you have a strategic problem. Having worked at HP for four years, I can testify that HP, a company maniacal about quality but encumbered with an effective yet rigid set of processes for bringing new products to market, has difficulty turning around a custom design rapidly. It also has a cost structure that makes it difficult to profitably compete for deals with margins that are probably in the mid-teens.
Enter Hon Hai Precision Industry Co., more commonly known as Foxconn. A longtime HP partner and widely acknowledged as one of the most efficient and agile manufacturing companies in the world, Foxconn brings complementary strengths to the table: agile design, tightly integrated with its manufacturing capabilities.
It was five years ago, in March 2009, that Cisco formally announced “Project California,” its (possibly intentionally) worst-kept secret, as the Cisco Unified Computing System. At the time, I was working at Hewlett Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at their presumption; others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that they would have trouble running a server business at margins we knew would be substantially lower than those of their core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled about Cisco’s decision to upset a nice stable arrangement in which IBM, HP, and Dell sold possibly a billion dollars’ worth of Cisco gear between them.
Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades and is closing in on number three worldwide in overall server sales. The numbers are impressive:
· 32,000 net new customers in five years, with 14,000 repeat customers
· Claimed $2 billion+ annual run rate
· Order growth rate claimed in “mid-30s” range, probably about three times the growth rate of any competing product line.
It’s been a long wait, about four years if memory serves, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.
So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its successor in the Intel “Tick-Tock” cycle of new process, then new architecture.
What was announced?
The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:
Up to 15 cores per socket
24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
Approximately 4X I/O bandwidth improvement
New RAS features, including low-level memory controller modes optimized for either high availability or performance (selectable as a BIOS option), enhanced error recovery, and soft-error reporting
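The 1.5 TB memory ceiling in the list above follows directly from the DIMM count; a quick back-of-the-envelope check (the 24-slot and 64 GB figures come from the announcement, the arithmetic is just illustrative):

```python
# Sanity-check the per-socket memory ceiling quoted in the spec list:
# 24 DIMM slots, each populated with a 64 GB DIMM.
dimm_slots = 24
dimm_size_gb = 64

total_gb = dimm_slots * dimm_size_gb  # total capacity in GB
total_tb = total_gb / 1024            # convert to (binary) TB

print(f"{total_gb} GB = {total_tb} TB")  # 1536 GB = 1.5 TB
```

The same math also explains why larger DIMMs immediately raise the ceiling: with hypothetical 128 GB modules, the identical slot count would double the figure to 3 TB.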