In 2014 I wrote about Microsoft and Dell’s joint Cloud Platform System offering, Microsoft’s initial foray into an “Azure-like” experience in the enterprise data center. While not a complete or totally transparent Azure experience, it was a definite stake in the ground around Microsoft’s intention to provide enterprise Azure with hybrid on-premises and public cloud (Azure) interoperability.
I got it wrong about other partners – as far as I know, Dell is the only hardware partner to offer Microsoft CPS – but it looks like my safe guess that CPS was a stepping stone toward a true on-premises Azure was correct, with Microsoft today announcing its technology preview of Azure Stack, the first iteration of a true enterprise Azure offering with hybrid on-premises and public cloud interoperability.
Azure Stack is in some ways a parallel offering to the existing Windows Server/System Center and Azure Pack offering, and I believe it represents Microsoft’s long-term vision for enterprise IT, although Microsoft will do nothing to compromise the millions of legacy environments that want to incrementally enhance their Windows environments. But for those looking to embrace a more complete cloud experience, Azure Stack is just what the doctor ordered – an Azure environment that can run in the enterprise, with seamless access to the immense Azure public cloud environment.
On the partner front, this time Microsoft will be introducing Azure Stack as pure software that can run on one or more standard x86 servers, with no special integration required, although I’m sure there will be many bundled offerings of Azure Stack and integration services from partners.
I was in Tokyo last week for the latest OpenStack Summit. Over 5,000 people joined me from around the world to discuss the open source cloud project's latest release – Liberty – to lay the groundwork for next year's Mitaka release, and to highlight stories of successful adoption.
Tokyo's Hamarikyu Gardens combine old with new (Source: Paul Miller)
And, unlike many events, this wasn't a hermetically sealed bubble of blandly anodyne mid-Atlantic content, served up to the same globe-trotting audience in characterless rooms that could so easily have been in London, Frankfurt, or Chicago. Instead, we heard from local implementers of OpenStack like Fujitsu, Yahoo! Japan, and - from just across the water - SK Telecom and Huawei.
In keynotes, case studies, and deep-dive technical sessions, attendees learned what worked, debated where to go next, and considered the project's complicated relationship to containers, software-defined networks, the giants of the public cloud, and more.
Cloud is becoming the new norm for enterprises. More and more companies across the globe are using a combination of two or more private, hosted, or public cloud services – applying different technology stacks to different business scenarios. Hybrid cloud management is now an important priority that enterprise architecture (EA) professionals should consider to support their organizations on the journey toward becoming a digital business.
I’ve recently published two reports focusing on how to manage the complexity of hybrid cloud. These reports analyze the key dimensions to consider for hybrid cloud management and present four steps to help move your firm further along the path to hybrid maturity. To unleash the power of digital business, analyze the strategic hybrid cloud management practices of visionary Chinese firms on their digital transformation journey. Some of the key takeaways:
Align hybrid cloud management capabilities with your level of maturity. Hybrid cloud maturity is a journey of digital transformation covering four steps: initial acceptance, strategic adoption, hybrid operationalization, and hybrid autonomy; maturity is measured by familiarity with, experience with, and knowledge of how to operate cloud. EA pros should build their management capabilities step by step, aiming to unify and automate cloud managed services by understanding technical dependencies and business priorities.
I recently attended VMware’s vForum 2014 event in Beijing. The vendor has established a local ecosystem for the three pillars of its business: the software-defined data center (SDDC), cloud services, and end user computing. VMware is working with:
Huawei to refine SDDC technologies. VMware is leveraging Huawei’s technology capabilities to improve its product features. VMware integrated Huawei’s Agile Controller with NSX and vCenter to operate and manage network automation and quickly migrate virtual machines online. Huawei provides the technology to unify the management of virtual and physical networks on top of VMware’s virtualization platform. This partnership can help VMware optimize its existing software features and improve the customer experience.
A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color to the analysis. The report is an excellent synthesis of our analysis, the work of a talented team of collaborators with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of the storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths – the cheaper booths on the edge of the floor, where smaller startups congregate – which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
Hybrid clouds are especially subject to the law of unintended consequences, says Forrester’s cloud expert James Staten. Many IT organizations don’t even acknowledge that they have a hybrid cloud. The reality: If enterprises are using public cloud software-as-a-service (SaaS) and/or deploying any custom applications in the public cloud, then by definition they have a hybrid cloud, because it almost always connects to the back end.
In this episode of TechnoPolitics, James implores CIOs and IT professionals to get serious about hybrid cloud now to avoid spaghetti clouds in the future.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending a seemingly interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a major restructuring of the OS and a step-function improvement in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.