I’ve been writing about platform-as-a-service (PaaS) since the beginning of 2009, and we published our first Forrester Wave™ on the PaaS market about 18 months ago. While the lines between IaaS, PaaS, and SaaS are blurring in the minds of some end users and developers, delivering PaaS requires a lot more intellectual property on the part of the cloud provider. IaaS is “just” the offering of an industrialized infrastructure service — but a full PaaS service basically turns the cloud provider into a real software vendor or VAR of a decent stack of software platform components.
The market has undergone amazing changes since 2009, and the landscape has been shaken up considerably since that last Forrester Wave. Why? A number of vendors have joined the crowd from three different directions:
IaaS cloud providers such as Amazon are moving up the stack to PaaS. From advanced database, messaging, and parallel processing to identity management and federation services, Amazon is arming itself with a myriad of value-added PaaS services to combat margin pressure in the commoditizing pure infrastructure space. Other IaaS providers are about to follow, most by OEMing PaaS stacks such as those from Cordys or LongJump, or another PaaS stack available for licensing by third-party infrastructure providers.
Out of all the inquiries I get from Forrester enterprise clients, the above question is by far the most common these days. However, the question shows that we have a lot to learn about true public cloud environments.
I know I sound like a broken record when I say this, but public clouds are not traditional hosting environments, and thus you can't just put any app that can be virtualized into the cloud and expect the same performance and resiliency. Apps in the cloud need to adapt to the cloud - not the other way around (at least not today). This means you shouldn't be thinking about what applications you can migrate to the cloud. That isn't the path to lower costs and greater flexibility. Instead, you should be thinking about how your company can best leverage cloud platforms to enable new capabilities. Then create those new capabilities as enhancements to your existing applications.
This advice should sound familiar if you have been in the IT business for more than a decade. Back in 1999 we did the same thing. As the Web was emerging, we didn't pick up our UNIX applications and move them to the web. We instead built new web capabilities and put them in front of the legacy systems (green screen scrapers, anyone?). The new web apps were built in a new way - using the LAMP stack, scaling out, and being geographically dispersed through hosting providers and content delivery networks. We learned new programming architectures, languages, and techniques for availability and performance. Cloud platforms require the same kind of thinking.
If you have dismissed Microsoft as a cloud platform player up to now, you might want to rethink that notion. With the latest release of Windows Azure here at Build, Microsoft’s premier developer shindig, this cloud service has become a serious contender for the top spot in cloud platforms. And all the old excuses that may have kept you away are quickly being eliminated.
In typical Microsoft fashion, the Redmond, Washington giant is attacking the cloud platform market with a competitive furor that can only be described as fast follower. In 2008, Microsoft quickly saw the disruptive change that Amazon Web Services (AWS) represented and accelerated its own lab project centered on delivering Windows as a cloud platform. Version 1.0 of Azure was decidedly different and immature and thus struggled to establish its place in the market. But with each iteration, Microsoft has expanded Azure’s applicability, appeal, and maturity. And the pace of change for Windows Azure has accelerated dramatically under the new leadership of Satya Nadella. He came over from the consumer Internet services side of Microsoft, where new features and capabilities are normally released every two weeks — not every two years, as had been the norm in the server and tools business prior to his arrival.
Well, if you're going to make a dramatic about-face from total dismissal of cloud computing, this is a relatively credible way to do it. Following up on its announcement of a serious cloud future at Oracle OpenWorld 2011, the company delivered new cloud services with some credibility at last week's show. It's a strategy with laser focus on selling to Oracle's own installed base and all guns aimed at Salesforce.com. While the promise from last year was a homegrown cloud strategy, most of this year's execution has been bought. The strategy is essentially to deliver enterprise-class applications and middleware any way you want them: on-premises, hosted and managed, or true cloud. A quick look at where they are and how they got here:
The long-rumored changing of the guard at VMware finally took place last week, and with it fell a stubborn strategic stance that had been a big client dissatisfier. Out went the ex-Microsoft visionary who dreamed of delivering a new "cloud OS" that would replace Windows Server as the corporate standard, and in came a pragmatic refocusing on infrastructure transformation that acknowledges the heterogeneous reality of today's data center.
Paul Maritz will move into a technology strategy role at EMC, where he can focus on how the greater EMC organization can raise its relevance with developers. Clearly, EMC needs developer influence and application-level expertise, applied from a stronger, full-portfolio perspective. Here his experience can be put to greater use -- and we expect Paul to shine in this role. However, I wouldn't look for him to re-emerge as CEO of a new spin-out of these assets. At heart, Paul is a natural technologist, and it's not clear all these assets would move out as one anyway.
B2B communication, with its original form of EDI messages, is the oldest and unfortunately the least flexible form of integration between systems and different enterprises. Many enterprises run B2B gateways on-premises or have managed service contracts for “their instance of their B2B Hub.”
Over the past months, I’ve received an increasing number of inquiries from Forrester clients asking about the future of this approach and where the market is heading. This is what I usually explain:
Your future cloud/legacy integration strategy should cover both your business partners and your SaaS applications. Cloud computing is disrupting the integration space! Why? Traditionally, you had two very distinct integration scenarios. Either it was about integration between multiple systems within your enterprise — middleware software, with product categories like EAI, ESB, CIS, and BPM, was the matching solution, as all systems were on premises. Or it was about integration with your business partners — the well-established B2B/EDI gateways and managed services were the matching solution over the Internet (or VANs). Cloud computing has already disrupted this split: Suddenly parts of your business units’ applications run in the cloud as packaged SaaS applications, and they need to be integrated with your on-premises legacy. Or you and your business partners even use the same SaaS application, and B2B traffic becomes as simple as moving data from one tenant to another on the same cloud platform. To face this growing variety of integration needs, a good cloud integration strategy should look holistically for synergies across the cloud/legacy integration scenarios — both those with your business partners and those among your own enterprise’s SaaS tenants.
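The three scenarios above can be sketched as a single routing decision. This is a minimal illustration with hypothetical system and platform names, not a reference to any real product's API: given where each endpoint lives, pick the classic middleware path, the cloud/legacy path, or the trivial same-platform tenant copy.

```python
# Hypothetical sketch: one dispatch layer covering all three
# integration scenarios described above.

def route_message(source, target, platforms):
    """Pick the integration path between two endpoints.

    `platforms` maps each endpoint name to the cloud platform
    hosting it, or None for an on-premises system.
    """
    src, tgt = platforms.get(source), platforms.get(target)
    if src is not None and src == tgt:
        # Both tenants live on the same SaaS platform: B2B traffic
        # is just a tenant-to-tenant copy inside that platform.
        return "same-platform tenant copy"
    if src is None and tgt is None:
        # Classic intra-enterprise case: EAI/ESB-style middleware.
        return "on-premises middleware"
    # Everything else crosses the cloud/legacy boundary.
    return "cloud/legacy integration service"

# Hypothetical landscape: two tenants on one SaaS platform,
# two on-premises legacy systems.
platforms = {
    "our-crm": "saas-platform-a",
    "partner-crm": "saas-platform-a",
    "our-erp": None,
    "billing": None,
}

print(route_message("our-crm", "partner-crm", platforms))  # same-platform tenant copy
print(route_message("our-erp", "billing", platforms))      # on-premises middleware
print(route_message("our-crm", "our-erp", platforms))      # cloud/legacy integration service
```

The point of the sketch is the strategy argument: once the routing logic sits in one place, synergies between the partner-facing and SaaS-facing scenarios fall out naturally instead of being handled by three separate product silos.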
In typical Microsoft fashion, the company rarely catches a new trend right with the first iteration, but it keeps at it, eventually strikes the right tone, and more often than not gets good enough. And often good enough wins. That seems to be the pattern playing out with Windows Azure, its cloud platform.
As developers, we often ask for more resources from the infrastructure & operations (I&O) teams than we really need so we don't have to go back later and ask for more — too painful and time-consuming. We also often don't know how many resources our code might need, so we might as well take as much as we can get. But do we ever give it back when we learn it is more than we need?
On the other hand, I&O often isn't any better. The first rule we learned about capacity planning is that underestimating resource needs is more expensive than overestimating them, and we always seem to consume the excess eventually.
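In an elastic cloud, "giving it back" can become a periodic calculation rather than a one-time padded guess. Here is a minimal sketch, with hypothetical numbers and the proportional rule most autoscalers use; the function name and thresholds are illustrative, not any vendor's API:

```python
# Hypothetical sketch: right-sizing a fleet from observed utilization
# instead of keeping the original over-provisioned request.

def desired_instances(current, cpu_utilization, target=0.60, minimum=2):
    """Scale the fleet so average CPU sits near the target level."""
    if cpu_utilization <= 0:
        return minimum
    # Proportional rule: needed = current * (actual load / target load).
    needed = round(current * cpu_utilization / target)
    return max(minimum, needed)

# We over-provisioned 20 instances; they idle at 15% CPU.
print(desired_instances(20, 0.15))  # 5 -> release 15 instances
# Under real load the same rule scales back up.
print(desired_instances(5, 0.90))   # 8
```

The asymmetry from classic capacity planning disappears: underestimating is no longer expensive when the correction loop runs every few minutes, so the rational move is to start small and let the rule grow the fleet.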
If you want to see the full spectrum of cloud choices coming to market today, you need only look at these two efforts as they start to evolve. They represent the extremes. And, ironically, both held analyst events this week.
OpenStack is clearly an effort by a vendor (Rackspace) to launch a community that advances technology and drives innovation around a framework multiple vendors can use to bring myriad cloud services to market and deliver differentiated value. Oracle, by contrast, which gave analysts a brief look inside its public cloud efforts this week, is taking a completely closed, self-built approach that aims to fulfill all cloud values from top to bottom.