OK, so NASA failed an audit. Don't we all? I think it's important to understand the government's cloud computing adoption timeline before passing judgment on NASA for failing to meet its cloud computing requirements. And, as someone who has read NASA's risk management program (and the 600 pages of supporting documentation), I can say that this wasn't a failure of risk management policy or procedure effectiveness. Clearly, this was a failure of third-party risk management's monitoring and review of cloud services.
The Cloud Is Nebulous
Back in 2009, NASA pioneered cloud computing with a shipping-container-based public cloud project named Nebula -- after the stellar cloud formation. (I love nerd humor, don't you?)
Photo Source: NASA
In 2009, NASA conducted a study to determine whether commercial cloud provider offerings had matured enough to support the Nebula environment. The study found that commercial cloud services had, in fact, become cheaper and more reliable than Nebula. As a result, NASA moved more than 140 applications to the public cloud.
In October 2010, Congress held committee hearings on cybersecurity and the risks associated with cloud adoption. But remember, NASA had already moved its noncritical data (like www.nasa.gov or the daily video feeds from the International Space Station, which are edited together and packaged as content for the NASA website) to the public cloud in 2009, before anyone had considered the rules for adopting such services.
Adobe Systems is a pioneer and fast mover in the public cloud, and in doing so it is showing infrastructure and operations professionals (I&O) that there is nothing to fear in this move. Instead, as they put it, the cloud gives their systems administrators (sysadmins) superpowers a la RoboCop.
This insight was provided by Fergus Hammond, a senior manager in Adobe Cloud Services, in an analyst webinar conducted by Amazon Web Services (AWS) last month. Hammond (no relation to Forrester VP and principal analyst Jeffrey Hammond) said that Adobe was live on AWS in October 2011, just eight months after its formal internal decision to use the public cloud platform for its Adobe Creative Cloud. Prior to this, there were pockets of AWS experience across various product teams, but no coordinated, formal effort as large or strategic as this one.
I get a lot of questions about the best way for developers to move to the cloud. That's a good thing, because trying to forklift your existing applications as-is isn't a recipe for success. Building elastic applications requires a focus on statelessness, atomicity, idempotence, and parallelism: qualities that are not often built into traditional "scale-up" applications.

But I also get questions that I think are a bit beside the point, like "Which is better: infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS)?" My answer: "It depends on what you're trying to accomplish, your teams' skills, and how you like to consume software from ISVs." That first question is often followed up by a second: "Who's the leader in the public cloud space?" It's like asking, "Who's the leading carmaker?" There's a volume answer and there's a performance answer. It's one answer if you like pickups, and it's a different answer if you want an EV. You have to look at your individual needs and match the capabilities of the car and its "ilities" to those needs.

That's how I think we're starting to see developer adoption of cloud services evolve: based around the capabilities of individual services, not the *aaS taxonomy that we pundits and vendors apply to what's out there. This approach to service-based adoption is reflected in data from our Forrsights Developer Survey, Q1 2013, so I've chosen to publish some of it today to illustrate the adoption differences we see from service to service.
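To make the "don't forklift" point concrete, here is a minimal Python sketch of what idempotence and statelessness look like in practice. All names here (`apply_charge`, the event shape) are hypothetical, invented for illustration; the idea is simply that a cloud worker may receive the same message twice after a retry, so handling it must be a no-op the second time, and the handler must not rely on shared mutable state between calls.

```python
def apply_charge(ledger, event):
    """Record a payment event idempotently.

    Replaying the same event (e.g., after a queue retry or a
    duplicate delivery) leaves the ledger unchanged, and the
    function never mutates its input, so concurrent workers
    can run it in parallel safely.
    """
    if event["id"] in ledger:
        # Duplicate delivery: already applied, do nothing.
        return ledger
    # Return a new ledger instead of mutating shared state.
    updated = dict(ledger)
    updated[event["id"]] = event["amount"]
    return updated
```

Processing the same event once or five times yields the same result, which is exactly the property a traditional scale-up application, with its in-place updates and session state, usually lacks.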