To Get Cloud Economics Right, Think Small, Very, Very Small

A startup, which wishes to remain anonymous, is delivering an innovative new business service from an IaaS cloud and, most of the time, pays next to nothing to do it. This isn't a story about pennies per virtual server per hour - sure, they take advantage of that - but about a nuance of cloud optimization any enterprise can follow: reverse capacity planning.

Most of us are familiar with the black art of capacity planning. You take an application, simulate load against it to approximate the traffic it will face in production, then provision the resources to accommodate that load. With web applications we tend to capacity plan against expected peak, which is very hard to estimate - even if historical data exists. You plan to peak because you don't want to be overloaded and cause clients to wait, error out, or go to your competition for the service.

And when you can't accommodate a peak, you don't know how far beyond your provisioned capacity the spike would have gone. If you built out the site to handle 10,000 simultaneous connections and you went over capacity for five minutes, was the spike the 10,001st connection or an additional 10,000 connections?

But of course we can never truly get capacity planning right, because we don't actually know what peak will be - and even if we did, we probably wouldn't have the budget (or data center capacity) to build out to that degree.

The downside of traditional capacity planning is that while we build out capacity for a big spike, that spike rarely comes. And when there isn't a spike, a good portion of those allocated resources sits unused. Try selling this model to your CFO. That's fun.

Cloud computing platforms - platform as a service (PaaS) and infrastructure as a service (IaaS) offerings to be specific - provide a way to change the capacity planning model. For scale-out applications you can start with a small application footprint and incrementally adjust the number of instances deployed as load patterns change. This elastic compute model is a core tenet of cloud computing platforms, and most developers and IT infrastructure and operations professionals get this value. But are you putting it into practice?

To get the most value from the cloud, you should turn capacity planning completely on its head, because the objective in the cloud is to give the application as small a base footprint as possible. You don't preallocate resources for peak - that's the cloud's job. You plan for the minimum hourly bill, because you know you can call up more resources when you need them, provided you have the appropriate monitoring regime and can provision additional IaaS resources fast enough.
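To make that concrete, here's a minimal sketch of reverse capacity planning as a monitoring loop, assuming a generic IaaS API. The `cloud` object and its methods (`list_instances`, `get_average_load`, `launch_instance`, `terminate_instance`) are hypothetical placeholders for whatever your provider actually exposes, and the thresholds are illustrative, not recommendations.

```python
import time

BASE_INSTANCES = 1            # plan for the minimum hourly bill, not for peak
SCALE_UP_THRESHOLD = 0.70     # add capacity before instances saturate
SCALE_DOWN_THRESHOLD = 0.30   # shrink back toward the base footprint

def reverse_capacity_loop(cloud, check_interval_s=60):
    """Keep only as many instances running as the current load requires."""
    while True:
        instances = cloud.list_instances()
        load = cloud.get_average_load(instances)   # e.g., CPU or requests/sec
        if load > SCALE_UP_THRESHOLD:
            cloud.launch_instance()                # let the cloud absorb the peak
        elif load < SCALE_DOWN_THRESHOLD and len(instances) > BASE_INSTANCES:
            cloud.terminate_instance(instances[-1])  # hunt for the trough
        time.sleep(check_interval_s)
```

The specific thresholds aren't the point; the point is that scaling back down to the base footprint matters as much to your bill as scaling up matters to your users.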

In the case study we published on GSA's use of Terremark's The Enterprise Cloud, this is the approach GSA took for USA.gov. It determined the minimum footprint the site required and placed that capacity into traditional Terremark hosting under a negotiated 12-month contract. It then tapped into The Enterprise Cloud for additional capacity as traffic rose above that baseline. The result: a 90% reduction in hosting costs.

You can do the same, but don't stop once you've determined your base capacity. Can you make it even smaller? The answer is most likely yes, because if there is a peak, then there is also a trough.

That's what the startup above has done. They tweaked their code to make the footprint even smaller, then determined that the only thing they really needed persistently was their service's home page - and that could be cached in a content delivery network. So when their service has no traffic at all, they have no instances running on their IaaS platform. Once the cache is hit, they fire up the minimal footprint (which they have tuned down to a single virtual machine) and scale up from there.
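Here's a rough sketch of that scale-from-zero pattern, assuming the CDN can call back to a small origin hook when a request gets past the cached home page. Every name here (`handle_origin_request`, `launch_instance`, `forward_to`) is a hypothetical placeholder, not a specific CDN or IaaS API.

```python
def handle_origin_request(request, cloud):
    """Invoked only when a request gets past the CDN's cached home page."""
    running = cloud.list_instances()
    if not running:
        vm = cloud.launch_instance(image="app-base-vm")  # the single tuned-down VM
        cloud.wait_until_ready(vm)                       # first visitor waits out the spin-up
        running = [vm]
    return forward_to(running[0], request)  # normal scale-up logic takes over from here
```

The obvious trade-off is that the first uncached request waits for the VM to boot; whether that's acceptable depends on how often the service truly sits idle.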

Now that's leveraging cloud economics. If you can find the trough, then you can game the system to your advantage. What are you doing to minimize your cloud footprint? Share your best tips here and we'll collect them for a future blog post and the forthcoming update to our IaaS Best Practices report.

Mike Gualtieri contributed to this report.

Comments

Another great post, James. I don't think I could have said it better myself.

One of the key benefits of utilizing IaaS clouds (both private and public) is the elimination of the need to do full-blown capacity planning. In fact, looked at from another angle, organizations no longer need to shy away from projects because of the risk associated with buying all of the hardware estimated to be required over the next 3-5 years. In the new cloud world, if a project fails to meet expectations, you are not left waiting around for another project with the exact same specifications. Instead, you simply release the resources back to the farm, where they can be split up and utilized however the next project needs them.

To take this even further, let's say an organization has a project that has stalled and seems lost in perpetual limbo. In the new world of computing, the resources used for that project no longer need to stay locked up and wasted. Simply power off the virtual machines associated with the project and let others use the resources until the project is ready again.

I know this thinking is very different from what organizations are used to when it comes to planning for computing resources. However, once organizations embrace this type of thinking and realize the efficiencies and flexibility afforded to them, they will wonder how they ever survived in the old world.

Once again, keep up the great posts.

Good point, but IT capabilities are not mature enough for that

Almost 99% of COTS software is not cloud auto-scaling enabled, so you cannot run it on Amazon AWS (just as an example) with auto-scaling up and down as demand requires. You need to program the auto-scaling yourself (e.g., auto-detect when you need a new instance, make the software instance-agnostic, etc.).
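As a rough illustration of what "instance-agnostic" means in practice, here is a sketch in which no important state lives on the local machine, so any copy of the app can serve any request and can be terminated at will. The environment variable names and the store objects are assumptions for the example, not a particular product.

```python
import os

# Shared services are discovered from the environment, never hard-coded or local.
DB_URL = os.environ["DB_URL"]                          # shared database, not a file on this VM
SESSION_STORE_URL = os.environ["SESSION_STORE_URL"]    # shared session store, not local memory

def save_session(session_store, session_id, data):
    # Session state survives this instance being scaled away.
    session_store.set(session_id, data)

def handle_upload(blob_store, name, data):
    # Uploads go to shared storage, not the instance's local disk.
    blob_store.put(name, data)
```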

I still do not know many programmers who have programmed auto-scaling for the cloud, which means it is going to cost a lot in time and budget for the first projects that use the cloud for that specific purpose (and CFOs and business users don't like that either...).

I'm sure that in a few years the IT landscape will look just like what you describe.

Interesting implication for internal cloud

We have found the same challenge - conservative over-allocation and reservation of resources - and we are investigating a 'start small and grow' approach for our internal private cloud. We should not need the complexity of small, medium, and large options with variations for memory, processing power, storage, and I/O. If you need a VM, you get one immediately - as long as it's small.

The cloud 'should' scale as required - up and down / in and out - if the elasticity theory holds.

Of course, this works best if the application is 'cloud aware'.

This assumption clearly helps shape our cloud design and development efforts as well as our application development practices.