Disaster-recovery-as-a-service (DRaaS), in my opinion, is one of the most exciting areas I look at. To me, using the cloud for disaster recovery (DR) makes perfect sense: the cloud is an on-demand resource that you pay for only when you need it (i.e., during a disaster or testing). Until now, few solutions have truly offered DRaaS: replicating physical or virtual servers to the cloud, plus the ability to fail over production to the cloud provider's environment (you can read more about my definition of DRaaS in my recent TechRadar report). But today, we've seen TWO new DRaaS platforms announced, from VMware and SunGard! Here's a quick roundup of what was announced today:
VMware. VMware announced at VMworld that they will be making their popular Site Recovery Manager (SRM), a DR automation tool, available as a service through hosting and cloud partners. At launch, participating partners are FusionStorm, Hosting.com, iland, and Veristor. Benefits: Built into the VMware platform. Limitations: VMware specific.
The growing realization for SaaS buyers is that if they overlook the details of their SaaS contracts, chances are they’ll pay for it later. Forrester analyzed the thousands of inquiries we receive every quarter to understand the hot button topics in the SaaS space for the first half of 2011. When it comes to on-demand services, we found that people paid more attention to the following three factors in the first half of 2011 than ever before:
Pricing and discounts. It came as no surprise that people are most concerned about money and are looking for guidance around SaaS pricing and discounts more than anything else. Many of our clients want to benchmark themselves against peers. For example, one client asked, “Is there some benchmark data to compare pricing on B2C web portal (PaaS or SaaS) solutions?” Forrester’s take? Unlike traditional software, most SaaS pricing is publicly available on vendor websites. However, pricing and pricing models are still in flux for many emerging areas of SaaS. Even in more established areas, like HR and CRM, discounts can run as high as 85% for large or strategic clients.
Lack Of Infrastructure Portability Is A Showstopper For Me
Salesforce.com bills Force.com as "The leading cloud platform for business apps." It is definitely not for me, though. The showstopper: infrastructure portability. If I develop an application using the Apex programming language, it can run only in the Force.com "cloud" infrastructure.
Don't Lock Me In
Q: What is worse than being locked in to a particular operating system?
A: Being locked in to hardware!
In The Era Of Cloud Computing, Infrastructure Portability (IP) Is A Key Requirement For Application Developers
Unless there is a compelling reason to justify hardware lock-in, make sure you choose a cloud development platform that offers infrastructure portability; otherwise, your app will be like a one-cable-television-company town.
Bottom line: Your intellectual property (IP) should have infrastructure portability (IP).
For the past few months, I’ve been heads-down talking to our clients about storage refreshes. There have been some technology refreshes, primarily driven by products coming up on end of life. For the most part, though, I’ve been consistently hearing about the pain I&O professionals have been suffering from the storage capacity overload of server virtualization. Many today are suffering even more, because not only do they have the storage growth problems of server virtualization, but these are now compounded by VDI AND the overall private cloud initiatives many organizations have in place. Not only has their storage grown by 50% in the last 12 months, but it’s projected to grow another 50% in the next 12 months. Before another million-dollar-plus investment is made, many are asking (as should you): Is throwing more hardware at the problem really going to solve it?
These three BIG initiatives have a significant impact on how storage architectures change. But the reality is that storage has been an afterthought for a long time, and today, there is much change that has to happen. Features such as thin provisioning, deduplication (for primary environments), and compression have all been available for some time now and must become part of common practice and procedures for managing storage that supports virtualization environments. And this is key: having tools and solutions in place that understand your virtualization environment is critical to the overall success of your private cloud initiative, because storage is one of the integrated foundational blocks of establishing a private cloud environment in your data center. Today, it’s difficult to manage your storage without understanding what’s happening in the network as well as in your server virtualization environment.
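To make it concrete why deduplication matters so much for primary storage in virtualized environments, here is a minimal, purely illustrative sketch of block-level deduplication (all names are my own; real arrays do this inline or post-process, at varying granularities). Each fixed-size block is fingerprinted; identical blocks are stored once and referenced by fingerprint, which is why dozens of near-identical VM images consume far less physical capacity than their logical size:

```python
# Illustrative block-level deduplication sketch (hypothetical, simplified).
import hashlib

BLOCK_SIZE = 4096  # bytes per block (an assumed granularity)

def dedupe(data: bytes):
    """Split a byte stream into blocks; store each unique block once."""
    store = {}   # fingerprint -> unique block contents
    refs = []    # ordered fingerprints needed to reconstruct the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # physical copy kept only once
        refs.append(fp)              # logical reference is always kept
    return store, refs

def restore(store, refs):
    """Rebuild the original stream from references into the block store."""
    return b"".join(store[fp] for fp in refs)

# Ten "VM images", four blocks each, sharing three identical base blocks:
base = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE
images = b"".join(base + bytes([n]) * BLOCK_SIZE for n in range(10))
store, refs = dedupe(images)
print(len(refs), len(store))  # 40 logical blocks, only 13 unique blocks
assert restore(store, refs) == images
```

The 40-to-13 ratio here is the toy-scale version of the capacity savings that make dedup a must-have for virtualization-heavy storage.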
It’s a couple of days after Google announced its intentions to jump headfirst into the hardware business. By now everyone — including my colleagues Charles Golvin and John McCarthy — has expressed their thoughts about what this means for Apple, Microsoft, RIM, and all of the Android-based smartphone manufacturers. This is not another one of those blog posts.
What I really want to highlight is something more profound, and more relevant to all of you out there who might classify your day job as “product strategy.” To you, the Google/Moto deal is just one signal — however faint — coming through the static noise of today’s M&As, IPOs, and new product launches. But if you tune in and listen carefully, two things become crystal clear:
The lines between entire industries are blurring. Google — and some of the other firms I mentioned above — are just high profile examples of companies that are diversifying their product portfolio, and the very industries in which they play. There are several instances of this over the past "digital decade." What's different now is the increased frequency of the occurrences.
An important prerequisite for a full cloud broker model is the technical capability of cloud bursting:
Cloud bursting is the dynamic relocation of workloads from private environments to cloud providers and vice versa. A workload can represent IT infrastructure or end-to-end business processes.
The initial meaning of cloud bursting was relatively simple. Consider this scenario: An enterprise with traditional, non-cloud infrastructure is running out of infrastructure and temporarily gets additional compute power from a cloud service provider. Many enterprises have now established private clouds, and cloud bursting fits even better here, with dynamic workload relocation between private clouds, public clouds, and the more private provider models in the middle; Forrester calls these virtual private clouds. The private cloud is literally bursting into the next cloud level at peak times.
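The burst-and-return dynamic described above can be sketched in a few lines of code. This is a hypothetical, much-simplified decision loop (all names and the 80% threshold are my own assumptions, not anything Forrester prescribes): when private utilization crosses a threshold, overflow workloads are relocated to a public provider, and they are repatriated once the private pool has headroom again.

```python
# Hypothetical cloud-bursting decision loop (illustrative sketch only).
THRESHOLD = 0.80  # assumed: burst when private utilization exceeds 80%

class Pool:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.workloads = {}  # workload id -> units of capacity consumed

    @property
    def used(self):
        return sum(self.workloads.values())

    @property
    def utilization(self):
        return self.used / self.capacity

def rebalance(private, public):
    """Burst workloads out at peak; pull them home when load drops."""
    # Burst: relocate the smallest workloads until back under threshold.
    while private.utilization > THRESHOLD and private.workloads:
        wid, units = min(private.workloads.items(), key=lambda kv: kv[1])
        del private.workloads[wid]
        public.workloads[wid] = units
    # Repatriate: bring workloads back while headroom allows.
    for wid, units in sorted(public.workloads.items(), key=lambda kv: kv[1]):
        if private.used + units <= THRESHOLD * private.capacity:
            del public.workloads[wid]
            private.workloads[wid] = units

private = Pool("private-cloud", capacity=100)
public = Pool("public-provider", capacity=10_000)
private.workloads = {"web": 40, "batch": 30, "reports": 25}  # 95% utilized
rebalance(private, public)
print(private.utilization, sorted(public.workloads))  # 0.7 ['reports']
```

In practice the hard parts are everything this sketch omits: live migration of state, network reachability, and, as the next paragraph argues, knowing which workloads are even allowed to leave the private environment.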
An essential step before leveraging cloud bursting is properly classifying workloads. This involves assigning each workload to the most public cloud level possible, given its technical restrictions and data privacy needs (including compliance concerns). A conservative enterprise could structure its workloads into three classes of cloud:
Production workloads of back-office data and processes, such as financial applications or customer-related transactions: These need to remain on-premises. An example is the trading system of an investment bank.