DELETE. It's a button we hit every single day. Normally, we're comforted by the knowledge that if we need to get back something we accidentally deleted, backup software can save the day. But what happens when you delete data within a SaaS application? In some cases it's as simple as pulling up the virtual trash can and retrieving it. Sometimes, however, it's not so simple. While the majority of enterprise-grade SaaS offerings have robust methodologies for backing up and restoring data to protect against data loss or disaster, they may or may not make this technology available to you as the user. In cases where data is deleted accidentally or maliciously, tied to the account of a departing employee, wiped out by rogue applications, or lost during a migration, the vendor may or may not work with you to retrieve data from its backups.
How well do you know your SaaS provider's SLAs for retrieving data? Chances are, this isn't something you've spent much time thinking about. In a recent report, we dug through the backup and restore policies of dozens of SaaS vendors and found the results extremely variable. Some vendors will help restore data, but only for a hefty fee; others will take no part in assisting you with restoring data; and the vast majority simply don't disclose their policies. Here are excerpts from several SaaS providers' restore policies that we found particularly interesting:
In a world where failure and downtime are no longer an option, resiliency is an increasingly critical priority. In fact, it was the no. 3 overall infrastructure priority moving into 2014. But how do we become more resilient organizations? Many companies feel that they do not have the required in-house expertise to run their entire DR programs--in fact, according to our latest Forrester/DRJ study, 42% of companies use some sort of outsourced DR service.
But how do you select a DR service provider partner? Forrester undertook this task with an update of our Traditional Disaster Recovery Service Provider Wave, and our first-ever Disaster-Recovery-As-A-Service Provider Wave evaluation. Vendors evaluated in this report represent today's top DR service providers -- Axcient; Barracuda Networks; CenturyLink Technology Solutions; CSC; EVault; HP; IBM; iland; nScaled; Persistent Systems; Phoenix; Quorum; Recovery Point Systems; SunGard; and Verizon Terremark. What we found was a tight race, and a DRaaS market in which:
iland and SunGard lead the pack. Two vendors stand apart in this evaluation: iland and SunGard both excel in their current offering and their strategies. Both offer very flexible solutions that cover the entire resiliency spectrum and allow users to pick from an array of standardized offerings to ensure that their needs are met.
Here's a question I've been getting a lot recently: "how far apart should I locate my primary data center from my farthest recovery data center?" Unfortunately, the answer is "it depends". There is no hard and fast rule for how far apart your sites should be, but here is my basic rule of thumb: the sites should be far enough apart that they are not subject to the majority of the same risks. Whether it's winter storms, power outages, or terror threats, you need to make sure that it's highly unlikely that a single event could take down both sites.
But seriously, just give me a number. Ok, ok, I have some numbers. In the chart below you can see how far apart companies were locating their recovery sites in 2007 and in 2010. What's interesting here is that between 2007 and 2010, survey respondents reported shorter distances between primary and secondary data centers. In 2007, 22% of respondents reported that the distance between their primary data center and farthest backup data center was greater than 1,000 miles, while in 2010, only 12% claimed this distance.
You want 2013 data? We are currently collecting that data now in our Forrester/Disaster Recovery Journal Survey which I highly encourage you to take here.
So, does that mean farther apart is better? Not necessarily! Consider the following:
Distance ≠ safety. Just because sites are far apart doesn't mean they can't experience the same risks. For example, a company with its primary site in South Florida and a recovery site in North Carolina would have significant distance between the sites, but both could still be impacted by a single hurricane.
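When comparing candidate site pairs, it can help to actually compute the great-circle distance rather than eyeball a map. Here's a minimal sketch using the standard haversine formula; the Miami and Raleigh coordinates are hypothetical examples I've chosen to mirror the South Florida/North Carolina scenario above, not data from the survey:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # Earth's mean radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: Miami, FL primary vs. Raleigh, NC recovery site
distance = haversine_miles(25.76, -80.19, 35.78, -78.64)
print(round(distance))  # roughly 700 miles -- far apart, yet both on hurricane tracks
```

The point of the example: a 700-mile separation sounds impressive, but distance alone says nothing about shared risk exposure, so treat it as one input among several.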
Last week there were several outages that got me thinking more about the cost of downtime. I get this question a lot: what is the industry average cost of downtime? I hate answering "it depends," but that's the truth. So much depends on the organization, the industry, the duration of the downtime, the number of people impacted, etc. And not all of it is about dollars and cents. Reputation, customer retention, employee satisfaction, and overall confidence can be shaken by even a short outage. Take, for example, the New York Times' mysterious outage on August 14, 2013, of around two hours. While two hours might not seem like much, in the middle of a news-heavy weekday, it made a lasting impression. The stock dropped, Twitter exploded, and the Wall Street Journal dropped their paywall to try and capture readers. In this case, I argue the biggest impact of downtime was not the drop in stock price, but the loss of confidence and loss of competitive advantage.
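To make the "it depends" concrete, here's a back-of-the-envelope sketch of the direct costs only. All the figures in the example are hypothetical placeholders, and note what the model deliberately leaves out: the reputational and competitive damage that, as argued above, can easily dominate:

```python
def downtime_cost(hours, revenue_per_hour, employees_idled,
                  loaded_hourly_wage, recovery_overhead=0.0):
    """Rough direct-cost estimate of an outage.

    Ignores reputation, churn, and confidence effects, which
    for many outages are the larger (if unquantifiable) losses.
    """
    lost_revenue = hours * revenue_per_hour
    lost_productivity = hours * employees_idled * loaded_hourly_wage
    return lost_revenue + lost_productivity + recovery_overhead

# Hypothetical two-hour outage at a firm doing $50,000/hour online
print(downtime_cost(hours=2, revenue_per_hour=50_000,
                    employees_idled=200, loaded_hourly_wage=60))
# prints 124000
```

Even this crude arithmetic shows why a single industry-average number is misleading: double the duration or the hourly revenue and the answer doubles with it.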
Have you heard the big news? Data is growing at an insane pace. Ok ok, this isn't really news, I hear this almost every day. But what many people don't realize is that one of the guiltiest culprits behind data growth is actually backup data. Between 2010 and 2012, the average enterprise server backup data store grew by 42%, while file storage (which is often the scapegoat of data growth) grew by 28%. And with more and more mobile workers, it's no surprise that PC backup storage is also growing at an explosive rate, almost 100% over the past two years.
Backup data growth being what it is, it's no surprise that a lot of people are re-evaluating their enterprise backup software. That's why I recently embarked on Forrester's first Wave on Enterprise Backup and Recovery Software. As part of that report, I developed a list of key criteria that are necessary to evaluate your backup and recovery software. At a high level, here is what I came up with:
Data reduction capabilities and scalability. What data reduction techniques does the product support, and how well do these techniques scale?
Backup targets. What targets and backup methods does the solution support?
Advanced backup options. What advanced backup options does the solution support?
Encryption. What are the native backup encryption and encryption key management capabilities? What encryption solutions does the product integrate with?
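To illustrate the first criterion above, data reduction, here's a minimal sketch of chunk-level deduplication, the technique most enterprise backup products use in some form. This is a toy fixed-size-chunk version for illustration only (real products typically use variable, content-defined chunking); all names are my own:

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Split a backup stream into fixed-size chunks; store each unique chunk once."""
    store = {}    # chunk hash -> chunk bytes (the "store once" pool)
    recipe = []   # ordered list of hashes needed to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# A stream with heavy repetition dedupes well
stream = b"A" * 4096 * 10 + b"B" * 4096
store, recipe = dedup_chunks(stream)
print(len(recipe), len(store))  # 11 chunks referenced, only 2 stored
```

How well this ratio holds up as the backup store grows into the billions of chunks is exactly the scalability question the criterion asks you to evaluate.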
In today's rapidly changing risk landscape, it's increasingly critical that infrastructure and operations professionals keep up to date with their most likely risks. Most companies only update their risk assessments annually, but many have not considered the risk of a robot uprising. For those firms that have not yet updated their continuity plans to include this very real risk, here are three tips that can get you started on the right path:
Store data in offline forms. Whether you choose tapes (outside a robotic tape library), punch cards, or optical media, you must keep current copies of data in a format that can't be sabotaged by the robots.
Keep continuity plans on paper. You'll want to have your plans for specifically dealing with the robot uprising in a format that is harder for the robots to read so they can't devise countermeasures to your plans.
Have emergency shutdown protocols for your data center. To prevent the robots from taking over your data center and using it for their own purposes, you'll have to have an emergency shutdown plan.
These tips should get you started on the right path. Please contact me for additional information and best practices that I can provide on paper.
Here's how Amazon ruined* my Christmas: after devouring a lovely rib roast with a porcini-spinach stuffing (recipe here in case your stomach is now growling), we all curled up on the couch with hot cocoa, turned on Netflix streaming to watch classic Christmas movies (and past Doctor Who Christmas Specials)... only to get an error message. That's right, in case you missed it, Netflix was down on Christmas Eve and Christmas Day in North America for many users due to issues with Amazon's Elastic Load Balancing (ELB) service in the US East region. It's interesting to note that this is at least the third time issues with the ELB service have caused problems for Netflix, with the company making improvements each time to prevent it from happening again.
The world may or may not be ending on December 21, 2012. I'm not an expert on the ancient Maya (although I've climbed many Mayan pyramids and have long been fascinated by their history, see proof below), but I've heard a rumor that this week marks the end of the Long Count calendar, meaning a new era begins on Friday, December 21, 2012, bringing a new civilization. Also, potentially a planet called Nibiru might crash into the earth (although NASA has confirmed they have seen no evidence of this).
So, what's your plan? Will it be a space ark? A time machine (i.e., a TARDIS)? Wormhole (a la Fringe)? Should you consider sending your data to Mars? How do you even prepare for the unknown, the black swan events that are highly improbable, but highly disruptive?
A little more than a week after Hurricane Sandy barreled through the Eastern seaboard, I wanted to take a moment and share some of my thoughts on business technology resiliency* and how we fared during this significant weather event. While there are still over a million people without electricity and significant recovery efforts underway, I'm overall impressed with the level of resiliency and preparedness many organizations exhibited during (and since) Sandy. I stress resiliency over recovery here because I believe that is the future of disaster recovery and business continuity. Our official definition is: “The ability for business technology to absorb