In my coverage of business continuity and disaster recovery, I talk to both IT infrastructure and operations professionals and IT security professionals, and I've found that the term "data protection" means something different to each. This comes as no surprise, and I think for a long time it didn't really matter because IT operations and security professionals operated in independent silos. But as those silos break down and "data protection" becomes a shared responsibility across the organization, it's important to be specific and to understand who is responsible for what.
But there are new options emerging from governance, risk, and compliance (GRC) vendors. For example, Archer Technologies has added a business continuity management (BCM) module to its GRC SmartSuite Framework. I recently saw a demo of the offering and found it intuitive and comprehensive; it's also closely aligned with the British Standard for Business Continuity Management, BS 25999. I also recently met with MetricStream, which has added a BCM module to its GRC platform as well. Provided that you've already purchased the core GRC platform from one of these vendors, buying the BCM module is significantly less expensive than buying or subscribing to a tier 1 stand-alone BCM offering. Tier 1 offerings start at US$100K, and average sales prices can reach the hundreds of thousands of dollars. The add-on modules to these GRC platforms will start at between $30K and $50K.
If you still subscribe to fixed-site recovery services using shared IT infrastructure from the likes of HP, IBM BCRS, and SunGard, you will quickly become a dinosaur in the next one to two years.
These types of shared infrastructure services involve lengthy restores from tape and a recovery time objective of 72 hours, at best. Plus, you'll be lucky if you recover at all because, chances are, you've had trouble scheduling a test with your service provider and it's been a LONG time since the last one, if indeed you've ever tested.
A 72-hour recovery just doesn't cut it anymore. And frankly, working out your provider's oversubscription ratio on shared infrastructure to gauge the risk of multiple simultaneous invocations, or attempting to negotiate exclusion zones and availability guarantees, is a time suck. Most companies are either taking DR back in-house or, if they still rely on a DR service provider, using dedicated infrastructure.
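To see why the oversubscription math matters, here is a minimal sketch of the contention risk under a simple binomial model. The subscriber count, seat count, and invocation probability are all hypothetical numbers for illustration, not figures from any real provider:

```python
from math import comb

def multiple_invocation_risk(subscribers: int, seats: int, p_invoke: float) -> float:
    """Probability that more subscribers declare a disaster at the same
    time than the shared site can seat, assuming each subscriber invokes
    independently with probability p_invoke (a simplification: regional
    events make invocations correlated, so real risk is higher)."""
    return sum(
        comb(subscribers, k) * p_invoke**k * (1 - p_invoke)**(subscribers - k)
        for k in range(seats + 1, subscribers + 1)
    )

# Hypothetical: 40 subscribers share capacity sized for 4, and each has
# a 2% chance of invoking during a given event window.
risk = multiple_invocation_risk(40, 4, 0.02)
```

Even toy numbers like these show why a provider's oversubscription ratio, and whether your region shares a site with many other subscribers, belongs in the contract discussion.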
TechCrunchIT reported today that a Rackspace data center went down for several hours during the evening due to a power grid failure. Because Rackspace is a managed service provider (MSP), the downtime affected several businesses hosted in the data center.
Bottom line for IT infrastructure and operations professionals? Your next purchase of a backup-to-disk appliance or backup software will have integrated deduplication functionality, given the slew of announcements from across the storage industry. It's no longer just pioneering vendors Data Domain and Diligent beating the deduplication drum — it's all the major storage players.
In addition, given NetApp's direction, you need to start thinking about how the rest of your storage environment would benefit from integrated deduplication functionality, such as your VMware environment (server and desktop) or end-user home directories.
NetApp plans to introduce integrated deduplication technology in its NearStore VTL sometime this year. In the meantime, the company is promoting the availability of deduplication on its production FAS storage systems and touting the huge benefits of deduplication in VMware environments.
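For readers new to the technique, a toy sketch helps explain why deduplication pays off so well for backup and VMware workloads: identical blocks (OS images, repeated full backups) are stored once and referenced many times. This is a fixed-size-chunk illustration only; shipping products typically use variable-size, content-defined chunking and far more robust indexing:

```python
import hashlib

def dedupe(stream: bytes, chunk_size: int = 4096):
    """Split a byte stream into fixed-size chunks and store each unique
    chunk once, keyed by its SHA-256 digest. Returns the chunk store and
    an ordered 'recipe' of digests for reassembling the original stream."""
    store = {}   # digest -> chunk bytes (each unique chunk kept once)
    recipe = []  # ordered digests describing the original stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def restore(store: dict, recipe: list) -> bytes:
    """Reassemble the original stream from the store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly redundant data (think repeated full backups of the same VM image):
data = b"A" * 4096 * 10 + b"B" * 4096
store, recipe = dedupe(data)
```

Here eleven chunks of input collapse to two unique chunks in the store, which is exactly the effect that makes deduplicated disk competitive with tape for retention.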
On May 12th, 2008, VMware announced that nine storage replication vendors have tested and certified their technology with VMware's long-awaited Site Recovery Manager (SRM) offering. SRM is an important step forward in disaster recovery (DR) preparedness because it automates the process of restarting virtual machines (VMs) at an alternate data center. Of course, your data and your VM configuration files must be present at the alternate site, hence the necessary integration with replication vendors.

SRM not only automates the restart of VMs at an alternate data center, it can automate other aspects of DR. For example, it can shut down some VMs before it recovers others. You can also integrate scripts for other tasks and insert checkpoints where a manual procedure is required. This is useful if you are using the redundant infrastructure at the alternate data center for other workloads such as application development and testing (a very common scenario). When you recover an application to an alternate site, especially if your redundant infrastructure supports other workloads, you have to think about how you will repurpose capacity between secondary and production workloads. You also have to think about the entire ecosystem, such as network and storage settings, not simply about recovering a VM.
Essentially, VMware wants you to replace your manual DR runbook with the automated recovery plans in SRM. It might not replace the runbook completely, but it can automate enough of it that DR service providers such as SunGard are productizing new service offerings based on SRM.
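To make the runbook-replacement idea concrete, here is a hypothetical sketch of the kind of ordered plan SRM encodes: shut down secondary workloads, pause for manual checkpoints, run scripted hooks, then power on production VMs in sequence. All class names, method names, and VM names below are invented for illustration; this is not VMware's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class RecoveryPlan:
    """Toy stand-in for an SRM-style recovery plan: an ordered list of
    steps executed in sequence, with hooks for scripts and manual gates."""
    steps: List[Tuple] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def shutdown_vm(self, name: str): self.steps.append(("shutdown", name))
    def power_on_vm(self, name: str): self.steps.append(("power_on", name))
    def run_script(self, fn: Callable[[], None]): self.steps.append(("script", fn))
    def checkpoint(self, msg: str): self.steps.append(("checkpoint", msg))

    def execute(self, confirm: Callable[[str], object] = input):
        for kind, arg in self.steps:
            if kind == "script":
                arg()                      # custom task, e.g. repoint DNS
                self.log.append("script ran")
            elif kind == "checkpoint":
                confirm(f"MANUAL STEP: {arg}")  # operator acknowledges
                self.log.append(f"checkpoint: {arg}")
            else:
                self.log.append(f"{kind}: {arg}")  # a real tool calls vCenter here

plan = RecoveryPlan()
plan.shutdown_vm("test-dev-01")      # free capacity used by dev/test
plan.run_script(lambda: None)        # e.g., repoint DNS at the recovery site
plan.checkpoint("verify replicated LUNs are writable")
plan.power_on_vm("prod-db-01")       # database first
plan.power_on_vm("prod-app-01")      # then the app tier
plan.execute(confirm=lambda msg: None)  # auto-confirm for this demo
```

The value over a paper runbook is the same either way: the ordering, dependencies, and manual gates are captured in a form you can actually test.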
On April 18th, IBM announced its intent to acquire virtual tape library (VTL) and deduplication vendor Diligent Technologies. For IBM, Diligent is a good fit: the company offers both mainframe and open systems virtual tape libraries, and it is a pioneer of deduplication. However, IBM already offers a market-leading mainframe VTL based on its own intellectual property and an open systems VTL based on FalconStor technology (although the open systems VTL has very limited adoption), so there is also a lot of overlap. Because Diligent is a software solution, IBM can integrate it with any of its storage systems and bring new VTLs to market relatively quickly. It's very likely that IBM will pursue this route so it can bring an inline deduplicating VTL to market as quickly as possible.
On April 10, 2008, IBM announced its intent to acquire FilesX, a small startup that offers server-based replication and continuous data protection (CDP) technology. The acquired technology will become part of the Tivoli Storage Manager (TSM) family of products.
This acquisition will help IBM Tivoli fill a gap in its current portfolio of data protection offerings. The vendor currently offers Tivoli Storage Manager (TSM), one of the leading enterprise-class backup software applications, and Tivoli Continuous Data Protection for Files, a product mostly used to protect PCs. In addition to traditional backup to tape or disk, TSM can also manage Microsoft Volume Shadow Copy Service (VSS) snapshots and IBM's own storage-based snapshot technology in support of instant restore or snapshot-assisted backup. But the company didn't really have an offering for customers who wanted something better than backup but not as expensive as storage-based replication; this is where FilesX comes in. With FilesX, IBM can now address the recovery requirements of small enterprises that can't afford storage-based replication. It can also meet the needs of large enterprises that want a more affordable replication offering to protect more servers within the company, as well as servers at remote offices.
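For readers unfamiliar with what sets CDP apart from scheduled backup, here is a toy sketch of the core idea: journal every write with a timestamp so the system can reconstruct the protected data as of any past moment, not just the last backup window. Real CDP products intercept writes at the driver or filesystem layer; this illustrative version simply journals whole-file versions:

```python
import time
from typing import Dict, List, Optional, Tuple

class CDPJournal:
    """Minimal continuous-data-protection sketch: every write is
    journaled with a timestamp, enabling any-point-in-time recovery."""

    def __init__(self):
        self.entries: List[Tuple[float, str, bytes]] = []  # (ts, path, content)

    def record_write(self, path: str, content: bytes,
                     ts: Optional[float] = None) -> None:
        """Journal one write; ts defaults to the current time."""
        self.entries.append((ts if ts is not None else time.time(), path, content))

    def restore_as_of(self, ts: float) -> Dict[str, bytes]:
        """Replay the journal up to ts to rebuild the file set as it
        looked at that moment (later writes for a path win)."""
        state: Dict[str, bytes] = {}
        for entry_ts, path, content in self.entries:
            if entry_ts <= ts:
                state[path] = content
        return state

journal = CDPJournal()
journal.record_write("/etc/app.conf", b"v1", ts=100.0)
journal.record_write("/etc/app.conf", b"v2", ts=200.0)
point_in_time = journal.restore_as_of(150.0)  # state between the two writes
```

This is exactly the "better than backup, cheaper than storage-based replication" middle ground the FilesX deal targets: fine-grained recovery points without array-level replication hardware.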