Today HP announced a new set of technology programs and future products designed to move x86 server technology, for both Windows and Linux, more fully into the realm of truly mission-critical computing. I read these moves as both defensive and offensive: they protect HP as its Itanium/HP-UX portfolio slowly declines, and they offer attractive, potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP hardware, and it removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and high-availability (HA) facility on HP-UX, with many features for local and geographically distributed HA, and its absence on Linux is frequently cited as a risk in HP-UX migrations. Delivering ServiceGuard for Linux by mid-2012 will remove yet another barrier to a smooth migration from HP-UX to Linux, and will help HP retain the business as it migrates.
Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, though with an uncommitted delivery date. My guess is that because the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…
Over the past several months, I've been receiving a lot of questions about replication for continuity and recovery. One thing I've noticed, however, is that there is a lot of confusion around replication and its uses. To combat this, my colleague Stephanie Balaouras and I recently put out a research report called "The Past, Present, And Future Of Replication," where we outlined the different types of replication and their use cases. In addition, I thought it would be good to clear up some common misconceptions about replication:
Myth: Replication is the same as high availability.
Reality: Replication can help enable high availability and disaster recovery, but it is not a solution in and of itself. In the case of an outage, simply having another copy of the data at an alternate site isn't going to help if you don't have a failover strategy or solution. (Some host-based replication products do come with integrated failover and failback capabilities; see the sketch below.)
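To make the myth concrete, here's a minimal sketch of the failover logic that has to exist on top of replication before you have anything resembling high availability. The hostnames, port, and promote_replica step are all hypothetical placeholders, not any vendor's product; replication copies the data, but something like this still has to detect the failure and redirect the service.

```python
import socket
import time

PRIMARY = ("primary.example.com", 5432)   # hypothetical primary site service
REPLICA = ("replica.example.com", 5432)   # replication target at alternate site
CHECK_INTERVAL = 5                        # seconds between health checks
FAILURES_BEFORE_FAILOVER = 3              # avoid failing over on a single blip

def is_reachable(addr, timeout=2.0):
    """Crude health check: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def promote_replica():
    """Placeholder for the real work: promote the replica to primary,
    repoint DNS or a virtual IP, and notify operators. Replication
    alone does none of this."""
    print("Promoting replica and redirecting clients...")

def monitor():
    failures = 0
    while True:
        if is_reachable(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER and is_reachable(REPLICA):
                promote_replica()
                break
        time.sleep(CHECK_INTERVAL)
```

The point of the consecutive-failure counter is exactly the "strategy" part of the myth: deciding when a blip becomes an outage, and what to do next, is a design decision that no replication product makes for you by default.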
Myth: Replication is too expensive.
Reality: It's true that array-based replication has traditionally been expensive, because it requires like-to-like storage and additional licensing fees. However, two factors have mitigated this expense: 1) several storage vendors no longer charge an extra licensing fee for replication; and 2) there are several alternatives to array-based replication that let you use heterogeneous storage and come at a significantly lower acquisition cost. Replication products fall into one of four categories, ranging roughly from most to least expensive.
Storage-as-a-service is relatively new. Today its main value proposition is as a cloud target for on-premise deployments of backup and archiving software. If you need to retain data for extended periods (a year or more in most cases), tape is still the more cost-effective option given its low capital acquisition cost and removability. But if you have long-term data retention needs and you want to eliminate tape, that's where a cloud storage target comes in: electronically vault that data to a storage-as-a-service provider who can store it at cents per GB. You just can't beat the economies of scale these providers are able to achieve.
If you're a small business without the staff to implement and manage a backup solution, or if you're an enterprise looking for a PC backup or remote-office backup solution, I think it's worthwhile to compare the three-year total cost of ownership (TCO) of an on-premise solution versus backup-as-a-service.
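As a back-of-the-envelope illustration of that comparison, here's a small sketch. Every number in it (capex, admin time, media costs, the per-GB-per-month price) is an illustrative placeholder; substitute quotes from your own vendors before drawing any conclusion.

```python
# Hypothetical three-year TCO comparison: on-premise backup vs. backup-as-a-service.
# All prices are illustrative placeholders, not vendor quotes.

YEARS = 3
protected_gb = 500                 # data under protection today
growth_rate = 0.40                 # 40% annual data growth

# On-premise: up-front capital plus ongoing admin and media costs.
onprem_capex = 15_000              # server, disk, tape library, software licenses
onprem_admin_per_year = 8_000      # fraction of an administrator's time
onprem_media_per_year = 2_000      # tapes and offsite rotation

# Backup-as-a-service: pay per protected GB per month.
saas_price_per_gb_month = 0.50

onprem_tco = onprem_capex + YEARS * (onprem_admin_per_year + onprem_media_per_year)

saas_tco = 0.0
gb = protected_gb
for _ in range(YEARS):
    saas_tco += gb * saas_price_per_gb_month * 12
    gb *= 1 + growth_rate          # capacity (and the bill) grows each year

print(f"On-premise 3-year TCO: ${onprem_tco:,.0f}")
print(f"SaaS 3-year TCO:       ${saas_tco:,.0f}")
```

The structural point survives whatever numbers you plug in: the on-premise cost is dominated by fixed capital and staff, while the service cost scales with data growth, so the crossover depends heavily on how much data you protect and how fast it grows.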
Over the past two months, I've seen an increase in the number of end user inquiries regarding high availability (HA) and, almost more importantly, how to measure it. HA means something different depending on whom you're talking with, so it's worth a quick definition. I define HA as:
Focused on the technology and processes to prevent application/service outages at the primary site or in a specific IT system domain.
This is in contrast to disaster recovery or IT service continuity (ITSC), which is about preventing or responding to outages of the entire site.
Why so many inquiries about HA recently? I believe that, given our increasing reliance on IT and the 24x7 operating environment, companies of all sizes and industries are becoming more and more sensitive to application and system downtime. The interest in measurement is driven by the need to continuously improve IT services and to justify IT investments to senior management, especially now.
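For those asking how to measure HA, the standard arithmetic is a useful starting point (this is generic availability math, not a specific measurement methodology): availability is uptime as a fraction of total time, often computed from mean time between failures (MTBF) and mean time to repair (MTTR), and each availability target implies a yearly downtime budget.

```python
# Standard availability arithmetic: availability = MTBF / (MTBF + MTTR),
# and each "nine" of availability caps the allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the service is up, from mean time between
    failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_budget_minutes(target):
    """Allowed downtime per year for an availability target, e.g. 0.999."""
    return (1 - target) * MINUTES_PER_YEAR

# Example: a system that fails every 2,000 hours and takes 4 hours to repair.
print(f"Measured availability: {availability(2000, 4):.4%}")

for target in (0.99, 0.999, 0.9999):
    print(f"{target:.2%} target -> {downtime_budget_minutes(target):,.0f} min/year budget")
```

Running this makes the sensitivity obvious: three nines leaves roughly 526 minutes of downtime a year, four nines only about 53, which is why the measurement conversation so quickly becomes an investment-justification conversation.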
The US Centers for Disease Control and Prevention (CDC) has confirmed 64 cases of swine flu in the United States, and as other countries including Canada (6), New Zealand (3), the United Kingdom (2), Israel (2), Spain (2), and now Germany have confirmed cases, the World Health Organization has raised the worldwide pandemic threat level to Phase 4. This means health officials have confirmed that the disease can spread person-to-person and has the potential to cause "community-level" outbreaks. The CDC recommends avoiding travel to Mexico and, if you get sick, staying home from work. Large numbers of employees out sick will impact the business (revenue) and cost your company a lot of money in lost productivity (you still pay employees their salary when they're out).
Stopping the spread of the disease and treating those infected is obviously a health issue, but the swine flu outbreak does have implications for IT professionals in both the short term and the long term. First, if you haven't done so already, you need to find a copy of the bird flu business continuity plan (BCP) that your company developed in 2006 and call a walk-through exercise immediately. And if your responsibility is IT disaster recovery and not necessarily business continuity, don't wait around for someone else to dust off the plan and call the exercise - this is too important to wait. Call your CIO, CISO, COO, and CEO and tell them it needs to be done now. There's a good chance that the plan is out of date and that it hasn't been exercised in a long time.
On May 12th, 2008, VMware announced that nine storage replication vendors have tested and certified their technology with VMware’s long-awaited Site Recovery Manager (SRM) offering. SRM is an important step forward in disaster recovery (DR) preparedness because it automates the process of restarting virtual machines (VMs) at an alternate data center. Of course, your data and your VM configuration files must be present at the alternate site, hence the necessary integration with replication vendors. SRM doesn't just automate the restart of VMs at an alternate data center; it can automate other aspects of DR as well. For example, it can shut down some VMs before it recovers others, which is useful if you are using the redundant infrastructure at the alternate data center for other workloads such as application development and testing (a very common scenario). You can also integrate scripts for other tasks and insert checkpoints where a manual procedure is required. When you recover an application to an alternate site, especially if your redundant infrastructure supports other workloads, you have to think about how you will repurpose that infrastructure from secondary to production workloads. You also have to think about the entire ecosystem, such as network and storage settings, not just the recovery of a VM.
Essentially, VMware wants you to replace your manual DR runbook with the automated recovery plans in SRM. It might not completely replace your runbook, but it can automate enough of it - so much so that DR service providers such as SunGard are productizing new service offerings based on SRM.
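A recovery plan of this kind is essentially an ordered workflow. The sketch below is not SRM's actual API - the function and VM names are hypothetical - but it illustrates the concepts such a plan automates: shutting down lower-priority workloads to reclaim capacity, restarting VMs in priority order, script hooks for ecosystem steps like storage and DNS, and checkpoints where a human must verify before the plan continues.

```python
# Illustrative recovery-plan runner; hypothetical names, not VMware SRM's API.

def shutdown_vm(name): print(f"Shutting down {name} (reclaiming test/dev capacity)")
def power_on_vm(name): print(f"Powering on {name} at the alternate site")
def run_script(cmd):   print(f"Running script: {cmd}")
def checkpoint(msg):   input(f"CHECKPOINT - {msg} Press Enter to continue...")

RECOVERY_PLAN = [
    ("shutdown",   "testdev-vm-01"),          # free up the redundant infrastructure
    ("shutdown",   "testdev-vm-02"),
    ("script",     "remap_storage_luns.sh"),  # ecosystem step: storage settings
    ("power_on",   "db-vm"),                  # priority 1: database first
    ("checkpoint", "Verify database integrity before starting the app tier."),
    ("power_on",   "app-vm"),                 # priority 2: application tier
    ("script",     "update_dns_records.sh"),  # ecosystem step: network settings
    ("power_on",   "web-vm"),                 # priority 3: web tier
]

HANDLERS = {"shutdown": shutdown_vm, "power_on": power_on_vm,
            "script": run_script, "checkpoint": checkpoint}

for action, arg in RECOVERY_PLAN:
    HANDLERS[action](arg)
```

Even in this toy form, the value over a paper runbook is visible: the ordering, the dependencies, and the manual gates are captured in one executable artifact that can be tested instead of merely filed away.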
On April 10, 2008, IBM announced its intent to acquire FilesX, a small startup that offers server-based replication and continuous data protection (CDP) technology. The acquired technology will become part of the Tivoli Storage Manager (TSM) family of products.
This acquisition will help IBM Tivoli fill a gap in its current portfolio of data protection offerings. The vendor currently offers Tivoli Storage Manager (TSM), one of the leading enterprise-class backup software applications, and Tivoli Continuous Data Protection for Files, a product mostly used to protect PCs. In addition to traditional backup to tape or disk, TSM can also manage Microsoft Volume Shadow Copy Service (VSS) snapshots and IBM's own storage-based snapshot technology in support of instant restore or snapshot-assisted backup. But the company didn't really have an offering for customers who wanted something better than backup but not as expensive as storage-based replication; this is where FilesX comes in. With FilesX, IBM can now address the recovery requirements of small enterprises that can't afford storage-based replication. It can also meet the recovery requirements of large enterprises that want to protect more servers within the company, as well as servers at remote offices, with a more affordable replication offering.
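To see why CDP sits in that "better than backup" middle ground, consider how it works in principle: every write is journaled with a timestamp, so you can roll back to any point in time rather than only to the last backup. The sketch below is a toy illustration of that general technique, not FilesX's implementation.

```python
import time

class CDPJournal:
    """Toy continuous-data-protection journal: records every write with a
    timestamp and can reconstruct block state as of any point in time."""

    def __init__(self):
        self._log = []   # (timestamp, block_id, data), appended in time order

    def write(self, block_id, data):
        self._log.append((time.time(), block_id, data))

    def restore_to(self, point_in_time):
        """Replay all writes up to point_in_time; later writes are ignored."""
        state = {}
        for ts, block_id, data in self._log:
            if ts > point_in_time:
                break
            state[block_id] = data
        return state

journal = CDPJournal()
journal.write("block-1", "v1")
recovery_point = time.time()              # a moment just after the good write
journal.write("block-1", "v2 (corrupted)")
print(journal.restore_to(recovery_point))  # -> {'block-1': 'v1'}
```

The trade-off this exposes is the same one buyers face: journaling every write costs more than nightly backup but far less than mirroring whole arrays, which is exactly the price-versus-granularity gap the acquisition fills.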
It’s official: the future of information management and infrastructure is software as a service (SaaS). Today, Dell announced its intent to acquire the powerhouse in email continuity and archiving, MessageOne. This acquisition will give Dell the cornerstone it needs to build out its own suite of SaaS offerings. Dell clearly didn't want to be left out of the race as it watched Iron Mountain successfully build out its SaaS offerings and watched its competitors and partners complete significant acquisitions in the market, including Seagate Services' acquisition of EVault, EMC's acquisition of Mozy, and IBM's recent acquisition of Arsenal Digital Solutions. Then there's Symantec, which is building out its Symantec Protection Network.
On December 6th, 2007, IBM announced its acquisition of Arsenal Digital Solutions, a major player in the online backup service provider market. Arsenal provides online backup services to customers directly, but other service providers (particularly telecommunications providers) also rebrand and resell Arsenal's online backup services as their own - so the company is both provider and enabler. Arsenal is profitable and cash flow positive, has not required funding since 2002, and has approximately 3,400 customers. IBM did not disclose the value of the acquisition.
It is important to note that the acquisition was made by IBM Global Services (IGS), not IBM Tivoli or IBM Systems and Technology Group. This acquisition is not about filling in a product gap (although IBM is lagging in data protection offerings that support deduplication); it's about ensuring a foothold in a critical market. In fact, the engine of Arsenal's service is EMC Avamar - what Arsenal provides is a software-as-a-service (SaaS) wrapper around Avamar: everything you need for SaaS, such as multi-tenancy, billing, and reporting. IGS is clearly indifferent to the underlying technology; it cares about a dependable, scalable online backup service.
Backup is a struggle for both enterprises and small and medium businesses. It's a complex ecosystem of backup software, networks, servers, disk arrays, and tape systems. Most companies report that they have difficulty completing backups in the time available, and when backups fail or complete with errors, it's often very difficult to discover the root cause. Couple those troubles with the fact that the amount of data you need to back up is growing, conservatively, at 30% to 50% per year. On top of these challenges, most companies would also like to keep backups longer for version history and to perform much faster restores.
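To see why that growth rate bites, a quick compounding example helps (the starting size and throughput figures are illustrative assumptions, not survey data): at 40% annual growth, the data you must protect roughly doubles every two years, while the nightly backup window stays fixed.

```python
# Illustrative: compound data growth vs. a fixed nightly backup window.
data_tb = 10.0               # protected data today, in TB (assumed)
growth = 0.40                # 40% annual growth, midpoint of the 30-50% range
throughput_tb_per_hr = 1.5   # assumed sustained backup throughput

for year in range(1, 4):
    data_tb *= 1 + growth
    hours = data_tb / throughput_tb_per_hr
    print(f"Year {year}: {data_tb:.1f} TB -> {hours:.1f} hours for a full backup")
```

With these assumptions, a full backup that takes about nine hours in year one needs over eighteen by year three, well past most overnight windows. That arithmetic, more than any single product failure, is what pushes companies toward outsourcing.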
Given the headaches associated with backup, many small and medium businesses and even some enterprises are choosing to outsource their backups altogether to a service provider. There are already numerous players in the marketplace, from EVault (which is resold by a number of different service providers), to Iron Mountain, to your telecommunications provider, to emerging entrants such as Berkeley Data Systems and its Mozy service offering. The opportunity is so huge that even Symantec (which acquired Veritas) launched a beta of its own online backup service, called the Symantec Protection Network. EMC’s acquisition of Berkeley Data Systems is just further proof that the online backup market is a huge opportunity.