I've got backup on the brain. That isn't an unusual occurrence for me, but it's been bolstered by a week at Symantec Vision, a week at EMC World, backup announcements covering IBM's data protection hardware and CommVault's PC backup enhancements, and a flurry of cloud backup news this week from Trend Micro, CA Technologies, and Carbonite. All of this has gotten me thinking about the future of backup... we've come a long way from simple agent-based backup and recovery. Backup is just one piece in an increasingly complicated puzzle we call continuity. If backup software vendors want to stay relevant, they're going to need to offer a lot more than just backup in their "data protection" suites.
We are sometimes so focused on details that we forget to think clearly. Nothing new there; it's still a story about trees and forest. A few years ago, this was clearly the case when I met with one of the first vendors of run book automation. My first thought was that it was very similar to workload automation, but I let myself be convinced that it was so different that it obviously constituted another product family. Taking a step back last year, I started thinking that these two forms of automation in fact complement each other. In "Market Overview: Workload Automation, Q3 2009," I wrote that "executing complex asynchronous applications requires server capacity. The availability of virtualization and server provisioning, one of the key features of today's IT process [run book] automation, can join forces with workload automation to deliver a seamless execution of tasks, without taxing IT administrators with complex modifications of pre-established plans."

In June of this year, UC4 announced a new feature of its workload automation solution by which virtual machines, or extensions to virtual machines, can be provisioned automatically when the scheduler detects a performance issue (see my June 30 blog post "Just-In-Time Capacity"). This was a first sign of convergence. But there is more.
Automation is about processes. As soon as we can describe a process with a workflow diagram and a description of the operation each step performs, we can implement software to automate it, just as we do in any other form of software development. Automation is but a variation of software development that uses pre-developed operations adapted to specific process implementations.
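To make the point concrete, here is a minimal sketch of that idea: a workflow diagram reduced to an ordered list of pre-developed operations, executed by a tiny generic engine. The step names and the shared-context convention are my own illustrative assumptions, not any vendor's product.

```python
# Each box in a hypothetical workflow diagram becomes a pre-developed
# operation: a callable that takes and returns a shared context dict.
# These three steps are illustrative placeholders, not a real product's API.

def provision_server(ctx):
    ctx["server"] = "vm-01"  # stand-in for a real provisioning call
    return ctx

def deploy_app(ctx):
    ctx["app"] = f"app on {ctx['server']}"
    return ctx

def notify_operator(ctx):
    ctx["notified"] = True  # stand-in for a real notification
    return ctx

# The workflow diagram itself, expressed as an ordered list of steps.
WORKFLOW = [provision_server, deploy_app, notify_operator]

def run_workflow(steps, ctx=None):
    """Execute each step in order, threading the shared context through."""
    ctx = {} if ctx is None else ctx
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_workflow(WORKFLOW)
```

Automating a different process then means drawing a different diagram, that is, composing a different list of operations, without rewriting the engine.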
One of the great revolutions in manufacturing over the past decades is just-in-time inventory management. The basic idea is to provision only what is needed for a given level of operation and to put in place management functions that trigger the provisioning of inventory as needed. This is one of the key elements that has allowed manufacturers to contain production costs. We have been trying to adapt the concept to IT for years with little success, but a combination of the latest technologies is finally bringing it to a working level. IT operations often faces unpredictable workloads, or large variations in workload during peak periods. Typically, the solution is to over-provision infrastructure capacity and rely on a number of corrective measures: load balancing, traffic shaping, fast reconfiguration and provisioning of servers, etc.
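The just-in-time alternative to over-provisioning can be sketched in a few lines: provision virtual machines only when observed load approaches a utilization threshold. The per-VM capacity figure and target utilization below are assumed numbers for illustration, and the reconciliation logic is a simplified stand-in for what a scheduler-driven provisioning feature would do.

```python
import math

# Assumed figures, for illustration only.
CAPACITY_PER_VM = 100      # requests/sec one VM can absorb
TARGET_UTILIZATION = 0.8   # provision before VMs saturate

def vms_needed(load):
    """Smallest VM count that keeps utilization under the target."""
    return max(1, math.ceil(load / (CAPACITY_PER_VM * TARGET_UTILIZATION)))

def reconcile(current_vms, load):
    """VMs to add (positive) or retire (negative) for the observed load."""
    return vms_needed(load) - current_vms

# At 400 requests/sec with 2 VMs running, the sketch asks for 3 more VMs.
delta = reconcile(2, 400)
```

The contrast with over-provisioning is the point: capacity follows the observed load, rather than being sized up front for the worst-case peak.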
Technology growth is exponential. We all know Moore's Law, by which the density of transistors on a chip doubles every two years; but there is also Watts Humphrey's observation that the size of software doubles every two years, Nielsen's Law, by which the Internet bandwidth available to users doubles every two years, and many others concerning storage, computing speed, and data center power consumption. IT organizations, and especially IT operations, must cope with this influx of technology, which brings more and more services to the business, on top of managing legacy services and technology. I believe the two most important roadblocks that prevent IT from optimizing its costs are in fact diversity and complexity. Cloud computing, whether SaaS or IaaS, is going to add diversity and complexity, as is virtualization in its current form. This is illustrated by the following chart, which compiles answers to the question: "Approximately how many physical servers with the following processor types does your firm operate that you know about?"
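It is worth pausing on the arithmetic behind "doubles every two years": a quantity with a two-year doubling period grows by a factor of 2^(n/2) over n years, so a decade of such growth is a 32x increase. A one-function illustration:

```python
# Growth factor for a quantity that doubles every `doubling_period` years:
# factor = 2 ** (years / doubling_period).
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

# Over a decade, anything doubling every two years grows 32-fold.
decade = growth_factor(10)
```

That 32x over ten years, compounding across chips, software size, and bandwidth at once, is why IT operations feels the influx so acutely.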
While virtualization can potentially reduce the number of servers in each category, it addresses neither the diversity of servers nor the complexity of the services running on these diverse technologies.