With all the hype and progress happening around cloud computing, we know that our infrastructure and operations professional clients are under pressure to have a cloud answer. This is causing some unproductive behavior and a lot of defensiveness. A growing trend is to declare victory – point to your virtual infrastructure where you can provision a VM in a few seconds and say, “See, I’m a cloud.” But you aren’t, really. And I think you know that.
Being a cloud means more than just using server virtualization. It means you have the people, process, and tools in place to deliver IT on demand via automation; you are sharing resources so you can maximize the utilization of assets; and you are enabling your company to act nimbly. In our latest Forrester report we document that to be a cloud you need to have:
After an in-depth survey of IT security and risk professionals, as well as our ongoing work with leaders in this field, Forrester recognized the need for a detailed, practical way to measure the maturity of security organizations. You asked, and we responded. I'm happy to announce that today we published the Forrester Information Security Maturity Model, detailing the 123 components that comprise a successful security organization, grouped into 25 functions and four high-level domains. In addition to the People, Process, and Technology domains you may be familiar with, we added Oversight, a domain that addresses the strategy and decision making needed to coordinate functions in the other three domains.
Our Maturity Model report explains the research and methodology behind this new framework, which is designed to help security and risk professionals articulate the breadth of security’s role in the organization, identify and fix gaps in their programs, and demonstrate improvement over time.
What makes the Forrester Information Security Maturity Model work?
We are sometimes so focused on details that we forget to think clearly. Nothing new there; it’s still a story about trees and forest. A few years ago, this was clearly the case when I met with one of the first vendors of run book automation. My first thought was that it was very similar to workload automation, but I let myself be convinced that it was so different that it was obviously another product family. Taking a step back last year, I started thinking that in fact these two forms of automation complemented each other. In “Market Overview: Workload Automation, Q3 2009,” I wrote that “executing complex asynchronous applications requires server capacity. The availability of virtualization and server provisioning, one of the key features of today’s IT process [run book] automation, can join forces with workload automation to deliver a seamless execution of tasks, without taxing IT administrators with complex modifications of pre-established plans.”

In June of this year, UC4 announced a new feature of its workload automation solution, by which virtual machines, or extensions to virtual machines, can be provisioned automatically when the scheduler detects a performance issue (see my June 30 blog post “Just-In-Time Capacity”). This was a first sign of convergence. But there is more.
Automation is about processes. As soon as we can describe a process using a workflow diagram and a description of the operation to be performed at each step of the diagram, we can implement the software to automate it (just as we do in application development and other forms of software engineering). Automation is but a variation of software development, one that uses pre-developed operations adapted to specific process implementations.
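The idea that a process becomes automatable once it is expressed as a workflow of discrete operations can be sketched in a few lines. This is an illustrative toy, not any vendor's product; the step names and operations are hypothetical.

```python
# Minimal workflow-automation sketch: a process is an ordered list of
# named steps, each wrapping a pre-developed operation that reads and
# updates a shared context. All step names here are made-up examples.

def provision_vm(ctx):
    ctx["vm"] = "vm-001"  # stand-in for a real provisioning call
    return ctx

def deploy_app(ctx):
    ctx["app"] = f"app on {ctx['vm']}"
    return ctx

def notify(ctx):
    ctx["notified"] = True
    return ctx

def run_workflow(steps, ctx):
    """Execute each step in order, passing the shared context along."""
    for name, operation in steps:
        ctx = operation(ctx)
        print(f"completed step: {name}")
    return ctx

workflow = [("provision", provision_vm),
            ("deploy", deploy_app),
            ("notify", notify)]
result = run_workflow(workflow, {})
```

A run book automation or workload automation product does essentially this at scale, swapping the toy operations for real provisioning, scheduling, and notification calls.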
In the late 1940s, William Levitt came up with the idea of pre-fabricated homes that could be mass-produced and shipped to suburbs across the US, providing cheap and efficient housing. Towns built using these pre-fabricated houses were dubbed Levittowns, and are now known for their drab monotony. In my opinion, pre-fabricated homes were a flop, but the idea of pre-fabricated “Levittown-esque” data centers is brilliant!
And I’m not alone--HP and Colt are just two of the latest providers to jump on the pre-fabricated data center bandwagon this month. Other vendors such as Digital Realty Trust, APC, and IBM have also been offering similar solutions for a while now, but the solutions appear to be a bit more custom-made than the recent announcements by HP and Colt.
The pre-fabricated data center modules are built in roughly 750-800 kW units and are fitted together like Legos (HP’s even looks like Legos!). Many modular data centers can be linked together (and Colt’s can also be stacked vertically) to build out a much larger space.
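As a rough illustration of how the Lego-style build-out scales, using the 750-800 kW module size above (the target capacity below is a made-up example):

```python
import math

MODULE_KW = 750  # conservative end of the 750-800 kW range cited above

def modules_needed(target_kw):
    """How many pre-fabricated modules to reach a target IT capacity."""
    return math.ceil(target_kw / MODULE_KW)

# Hypothetical example: a 6 MW data center built from 750 kW modules.
print(modules_needed(6000))  # 6000 / 750 = 8 modules
```

The point is that capacity planning becomes a multiplication problem rather than a multi-year construction project.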
Why should you care about these pre-fabricated data center offerings? Well, they make the whole process of building your own data center much cheaper and faster. Some of the benefits I can see include:
As CEOs put IT budgets under pressure year after year, CIOs and their teams focus on balancing money spent on running the business (RTB) versus money spent on growing the business (GTB). By decreasing the percentage of their budget spent on maintenance and ongoing operations (RTB), they aim to have a greater share of their budget to spend on projects that grow the business. In the best IT organizations, the ratio can sometimes approach 50:50 — however, a more typical ratio is 70% RTB and 30% GTB.
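The arithmetic behind that ratio shift is simple; here is a sketch with a hypothetical $10M total budget (the dollar figure is invented for illustration):

```python
def gtb_dollars(total_budget, rtb_share):
    """Grow-the-business dollars left after run-the-business spend."""
    return total_budget * (1 - rtb_share)

budget = 10_000_000  # hypothetical total IT budget

typical = gtb_dollars(budget, 0.70)  # typical 70:30 split
best = gtb_dollars(budget, 0.50)     # best-in-class 50:50 split

# Dollars freed for growth projects by moving from 70:30 to 50:50.
print(round(best - typical))
```

On these assumed numbers, moving from a 70:30 to a 50:50 split frees about $2M a year for growth projects, which is why the ratio gets so much CIO attention.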
Unfortunately, such practices suggest an incremental budget cycle — one that looks at the prior year’s spend to determine the next year’s budget. While this may be appropriate for the RTB portion of the IT budget, it is far from ideal for the GTB portion. Incremental budgeting for GTB results in enormous tradeoffs being made as part of the IT governance process, with steering committees making decisions on which projects can be funded based upon the IT and business strategy. Anyone from outside of IT who has worked through IT governance committees understands just how challenging that process can be. And the ultimate result of such tradeoffs is that sometimes valuable projects go unfunded or shadow-IT projects spring up to avoid the process altogether.
To paraphrase Charles Dickens, Q2 2010 seemed like the best of times and the worst of times for the big software vendors. For Microsoft, it was the best of times; for IBM, it was (comparatively) the worst of times; and for SAP, it was somewhere in between. IBM on July 19, 2010, reported total revenue growth of just 2% in the fiscal quarter ending June 30, 2010, with its software unit also reporting 2% growth (6%, excluding the revenues of its divested product lifecycle management group from Q2 2009). Those growth rates were down from 5% growth for IBM overall in Q1 2010, and 11% for the software group. In comparison, Microsoft on July 22, 2010, reported 22% growth in its revenues, with Windows revenues up 44%, Server and Tools revenues up 14%, and the Microsoft Business Division (Office and Dynamics) up 15%. And SAP on July 27, 2010, posted 12% growth in its revenues in euros, 5% growth on a constant currency basis, and 5% growth when its revenues were converted into dollars.
What do these divergent results for revenue growth say about the state of the enterprise software market?
Infrastructure diversity is an important source of complexity in many IT infrastructures. Even at a time when organizations are standardizing on x86 hardware, they often maintain separate support groups by type of operating system. In the meantime, we see even more technology diversity developing in a relentless pursuit of performance and, ironically, simplification. This raises a simple question: Should we, for the sake of operational efficiency, standardize at the lowest possible level, e.g., the computing platform, or at a much higher level, e.g., the user interface?
In recent months, I think the mainframe world provided a clear answer. One key element that limits mainframe expansion in some data centers is the perception among higher levels of management that the mainframe is a complex-to-operate, obsolete platform, too radically different from Linux and Windows. This perception comes from the fact that most mainframe management solutions use an explicit interface for configuration and deployment, one that requires detailed knowledge of the mainframe's specifics. Mastering it requires skills and experience that unfortunately do not seem to be taught in most computer science programs. Because mainframe education is lacking, the issue is more acute than in other IT segments, and it would eventually condemn the mainframe once all the baby boomers decide that they would rather golf in Florida.
This whole perception was shattered by two major announcements. The most recent is the new IBM zEnterprise platform, which brings together a mix of hardware and software platforms under a single administration interface. In doing this, IBM provides a solution that abstracts away the platforms' diversity and removes the need for separate administrators versed in the vagaries of the different operating systems.
There’s an old adage that the worst running car in the neighborhood belongs to the auto mechanic. Why? Because they like to tinker with it. We as IT pros love building and tinkering with things, too, and at one point we all built our own PC and it probably ran about as well as the mechanic's car down the street.
While the mechanic’s car never ran that well, it wasn’t a reflection on the quality of his work on your car, because he drew the line between what he could tinker with and what could sink him as a professional (well, most of the time). IT pros do the same thing. We try not to tinker with computers that will affect our clients or put at risk the service-level agreements we have with them. Yet there is a tinkerer’s mentality in all of us. It is evident in our data centers, where the desire to configure our own infrastructure and build out our own best-of-breed solutions has resulted in an overly complex mishmash of technologies, products, and management tools. There’s lots of history behind this mess, and lots of good intentions, but nearly everyone wants a cleaner way forward.
In the vendors’ minds, this way forward is clearly one that has more of their stuff inside, and the latest thinking here is the new converged infrastructure solutions they are marketing, such as HP’s BladeSystem Matrix and IBM’s CloudBurst. Each of these products is the vendor’s vision of a cleaner, more integrated, and more efficient data center. And there’s a lot of truth in what they have engineered. The big question is whether you should buy into this vision.
I just read an excellent article on the subject by Tom Davenport. We at Forrester Research indeed see the same trend: more advanced enterprises are starting to venture into combining reporting and analytics with decision management. From my point of view, this breaks down into at least two categories:
Automated (machine) vs. non-automated (human) decisions, and
Decisions that involve structured (rules and workflows) and unstructured (collaboration) processes
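A minimal sketch of those two distinctions, with hypothetical thresholds: a structured, rules-based path handles the automated (machine) decisions, and anything falling outside the rules is routed to a human for an unstructured, collaborative decision.

```python
def decide_credit_limit(score):
    """Structured, automated decision: explicit rules, no human involved.
    The credit-score thresholds are hypothetical."""
    if score >= 700:
        return ("approve", "machine")
    if score < 500:
        return ("decline", "machine")
    # Gray zone: route to a human for a collaborative, unstructured decision.
    return ("review", "human")

print(decide_credit_limit(720))  # ('approve', 'machine')
print(decide_credit_limit(600))  # ('review', 'human')
```

The interesting design question in decision management is exactly where to draw that gray zone: widen it and humans handle more exceptions; narrow it and the machine decides more, for better or worse.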
I know, I know, this is what analysts do. But I personally would never want to get involved in producing a BI market size estimate – it’s fair game for serious critique. Here are some of the reasons, but the main one is good old “garbage in, garbage out.” I am not aware of any BI market sizing study that took into account the following questions:
What portion of the DBMS market (DW, DBMS OLAP) do you attribute to BI?
What portion of the BPM market (BAM, process dashboards, etc.) do you attribute to BI?
What portion of the ERP market (with built-in BI apps, such as Lawson, Infor, etc.) do you attribute to BI?
What portion of the portal market (SharePoint is the best example) do you attribute to BI?
What portion of the search market (Endeca, Google Analytics, etc.) do you attribute to BI?
What is the market size of custom developed BI applications?
What is the market size of self built BI apps using Excel, Access, etc?
On the other side, what percentage of licenses sold are shelfware and should not be counted?
Plus many more unknowns. But if someone did attempt such a rough estimate, my bet is that the actual BI market size is probably 3x to 4x larger than any current estimate.
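To illustrate why the questions above make "garbage in, garbage out" so likely, here is a toy calculation. Every number in it is invented purely for illustration, not real market data; the point is how much the answer swings depending on what fraction of each adjacent market you attribute to BI and how you treat shelfware.

```python
# All figures below are hypothetical, chosen only to illustrate
# how attribution choices compound into the final estimate.
core_bi = 10.0  # reported "pure BI" market, in $B (invented)

# Adjacent-market size ($B) times the fraction you decide counts as BI:
adjacent = {
    "DBMS (DW, OLAP)": 30.0 * 0.3,
    "BPM (BAM, dashboards)": 5.0 * 0.2,
    "ERP built-in BI": 40.0 * 0.1,
    "Portals (e.g., SharePoint)": 8.0 * 0.25,
    "Search/analytics": 4.0 * 0.25,
    "Custom and self-built (Excel, Access)": 12.0,
}

shelfware_discount = 0.15  # fraction of licenses sold but never deployed

adjusted = (core_bi + sum(adjacent.values())) * (1 - shelfware_discount)
print(round(adjusted / core_bi, 1))  # multiple of the "pure BI" figure
```

With these made-up attribution fractions, the "real" market comes out at a bit over 3x the pure-BI figure, which is exactly why two analysts armed with different assumptions can defensibly publish numbers that differ by a factor of three or four.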