How Much Infrastructure Integration Should You Allow?

There’s an old adage that the worst-running car in the neighborhood belongs to the auto mechanic. Why? Because they like to tinker with it. We as IT pros love building and tinkering with things, too, and at one point we all built our own PCs, which probably ran about as well as the mechanic's car down the street.

While the mechanic’s car never ran that well, it wasn’t a reflection on the quality of his work on your car, because he drew the line between what he could tinker with and what could sink him as a professional (well, most of the time). IT pros do the same thing. We try not to tinker with computers that will affect our clients or risk the service level agreements we have with them. Yet there is a tinkerer’s mentality in all of us. This mentality is evidenced in our data centers, where the desire to configure our own infrastructure and build out our own best-of-breed solutions has resulted in an overly complex mishmash of technologies, products, and management tools. There’s lots of history behind this mess and lots of good intentions, but nearly everyone wants a cleaner way forward.

In the vendors’ minds, this way forward is clearly one that has more of their stuff inside, and the latest thinking here is the new converged infrastructure solutions they are marketing, such as HP’s BladeSystem Matrix and IBM’s CloudBurst. Each of these products is the vendor’s vision of a cleaner, more integrated, and more efficient data center. And there’s a lot of truth in what they have engineered. The big question is whether you should buy into this vision.

There is a clear inflection point in the market opening up an opportunity for change. Infrastructures are being abstracted by virtualization and the desire to move to infrastructure-as-a-service cloud architectures, which let more workloads run on consistent hardware configurations. And configuring the hardware in as standard a configuration as possible enables greater use and reuse. Technologies including blade systems, 10GbE, and network storage support the concept of wire once, virtually configure infinitely. And some of these vendors are packaging these technologies with a host of integrated virtualization, management, and automation tools so you can simply drop them in as a virtual pool and expand them with highly repeatable building blocks – cloud Legos, essentially.

And the major hardware manufacturers have proven that their superior QA and integration capabilities can churn out known-good configurations in high volume and at lower cost than we can build them ourselves. This is why we don’t build our own corporate PCs or servers anymore. So who’s to say we shouldn’t let them integrate and drop-ship full infrastructures for us? Isn’t that the next logical step in this evolution?

That’s the key question I explore in my latest Forrester Report, “Are Converged Infrastructures Good for IT?”

There’s a fundamental tension point in IT that these solutions pull on – lock-in versus standardization. It’s a core focus of the advice we provide to clients in the Sourcing & Vendor Management Role and is in the back of the mind of every IT infrastructure architect. How much of a single vendor’s technology do you standardize on at the risk of weakening your leverage with that vendor?

We know that every infrastructure & operations professional is under pressure to bring cloud technologies to their company – and fast. This is a fast path to doing so. The question is how far down this path it is safe to go.

My report presents our researched opinion on this topic. What are your thoughts? Share them here.


How much infrastructure integration should you allow?

My first thought is to separate the technologies from the concepts.

There are a number of levers that need to be considered when setting infrastructure direction. At a conceptual level, you need to work out what these levers are for your company and then look at writing an associated strategic direction to address them (e.g. around virtualisation). Once you have that overarching direction, try to write an associated business case; if it hangs together, then you might have the right strategy. Run some sensitivity analysis around this scenario and understand which levers are the most important to your company and industry. Now all you have to do is sell the concepts, as these will be different for each company.

Some other thoughts
- Don’t forget the industry direction, e.g. what will happen to the availability of IT staff as more people move to the cloud?
- Don’t forget to understand your business, e.g. where do they want to differentiate, and can they scale appropriately on the cloud when the company expands?
- Don’t forget your finance team, e.g. what depreciation is left on the current assets, and what is the benefit (if any) of moving from capital costs to revenue costs?
- Don’t forget maturity, e.g. internal (your ability to handle vendors) and external (does that vendor understand security? Remember that outsourcing does not negate responsibility for things like data protection).
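The finance lever above can be made concrete with a quick calculation. A minimal sketch, assuming straight-line depreciation and entirely hypothetical figures, of how much book value remains on existing assets – the kind of number your finance team will want before any capex-to-opex move:

```python
# Illustrative sketch of the finance lever: remaining straight-line
# depreciation on an existing asset. All figures are hypothetical
# assumptions, not guidance from the post above.

def remaining_depreciation(purchase_cost, useful_life_years, years_elapsed):
    """Book value still to be depreciated under the straight-line method."""
    annual_charge = purchase_cost / useful_life_years
    # Book value cannot go below zero once the asset is fully depreciated.
    return max(purchase_cost - annual_charge * years_elapsed, 0.0)

# A $100k server fleet, depreciated over 5 years, now 2 years old:
print(remaining_depreciation(100_000, 5, 2))  # 60000.0
```

Writing off that remaining $60k early is part of the real cost of jumping to a cloud subscription before the current assets reach end of life.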

If all else fails use Zachman to get you started (what, how, where, who, when, and why) but for this type of change I would start with the why!

Hope this helps

Converged for what?

Great thinking here, Edward. Agreed – while a pre-configured infrastructure can be a great starting point, it may not fit with your given application, service, or strategy. It's always good practice to do this strategy-fit analysis.

Define "Integration"

One thing that keeps me worried about this whole "converged infrastructure" movement is what it really means.

If we're not careful, large vendors would have it mean "it's a bunch of our products that have been pre-integrated for you" -- which really isn't a ton of value-add. You're still paging between applications, and you haven't done *anything* to reduce operational complexity following installation.

The real opportunity here isn't simply to shrink-wrap a set of products and call them "converged". It is to drive out complexity, to reduce point products, and to eliminate error-prone manual processes in IT operations.

We would all do well with a firmer definition of Converged Infrastructure (or, Unified Computing) and then drive toward that ideal.

Re: Define integration

Is having your IT ops staff spend their valuable time racking, stacking, installing, and configuring what are essentially commodity configurations a good use of that time? Hard to see how pre-integration of these components doesn't speed up time to market. I don't think any of these vendors is arguing that you can't modify (optimize) the configurations once installed. That value remains.

Define Integration

In my mind this all comes down to two things. The first point to look at is whether it is cheaper to do it yourself or do it in the cloud. This is done by our old friend TCO/TEI (which should include vendor management costs, security, audits, etc.). The second point is to check whether your infrastructure is ever on the critical path of projects (in most cases there is no need for it to be).

Our personal philosophy as an IT shop is to be cheaper than the cloud offerings; we can do this by being efficient in the way we set up and manage our infrastructure and its associated security. Second, we ensure that infrastructure is not on the critical path of any project. This is helped through virtualisation and by pre-installing racks and wiring.

All in all it comes down to business benefit. Our current IT aim is to do the bulk of the work internally, as we think we can do it both cheaper and better than the cloud offerings. However, we will use the cloud for specific purposes, e.g. when the company wants things to run outside our main firewall.

But as you all know, the world moves on, and we expect this to change as the industry and the cloud mature. At the moment, though, we try to keep jobs local and can demonstrate that it makes sense to do so, from both a cost and a risk point of view.

To sum up: if you are not currently mature in your infrastructure management, expect to be pushed to the cloud. If you are mature, you had better continue to improve, as the cloud competition is maturing faster than you are – so perhaps we all need to start cross-training our sys-progs to become vendor managers :-)

Cloud is the new PC

A corollary to this, James, is the realization that the cloud is still in its infancy, and all these pushes and offers toward 'mods and customization' are proof of that.

Two things will happen over the next few years or so:

1. Major players will ship both minimum base configs and in-house/accredited custom configs, but only until they finally have data on what the market demands for the ideal base cloud; then they will want all the revenue for themselves and start speaking of 'voided warranties'.

2. Third-party vendor solutions offering hybrid configurations will continue to mushroom at this stage of the cloud game, driven by market need and the 'affordability' of leveraging technology at the soonest possible time. Both lock-in and the lack of standards contribute heavily to this, and the market will keep a wait-and-see attitude until they are resolved. Sure, cloud is also about curbing costs, and IT will always go to great lengths to get that high-end efficiency eventually, but no way will companies max out budgets this early.

Continuing with your car analogy: no one thinks there's a Ferrari cloud yet at this point, so buying a Honda and putting in custom modifications without breaking the bank is an excellent way to gain that breakneck speed for now. IT may not be totally comfortable with this because there is a trade-off in terms of comfort – squeaks, frequent service, plus parts replacement – but it is still cheaper.

Until a Benz, Audi or Lexus cloud truly emerges ...

Honda cloud is a good way to start - agree

Good advice with your comment about starting with a Honda cloud and building from there. We are all learning this new model, and there's no point in over-engineering your solution this early in the game.

Is Lock-In really an issue?


Lock-in has been an "issue" the industry has struggled with for as long as I can remember. But is it really that big of an issue, especially in this context?

At least when I think of lock-in, it starts at the OS layer, then the rest of the stack and the programming language. The last one is often the most gnarly of the issues, and perhaps that's what held back as a realistic platform until its recent tie-up with VMware's SpringSource to enable Java to run on's infrastructure.

But for years many companies have made a de facto lock-in choice – primarily of .NET vs. Java – regardless of whether the technology was deployed on premise or in the cloud. And that kind of lock-in is less of an issue now: you can spin up an AMI for any stack, SpringSource's agreements with Google create a reasonable level of portability of your software between providers, Rackspace's OpenStack enables even more flexibility, and if you were a .NET shop before, you're probably more likely to choose Azure.

So the major decisions that lock you in are going to be made anyway. If a converged hardware/software solution like the new Azure-in-a-box enables simpler integration and scalability, where's the lock-in concern in that? Especially in a private cloud context.

But perhaps @Ken was making a slightly different point – that all the "converged infrastructure" does is enable upsell/cross-sell of a company's different products – though I don't think that's what you had intended.

Am I way off in left field? (won't be the first time, won't be the last)