Single-Sourcing Storage -- A Way To Increase Consistency And Control Costs, Or A Dumb Idea...

I was recently taken to task in the twittersphere (I post as @reichmanIT) by @Knieriemen, a VP at virtualization and storage reseller Chi Corporation and the host of the Infosmack podcast.  Mr. Knieriemen took exception to statements about storage single sourcing I made to the press related to a recent document about storage choices for virtual server environments, including the following:

RT @markwojtasiak: Forrester says "single source when possible" <-what an incredibly dumb thing for @reichmanIT to say

He followed up his comments with the following clarifications:

@reichmanIT In context to managing data center costs, single sourcing is probably the worst suggestion I've heard from an analyst


@reichmanIT In a virtual server environment, storage is a commodity and should be purchased as such to control costs

I don’t know Mr. Knieriemen personally, and I must admit that I was taken aback by his blunt approach, but I’m from New York and have a thick skin, so I can look past that. The reason I bring this up on Forrester’s blog is solely to take a look at the actual points of view brought out in the exchange. The crux of the argument is whether or not storage is commoditized, and in my opinion, it’s not (yet anyway). There are three reasons why:

  • Storage is all about software and features, which are not commodities. x86 servers certainly have reached a point where they struggle for differentiation. Many of the components that go into storage arrays are true commodities, especially disk drives. But enterprise storage, the combo of hardware and software used to deliver high performance and high availability in enterprise settings, is not a commodity. There’s a high cost of gaining proficiency in management and troubleshooting within a given vendor’s products, so switching is not trivial. Migrating data from old storage to new is much easier when both boxes come from the same vendor. Features such as snapshots, thin provisioning, deduplication, and protocol support are differentiated and deliver varied results depending on the specific vendor implementation.
  • Storage virtualization doesn’t commoditize storage, it just shifts the control to somebody else's software. There’s been a great deal of talk of storage virtualization freeing users from lock-in and the high cost of storage hardware, but it hasn't panned out that way. Yes, companies use IBM SVC or Hitachi USP/VSP or NetApp V-Series to virtualize external storage systems, and then the underlying storage does behave like dumb commoditized disk. But few environments multi-source the back-end systems. There are troubleshooting issues among the vendors, and it doesn’t make sense to spend money on features embedded in arrays from full-feature vendors when you won’t use them. Even if you did multi-source the back-end hardware, you would be locked in to the virtualization platform of the virtualization vendor, so it wouldn’t make storage a commodity, just trade one lock-in for another. There are real benefits to be had in storage virtualization such as easier data migration or consistency across disparate physical devices, but calling it true commoditization or an exponential reduction in total cost of ownership is an overly ambitious assessment of the impact.
  • Server virtualization uses APIs to call features in differentiated storage systems. In terms of server virtualization, there certainly is some effect of hypervisors controlling and to some extent commoditizing storage. With VAAI integration, VM admins can take on storage management tasks directly in the vCenter console, taking away some of the complexity of managing disparate storage systems. But for the most part, VAAI calls underlying storage features with APIs, meaning that you’re still dependent on the strength of features within the underlying arrays. We may someday see VMware or other server virtualization vendors managing vast tracts of dumb disk as commodities, but for now, there are real differences among vendor capabilities.
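The dependency described in that last bullet can be sketched in a few lines of Python. This is a hypothetical illustration, not the real VAAI interface: the class and method names are invented, but the shape of the logic is the point. The hypervisor issues one primitive (here, a full-copy clone), and whether it becomes a fast array-side offload or a slow host-based copy depends entirely on what the underlying array implements.

```python
# Hypothetical sketch: a hypervisor-side clone operation that offloads to the
# array when the array advertises the primitive, and falls back to a
# host-based copy otherwise. Names are illustrative, not the actual VAAI API.

class Array:
    def __init__(self, name, supports_full_copy):
        self.name = name
        self.supports_full_copy = supports_full_copy

    def full_copy(self, src, dst):
        # Array-internal copy: fast, no data moves through the host.
        return f"{self.name}: offloaded copy {src} -> {dst}"

def clone_vmdk(array, src, dst):
    """Clone a virtual disk, offloading to the array if it can."""
    if array.supports_full_copy:
        return array.full_copy(src, dst)          # hardware-assisted path
    # Software fallback: every block is read and rewritten by the host.
    return f"host: read/write copy {src} -> {dst}"

fast = Array("vendor-a", supports_full_copy=True)
slow = Array("vendor-b", supports_full_copy=False)
print(clone_vmdk(fast, "vm1.vmdk", "vm2.vmdk"))
print(clone_vmdk(slow, "vm1.vmdk", "vm2.vmdk"))
```

The same hypervisor call produces very different behavior depending on the array underneath, which is exactly the sense in which that storage is not yet an interchangeable commodity.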

I’m by no means an apologist for the shortcomings of the big storage vendors. But in my opinion, the industry is not yet ready to throw out the trusted relationships that govern storage architectures and purchasing. So yes, I do think that it makes sense to pick a single storage vendor for each major workload stack (server virtualization, mainframe, file, data warehouse, non-virtualized OLTP databases, etc.) and stick with it. While you may be able to shave off some percentage points in negotiation through increased multi-sourcing, the complexity you add is likely to increase TCO in the long run and diminish the benefits of private cloud and virtualization initiatives.

In the end, I’m curious whether Mr. Knieriemen is just spouting clichéd pipe dreams about virtualization, commoditization, and multi-sourcing, or whether he really does have examples of better solutions. Is he delivering SAN-less architectures based on commodity servers with external virtualization layers managing data protection and advanced features that deliver equal or better performance and resiliency at lower price and without lock-in? If so, I’d love to hear about it. More importantly, I’d like to hear directly from infrastructure & operations teams. What are your thoughts on this? I’m always willing to have my eyes opened to new ideas, but calling me dumb just isn’t enough to get me there. 



Good post

Here is what I wrote a couple of weeks ago:

I can go as deep as you'd like on specific storage virtualization implementations and why single sourcing is a really, really expensive way to build out your infrastructure.


Additional comments

Thanks Andrew - this is a worthy debate.

Some specific responses to your points:

"Storage is all about software and features, which are not commodities"

True, but the hardware component IS commoditized. You could argue that server virtualization is also all about software... and it is... that doesn't take away the value of server hardware consolidation. Same goes for storage except now we are commoditizing storage arrays, not servers.

"Storage virtualization doesn’t commoditize storage, it just shifts the control to somebody else's software."

More accurately, storage management is abstracted to a higher, centralized resource. As with server virtualization, you could argue that virtualization adds complexity and management beyond the OS and beyond the application. As we all know now, this extra layer adds management and control functions that make admins' lives much easier. Same with storage virtualization, but at a MUCH lower cost.
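The abstraction being described can be sketched minimally in Python. This is a conceptual illustration only (the class, array, and LUN names are all invented): a centralized layer maps virtual volumes onto LUNs from whatever back-end arrays happen to exist, so hosts address one stable namespace while the physical placement changes underneath.

```python
# Hypothetical sketch of a storage virtualization layer: one centralized
# mapping from virtual volumes to back-end (array, LUN) pairs. Migration
# becomes a remap rather than a host-visible change. Names are illustrative.

class VirtualizationLayer:
    def __init__(self):
        self.mapping = {}      # virtual volume -> (backend array, backend LUN)

    def add_volume(self, vvol, backend, lun):
        self.mapping[vvol] = (backend, lun)

    def migrate(self, vvol, new_backend, new_lun):
        # Hosts keep addressing the same virtual volume while the data
        # moves to a different physical array behind the scenes.
        self.mapping[vvol] = (new_backend, new_lun)

    def resolve(self, vvol):
        return self.mapping[vvol]

layer = VirtualizationLayer()
layer.add_volume("vol01", "legacy-array", "lun7")    # e.g., tier-3 legacy disk
layer.migrate("vol01", "new-array", "lun2")          # refresh, no host change
print(layer.resolve("vol01"))
```

This is also why the easy-migration benefit both sides of the debate acknowledge is real: the host-facing identity of the volume never changes.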

"There are troubleshooting issues among the vendors..."

In theory yes (this is one of those issues that vendors who don't virtualize third-party disk like to spread around), in practice we've seen very little support conflict among our end users. I've encouraged a few to come on here to share their experiences with you directly. I think this largely depends on the vendor you work with.

" doesn’t make sense to spend money on features embedded in arrays from full feature vendors when you won’t use them"

There are a couple of ways of looking at that. First, there are ways to leverage existing features of the array, but I would argue that it doesn't matter (granted, others will disagree with me here). Typically IT orgs will shift legacy storage (greater than 3 years old) under an abstracted layer and leverage it as tier-3 storage, so those features aren't as important as getting 2-3 more years of life out of that array with a clear and easy migration path to a different storage array when and if needed. Also, when you virtualize servers, there are OS features that become obsolete - so what? That's the nature of server and storage virtualization.

"Server virtualization uses APIs to call features in differentiated storage systems."

Yes! And some vendors (specifically HDS) allow you to leverage VAAI capabilities all the way down to the disk level but manage VAAI capabilities at the abstraction layer for multiple vendor disk arrays. I'm not going to do a vendor pitch here, but I would strongly encourage you to get a demo from HDS so you can see these features for yourself.

There is so much more here to cover - this is a GREAT topic... if specific questions come up I'd like to respond.


This sounds a lot like single-sourcing to me...

You know, for all the bluster about multi-sourcing, I only see you talking about storage solutions from a single vendor in your posts, HDS. I agree with you that storage virtualization from HDS can be a powerful way to make an existing storage environment function consistently without having to replace hardware that still has usable life. But I disagree that this truly constitutes the commoditized multi-source environment you so loudly support. Yes, you can consistently manage storage from several vendors, but users that follow this path are locked in to the virtualization and replication software from a single vendor, much as they would be if they single-sourced hardware/software combos.

The software is the most important and most expensive part of the overall solution, so even if users do buy commodity hardware from multiple vendors, they are still giving the lion’s share of spend, mindshare, and trust to a single vendor, especially when you consider that you have to pay HDS capacity-based licensing for the virtualization and replication software.

What’s more, while virtualization is a good way to make an existing heterogeneous environment more consistent, my conversations with Forrester customers and HDS themselves (based on the phone-home data their systems gather) lead me to believe that when it comes to refresh time, the vast majority of HDS virtualization customers buy HDS storage as the back end rather than purchasing new capacity from a third-party vendor. Looking at your VAR’s list of partners, I wonder which ones you recommend as multi-source candidates when it comes to refresh time? Compellent, EqualLogic, and Xiotech are the ones I see listed, and all of them have software bundled with their solutions that you wouldn’t use or want to pay for as back ends to HDS virtualization.
Please tell me what the multi-sourcing you recommend really looks like, as the more I hear, the more it sounds like what you’re really recommending is a faster path to the consistency of an essentially single-sourced environment, albeit one that supports dumbed-down hardware from multiple vendors.

You missed the point

I referenced HDS directly to contradict your specific statement about VAAI. I didn't want to get into specific vendor pitches, this is a technology debate.

As virtualization relates to the customers we work with, some use HDS and some use FalconStor but others that you noted also do storage virtualization like IBM, NetApp, etc.

I didn't mention specific storage vendors that can be virtualized because it doesn't really matter. One customer may have EMC and another may have IBM - it doesn't matter, we've virtualized dozens of different storage arrays. That's the nature of virtualization and it's the point I don't think you are getting - this is about leveraging whatever existing storage an end-user might have today while giving them the option to procure whatever storage they want tomorrow. Being in the channel (and not just doing surveys) we see this first hand on a very regular basis.

But, since you asked, yes - we have customers that also virtualize Compellent, Xiotech and Nexsan for a variety of reasons depending on the disk performance required for their environment and applications.

In addition, I can't think of a single enterprise virtualization customer we work with that hasn't put their virtualized disk requirements out for RFP with each major storage purchase. That's the value they purchased with virtualization.

Additional notes

I should clarify a couple of things...

Not all storage virtualization is used exclusively to cut costs on storage, although this is always an overriding benefit. Some use storage virtualization for easy (REALLY easy) data migration. Others may use it as a preferred resource to manage their whole storage environment rather than managing silos of storage resources. In these instances, they may well continue to single-source their storage, but they completely avoid vendor lock-in or restricting themselves to the features of a single vendor.

To your point, many end-users do continue to single-source their storage, but the really important point here is that they are not locked in, and they have vendor, pricing, management, and feature flexibility they wouldn't have by single-sourcing their storage vendor. Any advantage or disadvantage you could state about storage virtualization is almost always the same advantage/disadvantage as server virtualization.

Andrew, let me start by

Andrew, let me start by saying I am a friend of Greg's and did respond on his original post. However, as you will see, multi-vendor sourcing is something I've discussed many times in the past.

First of all, let's discuss the subject of commodity. As far as a host is concerned, storage *should* be a commodity. We use standard protocols to interface to the storage array - take your pick - NFS/CIFS, iSCSI, FCoE or Fibre Channel. What's commoditised here is the connection. Every storage array supporting Fibre Channel (for example) should act in exactly the same way. Within the array the vendor may choose to implement features differently. I'd suggest this is a necessity for two reasons. First, if they copied other vendors' implementations they'd get sued, and second, they need differentiation. But ultimately 90% of storage array features boil down to the same things, namely primary LUNs and secondary LUNs (where the secondary LUN may be a replicated LUN, snapshot or clone). For NFS/CIFS we have exported shares.
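That "primary and secondary LUNs" claim can be made concrete with a tiny sketch. This is a hypothetical model, not any vendor's object model (all names are invented): a snapshot, a clone, and a replica are each just a LUN that derives from a primary, and the host sees all of them through the same standard protocols.

```python
# Hypothetical sketch of the claim that most array features reduce to
# primary LUNs and secondary LUNs (snapshot, clone, or replica).
# Illustrative model only; no real array exposes this interface.

class LUN:
    def __init__(self, name, source=None, kind="primary"):
        self.name = name
        self.source = source   # the primary this LUN derives from, if any
        self.kind = kind       # "primary", "snapshot", "clone", or "replica"

primary = LUN("db_data")
snap    = LUN("db_data_snap",  source=primary, kind="snapshot")
clone   = LUN("db_data_clone", source=primary, kind="clone")
replica = LUN("db_data_dr",    source=primary, kind="replica")

# From the host's point of view, each is just a LUN behind a standard
# protocol (FC, iSCSI, etc.); the differentiation is in how the array
# implements each relationship internally.
for lun in (snap, clone, replica):
    print(f"{lun.name} <- {lun.source.name} ({lun.kind})")
```

The differentiation the vendors sell lives in how efficiently each relationship is implemented (copy-on-write vs. full copy, synchronous vs. asynchronous replication), not in the host-visible abstraction itself.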

Look under the hood and certain vendors scale better - some implement the management interfaces better; some implement features such as dedupe and thin provisioning; these don't change the host view of the storage, they reduce your TCO and in some cases make operations more efficient.

What we see rarely happen in customers is the standardisation of their requirements by designing infrastructures to meet service needs. Rather, they focus on individual technical features. Who cares whether I use SSD, SAS or SATA drives in my array if I can (a) meet my cost/TCO goals and (b) meet my performance goals? Of course the reason technology has more focus is that it is deployed by technologists, who historically cared less about TCO and the long term operational support of storage environments and focused more on technology for technology's sake. But the world is different, we need to cope with scale and reduce costs. If you didn't spend the time to get your storage framework right, then migration from one vendor to another is going to cost you time and effort. In this instance, single vendor looks more attractive.

If you're wondering how I know this, I speak from personal experience. I've been in IT nearly 24 years, as both a customer and a consultant, working across industries and platforms (mainframe, open systems, Windows). I've worked in organisations with one storage array; I've worked with organisations that had hundreds (literally). Time after time I saw customers paying inflated costs because they had no bargaining power, and that was because they'd never standardised their infrastructure to make vendor exchange and data migration a simple operation. By the way, many organisations are by nature multi-vendor; if they change vendors, they usually don't immediately exit the hardware of the old vendor, and so have multiple platforms onsite.

As for storage virtualisation, I've implemented this many times. It serves a number of purposes, including data migration, extending the life of assets, simplifying support and so on. In itself it is just another tool to manage data mobility and TCO.

If any of your readers would like me to explain the concepts of multi-vendor management in more detail, then I'd be more than happy to do so. In fact, I'll make the offer of a free 1 hour webinar to discuss the subject in detail. Please feel free to contact me directly (you have my email address) if you want to discuss the subject further.

Chris Evans