Storage has been confined to hardware appliance form factors for far too long. Over the past two decades, innovation in the storage space has shifted from proprietary hardware controllers and processors to proprietary software running on commodity x86 hardware. The hardware driving backup appliances, NAS systems, iSCSI arrays, and object storage systems is often quite similar in terms of processors and components, yet despite this fact I&O professionals are still accustomed to purchasing single-purpose systems that lock them into a vendor's technology stack.
Over the past few years, companies such as HP (StoreVirtual VSA), Nexenta, Sanbolic, and Maxta have released software-only storage offerings to compete head to head with proprietary hardware appliances, and they have found some success with cost-conscious enterprises and service providers. The software-only storage revolution is now ready for prime time, with startup offerings reaching maturity and established players such as IBM, EMC, and NetApp jumping into the market.
I&O professionals should consider software-only storage since:
The storage technology acquisition process is broken. Any storage purchase you complete today will be bound to your datacenter for the next three to five years. When business stakeholders and clients need storage resources for emerging use cases such as object storage and flash storage, they often do not have the luxury of waiting for storage teams to complete RFPs and product evaluations. With software-only storage, access to new technology can be accelerated to meet the provisioning velocity needs of customers.
The last few days have been eventful in the cloud gateway space and should give I&O organizations more incentive to start evaluating gateways. Yesterday, EMC announced its acquisition of cloud gateway startup TwinStrata, which will allow EMC customers to move on-premises data from EMC arrays to public cloud storage providers. Today, Panzura launched a free cloud gateway, and its partner Google is adding 2 TB of free cloud storage for a year to entice companies to kick the tires on a gateway. Innovation and investment in this area do not appear to be slowing down: CTERA locked in an additional $25 million in VC funding last week to accelerate the sales and marketing efforts supporting its cloud gateway and file sync & share products.
Though the cloud gateway market has grown slowly so far, this technology category is about to become mainstream. Cloud gateways are disruptive since they can facilitate data migration from on-premises infrastructure to a public cloud storage service, creating a true hybrid cloud storage environment. Basically, a cloud gateway is a virtual or physical storage appliance that looks like a NAS or block storage device to on-premises users and applications, but writes data back to a public cloud storage service using that cloud's native APIs.
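The write path described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual implementation: the local cache and the cloud backend are simulated with in-memory dicts standing in for a NAS mount point and an object store's native API (e.g., S3-style PUT/GET).

```python
# Minimal sketch of a cloud gateway's data path (illustrative only).
# A real gateway presents NAS or block storage locally and translates
# I/O into a cloud's native object API; here both the local cache and
# the "cloud" are simulated with in-memory dicts.

class CloudGateway:
    def __init__(self, cloud_store):
        self.cache = {}           # local cache: fast reads for hot data
        self.cloud = cloud_store  # stands in for an object storage API

    def write(self, path, data):
        # The application sees an ordinary local write...
        self.cache[path] = data
        # ...while the gateway also persists the object to cloud storage.
        self.cloud[f"bucket/{path}"] = data

    def read(self, path):
        # Serve from the local cache when possible, else fetch from cloud.
        if path in self.cache:
            return self.cache[path]
        return self.cloud[f"bucket/{path}"]

cloud = {}
gw = CloudGateway(cloud)
gw.write("reports/q3.csv", b"revenue,100")
print(gw.read("reports/q3.csv"))           # served from local cache
print("bucket/reports/q3.csv" in cloud)    # object also landed in the "cloud"
```

The point of the sketch is the dual nature of the device: applications keep their familiar file or block interface, while the data simultaneously becomes an object in public cloud storage.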
A number of use cases have emerged for cloud gateways including:
$2 billion. That's billion with a "B". That's a lot of money. That is also what Aaron Levie's Inc. Magazine Entrepreneur of the Year company, Box Inc., is being valued at today. According to an article in the Wall Street Journal, Box has said it received a fresh round of $125 million in investment, with $100 million of that money coming from a single private equity firm. Also according to the article, Box expects to close out 2013 with approximately $100 million in revenue, giving the company a 20x revenue multiple. The numbers are certainly impressive, but is this a bubble, or are we seeing a fundamental shift in how businesses of the future will operate, thus justifying the big dollar signs?
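The 20x figure is simple back-of-the-envelope arithmetic on the two numbers reported in the article, and it is worth making explicit since the same price-to-sales logic drives the comparisons that follow:

```python
# Back-of-the-envelope price-to-sales multiple, using the figures
# reported in the post (valuation and expected 2013 revenue for Box).
valuation = 2_000_000_000   # reported valuation: $2 billion
revenue   = 100_000_000     # expected 2013 revenue: ~$100 million

multiple = valuation / revenue
print(f"{multiple:.0f}x revenue")  # -> 20x revenue
```

Price-to-sales is the fallback metric for profitless companies precisely because there are no earnings to divide by; whether 20x is justified is the open question of the post.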
A recent article in Forbes stated the following: "Taking a cue from the Dot-com bubble’s playbook, investors have resorted to valuing today’s profitless tech companies on a price-to-sales ratio basis, yet even this metric shows that Twitter’s valuation is quite overvalued at 22 times its expected 2014 sales, which is approximately double the multiple carried by Facebook and LinkedIn (which have high multiples in their own right)."
The untimely demise of Nirvanix has left over 1,000 customers scrambling to migrate data off of the cloud storage service provider, with only a short two-week window to save their data. While providers have gone to great lengths to make importing data into the cloud easy by eliminating data ingest fees, large data sets in the cloud are difficult to retrieve or migrate to a new target. The Nirvanix example highlights why customers should also consider exit and migration strategies as they formulate their cloud storage deployments.
One of the most significant challenges in cloud storage is moving large amounts of data out of a cloud. While bandwidth has increased significantly over the years, even over large network links it can take days or weeks to retrieve terabytes or petabytes of data. For example, on a 1 Gbps link, it would take close to 14 days to retrieve 150 TB of data from a cloud storage service over a WAN link.
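The estimate above is straightforward to reproduce. This sketch assumes decimal units (1 TB = 10^12 bytes) and a fully utilized link; real-world throughput will be lower due to protocol overhead, egress throttling, and contention, so treat the result as a best case:

```python
# Best-case transfer-time estimate for pulling data out of a cloud
# over a WAN link. Assumes a fully utilized link and decimal units
# (1 TB = 10**12 bytes); real throughput will be lower.

def transfer_days(terabytes: float, link_gbps: float) -> float:
    bits = terabytes * 10**12 * 8          # payload size in bits
    seconds = bits / (link_gbps * 10**9)   # line-rate transfer time
    return seconds / 86400                 # convert seconds to days

print(round(transfer_days(150, 1), 1))   # ~13.9 days for 150 TB at 1 Gbps
print(round(transfer_days(150, 10), 1))  # ~1.4 days on a 10 Gbps link
```

The calculation also shows why a tenfold link upgrade only helps linearly: at petabyte scale even a 10 Gbps link implies weeks of sustained transfer, which is why physical media shipment and phased migration plans belong in an exit strategy.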
To minimize risks in cloud storage deployments and facilitate a graceful exit strategy (just in case things go sour), I recommend customers take the following steps:
Yesterday Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become in many eyes the virtuous cycle of future infrastructure demand: mobile devices and "the Internet of Things" driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.
Content and Core Topics:
No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to keep increasing demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.
For the vast majority of Forrester customers whom I have not had the pleasure of meeting, my name is Henry Baltazar and I'm the new analyst covering storage for the I&O team. I've covered the storage industry for over 15 years and spent the first nine years of my career as a technical analyst at eWEEK/PC Week Labs, where I was responsible for benchmarking storage systems, servers, and network operating systems.
During my lab days, I tested hundreds of different products and was fortunate to witness the development and maturation of a number of key innovations such as data deduplication, WAN optimization, and scale-out storage. In the technology space, "Better, Faster, Cheaper - Pick Two" used to be the design goal for many innovators, and I've seen many technologies struggle to attain two, let alone all three of these goals, especially in their first few product iterations. For example, while iSCSI was able to challenge Fibre Channel on the basis of being cheaper, despite being around for over a decade it has yet to convince many storage professionals that it is faster or better.
Looking at storage technologies today, relative to processors and networking, storage has not held up its end of the bargain. Storage needs to improve on all three vectors to push innovation forward and avoid being viewed as the bottleneck in the infrastructure. At Forrester I will be looking at a number of areas of innovation that should drive enterprise storage capabilities to new heights, including: