Gateways Will Accelerate Data Migration To The Cloud

The last few days have been eventful in the cloud gateway space and should give I&O organizations more incentive to start evaluating gateways. Yesterday, EMC announced its acquisition of cloud gateway startup TwinStrata, which will allow EMC customers to move on-premises data from EMC arrays to public cloud storage providers. Today, Panzura launched a free cloud gateway, and its partner Google is adding 2 TB of free cloud storage for a year to entice companies to kick the tires on a gateway. Innovation and investment in this area do not appear to be slowing down: CTERA locked in an additional $25 million in VC funding last week to accelerate the sales and marketing efforts behind its cloud gateway and file sync & share products.
 
Though the cloud gateway market has grown slowly so far, this technology category is about to become mainstream. Cloud gateways are disruptive because they can migrate data from on-premises systems to a public cloud storage service, creating a true hybrid cloud storage environment. At its core, a cloud gateway is a virtual or physical storage appliance that looks like a NAS or block storage device to users and applications on-premises, but writes data back to a public cloud storage service using the native APIs of that cloud.
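
To illustrate the pattern (not any particular vendor's product), here is a minimal Python sketch of a gateway write path: writes are acknowledged against a local cache so the appliance behaves like ordinary NAS to clients, and a background thread replicates the data to an S3-compatible object store using the cloud's native API. The cache path and bucket name are hypothetical.

```python
import queue
import threading
from pathlib import Path

import boto3  # AWS SDK; any S3-compatible endpoint works

# Hypothetical settings -- substitute your own cache path and bucket.
CACHE_DIR = Path("/var/cache/gateway")
BUCKET = "example-gateway-bucket"

s3 = boto3.client("s3")
upload_queue = queue.Queue()

def write(client_path: str, data: bytes) -> None:
    """Handle a client write: land it in the local cache (fast,
    NAS-like), then hand it off to the background uploader."""
    local = CACHE_DIR / client_path
    local.parent.mkdir(parents=True, exist_ok=True)
    local.write_bytes(data)   # the client sees a normal file write
    upload_queue.put(local)   # the cloud copy happens asynchronously

def uploader() -> None:
    """Drain the queue, pushing cached files to object storage via
    the cloud's native API (S3 put_object in this sketch)."""
    while True:
        local = upload_queue.get()
        key = str(local.relative_to(CACHE_DIR))
        s3.put_object(Bucket=BUCKET, Key=key, Body=local.read_bytes())
        upload_queue.task_done()

threading.Thread(target=uploader, daemon=True).start()
```

The local cache is what makes the hybrid model workable: hot data stays on-premises at LAN speeds, while the cloud holds the full copy.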
 
A number of use cases have emerged for cloud gateways including:
Read more

Is Red Hat Storage The Future Of Software Storage?

With last week's $175M acquisition of storage startup Inktank, the initial developer of the open source Ceph storage platform, Red Hat has added another piece to its growing storage portfolio. With large storage players such as EMC (ScaleIO, Project Nile), NetApp, HP, and IBM fleshing out their software storage offerings, it's clear that the transition from hardware-centric storage appliances to software storage is underway, and it won't be long before your next array is simply an app running on commodity hardware.

Though large storage players have successfully fended off software challengers such as Symantec before, it is telling that nearly all of the major players are now developing and marketing their own software storage products. The drive toward software storage is clearly gaining momentum, and this time around the storage leaders are active participants alongside startups such as Nexenta Systems and the open source projects.

While Red Hat is not a major storage player today, there are a few reasons why this company could become disruptive as the market transitions to software storage:

  • No legacy business to lose. This is probably the most powerful attribute that makes Red Hat dangerous relative to existing storage players. While Red Hat's market share (in terms of storage revenue and paying customers) is nowhere near that of leaders such as NetApp and EMC, its lack of legacy business will allow it to attack the NAS, object, and block storage markets without sacrificing high-margin storage appliance sales.
Read more

Seagate's Kinetic Will Impact Object Storage And Data-Driven Applications

Seagate's recent Kinetic Open Storage platform unveiling is making hard-drive-based technology interesting again. The Kinetic platform essentially turns hard drives into individual key-value stores and allows applications and hosts to access Kinetic drives directly over TCP/IP networks. Processing power within the drives runs the key-value store, and the Kinetic technology also facilitates policy-based drive-to-drive data migration. If this storage architecture is commercially successful, it will be extraordinarily disruptive, since direct connectivity from drives to applications eliminates storage controllers, file systems, SANs, and even RAID from the storage data path. Developer kits for Kinetic are available today, though Seagate will not be making the drives generally available until 2014. I'll be publishing a more in-depth report for Forrester clients on our site in the future, but for now there are a number of key points to be aware of as this technology ramps up:
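
To make the access model concrete before those points, here is a hypothetical Python sketch of an application issuing key-value operations straight to a drive over TCP/IP. The framing here is invented purely for illustration; the real Kinetic protocol uses Protocol Buffers, and the drive address below is made up.

```python
import socket
import struct

class KineticStyleDrive:
    """Hypothetical key-value client for a network-attached drive.
    The wire format is invented for this sketch; Seagate's actual
    Kinetic protocol is Protocol Buffers over TCP."""

    def __init__(self, host, port=8123):
        self.sock = socket.create_connection((host, port))

    def _send(self, op, key, value=b""):
        # [1-byte op][4-byte key length][key][4-byte value length][value]
        frame = op + struct.pack("!I", len(key)) + key
        frame += struct.pack("!I", len(value)) + value
        self.sock.sendall(frame)

    def put(self, key, value):
        """Store a value under a key directly on the drive."""
        self._send(b"P", key, value)

    def get(self, key):
        """Fetch a value by key; the drive's onboard CPU serves it."""
        self._send(b"G", key)
        (length,) = struct.unpack("!I", self._recv_exact(4))
        return self._recv_exact(length)

    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("drive closed connection")
            buf += chunk
        return buf

# Usage (hypothetical drive address): the application addresses the
# drive itself, so no controller, file system, SAN, or RAID layer
# sits in the data path.
# drive = KineticStyleDrive("10.0.0.42")
# drive.put(b"object-123", b"payload bytes")
```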

Read more

Nirvanix's Demise Emphasizes The Need For Hybrid Clouds And Storage Mobility

The untimely demise of Nirvanix has left over 1,000 customers scrambling to migrate data off the cloud storage provider, with a short two-week window to save their data. While providers have gone to great lengths to make importing data into the cloud easy by eliminating data ingest fees, large data sets in the cloud are difficult to retrieve or migrate to a new target. The Nirvanix example highlights why customers should also weigh exit and migration strategies as they formulate their cloud storage deployments.

One of the most significant challenges in cloud storage is moving large amounts of data back out of a cloud. While bandwidth has increased significantly over the years, even over large network links it can take days or even weeks to retrieve terabytes or petabytes of data. For example, on a 1 Gbps link it would take nearly 14 days of continuous, full-rate transfer to retrieve 150 TB of data from a cloud storage service.
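
The back-of-the-envelope math is easy to reproduce. This short snippet computes the best-case transfer time for a data set over a given link, ignoring protocol overhead, contention, and provider throttling, all of which make real-world transfers slower:

```python
def transfer_days(terabytes, link_gbps):
    """Best-case days to move a data set over a link, ignoring
    protocol overhead, contention, and provider throttling."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9)   # link rate in bits/sec
    return seconds / 86400               # seconds -> days

print(f"{transfer_days(150, 1):.1f} days")  # 150 TB at 1 Gbps -> ~13.9 days
```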

To minimize risks in cloud storage deployments and facilitate a graceful exit strategy (just in case things go sour), I recommend customers take the following steps:

Read more

EMC Slides Into Software-Defined Storage With ViPR

EMC's Project Bourne morphed into ViPR at the EMC World 2013 event in Las Vegas last week. It seems like everyone has a different take on what should be included in software-defined storage (SDS); my definition and implementation guidelines can be found in this report. Like other vendors, EMC is promising to revolutionize the way customers provision, manage, and create storage resources with ViPR, which will become a key component in the vendor's Software-Defined Data Center strategy for virtualizing compute, networking, and storage resources. Unlike in past years, when EMC bombarded attendees with dozens of product launches, this year's show focused almost entirely on ViPR, which makes sense given the importance of this technology. ViPR is expected to become generally available in the latter half of 2013, and like all other SDS implementations, it is designed to reduce the number of administrators it takes to manage rapidly growing data repositories through automation and self-service provisioning. So what's under ViPR's covers?
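
I have not modeled ViPR's actual API here; purely to illustrate the control-plane idea behind SDS, the hypothetical sketch below shows what self-service provisioning looks like when a single abstraction layer fronts heterogeneous arrays. The endpoint, payload fields, and service-level names are all invented.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical SDS controller endpoint -- not ViPR's real API.
CONTROLLER = "https://sds-controller.example.com/api/v1/volumes"

def provision_volume(name, size_gb, service_level):
    """Ask the SDS control plane for a volume. The caller names a
    service level ("gold", "bronze", ...); the controller decides
    which backing array and pool actually satisfy the request."""
    payload = json.dumps({
        "name": name,
        "size_gb": size_gb,
        "service_level": service_level,  # a policy, not a physical array
    }).encode()
    req = Request(CONTROLLER, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# Self-service: an application admin provisions storage without
# filing a ticket or knowing which array sits underneath.
# vol = provision_volume("app-db-01", size_gb=500, service_level="gold")
```

The automation payoff is in that last comment: when provisioning is a policy-driven API call instead of a manual array task, administrator headcount no longer has to scale with capacity.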

Read more

Storage QoS Is A Must-Have Feature For Enterprises And The Cloud

Later this year, many of the established storage players will finally be adding Storage QoS (quality of service) functionality to their systems. Though startups such as SolidFire and NexGen Storage (and some platforms such as IBM's XIV) have been touting this functionality for a few years now, most storage systems still lack Storage QoS. If your primary storage vendor does not have Storage QoS on its roadmap, now is the time to start demanding it.

Normally, when I bring up the topic of Storage QoS with all-flash array startups or other high-end array vendors, the typical response I get is "We don't need Storage QoS. Our system is so fast - there are IOPS for everyone!" Whether or not that claim is true (it isn't!), even a system with seemingly infinite performance would solve only part of the storage performance provisioning problem. Here are a few things to keep in mind as you evaluate Storage QoS:
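
First, a word on mechanism. Vendors implement QoS differently, but a common building block is a per-workload rate limiter such as a token bucket. The minimal, hypothetical Python sketch below shows why QoS is about isolation rather than raw speed: a capped workload simply cannot starve its neighbors, no matter how fast the array is.

```python
import time

class IopsLimiter:
    """Token-bucket IOPS cap for one workload (hypothetical sketch).
    Tokens accrue at `iops_limit` per second; each I/O spends one."""

    def __init__(self, iops_limit, burst=None):
        self.rate = float(iops_limit)
        self.capacity = float(burst) if burst else self.rate
        self.tokens = self.capacity
        self.last = time.monotonic()

    def admit(self):
        """Block until this workload may issue one more I/O."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)  # wait for a token

# A tenant capped at 500 IOPS cannot starve a neighbor that has been
# guaranteed 2,000 IOPS, however fast the underlying array is.
noisy_neighbor = IopsLimiter(iops_limit=500)
# noisy_neighbor.admit()  # call before dispatching each I/O
```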

Read more

Better, Faster, Cheaper - Storage Needs To Be All Three

For the vast majority of Forrester customers whom I have not yet had the pleasure of meeting, my name is Henry Baltazar, and I'm the new analyst covering storage for the I&O team. I've covered the storage industry for over 15 years and spent the first nine years of my career as a technical analyst at eWEEK/PCWeek Labs, where I was responsible for benchmarking storage systems, servers, and network operating systems.

During my lab days, I tested hundreds of different products and was fortunate to witness the development and maturation of a number of key innovations, such as data deduplication, WAN optimization, and scale-out storage. In the technology space, "Better, Faster, Cheaper - Pick Two" used to be the rule of thumb for many innovators, and I've seen many technologies struggle to attain two, let alone all three, of these goals, especially in their first few product iterations. For example, while iSCSI was able to challenge Fibre Channel on the basis of being cheaper, many storage professionals - more than a decade after its debut - are still not convinced that iSCSI is faster or better.

Looking at storage technologies today, storage has not held up its end of the bargain relative to processors and networking. Storage needs to improve along all three vectors, both to push innovation forward and to avoid being viewed as the bottleneck in the infrastructure. At Forrester I will be looking at a number of areas of innovation that should drive enterprise storage capabilities to new heights, including:

Read more