Seagate's Kinetic Will Impact Object Storage And Data-Driven Applications

Seagate's recent Kinetic Open Storage platform unveiling is making hard-drive-based technology interesting again. The Kinetic platform essentially turns hard drives into individual key-value stores and allows applications and hosts to access Kinetic drives directly over TCP/IP networks. Processing power within the drives runs the key-value store, and the Kinetic technology also facilitates policy-based drive-to-drive data migration. If this storage architecture is commercially successful, it will be extraordinarily disruptive, since direct connectivity from drives to applications eliminates storage controllers, file systems, SANs, and even RAID from the storage data path. Developer kits for Kinetic are available today, though Seagate will not make the drives generally available until 2014. I'll be publishing a more in-depth report for Forrester clients on our site, but for now there are a number of key points to be aware of as this technology ramps up.
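First, though, to make the access model concrete: the sketch below shows what it looks like for an application to talk to a drive as a key-value store over TCP/IP. The framing is invented for clarity (the actual Kinetic protocol uses protobuf-encoded messages, and the class and field names here are hypothetical); treat it as a minimal illustration of the concept, not the real wire format.

```python
import socket
import struct

class KeyValueDrive:
    """Toy client for a drive that speaks a key-value protocol over TCP/IP.

    Illustrative only: the message framing below is invented; the real
    Kinetic protocol exchanges protobuf-encoded messages with the drive.
    """

    def __init__(self, host: str, port: int = 8123):
        # The application connects straight to the drive -- no controller,
        # file system, SAN, or RAID layer sits in the data path.
        self.sock = socket.create_connection((host, port))

    def _send(self, op: bytes, key: bytes, value: bytes = b"") -> None:
        # Length-prefixed frame: 1-byte opcode, key length, value length.
        header = struct.pack("!cII", op, len(key), len(value))
        self.sock.sendall(header + key + value)

    def put(self, key: bytes, value: bytes) -> None:
        self._send(b"P", key, value)

    def get(self, key: bytes) -> bytes:
        self._send(b"G", key)
        (length,) = struct.unpack("!I", self.sock.recv(4))
        # A real client would loop until all `length` bytes arrive.
        return self.sock.recv(length)

# Hypothetical usage -- the application addresses the drive directly by key:
# drive = KeyValueDrive("10.0.0.42")
# drive.put(b"object-123", b"payload bytes")
# data = drive.get(b"object-123")
```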

Read more

Nirvanix's Demise Emphasizes The Need For Hybrid Clouds And Storage Mobility

The untimely demise of Nirvanix has left over 1,000 customers scrambling to migrate data off the failed cloud storage provider, with only a two-week window to save their data. While providers have gone to great lengths to make importing data into the cloud easy by eliminating data ingest fees, large data sets in the cloud remain difficult to retrieve or migrate to a new target. The Nirvanix example highlights why customers should also consider exit and migration strategies as they formulate their cloud storage deployments.

One of the most significant challenges in cloud storage is how difficult it is to move large amounts of data back out of a cloud. While bandwidth has increased significantly over the years, even over large network links it can take days or even weeks to retrieve terabytes or petabytes of data. For example, retrieving 150 TB of data from a cloud storage service over a 1 Gbps WAN link would take nearly 14 days, even if the link ran at full line rate the entire time.
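A quick back-of-the-envelope script makes the arithmetic explicit. It assumes decimal terabytes and an ideal link with zero protocol overhead, so real-world transfers would take longer:

```python
# Best-case transfer time over a WAN link: decimal terabytes, full line rate,
# no protocol overhead or retransmissions. Reality will be slower.

def transfer_days(terabytes: float, gbps: float = 1.0) -> float:
    bits = terabytes * 1e12 * 8      # decimal TB converted to bits
    seconds = bits / (gbps * 1e9)    # link rate in bits per second
    return seconds / 86_400          # seconds per day

print(f"{transfer_days(150):.1f} days")   # ~13.9 days for 150 TB at 1 Gbps
print(f"{transfer_days(1000):.0f} days")  # ~93 days for a petabyte
```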

To minimize risks in cloud storage deployments and facilitate a graceful exit strategy (just in case things go sour), I recommend customers take the following steps:

Read more

EMC Slides Into Software-Defined Storage With ViPR

EMC's Project Bourne morphed into ViPR at the EMC World 2013 event in Las Vegas last week. It seems like everyone has a different take on what should be included in software-defined storage (SDS); my definition and implementation guidelines can be found in this report. Like other vendors, EMC is promising to revolutionize the way customers provision, manage, and create storage resources with ViPR, which will become a key component of the vendor's Software-Defined Data Center strategy for virtualizing compute, networking, and storage resources. Unlike past years, when EMC bombarded attendees with dozens of product launches, this year's show focused almost entirely on ViPR, which makes sense given the importance of this technology. ViPR is expected to become generally available in the latter half of 2013, and like other SDS implementations, it is designed to reduce the number of administrators it takes to manage rapidly growing data repositories through automation and self-service provisioning. So what's under ViPR's covers?
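That question is answered in the full post, but the self-service provisioning pattern common to SDS platforms is easy to sketch: an application asks a control-plane API for capacity by policy, rather than waiting for an administrator to carve out storage by hand. The endpoint, payload, and field names below are invented for illustration and are not ViPR's actual API:

```python
import json
import urllib.request

def provision_volume(controller: str, size_gb: int, service_tier: str) -> dict:
    """Request a volume from a hypothetical SDS control-plane REST API."""
    request_body = json.dumps({
        "size_gb": size_gb,      # how much capacity the application needs
        "tier": service_tier,    # policy-based class of service, e.g. "gold"
        "protocol": "iscsi",     # how the volume should be exposed
    }).encode()
    req = urllib.request.Request(
        f"http://{controller}/api/volumes",   # invented endpoint
        data=request_body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage: the requester never touches an array directly.
# vol = provision_volume("sds-controller.example.com",
#                        size_gb=500, service_tier="gold")
```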

Read more

Storage QoS Is A Must-Have Feature For Enterprises And The Cloud

Later this year, many of the established storage players will finally add Storage QoS (quality of service) functionality to their systems. Though startups such as SolidFire and NexGen Storage (and some platforms, such as IBM's XIV) have been touting this functionality for a few years now, most storage systems still lack Storage QoS. If your primary storage vendor does not have Storage QoS on its roadmap, now is the time to start demanding it.
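Storage QoS means different things in different products, but the core mechanism is usually some form of per-volume or per-tenant rate limiting. Here is a minimal sketch, assuming a simple token-bucket model; shipping implementations (SolidFire's min/max/burst IOPS settings are in this spirit) are far more sophisticated:

```python
import time

class IopsLimiter:
    """Token-bucket IOPS limiter: a simplified sketch of QoS enforcement."""

    def __init__(self, max_iops: float, burst_iops: float):
        self.rate = max_iops        # tokens (I/Os) added per second
        self.capacity = burst_iops  # bucket depth = short-burst headroom
        self.tokens = burst_iops
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller queues or throttles the I/O

# One limiter per volume keeps a noisy neighbor from starving the others:
# limiters = {"vol_db":   IopsLimiter(max_iops=5000, burst_iops=8000),
#             "vol_logs": IopsLimiter(max_iops=500,  burst_iops=1000)}
```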

Normally, when I bring up the topic of Storage QoS with all-flash array startups or other high-end array vendors, the typical response I get is, "We don't need Storage QoS. Our system is so fast, there are IOPS for everyone!" While this statement may or may not be true (it isn't!), even if a system had a seemingly infinite amount of performance, that would only solve part of the storage performance provisioning problem. Here are a few things to keep in mind as you evaluate Storage QoS:

Read more

Better, Faster, Cheaper: Storage Needs To Be All Three

For the vast majority of Forrester customers whom I have not had the pleasure of meeting, my name is Henry Baltazar and I'm the new analyst covering storage for the I&O team. I've covered the storage industry for over 15 years and spent the first 9 years of my career as a technical analyst at eWEEK/PCWeek Labs, where I was responsible for benchmarking storage systems, servers, and network operating systems.

During my lab days, I tested hundreds of different products and was fortunate to witness the development and maturation of key innovations such as data deduplication, WAN optimization, and scale-out storage. In the technology space, "Better, Faster, Cheaper: Pick Two" used to be the design goal for many innovators, and I've seen many technologies struggle to attain two, let alone all three, of these goals, especially in their first few product iterations. For example, while iSCSI was able to challenge Fibre Channel on price, many storage professionals are still not convinced that iSCSI is faster or better, despite the protocol having been around for over a decade.

Looking at storage technologies today, relative to processors and networking, storage has not held up its end of the bargain. Storage needs to improve on all three vectors, both to push innovation forward and to avoid being viewed as the bottleneck in the infrastructure. At Forrester I will be looking at a number of areas of innovation that should drive enterprise storage capabilities to new heights, including:

Read more