The last few days have been eventful in the cloud gateway space and should give I&O organizations more incentive to start evaluating gateways. Yesterday, EMC announced its acquisition of cloud gateway startup TwinStrata, which will allow EMC customers to move on-premises data from EMC arrays to public cloud storage providers. Today, Panzura launched a free cloud gateway, and its partner Google is adding 2TB of free cloud storage for a year to entice companies to kick the tires on a gateway. Innovation and investment in this area do not appear to be slowing down: CTERA locked in an additional $25 million in VC funding last week to accelerate the sales and marketing efforts supporting its cloud gateway and file sync & share products.
Though the cloud gateway market has grown slowly so far, this technology category is about to become mainstream. Cloud gateways are disruptive because they can migrate data from on-premises storage to a public cloud storage service, creating a true hybrid cloud storage environment. Basically, a cloud gateway is a virtual or physical storage appliance that looks like a NAS or block storage device to users and applications on-premises, but writes data back to a public cloud storage service using the native APIs of that cloud.
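The pattern described above can be sketched in a few lines of Python. This is illustrative only: real gateways add caching policies, deduplication, and encryption, and the `ObjectStoreClient` here is a hypothetical stand-in for a provider's native object API (S3-style PUT/GET semantics), not any specific SDK.

```python
class ObjectStoreClient:
    """Hypothetical stand-in for a public cloud object storage API."""
    def __init__(self):
        self._bucket = {}

    def put_object(self, key, data):
        self._bucket[key] = data

    def get_object(self, key):
        return self._bucket[key]


class CloudGateway:
    """Looks like a simple local file store to applications,
    but writes through to cloud object storage."""
    def __init__(self, cloud_client):
        self.cloud = cloud_client
        self.cache = {}  # local cache so hot reads don't hit the cloud

    def write_file(self, path, data):
        self.cache[path] = data            # satisfy the local NAS-style write
        self.cloud.put_object(path, data)  # then write through to the cloud

    def read_file(self, path):
        if path in self.cache:             # serve hot data locally
            return self.cache[path]
        data = self.cloud.get_object(path)  # fall back to the cloud copy
        self.cache[path] = data
        return data


gw = CloudGateway(ObjectStoreClient())
gw.write_file("/shares/finance/q2.xlsx", b"spreadsheet bytes")
gw.cache.clear()  # simulate local cache eviction; data survives in the cloud
print(gw.read_file("/shares/finance/q2.xlsx"))
```

The key property is the last three lines: once the local copy is gone, the gateway transparently rehydrates it from the cloud tier, which is what makes the hybrid model attractive for the use cases below.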
A number of use cases have emerged for cloud gateways including:
Last week I presented an overview of cloud adoption trends in the banking sector in Asia to a panel of financial services regulators in Hong Kong. The presentation showcased a few cloud case studies including CBA, ING Direct, and NAB in Australia. I focused on the business value that these banks have realized through the adoption of cloud concepts, while remaining compliant with the local regulatory environments. These banks have also developed a strong competitive advantage: They know how to do cloud. Ultimately, I believe that cloud is a capability that banks will have to master in order to build an agility advantage. For instance, cloud is a key enabler of Yuebao, Alibaba’s new Internet finance business. 80 million users in less than 10 months? Only cloud architecture can enable that type of agility and scale (an idea that Hong Kong regulators clearly overlooked).
The business press has come alive over the past few weeks as companies as diverse as Delta, Facebook, and Tesla have publicly declared that they want to own software development for key applications. What should catch your attention about these announcements is the types of software these firms want to control. Delta is acquiring the software IP and data associated with an application that affects 180 of its customer and flight operations systems. Facebook is building proprietary software to simplify interactions between its sales teams and the advertisers posting ads on the social networking site. And Tesla has developed its own enterprise resource planning (ERP) and commerce platform that links the manufacturing history of a vehicle with important sales and customer support systems. Tesla's CIO Jay Vijayan, in describing his organization's system, sums up the sentiment behind many of these business decisions: "It helps the company move really fast."
Yesterday HP announced that it will be entering into a “non-equity joint venture” (think big strategic contract of some kind, with a lot of details still in flight) to address large-scale web services providers. Under the agreement, Foxconn will design and manufacture new servers targeted at hyperscale web service providers, and HP will be their primary sales channel. The new servers will be branded HP but will not be part of the current ProLiant line of enterprise servers, and HP will deliver additional services along with hardware sales.
The motivation is simple underneath all the rhetoric. HP has been hard-pressed to make decent margins selling high-volume, low-cost, no-frills servers to web service providers, and has been increasingly squeezed by low-cost competitors. Add to that the issue of customization, which these high-volume customers can easily get from smaller and more agile Asian ODMs, and you have a strategic problem. Having worked at HP for four years, I can testify that HP — a company maniacal about quality but encumbered with an effective yet rigid set of processes for bringing new products to market — has difficulty rapidly turning around a custom design, and has a cost structure that makes it difficult to profitably compete for deals with margins that are probably in the mid-teens.
Enter Hon Hai Precision Industry Co., more commonly known as Foxconn. A longtime HP partner and widely acknowledged as one of the most efficient and agile manufacturing companies in the world, Foxconn brings complementary strengths to the table: agile design, tightly integrated with its manufacturing capabilities.
With video rapidly becoming the dominant content type on enterprise networks, the issues being faced in the media market foreshadow the coming challenges for the rest of the market. And use of the cloud was very much in focus at the 2014 National Association of Broadcasters conference, held in Las Vegas in the second week of April.
Here we present ten issues the media industry faces as it more broadly embraces the cloud, as observed first-hand at NABShow 2014. These ten issues show how going cloud changes how you think (planning), act (workflow), and engage (distribution). For Forrester clients, there is a new companion report to this blog detailing what the industry is doing to address these challenges and how you can follow suit:
Usually when a product or service shouts about its low pricing, that's a bad sign. But in Google's case, there's unique value in its Sustained-use Discounts program, which just might make it worth your consideration.
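To see why the program is distinctive, it helps to work through the arithmetic. The sketch below uses the tiered rates Google announced for the program — the first 25% of the month's hours billed at 100% of the list rate, the next 25% at 80%, then 60%, then 40% — which you should verify against current pricing before relying on the numbers; the $0.10/hour list price is an arbitrary example.

```python
def sustained_use_cost(base_hourly_rate, hours_used, hours_in_month=730):
    """Effective monthly cost under the tiered sustained-use model."""
    tiers = [1.00, 0.80, 0.60, 0.40]   # billing rate for each quarter of the month
    tier_size = hours_in_month / 4     # hours per 25% usage tier
    cost, remaining = 0.0, hours_used
    for rate in tiers:
        hours_in_tier = min(remaining, tier_size)
        cost += hours_in_tier * base_hourly_rate * rate
        remaining -= hours_in_tier
        if remaining <= 0:
            break
    return cost


# A VM running the full month pays an average of (1.0+0.8+0.6+0.4)/4 = 70%
# of list price — an effective 30% discount with no upfront commitment.
full_month = sustained_use_cost(0.10, 730)  # $0.10/hr list price
print(round(full_month, 2))                 # vs. 73.00 at straight list price
```

The appeal is that the discount is automatic: unlike reserved-capacity models, you don't have to forecast usage or commit in advance to capture it.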
It was five years ago, in March 2009, that Cisco formally announced “Project California,” its (possibly intentionally) worst-kept secret, as the Cisco Unified Computing System. At the time, I was working at Hewlett-Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at their presumption; others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that they would have trouble running a server business at margins we knew would be substantially lower than their margins in their core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled by Cisco’s decision to upset a nice stable arrangement in which IBM, HP, and Dell sold possibly a billion dollars’ worth of Cisco gear between them.
Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades, and is closing in on number three worldwide in overall server sales as well. The numbers are impressive:
· 32,000 net new customers in five years, with 14,000 repeat customers
· Claimed $2 Billion+ annual run-rate
· Order growth rate claimed in “mid-30s” range, probably about three times the growth rate of any competing product line.
I know, more control is an axiom! But the statement above is more often true. When we're talking about configuration control in the public cloud, it can be especially true: control over the configuration of your application puts that control in the hands of someone who may know less about the given platform and is thus more likely to get the configuration wrong. Have I fired you up yet? Then you're going to love (or loathe) my latest report, published today.
Let's look at the facts. Your base configuration of an application deployed to the cloud is likely a single VM in a single availability zone, without load balancing, redundancy, DR, or a performance guarantee. That's why you demand configuration control: so you can address these shortcomings. But how well do you know the cloud platform you are using? Is it better to use its autoscaling service (if it has one) or to bring your own virtual load balancers? How many instances of your VM, in which zones, are best for availability? Would it be better to configure your own database cluster or to use the provider's database-as-a-service offering? One answer probably isn't correct: mirroring the configuration of the application as deployed in your corporate virtualization environment. Starting to see my point?
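The "how many instances, in which zones" question yields to simple back-of-the-envelope math. The figures below are illustrative assumptions, not any provider's SLA: suppose a single VM in a single zone is up 99.5% of the time, and that zone failures are independent.

```python
def combined_availability(per_instance, n_instances):
    """Probability that at least one of n independent instances is up."""
    return 1 - (1 - per_instance) ** n_instances


HOURS_PER_YEAR = 8760

for n in (1, 2, 3):
    avail = combined_availability(0.995, n)
    downtime_hours = (1 - avail) * HOURS_PER_YEAR
    print(f"{n} instance(s): {avail:.7f} available, "
          f"~{downtime_hours:.2f} hours downtime/year")
```

Under these assumptions, going from one instance to two in independent zones cuts expected downtime from roughly 44 hours a year to about 13 minutes — which is exactly the kind of win you leave on the table by blindly mirroring a single-server on-premises configuration.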
Fact is, more configuration control may just be a bad thing.