What is one of the most important decisions infrastructure & operations (I&O) professionals face today? It's not whether to leverage the cloud, whether to build a private cloud, or even which cloud to use. The more important decision is which applications to place in the cloud, and sadly this decision often isn't made objectively. Application development & delivery professionals often decide on their own, bypassing IT. And when the decision is made in the open, with all parts of IT and the business invited to collaborate, emotion and bravado often rule the day.

"SAP's a total pain and a bloated beast; let's move that to the cloud," one CIO recently told his staff. His belief: if we can move that to the cloud, it will prove to the organization that we can move anything to the cloud. Sadly, while a big bang would certainly garner a lot of attention, the likelihood that this transition would succeed is extremely low, and a big bang effort that becomes a big disaster could sour your organization on the cloud and destroy IT's credibility. Instead, start with low-risk applications that let you learn safely how best to leverage the cloud, whether public or private.
Cloud computing continues to be hyped. By now, almost every ICT hardware, software, and services company has some form of cloud strategy — even if it’s just a cloud label on a traditional hosting offering — to ride this wave. This misleading vendor “cloud washing” and the complex diversity of the cloud market in general make cloud one of the most popular and yet most misunderstood topics today (for a comprehensive taxonomy of the cloud computing market, see this Forrester blog post).
Software-as-a-service (SaaS) is the largest and fastest-growing cloud computing market: its total market size in 2011 is $21.2 billion, and it will explode to $78.4 billion by the end of 2015, according to our recently published sizing of the cloud market. But SaaS consists of many different submarkets. Historically, customer relationship management (CRM), human capital management (HCM) — in the form of “lightweight” modules like talent management rather than payroll — eProcurement, and collaboration software have had the highest SaaS adoption rates, but highly integrated applications that process the most sensitive business data, such as enterprise resource planning (ERP), remain the laggards of SaaS adoption today.
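Those two data points imply a very steep growth curve. As a quick back-of-the-envelope check, using only the figures cited above and assuming a year-end 2011 to year-end 2015 horizon (four years), the implied compound annual growth rate works out as:

```python
# Implied CAGR from the SaaS market figures above:
# $21.2 billion in 2011 growing to $78.4 billion by end of 2015.
start, end, years = 21.2, 78.4, 4

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 39% per year
```

In other words, the forecast assumes the SaaS market nearly quadruples in four years, which is consistent with the word “explode.”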
Do you keep every single light on in your house even though you are fast asleep in your bedroom?
Of course you don't. That would be an abject waste. Then why do most firms deploy peak-capacity infrastructure resources that run around the clock even though their applications have distinct usage patterns? Sometimes the applications are sleeping (low usage); at other times, they are huffing and puffing under the stampede of glorious customers. The answer: they have no choice. Application developers and infrastructure operations pros collaborate (call it DevOps if you want) to determine the infrastructure that will be needed to meet peak demand.
One server, two server, three server, four.
The business is happy when the web traffic pedal is to the floor.
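To make the waste concrete, here is a toy sizing calculation with entirely made-up numbers (a hypothetical app that needs 10 servers at its daily peak), comparing always-on peak provisioning against capacity that follows the daily usage curve:

```python
# Hypothetical hourly demand curve (servers needed per hour of the day).
# These numbers are illustrative only, not from Forrester data.
hourly_demand = [2, 2, 1, 1, 1, 2, 3, 5, 8, 10, 10, 9,
                 8, 8, 9, 10, 9, 7, 6, 5, 4, 3, 3, 2]

# Peak provisioning: run the peak count of servers all 24 hours.
peak_provisioned = max(hourly_demand) * 24   # server-hours per day

# Demand-matched (elastic) capacity: run only what each hour needs.
demand_matched = sum(hourly_demand)          # server-hours per day

utilization = demand_matched / peak_provisioned
print(f"Peak provisioning wastes {1 - utilization:.0%} of capacity")
```

Even with this mild usage curve, nearly half of the always-on capacity sits idle, which is exactly the "lights on while you sleep" problem.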
All of us in the technology industry get caught up in the near-term fluctuations and pressures of our business. This quarter’s earnings, next quarter’s shipments, this year’s hiring plan . . . it’s easy to get swallowed up by the flood of immediate concerns. So one of the things that we work hard on at Forrester, and that our clients value in their relationships with us, is taking a few steps back and looking at the longer-term, bigger picture of the size and shape of the industry’s trajectory. It provides strategic and financial context for the short-term fluctuations and trends that buffet all of us.
I am lucky to co-lead research in Forrester's Vendor Strategy team, which is explicitly chartered to predict and quantify the new growth opportunities and disruptions facing strategists at some of our leading clients. We will put those predictions on display later this month at Forrester's IT Forum, our flagship client event. Among the sessions that Vendor Strategy analysts will be leading:
"The Software Industry in Transition": Holger Kisker will preview his latest research detailing best practices for software vendors navigating the tricky transition from traditional license to as-a-service pricing and engagement models.
"The Computing Technologies of 2016": Frank Gillett will put us in a time machine for a trip five years into the future of computing, storage, network, and component technologies that will underpin new applications, new experiences, and new computing capabilities.
Intel has been publishing research for about a decade on what it calls “3D Trigate” transistors, which held out the hope of both improved performance and power efficiency. Today Intel revealed details of the commercialization of this research in its upcoming 22 nm process and demonstrated actual systems based on 22 nm CPU parts.
The new products, under the internal name “Ivy Bridge,” are the process shrink of the recently announced Sandy Bridge architecture in the next “Tick” cycle of Intel’s famous “Tick-Tock” design methodology, where the “Tock” is a new optimized architecture and the “Tick” is the shrink of that architecture onto the next-generation semiconductor process.
What makes these Trigate transistors so innovative is that they change the fundamental geometry of the transistor from a basically flat “planar” design to one with a more vertical structure, earning them the “3D” description. For users, the implications are simpler to understand – this new transistor design, which will become the standard across all of Intel’s products moving forward, delivers some fundamental benefits to CPUs implemented with it:
Leakage current is reduced to near zero, resulting in very efficient operation for systems in an idle state.
Power consumption at equivalent performance is reduced by approximately 50% from Sandy Bridge’s already improved results with its 32 nm process.
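To see why those two claims compound, consider a toy power model. All wattages below are made up for illustration; the only figure taken from the announcement is the roughly 50% reduction at equivalent performance. Total CPU power is the sum of dynamic (switching) power and leakage power, and leakage is what dominates at idle:

```python
# Toy CPU power model: total power = dynamic (switching) + leakage.
# All wattages are hypothetical; only the ~50% claim comes from Intel.
def total_power(dynamic_w, leakage_w):
    return dynamic_w + leakage_w

# Hypothetical 32 nm planar part: leakage persists even at idle.
planar_active = total_power(dynamic_w=20.0, leakage_w=5.0)   # 25.0 W
planar_idle   = total_power(dynamic_w=0.5,  leakage_w=5.0)   #  5.5 W

# Hypothetical 22 nm Trigate part: near-zero leakage, and roughly
# half the total power at equivalent performance.
trigate_active = total_power(dynamic_w=12.4, leakage_w=0.1)
trigate_idle   = total_power(dynamic_w=0.5,  leakage_w=0.1)

print(f"Active: {planar_active:.1f} W -> {trigate_active:.1f} W")
print(f"Idle:   {planar_idle:.1f} W -> {trigate_idle:.1f} W")
```

The point of the sketch is that near-zero leakage helps idle power far more than active power, which is why the idle-efficiency and 50%-at-equivalent-performance claims are listed as separate benefits.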
Is your cloud strategy centered on saving money or fueling revenue growth? Where you land on this question could say a lot about your experience level with cloud services and what guidance you should be giving to your application developers and infrastructure & operations teams. According to our research, the majority of CIOs would vote for the savings, seeing cloud computing as an evolution of outsourcing and hosting that can drive down capital and operating expenses. In some cases this is correct, but in many the opposite will result: using the cloud wrong may actually raise your costs.
But this isn’t a debate worth having, because it’s the exploration of the use cases where cloud does save you money that bears the real fruit. And it’s through this experience that you can start shifting your thinking from cost savings to revenue opportunities. Forrester surveys show that the top reason developers (and the empowered non-developers in your business units) tap into cloud services is to rapidly deploy new services and capabilities. The drivers behind these efforts? New services, better customer experience, and improved productivity. Translation: revenues and profits.
If the cloud is bringing new money in the door, does it really matter if it’s the cheaper solution? Not at first. But over time, using cloud as a revenue engine doesn’t necessarily mean high margins on that revenue. That’s where your experience with the cost-advantaged uses of cloud comes in.
Attending the OpenStack Design Summit this week while fielding calls from Forrester clients affected by the Amazon Web Services (AWS) outage, I saw an interesting contrast in approaches emerge. You could boil it down to closed versus open, but there’s more to this contrast that should be part of your consideration when selecting your Infrastructure as a Service (IaaS) providers.
The obvious comparison is that AWS’ architecture and operational procedures are very much its own, and few outside the company know how they work. Not even close partners like RightScale or those behind the open source derivative Eucalyptus know them well enough to do more than deduce what happened based on their experience and what they could observe. OpenStack, on the other hand, is fully open source: if you want to know how it works, you can download the code. At the Design Summit here in Santa Clara, Calif., this week, developers and infrastructure & operations professionals had ample opportunity to dig into the design and to suggest and submit changes on the spot. And there were plenty of conversations this week about how CloudFiles and other storage services work and how to avoid the kind of AWS Elastic Block Store (EBS) mirror storm we just witnessed.
It seems that during every major shift in the telecommunications, service provider, or hosting market there is a string of moves like these, as players attempt to capitalize on the change to gain greater market position. And there are plenty of investors caught up in the opportunity who are willing to lend a few bucks. In the dot-com period and through the 2000s, we saw major shifts in the service provider landscape as colo/hosting giants such as Cable & Wireless and Equinix were created.
But what does this mean for infrastructure & operations professionals looking to select a hosting or Infrastructure as a Service (IaaS) cloud provider? The key is determining whether 1 + 1 actually equals something greater than 2.
. . . but bad reactive marketing can make the problem much worse.
[co-authored by Zachary Reiss-Davis]
As has been widely reported, in sources broad and narrow, Amazon.com’s cloud service EC2 went down for an extended period of time yesterday, taking many of the hottest high-tech startups down with it, ranging from the well known (Foursquare, Quora) to the esoteric (About.me, EveryTrail). For a partial list of smaller startups affected, see http://ec2disabled.com/.
While this is clearly a blow both to Amazon.com and to the cloud hosting market in general, it also serves as an example of how technology companies must respond publicly and engage with their customers quickly when problems arise. Amazon.com let its customers control the narrative by not participating in any social media response to the problem; its only communication was vague platitudes on its online dashboard. Instead, it allowed angry heads of product management and CEOs, who are used to communicating with their customers on blogs and Twitter, to unequivocally blame Amazon.com for the problem.
What is it that you think makes one tech company stand out from another? “My product is better than your product”? Not anymore. “My salespeople are better than your salespeople”? Possibly. “My channel is better than your channel”? You’re getting warmer. How about, “My marketing machine is better than your marketing machine”?
For example, 41% of customers identified “the vendor’s (not including its salespeople’s) ability to understand our business problem” as the most important factor when selecting a tech vendor, compared with only 21% who identified “the vendor’s salesperson’s ability to understand our business problem.” Marketing is clearly the difference-maker.
But cloud computing changes everything. The implications of cloud computing go far beyond its technology delivery/consumption model. It seems I get questions from tech marketers about all things cloud these days. A few examples:
“How can I use the cloud more effectively to market our solutions?” (Answer: It’s not what you read in USA Today about Facebook and Twitter. According to the results of our 2011 B2B Social Technographics® survey, discussion forums and professional social networking sites (read: not consumer social sites) outpace Facebook and Twitter ten-fold as information sources for informing businesses’ technology purchase decisions.)