Verizon Steps Into IaaS Cloud Leadership Ranks

James Staten

Pop Quiz: What’s the fastest way to build a credible, enterprise-relevant, and highly profitable cloud computing services practice? Buy one that already is. That’s exactly what Verizon did last week when it pushed $1.4B across the table to Terremark. Despite its internal efforts to build an infrastructure-as-a-service (IaaS) business over the past two years, Verizon simply couldn’t learn the best practices fast enough to match the market gains this acquisition delivers. Terremark has one of the strongest IaaS hosting businesses in the market and perhaps the most enterprise-heavy customer mix among the top-tier providers. It also has a significant presence with government clients, including the US General Services Administration (GSA), which runs production systems in a hybrid mode spanning Terremark’s IaaS and traditional managed hosting services.

Confidential Forrester client inquiries have shown Verizon struggling to win competitive IaaS bids with its computing-as-a-service (CaaS) offering, often losing to Terremark. This led Verizon to resell the Terremark solution (as its CaaS for SMB) so it could try before buying.

Read more

The ITSM Selection Process

Eveline Oehrlich

Almost every day I get the question: “We want to replace our ITSM support tool; which vendor should I look at?” There are many alternatives today, and each vendor has certainly done a great amount of work to position itself as the best. The success I have had consulting with these clients, and the knowledge I carry with me now, is due in part to the many clients with whom I have discussed the ITSM space; they have all confirmed that the functionality across these vendors is very similar. That similarity, however, does not help with decision-making, so I’m especially excited to have authored a three-piece research document that should take some of the magic out of selecting ITSM support tools.

This Forrester report is called Eliminate Magic When Selecting The Right IT Service Management (ITSM) Support Tool. It provides an overview of the process decision-makers need to follow and the important, but sometimes overlooked, criteria to keep in mind as they work toward launching an evaluation or engaging with the ITSM vendor community.

I identified four phases of the evaluation process that should be followed:

Plan: Lay the groundwork, set objectives, explore existing conversations, and make necessary early decisions.

Assemble an evaluation team: Putting the right people together to understand the use cases and requirements is critical before the next step.

Define your requirements: Use the ITSM Support Tools Product Comparison to define your requirements.

Read more

IBM And ARM Continue Their Collaboration – Major Win For ARM

Richard Fichera

Last week IBM and ARM Holdings Plc quietly announced a continuation of their collaboration on advanced process technology, this time with a stated goal of developing ARM IP optimized for IBM physical processes down to a future 14 nm node. The two companies have been collaborating on semiconductors and SOC design since 2007, and this extension has several important ramifications for both companies and their competitors.

It is a clear indication that IBM retains a major interest in low-power and mobile computing despite having divested its desktop and laptop computer business to Lenovo, and that it will be in a position to harvest this technology, particularly ARM’s modular approach to composing SOC systems, for future productization.

For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products in approximately 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The existence of a path to a potential 14 nm product serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think these will not be extended, but the IBM alliance is an additional insurance policy. Beyond process technology, IBM also has a deep well of systems and CPU IP that certainly cannot hurt ARM.
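
To put that roadmap in perspective, transistor density scales roughly with the inverse square of the feature size, so each node transition buys a substantial density gain. The sketch below is a first-order approximation only; real node-to-node gains depend on design rules and vary by foundry.

```python
# First-order look at what the process roadmap buys. Assumes density scales
# with the inverse square of feature size, a simplification; actual
# node-to-node gains depend on design rules and vary by foundry.
nodes_nm = [32, 20, 14]
base = nodes_nm[0]
for node in nodes_nm:
    density_gain = (base / node) ** 2
    print("%2d nm: ~%.1fx the transistor density of %d nm" % (node, density_gain, base))
```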

Read more

POST: Refining Your Strategy For iPads and Tablets -- The Workshop!

JP Gownder

Are you a product strategist trying to craft an iPad (or general tablet) product strategy? For example, are you thinking about creating an app to extend your product proposition on the iPad or another tablet computer?

At Forrester, we’ve noticed that product strategists in a wide variety of verticals – media, retail, travel, consumer products, financial services, pharmaceuticals, software, and many others – are struggling to make fundamental decisions about how the iPad (and newer tablets based on Android, Windows, webOS, RIM’s QNX, and other platforms) will affect their businesses.

To help these clients, an analyst on my team, Sarah Rotman Epps, has designed a one-day Workshop that she’ll be conducting twice, on February 8th and February 9th, in Cambridge, Massachusetts.

She’ll be helping clients answer fundamental questions, such as:

  • Do we need to develop an iPad app for our product/service/website? If we don't build an app, what else should we do?
  • What are the best practices for developing app products for the iPad? What are the features of these best-in-class app products?
  • Which tablet platforms should we prioritize for development, aside from the iPad?
  • Which tablets will be the strongest competitors to the iPad?
Read more

Is The IaaS/PaaS Line Beginning To Blur?

James Staten

Forrester’s survey and inquiry research shows that, when it comes to cloud computing choices, our enterprise customers are more interested in infrastructure-as-a-service (IaaS) than platform-as-a-service (PaaS) despite the fact that PaaS is simpler to use. Well, this line is beginning to blur thanks to new offerings from Amazon Web Services LLC and upstart Standing Cloud.

The concern about PaaS centers on lock-in: developers and infrastructure and operations professionals fear that by writing to the PaaS layer’s services, their applications will lose portability (a concern that has long applied to middleware of any kind, PaaS or otherwise). IaaS platforms, which let you control the deployment model down to your choice of middleware, OS, and VM resources, are by contrast more open and portable. The tradeoff, though, is that this autonomy comes with a degree of complexity. As the figure below shows, the greater the abstraction a cloud service provides, the less specialized the skills its customers need. If your development skills are limited to scripting, web page design, and form creation, most SaaS platforms provide the right abstraction for you to be productive. If you are a true coder with skills in Java, C#, or other languages, PaaS offerings let you build more complex applications and integrations without having to manage middleware, OS, or infrastructure configuration; the PaaS service takes care of all that. IaaS, however, requires you to know this stuff. As a result, cloud services have an inverse pyramid of potential customers: despite being the most appealing to enterprise customers, IaaS is the hardest to use.
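
One way to make that tradeoff concrete is to compare what a customer must specify to deploy the same application at each layer. The sketch below is purely illustrative; the request shapes and field names are hypothetical stand-ins, not any provider’s actual API.

```python
# Illustrative only: hypothetical deployment requests, not a real vendor SDK.
# The point is how much of the stack the customer must specify (and therefore
# know how to operate) at each abstraction layer.

# IaaS: the customer owns every layer above the hypervisor.
iaas_request = {
    "vm_size": {"vcpus": 2, "ram_gb": 4},  # resource sizing is your call
    "os_image": "ubuntu-10.04-x64",        # so is the OS...
    "middleware": ["apache2", "tomcat6"],  # ...and the middleware stack
    "app_archive": "shop.war",             # and finally the application
}

# PaaS: the platform owns the OS and middleware; you just push code.
paas_request = {
    "runtime": "java6",                    # pick a supported runtime
    "app_archive": "shop.war",             # push the app; that's it
}

def layers_you_must_manage(request):
    """Every layer you specify is a layer you must know how to run."""
    return [key for key in request if key != "app_archive"]

print("IaaS:", layers_you_must_manage(iaas_request))  # three layers to operate
print("PaaS:", layers_you_must_manage(paas_request))  # one runtime choice
```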

Read more

Vendors Must Modify Strategies To Reach New Segments Of Mobile Workers

Michele Pelino

Vendors in the mobility ecosystem are dramatically underestimating the demand for mobility solutions in the corporate arena. Why? Because they are missing demand that will come from two emerging segments of employees: Mobile Wannabes and Mobile Mavericks. Combined, these two worker segments account for 22% of all employees today, but by 2015 they will grow to 42% of all corporate employees. To identify the needs of these mobile workers, Forrester analyzed results from the Forrsights Workforce Employee Survey, Q3 2010, which was fielded to over 5,500 employees in Canada, France, Germany, the UK, and the US and captured their smartphone usage, purchasing behavior, and mobile application adoption.

Mobile Wannabe employees work in desk jobs at an office and do not get mobile devices from the corporate IT department, but they “want to” use their own smartphones for work. Today, Mobile Wannabe workers account for 16% of all employees worldwide; by 2015, this segment will account for nearly 30% of all employees. Wannabe worker roles include executive assistants, clerical personnel, human resource workers, and customer service representatives. Momentum in this segment is driven by Millennial workers, who grew up with easy access to personal computers and mobile phones and often purchase smartphones before entering the workforce.

Read more

Why Product Strategists Should Embrace Conjoint Analysis

JP Gownder

Aside from my work with product strategists, I’m also a quant geek. For much of my career, I’ve written surveys (to study both consumers and businesses) to delve deeply into demand-side behaviors, attitudes, and needs. For my first couple of years at Forrester, I actually spent 100% of my time helping clients with custom research projects that employed data and advanced analytics to help drive their business strategies.

These days, I use those quantitative research tools to help product strategists build winning product strategies. I have two favorite analytical approaches: my second favorite is segmentation analysis, an important tool in its own right. But my very favorite tool for product strategists is conjoint analysis. If you, as a product strategist, don’t currently use conjoint, I’d like you to spend some time learning about it.

Why? Because conjoint analysis should be in every product strategist’s toolkit. Also known as feature tradeoff analysis or discrete choice modeling, conjoint analysis can help you choose the right features for a product, determine which features will drive demand, and model pricing for the product in a very sophisticated way. It’s the gold standard for price elasticity analysis, and it offers extremely actionable advice on product design. It helps address each of “the four Ps” that inform product strategies.
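
For readers who have never seen the mechanics, here is a deliberately minimal sketch of the estimation idea behind a ratings-based variant of conjoint: dummy-code the product attributes, then regress stated preferences on them to recover each feature’s “part-worth.” The profiles and ratings below are made up, and real studies use designed experiments, many respondents, and (for discrete choice) logit models rather than ordinary least squares.

```python
# Minimal ratings-based conjoint sketch with made-up data. Real studies use
# designed experiments, many respondents, and choice-based (logit) models.
import numpy as np

# Four tablet profiles, dummy-coded: [intercept, large_screen, low_price]
profiles = np.array([
    [1, 0, 0],   # small screen, high price
    [1, 0, 1],   # small screen, low price
    [1, 1, 0],   # large screen, high price
    [1, 1, 1],   # large screen, low price
], dtype=float)

# One respondent's preference ratings for the four profiles (hypothetical)
ratings = np.array([3.0, 5.0, 6.0, 9.0])

# Least squares recovers the "part-worth" each attribute level contributes
# to overall preference -- the basis for feature and pricing tradeoffs.
part_worths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
print("baseline utility:         %.2f" % part_worths[0])
print("part-worth, large screen: %.2f" % part_worths[1])
print("part-worth, low price:    %.2f" % part_worths[2])
```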

Read more

ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

Once nothing more than outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient appeals to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Up until now the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.

The Promised Land

The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?

Our first item of business is to figure out whether it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, far less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
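
The arithmetic behind that attraction is easy to check. The sketch below uses the wattage figures cited above (roughly 2 W for a dual-core Cortex A9 versus 35 to 65 W for a low-power x86 CPU); the CPUs-per-rack count is an assumption for illustration, and CPU power is only one component of total server power.

```python
# Back-of-the-envelope CPU power comparison using the figures cited above.
# The per-rack CPU count is a made-up assumption, and CPU power is only one
# component of total server power (memory, disk, fans, PSU losses all add up).
ARM_CPU_WATTS = 2.0                          # dual-core Cortex A9, approximate
X86_WATTS_LOW, X86_WATTS_HIGH = 35.0, 65.0   # "low-power" x86 range cited above

cpus_per_rack = 400                          # hypothetical dense web-tier rack

arm_kw = ARM_CPU_WATTS * cpus_per_rack / 1000.0
x86_low_kw = X86_WATTS_LOW * cpus_per_rack / 1000.0
x86_high_kw = X86_WATTS_HIGH * cpus_per_rack / 1000.0

print("ARM rack CPU power: %.1f kW" % arm_kw)
print("x86 rack CPU power: %.1f to %.1f kW" % (x86_low_kw, x86_high_kw))
print("CPU power ratio:    %.0fx to %.0fx" % (x86_low_kw / arm_kw, x86_high_kw / arm_kw))
```

Per-socket performance obviously differs, which is why the caveat above matters: the comparison only favors ARM where many small cores can absorb the workload.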

Read more

NetApp Acquires Akorri – Moving Up The Virtualization Stack

Richard Fichera

NetApp recently announced that it was acquiring Akorri, a small but highly regarded provider of management solutions for virtualized storage environments. All in all, this is yet another sign of the increasingly strategic importance of virtualized infrastructure and the need for existing players, regardless of how strong their positions are in their respective silos, to acquire additional tools and capabilities for management of an extended virtualized environment.

NetApp, while one of the strongest suppliers in the storage industry, faces continued pressure not only from EMC, which owns VMware and has been on a management software acquisition binge for years, but also renewed pressure from IBM and HP, which are increasingly tying their captive storage offerings into their own integrated virtualized infrastructure offerings. This tighter coupling of proprietary technology, while not explicitly disenfranchising external storage vendors, will still tighten the screws slightly and reduce the number of opportunities for NetApp to partner with them. Even Dell, long regarded as the laggard in high-end enterprise presence, has been ramping up its investment in management software and its ability to deliver integrated infrastructure. Its purchases of storage technology, its run at 3Par, and its recent investments in companies such as Scalent (see my previous blog on Dell as an enterprise player and my colleague Andrew Reichman’s discussion of the 3Par acquisition) send a very clear signal that it wants to go even further as a supplier of integrated infrastructure.

Read more

How Complexity Spilled The Oil

Jean-Pierre Garbani

The Gulf oil spill of April 2010 was an unprecedented disaster. The National Oil Spill Commission’s report summary shows that it could have been prevented with the use of better technology. For example, while the Commission agrees that the monitoring systems used on the platform provided the right data, it points out that the solution relied on engineers to make sense of that data and to correlate the right elements to detect anomalies. “More sophisticated, automated alarms and algorithms” could have been used to create meaningful alerts and perhaps prevent the explosion. The Commission’s report shows that the reporting systems used had not kept pace with the increasing complexity of drilling platforms. Another conclusion is even more disturbing: these deficiencies are not uncommon, and other drilling platforms in the Gulf of Mexico face similar challenges.

If we substitute “drilling platform” with “data center,” this sounds awfully familiar. How many IT organizations rely on relatively simple data collection from point monitoring of networks, servers, or applications while trying to manage the performance and availability of increasingly complex applications? IT operations engineers sift through mountains of data from different sources trying to make sense of what is happening, and they usually fall short of finding meaningful alerts. The consequences may not be as dire as the Gulf oil spill, but they still translate into lost productivity and revenue.
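
What would a “more sophisticated, automated alarm” actually look like? The sketch below is about the simplest possible version: flag any metric sample that deviates sharply from its own recent history, instead of waiting for a human to notice. It is a toy with synthetic data; production tools would correlate many metric streams and use far better statistics.

```python
# Toy automated alarm: flag samples that deviate sharply from their own
# recent history (rolling z-score), rather than relying on a human to
# eyeball raw metrics. Synthetic data; real tools correlate many streams.
from collections import deque
from math import sqrt

def rolling_zscore_alerts(samples, window=20, threshold=3.0):
    """Yield (index, value, z) for samples far outside the recent window."""
    recent = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(recent) == window:
            mean = sum(recent) / window
            std = sqrt(sum((v - mean) ** 2 for v in recent) / window) or 1e-9
            z = (x - mean) / std
            if abs(z) > threshold:
                yield i, x, z
        recent.append(x)

# Steady response times with one spike a human could easily miss in the noise.
metric = [100 + (i % 5) for i in range(200)]
metric[150] = 180  # the anomaly

for i, value, z in rolling_zscore_alerts(metric):
    print("ALERT at sample %d: value=%d, z=%.1f" % (i, value, z))
```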

The fact that many IT operations have not (yet) faced a meltdown is not a valid counterargument: there is a good reason to purchase hurricane insurance when one lives in Florida, even though destructive storms are not that common. As with the weather, there are so many variables at play in today’s business services that mere humans can’t be expected to make sense of them all.

Read more