Don’t Underestimate The Value Of Information, Documentation, And Expertise!

Andre Kindness

With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, whom they should engage, and when they should start embracing IPv6. As the old adage goes, “It takes a village to raise a child”; Forrester is only one component, so I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only, or even the largest, considerations when moving to a new vendor? I would say no.

Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity. In response, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:

  • Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
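
Working through an IPv6 transition starts with being able to reason about the addresses themselves. As a minimal sketch (the sample addresses are illustrative documentation-range values, not from the post), Python’s standard `ipaddress` module can classify addresses and expand IPv6 shorthand during a dual-stack inventory audit:

```python
import ipaddress

def classify(addr_strings):
    """Sort address strings into IPv4 and IPv6 buckets.

    IPv6 addresses are expanded to their full eight-group form so that
    inventory entries compare consistently.
    """
    v4, v6 = [], []
    for s in addr_strings:
        addr = ipaddress.ip_address(s)
        if addr.version == 4:
            v4.append(str(addr))
        else:
            v6.append(addr.exploded)  # full eight-group form
    return v4, v6

# Note: the IPv4-mapped form ::ffff:198.51.100.7 parses as an IPv6 address.
v4, v6 = classify(["192.0.2.10", "2001:db8::1", "::ffff:198.51.100.7"])
print(v4)  # ['192.0.2.10']
print(v6)
```

Expanding addresses this way is a small example of the kind of tactical detail that good vendor documentation spells out and poor documentation leaves you to discover on your own.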
Read more

AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since the introduction of its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several AMD-based system products have also been cited to us by their manufacturers as enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with their attractive pricing. Building on this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext: the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and AMD is still certifying them as compatible with the current sockets used for the 12-core 6100 CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, for large throughput-oriented environments with high numbers of processes (a description that fits many web and middleware environments), these CPUs, each with up to a 50% performance advantage per core over the current AMD CPUs, may deliver some impressive benchmarks and keep the competition in the server space at a boil, which in the end is always helpful to customers.

Is Infrastructure & Operations Vulnerable To Job Market Trends?

Jean-Pierre Garbani

A couple of weeks ago, I read that one of the largest US car makers was trying to buy out several thousand machinists and welders. While we have grown accustomed to bad news in this economy, what I found significant was that these were skilled workers. Personally, I find it a lot easier to write code than to weld two pieces of steel together, and I have tried both.

For the past 20 years, the job market in industrialized countries has shown a demand increase at the high and low ends of the wage and skill scale, to the detriment of the middle. Although it’s something that we may have intuitively perceived in our day-to-day lives, a 2010 paper by David Autor of MIT confirms the trend:

“. . . the structure of job opportunities in the United States has sharply polarized over the past two decades, with expanding job opportunities in both high-skill, high-wage occupations and low-skill, low-wage occupations, coupled with contracting opportunities in middle-wage, middle-skill white-collar and blue-collar jobs.”

One of the reasons for this bipolarization of the job market is that most of the tasks in the middle market are based on well-known and well-documented procedures that can be easily automated by software (or simply offshored). This leaves, at the high end, jobs that require analytical and decision-making skills usually based on a solid education, and at the low end, “situational adaptability, visual and language recognition, and in-person interactions. . . . and little in the way of formal education.”

Can this happen to IT? As we fast-forward to an industrial IT, we tend to replicate what other industries did before us: remove the person in the middle through automation and thus polarize the skill and wage opportunities at both ends of the scale.

Read more

BSM Rediscovered

Jean-Pierre Garbani

I have in the past lamented the evolution of BSM into more of an ITIL support solution than the pure IT management project that we embarked on seven years ago. In the early years of BSM, we were all convinced of the importance of application dependency discovery: It was the bridge between the user, who sees an application, and IT, which sees infrastructure. We were equally convinced that discovery tools should be embedded in infrastructure management solutions to improve them. I remember conversations with product managers at all of the big four, and we all agreed at the time that the “repository” of dependencies, later to become the CMDB, was not a standalone solution. How little we knew!

What actually happened was that the discovery tools showed a lot of limitations, and the imperfect CMDB that resulted became the center of the ITIL v2 universe. The two essential components that we saw in BSM for improving the breed of system management tools were quietly forgotten: 1) real-time dependency discovery, because last month’s application dependencies are as good as yesterday’s newspaper when it comes to root cause analysis or change detection; and 2) the reworking of tools around these dependencies, because it added a level of visibility and intelligence that was sorely lacking in the then-current batch of monitoring and management solutions. But there is hope on the IT operations horizon.
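
The point about real-time discovery can be made concrete with a toy sketch (the service names and the freshness window are invented for illustration, not from any vendor’s product): a dependency edge is only as useful as its last-seen timestamp, so a map that expires stale edges answers the root-cause question “who talks to whom right now?” rather than “who talked to whom last month?”:

```python
import time

class DependencyMap:
    """Toy real-time dependency map.

    Records which service talks to which, and expires edges that have not
    been observed within a freshness window.
    """

    def __init__(self, max_age_seconds=300):
        self.max_age = max_age_seconds
        self.edges = {}  # (source, target) -> last-seen timestamp

    def observe(self, source, target, now=None):
        """Record (or refresh) a source -> target dependency."""
        self.edges[(source, target)] = now if now is not None else time.time()

    def current(self, now=None):
        """Return only the edges still within the freshness window."""
        now = now if now is not None else time.time()
        return sorted(edge for edge, seen in self.edges.items()
                      if now - seen <= self.max_age)

m = DependencyMap(max_age_seconds=300)
m.observe("web-frontend", "app-server", now=1000)
m.observe("app-server", "orders-db", now=1200)
print(m.current(now=1250))  # both edges still fresh
print(m.current(now=1400))  # the older edge has aged out
```

A real discovery engine would feed `observe()` from live network flows or instrumentation rather than hand-entered tuples, but the expiry logic is the part the CMDB-centric tools of the ITIL v2 era lacked.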

These past few days, I have been briefed by two new companies that are actually going back to the roots of BSM.

Neebula has introduced a real-time discovery solution that continuously updates itself and is embedded in an intelligent event and impact analysis monitor. It also discovers applications in the cloud.

Read more

The Passing Of A Giant – Digital Equipment Founder Ken Olsen Dead At 84

Richard Fichera

One evening in 1972 I was hanging out in the computer science department at UC Berkeley with a couple of equally socially backward friends waiting for our batch programs to run, and to kill some time we dropped in on a nearby physics lab that was analyzing photographs of particle tracks from one of the various accelerators that littered the Lawrence Radiation Laboratory. Analyzing these tracks was real scut work – the overworked grad student had to measure angles between tracks, measure lengths of tracks, and apply a number of calculations to them to determine if they were of interest. To our surprise, this lab had something we had never seen before – a computer-assisted screening device that scanned the photos and in a matter of seconds determined whether they contained any formations of interest. It had a big light table, a fancy scanner, whirring arms and levers and gears, and off in the corner, the computer, “a PDP from Digital Equipment.” It was a 19” rack mount box with an impressive array of lights and switches on the front. As a programmer of the immense 1 MFLOP CDC 6400 in the Rad Lab computer center, I was properly dismissive…

This was a snapshot of the dawn of the personal computer era, almost a decade before IBM introduced the PC and blew it wide open. The PDP (Programmed Data Processor) systems from Ken Olsen’s Digital Equipment Corporation were the beginning of a fundamental change in the relationship between man and computer, putting a person in the computing loop instead of keeping them standing outside the temple.

Read more

Verizon Steps Into IaaS Cloud Leadership Ranks

James Staten

Pop Quiz: What’s the fastest way to build a credible, enterprise-relevant, and highly profitable cloud computing services practice? Buy one that already is. That’s exactly what Verizon did last week when it pushed $1.4B across the table to Terremark. Despite its internal efforts to build an infrastructure-as-a-service (IaaS) business over the last two years, Verizon simply couldn’t learn the best practices fast enough to match the gains in the market it received through this move. Terremark has one of the strongest IaaS hosting businesses in the market and perhaps the best enterprise mix of any top-tier provider’s customer base. It also has a significant presence with government clients, including the US General Services Administration (GSA), which has production systems running in a hybrid mode between Terremark’s IaaS and traditional managed hosting services.

Confidential Forrester client inquiries have shown Verizon struggling to win competitive IaaS bids with its computing-as-a-service (CaaS) offering, often losing to Terremark. This led Verizon to resell the Terremark solution (as its CaaS for SMB) so it could try before buying.

Read more


The ITSM Selection Process

Eveline Oehrlich

Almost every day I get the question: “We want to replace our ITSM support tool; which vendor should we look at?” There are many alternatives today, and each vendor has certainly done a great amount of work to position itself as the best. Much of the success I have had in consulting, and the knowledge I carry with me now, I owe to the clients with whom I have discussed the ITSM space. They have all confirmed that the functionality across these vendors is very similar. That, however, does not help in decision-making, so I’m especially excited to have authored a three-piece research document which might take some of the magic out of the decision process when selecting ITSM support tools in the future.

This Forrester report is called Eliminate Magic When Selecting The Right IT Service Management (ITSM) Support Tool. It’s an overview of the process decision-makers need to follow and the important, but sometimes overlooked, criteria to keep in mind as they work toward launching or engaging with the ITSM vendor community.

I identified four phases of the evaluation process that should be followed:

Plan: Lay the groundwork, set objectives, explore existing conversations, and make necessary early decisions.

Assemble an evaluation team: Putting the right people together to understand the use cases and requirements is critical before the next step.

Define your requirements: Use the ITSM Support Tools Product Comparison to define your requirements.

Read more


IBM And ARM Continue Their Collaboration – Major Win For ARM

Richard Fichera

Last week IBM and ARM Holdings Plc quietly announced a continuation of their collaboration on advanced process technology, this time with a stated goal of developing ARM IP optimized for IBM physical processes down to a future 14 nm size. The two companies have been collaborating on semiconductors and SOC design since 2007, and this extension has several important ramifications for both companies and their competitors.

It is a clear indication that IBM retains a major interest in low-power and mobile computing, despite its previous divestment of its desktop and laptop computers to Lenovo, and that it will be in a position to harvest this technology, particularly ARM's modular approach to composing SOC systems, for future productization.

For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products in approximately 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The existence of a roadmap to a potential 14 nm product serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think these will not be extended, but the IBM alliance is an additional insurance policy. Beyond being a source of semiconductor technology, IBM has a deep well of systems and CPU IP that certainly cannot hurt ARM.

Read more

POST: Refining Your Strategy For iPads and Tablets -- The Workshop!

JP Gownder

Are you a product strategist trying to craft an iPad (or general tablet) product strategy? For example, are you thinking about creating an app to extend your product proposition using the iPad or another tablet computer?

At Forrester, we’ve noticed that product strategists in a wide variety of verticals – media, retail, travel, consumer products, financial services, pharmaceuticals, software, and many others – are struggling to make fundamental decisions about how the iPad (and newer tablets based on Android, Windows, webOS, RIM’s QNX, and other platforms) will affect their businesses.

To help these clients, an analyst on my team, Sarah Rotman Epps, has designed a one-day Workshop that she’ll be conducting twice, on February 8th and February 9th, in Cambridge, Massachusetts.

She’ll be helping clients answer fundamental questions, such as:

  • Do we need to develop an iPad app for our product/service/website? If we don't build an app, what else should we do?
  • What are the best practices for developing app products for the iPad? What are the features of these best-in-class app products?
  • Which tablet platforms should we prioritize for development, aside from the iPad?
  • Which tablets will be the strongest competitors to the iPad?
Read more

Is The IaaS/PaaS Line Beginning To Blur?

James Staten

Forrester’s survey and inquiry research shows that, when it comes to cloud computing choices, our enterprise customers are more interested in infrastructure-as-a-service (IaaS) than platform-as-a-service (PaaS) despite the fact that PaaS is simpler to use. Well, this line is beginning to blur thanks to new offerings from Amazon Web Services LLC and upstart Standing Cloud.

The concern about PaaS centers on lock-in: developers and infrastructure and operations professionals fear that by writing to the PaaS layer’s services, their application will lose portability (a concern that has long applied to middleware, PaaS or otherwise). As a result, IaaS platforms that let you control the deployment model down to the middleware, OS, and VM resource choices are more open and portable. The tradeoff, though, is that developer autonomy comes with a degree of complexity. As the figure below shows, there is a direct correlation between the degree of abstraction a cloud service provides and the skill set required of the customer. If your development skills are limited to scripting, web page design, and form creation, most SaaS platforms provide the right abstraction for you to be productive. If you are a true coder with skills in Java, C#, or other languages, PaaS offerings let you build more complex applications and integrations without having to manage middleware, OS, or infrastructure configuration; the PaaS services take care of this. IaaS, however, requires you to know this stuff. As a result, cloud services have an inverse pyramid of potential customers: despite being more appealing to enterprise customers, IaaS is the hardest to use.
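
The abstraction-versus-skills correlation can be sketched as a rough responsibility split (the layer names and groupings below are an illustrative simplification, not a formal taxonomy or anything from the figure): the fewer layers the provider abstracts away, the more layers the customer must staff skills for.

```python
# Rough sketch of the cloud responsibility split: which stack layers the
# customer manages under each service model. Illustrative only.
STACK = ["application", "data", "middleware", "OS", "virtualization", "hardware"]

CUSTOMER_MANAGED = {
    "SaaS": [],                                           # provider runs the whole stack
    "PaaS": ["application", "data"],                      # customer brings code and data
    "IaaS": ["application", "data", "middleware", "OS"],  # customer manages down to the OS
}

def customer_burden(model):
    """Number of stack layers the customer must staff skills for."""
    return len(CUSTOMER_MANAGED[model])

for model in ("SaaS", "PaaS", "IaaS"):
    print(model, customer_burden(model))
```

Read top to bottom, the table mirrors the inverse pyramid described above: SaaS demands the least of the customer and IaaS the most, which is exactly why the IaaS offerings enterprises prefer are also the hardest to use.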

Read more