Cloud Computing Will Save IT Millions, But Only If You Have Elastic Applications

Mike Gualtieri

Do you keep every single light on in your house even though you are fast asleep in your bedroom?

Of course you don't. That would be an abject waste. Then why do most firms deploy peak-capacity infrastructure resources that run around the clock even though their applications have distinct usage patterns? Sometimes the applications are sleeping (low usage). At other times, they are huffing and puffing under the stampede of glorious customers. The answer: they have no choice. Application developers and infrastructure operations pros collaborate (call it DevOps if you want) to determine the infrastructure that will be needed to meet peak demand.

  • One server, two server, three server, four.
  • The business is happy when the web traffic pedal is to the floor.
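To put rough numbers on that peak-capacity habit, here is a back-of-the-envelope sketch in Python. Every figure in it (hourly rate, fleet size, the daily usage curve) is a hypothetical assumption for illustration, not Forrester data.

```python
# Back-of-the-envelope comparison of peak vs. elastic provisioning.
# All numbers are hypothetical, for illustration only.

HOURLY_RATE = 0.50   # assumed cost per server-hour, in dollars
PEAK_SERVERS = 100   # servers needed to survive the daily traffic spike

# Hypothetical daily usage pattern: fraction of peak capacity needed each hour.
hourly_demand = [0.1] * 7 + [0.4] * 4 + [1.0] * 2 + [0.6] * 8 + [0.2] * 3

# Static provisioning: pay for peak capacity around the clock.
static_cost = PEAK_SERVERS * HOURLY_RATE * 24

# Elastic provisioning: pay only for the servers each hour actually needs.
elastic_cost = sum(
    max(1, round(PEAK_SERVERS * demand)) * HOURLY_RATE
    for demand in hourly_demand
)

print(f"Static (peak) cost per day: ${static_cost:,.2f}")
print(f"Elastic cost per day:       ${elastic_cost:,.2f}")
print(f"Daily savings:              ${static_cost - elastic_cost:,.2f}")
```

Multiply savings like these across hundreds of applications and years of operation, and the millions in the title stop sounding like hyperbole.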
Read more

Painting The IT Industry Landscape

Chris Mines

All of us in the technology industry get caught up in the near-term fluctuations and pressures of our business. This quarter’s earnings, next quarter’s shipments, this year’s hiring plan . . . it’s easy to get swallowed up by the flood of immediate concerns. So one of the things that we work hard on at Forrester, and that our clients value in their relationships with us, is taking a few steps back and looking at the longer-term, bigger picture of the size and shape of the industry’s trajectory. It provides strategic and financial context for the short-term fluctuations and trends that buffet all of us.

I am lucky to co-lead research in Forrester's Vendor Strategy team, which is explicitly chartered to predict and quantify the new growth opportunities and disruptions facing strategists at some of our leading clients. We will put those predictions on display later this month at Forrester's IT Forum, our flagship client event. Among the sessions that Vendor Strategy analysts will be leading:

  • "The Software Industry in Transition": Holger Kisker will preview his latest research detailing best practices for software vendors navigating the tricky transition from traditional license to as-a-service pricing and engagement models.
  • "The Computing Technologies of 2016": Frank Gillett will put us in a time machine for a trip five years into the future of computing, storage, network, and component technologies that will underpin new applications, new experiences, and new computing capabilities.
Read more

Not Your Grandfather’s Data Warehouse

Brian Hopkins

As I dug into my initial research, it dawned on me: some technology trends are having an impact on information management/data warehouse (DW) architectures, and EAs should consider these when planning out their firm’s road map. My next thought: this wasn’t completely obvious when I began. The final thought? As the EA role analyst covering emerging technology and trends, this is exactly the kind of material I need to be writing about.

Let me explain:

No. 1: Big Data expands the scope of DWs. A challenge with typical data management approaches is that they are not suited to dealing with data that is poorly structured, sparsely attributed, and high-volume. For example, today’s DW appliances boast the ability to handle up to 100 TB of data, but the data must be transformed into a highly structured format to be useful. Big Data technology applies the power of massively parallel distributed computing to capture and sift through data gone wild – that is, data at an extreme scale of volume, velocity, and variability. Big Data technology does not deliver insight by itself, however – insights depend on analytics that come from combining the results of things like Hadoop MapReduce jobs with the manageable “small data” already in your DW.
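To make the pattern concrete, here is a minimal Hadoop Streaming-style map and reduce pair in Python that counts events per customer in raw logs; the small aggregate it produces is the kind of result you would then combine with the structured “small data” already in your DW. The tab-delimited log format and field positions are assumptions made for this sketch.

```python
# Minimal MapReduce sketch in the Hadoop Streaming style.
# Counts events per customer in raw, semi-structured logs, producing a
# small aggregate that can be joined with data already in the warehouse.
import sys
from itertools import groupby


def mapper(lines):
    """Emit (customer_id, 1) for each well-formed log line."""
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:  # tolerate poorly structured records
            yield fields[0], 1


def reducer(pairs):
    """Sum counts per customer; input must be sorted by key, as Hadoop ensures."""
    for customer_id, group in groupby(pairs, key=lambda kv: kv[0]):
        yield customer_id, sum(count for _, count in group)


if __name__ == "__main__":
    # Simulate the Hadoop pipeline locally: map -> sort by key -> reduce.
    for customer_id, total in reducer(sorted(mapper(sys.stdin))):
        print(f"{customer_id}\t{total}")
```

At cluster scale, Hadoop runs many mapper and reducer instances in parallel and handles the sort-and-shuffle between them; the insight still comes later, when output like this lands next to the facts and dimensions in the warehouse.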

Even the notion of a DW is changing when we start to think “Big” – Apache just graduated Hive from being a Hadoop subproject to a top-level project of its own (Hive is a DW framework for Big Data). If you have any doubt, read James Kobielus’ “The Forrester Wave™: Enterprise Data Warehousing Platforms, Q1 2011.”

Read more

Intel Shows the Way Forward, Demos 22 nm Parts with Breakthrough Semiconductor Design

Richard Fichera

What Intel said and showed

Intel has been publishing research for about a decade on what it calls “3D Trigate” transistors, which held out the hope of both improved performance and better power efficiency. Today Intel revealed details of the commercialization of this research in its upcoming 22 nm process and demonstrated actual systems based on 22 nm CPU parts.

The new products, under the internal name “Ivy Bridge,” are the process shrink of the recently announced Sandy Bridge architecture in the next “Tick” cycle of the famous Intel “Tick-Tock” design methodology, where the “Tick” is the shrink of an existing architecture onto the next-generation semiconductor process and the “Tock” is a new optimized architecture.

What makes these Trigate transistors so innovative is that they change the fundamental geometry of the semiconductors from a basically flat “planar” design to one with more vertical structure, earning them the description “3D.” For users, the concept is simpler to understand: this new transistor design, which will become the standard across all of Intel’s products moving forward, delivers some fundamental benefits to CPUs implemented with it:

  • Leakage current is reduced to near zero, resulting in very efficient operation for systems in an idle state.
  • Power consumption at equivalent performance is reduced by approximately 50% from Sandy Bridge’s already improved results with its 32 nm process.
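A rough, purely illustrative calculation shows why those two bullets compound. The wattages and duty cycle below are hypothetical assumptions; only the claimed ratios (roughly 50% active power, near-zero leakage at idle) come from the announcement.

```python
# Illustrative daily energy estimate for the two bullets above.
# Wattages and duty cycle are hypothetical assumptions.

ACTIVE_W_32NM = 95.0  # assumed active power of a 32 nm part, in watts
IDLE_W_32NM = 30.0    # assumed idle power, dominated by leakage

ACTIVE_W_22NM = ACTIVE_W_32NM * 0.5  # ~50% reduction at equivalent performance
IDLE_W_22NM = 2.0                    # near-zero leakage -> tiny idle draw

BUSY_HOURS = 8.0                     # assumed hours per day under load
IDLE_HOURS = 24.0 - BUSY_HOURS

def daily_wh(active_w, idle_w):
    """Energy for one day split between busy and idle hours, in watt-hours."""
    return active_w * BUSY_HOURS + idle_w * IDLE_HOURS

old = daily_wh(ACTIVE_W_32NM, IDLE_W_32NM)
new = daily_wh(ACTIVE_W_22NM, IDLE_W_22NM)
print(f"32 nm: {old:,.0f} Wh/day; 22 nm: {new:,.0f} Wh/day "
      f"({1 - new / old:.0%} less energy)")
```

Under these assumed numbers, the part spends most of the day idle, so the near-zero leakage matters as much as the headline active-power reduction.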
Read more

A Few Thoughts On Communicating Risk

Chris McClean

In my new report, The Risk Manager's Handbook: How To Measure And Understand Risks, I present industry best practices and guidance on ways to articulate the extent or size of a risk. Even more than the interpersonal, political, and leadership skills required of risk management professionals, defining how risks are measured and communicated is where I believe they prove their worth. If risk measurement techniques are too complicated, they may discourage crucial input from colleagues and subject matter experts... but if they are too simple, they won't yield enough relevant information to guide important business decisions. Great communication skills can only hide irrelevant information for so long.

The report covers factors to use in the risk measurement process, ways to present risk measurement data meaningfully, and criteria for deciding which of these methods are most appropriate. As always, your feedback is welcome and appreciated.

In addition, I will be covering a related topic with our Security and Risk Council in a session called Creating A High-Impact Executive Report, along with my colleague Ed Ferrara, at Forrester's upcoming IT Forum: Accelerate At The Intersection Of Business And Technology, May 25-27, in Las Vegas. Please join us if you can make it. Later in the week, I will be available for 1-on-1 meetings with attendees, and I'll also present sessions on linking governance and risk and on establishing good vendor risk management practices. I hope to see you there.

Is SAP BusinessObjects 4.0 Worth The Wait?

Boris Evelson

The SAP BusinessObjects (BO) 4.0 suite is here. It’s been in the ramp-up phase since last fall; according to our sources, SAP plans to announce its general availability sometime in May, possibly at Sapphire. It’s about a year late (SAP first told Forrester that it planned to roll it out in the spring of 2010, so I wanted to include it in the latest edition of the Forrester Wave™ for enterprise BI platforms but couldn’t), and the big question is: Was it worth the wait? In my humble opinion, yes, it was! Here are seven major reasons to upgrade, or to consider SAP BI if you haven’t done so before:

  1. BO Universe (semantic layer) can now be sourced from multiple databases, overcoming a major obstacle of previous versions.
  2. Universe can now access MOLAP (cubes from Microsoft Analysis Services, Essbase, Mondrian, etc.) data sources directly via MDX without having to “flatten them out” first. In prior versions, Universe could only access SQL sources.
  3. There’s now a more common look and feel to individual BI products, including Crystal, WebI, Explorer, and Analysis (formerly BEx). This is another step in the right direction to unify SAP BI products, but it’s still not a complete solution. It will be a while before all SAP BI products, as well as other BI tools/platforms that grew more organically, are fully and seamlessly integrated.
  4. All SAP BI tools that previously lacked access to the BO Universe, including Xcelsius (Dashboards in 4.0), now have it.
  5. There’s now a tighter integration with BW via direct exposure of BW metadata (BEx queries and InfoProviders) to all BO tools.
Read more

What Should IT Do If Empowered BT Increases In Popularity?

Marc Cecere

An empowered BT model includes the idea that end users will take on some functions that are typically performed within an IT organization. These may include selecting and deploying applications, buying mobile devices, and contracting with services firms.

With factors such as the increased availability of cloud applications, more IT-savvy businesspeople, and IT shops buried in maintaining existing applications, there’s plenty of momentum behind moving IT functions outside of IT. However, security and compliance concerns, the need to integrate apps and data, the complexity of these applications, and cost are just some of the constraints holding this approach back.

Whether the overall trend favors functions moving out of the IT organization or the reverse, with IT taking on more control, empowered BT will happen in some organizations. When it does, there are things CIOs can do to exploit it and minimize potential damage:

  • Shift senior IT people from “doing” to consulting and overseeing. Architects, for example, spend a significant amount of their time on projects (doing). Some of their time needs to be freed up to provide advice to businesspeople on how to make these functions scalable, secure, and integrated where necessary. Similarly, vendor managers need time to help businesspeople in the selection process for vendors.
  • Select for and build up negotiation skills. The apps leader who speaks only in technical terms, the security expert who generates every possible scenario as an argument for not doing something, and the architect who hoards information while making pronouncements on what the business should and should not do are all working against you in an empowered BT world. With technically sophisticated end users and tools that can quickly build functionality, business requests that once led to IT responses now become negotiations.
Read more

A Gem of a Deal: SuccessFactors and Plateau Come Together

Claire Schooley

A major announcement in the human capital management (HCM) world occurred on April 26, 2011. SuccessFactors, a top vendor in performance management, announced its intention to purchase Plateau Systems, a leading learning management system (LMS) vendor. Although both vendors have competing products in the talent management space, Plateau had something SuccessFactors needed: an LMS. With the loss of GeoLearning, SuccessFactors’ former LMS partner, which was acquired by SumTotal in January 2011, SuccessFactors was left with a gaping hole in its solution set.

Although SuccessFactors executives believe that the future of learning lies more in the informal and social realms, organizations need and want LMSes to manage their growing compliance training requirements, even as they keep a close eye on the social and informal learning market. Organizations also have formal courses and simulated and role-play learning that the LMS tracks and reports on. The word is that SuccessFactors’ sales staff have been bemoaning the lack of an LMS to help them close deals. Today, organizations are much more interested in getting multiple HCM functions from one vendor. Often this suite approach includes performance, compensation, learning, and even recruiting (for more details, see my “Four Pillars of Talent Management” research report). SuccessFactors now has a very strong and complete “four-pillar” solution.

Read more

The Application Server Bubble Is About To Burst

Mike Gualtieri

Traditional application servers such as WebSphere, WebLogic, and JBoss are dinosaurs tiptoeing through a meteor storm. Sure, IBM, Oracle, and Red Hat still have growing revenue in these brands, but the smart money should look for better ways to develop, deploy, and manage apps. The reason: cloud computing.

The availability of elastic cloud infrastructure means that you can conserve capital by avoiding huge hardware investments, deploy applications faster, and pay for only the infrastructure resources you need at a given time. Sound good? Yes. Of course, there are myriad problems, such as security and availability concerns (especially after the recent Amazon outage). The problem I want to discuss is application elasticity. Forrester defines application elasticity as:

The ability of an application to automatically adjust the infrastructure resources it uses to accommodate varied workloads and priorities while maintaining availability and performance.
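To make that definition tangible, here is a minimal sketch of an elasticity control loop in Python. The hooks `get_avg_utilization` and `set_instance_count` are hypothetical placeholders, not any vendor's API; a real platform would wire them to its monitoring and provisioning services.

```python
# Minimal sketch of an application elasticity control loop.
# get_avg_utilization() and set_instance_count() are hypothetical hooks
# standing in for a platform's monitoring and provisioning services.
import math
import time

MIN_INSTANCES = 2          # keep a floor of capacity for availability
MAX_INSTANCES = 50         # cap spend even under a stampede of customers
TARGET_UTILIZATION = 0.60  # desired average utilization per instance


def autoscale(get_avg_utilization, set_instance_count, instances=MIN_INSTANCES):
    """Adjust the instance count so utilization stays near the target."""
    while True:
        utilization = get_avg_utilization()  # e.g., CPU or request backlog
        # Instances needed to bring utilization back to the target level.
        desired = math.ceil(instances * utilization / TARGET_UTILIZATION)
        desired = max(MIN_INSTANCES, min(MAX_INSTANCES, desired))
        if desired != instances:
            set_instance_count(desired)  # scale out or in
            instances = desired
        time.sleep(60)  # re-evaluate once a minute
```

The point of the definition is that the application, not an operator, drives this loop: the app has to expose meaningful workload signals and tolerate instances appearing and disappearing while maintaining availability and performance.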

Elastic Application Platforms Are Not Containers

Read more

Poor User Experiences WILL Kill Your Customer Service App

Kate Leggett

There’s a huge graveyard of failed customer service software implementations, and still others are on life support for the simple reason that they are not usable. Think of the world we live in, with products and services from Amazon, Apple, Google, Facebook, and the like:

  • Intuitive user interfaces that don’t require training to use
  • Touchscreens
  • One-click processes
  • Predictive type-ahead where suggested topics are displayed in a dropdown menu to help users autocomplete their search terms (see the sketch after this list)
  • Aggregation of content from different sources, all linked together so that it adds value to the user
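As one concrete illustration, here is a minimal prefix-matching sketch of the type-ahead bullet in Python. Production systems typically use a search index with popularity ranking; the sample topics here are invented.

```python
# Minimal sketch of predictive type-ahead over a sorted list of help topics.
# Real implementations usually add a search index and popularity ranking;
# the topics below are invented sample data.
from bisect import bisect_left

TOPICS = sorted([
    "refund status",
    "reset password",
    "reset router",
    "return policy",
    "track order",
])


def suggest(prefix, limit=5):
    """Return up to `limit` topics that start with the typed prefix."""
    prefix = prefix.lower().strip()
    if not prefix:
        return []
    start = bisect_left(TOPICS, prefix)  # binary search for the first match
    matches = []
    for topic in TOPICS[start:]:
        if not topic.startswith(prefix):
            break  # sorted order means no later topic can match
        matches.append(topic)
        if len(matches) == limit:
            break
    return matches


print(suggest("re"))  # ['refund status', 'reset password', 'reset router', 'return policy']
```

The contract stays the same at any scale: a partial query in, a short ranked list of suggested topics out.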
Read more