In a recent blog post called "Drop The Pilot," Andrew McAfee argues that most "Enterprise 2.0" pilots are unintentionally set up to fail, in part because such enterprise communities depend upon broad employee acceptance in order to be effective. This doesn't mean that collaboration platforms are only effective in organizations with tens of thousands of employees, but it certainly helps. The challenge with pilots is that they are frequently focused on a subset of the organization, so they never really have the chance to fully realize their potential. Perhaps the best pilots are those that are limited not in scale but in time -- they measure adoption rates over time and use the pilot to figure out how to make the final rollout more successful.
In his blog post McAfee goes on to suggest six steps toward effective deployment which gel nicely with the key lessons learned from the United Business Media (UBM) case study published recently. McAfee suggests you should:
So you need to make an application modernization decision -- what to do with a given application. How do you begin that decision-making process? In the past, modernization decisions were often simply declared -- "We are moving to this technology" -- for a number of reasons, such as, it:
Keeps us current on technology.
Provides a more acceptable user-interface or integration capability.
Increases our exposure to access by external customers.
Increases the volume of business transactions we can process.
Trades custom/bespoke applications for standardized application packages such as ERP, payroll, human resources, etc.
Fast-forward to today: you could simply go with your gut and declare a solution based on what you currently know (or think you know) about the application in question. But it's a new day, baby -- a proposal like that, without proper justification, is likely to be met with one of two responses from management:
The green IT track at Interop Las Vegas kicked off with a session from yours truly on “The Evolution Of Green IT: Projects That Cut Cost, Avoid Risk, And Grow Revenues” to help IT professionals plan for green IT’s current and future state, backed up with a number of real-life examples. Here are the key takeaways that I&O professionals should pay attention to:
Business-IT alignment is one of those persistent "Top 3" CIO issues. It has been this way just about as long as I’ve been in IT. You would think this would have been solved by now. After all, you put in business-driven IT governance, relationship managers, and some really nice dashboards, and you’ve covered about 90% of the advice out there. I’m going to suggest that business-IT alignment is being held hostage by complexity. Not technology complexity, since business leaders seem to be coming to terms with that. And not the mind-numbing spaghetti charts that show how complex our application and infrastructure landscapes are. They don’t understand these charts, but since we don’t understand them either, we can hardly expect business execs to. The complexity I’m referring to lies between their goals and the "stuff" IT delivers. They don’t see the connection. And since business execs have lots of goals, which shift over time, and strategies that also shift, we can’t show the connection. Instead, we say, "This is what you asked for, and this is what we delivered."
This week is Interop Las Vegas 2010, arguably the largest industry conference in North America targeting IT professionals. While the event has its roots in networking, today’s Interop has 13 tracks ranging from cloud computing and virtualization, to mobility and video conferencing, to governance, risk, and compliance. I’ve had the pleasure of chairing the data center and green IT tracks at the last three Interop Las Vegas and New York events.
Don’t have the opportunity to be at Interop in person? Forrester has you covered…
Fellow Forrester analyst Rachel Dines and I are onsite at Interop, and we will be posting the key takeaways for IT Infrastructure & Operations (I&O) professionals here on Forrester’s I&O blog. We encourage you to check the blog over the next few days for Forrester’s insights on the following data center and green IT sessions:
I'll admit to spending only three hours on the show floor. Most of my time was spent in the cavernous and gloomy AIIM sessions area, where I gave an "Analyst Take" session on SharePoint 2010, a talk on dynamic case management, and reviewed suppliers of document output for customer communications. My impression was that floor activity had improved over the last two years. Perhaps the contraction of sponsorships had hit the right balance with demand, or perhaps the great spring weather and improving economy were at work, but the mood was upbeat and the crowds were steady. Vendors were grumbling less. Cloud talk and SaaS were under-represented. E-discovery and records management were in line. And the usual interesting collection of arcane conversion, migration, capture, and other providers -- usually in the lower-rent districts -- continued the tradition. SharePoint was again pervasive. Those who say "that ship has come in" may not be aware of other ports and forms of transportation. One wonders what the future of the show is if the SharePoint sessions are the biggest draw and Microsoft and key partners have the biggest booths. Philly is a city that has lost its major corporate headquarters and no longer has growth industries -- but it does not deserve its reputation. The AIIM show -- with roots in microfilm and paper -- is similar, and likewise is still pretty good.
Product strategists struggle with the issue of value all the time: What constitutes a revenue-maximizing price for my product, given the audience I’m targeting, the competition I’m trying to beat, the channel for purchase, and the product’s overall value proposition?
There are tools like conjoint analysis that can help product strategists test price directly via consumer research. However, there’s a bigger strategic question in the background: How can companies create and sustain consistently higher prices than their key competitors over the long term?
The Mac represents a good case study for this business problem. Macs have long earned a premium over comparable Windows PCs. Though prices for Macs have come down over time, they remain relatively more expensive, on average, than Windows-based PCs. In fact, they’ve successfully cornered the market on higher-end PCs: According to companies that track the supply side, perhaps 90% of PCs that sold for over $1,000 in Q4 2009 were Macs.
Macs share common characteristics with Windows PCs on the hardware front – ever since Apple switched to Intel processors about four years ago, they’ve had comparable physical elements. But the relative pricing for Macs has remained advantageous to Apple. At the same time, the Mac has gained market share and is bringing new consumers into the Mac family – for example, about half of consumers who bought their Mac in an Apple Store in Q1 2010 were new to the Mac platform. So Apple is doing something right here – providing value to consumers to make them willing to pay more.
There’s a lot of hype from many vendors who claim that they have tools and technologies to enable BI end user self service. Do they? When you analyze whether your BI vendor can support end user self service, consider the following types of “self service” and the related BI tool requirements:
#1. Self service for average, casual users.
What do these users need to do?
Run and lightly customize canned reports and dashboards
Run ad hoc queries
Add calculated measures
Fulfill their BI requirements with little or no training (typically one needs a search-like, not a point-and-click, UI for this)
What capabilities do they need for this?
Report and dashboard templates
Customizable prompts, sorts, filters, and ranks
Report, query, dashboard building wizards
Semantic layer (not all BI vendors have a rich semantic layer)
Prompting for columns (not all BI vendors let you do that)
Drill anywhere (only BI vendors with ROLAP and multisourcing / data federation provide this capability)
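To make the casual-user capabilities above concrete, here is a minimal sketch of what "canned reports with customizable prompts, sorts, filters, and ranks" means in practice. The data, column names, and prompt parameters are invented for illustration; this is not any vendor's API:

```python
# Illustrative only: a pre-built ("canned") report that casual users
# customize through prompts rather than by authoring queries.
SALES = [
    {"region": "East", "product": "Widget", "revenue": 1200},
    {"region": "West", "product": "Widget", "revenue": 950},
    {"region": "East", "product": "Gadget", "revenue": 1700},
    {"region": "West", "product": "Gadget", "revenue": 800},
]

def canned_sales_report(region=None, sort_by="revenue", descending=True, top_n=None):
    """Run the canned report; users only adjust prompts, sorts,
    filters, and ranks -- no query building required."""
    rows = [r for r in SALES if region is None or r["region"] == region]
    rows.sort(key=lambda r: r[sort_by], reverse=descending)
    return rows[:top_n] if top_n else rows

# A casual user customizes the report via prompts:
report = canned_sales_report(region="East", top_n=1)
```

The point of the sketch is the division of labor: the report definition (columns, measures, layout) is fixed by IT, while the prompt values are the only surface a casual user touches.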
#2. Self service for advanced, power users
What do these users need to do?
Perform what-if scenarios (this often requires write back, which very few BI vendors allow)
Add metrics, measures, and hierarchies not supported by the underlying data model (typically one needs some kind of in-memory analytics capability for this)
Explore based on new (not previously defined) entity relationships (typically one needs some kind of in-memory analytics capability for this)
Explore data without knowing exactly what one is looking for (typically one needs a search-like UI for this)
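A minimal sketch of what a power-user "what-if" scenario implies under the hood: overridden values are written back to an in-memory copy of the model, a dependent measure is recomputed, and the source system is left untouched. All data and names here are invented illustrations, not any vendor's implementation:

```python
import copy

# Invented example: a baseline plan held in memory. What-if scenarios
# write back to a scenario copy, never to the underlying source data.
baseline = {"units": 1000, "price": 20.0, "unit_cost": 12.0}

def margin(plan):
    # Derived measure, recomputed from the in-memory model on demand.
    return plan["units"] * (plan["price"] - plan["unit_cost"])

def what_if(plan, **overrides):
    """In-memory write-back: apply user overrides to a scenario copy."""
    scenario = copy.deepcopy(plan)
    scenario.update(overrides)
    return scenario

scenario = what_if(baseline, price=22.0)
# Baseline margin: 1000 * (20.0 - 12.0) = 8000.0
# Scenario margin: 1000 * (22.0 - 12.0) = 10000.0
```

This is why write-back and in-memory analytics go together in the list above: the scenario exists only as a writable in-memory copy, which is exactly the capability few BI tools expose.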
Over the past few months, I had the opportunity to interview representatives from 10 leading technology service providers about how they help their clients innovate. My recent research summarizing those interviews is available to Forrester clients on our website. For those interested in the high level points I raised, here are a few of the key findings:
Last week I published two research reports on the hottest topic in PCI: tokenization and transaction encryption. Part 1 was an introduction to the topic, and Part 2 provided some action items for companies to consider during their evaluation of these technologies. Respected security blogger Martin McKeay commented on Part 1. Serendipitously, Martin was also in Dallas (where I live) last week, and we got an opportunity to chat in person about the report and other security topics.
Martin’s post highlighted several issues that deserve some response. He felt that I “glossed over several important points people who are considering either technology need to be aware of.” Let me review those items:
Comment: “This is one form of tokenization, but it completely ignores another form of tokenization that’s been on the rise for several years; internal tokenization by the merchant with a (hopefully) highly secure database that acts as a central repository for the merchant’s cardholder data, while the remainder of the card flow stays the same as it is now.”