Posted by Simon Yates on April 3, 2012
The pressure to innovate has never been greater. After speaking with many CIOs and business leaders about innovation, two issues rose to the surface: ideas are now drawn from so many sources -- employees, partners, customers, and other stakeholders -- that deciding what to invest in is getting harder. At the same time, the process for sorting through, evaluating, and incubating those ideas is cumbersome. In my almost-finished report, I take a crack at developing a simple methodology and evaluation tool that helps spot the good ideas in the pile and fast-track them into an incubation stage.
Innovation is a discipline with established best practices, needed skill sets, and time-tested processes and tools. But even the process of innovating is being disrupted by social media, collaborative tools, and customer empowerment. A good innovation program empowers your people, engages partners and customers, revives morale during challenging times, and provides an extra shot of competitive energy when you need it. Engaging in a genuine and meaningful dialog with customers, partners, and employees around innovation is admirable and has proven to be a great source of new ideas for many companies. However, with so many ideas being generated, time and energy get wasted on ideas that won't go anywhere; vague criteria designed to cast the widest net don't provide any focus; and opaque evaluation processes leave people guessing about the status of their ideas.
The key to a good innovation program is sustainability -- attracting a regular flow of new ideas into the pipeline and pushing the best ideas through a series of stages toward commercialization. I argue that the problem for most programs today is that lots of ideas are being stuffed into the front end of the pipeline through ideation programs, but the pipe clogs fast if you can't quickly evaluate ideas against useful criteria and fast-track the most promising ones. That's the basic problem I decided to tackle in my upcoming report.
I call it the Forrester Innovation Heat Index. Similar to the National Weather Service Heat Index, which combines air temperature and relative humidity to determine how hot it actually feels, the Forrester Innovation Heat Index combines three distinct elements to determine how hot an idea really is, based on individual scores for:
- Guardrail values. Guardrail values are the constant, core objectives that all ideas should satisfy. Ideas are scored objectively on how well they match general organizational objectives. The guardrail values are valuable, aligned, acceptable, feasible, and enduring.
- Business impact goals. Business impact goals are the specific elements that align with business strategy goals. Current conditions, culture, and business goals shape the scoring in this category. Business impact goals are product, process, organizational and market innovations.
- Resource requirements. Resource requirements provide guidance on cost, resource consumption and timing. With this information, the size and scale of the innovation contributes to funding decisions.
Both the idea creator and the evaluator score the idea against these criteria. The goal, of course, is to make a quick decision about where to go from here without requiring a complete business case and all the analysis and hand-wringing that goes along with it. With the information in hand, the idea should fall into one of four categories:
- Respectfully reject and try again. A majority -- perhaps as high as 90% -- of submitted ideas will be rejected for a variety of reasons. Everyone understands this harsh reality, but how this outcome is communicated says a lot about the company. Don’t leave ideas that you won’t pursue in limbo. Reject them quickly, clearly explain the rationale behind the decision, and encourage creators to try another angle.
- Send back for rework. Some ideas will need more work, especially if the idea lacks key information to build a good case or doesn't earn a high enough score on certain criteria in the evaluation process. The Heat Index offers insight in two ways: 1) targeting the criteria with lower scores can significantly raise the overall score, and 2) a gap analysis shows where the creator's and evaluator's scores differed significantly.
- Put on the back burner. At any one time, a company can pursue only a limited number of innovations, and the borderline cases need to be identified and sustained until the appropriate time. Back-burner innovations would have been approved, but the timing just wasn't right. Possible reasons include exceeding current funding capabilities, requiring resources that are tied up in ongoing projects, or simply being too far ahead of the market to succeed at launch. Ideas so categorized must be tracked and reevaluated later, should conditions in the market or the company change.
- Approve and move into incubation. If the idea passes the initial Heat Index scoring test, the evaluation committee must now approve and move the idea into the incubation phase, where investments of time and human and financial resources develop and nurture the innovation towards commercialization.
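To make the flow above concrete, here is a minimal sketch of how scoring, gap analysis, and four-way routing could fit together. The actual Forrester model isn't published in this post, so every scale, threshold, weight, and function name below is a hypothetical stand-in for illustration only:

```python
# Hypothetical sketch of the Heat Index routing flow. Scales (0-10),
# thresholds, and the equal-weight average are illustrative assumptions,
# not the published Forrester model.

APPROVE, BACK_BURNER, REWORK, REJECT = "approve", "back_burner", "rework", "reject"

def heat_index(guardrail, business_impact, resources):
    """Combine the three element scores (each 0-10) into a single heat score."""
    return (guardrail + business_impact + resources) / 3

def gap_analysis(creator_scores, evaluator_scores):
    """Flag criteria where creator and evaluator disagree by more than 2 points."""
    return {k: creator_scores[k] - evaluator_scores[k]
            for k in creator_scores
            if abs(creator_scores[k] - evaluator_scores[k]) > 2}

def route(evaluator_scores, creator_scores, funding_available=True):
    """Map an idea to one of the four outcomes described above."""
    score = heat_index(**evaluator_scores)
    gaps = gap_analysis(creator_scores, evaluator_scores)
    if score >= 7 and not gaps:
        # Strong, agreed-upon idea: approve if funds allow, else back-burner it.
        return APPROVE if funding_available else BACK_BURNER
    if score >= 5 or gaps:
        # Promising but needs a stronger case, or scores need reconciling.
        return REWORK
    return REJECT

# Example: creator and evaluator largely agree on a strong idea.
creator = {"guardrail": 8, "business_impact": 9, "resources": 7}
evaluator = {"guardrail": 7, "business_impact": 8, "resources": 6}
print(route(evaluator, creator))  # approve
```

The point of the sketch is the shape of the decision, not the numbers: a quick composite score plus a creator-versus-evaluator gap check is enough to route most ideas without a full business case.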
In my next post, I'll talk about our ideas around incubation. The back-of-the-envelope level of evaluation that characterizes the Heat Index isn't enough to make a real investment decision, so the first step in incubation is to assemble a team and build a real business case. That's what we'll talk about next.
If you are interested in seeing the model or hearing more about this research, drop me a line at email@example.com. I'd love to hear your thoughts, so please feel free to comment here too!
Thanks for reading!