Posted by Tom Grant on February 1, 2011
Internal pilots (a.k.a. "eating your own dog food," or "drinking your own champagne") are important tools. But how do they work? What kind of feedback do you get from these exercises, and what sort don't you get?
That's the topic of a new research project that we just launched. While Forrester will start hundreds of new research efforts this year, I'm highlighting this one for two reasons: (1) I'm doing the research, and I think that everything I do is interesting; (2) this is the second time I've done a research project in a very transparent way. From start to finish, we're going to work in the open, to give you, Dear Reader, an opportunity to comment on the research as we do it.
The first project done in this fashion, which investigated "thought leadership" (whatever that means) in the technology industry, resulted in this RoleView document. Along the way, we solicited comments on the basic research plan, the questions we asked our interview subjects, and the content of the document. We threw out questions to Forrester community members along the way, and at the end, we did a brief retrospection on how well we succeeded.
We're Interested In Your Feedback Again. No, Really.
We're following the same game plan this time. Three documents just went live in The Forrester Community For Application Development & Delivery Professionals:
- The document plan, which outlines the topic and the research approach.
- The interview guide, a very rough draft of the questions we'll ask during the primary research.
- The schedule and status, the one-stop location for information on how far we've progressed.
We're not only open to feedback on the document plan and interview guide; your comments are essential. The earlier you, our intended audience, tell us whether we're addressing the questions you have, the better the final research will be. The high value of this feedback was extremely clear during the thought leadership research, when community members pointed out which facets of that potentially huge topic were most important to them.
Internal pilots are an equally big topic. In fact, I can easily imagine writing a short book on the subject, outlining all the best practices for getting the most out of these programs. For a normal-sized Forrester research document, it's extremely important that we address the questions you have, instead of the questions that I'm pretty sure, but not 100% sure, you probably have. I could write about metrics, best practices, timelines for internal pilots, or countless other facets of this topic.
How To Fail At Internal Pilots Without Even Trying
I'll illustrate by way of confession: I have worked on at least one internal pilot that was not a shining success. In one case, a great deal of work went into a major UI overhaul. There were high hopes for the pilot – so many, in fact, that incorporating ideas into the new UI, and then building it, took longer than originally estimated. (Shocking, I know.) Therefore, the internal pilot happened later than originally expected, with the pressure of the release date still looming over the team.
Unfortunately, the new UI got very mixed reviews during the internal pilot. Reactions among the development team were predictable. For example, one faction thought that the internal users were unrepresentative of the customers for the product, so we should just ignore their criticisms. Others thought that we should push back the date to incorporate the feedback, leading to some heated discussions about the relative merits of UI improvements versus meeting the release date. Of course, the whole point of setting a release date was to deliver a better UI, but the new version contained other enhancements that customers wanted. As you probably guessed, these enhancements were implemented in the new UI, so it wasn't possible to disentangle them from what threatened to hold up the release.
At this point, our team, as a hypothetical customer for this research, might have appreciated some guidance across a wide range of questions. How do you design a pilot to get representative user feedback? Is there a way to shorten the time from collecting the feedback to implementing a response, perhaps within the time frame of the pilot itself? Independent of user feedback, how valuable was the technical feedback (bugs filed, performance bottlenecks identified, etc.) unearthed during the pilot? Should the people collecting and analyzing the results of the pilot be the ones to decide whether to proceed with the current schedule?
As you might expect, I'd love to unearth the answers to all these questions and pass them along to you. Sadly, even by dropping all the vowels in my written words, I can't pack every possible angle on internal pilots into a single document. Therefore, your feedback is genuinely and mutually beneficial. So, fire away!
Meanwhile, I've already posed my first question about this topic to the Forrester Application Development & Delivery Community. We'd love to hear your responses to that specific aspect of internal pilots, the metrics used to measure success.