Now, we've got some other competitive actions coming back, and we'll talk about slates and tablets and blah, blah, blah, blah ....
Just like we had to make things happen on netbooks, we've got to make things happen with Windows 7 on slates. And we are in the process of doing that as we speak. We're working with our hardware partners, we're tuning Windows 7 to the new slate hardware designs that they're bringing to market. And, yeah, you're going to get a lot of cacophony. There will be people who do things with other operating systems. But we've got the application base, we've got the user familiarity. We've got everything on our side if we do things really right.
As I explained earlier this week, this is our first venture into Agile Research Development, a very different way of doing research. Since it's officially Agile, I'll use even the thinnest of excuses to explain how we're applying Agile principles. Pictured here is our Scrum Master, Eric Hsieh, taking a picture of our initial list of items that we're putting into the backlog. That chart sits right next to my desk in the Foster City office of Forrester, so I threw in another shot that gives a peek at the scenic view from my desk. (If you've never been to Foster City, it's the mini-Venice of the Bay Area. I could kayak to work.) Also taken from the Agile canon are the user stories that define how we expect the collaboration between us and the Forrester community to operate.
Today's a very exciting day for me. As you know, I'm a member of a team of Forrester analysts who write research specifically for product marketers and product managers in the tech industry. A few weeks ago, we launched a community for product marketers and product managers. Now, we're bringing those two activities together by including our PM community in every stage of a research project. Plus, we're using an Agile approach. And you're all invited.
(Role-based research. The voice of the customer, expressed loudly and regularly through social media. Agile as the vehicle for applying what customers say to the product we're developing, in the most rapid and substantive way possible. I guess we do read our own research.)
What's The Topic?
We've yet to meet a product marketer or product manager who isn't interested in thought leadership, given its attractiveness and elusiveness. Gaining recognition as a thought leader is only the first step. Having achieved that exalted status, how do vendors convert thought leadership into tangible business benefits?
For our first venture into this new research approach, thought leadership was an easy choice of topic: important, popular, practical, and manageable.
Where Does The Community Fit In?
The community has a bigger role to play in this project than just suggesting a topic. We need your suggestions and feedback throughout the entire research process, from inception to publication. For example, before doing the primary research, we'll draft a list of interview questions. Since thought leadership has no end of interesting aspects, we want to make sure that the questions we ask go straight to the issues that matter most to product marketers and product managers. Here are but a few examples:
In the tech industry, the earlier in the innovation process a developer works, the greater the prestige. Lower in the status hierarchy are developers who work on performance and scalability issues, build integrations with other systems, handle security issues, and (heaven forfend!) help the QA team set up test environments.
Of course, the customer has a much different set of priorities. Sure, new products and features are interesting, but their value is moot if the technology doesn't work. How many concurrent users can the system handle before it keels over? Are the big security holes plugged? Has anyone else run the latest release on Ubuntu 10.04? These questions determine whether a customer buys your product and how likely they are to remain a customer after they've tried to deploy it.
Remember The D In R&D
Some of the larger technology companies have decided to make a serious investment in fixing the problem. While sensitivity training might help some development teams better appreciate the contributions of some of their members ("Looks like the performance guy needs a hug!"), there's no replacement for a well-staffed, well-equipped effort dedicated to ensuring that the technology works as advertised. Or, just as importantly, doesn't work as advertised, so that the customer knows what sorts of configurations and use cases to avoid.
Every office has a gadget fetishist. These people can be indicators of technologies that might be interesting, but they're not 100% reliable. It's in the nature of experimentation to make occasional hits (iPhones, Flip video cameras, GPS navigation devices, noise-canceling headphones that actually work) and frequent misses (USB-powered toothbrushes, Segways, most noise-canceling headphones, anything applied directly to the forehead). Not everyone wants to be an experimenter, or can afford to be one.
Consequently, gadgeteers – "innovators," in the terminology of people who study the diffusion of innovations – are always a very small minority of the population. Given their hit-or-miss track records, others treat innovators skeptically.
Why then do many technology vendors, in their quest to become thought leaders, market to a tiny minority of buyers that others in their organizations don't see as reliable guides to technology investments?
Social technology start-ups provide the most obvious examples of this strategy. Foursquare's Web site, for example, is clearly pitched at the social enthusiast, someone who sees the value of giving "you & your friends new ways of exploring your city." The site's main page provides links to Foursquare apps written for the iPhone, BlackBerry, and other mobile devices, assuming that having Foursquare on your phone might be a good idea. Being a thought leader in location-based social networking, in Foursquare's case, means marketing to the tiny population of people willing to dive head-first into its service, figuring out its value as they go. If Foursquare, as a thought leader in social technology, is ahead of the market, there's always a population of social networking enthusiasts willing to experiment with it.
In the technology industry, we use the term thought leader far too loosely. This mistake is not just semantic, since technology vendors want to use thought leadership to achieve business results.
Big vendors like IBM, SAP, and Microsoft want you to believe that they are thought leaders, in spite of their size and age. "Yes, we may bestride the world like a colossus, but, when need be, we can pick up our feet and run in the direction that the market is headed." Small vendors have the opposite problem. Sure, they may be nimble, and they may have Big New Ideas that anticipate the future of their markets. But can you rely on them to execute? And no matter how good your ideas might be, how do you get anyone to pay attention to them if no one has heard of you before?
If you're not convincing, you're not a thought leader. Therefore, thought leadership must be superior to the unconvincing claims that appear on vendor Web sites, where practically everyone claims to be the leading company in blahblahblah. (If everyone's a leader, then who are the followers?) Inserting the word thought into these elevator pitches doesn't convert them into magical words of power.
[Another interesting finding about the requirements tool market, from research by Yours Truly and Mary Gerush. For other observations about this very interesting market, click here and here.]
As organizations improve their requirements practices, and as the requirements tool vendors adapt in response, the front end of the requirements process is getting a disproportionate amount of attention. Many requirements defects start early, with the information that you feed into the requirements-generating machine. For example, if you don't have a clue how criminal investigators work, you're going to make basic mistakes in feature prioritization and design when you build a case management system for them.
And it's not just the information that has fallen under suspicion. A lot of people are equally worried about the other raw material, ideas, that you feed into this machinery. Here are a few commonly cited concerns:
Congratulations, product marketers: You've been a critical part of the innovation process.
Sorry to break it to you, product marketers: On average, technology vendors stink at the phase of innovation where you play the biggest part.
Translating The Nouns But Not The Verbs
By no means are product marketers per se to blame for this outcome. Until recently, the technology industry has been slow to realize how innovation works. Vendors tacitly assumed that there were two separate processes at work, one that brought new products and services to market and another that convinced people to buy and use them.
At the boundary between product management and product marketing (as clearly as you can draw one), a hand-off occurred where one process ended and another began. The hand-off created predictable problems, in the same fashion as phone companies that assign one team for turning off phone service at your old location and a second team for turning it on at your new one.
One of the most painful consequences is fragmented, incomplete, and inaccurate information about a topic of interest to everyone: adoption. Every person in a tech company depends on some understanding of the who, what, why, when, and how of adoption, from the engineer who builds the technology, to the marketer who describes it to a general audience, to the salesperson who tries to persuade specific people to buy it. With rare exceptions, the engineer, marketer, and salesperson do not share the same mental image.
Learning about customers is hard, and everyone has deadlines to meet. Meanwhile, engineers, marketers, and salespeople can achieve a gestalt on a different topic: the technology's capabilities. For product marketers, product-centric marketing content is a natural result.
By now, the arguments for improving product requirements are very familiar – all too familiar. Bad requirements lead to misconceived projects, many of which fail outright. The survivors can waste frightening quantities of time, effort, and money. (And if you've been unlucky enough to be part of one, you don't want to repeat the experience.) In technology companies, even the best requirements are no barrier against failure: insights into what kinds of problems customers face, how the company's products and services can help address them, and why they'd consider getting the vendor's help remain siloed in the requirements, never reaching other people in the company – sales, marketing, support, etc. – who deal directly with customers and partners.
Nonetheless, it wasn't until relatively recently that the market for requirements tools expanded and diversified, as described in my previous post. Many organizations had the same attitude toward requirements that most of us share about regular visits to the dentist: We understand the clear, direct benefits of changing our behavior, but we'd rather not make the effort. Clearly, something changed in the last few years to increase the general interest in adopting better requirements hygiene.
In fact, many things changed, all pointing in the same general direction. Not every potential requirements tool adopter experienced the same new pressures, so you won't be able to identify a single, iconic tale. Instead, the history of the requirements tool market is more of an anthology of stories that have the same moral: the business consequences of bad requirements are no longer tolerable.