Voice-controlled intelligent assistants offer a tantalizing vision of more productive end user computing. Using voice commands, users can extend the computing experience not just to mobile scenarios but to hyper-mobile, on-the-go situations (such as while driving). With wearables like Google Glass, voice command promises even deeper integration into hyper-mobile experiences, as this video demonstrates. And voice-controlled intelligent assistants can also enable next-generation collaboration tools like MindMeld.
In spite of this promise, there remains a lurking sense that voice control is more of a gimmick than a productivity enhancer. (At the time I posted this blog, a Google search for Siri+gimmick yielded… “about 2,430,000 results.”) To see where voice control really stands, we surveyed information workers in North America and Europe about their use of voice commands.
Information workers’ use of voice control today:
In reality, many information workers with smartphones are already using voice commands – at least occasionally. Our survey revealed that:
The name of Apple’s event today, “Let’s Talk iPhone,” indicates where much of the news focus is: on the new iPhone. But that focus distracts vendor strategists from the deeper implications of Apple’s advances in online services and user experience.
Apple’s iCloud is an important new software platform and service that will integrate Apple’s customer experience across its iPhone, iPad, iPod Touch, and Mac products. This first version creates a personal cloud experience in which an individual’s work, personal, and purchased content is seamlessly available across all of their Apple products, in contrast to the fragmented experiences offered by Google, Microsoft, and Amazon. Beyond music, contacts, calendar, and email, Apple is supporting iCloud push in iMessage, Safari’s Read It Later feature, and push distribution of photos. Be sure to watch Apple’s iCloud concept video, which really conveys the personal cloud idea.
The Siri feature is the beginning of a new user experience built around context, one that will eventually create a much more personal, intimate way of using all of Apple’s mobile and Mac products. Both of these offerings will have enduring impact beyond the latest model of the iPhone. Though Siri is supported today only on the iPhone 4S, I believe it marks the start of a new form of interaction with all mobile devices and PCs. Voice control and input have not been widely used despite long-standing offerings from Nuance and Microsoft’s Tellme, though they do have strong adoption in specific segments. Apple’s integration of the user’s context will make the experience approachable for mainstream users.