Title got your attention? It should. In a report I just published this week, I use our Forrester Consumer Technographics® data to identify the 7% of adults who are digital cord-nevers — measured as people under age 32 who have never paid for TV. This is the worrisome group whose arrival TV-industry pros have nervously anticipated. As we show in the report, they are now officially larger than the entire adult population of cord cutters, who come in at 6% of all adults. Put them together, and you have 13% of adults who are not paying for TV while still getting all the TV value they need from a combination of Netflix, Amazon Prime Video, and other tools.
Don't jump out of any Times Square windows just yet. TV is still massively popular and will continue to be. Forrester clients can read the report here. These defector groups will grow over time, true. And as the title of this post suggests, if we model this behavior out over the next 10 years, we expect that 50% of adults under age 32 will not pay for TV, at least not the way we think of it today. That compares with 35% of that age group that doesn't pay for TV today. (That's right, a third of them are already out of the pay TV game.)
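For readers who like to see the arithmetic, here is a minimal sketch of what a projection like that looks like. The 35% starting point and 50% endpoint come from the figures above; the straight-line growth path, the function name, and the yearly granularity are my illustrative assumptions, not the actual Forrester model, which accounts for far more than a simple trend line.

```python
# Illustrative only: a straight-line projection of the share of adults
# under 32 who do not pay for TV, from 35% today to a forecast 50% in
# 10 years. The linear path is an assumption for illustration; the
# real forecast model is more sophisticated.

def project_non_pay_share(start=0.35, end=0.50, years=10):
    """Return the projected share for each year, assuming linear growth."""
    step = (end - start) / years
    return [round(start + step * year, 3) for year in range(years + 1)]

shares = project_non_pay_share()
print(shares[0], shares[5], shares[10])  # 0.35 at year 0, 0.425 midway, 0.5 at year 10
```

Even this crude version makes the point: the trend doesn't need to accelerate at all for half of the under-32 cohort to be out of pay TV within a decade.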
There is a fundamental division at the heart of the digital economy. Digital tools make it possible for any company to build a direct relationship with its customers. At the same time, new digital intermediaries can use those same tools to insert themselves between companies and customers in unprecedented ways. Torn between two lovers, anyone?
We’re in the age of the customer, a period during which end consumers have more access to the basic economic resources that help them make more rational and empowered decisions. The theory of perfect competition dictates that market economies flourish best on a foundation of perfect information that enables perfectly rational actors. The digital technologies we all carry in our pockets — not to mention the ones surrounding us in our cars, our homes, and even strapped to our bodies — have initiated a chain reaction, unleashing an unprecedented level of information, which has enabled us — if we choose to accept our mission — to behave like much more rational actors than ever before. (Caveat lector, I didn’t say “perfectly rational” for a reason. See our research on how humans make choices to understand more.)
The more those technologies spread, the more buyers and sellers enter the system, and the more innovation there is — at lower cost, thanks to the economics of digital disruption — and the spiral feeds itself.
Two weeks ago, I stood on Forrester's mainstage at its 2015 Forum For Marketing Leaders in New York (see a few minutes of the speech below). There, I told an audience of hundreds of our clients about hyperadoption, a term that I'm amazed no one has coined before now. Get used to the word. Because it's the characteristic that will define the next 10 years of your personal and business experience. In fact, in our first report debuting the concept of hyperadoption — released the same day I stood on the stage — I claim that hyperadoption will cause the next 10 years to generate an order of magnitude more change in your life than the past 10 years did.
That's an audacious claim. Because the past 10 years gave you the smartphone and the tablet. But I mean it, and over the coming year, I intend to prove it in my research.
Forrester clients can read the report, which synthesizes much of the work I've done over two decades, a period in which I've had a front-row seat to changes in how consumers adopt, from the first consumer experiences on what was then known as the World Wide Web to that once-rare behavior known as online shopping. That experience, combined with the neuroscience research I've followed since my own days in the lab, has convinced me that the economics of digital disruption now let people bypass the ancient, loss-avoiding algorithms running in our heads, the ones that used to make us cautious of new things.
Windows 10 comes with holographic computing built into it. And to prove that it’s serious about holography, Microsoft announced HoloLens, a headset that lets people interact with holograms in the real world.
I know what you’re thinking. Microsoft has a credibility problem when it comes to launching future tech. Remember that this is the company that tried to launch touch-based tablet computing in 2000. Microsoft also launched a smartwatch years before anybody else, and that, too, came to naught. I’ll spare you a longer list of Microsoft’s mislaunches. It all adds up to a fair bit of earned skepticism. Surely Microsoft can’t be expected to create the computing interface that will do to graphical user interfaces what the mouse did to the text-based user interface.
Late last night, Sony revealed that it would pull The Interview from its release schedule. The decision came in response to the major theater chains' collective agreement not to screen the movie on its release day. The unprecedented decision is causing consternation among entertainment media types who feel that Sony has put the right of free speech in jeopardy. That's a conversation worth having, and I'm glad it's happening. But this situation brings an entirely new question into dramatic relief, one that didn't exist before and one that our prior deliberations won't help us resolve. The question is this:
Can companies participate in cyber war?
Up until now, companies have prepared to defend themselves against cyber attacks as one-off nuisances. Such attacks are now so common that they no longer make the news. Even massive breaches in which millions of customer records are compromised tend to give us pause for only a few moments, perhaps a few days, and then we move on. But what Sony experienced was not just a security breach. This hack was a declaration of cyber war intended to bring Sony to its digital knees: a low-cost, digitally effective cyber war that puts none of the hackers' assets in harm's way. And given yesterday's announcement, it appears to have worked.
I'm packing to leave Paris and it's a hard town to leave. Not only because I managed to catch a glimpse of a Paris sunset last night from the top of Notre Dame, but because I'm leaving LeWeb Paris 2014 while it's still in full swing. There's no denying LeWeb is one of the most invigorating events I've attended. Highlights in the first two days included a candid discussion on Uber with celebrity venture capitalist Fred Wilson and amazing comments from Web founder Tim Berners-Lee on everything from robots to net neutrality to Europe's "right to be forgotten" laws. Most invigorating for me personally was the day one session on wearables. LeWeb invited me to curate this hour-long track as part of its new format, tackling multiple themes in short bursts over several days. Curation required pulling together experts on the topic, which was both simple and difficult. Simple because there are some great ones to choose from, difficult because I would have had 10 people on stage if I could have managed it. But that's where the hard task of curation comes in.
Yesterday Microsoft announced it would acquire Mojang along with its massive Minecraft gaming franchise for $2.5 billion. By now we've all seen the coverage, including the gratuitous interviews with middle-schoolers about whether Microsoft is "cool" enough to own Minecraft. By and large, we think this is a good acquisition for Microsoft, and we said as much in our Quick Take, just published this afternoon, summarizing the acquisition, its benefits, and its challenges for Forrester clients. Go to the report to read the client-only details of our analysis: "Quick Take: Microsoft Mines Minecraft for the Future of Interactive Entertainment." As we explain in the report, there are specific challenges Microsoft will face that will determine whether this ends up being a sensible acquisition or a sensational one.
Beyond the detailed analysis of the report, it's worth exploring the long-term question of what that sensational outcome would look like. The difference turns on whether Microsoft is ready to invest in the future of digital interactive entertainment. This is a subtle point that has been missed in most analyses of the acquisition. Most people insist on covering the purchase as a gaming industry event: Microsoft, the owner of the Xbox, buys Minecraft, a huge gaming franchise. But that low-level analysis misses a bigger picture that I sincerely hope Microsoft is actively aware of.
The rumors turn out to be true. Apple is buying Beats for $3 billion, just slightly less money than originally suggested. Now that it's official, I'm confidently reiterating my conviction that Apple cannot be spending this unprecedented sum on Beats for either its headphones or its subscription music business. Because while the company may be worth that much, it's not worth that much to Apple, the world's most innovative consumer electronics and consumer software maker. Because choosing to buy Beats purely for its existing businesses and revenues would represent Apple significantly lowering its sights, aiming to graduate right from innovative leader of life-changing technology to kinda-cool company that makes stuff teenagers like. Not that selling to teenagers isn't a good thing; it can certainly bring in money, but it doesn't typically generate long-lasting brand relationships.
To be clear, if Apple is buying Beats purely for its headphones or music subscription business, then Apple is making a mistake. However, there are those of us who still believe that Apple hasn't thrown in the towel. And why would it? There are still many consumer markets to dominate — entire markets like wearables and home automation tech and even in-car experiences, all of which are in their infancy — and Apple still has the smarts, the brand, and certainly the money to make a run for any of those things, if not all of them. So why would Apple instead sign up to become a holding company for fashionable but not life-changing brands?
A media frenzy arose last night when the Financial Times suggested it had word from the inside that Apple is closing in on buying Beats Electronics for $3.2 billion. The immediate response from all quarters has been puzzlement, and on multiple levels. As the sun rose today, so did the doubts about the impending deal. Generally, large strategic acquisitions — like when Google bought Nest for a similar figure — can be justified on the basis of buying something you don’t already have: a promising new technology, a large customer base, or entrée into a desirable industry. None of these things apply to this acquisition by Apple. Acquisitions at a more mature business stage can typically be justified purely on a revenue or margin basis or by the desire to snap up a brand with more energy. Those don’t apply here, either. Even those who have tried to stretch the argument have suggested that Apple could be buying Beats purely for a quick road into the music streaming business as a hedge against Spotify — except that Apple owns the music industry and doesn’t need Beats to build the music streaming offering that the company has denied for years that it should even consider getting into.
Google acquired Nest for billions, and then Facebook spent several billion more on Oculus VR. We’re only a few months into 2014, and already some of the world’s largest digital players have spent billions, each eager to own the next big thing. Mobile is right here, right now, but everyone knows that very soon, there will be something else. But what else?
In the battle to find and claim the next device that everyone will want, these companies will soon realize that the next big thing is not a thing at all: It’s your voice.
Voice control suffers from the same problem that plagues augmented reality and virtual reality: It has been around for so long that we think we know what it is. Any fan of Star Trek: The Next Generation knows that voice control involves invoking an invisible computer with a command, “Computer,” followed by a query, “How many Klingons does it take to screw in a light bulb?” Maybe that’s a question you don’t want the answer to, but the computer — as voiced by Majel Barrett in the TV series — would know it.
It’s probably that long history of popular depictions of voice control that made us so collectively enthusiastic about Siri when Apple first debuted it in 2011. It’s also partly to blame for how quickly we turned on Siri, declaring her soothing semi-robotic tones to be merely amusing at best or irrelevant at worst.
When Microsoft recently announced its long-rumored Cortana voice service for Windows Phone 8.1, playing catch-up to both Siri and Google Now, the interest was modest. Perhaps that’s because Siri hasn’t changed the way millions of Apple users use millions of Apple devices — and if Apple couldn’t spark that behavior change, how can Microsoft, with so few Windows Phone users?