Posted by James McQuivey on June 16, 2010
The future is here, folks, and the gaming industry is the first to get us there. Today I leave E3, the gaming industry's biggest US convention. When all is said and done, roughly 45,000 people will have come through LA's convention center -- most of them as nerdy as you're imagining right now -- to play the newest games, demo the latest hardware, and collectively drool over hyper-realistic zombies, aliens, robots, and other baddies game designers have placed in our digital sights.
At this E3 we have witnessed more advances in living room technology than the cable, consumer electronics, or computer industries (yes, that includes Apple) have managed to pull off in years of trying. Let me summarize:
- Natural user interfaces are ready to go. We don't have an industry-accepted name for it yet. I've been calling it NUI (like GUI, get it?) for natural user interface, while others call it whole-body interaction or gesture interface. Whatever you call it, the Wii was the first meaningful step in this direction, Sony Move is the next meaningful step (more refined control than the Wii, but the same concept), and Microsoft's Xbox Kinect is a giant leap beyond anything you've ever done before. It changes everything: how you change channels, how you shop a virtual store, and how you game. In fact, it has the potential to expand the meaning of the word "game" to cover just about everything. I wrote a tired but stunned post on it earlier; see that for more detail.
- Camera technology is put to good use. Sony added the camera first with the PlayStation Eye, and it enables many interesting interactive benefits, especially when paired with Sony Move, because the camera can see you and place you virtually into many different aspects of the game -- you can be your own avatar, for example, or the game background can be a view of your living room as your TV sees it. The Kinect camera is unique, however, because it not only sees you, it senses your distance from the screen. That makes for a deeper level of interaction because you can communicate in three physical dimensions. And while the camera's there doing these more sophisticated things, it can also take pictures, enable video chat, and even recognize your face.
- Voice control finally makes sense. Ever since Star Trek, we've imagined a computer we can talk to. And even though voice recognition has made significant advances, few of us use it when we have the alternative of typing or clicking on something. But in a Kinect environment, you don't have a controller in your hand. And some things don't make sense to do with a gesture when you can simply do them with your voice. "Xbox: play." Two simple words activate the device and give it a command. Plus, it reminds you of the Xbox brand rather continuously!
As a result of all this simultaneous innovation on the input side, we now have an aching need for innovation on the output side. True, we now have better interfaces that enable better experiences, but to really plant us squarely in the future, we need more immersive displays. That means 3D today, holography in the future. Starting small, Nintendo demoed its new handheld, the Nintendo 3DS, a dual-screen device like the DSi, but with a 3D screen on top and a touchscreen on the bottom. Here's the kicker: the user doesn't have to don expensive glasses to see the 3D effect. As long as you keep your eyes at the right distance from the screen, the screen can essentially aim a different image at each eye, enabling a 3D effect. Plus, it has a 3D camera on the back for taking 3D images and sharing them with friends who also have a 3DS.
Meanwhile, Sony is eager to capitalize on the fact that it has the only game console capable of playing 3D games (though you have to have a 3DTV and glasses, sold, conveniently enough, by Sony). I tried a 3D boxing game using Sony Move and found it to be a decent play, combining a natural user interface with 3D. Perhaps the most impressive was Eye Pet, a virtual pet "game" that uses the Sony Eye camera but has now been enhanced to use Sony Move as well. Most intriguingly, it is 3D capable. Watch this 2D demo, then imagine that you're watching it in 3D (as I did in the booth), and you get a quick sense of how all of these innovations, when put together, change what it means to be a human in this century. From here on out, it's all just a question of how quickly the virtual experience you value most becomes available -- everything from the raunchy and violent fare game programmers produce by default to the inspirational and educational (imagine having Benjamin Franklin sit on your couch and talk to you about what it was like to frame the Constitution).
That's why I stand by my claim: this isn't really about gaming. It's about every possible other thing as well. Every home will want one. And every experience provider (that's what publishers, producers, programmers, and developers are as a group) will want to provide experiences using these tools. Soon it will spill over and other devices like PCs and TVs will have these same capabilities. Even non-computing devices will learn to sense you and interact in three dimensions. Imagine a virtual Martha Stewart who can stand with you in the kitchen and show you how to perfect that mousse. Imagine a personal trainer who can sense not only your exact body position but who knows from other sensors in your home what you've been eating. I could go on, and in a report due later this summer, I will. Because it's time to make the future happen.
And it's all coming from the industry that many dismiss because of its nasty habit of blowing things up as realistically as possible.