Well, too bad, because I’m here to tell you, without hesitation, that the future of virtual reality is here now. It works, and it’s going to change the world. The official launch of the Rift is scheduled for sometime before the end of next year, and that’s just the start.
Virtual Reality Today

Here’s what the first Rift dev kit (the one currently in the hands of developers and journalists) offers right now: stereoscopic 3D, a 90-degree horizontal field of view, 800×600 per-eye resolution, high-precision rotational head tracking, and a relatively low latency (the time the system takes to respond to a head motion) of about 60 milliseconds.
That means that, provided you hold your torso still, you can rotate your head around freely, and the view to your eyes, after a very slight delay, will be pretty much correct (and will fill most of your visual field). That’s enough to be pretty immersive (even the early content is very compelling), but there’s plenty of room for improvement. The LCD screen introduces motion blur, the resolution is painfully low, leaning your torso causes the world to slide with you in an unsettling manner, and the latency it does have is nauseating to some users.
The second dev kit, DK2, which ships next month, improves on this feature set with a camera that allows the headset to figure out where your head is in space (allowing you to lean and move around inside the camera’s view). It cuts the latency down close to 20 ms (the threshold of human perception). It raises the resolution to 960×1080 per eye, and improves the optics to push more of the available pixels into your field of view. It also replaces the LCD of the original with a Samsung OLED screen, which offers better color reproduction, higher refresh rates, and low persistence.
That last one might need a bit of explanation: Here’s Palmer Luckey describing low persistence:
“The best way to think about it is that with a full persistence display, we render a frame, put it on the screen, and it shows it on the screen until the next frame, then it starts all over. The problem with that is that the frame is only correct when it’s in the right place — when it’s first there. For the rest of the time, it’s kind of like garbage data. It’s like a broken clock, you know how a broken clock is right occasionally when it’s in the right place, but most of the time it’s showing garbage data?
What we’re doing is rendering the image and sending it to the screen, we show it for a tiny period of time, then we blank the display and it’s black until we have another. So we’re only showing the image when we have a correct, up-to-date frame from the computer to show you.”
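The benefit Luckey describes can be put in rough numbers. While your head turns, a lit frame stays "correct" only for an instant; the longer it remains illuminated, the further it drags across your retina. Here is a back-of-the-envelope sketch (the head speed and pulse duration are my own assumed values, not Oculus specs):

```python
# Rough illustration of why low persistence reduces perceived smear:
# a frame is only correct at the instant it is rendered, so the angular
# smear is proportional to how long the frame stays illuminated.

def retinal_smear_deg(head_speed_deg_s: float, lit_time_s: float) -> float:
    """Angular distance the image drags across the retina while a
    single frame remains lit during a head rotation."""
    return head_speed_deg_s * lit_time_s

HEAD_SPEED = 120.0           # deg/s: a brisk but ordinary head turn (assumed)
FULL_PERSISTENCE = 1 / 75    # frame lit for an entire 75 Hz refresh (~13.3 ms)
LOW_PERSISTENCE = 0.002      # frame lit for ~2 ms, then blanked (assumed)

print(retinal_smear_deg(HEAD_SPEED, FULL_PERSISTENCE))  # ~1.6 degrees of smear
print(retinal_smear_deg(HEAD_SPEED, LOW_PERSISTENCE))   # ~0.24 degrees of smear
```

A degree and a half of smear on every frame is very visible; a quarter of a degree is not, which is why blanking the display between frames makes fast head motion look so much cleaner.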
VR in 5 Years

The first consumer version of the Rift (and the consumer version of Sony’s VR prototype, Project Morpheus) is expected to be an incremental update on the DK2: higher resolution, faster refresh rates, better optics, lower latency — that sort of thing. However, for the second generation of VR hardware (and the third and fourth), there are a lot of technologies on the horizon that are going to make a huge difference in the quality and sense of presence provided by VR.
Here’s Oculus’ Michael Abrash (then of Valve) describing the power of VR presence (using Valve’s prototype VR room):
“We have a demo where you’re standing on a ledge, looking down at a substantial drop. Here’s the scene; the stone texture is a diving board-like ledge far above the floor of a box room that’s textured with outdated web pages. [...] Looking at this on a screen (even when it’s not warped) doesn’t do anything for me, but whenever I stand on that ledge in VR, my knees lock up, just like they did when I was on top of the Empire State building. [...] The inputs are convincing enough that my body knows, at a level below consciousness, that it’s not in the demo room; it’s someplace else, standing next to a drop.”
Foveated Rendering

One of the major limiting factors to VR right now is just the difficulty of rendering the world that you’re inhabiting quickly enough. Rendering a detailed scene in 3D at 75 FPS is a non-trivial challenge, even for relatively high-end PC gaming rigs. For console or mobile VR experiences like Sony’s Project Morpheus, those challenges are even tougher.
John Carmack has made considerable progress on finding cheats to allow VR experiences to run more smoothly (including a technique called “timewarp” that fills in missing frames using a smart interpolation algorithm) – but it’s still a major limitation to consumer adoption of VR hardware. Foveated rendering offers a way out of this problem.
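To give a flavor of the idea, here is a toy one-dimensional sketch of rotational reprojection, one common form of timewarp. This is my own illustration of the basic concept, not Carmack’s implementation: just before scanout, the last rendered frame is shifted to account for head rotation that happened after rendering, rather than re-rendering the whole scene.

```python
# Toy 1-D "timewarp": re-project an already-rendered strip of pixels
# using the newest head yaw, instead of rendering a fresh frame.

def timewarp_yaw(frame: list, rendered_yaw: float, latest_yaw: float,
                 fov_deg: float = 90.0) -> list:
    """Shift a 1-D strip of pixels to compensate for yaw that occurred
    after the frame was rendered. Pixels rotated in from off-screen
    become None (a real renderer would over-render the edges)."""
    width = len(frame)
    px_per_deg = width / fov_deg
    shift = round((latest_yaw - rendered_yaw) * px_per_deg)
    out = [None] * width
    for x in range(width):
        src = x + shift
        if 0 <= src < width:
            out[x] = frame[src]
    return out

strip = list(range(9))  # 9 "pixels" spanning a 90-degree field of view
print(timewarp_yaw(strip, rendered_yaw=0.0, latest_yaw=10.0))
# prints [1, 2, 3, 4, 5, 6, 7, 8, None]
```

The shifted frame is only an approximation (nothing new becomes visible at the edges), but it lets the display show a head-motion-correct image even when the renderer misses its frame deadline.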
Foveated rendering depends on a critical fact about the human eye, which is that the photo-receptors on the human retina are not evenly distributed: almost all of them are clustered in a tiny circle in the middle of the retina called the fovea. Outside of the middle few percent of the visual field, humans are basically blind. We get around this by rapidly flicking our eyes around the world and stitching the resulting data together into the illusion of a continuous, detailed visual image.
This is mildly unsettling, but incredibly useful for head-mounted displays. By including small cameras in the headset to track the user’s eyes, it’s possible to render only the parts of the image that the fovea can see at full detail, rendering the rest of the visual field at very low resolution. This offers a dramatic speedup compared to rendering the scene conventionally, which makes a huge difference in the quality and consistency of the visual experience.
Johan Andersson, an engineer who works for DICE, describes foveated rendering like this:
“What we would like to be able to do, if we could, is essentially foveated rendering [...] so we could render at a couple of resolutions, essentially, and keep eye tracking, so you render at your super high resolution, but on your thumbnail, and then you render a little bit lower resolution around that, and a little bit lower around that, and you composite them together. It requires super high quality eye tracking, but I’ve seen some demos of that, and it actually works surprisingly well.”
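The arithmetic behind Andersson’s layered approach is striking. Here is a back-of-the-envelope sketch, using my own assumed numbers (a foveal window covering 20% of each screen dimension and a periphery shaded at quarter linear resolution), of how many pixels actually need full shading:

```python
# Back-of-the-envelope pixel counts for layered foveated rendering:
# a small full-resolution window around the gaze point, plus the rest
# of the screen shaded at reduced resolution and upscaled on composite.

def shaded_pixels_full(w: int, h: int) -> int:
    """Pixels shaded when the whole screen is rendered at full resolution."""
    return w * h

def shaded_pixels_foveated(w: int, h: int, fovea_frac: float,
                           periphery_scale: float) -> float:
    """Full-rate pixels in the foveal window, plus a low-resolution
    pass over the whole screen for the periphery."""
    fovea = (w * fovea_frac) * (h * fovea_frac)
    periphery = (w * periphery_scale) * (h * periphery_scale)
    return fovea + periphery

W, H = 1920, 1080
full = shaded_pixels_full(W, H)
foveated = shaded_pixels_foveated(W, H, fovea_frac=0.2, periphery_scale=0.25)
print(full / foveated)  # close to a 10x reduction in shaded pixels
```

Even with these conservative assumptions, the renderer shades roughly a tenth as many pixels, which is exactly the kind of headroom console and mobile VR hardware needs — provided the eye tracking is fast and accurate enough to keep the window under your gaze.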
Motion Controls

Getting your head into the game is a challenge in its own right, but it’s not enough. I’ve put dozens of people through the headset, and the first thing that almost all of them do is to look for their hands or try to touch something. Current VR tech makes you feel like an invisible, intangible ghost – or worse, a ghost trapped in a body you can’t control.
So far, Oculus has announced that they’re working on the controller problem, but haven’t found a good enough solution yet. Sony’s Morpheus includes PS Move controllers — tracking wands that let you move your hands in games, but suffer from precision and occlusion problems, and don’t feel quite like your real hands.
Several companies are developing VR input solutions that provide a glimpse at what the ideal VR input scheme looks like. In its limited tracking volume, the Leap Motion gives very high precision tracking, and the STEM controller offers a higher-precision PS Move-style experience without the occlusion problems of optical tracking. Control VR offers an inexpensive set of sensors that you wear on your torso and arms that can capture motion on the level of individual finger movements, but the system is bulky and requires precise calibration for good results.
On the haptic side of things, Tactical Haptics is developing devices that use skin shear to create the illusion of pressure on users’ hands. The now-likely-defunct Novint Xio was originally designed as an inexpensive exoskeleton that could provide pressure on the user’s hands in all three dimensions. This would be a cheaper alternative to high-end robotic haptic systems like those provided by Cyberglove.
Here’s what William Provancher of Tactical Haptics has to say about his controller:
“Our controllers are different because the ones you mention at most have vibration feedback. The Reactive Grip feedback in our controllers is capable of creating powerful force-torque-like haptic illusions.”

None of these products are perfect for VR just yet, but together they give a glimpse of what an inexpensive, high-quality haptic VR controller might look like in the relatively near future.
Social VR

Some of you might read that heading and flinch. Don’t worry: whatever the Chicken Littles of the Internet might have told you about the Facebook acquisition, nobody is interested in Farmville in VR. What we’re talking about here is something a lot more interesting.
Take a minute and watch this video:
This system uses clever software to detect user facial expressions from a webcam and then map them onto a virtual avatar. It’s not perfect, but it’s good enough to show clear expression and make good use of nonverbal social cues. By embedding cameras and sensors in the headset, it should be possible to provide facial motion detection on par with commercial motion capture in a year or two.
Now, imagine being in a shared virtual space with someone else, using face tracking and motion controllers to capture expressions, gestures, and body language, and using them to drive character avatars for both of you. You both feel physically present in a space with the other person, and can talk to them while using non-verbal cues to communicate. It’s just like actually being in a room with someone – and, with haptic motion controllers, you could even hug, shake hands, or freely engage in casual social touch.
Social VR experiences are similar to video conferencing, but are deeper and richer in important ways. Eye contact, free motion, and the sense of physically sharing a space with another person are all powerful cues that are lacking in normal video conferencing.
Social VR provides a powerful way to conduct business, make friends, and keep in touch with loved ones when not physically near them. Social VR offers the possibility of online communities that use social cues to encourage civilized conversation and productive dialog: people are nicer to real people face to face than they are to faceless usernames on Internet message boards. For the first time in the history of the Internet, we might be able to create online communities without trolls.
Furthermore, social VR might in some ways be an improvement over actually hanging out with your friends in person! You can’t, in real life, get together with friends and play basketball on the moon. You can’t get an IMAX theater to yourselves and loudly heckle a new movie. You can’t kill each other with sci-fi weapons and then instantly re-spawn. You can’t explore Middle Earth together. In VR, you can. Forget paintball, LARPing, and movie theaters. Social VR is going to do all of that better, and someday it’s going to be the primary way you have fun with your friends.
Here’s Mark Zuckerberg, speaking about the Oculus Rift’s social potential shortly after the company’s acquisition:
“This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.”
The Future of Virtual Reality

If these sound like the same pie-in-the-sky VR dreams you’ve been hearing about since the ’80s, you aren’t completely wrong. A lot of these ideas have been around for a long time. The difference is that now, they’re tangibly close to reality. These aren’t “twenty-years-off” fantasies anymore. These are viable concepts for startups. The money is there, the industry inertia is there, and most importantly, the technology is finally here to do it right. These are experiences that are going to be cheaply available on the consumer market in the next couple of years.
The future of virtual reality that you’ve been dreaming about since you first read Snow Crash is almost here. You’ll be able to try the first wave of that future in a matter of months. It works, it’s amazingly cool, and it has some of the brightest minds in the tech industry working like crazy to make it even better.
Feature Image: “Anna Bashmakova and the Oculus Rift”, Sergey Galonkin