One of the central challenges for VR/AR companies is making a usable, compelling product once the initial novelty of the technology has worn off. We’ve been playing quite a bit with the Oculus Dev Kit 2 – specifically, working on integrating it with our haptic feedback device to provide as unified an experience as possible. I’ve been making notes since we got started, to make sure I captured my first impressions of the Oculus as a serious product (rather than something worn in a rather contrived demo). Here are the results…
The new 980 compared to my existing 750ti
The main observation is one that I bet most consumers haven’t grokked yet: just how powerful a machine you need to drive the device. Oculus recommends an Nvidia GTX 970 GPU, an Intel i5-4590 processor, and at least 8GB of RAM. That is a very powerful set-up. Even a moderately enthusiastic gamer like myself, it turns out, lacks the horsepower to adequately run the DK2 – I thought I had a reasonable machine with a 750ti and tons of RAM…but I’m afraid not. And by “adequately” I mean at the recommended 90 frames per second, which the 750ti will not typically hit. Even though I was seeing framerates of around 75, I still felt queasy after wearing the device for 30 minutes or more. It’s different from the olden days, when you could just live with sub-30 framerates if you didn’t have a powerful enough machine – I think you’d actually have difficulty using the device at all without the full 90 FPS…it might just be too uncomfortable. Turning my head quickly back and forth created the sensation of…ummmmm…being rather intoxicated – I was seeing double images of everything!
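To put that 90 FPS target in perspective, it helps to look at the per-frame time budget it implies. A quick back-of-the-envelope sketch (the numbers here are just the framerates discussed above, nothing Oculus-specific):

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

# Oculus's recommended target vs. what my 750ti was actually managing
for fps in (90, 75, 30):
    print(f"{fps:>2} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
```

At 90 FPS the renderer has barely over 11 ms to produce each frame – and it has to do it twice, once per eye – which goes a long way toward explaining why mid-range cards like the 750ti fall over.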
Behold my massive GPU!
I decided to bite the bullet and buy a 980 and an upgraded processor, but together these run around $700. So my prediction is that there will be many, many enthusiasts out there who will think they can get away with an Oculus and less-than-stellar hardware – I just don’t think this will be the case. It will be interesting to see how this plays out over time, though happily processing power has a wonderful tendency to get cheaper with time! Maybe it will be like Tesla’s first car – it cost over $100,000 and only appealed to the wealthy or true believers, but it was a completely viable business strategy for them to get a product quickly into the hands of enthusiastic consumers, and then to move downmarket.
Beyond the rather stringent hardware requirements, in the midst of the extensive coverage and excitement about VR, it’s easy to forget that every consumer-grade VR product out there is still seriously young. There were a number of idiosyncrasies we worked through to get the device up and running. For example, which runtime version is installed makes a massive difference, and many versions seemed to break at least some compatibility with previous ones. I expect this will be solved with time – it’s just so very early in the life of this technology.
As far as getting the right version of the runtime goes, I downloaded 10 example projects from share.oculusvr.com and of these, only two worked properly right out of the box. The others didn’t launch at all, simply responding with an error message. Again: immature product. I have every confidence this will be solved with time.
As for the experience itself, for me it’s a bit like sitting waaaaay too close to a TV. You definitely can see each pixel, which initially demands your focus, but as you get used to it and the overall image resolves, your eyes will eventually be tricked into focusing on the larger picture. The trick certainly isn’t perfect, and it made me feel off from time to time. Not motion sickness so much but just kind of…yucky. I think this may largely be solved by a better GPU…so we’ll see.
That said, there were a few demos that I got working right off the bat, and they were great fun, really showcasing the tech. The coolest demo I played was a racing game that placed me squarely in the driver’s seat of a high-performance race car. A quick glance up revealed my rearview mirror, nicely simulating its real-world function without taking away from what’s going on in front of the windshield.
My time with the Oculus was fun, but you’ve got to reset your expectations (at least for now) for visual fidelity. Now that we’re used to seeing absolutely gorgeous high-def visuals, it’s kind of a bummer to see the pixelated graphics. It’s a bit like playing Gran Turismo 2. I’m sure this will be fixed as display technology improves, if it’s not already fixed for the consumer edition.
After trying out these projects, it’s clear that there’s a steep learning curve and an art to creating good VR experiences. There are so many considerations with the new tech – how to prevent motion sickness, how to guide a user to look where they’re supposed to, and a host of other issues that haven’t even occurred to us yet.
And one more thing – wearing an HMD is a deeply isolating experience. You lose all sense of what’s going on around you, which is a bit spooky. There were window washers at our place yesterday, and they could easily have come and gone – gawking at the uncoordinated lunatic wearing the HMD and headphones – and I never would have known! Perhaps I’ll just double bolt the doors and try not to think too much about it…
I think of it this way: we’ve been able to believably replace audio for a while now. A good pair of headphones can instantly transport you somewhere else and replace sounds in a truly authentic way. We’ve still got a ways to go, but soon, HMDs may be able to do the same for the eyes, showing us something artificial, but pretty damn believable just the same.
So as far as we’re concerned, that just leaves touch to virtually re-create. Well, there’s still taste and smell, but I’ll let some other pioneer handle those. I don’t think we’ll ever be able to create believable touch experiences for people, outside of a few highly-constrained examples, but there’s a huge amount of empty space to play with along that road. I’ll continue to post on our experiments as we go. My next big task is to create a unified OmniWear-Oculus application in Unity…
The story of OmniWear Haptics starts with football. As sports fans know all too well, concussions have become a scandal in the NFL, maybe even an existential threat to the game. Earlier this year, a settlement was reached between the NFL and a group of players suffering from neurological conditions that may have been caused by their years playing football. Initially, the NFL agreed to pay up to $675 million to players with diagnosable neurological conditions, but this limit was later removed and there is now no cap on damages the League might have to pay. Despite ample research and a mass of high-quality data, we’ve yet to see a truly effective means of preventing concussions in football other than, well…not playing football.
We came into the picture in April 2014, when we were approached to brainstorm some innovative “out-of-the-box” ideas for solving this problem. We gathered a number of experts in the field, from neurologists to NFL officials to sports kinesthesiologists to tackle this mandate. The ideas ranged from the fanciful (airbags that extend out from a helmet to cushion an oncoming blow) to the more practical (cervical spine support that deployed only when needed) to the just plain weird (a helmet that induces hypothermia upon impact).
One idea, however, seemed to resonate more than the rest: giving players advance warning of an impending hit. The inspiration, oddly enough, was Spiderman. Spiderman is endowed, naturally, with a number of superhuman powers, one of which is the ability to detect impending danger. This “Spidey Sense” allows Spiderman to dodge attacks, navigate when disoriented, and even find hidden objects. It is manifested by a tingling feeling at the base of his skull. Now, why can’t we give Spidey Sense to people in real life? Research shows that if a player tenses his neck muscles before the impact, the acceleration/deceleration of his brain is lessened, which should correspondingly lessen the severity of concussions. The problem is that players’ eyes and ears are already fully engaged in actually playing the game, and are also severely compromised by the helmet itself. So a visual or auditory warning probably wouldn’t work. But what if we used their sense of touch…
We embarked on a limited study of this particular concept and, in doing so, became more and more enamored with the idea: tactile signals can be used to augment your awareness of what’s going on around you in a way that’s not otherwise distracting. With a bit of training, these should become intuitive, almost like a sixth sense. As we contemplated this idea further, we became excited about the wide range of areas where just such a sense would be valuable: aviation, motorcycle and bicycle blind-spot monitoring, workplace safety (especially in noisy or sight-restricted environments), coordinating the movements of a sports team during training, squad-level combat operations, and of course gaming.
As we studied the subject, we became convinced that tactile feedback opens up an entirely new channel for communications. Most of the haptic technologies in the market now are based on simulating what you would feel in the real world (think force-feedback or vibration motors in game controllers), but what about using the same haptic hardware as a way of conveying interesting information? Moreover, what if the device that conveys this tactile information is essentially a comfortable garment, such that you essentially just forget it’s there? It would be like having a sixth sense! Yes, it would be just like having Spidey-Sense…
After some thinking along these lines, we concluded that this idea was clearly worth seriously investigating, and we built a series of prototypes to test the concept. We decided to go with the head, for reasons I’ll elaborate on later, and tested a number of configurations of the haptic actuators. At first we built a cap with 35 vibrating motors glued into it. This created an…interesting sensation, a bit like having a hive of bees living in your hair, but it wasn’t quite what we were looking for. We experimented with dozens of designs, from hard hats to swim caps to cycling skullcaps. These experiments eventually settled on the current design, which uses just thirteen actuators arranged in concentric circles around the cap.
On the software side, we chose two open-source first-person shooter games to integrate: AssaultCube and Xonotic. These were fun, fairly modern, and because they were open source, offered us the best shot at getting something working quickly so we could iterate on our design. As time went on, we implemented a growing number of haptic modes, most of them centered around situational awareness: augmenting the player’s sense of what’s around him in the virtual environment. This could be enemy players, teammates, interesting items, or traps. Without needing to look at a map, you gain an intuition about where things are around you. We continued with hit-detection that indicates where you’re being shot from, health warnings, and wall detection (this last feature is really aimed at fully-occluded head-mounted displays).
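The core of these situational-awareness modes is conceptually simple: take the direction to something interesting, make it relative to where the player is facing, and pulse the nearest actuator in the ring. Here’s a toy sketch of that mapping – the actuator numbering, the zero-at-the-front convention, and the function names are all my own assumptions for illustration, not our actual firmware:

```python
NUM_ACTUATORS = 13  # a single ring of actuators around the cap

def actuator_for_bearing(player_yaw_deg: float, target_bearing_deg: float) -> int:
    """Pick the ring actuator closest to a target's direction, relative to
    where the player is facing. Index 0 is assumed to sit at the front of
    the cap, with indices increasing clockwise."""
    # Direction to the target in the player's own frame of reference
    relative = (target_bearing_deg - player_yaw_deg) % 360.0
    # Snap to the nearest of the 13 evenly spaced actuators
    return round(relative / (360.0 / NUM_ACTUATORS)) % NUM_ACTUATORS

# A target dead ahead maps to the front actuator, wherever the player faces
print(actuator_for_bearing(0.0, 0.0))    # front
print(actuator_for_bearing(90.0, 90.0))  # still front
```

One subtlety this sketch surfaces: with an odd count like thirteen, no actuator sits exactly at the back of the head, so a “directly behind you” cue lands on one of the two rearmost motors.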
When in the virtual environment, how do you know what’s in the real environment?
Ultimately, we were seeking something simple, cheap, comfortable, yet effective that we could get to market as quickly as possible, since our learning would be greatly accelerated as soon as we put these into the hands of clever developers. Although our focus has been on gaming, we view the device as a general-purpose interface for use in numerous settings. Having such a device gives developers a canvas to use for communicating with their users in entirely new ways.
That’s really the genesis of the haptic cap – our vision is that this technology will someday be incorporated into many products where people need a gentle, intuitive, non-obtrusive way of getting information about what’s going on around them, as well as being a stand-alone human-computer interface. (more…)