I’ve had some fun playing with my Oculus Rift dev kit. It’s got some great features, it’s got the latency problem whipped, and it just needs a bit of work on tools and upping the resolution – it’s a great proof-of-concept, and if they can survive for another iteration or two, I think they will have something, perhaps something revolutionary.
Two folks I know have gone on to work at Oculus V.R. (OVR), so I know it’s building momentum. I was surprised to find that John Carmack not only was named the CTO, but has actually left id to work at OVR. They now have a Dallas office. John is a force in the industry, and he’s just the kind of guy VR/AR needs to try to put this stuff into the hands of the consumer. I played around with the older VR/AR HMDs a long time ago, and while the actual experience wasn’t that great (I don’t get motion sickness), the lag and the low resolution were the real killers of the tech back then.
Oculus has the latency pretty much whipped, and when they come out with their Gen2/3 HMD – which will need better resolution (already here – just pick up any good smartphone) and wireless connectivity (also any smartphone) – it will be on the road to becoming a must-have product, at least for the tech savvy.
We’ve got Google Glass, Technical Illusions castAR, the Oculus Rift, Sony’s HMZ-T3W, and whatever Valve is going to show at their Dev. Conf. in Jan. 2014, so I think there’s enough folks piling on the notion that AR/VR is almost here that it may finally get some traction in the consumer space. Like most cutting edge tech., the game players will adopt it first, then everyone else will come around. If you offload processing to any of the next-gen mobile platforms, wirelessly connect to the HMD via something like BLE, and get orientation information back, then you’ve really got something. Since the offload platform will replace the user’s smartphone, it’s not going to be something extra they have to buy – it’s just going to be a more capable version of something they already own.
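To get a feel for why a BLE-class link is plausible for the orientation channel, here’s a hypothetical sketch of how compact a head-orientation update can be. The function names and the 10-byte packet layout (a 16-bit sequence counter plus a unit quaternion packed as four 16-bit fixed-point values) are my own illustration, not anyone’s actual protocol:

```python
# Sketch: packing head-orientation data for a low-bandwidth wireless link.
# A unit quaternion (w, x, y, z) scaled to int16 keeps each update to
# 10 bytes: 2-byte sequence counter + 4 * 2-byte components.
import struct

def pack_orientation(seq, quat):
    """Encode a unit quaternion into a compact little-endian packet."""
    fixed = [int(round(c * 32767)) for c in quat]  # scale [-1, 1] to int16
    return struct.pack("<H4h", seq & 0xFFFF, *fixed)

def unpack_orientation(packet):
    """Decode the packet back into (sequence, quaternion) form."""
    seq, *fixed = struct.unpack("<H4h", packet)
    return seq, tuple(c / 32767 for c in fixed)

pkt = pack_orientation(1, (1.0, 0.0, 0.0, 0.0))
print(len(pkt))  # 10 bytes per update
```

At 10 bytes per update, even a few hundred updates a second is well within what a low-energy radio can carry, which is the point: the heavy rendering stays on the phone-class device, and only sensor data and video cross the link.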
It’s going to be awesome!
AR has a major usability problem – how to interact with the program – after all, you can’t (easily) carry around a keyboard. MIT’s Sixth Sense uses a video camera to discern how you are interacting with a projected interface. At the CHI conference this week there’s a presentation on a novel way of using the capacitance of skin (among other effects) to figure out what the user is doing.
Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects
Using a very novel Swept Frequency Capacitive Sensing technique they are able to figure out generally what gross action or gesture the user is doing, including discerning if they are touching one, two, three, etc. fingers together.
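The recognition side of that idea can be sketched very simply. This is a hypothetical illustration, not the Touché authors’ code: assume the hardware gives you a capacitive response profile sampled across the swept frequencies, and classify a new profile by nearest-neighbor match against recorded gesture templates. The template names and toy numbers are made up:

```python
# Sketch of gesture classification over swept-frequency capacitive profiles.
# Each profile is the response magnitude at each frequency bin of the sweep.
import math

def distance(a, b):
    """Euclidean distance between two capacitive response profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(profile, templates):
    """Nearest-neighbor match against previously recorded gesture templates."""
    return min(templates, key=lambda name: distance(profile, templates[name]))

# Toy template profiles (entirely invented values for illustration).
templates = {
    "one_finger":  [0.9, 0.7, 0.4, 0.2],
    "two_fingers": [1.2, 1.0, 0.6, 0.3],
    "full_grasp":  [2.0, 1.8, 1.3, 0.9],
}

print(classify([1.1, 0.9, 0.5, 0.3], templates))  # prints "two_fingers"
```

The real system uses a trained classifier over many more features, but the shape of the problem is the same: different ways of touching produce measurably different frequency-response curves, and you match the curve you see to the ones you’ve learned.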
It’s probably totally wrong if it’s raining or you’re sweaty, but not having to interact with a projected interface and instead just “touch type,” as it were, is definitely pretty cool. Nice introductory video here.
I was really thrilled when I saw Google’s Glass, a video presentation about Augmented Reality using some tech mounted on a glasses frame. Then Oakley stated it was working on something like that, tying smartphones to AR glasses (OK, they’re thinking a bit small), and then Mike Abrash from Valve (previously from RAD, Microsoft, id) posted that he’s working on “wearable computers”, citing Neal Stephenson’s augmented reality Metaverse from his book Snow Crash.
I went to school before the Internet took off, and with the power I now have at my fingertips to just plain find stuff out, I’m a hell of a lot more effective at getting stuff done and choosing the right direction to go in more often than not. In my day job I work on cool personal computing tech, trying to make it even more effective and cooler, and trying to guess how it’ll be implemented in 3 years. I find that reality is way cooler than I think it’ll be.

I’ve watched cell phones turn into GPS units, entertainment systems, video players, cameras, email systems, sensor platforms, crowdsourcing enablers, and realtime networked data sampling probes, and I realize that in a few short years the computing power that’s in a cell phone, coupled with networked connectivity and a sensor platform, will certainly be able to drive some form of AR. As soon as you have some form of hands-free interface, the “cell phone” goes away, because the phone part is just networked connectivity that’s part of a larger AR package. Glasses, direct visual input, or something similar are the natural way to do it. So yes – “wearable computer” is more apt than “cell phone”.

To have Google and Valve working on tech similar to what MIT researchers Pattie Maes and Pranav Mistry demoed with their SixthSense AR tech is nine kinds of awesome. I’ve played enough games to know that a HUD coupled with networking and sensors is game changing. Just read Daniel Suarez’s book Daemon and you’ll get an idea of the power of an AR system. darknet anyone?
- Just why was Tim Cook – Apple’s CEO – at Valve a few days ago?
- (UPDATE: Valve says that Tim Cook didn’t visit them…)