AR has a major usability problem – how do you interact with the program? After all, you can’t (easily) carry around a keyboard. MIT’s SixthSense uses a video camera to discern how you are interacting with a projected interface. At the CHI conference this week there’s a presentation on a novel way of using the capacitance of skin (among other effects) to figure out what the user is doing.
Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects
Using a novel Swept Frequency Capacitive Sensing technique, they are able to figure out roughly what gross action or gesture the user is doing, including discerning whether they are touching one, two, three, or more fingers together.
It probably gets confused if it’s raining or you’re sweaty, but not having to interact with a projected interface and instead just “touch typing,” as it were, is definitely pretty cool. Nice introductory video here.
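To make the idea concrete, here’s a toy sketch of how swept-frequency sensing differs from ordinary capacitive touch: instead of one capacitance reading, you sweep a range of excitation frequencies, record the response amplitude at each one, and classify the resulting profile against stored gesture templates. Everything here – the frequency range, the gesture names, the Gaussian-shaped templates – is made up for illustration, not taken from the Touché paper.

```python
import math

# Hypothetical 1 kHz .. 64 kHz excitation sweep (64 steps).
FREQUENCIES = [1_000 * (i + 1) for i in range(64)]

def capacitive_profile(samples):
    """Normalize raw per-frequency amplitudes so classification is
    robust to overall signal strength (assumed preprocessing step)."""
    peak = max(samples) or 1.0
    return [s / peak for s in samples]

def classify(profile, templates):
    """Nearest-neighbor match: return the gesture whose stored profile
    has the smallest Euclidean distance to the observed one."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda name: dist(profile, templates[name]))

# Fabricated templates standing in for per-user training data: each
# gesture changes the body's coupling, shifting where the response peaks.
templates = {
    "one_finger":  capacitive_profile(
        [math.exp(-((f - 20_000) / 8_000) ** 2) for f in FREQUENCIES]),
    "two_fingers": capacitive_profile(
        [math.exp(-((f - 35_000) / 8_000) ** 2) for f in FREQUENCIES]),
    "full_grasp":  capacitive_profile(
        [math.exp(-((f - 50_000) / 8_000) ** 2) for f in FREQUENCIES]),
}

# A noisy observation whose peak sits near the "two_fingers" template.
observed = capacitive_profile(
    [math.exp(-((f - 34_000) / 8_000) ** 2) + 0.02 for f in FREQUENCIES])

print(classify(observed, templates))  # → two_fingers
```

The point of the sweep is that a single-frequency sensor only answers “touching or not,” while the whole frequency profile carries enough shape information to tell gestures apart.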
I was really thrilled when I saw Google’s Glass, a video presentation about augmented reality using some tech mounted on a glasses frame. Then Oakley stated it was working on something similar, tying smartphones to AR glasses (OK, they’re thinking a bit small). Then Mike Abrash from Valve (previously of RAD, Microsoft, and id) posted that he’s working on “wearable computers,” citing the augmented-reality Metaverse from Neal Stephenson’s novel Snow Crash.
I went to school before the Internet took off, and with the power I now have at my fingertips to just plain find stuff out, I’m a hell of a lot more effective at getting stuff done, and I choose the right direction to go in more often than not. In my day job I work on cool personal computing tech, trying to make it even more effective and cooler, and trying to guess how it’ll be implemented in 3 years. I find that reality is way cooler than I think it’ll be.

I’ve watched cell phones turn into GPS units, entertainment systems, video players, cameras, email systems, sensor platforms, crowdsourcing enablers, and realtime networked data-sampling probes, and I realize that in a few short years the computing power in a cell phone, coupled with networked connectivity and a sensor platform, will certainly be able to drive some form of AR. As soon as you have some form of hands-free interface, the “cell phone” goes away, because the phone part is just networked connectivity that’s part of a larger AR package. Glasses, direct visual input, or something similar are the natural way to do it. So yes – “wearable computer” is more apt than “cell phone.”

To have Google and Valve working on tech similar to what MIT researchers Pattie Maes and Pranav Mistry demoed with SixthSense is nine kinds of awesome. I’ve played enough games to know that a HUD coupled with networking and sensors is a game changer. Just read Daniel Suarez’s book Daemon and you’ll get an idea of the power of an AR system. Darknet, anyone?
- Just why was Tim Cook – Apple’s CEO – at Valve a few days ago?
- (UPDATE: Valve says that Tim Cook didn’t visit them…)