Oculus Rift – may just be able to succeed, and change the world as we know it

I’ve had some fun playing with my Oculus Rift dev kit. It’s got some great features, it’s got the latency problem whipped, and it just needs a bit of work on tools and upping the resolution – it’s a great proof-of-concept, and if they can survive for another iteration or two, I think they will have something, perhaps something revolutionary.

Two folks I know have gone on to work at Oculus VR (OVR), so I know it’s building momentum. I was surprised to find that John Carmack was not only named CTO but has actually left id to work at OVR. They now have a Dallas office :-) . John is a force in the industry, and he’s just the kind of guy VR/AR needs to try to put this stuff into the hands of the consumer. I played around with the older VR/AR HMDs a long time ago, and while the actual experience wasn’t that great (I don’t get motion sickness), the lag and the low resolution were the real killers of the tech back then.

The Rift has latency pretty much whipped, and when they come out with their Gen 2/3 HMD – which will need better resolution (already here, just pick up any good smartphone) and wireless connectivity (also any smartphone) – it will be on the road to becoming a must-have product, at least for the tech-savvy.

We’ve got Google Glass, Technical Illusions’ castAR, the Oculus Rift, Sony’s HMZ-T3W, and whatever Valve is going to show at their dev conference in January 2014, so there are enough folks piling onto the notion that AR/VR is almost here that it may finally get some traction in the consumer space. Like most cutting-edge tech, game players will adopt it first; then everyone else will come around. If you offload processing to one of the next-gen mobile platforms, connecting wirelessly to the HMD via something like BLE while getting orientation information back, then you’ve really got something: since the offload platform will replace the user’s smartphone, it’s not something extra they have to buy – it’s just a more capable version of something they already own.

It’s going to be awesome!

Posted in Augmented Reality | Leave a comment

Win8 Metro: Harder than it has to be….

I’ve been playing with Metro off and on for a few months now, my frustration levels mounting every time I try to do something I already know how to do. Most of this, I’ve come to realize, is because of Microsoft’s attempt to morph existing languages and APIs into the walled garden of WinRT. They went and bastardized the languages that Visual Studio supports to enable the use of reference-counted resources, forced nearly every API call to be choked through a dispatch-callback mechanism, and then tried to hide it all through syntactic sugar.

Now, I understand the reasoning behind this. The reference counting and the nonblocking API calls are all designed to foster a fast and fluid user experience that won’t end up eating system resources when an application isn’t active. But in hiding the fact that some API calls result in a callback, I think they are doing a bit too much to conceal the threading model from the programmer – particularly when they make it look like you are writing single-threaded code – nay, they encourage this style of obfuscation. Many of the Basic, JavaScript, and HTML5 programmers out there may be fairly unfamiliar with multithreaded programming; heck, I find that even C# and C++ programmers are usually fairly innocent of multithreading techniques. But the style they’ve come up with does a disservice to programmers. A lot of my work focuses on helping programmers understand how to make their programs run faster and more efficiently on a particular platform, and to understand the tradeoffs and opportunities they are presented with. Microsoft is going out of its way to hide the fact that you’re starting a background task and waiting at a completion point for its completion notification, missing an opportunity to let the programmer do some useful work instead of (seemingly) spinning, waiting for the action to complete.

Granted, you don’t have to do it the way they show in the examples, but that presupposes you know what’s going on under the hood. And the current way they have it set up discourages most programmers from ever discovering there’s a different and potentially more productive way of doing things.

Posted in Windows 8 | Leave a comment

Microsoft decided not to share finished Win 8.1 with developers.. why?

Normally I get access (as an MSDN member) to the RTM (i.e., finished) version of every Microsoft OS release. This is the way most devs get access to the OSs, tools, etc. For developers this is important, since it’s how you typically make those last-minute tests on your software to make sure it’ll work with the retail version when it’s released.

With Win 8/Metro apps this is particularly hard because of the numerous restrictions on deploying Metro apps (you have to build on 8.1 if the app is to be deployed to 8.1; you can’t build on 8.0 for 8.1 unless the app uses nothing new) and on putting things in the store (the public 8.0 store doesn’t work with the 8.1 OS until they turn it on).

Traditionally, the company has made new OSs available to MSDN and TechNet subscribers, as well as volume-license customers. With all the changes in the new OS – particularly given MS’s track record of changing interfaces in the run-up to the RTM of Win 8.0 (where they did give early access) – it seems important to assure devs that their apps will work on the day Win 8.1 is actually released. It’s bewildering that they’d suddenly restrict access like this – I can only suspect they are trying to limit criticism of 8.1 prior to its release. But, like other decisions the company has made in recent years, it doesn’t seem to have been well thought out.

Windows 8.1 has an October 18 retail launch.

Update: Sept 9 – Microsoft has relented under a lot of criticism; Windows 8.1 will be made available on MSDN and TechNet, and the company is also launching a Release Candidate of Visual Studio 2013 for developers. Thank you!

Posted in Windows 8 | Leave a comment

OpenGL-ES 3.0 Specification Published.

The OpenGL ES 3.0 specification was publicly released at SIGGRAPH this year – coinciding with OpenGL’s 20th anniversary. OpenGL ES 3.0 is backwards compatible with OpenGL ES 2.0, enabling developers to incrementally add new visual features to their applications.

OpenGL ES is starting to become the graphics API of choice on a majority of platforms – running on everything from phones and tablets to embedded systems and desktops. Linux, iOS, Android, and even Windows can all run OpenGL ES (assuming drivers are provided). It surprises most folks that OpenGL runs on Windows – but all you need are drivers, and Nvidia, AMD, and Intel all provide OpenGL drivers for most of their video cards when you install the DirectX drivers for them. What’s not too surprising is that OpenGL ES drivers are usually installed along with the desktop OpenGL drivers. Since OpenGL ES is (mostly) a subset of OpenGL – though ES is where most of the innovation is happening – you can now usually use your desktop (Windows or Linux) to develop OpenGL ES apps and have code that’s mostly ready to run on Android or iOS as well – at least from a graphics standpoint.

Some of the new functionality in OpenGL ES 3.0 includes:

  • A collection of required, explicitly sized texture and render-buffer formats, reducing implementation variability and making it easier to write portable applications.
  • Enhanced texturing functionality, including: floating-point textures, swizzles, 3D textures, 2D array textures, LOD and mip-level clamps, seamless cube maps, immutable textures, depth textures, vertex textures, NPOT textures, R/RG textures, and sampler objects – plus required support for the ETC2 and EAC texture compression formats.
  • Enhancements to the rendering pipeline to support advanced visual effects, including: occlusion queries, transform feedback, instanced rendering, and support for four or more render targets.
  • Enhanced GLSL support that now includes full 32-bit floating-point operations.

You can read more about it here.

Posted in Graphics API, OpenGL | Leave a comment

Working with C++0X/C++11: Lambdas – part 4 – using closures

In my last C++11 blog I talked about creating a function that takes a storage location by reference and returns a function that takes a string input, parses it, and stores the resulting value into the storage location. So we’re creating a function that returns a function. The outer function takes a storage location. The inner function takes a string and stores the parsed value in that storage location.

So let’s create a function that takes an integer reference and returns a function; the returned function takes a const char pointer, parses the string to an integer value, and stores it in the int. So far, so good. The tricky part is that what we return is a function object taking the const char pointer as its argument. It looks like this:

    auto store_int = [](int& x) -> function < void (const char*) > { return [&x](const char* y){ stringstream(y) >> x; }; }; 

Let’s go over this in detail;

    auto store_int = 

I’m storing something (TBD, but I know it’ll be a function object) in a variable named “store_int”. The next part is:

    [](int& x)

I’m storing a lambda that takes an int reference and doesn’t capture anything from the enclosing scope (it’s just a function that takes an int reference). Now we need to specify what this function returns:

     -> function< void (const char*) >

… it returns a function object (the std::function template), and that function returns void and takes a const char pointer as an argument. In other words, I’m creating a function that takes an integer reference as its argument, and this function then returns a function object wrapping a function that takes a const char* as an argument and returns void. Now I just need to write the body of the lambda – and it’s going to be *another* lambda!

     { return [&x](const char* y) { stringstream(y) >> x; }; };

So I start the lambda body with an open curly brace, then I return a new lambda which I create in place. This inner lambda captures the reference to argument “x” (which was passed in as an argument to the outer lambda); capture by reference allows it to modify the value in “x”. The inner lambda takes a const char pointer as its function argument, which comes from the inner lambda’s own argument list. Finally, in the body of the inner lambda, I use stringstream to convert the string to an int through the stream extraction operator (it’s less error-prone than atoi, and it works for most data types – I use stringstream whenever I need to parse strings). So this is where the two function arguments actually connect: I parse the string and store the result through the integer reference. But the two arguments are provided in two separate function calls – the outer one and the inner one.

This is the trick – I create one function that takes a reference to the variable I want the parsed result stored in. That function wraps the integer reference up inside another function that takes a char pointer, and that is what gets returned. The integer reference is totally hidden from the outside. When you call the returned function, the string argument gets parsed and stored in the referenced integer. This is a closure. Any time after I create this function, I can call it with a const char* argument, and that string will get converted to an integer and stored in the integer that was referenced when I created the closure.

I’ll post how you’d use this in a more complete implementation in my next post; that will make the magic of closures much more obvious. But to wrap up, let’s look at using what we created above:

    int myInt; // create an integer

    // pass the integer to store_int
    auto mfunc = store_int(myInt);

    // mfunc is a function taking a const char*
    // the reference to myInt is hidden inside mfunc

    mfunc("4"); // use it, the string gets parsed
    // myInt gets set to 4 

So we’ve used closures to hide a reference to a variable and its type. Next time we’ll use this not only to hide variable address and type information, but to hide behavior modifications as well.

Posted in C++0X/C++11, Code | Leave a comment

Augmented Reality: Swept Frequency Capacitive Sensing turns your skin into an input device

AR has a major usability problem – how do you interact with the program? After all, you can’t (easily) carry around a keyboard. MIT’s SixthSense uses a video camera to discern how you are interacting with a projected interface. At the CHI conference this week there’s a presentation on a novel way of using the capacitance of skin (among other effects) to figure out what the user is doing.

Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects

Using a very novel Swept Frequency Capacitive Sensing technique, they are able to figure out generally what gross action or gesture the user is making, including discerning whether they are touching one, two, three, or more fingers together.

It’s probably totally wrong if it’s raining or you’re sweaty, but not having to interact with a projected interface – just “touch typing,” as it were – is definitely pretty cool. Nice introductory video here.

Posted in Augmented Reality, Technology | Leave a comment

Augmented Reality: Suddenly it’s hot – are we reaching a tipping point?

I was really thrilled when I saw Google’s Glass – a video presentation about augmented reality using some tech mounted on a glasses frame. Then Oakley stated it was working on something like that, tying smartphones to AR glasses (OK, they’re thinking a bit small). Then Mike Abrash from Valve (previously of RAD, Microsoft, and id) posted that he’s working on “wearable computers,” citing Neal Stephenson’s augmented-reality Metaverse from his book Snow Crash.


I went to school before the Internet took off, and with the power I now have at my fingertips to just plain find stuff out, I’m a hell of a lot more effective at getting stuff done, and I choose the right direction to go in more often than not. In my day job I work on cool personal-computing tech, trying to make it even more effective and cooler, and trying to guess how it’ll be implemented in three years. I find that reality is way cooler than I think it’ll be. I’ve watched cell phones turn into GPS units, entertainment systems, video players, cameras, email systems, sensor platforms, crowdsourcing enablers, and realtime networked data-sampling probes, and I realize that in a few short years the computing power that’s in a cell phone, coupled with network connectivity and a sensor platform, will certainly be able to drive some form of AR.

As soon as you have some form of hands-free interface, the “cell phone” goes away, because the phone part is just networked connectivity that’s part of a larger AR package. Glasses, direct visual input, or something similar are the natural way to do it. So yes – “wearable computer” is more apt than “cell phone.” To have Google and Valve working on tech similar to what MIT researchers Pattie Maes and Pranav Mistry demoed with SixthSense is nine kinds of awesome. I’ve played enough games to know that a HUD coupled with networking and sensors is game-changing. Just read Daniel Suarez’s book Daemon and you’ll get an idea of the power of an AR system. Darknet, anyone?

  • Just why was Tim Cook – Apple’s CEO – at Valve a few days ago?
  • (UPDATE: Valve says that Tim Cook didn’t visit them…)

Posted in Augmented Reality, Technology | Leave a comment

Standalone DirectX no more – Starting with Win8 new DirectX versions will be OS upgrades

Many folks have been wondering where the DirectX SDK (the developer package for writing DX applications) update has been. Microsoft had been churning them out like clockwork, but – pffft – nothing for over a year. With the advent of hardware-accelerated UI elements (DWM, the Desktop Window Manager) and the optimized software rasterizer (Microsoft’s WARP), it’s pretty obvious that MS has realized that hardware acceleration (even with a software fallback) of the entire desktop is imperative.

This was posted under “Where is the DirectX SDK?” on the MSDN web site, along with the following quote:

Starting with Windows 8 Consumer Preview, the DirectX SDK is included as part of the Windows SDK.

We originally created the DirectX SDK as a high-performance platform for game development on top of Windows. As DirectX technologies matured, they became relevant to a broader range of applications. Today, ubiquity of Direct3D hardware in computers drives even traditional desktop applications to use graphics hardware acceleration. In parallel, DirectX technologies are more integrated with Windows. DirectX is today a fundamental part of Windows.

Because the Windows SDK is the primary developer SDK for Windows, we now ship DirectX as part of the Windows SDK. You can now use the Windows SDK to build great games for Windows.

So starting with Win8 we’re not going to see a lot of innovation in the graphics API anymore, especially if it only gets major updates with a new operating system. Of course there’ll be service packs, but it’s rare for a service pack to modify an API beyond minor tweaks. I suspect that with the folding of DirectX into the operating system, innovation will slow as the graphics system becomes a resource managed by the operating system, rather than just a host for a graphics program.

Posted in Windows 8 | Leave a comment