From out of Virtual Left Field – Magic Leap arrives!

There’s a new player in town that’s bound to shake things up in the VR/AR community. Magic Leap has pretty much come out of nowhere and has thudded to earth with the subtlety of an asteroid strike.

Even if you doubt this, just look at some of the particulars. Magic Leap is a Hollywood, Florida start-up. Rony Abovitz, the eccentric President, CEO & Founder of Magic Leap, was a co-founder of surgical robotics firm Mako Surgical, which was sold for $1.65 billion in December of 2013. Now he’s hit the ground running and apparently has tech that’s turning heads and garnering a lot of capital. Magic Leap received $50 million in a first round of investment in February. What’s left a lot of folks gaping is the $542 million they got in a second round in October – with a large part of that coming from Google (corporate, not the investment arm), plus Qualcomm, Legendary Entertainment and some other VC firms.

So now they have nearly $600 million to play with – they are hiring like mad. But what are they doing?  From their press release:

With our founding principles our team dug deep into the physics of the visual world, and dug deep into the physics and processes of our visual and sensory perception. We created something new. We call it a Dynamic Digitized Lightfield Signal™ (you can call it a Digital Lightfield™). It is biomimetic, meaning it respects how we function naturally as humans (we are humans after all, not machines).

In time, we began adding a number of other technologies to our Digital Lightfield: hardware, software, sensors, core processors, and a few things that just need to remain a mystery. The result of this combination enabled our technology to deliver experiences that are so unique, so unexpected, so never-been-seen-before, they can only be described as magical.

We are building a world-class team of experience developers, and are reaching out to application wizards, game developers, story-tellers, musicians, and artists who are motivated by just wanting to make cool stuff.

So what does this mean? According to some accounts from folks who’ve seen the technology, and if you glean info from their patents, it’s apparently a very realistic projection system that either projects onto something over the eyes or directly into the eyes. As creepy as that sounds, the latter is called a Virtual Retinal Display, and I remember work being done on it at the Human Interface Technology Lab at the University of Washington in the ’90s. Couple that with some sort of head-mounted display (carrying not only the projectors, but a front-facing RGBZ camera, speakers, headphones and a location/orientation tracker) and you’ve got a pretty awesome VR *or* AR system. It’s possible to calculate where (direction AND distance) the user is looking and tailor the display to be focused correctly – this from the reference to a lightfield signal (see Lytro). Throw in some input gloves (ideally haptic) and you’ve got a pretty awesome platform. The awesome part is that it crosses the line between the very practical – think Google Glass on steroids, a HUD that’s able to interact with the real-world scene your eyes are currently viewing – and the very entertainment-oriented: a totally immersive VR experience with a Game/VR HUD that encompasses your total field of view, or high-def movies projected into your eyes.

I find the most fascinating thing to be the involvement/investment by Google corporate. Given the bad press Glass has garnered in the last year and the lack of any mention of it at Google I/O this year, one starts to wonder. The folks at Google are smart – perhaps it’s time to spin off the tech. And Florida is pretty far from Silicon Valley.



Posted in Augmented Reality, Graphics Hardware, Virtual Reality

Android 5.0 “Lollipop” debuts the OpenGL-ES 3.1 API

Android 5.0 – Lollipop – arrives. And it’s got lots of developer goodies in it.

ART Runtime

Google has replaced “Dalvik” (the Java-esque runtime, which was a just-in-time compiling virtual machine) with ART – the new Android RunTime – an ahead-of-time compiling virtual machine. This should make apps faster, and it also brings in 64-bit support automatically for Java apps. Native code will need to be recompiled, but the new NDK adds 64-bit support for all platforms. 64-bit is important because it frees you from the roughly 3 GB memory limit on 32-bit devices.

OpenGL-ES 3.1 and Android Extension Pack

The Android 5.0 SDK is what devs will use to develop for Android Lollipop, and there’s a new “Android Extension Pack” that will help game devs take advantage of what’s in OpenGL-ES 3.1, like the new shader models, and it has support for new hardware platforms as well. Out of the box you’ll get support for compute shaders, much better instancing support, new compressed texture formats, better texture size support and more render buffer formats.

With the AEP you’ll get access to newer features that the underlying graphics hardware supports but that aren’t in the OpenGL-ES 3.1 spec. Intel HD Graphics, Adreno 420, Mali T6xx, PowerVR Series 6 and Nvidia Tegra K1 should all have some extensions supported, so you might find support for tessellation and geometry shaders (likely on most), ASTC texture formats, new interpolation models, etc.
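Since the AEP surfaces these capabilities as GL extensions, an app has to check for them at runtime. Here’s a minimal sketch – the has_extension helper is my own (not from any SDK), and in a real ES context you’d pass it the string returned by glGetString(GL_EXTENSIONS); the AEP advertises itself under the token GL_ANDROID_extension_pack_es31a:

```c
#include <string.h>

// Return 1 if `name` appears as a complete token in the space-separated
// extension list `extensions`, 0 otherwise. A plain strstr() alone isn't
// enough, since one extension name can be a prefix of another.
int has_extension(const char *extensions, const char *name)
{
    const char *p = extensions;
    size_t len = strlen(name);

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == extensions || p[-1] == ' ');
        int ends_ok   = (p[len] == ' ' || p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```

With a live context the check would look like has_extension((const char *)glGetString(GL_EXTENSIONS), "GL_ANDROID_extension_pack_es31a").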

This brings Android graphics on par with the capabilities of PCs and consoles – it’s now nearly the same API, just not as fast (yet). This will change.


Posted in Android, Graphics Hardware, OpenGL

Android OpenGL-ES 3.0 surpasses 25%

OpenGL-ES 3.0 support among Android devices is now over 25%.


The folks at Google must have been busy the last month – they forgot to update the Dashboard for October – which is why there’s a gap between the last two points. (Yes, somebody noticed). With the release of Lollipop we should soon start to see OpenGL-ES 3.1 numbers as well.

Posted in Android, OpenGL

My Oculus DK2 arrived!

I heard about a week ago that folks were starting to get their Oculus DK2s (Dev Kit 2). The big change is an improvement of the display to be more VR-friendly – the low-persistence OLED display in the DK2 is meant to “eliminate motion blur and judder”, and it goes a long way to that end. You now have a camera to track the HMD (and your head) in six degrees of freedom (pitch, yaw and roll, plus XYZ position), along with the accelerometers in the HMD. And they finally put headphones on the HMD! When I got to try one at GDC it was instantly immersive and provided a completely believable VR experience with no disorientation (I was sitting), and it was very intuitive to control and look around.

Here’s what my DK2 looks like.


PROS: Ergonomics of the headset are about the same (fit and heft), but the single cable from the headset, a nicer, sleeker HMD enclosure, 960×1080/eye with lower persistence and the better head tracking are definite improvements for the VR experience. There are headphones on the HMD for 3D positional sound as well. The new headset has an (optionally powered) USB port for adding accessories, a plethora of AC power connector styles (a nice touch) and a new SDK. Oh yeah – and it’s $350.

CONS: None of the old DK1 demos will run without recompilation – so it’s harder to find content. The new SDK is required. Still not the consumer model, but it’s a significant step in the right direction. The plastic carrying case of the DK1 is now a cardboard one :-( .

I’m thrilled to start working with the new DK2 and I’m really looking forward to posting some of the results here. I’m a firm believer in the benefits of what VR/AR can become.

Posted in Augmented Reality

OpenGL 4.5 Specs Released at Siggraph

The Khronos Group publicly released the OpenGL 4.5 specification at Siggraph this week. The two biggest changes are OpenGL ES 3.1 compatibility (including ES shaders) and DX11 feature emulation. The first will make it easier to write OpenGL or OpenGL-ES apps for any platform on a desktop. The second will make it easier to port DX11.2 apps to OpenGL. The main list of new features is:

  • Direct State Access (DSA) – object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility;
  • Flush Control – applications can control flushing of pending commands before context switching – enabling high-performance multithreaded applications;
  • Robustness – providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset affecting any other running applications;
  • OpenGL ES 3.1 API and shader compatibility – to enable the easy development and execution of the latest OpenGL ES applications on desktop systems;
  • DX11 emulation features – for easier porting of applications between OpenGL and Direct3D.
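To make the DSA change concrete, here’s a rough sketch of what it looks like in code (assuming a 4.5 context and extension loader are already set up – the buffer, size and data here are purely illustrative):

```c
GLuint buf;
const GLsizeiptr size = 1024;
const void *data = NULL;  // illustrative; would point at real vertex data

// Pre-4.5 style: state can only be edited through a context bind point.
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);

// GL 4.5 DSA style: the object is created and edited directly by name,
// leaving the context's bindings untouched.
glCreateBuffers(1, &buf);
glNamedBufferData(buf, size, data, GL_STATIC_DRAW);
```

Middleware benefits most here: a library can tweak its own objects without saving and restoring whatever the host application currently has bound.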

Most of the changes deal with API alignment – corralling the proliferation of syntactic differences between OpenGL, OpenGL-ES and WebGL – and with making WebGL a bit more secure, something browsers have required for a while now. This will allow OpenGL devices to run GL-ES apps, and for WebGL it will remove some obstacles that have prevented wider adoption. OpenGL-ES will again be an API-aligned subset of OpenGL.

The DX11 emulation will allow easier porting by reproducing some of DX11.2’s implementation details in the OpenGL API – reducing the need for an extensive rewrite by simulating some of DX11’s quirks/API features.

You can read all about the news here.

Posted in Conferences, OpenGL

20% of Android devices are OpenGL-ES 3.0 capable

OpenGL-ES 3.0 adoption is continuing at a rapid pace. We’re still on track for a third of all Android devices to be OpenGL-ES 3.0 capable by year’s end, and we haven’t even seen the newest hardware due out in a few months. AND Google has announced the Android Extension Pack and OpenGL-ES 3.1 support for release with the next Android version!


Posted in Android, Hardware, OpenGL

Procedurally Distributing Game Objects

I was discussing procedural content with someone last week and was reminded of some research into space-filling distributions and their use in games. One interesting procedure I’ve run across is called the Halton sequence, and it has some interesting properties. It’s space-filling, deterministic, and pseudo-random. As such, it’s not only great for distributing things seemingly at random over a surface, but the objects will never overlap and can be made to cluster. You can read a nice description of how it was used in the game Spore in Maxis’ 2007 Game Developers Conference presentation.

It’s not a terribly efficient algorithm if you’re generating values one at a time on the fly, but if you know you’ll need a block of them it’s much more efficient to generate a table at a time. This is because each value in the sequence can be derived from the previously generated one – similar to a Fibonacci sequence – which is much cheaper than recomputing from the index every time. You can create multiple distributions because one of the input parameters is a prime number, so it’s easy to generate 2D, 3D, etc. sequences just by providing a different prime for each dimension you need.

It’s better if we look at it in action. Let’s look at a random distribution of 256 points via rand(). The first quarter generated are red, the second are blue, the third green, the fourth white.


You get the random scattering you’d expect, with some too near or even intersecting each other.

Now let’s use a Halton Sequence with a prime of 2 for the horizontal distribution and 3 for the vertical with the same colorization scheme.


See how much more evenly distributed the colors are? And how “later” colors cluster around earlier ones? You can use this to distribute “types” evenly and cluster objects around them – for example, trees in the first pass, then (continuing with the same Halton sequence) tall bushes, then shorter shrubs, then grasses. The Halton sequence will keep them from overlapping while filling in the space around already-placed objects.

It’s not quite that easy in practice, but it gets you 95% of the way there. With some filtering techniques and some care about the primes you use (some start off too well-ordered – see the Wikipedia article) you can put the generation tools in the hands of your artists and generate tons of seemingly natural, yet randomized, procedurally grouped content. YMMV – it will require some tweaking to get that last 5%.

And it’s not just geometric objects you can place, but NPCs (or clusters of NPCs), loot, terrain features – you can even use it to procedurally generate features in texture maps (like spots on a leopard). I love procedurally generated content, and this is one of the tools I use to get great-looking environments.

OK – what does the code look like? Here’s a very simple and unoptimized version: you start off with some index value (zero works initially) along with a prime number as the base, then you increment the index – keeping the same prime – to generate the next number in the sequence.

// A simple Halton sequence generation routine.
// indx is the index into the sequence (starting at 0 or 1 is always good)
// and base is a prime number.
// Increment indx for the next value in the sequence.
float halton(int indx, int base)
{
    float b = (float)base;
    float i = (float)indx;
    float h = 0.0f, digit = 0.0f, f = 1.0f/b;

    while (i > 0.0f)
    {
        digit = (float)((int)i % base);
        h += digit * f;
        i  = (i - digit) / b;
        f /= b;
    }
    return h;
}


It’s not efficient to generate each number one at a time like this, but it works. A better way is to generate an array sequentially, using each result to compute the next value in the sequence – a much more efficient calculation. Then you can generate batches of numbers with much less overhead (or keep some state resident in the sequence generator).
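As a sketch of that batch idea (these helper names are my own, not from any library): the next Halton value can be derived directly from the previous one, so filling an array never re-walks the full digit expansion. Doubles and a small epsilon keep rounding error under control for non-power-of-two bases.

```c
// Given the previous value r of the Halton sequence for `base`,
// compute the next value directly. Conceptually this finds the first
// digit (from the least significant end) that can be incremented
// without carrying; the epsilon absorbs floating-point rounding error.
double halton_next(double r, double base)
{
    double f = 1.0 - r;
    double h = 1.0 / base;
    while (f <= h + 1e-9)
        h /= base;
    return r + (base + 1.0) * h - 1.0;
}

// Fill a batch of n values, continuing the sequence from r0 (0.0 to start).
void halton_batch(double *out, int n, double r0, double base)
{
    double r = r0;
    for (int i = 0; i < n; ++i) {
        r = halton_next(r, base);
        out[i] = r;
    }
}
```

Calling halton_batch(xs, 256, 0.0, 2.0) and halton_batch(ys, 256, 0.0, 3.0) would reproduce the 2D point set shown in the plots above.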

Posted in Code, Graphics

OpenGL-ES 3.1 & the Android Extension Pack to be supported in the next Android “L” release

At Google I/O, Google announced that OpenGL ES 3.1 and the Android Extension Pack are going to be released in the upcoming Android L release. OpenGL ES 3.1 brings cleaner shader support and compute shaders, while the Android Extension Pack is a set of extensions to OpenGL ES that provides support for tessellation and geometry shaders and ASTC compression formats. This is an important step because, for those GPUs that support 3.1 (the majority of those that support 3.0), it will enable some rendering techniques that have long been used by desktop rendering systems, including:

  • HDR Tone Mapping
  • More Efficient/Better Smoke/Particles Effects
  • Deferred Shading
  • Global Illumination & Reflection
  • Physically Based Shading

“Quite literally, this is PC gaming graphics in your pocket.” – Dave Burke, Android engineering director at Google.

Posted in Android, OpenGL