OpenGL-ES 3.1 Spec released at GDC

The Khronos Group made a formal announcement at GDC 2014 of the OpenGL-ES 3.1 API specification. Since the 3.0 spec was released in late 2012, we’ve been waiting for the hardware to catch up to the API. As hardware started to become available, it became apparent that it was much more capable than what 3.0 required; in fact, it anticipated some of the desktop OpenGL 4.x feature set. Thus, just over a year after the 3.0 spec was released, we’re seeing not only the 3.1 spec but actual hardware announcements as well. OpenGL-ES 3.1 is a superset of 3.0 and is fully compatible with 3.0 and 2.0 programs. The major improvements are:

Compute Shaders: Compute shaders are the big addition in 3.1. They bring the ability to do general-purpose GPU computing from within an OpenGL application. Where you might normally need the CPU to do calculations, you can now offload them onto the GPU’s massively parallel, more efficient computational engine: physics calculations, AI, post-processing effects, ambient occlusion, photographic filtering effects, and so on. This alone brings much of the power of the desktop graphics API to the mobile space.
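To give a flavor of what this looks like, here’s a minimal sketch of an ES 3.1 compute shader that integrates a hypothetical particle buffer on the GPU. The buffer layout and names are purely illustrative, not from the spec:

```glsl
#version 310 es
layout(local_size_x = 64) in;

// hypothetical particle buffer: xy = position, zw = velocity
layout(std430, binding = 0) buffer Particles {
    vec4 posVel[];
};

uniform float uDeltaTime;

void main() {
    uint i = gl_GlobalInvocationID.x;
    vec4 p = posVel[i];
    p.xy += p.zw * uDeltaTime;  // simple Euler step, entirely on the GPU
    posVel[i] = p;
}
```

The application side would kick this off with glDispatchCompute() and never touch the particle data on the CPU.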

Separate Shader Objects: It’s now possible to mix and match vertex and fragment shaders, and also to use pre-compiled shaders, though you’ll still need to compile the shaders the first time and store the resulting binaries. This can make program loading much faster by eliminating the compilation and linking steps for the shaders.

An Updated Shader Language: As with the 3.0 spec, the 3.1 spec continues to add features previously found only in desktop OpenGL, making it much easier to support more efficient and advanced shader usage.

Indirect Draw Commands: The ability to submit draw commands from objects in GPU memory, rather than having the CPU kick off drawing, helps make the pipeline more efficient. Combine this with compute shaders and the GPU can compute, update, and render part of the scene itself without any intervention by the CPU.

Enhanced Texture Functionality: Some features from the desktop have made it over as well, including multisample textures, stencil textures, and texture gather.
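As an illustrative sketch of texture gather: textureGather() returns the four texels that bilinear filtering would touch in a single call, which is handy for things like custom shadow-map filtering. The sampler and variable names below are my own:

```glsl
#version 310 es
precision mediump float;

uniform sampler2D uShadowMap;  // illustrative name
in vec2 vUV;
out vec4 fragColor;

void main() {
    // fetch the red component of the 2x2 bilinear footprint in one call
    vec4 four = textureGather(uShadowMap, vUV, 0);
    fragColor = vec4(four);
}
```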

There is also much better support for conformance: a set of conformance tests and a standardized shader compiler, so that individual vendors’ drivers can now be tested against the “standard” compiler. This should help in those situations where some vendors have implemented (or read) the spec differently.

So what’s left out? Tessellation and geometry shaders are the two biggest missing features. Vendors that are moving their desktop hardware to mobile (NVIDIA and Intel) will probably ship drivers with extensions for their hardware. Intel is showing off its “PixelSync” extension (already in its DX drivers), which enables order-independent transparency effects, among other things.

Posted in OpenGL | Leave a comment

OpenGL @ GDC

There are a bunch of happenings @ GDC regarding OpenGL this year.

The Khronos meetings – OpenGL, OpenGL-ES, & WebGL

Meeting room #262 is located on the West Mezzanine level of the Moscone convention center, just down from the South Lobby and above Halls ABC. Attendees must have a GDC conference or exhibitor pass to attend.

Wednesday, March 19, 2014

OpenGL-ES  – 5:00 PM to 6

OpenGL – 6:00 PM to

Thursday, March 20, 2014

WebGL – 5:00 PM to


Tech Talks, Sessions, Courses and Papers at GDC

Massively Parallel AI on GPGPUs with OpenCL or C++

Mon March 17
1:45 PM
Alex Champandard, Andrew Richards (Codeplay)

Avoiding Catastrophic Performance Loss: Detecting CPU-GPU Sync Points

Wed March 19
2:00 PM
John McDonald (NVIDIA)

OpenGL ES 3.0 and Beyond: How To Deliver Desktop Graphics on Mobile Platforms

(Presented by Intel Corp)

Wed March 19
2:00 PM
Chris Kirkpatrick (Intel Corp), Jon Kennedy (Intel Corp)

Getting the Most Out of OpenGL ES

(Presented by ARM)

Wed March 19
3:30 PM
Dave Shreiner (ARM), Tom Olson (ARM), Daniele Di Donato (ARM)

Approaching Zero Driver Overhead in OpenGL

(Presented by NVIDIA)

Thu March 20
1:00 PM
Cass Everitt (NVIDIA), John McDonald (NVIDIA), Graham Sellers (AMD), Tim Foley (Intel)

Bringing Unreal Engine 4 to OpenGL: Enabling High-End Visuals from PC to Mobile

(Presented by NVIDIA)

Thu March 20
2:30 PM
Evan Hart (NVIDIA), Mathias Schott (NVIDIA), Nick Penwarden (Epic Games)
Posted in Conferences, OpenGL | Leave a comment

Sony to announce VR HMD at GDC?

The rumor is that Sony might announce its Oculus competitor at GDC. Speculation has been fueled by two of the presenters (Marks and Mikhailov), whose experience with new technology might signal the announcement of a commercial implementation of the HMD prototypes Sony has been showing around. While the Sony device is probably going to be targeted at the PS4, let’s hope that they 1) price it competitively and 2) publish a multi-platform SDK. I’ve seen many bits of excellent tech die because the parent company wanted to “guide” development, when in reality you want the thing to take off like one-dollar beer on habanero salsa night. The official entrance of a commercially successful company such as Sony into the VR market seems to be further proof that the market is starting to be a viable one.

Here’s the GDC information:

Driving the Future of Innovation at Sony Computer Entertainment

Location: Room 130, North Hall
Date: Tuesday, March 18
Time: 5:45pm-6:45pm
Join Sony Computer Entertainment for a presentation on innovation at PlayStation® and the future of gaming.


Posted in Augmented Reality, Hardware, Technology | Leave a comment

VOGL – Valve’s new OpenGL Debugger

At Valve’s Steam Dev Days there was a presentation on their new OpenGL tracer/debugger, VOGL. Debugging OpenGL applications has always been way too hard. For a while you could debug graphics using DX tools on a Windows machine, but that was a hard thing to maintain, and eventually it got dropped. I applaud Valve for making the effort to fix this situation. Somebody needs to step in and fix it if OpenGL is going to get the acceptance and love it deserves from the game dev community.

Things I like about Valve’s efforts:

  • GL support for 3.x with 4.x planned
  • Extension support
  • Driver benchmarking
  • Open Source
  • Extensive support for trace recording and playback.

Things I’m not so thrilled about:

  • Support for way-old crap (yes, I understand why, but really?) like 1.x support, glBegin/glEnd, and fixed-function pipeline stuff – blech
  • ASM shader support

And stuff I’m (almost) devastated over:

  • Only Linux based (yes, but…)
  • No Android support
  • The focus on existing (read: *old*) Valve games and not *new* games. Really, they are already written and out there. Let’s write new games and a new engine. Use modern OpenGL.

I’m hoping that they will get enough support to correct some of these shortcomings. I know there are other companies working on similar tech, but Valve is a neutral player. With open-source support, I’m hoping they can create something that will be here in the near future and will march forward as the flavors of OpenGL move on.

Posted in OpenGL | Leave a comment

New Oculus Head Mounted Display Wows them at CES

Oculus VR marches on, wowing folks at CES who have gotten a chance to try out the new Crystal Cove prototype: a 1080p OLED display plus newly added positional tracking, courtesy of dots embedded in the headset and monitored by an external camera, which adds the ability to track the unit in 3-space in addition to the HMD’s direction and orientation. These improvements, plus the already low latency of the headset, add to the already excellent user experience of the original SDK and seem to have eliminated some of the motion-sickness complaints about the original.

From Engadget


OculusVR has raised almost $100 million, has delivered over 50,000 SDKs, and has been attracting a lot of attention with some awesome demos and some high-level additions to the company. Maybe VR/AR is really here? Back in the day, when I was trying out some VR HMDs at Siggraph, they suffered from crappy visual resolution and serious lag. I have a tough stomach, so it was more distracting than annoying, but with the current state of tech, OVR (plus the others dog-piling onto VR) might be the tipping point for the commercialization of VR.

Posted in Augmented Reality, Hardware, Technology | Leave a comment

CES 2014: OpenGL-ES Working Group plans new version in 2014

At CES, the Khronos OpenGL-ES Working Group announced its intent to ship a new version of OpenGL-ES beyond 3.0 in 2014. This version will be backward compatible with OpenGL ES 2.0 and 3.0. Not much detail was given, but the announced features were:
  • Compute shaders, with atomics and image load/store capability
  • Separate shader objects
  • Indirect draw commands
  • Enhanced texturing functionality including texture gather
  • Multisample textures and stencil textures
  • Enhanced shading language functionality
The new API will not include tessellation or geometry shaders.
Posted in OpenGL | Leave a comment

EGL – The initialization interface for OpenGL-ES

EGL is the glue between OpenGL and the particular hardware and operating system your program is running on. OpenGL is agnostic to the underlying operating system’s windowing system; EGL is the interface between Khronos rendering APIs (like OpenGL-ES and OpenVG) and the native platform windowing system. It wraps graphics context management, surface/buffer binding, and rendering synchronization, hiding the underlying OS-specific calls behind EGL wrappers. In short, EGL manages the state, contexts, and buffers that both the OS and OpenGL need in order to work together. Specifically, EGL is a wrapper over the following subsystems:

  • WGL – Windows GL – the Windows-OpenGL interface (pronounced wiggle)
  • CGL – the Mac OS X-OpenGL interface (the AGL layer sits on top of CGL)
  • GLX – the equivalent X11-OpenGL interface

EGL is a convenience; there’s nothing preventing you from calling the underlying OS-OGL interface layer directly. In general you won’t have to, as EGL provides the overlapping subset of functionality found across all of the OS-OGL interface layers. EGL not only provides a convenient binding between operating system resources and the OpenGL subsystem, but also provides the hooks to the operating system for when you require something, such as:

  1. Iterating, selecting, and initializing an OpenGL context
    • This can be the OGL API level, software vs. hardware rendering, etc.
  2. Requesting a surface or memory resource.
    • The OS services requests for system or video memory
  3. Iterating through the available surface formats (to pick an optimal one)
    • You can find out properties of the video card(s) from the OS; the surfaces presented will reside on the video card(s) or the software renderer interface.
  4. Selecting the desired surface format
  5. Informing the OS you are done rendering and it’s time to show the scene.
  6. Informing the OS to use a different OpenGL context
  7. Informing the OS you are done with the resources.

If you are a Windows programmer, you might be familiar with DXGI, which is the relatively newer Windows API that handles a similar function for DirectX programmers. For iOS, Apple has EAGL, which is their own flavor of EGL. Android programmers may or may not be exposed to EGL – you can always make an EGL call if you want to do something special, but if you use the NativeActivity class, the EGL calls are done for you.

The basic usage of EGL and similar APIs is as follows:

  1. (Android) Obtain the EGL interface.
    • So you can make EGL calls
  2. Obtain a display that’s associated with an app or physical display
  3. Initialize the display
  4. Configure the display
  5. Create surfaces
    • Front, back, offscreen buffers, etc.
  6. Create a context associated with the display
    • This holds the “state” for the OpenGL calls
  7. Make the context “current”
    • This selects the active state
  8. Render with OpenGL (OpenGL calls, not EGL calls; the OpenGL state is held by the EGL context)
  9. Flush or swap the buffers so EGL tells the OS to display the rendered scene. Repeat rendering until done.
  10. Make the context “not current”
  11. Clean up the EGL resources

After obtaining a display, you initialize it, set the preferred configuration, and create a surface with a back buffer you can draw into.

You kick things off by getting a display connection through a call to eglGetDisplay, passing in either a native display handle or EGL_DEFAULT_DISPLAY.

    // Get the display - for Windows, the native display is the window's DC
    EGLDisplay eglDisplay = eglGetDisplay( GetDC(hWnd) );
    eglInitialize( eglDisplay, NULL, NULL );

    // typical PC attrib list
    EGLint defaultPCAttribList[] = {
        EGL_RED_SIZE,        8,   // 32 bit color
        EGL_GREEN_SIZE,      8,
        EGL_BLUE_SIZE,       8,
        EGL_ALPHA_SIZE,      8,
        EGL_DEPTH_SIZE,      24,  // at least 24 bit depth
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, // opengl-es 2.x conformant
        EGL_NONE
    };

    EGLint numConfigs;
    EGLConfig config;

    // just grab the first matching configuration
    eglChooseConfig(eglDisplay, defaultPCAttribList, &config, 1, &numConfigs);

    // create a surface - note the native Windows window handle
    EGLSurface surface =
        eglCreateWindowSurface(eglDisplay, config, hWnd, NULL);

    // create a context - ask for an ES 2.x context
    EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context =
        eglCreateContext(eglDisplay, config, EGL_NO_CONTEXT, contextAttribs);

    // now make the context current
    eglMakeCurrent(eglDisplay, surface, surface, context);

    // the context is now bound to the surfaces, OpenGL is "live"
    // perform your rendering, then present with eglSwapBuffers()
    // …

    // when done, unbind the context and surfaces
    eglMakeCurrent(eglDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroySurface(eglDisplay, surface);
    eglDestroyContext(eglDisplay, context);
    // terminate the connection to the display, release all resources
    eglTerminate(eglDisplay);

A few comments about the configuration selection. The most important thing this code does is choose a set of attributes that matches your needs. It’s up to you to pick a configuration that is a good match for the hardware you’re running on. For example, if you’re running on a fairly capable GPU, you’d want to pick a configuration that supports a good, high-quality color and depth buffer (assuming that’s what your application needs). So for a PC, you’d want at least a 32 bit color buffer (8 bits for each of the RGBA channels), though you can sometimes get 32 bits per channel for a 128-bit-per-pixel surface. Also choose your depth buffer carefully: you want a natively supported format, which for PC graphics will usually mean 24 or 32 bits over 16. So the attrib list for a PC title might look like this:

    // typical attrib list for PCs or modern mobile devices
    EGLint defaultAttribList[] = {
        EGL_RED_SIZE,        8,   // at least 32 bit color
        EGL_GREEN_SIZE,      8,
        EGL_BLUE_SIZE,       8,
        EGL_ALPHA_SIZE,      8,
        EGL_DEPTH_SIZE,      24,  // at least 24 bit depth
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, // opengl-es 2.x conformant
        EGL_NONE
    };

Mobile devices are generally slower and have smaller screens, so choose the least acceptable values as a starting point. For supporting older mobile devices you might pick one of the lower-precision packed color formats:

    // typical phone/tablet attrib list for older mobile devices
    EGLint defaultOlderMobileAttribList[] = {
        EGL_RED_SIZE,        5,   // at least 5-6-5 color
        EGL_GREEN_SIZE,      6,
        EGL_BLUE_SIZE,       5,
        EGL_DEPTH_SIZE,      8,   // at least 8 bit depth
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, // opengl-es 2.x conformant
        EGL_NONE
    };

Most of the time you’ll call eglChooseConfig and just grab the first configuration. Sometimes that’s the wrong thing to do. You should at least take a look at what configurations are presented and sort by the features that are most important to your app. You’ll typically see the color and depth values changing, and not in the order you might expect. In a future post I’ll post some code that shows how to go about iterating through the list of surface formats and rating them.

Posted in OpenGL | Leave a comment

Oculus Rift – may just be able to succeed, and change the world as we know it

I’ve had some fun playing with my Oculus Rift dev kit. It’s got some great features, it’s got the latency problem whipped, and it just needs a bit of work on tools and upping the resolution. It’s a great proof-of-concept, and if they can survive for another iteration or two, I think they will have something, perhaps something revolutionary.

Two folks I know have gone on to work at Oculus V.R. (OVR), so I know it’s building momentum. I was surprised to find that John Carmack not only was named the CTO, but has actually left id to work at OVR. They now have a Dallas office :-) . So John is a force in the industry, and he’s just the kind of guy VR/AR needs to try to put this stuff into the hands of the consumer. I played around with the older VR/AR HMDs a long time ago, and while the actual experience wasn’t that great (I don’t get motion sickness), the lag and the low resolution were the real killers of the tech back then.

O.R. has the latency pretty much whipped, and when they come out with a Gen2/3 HMD with better resolution (the displays are already here: just pick up any good smartphone) and wireless connectivity (also any smartphone), it will be on the road to becoming a must-have product, at least for the tech-savvy.

We’ve got Google Glass, Technical Illusions’ castAR, the Oculus Rift, Sony’s HMZ-T3W, and whatever Valve is going to show at their Dev Conference in January 2014, so there are enough folks piling onto the notion that AR/VR is almost here that it may finally get some traction in the consumer space. Like most cutting-edge tech, the game players will adopt it first, then everyone else will come around. If you can offload processing to any of the next-gen mobile platforms, connecting wirelessly to the HMD via something like BLE while getting orientation information back, then you’ve really got something: since the offload platform will replace the user’s smartphone, it’s not something extra they have to buy, just a more capable version of something they already own.

It’s going to be awesome!

Posted in Augmented Reality | Leave a comment