EGL: Understanding eglChooseConfig, then ignoring it

A few months ago I posted a talk on initializing OpenGL-ES using the EGL API. Now I'm going to walk you through how to actually get the configuration you want. Nearly all of the OpenGL-ES code I've seen (including the Android SDK samples) provides boilerplate that asks for the configs meeting your requirements and then just takes the first config in the list. Usually this is the exact WRONG configuration. It will certainly work, but it's usually the "most capable" configuration, whereas what you want is the "best for my needs" configuration.

What you are actually choosing is the format of the "Surface" (EGL terminology for the render target, aka the destination buffer, where the output will be rendered). If you are coming from Windows, this is the "pixel format descriptor"; on Mac it's the "pixel format object". You look at what you are rendering and make a decision. For example: I need to render to an RGB color buffer with 8 bits per channel, with a 16-bit depth buffer, and an 8-bit stencil buffer.

So let's review the code that you typically see in an OpenGL-ES app to select a Surface.

    // Get Display Type
    EGLDisplay eglDisplay = eglGetDisplay( EGL_DEFAULT_DISPLAY );
    eglInitialize( eglDisplay, NULL, NULL);

    // typical high-quality attrib list
    EGLint defaultAttribList[] = {
        // 32-bit color
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, 8,
        // at least 24-bit depth
        EGL_DEPTH_SIZE, 24,
        // want an OpenGL-ES 2.x capable context
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

    EGLint numConfigs;
    EGLConfig config;

    eglChooseConfig( eglDisplay, defaultAttribList,
                     &config, 1, &numConfigs );

Since we ask for just one config, we get just one config: the first one in EGL's sorted list, which may not be the one we want. What you really need to do is two steps:

  1. Make the call to eglChooseConfig as before, but pass a null pointer for the configs parameter. This returns the total number of configs that match the description in the numConfigs parameter.
  2. Allocate an array of EGLConfigs big enough and make the same call again, this time passing in the new array pointer and its size. You will then have an array of all the matching configurations available.
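The steps above can be sketched as a count-then-fill helper. It's written generically here so it can run without a live EGL display; the helper name and the std::function wrapper are mine, not EGL's. With real EGL, `query` would wrap eglChooseConfig bound to your display and attrib list, and T would be EGLConfig.

```cpp
#include <functional>
#include <vector>

// The two-pass pattern: first call with a null buffer to learn the count,
// then call again with a buffer of exactly that size.
template <typename T>
std::vector<T> queryAll(const std::function<bool(T*, int, int*)>& query)
{
    int count = 0;
    // Step 1: null buffer -- just retrieve the total matching count.
    if (!query(nullptr, 0, &count) || count <= 0)
        return {};
    // Step 2: allocate exactly that many and fill them in.
    std::vector<T> items(count);
    if (!query(items.data(), count, &count))
        return {};
    items.resize(count);
    return items;
}
```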

This will return all the configs that EGL thinks MEET OR BEAT your specified criteria. Note the OR BEAT. The eglChooseConfig spec clearly states:

When more than one EGL frame buffer configuration matches the specified attributes, a list of matching configurations is returned. The list is sorted according to the following precedence rules, which are applied in ascending order (i.e., configurations that are considered equal by a lower numbered rule are sorted by the higher numbered rule):



Special: by larger total number of color bits (for an RGB color buffer, this is the sum of EGL_RED_SIZE, EGL_GREEN_SIZE, EGL_BLUE_SIZE, and EGL_ALPHA_SIZE; for a luminance color buffer, the sum of EGL_LUMINANCE_SIZE and EGL_ALPHA_SIZE). If the requested number of bits in attrib_list is 0 or EGL_DONT_CARE for a particular color component, then the number of bits for that component is not considered.

This sort rule places configs with deeper color buffers before configs with shallower color buffers, which may be counter-intuitive.

Special: EGL_NATIVE_VISUAL_TYPE (the actual sort order is implementation-defined, depending on the meaning of native visual types).
Smaller EGL_CONFIG_ID (this is always the last sorting rule, and guarantees a unique ordering).


The emphasis is mine, but what it means is: 1) EGL will prefer a larger color buffer format than you specified; 2) you might want to choose a depth buffer of a different size depending on the hardware (some have odd sizes that might work better, i.e. 24-bit native might be better than 16-bit); and 3) there are attribs that it won't sort on at all, which you have no control over.
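One way to claw back "best for my needs" is to re-score the returned configs yourself, preferring exact matches over deeper buffers. The struct and field names below are hypothetical, just to show the idea:

```cpp
#include <cstdlib>

// Hypothetical "best for my needs" scorer: lower is better.
// Any deviation from the requested size adds penalty, so a config that
// merely "beats" the request (e.g. 32-bit depth when you asked for 16)
// loses to an exact match.
struct ConfigDesc { int red, green, blue, alpha, depth, stencil; };

int matchScore(const ConfigDesc& want, const ConfigDesc& have)
{
    return std::abs(want.red     - have.red)
         + std::abs(want.green   - have.green)
         + std::abs(want.blue    - have.blue)
         + std::abs(want.alpha   - have.alpha)
         + std::abs(want.depth   - have.depth)
         + std::abs(want.stencil - have.stencil);
}
```

Pick the config with the lowest score, breaking ties with the smaller EGL_CONFIG_ID for a stable result.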

Or you can ignore eglChooseConfig entirely and do your own sorting and selection. In this case you'd call eglGetConfigs to get the TOTAL number of configurations (for THAT EGLDisplay; yes, turtles all the way down). Then, for each config, call eglGetConfigAttrib to query each attribute you care about. Shove them all in a list and THEN sort by desirability. This post is already too long, so I'll leave that bit of code for next time.

Here's how to query and store all the configs. oglesBufferFormat is a structure that contains the attribs I'm interested in; you'll need to make your own for your needs.

// Get the number of all configs; we've already gotten the display from EGL
EGLint numConfigs;
if ( EGL_FALSE == eglGetConfigs(_eglDisplay, NULL, 0, &numConfigs) )
    return EGL_FALSE;
DebugMsg("there are %d configurations available.\n", numConfigs);

// collect information about the configs
EGLConfig *configs = new EGLConfig[numConfigs];

if ( EGL_FALSE == eglGetConfigs(_eglDisplay, configs, numConfigs, &numConfigs) )
{
    delete [] configs;
    return EGL_FALSE;
}

std::vector<oglesBufferFormat> _bufferFormats;

for ( EGLint c = 0 ; c < numConfigs ; ++c )
{
    oglesBufferFormat newFormat;
    EGLConfig config = configs[c];
    eglGetConfigAttrib( _eglDisplay, config, EGL_ALPHA_SIZE, &(newFormat._alpha_size));
    eglGetConfigAttrib( _eglDisplay, config, EGL_BIND_TO_TEXTURE_RGB, &(newFormat._bind_to_texture_rgb));
    eglGetConfigAttrib( _eglDisplay, config, EGL_BIND_TO_TEXTURE_RGBA, &(newFormat._bind_to_texture_rgba));
    eglGetConfigAttrib( _eglDisplay, config, EGL_BLUE_SIZE, &(newFormat._blue_size));
    eglGetConfigAttrib( _eglDisplay, config, EGL_BUFFER_SIZE, &(newFormat._buffer_size));
    eglGetConfigAttrib( _eglDisplay, config, EGL_CONFIG_CAVEAT, &(newFormat._config_caveat));
    eglGetConfigAttrib( _eglDisplay, config, EGL_CONFIG_ID, &(newFormat._config_id));
    eglGetConfigAttrib( _eglDisplay, config, EGL_DEPTH_SIZE, &(newFormat._depth_size));
    eglGetConfigAttrib( _eglDisplay, config, EGL_GREEN_SIZE, &(newFormat._green_size));
    eglGetConfigAttrib( _eglDisplay, config, EGL_LEVEL, &(newFormat._level));
    eglGetConfigAttrib( _eglDisplay, config, EGL_MAX_PBUFFER_WIDTH, &(newFormat._max_pbuffer_width));
    eglGetConfigAttrib( _eglDisplay, config, EGL_MAX_PBUFFER_HEIGHT, &(newFormat._max_pbuffer_height));
    eglGetConfigAttrib( _eglDisplay, config, EGL_MAX_PBUFFER_PIXELS, &(newFormat._max_pbuffer_pixels));
    eglGetConfigAttrib( _eglDisplay, config, EGL_MAX_SWAP_INTERVAL, &(newFormat._max_swap_interval));
    eglGetConfigAttrib( _eglDisplay, config, EGL_MIN_SWAP_INTERVAL, &(newFormat._min_swap_interval));
    eglGetConfigAttrib( _eglDisplay, config, EGL_NATIVE_RENDERABLE, &(newFormat._native_renderable));
    eglGetConfigAttrib( _eglDisplay, config, EGL_NATIVE_VISUAL_ID, &(newFormat._native_visual_id));
    /// etc etc etc for all those that you care about

    // majorVersion/minorVersion come from eglInitialize; these attribs arrived in EGL 1.2
    if ( majorVersion > 1 || minorVersion >= 2 )
    {
        eglGetConfigAttrib( _eglDisplay, config, EGL_ALPHA_MASK_SIZE, &(newFormat._alpha_mask_size));
        eglGetConfigAttrib( _eglDisplay, config, EGL_COLOR_BUFFER_TYPE, &(newFormat._color_buffer_type));
        eglGetConfigAttrib( _eglDisplay, config, EGL_LUMINANCE_SIZE, &(newFormat._luminance_size));
        eglGetConfigAttrib( _eglDisplay, config, EGL_RENDERABLE_TYPE, &(newFormat._renderable_type));
    }

    // ...and this one in EGL 1.3
    if ( majorVersion > 1 || minorVersion >= 3 )
        eglGetConfigAttrib( _eglDisplay, config, EGL_CONFORMANT, &(newFormat._conformant));

    _bufferFormats.push_back(newFormat);
}

delete [] configs;
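I'm saving the full selection code for next time, but here's a minimal sketch of the deferred sorting step. The struct stands in for oglesBufferFormat, and the criteria are just an example: prefer the depth size closest to what we want, then the smallest total buffer size (no wasted bandwidth), then the lowest config id for a stable ordering.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Stand-in for oglesBufferFormat with the fields the example sort needs.
struct Format { int depth_size, buffer_size, config_id; };

void sortByDesirability(std::vector<Format>& fmts, int wantedDepth)
{
    std::sort(fmts.begin(), fmts.end(),
        [wantedDepth](const Format& a, const Format& b) {
            // closest depth match first...
            int da = std::abs(a.depth_size - wantedDepth);
            int db = std::abs(b.depth_size - wantedDepth);
            if (da != db) return da < db;
            // ...then the leanest buffer...
            if (a.buffer_size != b.buffer_size) return a.buffer_size < b.buffer_size;
            // ...then config id to guarantee a unique ordering
            return a.config_id < b.config_id;
        });
}
```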
Posted in OpenGL

OpenGL-ES 3.0 surpasses 16% of Android market and is accelerating

The Android dashboard continues to show that OpenGL-ES 3.0 adoption is marching on. In fact, the trend over the last few months has been one of accelerating adoption. We're on track for a third of all Android devices to be OpenGL-ES 3.0 capable by year's end, and we haven't even seen the newest hardware due out in a few months.


Posted in Android, Hardware, OpenGL

Oculus Rift Musings: Part 2 – the Facebook Acquisition

I’m excited to announce that we’ve agreed to acquire Oculus VR, the leader in virtual reality technology.

Mark Zuckerberg March 25, 2014

And that was the 2-billion-dollar "cha-ching" heard 'round the world. Now there are two extremes this could spin off to:

Pessimistic View: Zuck attempts to create FBReality. The VR world is rife with "Friends", "Likes", and plenty of pointless posting about stuff I don't care about. It can be annoying now, but when a "FBriend" posts a 3D VR video of their dog trying to lick peanut butter off the roof of its mouth, and it's plopped in front of my 3D wall, and it NEVER ENDS, then I lose my faith in humanity. It's a big FU to the Kickstarter backers while the OVR founders and talent go off and buy personal islands. VR dies a second time. Grrrr.

Optimistic View: And I'm being pretty optimistic here. $2B in the bank makes it pretty easy to kick back and yell "Miller Time". But let's play anyway. Palmer et al. have said that having FB behind them gives them the weight to dictate what the actual consumer hardware will look like, because they suddenly become a tier-1 IHV and don't have to rely on scraps. OK, I buy that. VR is hard to get right; just read Carmack's, Abrash's, or Forsyth's posts. They have solved problems that haven't been addressed in nearly 30 years. I've played with DK2, and the specs for the consumer headset keep getting better and better. The hardware *will* be killer and *will* be able to win large segments of the population over to VR (assuming that they actually deliver).

They are beyond solving the gross problems now and are moving on to the more subtle ones. The tech is viable; the $2B was a wake-up call that this is a serious, serious undertaking, and it has woken up the competition to the idea that this indeed might be the next big thing. It's also given them the opportunity to poach talent (more on that later), so while they have gone somewhat silent, if I received a windfall for a project, I'd scrap the initial plans and re-scope it too. You really don't want consumer acceptance to be as slow as the initial console market was to get going (but now they could probably survive a slow ramp-up to acceptance). So yeah, if this is the direction they were thinking of, and if FB lets them get on with what they were doing, then this just might work out. It might.

Please Dear God don’t let them screw it up.

Posted in Augmented Reality

OpenGL-ES 3.0 Proliferation

The Android dashboard provides some useful information about the Android environment in the wild. One interesting thing is that they provide values for the supported OpenGL-ES versions. Since OpenGL-ES 3.0 started showing up after KitKat was released, its numbers have been growing quite steadily: from 0% in October 2013 to over 10% in six months. Considering that 3.0-capable devices are not in the majority yet, this is still pretty impressive. (Note: GL-ES 1.x is < 1%.)




Posted in Android, Graphics Hardware, OpenGL

I’ll be teaching Modern OpenGL-ES 3.0 & 3.1 Programming @ AnDevCon in Boston

I will be speaking at AnDevCon in Boston, MA this coming May. My half-day tutorial starts Tuesday, May 22nd at 9am. I'll be talking about Modern OpenGL-ES, basically covering all the new stuff in 3.0 AND 3.1. The tutorial will focus on best practices for OpenGL-ES, the new features you should know about, and how to optimize for best performance. I'll also be covering different ways to start programming GLES 3.0 right now! Topics will include:

  • Compute Shaders
  • Shader Objects
  • New Shader Language Features
  • Indirect Draw Commands
  • How to use EGL
  • All the ways to draw stuff – ease of use vs. performance
  • Debugging tools and techniques

I’ll be co-presenting the tutorial with Rudy Cazabon (@RudyCazabon) owner of the Synthetic Aperture blog.  If you’d like to save some money on registration, you can use “Fosner” as a discount code. Follow the updates on Twitter @AnDevCon

Posted in Android, Conferences, OpenGL

Oculus Rift Musings: Part 1 – Oculus at GDC2014

I've played with the initial SDK and, unlike some folks, I don't get motion sickness, and the lag was barely noticeable. It was a solid first effort, and I was looking forward to trying out Crystal Cove, Oculus VR's 2nd-gen hardware, which I talk about here:

At GDC 2014 I got a chance to try it out.

On a recommendation from a friend who works there, I played Couch Knights, a simple Unreal Engine game about two tiny cartoon knights running around a living room. Using an Xbox controller, I was quickly able to grasp movement and attack, and my "opponent" and I quickly set about destroying the living room before turning on each other and ending the experience. It was a great deal of fun.


It was also an order of magnitude better than the original SDK experience, and adding in both the changes to the visual display (documented here) and the HMD tracking puts you into an incredibly immersive experience.

The consumer version will have even better resolution than Crystal Cove, but with the 2nd-gen SDK (DK2) not even being available until "summer", devs will need at least some lead time to make viable games for the consumer hardware. That said, I've already ordered my DK2. I've done a lot of 3D real-time interactive development, either data visualization or 3D games; both require low latency and high interactivity. I see the hardware that's been developed in the last 10 years allowing for some really compelling applications and demos. The one thing I see as crucial to the success of any dev platform is the ability to let creative independent devs take it in a new, unforeseen-by-the-original-creators direction; by this I mean providing a dev platform that is as unrestricted and open as possible. UDK, Unity, Cocos2d, Android, iOS: all provide a dev platform so that folks who are creative and have an idea can run with it and present it to the masses. Twitter, Instagram, and Waze are all platforms that take advantage of the mass of connectivity we've been blessed with through the Internet, and they allow folks to connect up and share information in a way never before possible. I find it extremely compelling, and I'm enough of a sci-fi geek to have a good idea where the possibilities are. This is awesome. Let the Metaverse begin!




Posted in Augmented Reality

OpenGL-ES 3.1 Spec Released at GDC

The Khronos Group made a formal announcement at GDC 2014 of the OpenGL-ES 3.1 API specification. Since the 3.0 spec was released in 2012, we've been waiting for the hardware to catch up to the API. As hardware started to become available, it became apparent that it was much more capable than what was required for 3.0, and was in fact anticipating some of the OpenGL 4.x feature set. Thus, about a year and a half after the 3.0 API spec was released, we're not only seeing the 3.1 spec released, but we actually have hardware announcements. OpenGL-ES 3.1 is a superset of 3.0 and is fully compatible with 3.0 and 2.0 programs. The major improvements are:

Compute Shaders: Compute shaders are the big addition to 3.1. They bring the ability to use OpenGL applications for general-purpose GPU computing. Where you might normally need to use the CPU to do calculations, you can now take advantage of the massively parallel capabilities of a GPU to offload them onto a faster and more efficient computational engine: physics calculations, AI, post-processing effects, ambient occlusion, photographic filtering effects, and so on. This alone brings the power of the desktop graphics API to the mobile space.

Shader Objects: It's now possible to mix and match vertex and fragment shaders, and also to use pre-compiled shaders, albeit you'll still need to compile the shaders the first time and store the resulting binaries. This makes program loading much, much faster by eliminating the compilation and linking steps for the shaders.

An Updated Shader Language: As with the 3.0 spec, the 3.1 spec continues to add some features previously found only on desktop OpenGL and makes it much easier to support more efficient and advanced shader usage.

Indirect Draw Commands:  The ability to submit draw commands from objects in GPU memory rather than have the CPU kick off drawing helps make the pipeline more efficient. Combine this with compute shaders and you can have the GPU computing, updating, and rendering part of the scene itself without any intervention by the CPU.
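The command that glDrawArraysIndirect reads from GPU memory has a fixed layout in the ES 3.1 spec. Filling it from the CPU looks like this (the interesting case is having a compute shader write it instead):

```cpp
#include <cstdint>

// The 16-byte command glDrawArraysIndirect reads from the buffer bound to
// GL_DRAW_INDIRECT_BUFFER; the field layout is fixed by the ES 3.1 spec.
struct DrawArraysIndirectCommand {
    uint32_t count;          // vertices per instance
    uint32_t instanceCount;  // number of instances to draw
    uint32_t first;          // first vertex to read
    uint32_t reserved;       // must be zero in ES 3.1
};

// e.g. draw 100 instances of a 36-vertex cube, no CPU work at draw time
DrawArraysIndirectCommand cmd = { 36, 100, 0, 0 };
```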

Enhanced Texture Functionality: Some features from desktop have made it over as well, including multisample textures, stencil textures, and texture gather.

There has also been much work on a set of conformance tests, as well as a standardized shader compiler front end, so that individual vendors' drivers can now be tested against the "standard" compiler for conformance. This should help in those situations where some vendors have implemented (or read) the spec differently.

So what's left out? Tessellation and geometry shaders are the two biggest features. Those vendors that are moving their desktop hardware to mobile (Nvidia and Intel) will probably ship drivers with extensions for their hardware. Intel is showing off their "PixelSync" extension (which was in their DX drivers), which allows order-independent transparency effects, among others.

Posted in OpenGL

OpenGL @ GDC

There are a bunch of happenings @ GDC regarding OpenGL this year.

The Khronos meetings – OpenGL, OpenGL-ES, & WebGL

Meeting room #262 is located on the West Mezzanine level of the Moscone convention center, just down from the South Lobby and above Halls ABC. Attendees must have a GDC conference or exhibitor pass to attend.

Wednesday, March 19, 2014

OpenGL-ES  – 5:00 PM to 6

OpenGL – 6:00 PM to

Thursday, March 20, 2014

WebGL – 5:00 PM to


Tech Talks, Sessions, Courses and Papers at GDC

Massively Parallel AI on GPGPUs with OpenCL or C++

Mon March 17
1:45 PM
Alex Champandard, Andrew Richards (Codeplay)

Avoiding Catastrophic Performance Loss: Detecting CPU-GPU Sync Points

Wed March 19
2:00 PM
John McDonald (NVIDIA)

OpenGL ES 3.0 and Beyond: How To Deliver Desktop Graphics on Mobile Platforms

(Presented by Intel Corp)

Wed March 19
2:00 PM
Chris Kirkpatrick (Intel Corp), Jon Kennedy (Intel Corp)

Getting the Most Out of OpenGL ES

(Presented by ARM)

Wed March 19
3:30 PM
Dave Shreiner (ARM), Tom Olson (ARM), Daniele Di Donato (ARM)

Approaching Zero Driver Overhead in OpenGL

(Presented by NVIDIA)

Thu March 20
1:00 PM
Cass Everitt (NVIDIA), John McDonald (NVIDIA), Graham Sellers (AMD), Tim Foley (Intel)

Bringing Unreal Engine 4 to OpenGL: Enabling High-End Visuals from PC to Mobile

(Presented by NVIDIA)

Thu March 20
2:30 PM
Evan Hart (NVIDIA), Mathias Schott (NVIDIA), Nick Penwarden (Epic Games)
Posted in Conferences, OpenGL