Sony to announce VR HMD at GDC?

The rumor is that Sony might announce its Oculus competitor at GDC – speculation fueled by the choice of presenters (Marks and Mikhailov), whose background in new hardware might signal the announcement of a commercial implementation of the HMD prototypes Sony has been showing around. The Sony device is probably going to be targeted at the PS4, so let’s hope that they 1) price it competitively and 2) publish a multi-platform SDK. I’ve seen many bits of excellent tech die because the parent company wanted to “guide” development, when in reality you want the thing to take off like one-dollar beer on habanero salsa night. The official entrance of a commercially successful company such as Sony into the VR market seems to be further proof that the market is becoming a viable one.

Here’s the GDC information:

Driving the Future of Innovation at Sony Computer Entertainment

Location: Room 130, North Hall
Date: Tuesday, March 18
Time: 5:45pm-6:45pm
 
Join Sony Computer Entertainment for a presentation on innovation at PlayStation® and the future of gaming.

 

Posted in Augmented Reality, Hardware, Technology

VOGL – Valve’s new OpenGL Debugger

At Valve’s Steam Dev Days there was a presentation on their new OpenGL tracer/debugger (VOGL). Debugging OpenGL applications has always been way too hard. For a while you could debug graphics using DX tooling on a Windows machine, but that’s a hard thing to maintain, and eventually it got dropped. I applaud Valve for making the effort to fix this situation. Somebody needs to step in and fix it if OpenGL is going to get the acceptance and love from the game dev community it deserves.

Things I like about Valve’s efforts:

  • GL support for 3.x with 4.x planned
  • Extension support
  • Driver benchmarking
  • Open Source
  • Extensive support for trace recording and playback.

Things I’m not so thrilled about:

  • Support for way-old crap (yes, I understand why, but really?) like OpenGL 1.x, glBegin/glEnd immediate mode, and the fixed-function pipeline – blech
  • ASM shader support

And stuff I’m (almost) devastated over:

  • Only Linux based (yes, but…)
  • No Android support
  • The focus on existing (read *old*) Valve games and not *new* games. Really – they are already written and out there. Let’s write new games/a new engine. Use Modern OpenGL.

I’m hoping that they will get enough support to correct some of these shortcomings. I know there are other companies working on similar tech, but Valve is a neutral player – with Open Source support I’m hoping they can create something that will stick around and keep marching forward as the flavors of OpenGL move on.

Posted in OpenGL

New Oculus Head Mounted Display Wows them at CES

Oculus VR marches on, wowing folks at CES who got a chance to try out the new Crystal Cove prototype: a 1080p OLED display plus newly added motion tracking, courtesy of dots embedded in the headset and monitored by an external camera, which adds the ability to track the unit in 3-space in addition to the HMD’s direction and orientation. These improvements, plus the headset’s already low latency, add to the already excellent user experience of the original SDK and seem to have eliminated some of the motion-sickness complaints about the original.

From Engadget

OculusVR has raised almost $100 million, has delivered over 50,000 SDKs, and has been attracting a lot of attention with some awesome demos, some great press, and some high-level additions to the company. Maybe VR/AR is really here? Back in the day when I was trying out VR HMDs at Siggraph, they suffered from crappy visual resolution and serious lag. I have a tough stomach, so it was more distracting than annoying, but with the current state of the tech, OVR (plus the others dog-piling onto VR) might be the tipping point for the commercialization of VR.

Posted in Augmented Reality, Hardware, Technology

CES 2014: OpenGL-ES Working Group plans new version in 2014

At CES, the Khronos OpenGL-ES Working Group announced its intent to ship a new version of OpenGL ES beyond 3.0 in 2014. The new version will be backwards compatible with OpenGL ES 2.0 and 3.0. Not much detail was given, but the features announced were:

  • Compute shaders, with atomics and image load/store capability
  • Separate shader objects
  • Indirect draw commands
  • Enhanced texturing functionality including texture gather
  • Multisample textures and stencil textures
  • Enhanced shading language functionality

The new API will not include tessellation or geometry shaders.

Posted in OpenGL

EGL – The initialization interface for OpenGL-ES

EGL is the glue between OpenGL and the particular hardware and operating system your program is currently running on. OpenGL itself is agnostic to the underlying operating system’s windowing system; EGL is the interface between Khronos rendering APIs (like OpenGL-ES and OpenVG) and the native platform windowing system. EGL wraps graphics context management, surface/buffer binding, and rendering synchronization, and hides the underlying OS-specific calls behind EGL wrappers. In other words, EGL handles the fact that OpenGL needs to be able to communicate with and get resources from the native operating system it’s running on – basically the state, context, and buffers that both the OS and OpenGL need in order to work together. Specifically, EGL is a wrapper over the following subsystems:

  • WGL – Windows GL – the Windows-OpenGL interface (pronounced wiggle)
  • CGL – the Mac OS X-OpenGL interface (the AGL layer sits on top of CGL)
  • GLX – the equivalent X11-OpenGL interface

EGL is a convenience, as there’s nothing preventing you from calling the underlying OS-OGL interface layer directly. In general you won’t have to, since EGL provides the overlapping subset of functionality shared by all of the OS-OGL interface layers. EGL not only provides a convenient binding between operating system resources and the OpenGL subsystem, it also provides the hooks that let you tell the operating system when you require something, such as:

  1. Iterating, selecting, and initializing an OpenGL context
    • This can be the OGL API level, software vs. hardware rendering, etc.
  2. Requesting a surface or memory resource.
    • The OS services requests for system or video memory
  3. Iterating through the available surface formats (to pick an optimal one)
    • You can find out properties of the video card(s) from the OS – the surfaces presented will reside on the video card(s) or the software renderer interface.
  4. Selecting the desired surface format
  5. Informing the OS you are done rendering and it’s time to show the scene.
  6. Informing the OS to use a different OpenGL context
  7. Informing the OS you are done with the resources.

If you are a Windows programmer, you might be familiar with DXGI, which is the relatively newer Windows API that handles a similar function for DirectX programmers. For iOS, Apple has EAGL, which is their own flavor of EGL. Android programmers may or may not be exposed to EGL – you can always make an EGL call if you want to do something special, but if you use the NativeActivity class, the EGL calls are done for you.

The basic usage of EGL (and similar APIs) is the following:

  1. (Android) Obtain the EGL interface.
    • So you can make EGL calls
  2. Obtain a display that’s associated with an app or physical display
  3. Initialize the display
  4. Configure the display
  5. Create surfaces
    • Front, back, offscreen buffers, etc.
  6. Create a context associated with the display
    • This holds the “state” for the OpenGL calls
  7. Make the context “current”
    • This selects the active state
  8. Render with OpenGL (these are OpenGL calls, not EGL calls – the OpenGL state is held by the EGL context)
  9. Flush or swap the buffers so EGL tells the OS to display the rendered scene. Repeat rendering till done.
  10. Make the context “not current”
  11. Clean up the EGL resources

After obtaining a display, you initialize it, set the preferred configuration, and create a surface with a back buffer you can draw into.

You kick things off by getting a display connection through a call to eglGetDisplay, passing in either a native display handle or EGL_DEFAULT_DISPLAY.

    // Get a display connection - for Windows, pass in the window's DC
    EGLDisplay eglDisplay = eglGetDisplay( ::GetDC(hWnd) );
    eglInitialize( eglDisplay, NULL, NULL );

    // typical PC attrib list
    EGLint defaultPCAttribList[] = {
        // 32 bit color
        EGL_RED_SIZE,   8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE,  8,
        EGL_ALPHA_SIZE, 8,
        // at least 24 bit depth
        EGL_DEPTH_SIZE, 24,
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        // want an opengl-es 2.x conformant config
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

    EGLint numConfigs;
    EGLConfig config;

    // just grab the first matching configuration
    eglChooseConfig(eglDisplay, defaultPCAttribList,
                    &config, 1, &numConfigs);

    // create a window surface - note the native Windows window handle
    EGLSurface surface =
        eglCreateWindowSurface(eglDisplay, config, hWnd, NULL);

    // create an OpenGL-ES 2.x context
    EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context =
        eglCreateContext(eglDisplay, config, EGL_NO_CONTEXT, contextAttribs);

    // now make the context current
    eglMakeCurrent(eglDisplay, surface, surface, context);

    // the context is now bound to the surfaces, OpenGL is "live"
    // ...
    // perform your rendering, calling eglSwapBuffers(eglDisplay, surface)
    // at the end of each frame
    // ...

    // when done, unbind the context and surfaces
    eglMakeCurrent(eglDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    // terminate the connection to the display, release all resources
    eglTerminate(eglDisplay);
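
The rendering step elided in the middle of that block (steps 8 and 9 in the list above) is just a loop of OpenGL calls followed by a buffer swap. Here’s a minimal sketch – keepRunning is a hypothetical flag you’d drive from your platform’s message/event loop:

    // minimal render-loop sketch; keepRunning is a hypothetical flag
    // driven by the platform's message/event loop
    while (keepRunning) {
        // plain OpenGL calls - the state lives in the current EGL context
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene ...

        // hand the back buffer to EGL/the OS for display
        eglSwapBuffers(eglDisplay, surface);
    }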

A few comments about the configuration selection. The most important thing this code does is try to choose a set of attributes that matches your needs. It’s up to you to pick a configuration that is actually a good match for the hardware you are running on. For example, if you are running on a fairly capable GPU, you’d want to pick a configuration that supports a good, high-quality color and depth buffer (here I’m assuming that’s what your application needs). So for a PC, you’d want at least a 32 bit color buffer (8 bits for each of the RGBA values), though you can sometimes get 32 bits per channel for a 128 bit per pixel surface. Also choose your depth buffer carefully: you want a natively supported format, which will usually mean 24 or 32 bit over 16 bit for PC graphics. So the attrib list for a PC title might look like this:

    // typical attrib list for PCs or modern mobile devices
    EGLint defaultAttribList[] = {
        // at least 32 bit color
        EGL_RED_SIZE,   8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE,  8,
        EGL_ALPHA_SIZE, 8,
        // at least 24 bit depth
        EGL_DEPTH_SIZE, 24,
        // want an opengl-es 2.x conformant config
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

Mobile devices are generally slower and have smaller screens, so choose the lowest acceptable values as a starting point. For older mobile devices you might pick one of the packed 16-bit color formats:

    // typical attrib list for older phones/tablets
    EGLint defaultOlderMobileAttribList[] = {
        // at least 5-6-5 color
        EGL_RED_SIZE,   5,
        EGL_GREEN_SIZE, 6,
        EGL_BLUE_SIZE,  5,
        // at least 16 bit depth
        EGL_DEPTH_SIZE, 16,
        // want an opengl-es 2.x conformant config
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

Most of the time you will call eglChooseConfig and just grab the first configuration. Sometimes this is the wrong thing to do. You should at least take a look at what configurations are available and sort by the features that are most important to your app. You’ll typically see the color and depth values changing, and not in the order you might expect. In a future post I’ll show some code that iterates through the list of surface formats and rates them.
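
In the meantime, here’s a rough sketch of the idea using eglGetConfigs and eglGetConfigAttrib – the scoring weights below are made up purely for illustration, so tune them (and add error handling) for your app:

    // enumerate every config the display offers and score them yourself
    EGLint numConfigs = 0;
    eglGetConfigs(eglDisplay, NULL, 0, &numConfigs);   // how many are there?

    EGLConfig* configs = new EGLConfig[numConfigs];
    eglGetConfigs(eglDisplay, configs, numConfigs, &numConfigs);

    EGLConfig bestConfig = 0;   // will hold the winner (0 if nothing matched)
    int bestScore = -1;
    for (EGLint i = 0; i < numConfigs; ++i) {
        EGLint red, depth, renderable;
        eglGetConfigAttrib(eglDisplay, configs[i], EGL_RED_SIZE, &red);
        eglGetConfigAttrib(eglDisplay, configs[i], EGL_DEPTH_SIZE, &depth);
        eglGetConfigAttrib(eglDisplay, configs[i], EGL_RENDERABLE_TYPE, &renderable);

        // skip configs that can't back an OpenGL-ES 2.x context
        if (!(renderable & EGL_OPENGL_ES2_BIT))
            continue;

        // made-up weighting: favor color precision over depth precision
        int score = red * 10 + depth;
        if (score > bestScore) {
            bestScore = score;
            bestConfig = configs[i];
        }
    }
    delete [] configs;

    // bestConfig is what you'd then hand to eglCreateWindowSurface
    // and eglCreateContext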

Posted in OpenGL

Oculus Rift – may just be able to succeed, and change the world as we know it

I’ve had some fun playing with my Oculus Rift dev kit. It’s got some great features, it’s got the latency problem whipped, and it just needs a bit of work on tools and upping the resolution – it’s a great proof-of-concept, and if they can survive for another iteration or two, I think they will have something, perhaps something revolutionary.

Two folks I know have gone on to work at Oculus VR (OVR), so I know it’s building momentum. I was surprised to find that John Carmack not only was named the CTO, but has actually left id to work at OVR. They now have a Dallas office :-). John is a force in the industry, and he’s just the kind of guy VR/AR needs to put this stuff into the hands of the consumer. I played around with the older VR/AR HMDs a long time ago, and while the actual experience wasn’t that great (I don’t get motion sickness), the lag and the low resolution were the real killers of the tech back then.

The Rift has the latency pretty much whipped, and when they come out with a Gen 2/3 HMD – which will need better resolution (already here – just pick up any good smartphone) and wireless connectivity (also any smartphone) – it will be on the road to becoming a must-have product, at least for the tech savvy.

We’ve got Google Glass, Technical Illusions castAR, the Oculus Rift, Sony’s HMZ-T3W, and whatever Valve is going to show at their dev conference in January 2014, so there are enough folks piling onto the notion that AR/VR is almost here that it may finally get some traction in the consumer space. Like most cutting-edge tech, the game players will adopt it first, then everyone else will come around. If you offload the processing to a next-gen mobile platform that connects wirelessly to the HMD (via something like BLE) while getting orientation information back, then you’ve really got something: since the offload platform will replace the user’s smartphone, it’s not something extra they have to buy, just a more capable version of something they already own.

It’s going to be awesome!

Posted in Augmented Reality

Win8 Metro: Harder than it has to be….

I’ve been playing with Metro off and on for a few months now, my frustration level mounting every time I try to do something I already know how to do. Most of this, I’ve come to realize, is because of Microsoft’s attempt to morph existing languages and APIs into the walled garden of WinRT. They went and bastardized the languages that Visual Studio supports to enable the use of reference-counted resources, forced nearly every API call to be choked through a dispatch-callback mechanism, and then tried to hide it all behind syntactic sugar.

Now, I understand the reasoning behind this. The reference counting and the nonblocking API calls are designed to foster a fast and fluid user experience that won’t end up eating system resources when an application isn’t active. But by hiding the fact that some API calls result in a callback, I think they are doing a bit too much to hide the threading model from the programmer – particularly when they make it look like you are writing single-threaded code – nay, they encourage this style of obfuscation. Now, all the Basic, JavaScript, and HTML5 programmers out there may be fairly unfamiliar with multithreaded programming; heck, I find that even the C# and C++ programmers are usually fairly innocent of multithreading techniques. But the style they’ve come up with does a disservice to programmers. A lot of my work focuses on helping programmers understand how to make their programs run faster and more efficiently on a particular platform, and to understand the tradeoffs and opportunities they are presented with. Microsoft is going out of its way to hide the fact that you’re starting a background task and waiting at a completion point for its completion notification to signal, missing the chance to let the programmer do some useful work instead of (seemingly) spinning, waiting for the action to complete.

Granted, you don’t have to do it the way they show in the examples, but that presupposes you know what’s going on under the hood. And the way they currently have it set up discourages most programmers from ever discovering there’s a different and potentially more productive way of doing things.
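
To make that concrete, here’s a minimal sketch using plain PPL tasks (create_task and .then from <ppltasks.h>): kick the work off explicitly, do something useful on the calling thread while it runs, and attach the continuation yourself rather than pretending the code is single-threaded:

    #include <ppltasks.h>   // PPL tasks - ships with Visual Studio
    #include <iostream>

    using namespace concurrency;

    int main()
    {
        // explicitly kick off the background work - nothing hidden
        auto loadTask = create_task([]() -> int {
            // ... pretend this is an expensive load (file, network, etc.) ...
            return 42;
        });

        // the calling thread is free to do useful work here instead of
        // (seemingly) spinning while it waits on the result
        std::cout << "doing other work while the task runs\n";

        // attach the completion handler yourself; block only at the very end
        loadTask.then([](int result) {
            std::cout << "load finished, result = " << result << "\n";
        }).wait();

        return 0;
    }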

Posted in Windows 8

Microsoft decided not to share the finished Win 8.1 with developers… why?

Normally, as an MSDN member, I get access to the RTM (i.e. the finished version) of all Microsoft OS releases. This is how most devs get access to the OSes, tools, etc. It’s important for developers, since it’s how you typically make those last-minute tests on your software to make sure it’ll work with the retail version when it’s released.

With Win 8/Metro apps this is particularly painful because of the numerous restrictions on deploying Metro apps (an app has to be built on 8.1 to be deployed to 8.1; you can’t build on 8.0 for 8.1 unless it uses nothing new) and on putting things in the store (the public 8.0 store doesn’t work with the 8.1 OS until they turn it on).

Traditionally, the company has made new OSs available to MSDN and TechNet subscribers, as well as volume license customers. With all the changes in the new OS, and particularly given the track record of MS changing interfaces in the run-up to the RTM of Win 8.0 (where they did give early access), it seems important to assure devs that their apps will work on the day Win 8.1 is actually released. It’s bewildering that they’d suddenly restrict access like this – I can only suspect they are trying to limit criticism of 8.1 prior to its release. But, like other decisions the company has made in recent years, it doesn’t seem to have been well thought out.

Windows 8.1 has an October 18 retail launch.

Update: Sept 9 – Microsoft has relented under a lot of criticism; Windows 8.1 will be made available on MSDN and TechNet, and the company is also launching a Release Candidate of Visual Studio 2013 for developers. Thank you!

Posted in Windows 8