Way back when I was writing my OpenGL book, I had the good fortune to run across some stupendous folks at Microsoft who were forthright and open and very, very helpful to me, answering questions, explaining implementation details, and generally reviewing a lot of my work. One of these guys was Otto Berkes. Later the API wars would put him and most of the other OpenGL team onto the DirectX team (which is one reason it suddenly jumped in quality between releases), and eventually he went on with Seamus Blackley, Kevin Bachus, and Ted Hase to be the core Xbox team. He's the last one of them to leave Microsoft, and I wish him all the best.
Back in 2008 I took a job with Intel. Intel was starting to ramp up its Larrabee effort and was looking to hire some knowledgeable graphics engineers who could also talk to game developers about *other* things like threading, performance, and optimization. This kept me pretty busy, with not enough time to talk about graphics or optimization, and there was no question of talking about Larrabee, since that was not my group. Unfortunately Larrabee wasn't ready in time, so that aspect of my job, while quite a lot of fun, won't be ready for public consumption for a while. On the other hand, the other aspects of my job were chugging along, and I get to do conference talks on them as well, so expect to see some information on multithreading using tasks, using performance tools, programming on Intel's integrated graphics cores (which really don't suck anymore), and something new for me, data parallelism through SSE/AVX programming. Since this stuff isn't as secret as Larrabee, I'm reactivating and reworking my old website into a blog format, since that's what it was way back before anything was called a blog.
It turns out that Microsoft did something very interesting in Windows 7, something incredibly cool in fact! Windows 7 will contain something called WARP10, which stands for Windows Advanced Rasterization Platform. This is, in essence, a software rasterizer for DirectX. WARP10 is a high-speed, fully conformant software rasterizer that supports DX10+. WARP allows 3D rendering in a variety of situations where hardware implementations are unavailable, including:
- When the user does not have any Direct3D capable hardware or driver, or the card or driver crashes
- When running as a service or in a server environment
Microsoft lists a bunch of other scenarios, but basically WARP10 will run anytime you do not have a working video card or driver. (Assuming you program for it, that is, meaning you are targeting feature set DX 10.1 or less.) What this means is that application developers are no longer constrained to using 3-D effects only when they find compatible hardware. This is going to open up a whole new realm of casual games that don't have extreme hardware requirements, as well as applications that could be improved by rendering a static 3-D scene. When you combine this with the new Direct3D hardware requirements, applications that are not real-time 3-D will be able to first try the hardware driver and, if that's not available, fall back to a WARP10 device.
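In code, the fallback amounts to changing the driver type and trying again. Here's a minimal sketch against the D3D10.1 API (Windows-only, so treat it as illustrative; error handling is trimmed and the wrapper function name is mine):

```cpp
#include <d3d10_1.h>

// Try the hardware driver first; if no capable card or driver is present,
// fall back to the WARP software rasterizer.
ID3D10Device1* create_device_with_warp_fallback() {
    ID3D10Device1* device = nullptr;
    HRESULT hr = D3D10CreateDevice1(
        nullptr, D3D10_DRIVER_TYPE_HARDWARE, nullptr, 0,
        D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &device);
    if (FAILED(hr)) {
        // No usable hardware: WARP gives a fully conformant software device.
        hr = D3D10CreateDevice1(
            nullptr, D3D10_DRIVER_TYPE_WARP, nullptr, 0,
            D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &device);
    }
    return FAILED(hr) ? nullptr : device;
}
```

The rest of your rendering code doesn't change; WARP is just another device as far as the API is concerned.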
This is an incredibly great thing that Microsoft has done! It brings us back to the halcyon days of software rasterizers, circa 1999, when graphics cards were measured by their bit-blt rates. Things were simpler in those days, as you didn't need to check for shader model support, or for a particular pixel buffer format, or for a CAPS bit.
I see the biggest use of this coming in future years, since WARP is multithreaded and can effectively use higher-end hardware – in particular, multi-core CPUs and SSE instructions – to maximize throughput. Over the past decade we've been maxing out GPUs while the power of CPUs has continued to grow steadily, and now that pretty much all CPUs are multicore, I continue to see game developers hitting a GPU limit while failing to take advantage of multicore CPU systems. I rail against this short-sightedness: while physics and AI can be programmed on a GPU, you really can't do anything fancy very easily in a shader; it's just not made for it. WARP attacks the problem from the other end – offloading some of the GPU's tasks onto the CPU. There's no reason you can't have two rendering pipelines, one for the GPU and one for the CPU. This lets you offload simple things, like occlusion queries or shadow map generation, onto the CPU, giving the GPU more time to actually render stuff.
Visit the Microsoft WARP web page here.
The more time goes by, the more I really think that Intel wants back into the graphics market. Graphics are the most obvious pathway to beefier machines for the consumer, and the easier it is to create cool graphics that consumers find compelling (like in games), the better the market is for Intel to upgrade everyone. So what does this have to do with Intel buying the Irish company Havok (whose physics engine is used in Half Life 2, BioShock, Stranglehold, The Elder Scrolls IV: Oblivion, Crackdown, Lost Planet: Extreme Condition, MotorStorm, Halo 3, etc.), you might ask? According to Renee James, VP of Intel's Software and Solutions Group: “Havok is a proven leader in physics technology for gaming and digital content, and will become a key element of Intel’s visual computing and graphics efforts. Havok will operate its business as usual, which will allow them to continue developing products that are offered across all platforms in the industry.” Intel has been pushing developers to make more use of multi-threaded architectures. Just as game developers now target game eye-candy to the ability of the graphics card, Intel is pushing them to do more with beefier processors to enhance the user experience without affecting gameplay. This means things like adding more detailed models, or more particles in a particle system, if the user's machine can handle it. That's a tall order, but one of the things that'll make it easier is providing a cutting-edge physics engine that's highly tuned to take advantage of multi-core systems. Since the deal is worth an estimated US$110 million, I'd guess Intel is really serious about it. That's cool for all the game developers and the eventual consumer apps that will be physics-enabled. But a physics engine is the 2nd thing I would have bought if I were Intel. I expect to see some other acquisitions in the near future.
At the GDC, graphics chip maker NVIDIA announced they are releasing a bunch of updated and new tools. The tool upgrades are FX Composer 2, PerfHUD 5, and ShaderPerf 2. They are also releasing a new GPU-accelerated texture tool, plus a Shader Library. The most interesting toolkit is the DX10 SDK. Targeted towards the GeForce 8 series of GPUs (the only GPUs that can run D3D10 so far), it's a collection of samples for both OpenGL and DirectX, executables and source, that demonstrate and showcase DX10 features. The installer looks and acts like the Microsoft DX Sampler.
The DirectX collection makes up the bulk of the samples, while the OpenGL side is a bit thin. The samples include Rain, Smoke, Fur, Shadows, cloth simulation, Render Target usage, etc. The Direct3D SDK is 256MB while the OpenGL SDK is 45MB.
To compile the DX samples you'll need Microsoft Visual Studio 2005 and the Feb. 2007 DirectX SDK installed (which you can get here).
If you actually want to run the code you'll need a DirectX 10 video card – which currently means an NVIDIA GeForce 8. There are videos of the programs, so even if you don't have a DX10 video card you can still see the programs running.
The OpenGL ARB has formally released the OpenGL 2.0 specification. This not only includes things like the improved OpenGL Shading Language APIs, but a bunch of other new features as well. The highlights are:
This brings OpenGL up to speed with DirectX 9, and since most of the folks on the ARB are video card manufacturers who have to support DirectX as well, it's in their interest to have as few differences as possible. Given OpenGL's extension mechanism (which is rumored to be considered as a possible feature for DirectX 10), and the rapid development of the last few OpenGL versions, it's obvious that the hardware folks don't want Microsoft to dictate the way that 3D graphics are going to look in the future.
So…should you go for OpenGL or DirectX? Just listen to John Carmack, from his Feb. 2003 .plan:
Reasonable arguments can be made for and against the OpenGL or Direct-X style of API evolution. With vendor extensions, you get immediate access to new functionality, but then there is often a period of squabbling about exact feature support from different vendors before an industry standard settles down. With central planning, you can have “phasing problems” between hardware and software releases, and there is a real danger of bad decisions hampering the entire industry, but enforced commonality does make life easier for developers. Trying to keep boneheaded-ideas-that-will-haunt-us-for-years out of Direct-X is the primary reason I have been attending the Windows Graphics Summit for the past three years, even though I still code for OpenGL.
DirectX Next – Oh Pleeeze!
The slides from Microsoft’s Meltdown 2003 are available here. I’ve not been a fan of DirectX’s piecemeal distribution of shader technology – not so much for the hardware folks as for the consumers. When I’d chat with the folks who write shader code for a living (outside the Evil Empire), I’d get hints as to the stuff “for the next release”. This was particularly annoying, as I was writing a book targeting this audience at the time, and you’d think Microsoft, at the very least, would want to publicize this stuff. The hardware folks and the top-tier game writers were all in the know. They’d let me know, generally, that there was more to be had. Even when they did come out and state what was going on, I, under NDA, couldn’t disclose what I knew. It was very frustrating. For all intents and purposes, Microsoft does indeed seem to want to disseminate this info. Unfortunately, they haven’t spoken with a single clear voice since Phil Taylor left for the warm arms of ATI. Sigh; instead of having someone spoon-feed this out to the public, you’ve now got to glean this stuff yourself. Let’s look at the recent Meltdown slides, for example.
What’s new with DX?
NVIDIA is now one of nine permanent board members, the others being 3Dlabs, ATI, Evans & Sutherland, Hewlett-Packard, IBM, Intel, and SGI. NVIDIA had been a term member. Apple, Dell Computer, Matrox, and Sun are term members. A term member still has a vote, but is on a one-year membership. This is a bit like the UN, as the permanent members vote (in closed sessions) on whom to admit as a permanent or term member. It comes as a bit of a surprise that NVIDIA wasn’t a permanent member before this. It also brings to mind “why now” questions. There was some contention between the ARB and NVIDIA over Cg and the GL Shading Language, with NVIDIA pushing an NVIDIA-centric Cg feature set and some others pushing anything but Cg.
Nonvoting participants include (as of June 2002) Alt. software, Crytek GmbH, Discreet, Empire Interactive, Ensemble Studios, Epic Games, GLSetup (which tells you this is an old list), id Software, Imagination Technologies (PowerVR), Intelligraphics, Micron, NEC, Obsession Development, Quantum3D, RAD Game Tools, Raven Software, S3/Diamond Multimedia, SiS, Spinor GmbH, Tungsten Graphics, University of Central Florida, Verant Interactive, and Xi Graphics. Microsoft quit the ARB to focus on DirectX issues.