The more time goes by, the more I think that Intel wants back into the graphics market. Graphics are the most obvious pathway to beefier consumer machines: the easier it is to create cool graphics that consumers find compelling (like in games), the better the market for Intel to upgrade everyone. So what does this have to do with Intel buying the Irish company Havok (whose physics engine is used in Half-Life 2, BioShock, Stranglehold, The Elder Scrolls IV: Oblivion, Crackdown, Lost Planet: Extreme Condition, MotorStorm, Halo 3, etc.), you might ask? According to Renee James, VP of Intel’s Software and Solutions Group: “Havok is a proven leader in physics technology for gaming and digital content, and will become a key element of Intel’s visual computing and graphics efforts. Havok will operate its business as usual, which will allow them to continue developing products that are offered across all platforms in the industry.”

Intel has been pushing developers to make more use of multi-threaded architectures. Just as game developers now target eye-candy to the capability of the graphics card, Intel is pushing them to do more with beefier processors to enhance the user experience without affecting gameplay. This means things like adding more detailed models, or more particles in a particle system, if the user’s machine can handle it. That’s a tall order, but one of the things that’ll make it easier is a cutting-edge physics engine that’s highly tuned to take advantage of multi-core systems.

Since the deal is worth an estimated US$110 million, I’d guess Intel is really serious about it. That’s cool for game developers and for the eventual physics-enabled consumer apps. But a physics engine is the second thing I would have bought if I were Intel – I expect to see some other acquisitions in the near future.
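To make the “more detail if the machine can handle it” idea concrete, here’s a tiny sketch in Python. Everything in it – the `particle_budget` helper, its thresholds, the per-core scaling – is my own invention for illustration, not anything Intel or Havok actually ships:

```python
import os

def particle_budget(cores, base=256, per_core=512, cap=8192):
    """Scale a cosmetic particle count with available CPU cores.

    Extra cores buy more eye-candy; gameplay logic never depends on
    the result, so weaker machines just see a sparser effect.
    The numbers are made up for illustration.
    """
    extra = max(cores - 1, 0)           # cores beyond the first buy detail
    return min(base + per_core * extra, cap)

# Pick a budget for whatever machine we happen to be running on.
budget = particle_budget(os.cpu_count() or 1)
```

The key design point is that the budget only ever affects cosmetics, so a single-core box and an eight-core box play the same game.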
3Dlabs has released the front-end OpenGL Shading Language compiler as an open source project (here). This is for 3Dlabs’ Wildcat VP line of cards (which will also require the OpenGL 2.0 drivers). This is just the front-end compiler – it reduces the shader to an intermediate representation. A target-specific back-end compiler is required to generate machine code for a specific graphics card. It’s interesting to note that despite the ATI-3Dlabs OpenGL lovefest at last year’s Siggraph, we see no RenderMonkey stuff from 3Dlabs, nor any OpenGL 2.0 stuff from ATI.
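That front-end/back-end split is the classic compiler pipeline. Here’s a toy illustration of the shape of it – the mini “language”, the IR tuples, and the mnemonics are all invented, and have nothing to do with 3Dlabs’ actual compiler:

```python
def front_end(src):
    """Front end: reduce 'dst = lhs + rhs' to target-neutral IR tuples."""
    dst, expr = (s.strip() for s in src.split("="))
    lhs, rhs = (s.strip() for s in expr.split("+"))
    return [("load", lhs), ("load", rhs), ("add",), ("store", dst)]

def back_end(ir, target):
    """Back end: lower the same IR to per-target 'machine code' strings."""
    return [" ".join((f"{target}.{op[0]}",) + op[1:]) for op in ir]

ir = front_end("x = a + b")    # one front end, one IR...
vp_code = back_end(ir, "vp")   # ...and a back end per chip family
```

The point of the split is exactly what the release implies: the open-sourced front end does the language work once, and each card vendor only has to write the lowering step.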
When fetching a graphics device from DirectX, it’s possible to request a software renderer, though until recently no such thing existed. If you didn’t know, graphics god Mike Abrash left Microsoft and has been quietly working on a software renderer – called Pixomatic – at RAD Game Tools; it’s been included in UT2003 for those folks who are running on older or unstable hardware. Apparently a 1GHz CPU will run the game, though with some things like shadows and some dynamic lighting turned off. Still, it’s nice for those of us who have to support older hardware (typically done via an OpenGL layer) but would like to use all the latest DirectX APIs. Find out more info here at the Unreal web site.
I have to admit that I wasn’t particularly enthralled when NVIDIA announced Cg. It’s bad enough that we’ve got DirectX 9’s HLSL and OpenGL 2.0’s GSLang. As someone who has programmed DirectX, OpenGL, Glide and CIF (ATI’s proprietary 3D API, RIP), I really hate competing graphics languages. I feel it’s a waste of time to reinvent the wheel in a different flavor, and I’d really rather be creating something new than porting code. On the other hand, I ran across this article by Colin Stoner that hits on some of the more recent uneasiness. Colin does raise some interesting points – even though Cg is open source and theoretically could output for ATI chips, ATI doesn’t give a shit as they are on the HLSL/GSLang wagon, and NVIDIA is getting games to brand themselves with the NVIDIA logo – but he’s really complaining about something that, at least in my view, isn’t a bad direction for the PC graphics world in general.
I got a fair amount of (good-natured but strident) flak from NVIDIA for my “short sighted” viewpoint. Yeah, well, sorry. Cg is so close to HLSL that I understood the need to get something out there while HLSL shaped up. But hey – HLSL’s here now, so why hasn’t Cg been merged into it? The recent Cg book by Fernando and Kilgard (a really nice book, by the way – see the Gamasutra article) is being followed by another one, which seems strange to me. The Cg book is selling well, but it just came out – it’s a bit soon to be following up with another. This points to Cg being around for a while, with NVIDIA pushing some not-insignificant resources at it. Let’s face it, NVIDIA ain’t stupid; they’ve got some of the smartest engineers in the business, even though I don’t care much for some of their marketing practices. What other reason could Cg have to exist when HLSL/GSLang could easily fill the role? It’s got to be because NVIDIA owns Cg. It owns a high-level rendering language. What can you do with that? It’s only an advantage if you can do something with it. NVIDIA quietly picked up some IP last year that really could make the CineFX engine a real cinematic experience. This is all sheer speculation on my part, but it’s what I’d do if I had those resources. NVIDIA’s market share is still twice as big as ATI’s; a nice end run, executed correctly, could cut ATI off at the knees and put to rest any doubt about who’ll supply the chips for Xbox2.
So how can this be a good thing? DX9 is going to be here for a while, ATI is settling down to a slower R&D cycle, and nothing new is on the horizon. Ho hum – it looks like smooth, straight sailing. Unless someone decides to rock the boat.
Well, in an about-face from previous years, there was lots going on at this year’s GDC. Here are some highlights.
NVIDIA Announces new cards – The GeForceFX 5200 and 5600. Both are DirectX 9 cards. The 5200 is expected to sell starting at $79 MSRP, making NVIDIA the first card company with DirectX 9-capable cards in its entire front line.
ATI Announces new cards – They announced the 9200 (DirectX 8.1), 9600, and 9800 (DirectX 9) cards. Also under the radar was the incorporation of the F-Buffer (fragment-stream buffer) in SmartShader 2.1, which is supposed to allow shaders of any length without resorting to multipass rendering. These cards complement the 9700.
3DLabs – not to be left behind – announced the WildCat VP990 Pro.
ATI and 3DLabs announced they are working jointly on our fav shader tool – RenderMonkey! In a not-so-subtle swipe at Cg, ATI will continue to work on the framework, and both will work on plug-ins for HLSL and OpenGL’s shader language, GLSL. In addition, they say they’ll work closely with 3rd-party vendors to incorporate RenderMonkey functionality into tools – so expect to see RM plug-ins for Maya, 3DSMax, etc. in the near future. Press announcement.
Microsoft withdraws from OpenGL ARB – citing OpenGL’s failure to keep pace with graphics features, Microsoft says that it’ll focus on DirectX.
DirectX 9.1 is it – for now. According to Microsoft’s Dean Lester, the next major release of DirectX isn’t scheduled until the release of the next OS (codenamed Longhorn), which is now due out sometime in (survey says) 2005.
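The F-Buffer that ATI slipped into its announcement above deserves a sketch. The trick is that instead of re-rendering the geometry for every pass of an over-long shader (classic multipass), per-fragment intermediate values are carried between shader chunks in a buffer. A toy version in Python – the pass limit, the “ops”, and `run_long_shader` are all invented for illustration, not ATI’s actual hardware scheme:

```python
MAX_OPS_PER_PASS = 2   # stand-in for a hardware instruction limit

def run_long_shader(ops, fragments):
    """Run a 'shader' longer than the per-pass limit by splitting it
    into chunks and carrying intermediates in an F-buffer (one slot
    per rasterized fragment) instead of re-rendering each pass."""
    fbuffer = list(fragments)           # the F-buffer of intermediates
    for start in range(0, len(ops), MAX_OPS_PER_PASS):
        chunk = ops[start:start + MAX_OPS_PER_PASS]
        # Each pass reads the previous pass's values from the buffer.
        for i, value in enumerate(fbuffer):
            for op in chunk:
                value = op(value)
            fbuffer[i] = value
    return fbuffer

# A five-op 'shader' runs in three passes over a two-fragment buffer.
result = run_long_shader([lambda v: v + 1] * 5, [0.0, 10.0])
```

The geometry is only rasterized once; every later pass just streams the buffer, which is why shader length stops mattering.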