Is Intel just getting back into the graphics business, or are they going to change it?

It’s no great secret that Intel has been eyeing the discrete graphics market. Intel typically owns about 30-40% of the desktop graphics market, but that’s strictly integrated (and hence usually considered underpowered) graphics. ATI and Nvidia own most of the rest of the market, and anyone interested in playing 3D games wouldn’t consider using an integrated graphics solution if they could help it. Apparently the acquisition of ATI by rival AMD has prompted Intel to start talking aggressively about discrete graphics. In fact, Intel has recently started aggressively hiring engineers (both software and hardware) for their Visual Computing Group. From the recruitment copy:

Join us as we focus on strengthening our leadership in integrated and high-throughput graphics and gaming experiences by developing innovative processing products based on a many-core architecture. We’re looking for engineers, developers, and architects who share our vision and understand what can happen when serious skills and vast resources join forces.

It would seem that Intel is pushing a many-core architecture for the 2008-2009 timeframe. Given Intel’s manufacturing chops they could, if they set their mind to it, make a pretty deep impression on the discrete graphics market. By that timeframe you’re looking at a 10x to 20x performance boost over the current top GPU, Nvidia’s G80. Even if Intel makes a few missteps and produces something underpowered compared to the best from ATI or Nvidia, they would still probably be competitive on price alone.

Or perhaps they could be doing something to surprise us all? Intel recently did something pretty smart: they hired Daniel Pohl, a recent Erlangen University graduate. Daniel coauthored a paper – Realtime ray tracing for current and future games – in which the case is made that the traditional hardware rendering pipeline – i.e. objects built up independently of each other, with depth created through the use of a z-buffer, and all object-object visual interactions (shadows, reflections, etc.) having to be added on – has just about reached the end of its lifetime. Daniel is pushing raytracing instead. Raytracing lets you build up complex scenes in which all the lighting, shadowing, transparency, reflection, caustics, etc. are handled by the ray tracer itself. Granted, the framerates Daniel gets on a custom raytracer board were about 10% of current consumer-level boards, but raytracing will be the next big step in graphics architecture, not a kludge like the doomed Talisman architecture. Raytracing really is the way to render scenes. Maybe Intel will be the first?
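To see why shadows and reflections come "for free" in a ray tracer rather than being bolted on, consider that every effect reduces to the same primitive operation: cast a ray, test for intersections. Here's a minimal sketch in Python (illustrative only; the function names and the spheres-only scene are my own simplification, not anything from Pohl's paper):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is normalized (so the quadratic's 'a' term is 1).
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

def in_shadow(point, light_pos, spheres):
    """Shadows fall out of the same primitive: trace a ray toward the
    light and see whether any object blocks it before the light."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(v * v for v in to_light))
    d = tuple(v / dist for v in to_light)
    # Nudge the origin off the surface to avoid self-intersection.
    start = tuple(p + 1e-4 * v for p, v in zip(point, d))
    return any(
        (t := ray_sphere_hit(start, d, c, r)) is not None and t < dist
        for c, r in spheres
    )
```

A rasterizer needs a separate technique (shadow maps, stencil volumes) for the same answer; here the shadow test is just a second call into the intersection routine, which is the architectural simplicity Pohl's paper is arguing for.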
