Talisman: The Graphics Architecture that never was

What is/was Talisman?

A Still from the Chicken Crossing Siggraph Video

Talisman was the name of a technology designed to improve 3D graphics, chiefly by reducing its memory requirements. Unveiled at Siggraph 1996, it was a new way of rendering graphics objects onto independent layers (the technical term is "chunking") and then smartly compositing those layers into a final scene, so that only the "updated" regions would be sent to the display. Sounds good, no? The reasoning behind this approach can be summed up in one sentence: RAM is expensive. So Microsoft, single-handedly, decided it would redesign the entire graphics API and make it cheap to mass-produce sub-$300 high-quality 3D graphics cards. Talisman would provide 1,344 x 1,024-pixel resolution at 75-Hz refresh rates, with 24-bit color.
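The layer-and-composite idea can be sketched roughly as follows. This is a hypothetical illustration in Python, not Talisman's actual hardware pipeline; all the names (`Layer`, `composite`, the dirty flag) are my own invention. The point is that each object lives in its own image layer, and per frame only the layers that actually changed get re-rendered and resent to the display.

```python
# Hypothetical sketch of Talisman-style layered rendering (names invented).
# Each object renders into its own image layer; per frame, only layers whose
# contents changed are re-rendered, and only those screen regions need to be
# sent to the display instead of redrawing the whole frame.

class Layer:
    def __init__(self, name, region):
        self.name = name          # object this layer holds
        self.region = region      # (x, y, w, h) placement on screen
        self.dirty = True         # needs re-rendering this frame
        self.pixels = None

    def render(self):
        # Stand-in for full 3D rendering of the object into the layer.
        self.pixels = f"pixels({self.name})"
        self.dirty = False

def composite(layers):
    """Re-render only dirty layers; return the screen regions that changed."""
    updated = []
    for layer in layers:
        if layer.dirty:
            layer.render()
            updated.append(layer.region)
    # A back-to-front blend of all layers would happen here; only the
    # regions in `updated` have new contents for the display.
    return updated

layers = [Layer("terrain", (0, 0, 640, 480)), Layer("player", (100, 80, 64, 64))]
composite(layers)            # first frame: everything renders
layers[1].dirty = True       # only the player moved
print(composite(layers))     # -> [(100, 80, 64, 64)]
```

The second frame touches only the player's 64x64 region; the terrain layer's pixels are reused untouched, which is where the memory-bandwidth savings were supposed to come from.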


Microsoft’s Siggraph 1996 Talisman Video

Microsoft’s description of THE PROBLEM: 3D graphics needs to get onto more desktops, but true 3D graphics cards require lots of RAM, and RAM is expensive. Besides, current RAM is too slow.

Microsoft’s description of THE SOLUTION: We’ll redesign the graphics pipeline, using lots of fancy spatial/temporal coherence algorithms, and work with some external hardware folks to produce working reference video boards to prove this works. (Hopefully forcing all the 3D graphics card manufacturers to go along with us.)

The reason why it failed:

RAM got a lot cheaper. We tried to redesign the graphics pipeline, stalling adoption. Intel went and increased CPU speed and memory throughput. First-person shooters on PCs got popular, and discrete 3D card manufacturers managed to get lots of memory onto cards that supported the existing graphics APIs. Dang!

While I might sound bitter, at the time I really did applaud Microsoft for attacking the problem. Admittedly I later cast doubt on the whole process, particularly when Samsung dropped out, then Cirrus Logic. The whole thing folded in on itself within 18 months: Microsoft failed to get four separate chip companies to do what it wanted, failed to produce any of the reference boards (code-named Escalante), Intel boosted CPU and RAM speeds (and announced the AGP bus), and the 3D graphics companies did what they do best. Microsoft also found that it couldn’t change the infrastructure of the 3D graphics industry; there was too much invested in rendering fast polygons to switch to some other method. Eventually Trident Microsystems paid Microsoft $250,000, planning to produce its own single-chip Talisman implementation (getting rid of those other bothersome chip companies), but nothing ever showed up.

About this time 3Dfx was going strong, and Diamond, Intergraph, and 3DLabs were all pushing out competent OpenGL-capable cards for less than $300.

The end result of Talisman was to scare the 3D graphics companies and light a fire under Intel (which came out with the MMX and AGP specs soon after Talisman’s announcement). Eventually Microsoft gave up, but snippets of Talisman technology still show up occasionally.

Companies that worked on Talisman hardware were Samsung, Fujitsu Microelectronics, Cirrus Logic, Philips (which replaced Samsung), and Silicon Engineering.

Here’s a link to the Siggraph presentation: Talisman: Commodity Realtime 3D Graphics for the PC.

If anyone has details or corrections feel free to mail them to me – let me know if I can post them here or just smile in secrecy.

Intro to the 1996 Siggraph Talisman paper

A new 3D graphics and multimedia hardware architecture, code-named Talisman, is described which exploits both spatial and temporal coherence to reduce the cost of high quality animation. Individually animated objects are rendered into independent image layers which are composited together at video refresh rates to create the final display. During the compositing process, a full affine transformation is applied to the layers to allow translation, rotation, scaling and skew to be used to simulate 3D motion of objects, thus providing a multiplier on 3D rendering performance and exploiting temporal image coherence. Image compression is broadly exploited for textures and image layers to reduce image capacity and bandwidth requirements. Performance rivaling high-end 3D graphics workstations can be achieved at a cost point of two to three hundred dollars.
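The key trick in that abstract — applying a full affine transform to an existing layer instead of re-rendering the object — can be illustrated with a small sketch. This is hypothetical Python of my own (the function names and parameters are invented, and a real compositor would warp pixels, not just corner points), but it shows how translation, rotation, and scale combine into one 2x3 matrix that approximates an object's 3D motion for a frame:

```python
# Sketch of the abstract's central trick (hypothetical code, not the actual
# Talisman hardware): rather than re-rendering an object every frame, the
# compositor warps its existing image layer with a 2D affine transform that
# approximates the object's 3D motion. (Full Talisman also allowed skew.)

import math

def affine(tx, ty, angle, scale=1.0):
    """Build a 2x3 affine matrix combining translation, rotation, and scale."""
    c = math.cos(angle) * scale
    s = math.sin(angle) * scale
    return [[c, -s, tx],
            [s,  c, ty]]

def warp_point(m, x, y):
    """Apply the affine matrix to a layer-space point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Warp a 64x64 layer's corners for this frame instead of re-rendering it:
m = affine(tx=10, ty=5, angle=0.0, scale=2.0)
corners = [(0, 0), (64, 0), (64, 64), (0, 64)]
print([warp_point(m, x, y) for x, y in corners])
# -> [(10.0, 5.0), (138.0, 5.0), (138.0, 133.0), (10.0, 133.0)]
```

As long as the warped layer stays visually close to a true re-render, this gives the "multiplier on 3D rendering performance" the abstract mentions: the expensive 3D pass runs at a lower rate than the cheap per-refresh compositing pass.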
