
Nvidia G80 meets DX 10 spec with 'dis-unified' shader

Horses for courses
Mon Jul 03 2006, 10:50
WE HEAR Nvidia has been beavering away to meet the DirectX 10 specification.

And the firm has decided it doesn't need unified shaders for its upcoming G80 chipset. Instead, it reckons you will be fine with twice as many pixel shaders as geometry and vertex shaders.

As we understand it, if an Nvidia DX10 chip ends up with 32 pixel shaders, the same chip will have 16 shaders able to process geometry instancing or vertex information.

ATI's R600 and its unified shaders work a bit differently. Let's assume the ATI hardware has 64 unified shaders. That means it can process up to 64 shader operations per clock in any mix: say 50 pixel and 14 vertex and geometry operations per clock, or 40 vertex, 10 pixel and 14 geometry operations per clock. Any split that adds up to 64 will do. I hope you get this maths.
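To make that arithmetic concrete, here is a minimal Python sketch of the unified idea. It is purely our illustration, not ATI's hardware or driver code, and the numbers are the example splits above:

```python
# Purely illustrative model of a unified shader pool: any per-clock mix of
# pixel, vertex and geometry work fits, as long as it sums to the pool size.
UNIFIED_POOL = 64

def fits_unified(pixel, vertex, geometry, pool=UNIFIED_POOL):
    """True if this per-clock workload fits in a unified pool of shaders."""
    return pixel + vertex + geometry <= pool

# The two example splits from the article, both adding up to 64.
print(fits_unified(pixel=50, vertex=14, geometry=0))   # True
print(fits_unified(pixel=10, vertex=40, geometry=14))  # True
```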

The Nvidian chippery is limited to 32 pixel and 16 vertex and geometry operations per clock, which might be a winning ratio, but it is still too early to say. We don't know who will win the next-generation hardware game, or whose approach is better: ATI's unified design or Nvidia's two-to-one ratio.
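By contrast, a fixed two-to-one split caps each kind of work separately. Again, this is only an illustrative sketch with the numbers quoted above, not anything confirmed about G80:

```python
# Purely illustrative model of a fixed two-to-one split: dedicated pixel units
# and dedicated vertex/geometry units, each with its own per-clock cap.
PIXEL_UNITS = 32
VERTEX_GEOMETRY_UNITS = 16

def fits_fixed(pixel, vertex, geometry):
    """True if the workload fits the separate pixel and vertex/geometry caps."""
    return pixel <= PIXEL_UNITS and (vertex + geometry) <= VERTEX_GEOMETRY_UNITS

# A vertex-heavy mix that a 64-unit unified pool could absorb (10 + 40 + 14 = 64)
# overflows the fixed split, since 40 + 14 > 16.
print(fits_fixed(pixel=10, vertex=40, geometry=14))  # False
```

The trade-off, then, is between a pool that bends to whatever the frame demands and dedicated units that may sit idle when the workload is lopsided.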

DirectX 10 actually doesn't care how you do your shaders: you talk to an abstraction layer, and the hardware can handle its pixel, vertex and geometry data any way it wants, serving the results back up to DirectX 10 in the end.

Nvidia's G80 is fully DirectX 10 and Shader Model 4.0 capable, but it won't be unified, according to our sources.

In the end, people care about frames per second, and that is what will ultimately decide who wins the next-generation graphics hardware race. µ

 
