EVGA Geforce GTX 680 vs Sapphire Radeon HD 7970

AMD and Nvidia battle for your £450
Wed Jun 06 2012, 13:51

AMD AND NVIDIA waited a long time to release their latest Graphics Core Next (GCN) and Kepler architectures, respectively, and we look at two cards that incorporate the headline GPU chips.

AMD's GCN architecture was first off the mark, with the firm 'releasing' the Radeon HD 7970 just before Christmas. However, AMD had to endure serious supply problems courtesy of TSMC, and it was April before there was widespread availability, by which time Nvidia had caught up with its Kepler architecture, showcased in the Geforce GTX 680.

Both AMD and Nvidia were at the mercy of TSMC and its 28nm process node. While TSMC has struggled to meet demand, both firms have been left with graphics card vendors unable to supply products to eager customers wanting to upgrade from cards that carry chips based on two-year-old architectures.

AMD's GCN is geared more towards GPGPU compute, with the firm citing higher instruction throughput per clock cycle: each compute unit, of which there are 32 on the Tahiti Radeon HD 7970 GPU, is able to execute instructions from different threads. This effectively means that up to 32 instructions can be issued in a single clock cycle.
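As a rough, back-of-the-envelope illustration, and assuming the reference Tahiti configuration of 64 stream processors per compute unit and a 925MHz core clock (figures not spelled out above), the peak single-precision throughput works out roughly as:

\[ 32\ \text{CUs} \times 64\ \text{stream processors} = 2048\ \text{stream processors} \]
\[ 2048 \times 2\ \text{FLOPs per clock} \times 0.925\,\text{GHz} \approx 3.79\ \text{TFLOPS} \]

That figure is a theoretical ceiling rather than anything a game will see in practice, but it shows why AMD leans on the compute angle.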

Even though AMD was first out of the gate, the aforementioned TSMC supply issues meant the firm wasn't able to capitalise on the three-month head start it had over Nvidia's Kepler.

When Nvidia showed off its Kepler architecture the firm was keen to stress that it was all about efficiency and simplification, and in particular lowering power consumption. Nvidia had built up a reputation - at times not entirely justified - for designing power-hungry GPU chips, and the firm realised that had to change if it wanted to eventually cascade the Kepler architecture down to its integrated CPU-plus-GPU Tegra system-on-chip.

Not only did Nvidia bump up the number of CUDA cores on the GK104 to 1,536 but, thanks to a considerable frequency jump, it also raised throughput. Nvidia's most visible change from a customer's point of view will be that board vendors no longer cite separate shader clock frequencies, something Nvidia called an "area optimisation" but which others would call a temporary fudge.
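For comparison, the same back-of-the-envelope sum for the GK104, assuming its reference base clock of roughly 1,006MHz, gives:

\[ 1536\ \text{CUDA cores} \times 2\ \text{FLOPs per clock} \times 1.006\,\text{GHz} \approx 3.09\ \text{TFLOPS} \]

Again, that is a theoretical peak rather than a measured result.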

AMD and Nvidia have gone different routes with their respective 28nm architectures; however, both GPUs are clocked similarly across various board vendors, hovering around the 1GHz mark. Not only have the firms opted for similar base clock speeds for their GPUs, but both have also added turbo boost modes, though the actual figures vary between board vendors.

AMD's decision to focus on GPGPU compute with its GCN architecture makes more sense when taken in the context of its accelerated processing units (APUs). Although AMD's Trinity still uses the firm's almost ancient Northern Islands graphics core, eventually some form of the GCN architecture will end up in APUs, where GPGPU compute is far more useful to AMD than a few percentage points' gain in game titles.

At least on paper, AMD has the advantage when it comes to memory bandwidth, thanks to a reference design that stipulates 3GB of GDDR5 memory on a 384-bit bus. It should be noted that some of Nvidia's board partners have stuck 4GB of GDDR5 memory on the GTX 680's 256-bit bus, albeit with price premiums attached.

Both AMD and Nvidia will tell you that the amount of onboard memory, often referred to as the frame buffer, is becoming increasingly important, with some textures approaching 1GB in size and a growing emphasis on multi-monitor PC gaming. Perhaps the most graphic illustration of the need for more high-speed onboard memory is the difference in bandwidth between the GPU and its onboard memory, close to 200GB/s, and between the GPU and main memory, which is accessed through a PCI-Express Gen 3 bus that can manage only a relatively meagre 32GB/s.
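To put rough numbers on those figures, and assuming the reference memory specifications of both cards and the theoretical limit of a 16-lane PCI-Express Gen 3 link, the arithmetic runs roughly as follows:

\[ \text{Radeon HD 7970: } \tfrac{384}{8}\ \text{bytes} \times 5.5\,\text{GT/s} = 264\ \text{GB/s} \]
\[ \text{Geforce GTX 680: } \tfrac{256}{8}\ \text{bytes} \times 6.0\,\text{GT/s} \approx 192\ \text{GB/s} \]
\[ \text{PCI-Express Gen 3 x16: } \approx 16\ \text{GB/s per direction, or roughly } 32\ \text{GB/s in aggregate} \]

The 'close to 200GB/s' figure above lines up with the GTX 680's reference specification; the HD 7970's wider bus pushes it higher still.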

Before we get into the cards and their performance, it should be obvious that these ultra-high-end cards are really only viable options for those with multi-monitor setups or a single 2560x1600 screen. Even then, paying AMD's or Nvidia's board partners the best part of £400 is only justified if your games library includes Microsoft DirectX 11 titles.

 
