
Nvidia says limited Geforce GPGPU performance is useful for debugging

Claims developers can code anywhere
Thu Jan 31 2013, 11:07

BOLOGNA: CHIP DESIGNER Nvidia justified its decision to limit double precision floating point GPGPU performance in its consumer Geforce video cards by saying that coders can debug code on their local machines and so make efficient use of compute time on larger clusters fitted with Tesla GPGPU accelerators.

Nvidia has been criticised for artificially limiting the double precision floating point GPGPU performance of its consumer Geforce video cards, while Tesla GPGPU boards sporting chips with the same architecture offer considerably higher performance. The firm justified this by claiming that developers can debug code on relatively inexpensive machines with consumer cards rather than waste allocated time on Tesla-based high performance computing (HPC) clusters.

Geoff Ballew, senior manager of Nvidia's Tesla Compute business unit, said, "An individual scientist or developer can actually program on a notebook as long as it has an Nvidia GPU. They can program their code and tune their kernels on an aeroplane, at an airport, out in the park. What we want to make sure is that they don't spend the precious time they get on a large scale system tuning code or finding issues with their code. We would much rather they code wherever they want, tune their code, and when they get the allocation on that system, that's where they can run their application at scale."
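
By way of illustration, a minimal double precision CUDA kernel of the kind Ballew describes might look like the sketch below, a generic DAXPY routine rather than anything Nvidia showed. The same source compiles with nvcc and runs unchanged on a Geforce notebook GPU or a Tesla K20X; the consumer part simply executes the double precision arithmetic at a fraction of the throughput, which matters little while a developer is checking that the code is correct.

// daxpy.cu: computes y = a*x + y in double precision.
// The same source runs on a consumer Geforce GPU or a Tesla accelerator;
// only the double precision throughput differs, not the results.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(double);

    // Fill host buffers with known values so the result is easy to check.
    double *hx = (double *)malloc(bytes);
    double *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Copy to whichever CUDA-capable GPU happens to be present.
    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element.
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}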

Ballew didn't mention that the double precision floating point GPGPU performance of those consumer parts is considerably lower than that of the Tesla accelerators found in large scale clusters. However, he did point out that because all of Nvidia's GPUs for the past few years support some level of GPGPU computing, universities are teaching CUDA, something that will help the firm as those students move into industry and start demanding Nvidia hardware to make use of their skills.

Nvidia's decision to limit double precision floating point performance on Geforce cards has long been seen as a commercial move to ensure that the firm sells the far more expensive Tesla boards. While Ballew has a point in that the relative ubiquity of Nvidia GPUs helps people code, test and debug anywhere, the same can now be said of Intel's and AMD's support for OpenCL on their respective processors with integrated GPGPU cores. µ
