The Inquirer

Nvidia releases CUDA 5 to support Kepler GPGPU features

Allows developers to access other GPU's RAM
Mon Oct 15 2012, 14:00

CHIP DESIGNER Nvidia has updated its CUDA programming language to take advantage of its Kepler-based GPUs.

Nvidia's Kepler architecture brought with it a number of advances in GPGPU, with the firm touting the ability to 'vary the parallelism' depending on workload characteristics. Now the firm has released CUDA 5 to support such features, along with GPUDirect and the Nsight Eclipse integrated development environment (IDE).

Nvidia's CUDA 5 programming language will be of particular interest to those developing applications that run on Kepler GPGPUs, with the firm claiming the code changes required to make use of Kepler's dynamic parallelism features are minimal. The firm also introduced separate compilation for GPU code, allowing device code to be compiled into object files and linked against GPU libraries.
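As a rough illustration of dynamic parallelism, the sketch below shows a parent kernel launching a child kernel directly from the device, with no round trip to the CPU. The kernel names are hypothetical, and the code assumes a compute capability 3.5 (Kepler GK110) device built with `nvcc -arch=sm_35 -rdc=true`:

```
#include <cstdio>

// Hypothetical child kernel: prints which parent thread spawned it.
__global__ void childKernel(int parent)
{
    printf("parent %d -> child %d\n", parent, threadIdx.x);
}

// Parent kernel launches child grids from the GPU itself -- the
// Kepler dynamic parallelism feature that CUDA 5 exposes.
__global__ void parentKernel()
{
    // Each parent thread can decide at runtime how much extra
    // parallelism it needs and spawn it without CPU involvement.
    childKernel<<<1, 4>>>(threadIdx.x);
}

int main()
{
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();  // wait for parents and their children
    return 0;
}
```

The point of the feature is that the amount of parallelism no longer has to be fixed by the host before launch; the GPU can vary it per thread as the workload unfolds.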

However, Nvidia's biggest new feature in CUDA 5 is GPUDirect, which allows CUDA applications to access memory on other GPUs across the PCI-Express bus and through the network card. According to Nvidia, accessing the memory of other GPUs over the PCI-Express bus is aimed at lowering the latency of memory accesses rather than increasing the local memory available to CUDA applications.
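A minimal sketch of the peer-to-peer side of this, using the CUDA runtime API on a machine assumed to have two peer-capable GPUs on the same PCI-Express root:

```
#include <cstdio>

int main()
{
    // Check whether GPU 0 can map GPU 1's memory at all.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("peer access unsupported on this system\n");
        return 1;
    }

    // Allocate a buffer on GPU 1.
    float *buf1 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&buf1, 1024 * sizeof(float));

    // Let GPU 0 map GPU 1's memory; kernels on GPU 0 can then
    // dereference buf1 directly, with transfers going over
    // PCI-Express instead of staging through host RAM.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    cudaSetDevice(1);
    cudaFree(buf1);
    return 0;
}
```

The trade-off matches Nvidia's framing: the remote GPU's RAM is not free extra capacity, but reaching it directly is faster than bouncing data through the host.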

The Achilles' heel of GPGPU accelerators is their relatively small amount of local memory. When working with large datasets that can span tens of gigabytes, the card has to request data from main memory, resulting in a significant bottleneck as the GPU waits to be fed data.

Nvidia also announced its Nsight Eclipse IDE, which includes not only CUDA syntax highlighting but also a debugger and a code profiler that the firm claims can identify performance issues in code. The firm said that its Nsight IDE is available for Linux and Mac OS X.

While Nvidia is making CUDA the programming language of choice for those using Kepler GPGPUs, the wider market is moving towards OpenCL, a language that Nvidia supports but clearly not to the same degree as its own proprietary CUDA. Even though CUDA is popular with researchers, Nvidia might find that some of these CUDA-only features will also have to be made available in OpenCL if it wants to see continued growth. µ
