
Nvidia brings LLVM to CUDA promising a 10 per cent speed bump

Boosts signal processing libraries too
Thu Jan 26 2012, 17:20

CHIP DESIGNER Nvidia has updated its CUDA GPGPU programming framework with a compiler based on the Low Level Virtual Machine (LLVM) infrastructure to boost performance.

Nvidia announced major changes to its proprietary CUDA programming framework last year and today released the first version to incorporate them, including the LLVM-based compiler. According to the company, the new compiler brings an "instant 10 per cent increase in application performance".
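
To put that claim in concrete terms, the gain is supposed to come from simply rebuilding existing CUDA code with the updated, LLVM-based nvcc rather than from rewriting it. The short kernel below is purely illustrative, a sketch of the sort of ordinary CUDA code that would pick up any compiler improvement at recompile time, and is not taken from Nvidia's announcement.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Illustrative kernel only: the claimed 10 per cent comes from rebuilding
// unchanged code like this with the updated LLVM-based nvcc.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);   // expect 4.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}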

Although Nvidia might extol the virtues of an LLVM compiler, the firm also provides a visual profiler to help coders optimise their code. The truth is that GPGPU code in most cases needs considerable hand tuning to squeeze every last bit of speed from the GPU.
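
For a rough feel of that tuning loop without reaching for the Visual Profiler, a common low-tech approach is to time individual kernels with CUDA events. The snippet below is a generic sketch of that pattern rather than a feature of the profiler itself, and its kernel is purely illustrative.

#include <cstdio>
#include <cuda_runtime.h>

// Dummy kernel so there is something to time.
__global__ void busy(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = sinf((float)i);
}

int main()
{
    const int n = 1 << 20;
    float *d;
    cudaMalloc(&d, n * sizeof(float));

    // CUDA events bracket the kernel launch and report elapsed GPU time.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    busy<<<(n + 255) / 256, 256>>>(d);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return 0;
}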

Nvidia has also expanded its signal processing library. Researchers usually turn to standalone digital signal processors to run such workloads, but with a growing signal processing library more of them can be run on CUDA-enabled Nvidia graphics boards instead.
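
The announcement doesn't spell out the library in question here, but cuFFT, Nvidia's GPU FFT library, is a typical example of the kind of signal processing work that can be pushed onto a CUDA board. The sketch below assumes cuFFT (linked with -lcufft) and simply runs a 1,024-point complex FFT on the GPU.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int N = 1024;                               // signal length
    std::vector<cufftComplex> signal(N);
    for (int i = 0; i < N; ++i) {                     // simple test signal: impulse train
        signal[i].x = (i % 16 == 0) ? 1.0f : 0.0f;    // real part
        signal[i].y = 0.0f;                           // imaginary part
    }

    cufftComplex *d;
    cudaMalloc(&d, N * sizeof(cufftComplex));
    cudaMemcpy(d, signal.data(), N * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);              // one 1D complex-to-complex transform
    cufftExecC2C(plan, d, d, CUFFT_FORWARD);          // FFT in place on the GPU
    cudaMemcpy(signal.data(), d, N * sizeof(cufftComplex), cudaMemcpyDeviceToHost);

    printf("bin 0: %f + %fi\n", signal[0].x, signal[0].y);

    cufftDestroy(plan);
    cudaFree(d);
    return 0;
}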

Although Nvidia has enjoyed considerable popularity in the research community with CUDA, it has recently faced stiff competition from OpenCL, an open GPGPU programming standard. Nvidia says it doesn't care what language coders use as long as they use its graphics boards, but pushing CUDA is still a good way to guarantee sales of its GPU products. µ