CHIP DESIGNER Nvidia has updated its CUDA GPGPU programming framework to include a compiler based on the LLVM (Low Level Virtual Machine) infrastructure to boost performance.
Nvidia announced major changes to its proprietary CUDA programming framework last year and today released the first version to incorporate them, including the LLVM-based compiler. According to the company, the new compiler brings an "instant 10 per cent increase in application performance".
Although Nvidia might extol the virtues of an LLVM-based compiler, the firm also provides a visual profiler to help coders optimise their code. In practice, GPGPU code usually requires considerable hand-tuning to squeeze every last bit of speed from the GPU.
Nvidia also expanded its signal processing library. Researchers typically turn to standalone digital signal processors for certain workloads, but with a growing signal processing library some of those workloads can now be run on CUDA-enabled Nvidia graphics boards instead.
Although Nvidia has enjoyed considerable popularity in the research community with CUDA, it has recently faced stiff competition from OpenCL, an open GPGPU programming standard. Nvidia has said it doesn't care what language coders use as long as they use its graphics boards, but pushing CUDA is still a good way to guarantee sales of its GPU products. µ