CHIP DESIGNER Nvidia has announced that the LLVM compiler will support its GPUs and in particular its CUDA GPGPU programming language in the upcoming 3.1 release.
Nvidia has worked with the LLVM developers to incorporate its CUDA compiler code into the LLVM core and parallel execution backend. The firm claims that developers can now write applications that make use of its GPUs in a wider range of programming languages, not just the C, C++ and Fortran that the CUDA compiler originally supported.
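For context, GPU kernels written for Nvidia's CUDA toolchain look like the following minimal sketch, a vector addition compiled by nvcc (the kernel name and code here are illustrative, not taken from Nvidia's announcement):

```cuda
// Illustrative CUDA C kernel: each GPU thread adds one pair of elements.
// nvcc lowers this device code through Nvidia's compiler, which now feeds
// into the LLVM infrastructure described in the article.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}
```

With CUDA support in LLVM, front ends for other languages can in principle generate this same kind of GPU code without going through CUDA C.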
The LLVM Project already has support for Objective-C, Ada, Haskell, Java bytecode, Python, Ruby and GLSL, along with C, C++ and Fortran. Perhaps surprisingly, Nvidia noted that the compiler infrastructure it adopted for CUDA is used by AMD as well.
Ian Buck, GM of GPU computing software at Nvidia, said, "The code we provided to LLVM is based on proven, mainstream CUDA products, giving programmers the assurance of reliability and full compatibility with the hundreds of millions of Nvidia GPUs installed in PCs and servers today."
Nvidia's work with the LLVM Project is another step in its move towards opening up its CUDA GPGPU language. In recent quarters the firm's proprietary CUDA language has come under heavy fire from OpenCL, which is backed by AMD and Intel, among many others. While Nvidia supports OpenCL, there is no doubt that it would prefer developers use its CUDA programming language, and since no other GPU firm has licensed the technology, it remains a form of vendor lock-in.
The LLVM Project will release version 3.1 of its compiler with support for Nvidia's GPUs on 14 May, while the code trunk is already viewable. µ