THE OLD COBBLERS about Nvidia busily chipping away at adapting its Cuda technology to run on AMD's GPUs has emerged yet again, seemingly sparked by some confusing comments from the Green Goblin's chief scientist, Bill Dally.
In a roundtable discussion, Ben Hardwidge of Techradar asked Dally about Cuda, mentioning that it currently works only on Nvidia GPUs. "If you're a developer who wants to reach as wide an audience as possible, wouldn't it be better just to go with OpenCL?" probed Hardwidge, only to be told "In the future you'll be able to run C with CUDA extensions on a broader range of platforms." Dally went on to cryptically add "I'm familiar with some projects that are underway to enable CUDA on other platforms." He didn't elaborate further.
Surprised by this, the INQ decided to ask Nvidia outright whether it was indeed fiddling about with Cuda to allow developers to use it on both NV and AMD GPUs, thereby making money off AMD's products. The answer, when it finally arrived after hours of waiting, was evasive.
Nvidia PR told us Dally had probably been referring to something else entirely, like a Linux-based tool designed to compile the CUDA programming model to a CPU architecture, or running C on anything from PCs, to handhelds, to servers and Playstations.
"So, nothing to do with AMD GPUs then?" *Cough, ahem, cough* "Er, we'll get back to you, but we don't think so" - or something to that effect - came Nvidia's response. When that response did half-heartedly come back to us, it stutteringly read "he [Dally] was giving a hypothetical....technology wise it could....both companies would have to do some work..." Aha. A hypothetical, eh? Hypothetically we could all be living on the moon by 2020 too.
Viewing the bigger picture of GPGPU, there are a couple of general purpose standards already available, including OpenCL and Microsoft's DirectX Compute, both of which are supported by AMD and Nvidia. But both firms have also decided to make their own proprietary flavours, with Nvidia out ahead with Cuda, whilst AMD lags far behind with Stream.
What work AMD/ATI is doing is focused on OpenCL. The implementation realities of OpenCL, however, and whether it could support heterogeneous mixed ATI/NV environments, are still very much unclear.
Cuda is generally believed to be a fair bit better than Stream - although, obviously, AMD begs to differ - with a multitude of developers having already put it to the test on hundreds of applications, which are already available. AMD, on the other hand, 'boasts' a paltry five ISVs, two of which are also working with Cuda, and says a development driver for OpenCL will be out "very soon". Nvidia's has already been released.
Still, for Cuda to be able to work on AMD GPUs, Nvidia would absolutely need AMD's support. Without it, Nvidia wouldn't be able to get low-level programming access to the GPU to develop the API. Even Nvidia admits that AMD would probably never allow this to happen. As for AMD, the company's point man on Stream seemed amazed we'd even asked.
AMD's Gary Silcott told the INQ "they [Nvidia] would intentionally damage performance to make Nvidia GPUs run the same app better." Then, perhaps thinking better of accusing Nvidia of hypothetical, yet outright, sabotage, Silcott added "Even if it wasn't intentional, it would not be optimized for our instruction set architecture like our own SDK."
That's okay though, since Nvidia has no intention of adapting its GPUs for AMD's technology either. "No, I don't see us supporting Stream..." said Nvidia's Derek Perez acidly when we asked him for his response. µ