SAN JOSE: CHIP DESIGNER Nvidia announced at its GPU Technology Conference (GTC) on Tuesday that its upcoming Volta GPU architecture will stack DRAM on the same silicon as the GPU.
Nvidia's GPUs presently rely on local memory residing on the same circuit board as the GPU itself. However, Jen-Hsun Huang, co-founder and CEO of Nvidia, said that the memory bandwidth between the GPU and local memory is "never enough", and with Volta the firm will stack DRAM modules on the same silicon substrate as the GPU.
Huang said that Volta, which will follow its next-generation Maxwell GPU architecture, will use through-silicon vias to connect the DRAM chips. He claimed that the bandwidth between the Volta GPU and the memory will hit 1TB/s, which compares favourably with the 250GB/s that the firm's Tesla K20X GPGPU accelerator can achieve.
Aside from the performance benefits, Huang also pointed out that the overall graphics card will be smaller. However, what Huang didn't explain is how Nvidia will maintain its gross margins when it has to pay for a chip that includes DRAM, rather than having the graphics card vendor solder the memory chips onto the circuit board.
Nvidia's decision to move the GPU's local memory physically closer to the GPU itself signals the growing importance of local memory to the overall performance of the GPU. Yesterday, Acceleware said that most CUDA GPGPU applications are bound by memory performance rather than outright GPU computing performance.
Nvidia's decision to mount DRAM on the same substrate as the Volta GPU is a massive gamble for the firm, but judging by what was said at GTC regarding GPU performance bottlenecks, it is a risk that Nvidia seems willing to take. µ