NVIDIA HAS SNAPPED BACK at Intel, poking fun at the company's claims that its CPUs beat Team Green's Tesla GPUs at inference workloads.
"It's not every day that one of the world's leading tech companies highlights the benefits of your products," spouted Paresh Kharya, director of product marketing at Nvidia.
"To achieve the performance of a single mainstream Nvidia V100 GPU, Intel combined two power-hungry, highest-end CPUs with an estimated price of $50,000-$100,000, according to Anandtech. Intel's performance comparison also highlighted the clear advantage of Nvidia T4 GPUs, which are built for inference.
"When compared to a single highest-end CPU, they're not only faster but also 7x more energy-efficient and an order of magnitude more cost-efficient."
That's some serious shade slinging, but Kharya went on to break down the bang for buck and performance per watt offered by the Tesla V100 and Turing-based T4 GPUs over a dual-socket Intel Xeon Platinum 9282 setup.
And in fairness to Nvidia, the results are fairly compelling. Yes, Intel's CPU wins on raw performance in the ResNet-50 test, but it draws a lot more power in doing so.
The efficiency of the GPUs over the CPUs is arguably to be expected: GPUs are far better geared for parallel processing than CPUs, and parallelism is a big deal in AI training and inference.
It's also worth noting that Intel never positioned its Xeon chips as ideal for inference workloads, unlike Nvidia's Tesla and T4 GPUs. Rather, the Xeon CPUs are general-purpose processors for data centre use that can do a good job at inference if needed.
But it looks like the language Intel used and the way it touted its results acted as a red flag to Nvidia, hence the pithy response.
At the same time, Kharya did acknowledge Intel's CPUs are pretty decent at inference: "Intel's latest Cascade Lake CPUs include new instructions that improve inference, making them the best CPUs for inference."
He then added that they still can't hold a candle to Nvidia GPUs with their dedicated deep learning-optimised Tensor cores, which seems a fair enough observation.
This type of tech willy-waving is silly, but it does show that healthy competition in the market leads to more innovative products. The new GeForce RTX graphics cards, for example, come with deep learning capabilities thanks to their Tensor cores, which undoubtedly filtered down from Nvidia's work on its data centre GPUs. µ