CHIPMAKER Intel has said it needs to get beyond existing technologies if it is to hit usable exascale supercomputers by 2018.
Intel has already said it needs to beat Moore's Law to hit its goal of producing an exascale supercomputer by 2018. Now the firm has added that transistor performance is no longer scaling as it used to, and that current trends will not meet the performance or power requirements of exascale computing.
Intel senior fellow Steve Pawlowski said Intel has used multiple cores to overcome transistor performance barriers, citing clock speeds stagnating around the 3GHz mark as an example of the problem, but he added that multi-core alone is not the solution.
Pawlowski said Intel was "not getting the same energy efficiencies seen before" from transistors. More worryingly, he said that adding more cores has resulted in "not scaling the bandwidth", explaining that "I/O [input/output] is a big issue".
Intel has to make significant power savings, according to Pawlowski, who said that extrapolating current technology would result in an exascale supercomputer that draws 54MW. Pawlowski said this was pointless, as most datacenters are simply unable to power and cool that sort of cluster, and that Intel will have to design an exascale supercomputer that uses no more than 20MW.
Usually Intel and other chip designers can rely on shrinking process nodes to produce both power and performance gains, but Pawlowski said even that avenue has been shut off. He said, "Even with 8nm [process node] in 2018, that would give 8Gflop/W with current scaling... that is not enough for exascale."
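Pawlowski's figures make the gap easy to check with back-of-envelope arithmetic. A sketch, assuming an exaflop means 10^18 floating-point operations per second (the Gflop/W and MW figures below are computed from the quoted numbers, not from the article):

```python
EXAFLOP = 1e18  # 1 exaflop = 10^18 flop/s (standard definition)

projected_efficiency = 8e9   # 8 Gflop/W, the 8nm-in-2018 projection quoted above
power_budget = 20e6          # Intel's 20MW target, in watts

# Power an exaflop machine would draw at the projected efficiency
power_needed_mw = EXAFLOP / projected_efficiency / 1e6

# Efficiency actually required to fit an exaflop into 20MW
required_gflop_per_watt = EXAFLOP / power_budget / 1e9

print(power_needed_mw)          # 125.0 -- well over the 20MW budget
print(required_gflop_per_watt)  # 50.0  -- versus the 8 Gflop/W projection
```

In other words, the projected 8 Gflop/W implies a machine drawing roughly 125MW, while hitting the 20MW budget would require about 50 Gflop/W, more than six times the projected efficiency.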
Pawlowski's comments highlight that Moore's Law might be good enough for consumers, but at the high end Intel has to do a lot more. Intel and other chipmakers will have to move beyond traditional methods such as shrinking process nodes and adding ever more cores if they are to create a usable exascale supercomputer. µ