US PRESIDENT Barack Obama launched an initiative this week to develop an exaflop supercomputer that will be 30 times more powerful than today's fastest machine.
Obama signed an executive order on Thursday to establish the National Strategic Computing Initiative (NSCI), a project intended to bolster the country's standing in scientific computing and beat China when it comes to high-performance computing (HPC).
The White House said that the NSCI will "maximise [the] benefits of HPC research, development and deployment", and will require supercomputers to achieve levels of performance and power efficiency not achieved before. This means a machine capable of one exaflop, a billion billion calculations per second.
"Over the past six decades, US computing capabilities have been maintained through continuous research and the development and deployment of new computing systems with rapidly increasing performance on applications of major significance to government, industry and academia," reads the executive order.
"Maximising the benefits of HPC in the coming decades will require an effective national response to increasing demands for computing power, emerging technological challenges and opportunities, and growing economic dependency on, and competition with, other nations."
No prizes for guessing to which nation in particular that refers.
The initiative will have five main objectives:
• To accelerate delivery of an exascale computing system capable of delivering 100 times the performance of current 10 petaflop systems.
• Increasing coherence between the technology used for modelling and simulation and that used for data analytics computing.
• Establishing where the next 15 years of HPC might take us after the limits of current semiconductor technology are reached, a "post-Moore's Law era".
• Increasing the capacity and capability of an enduring national HPC ecosystem through networking technology, workflow, downward scaling, foundational algorithms, software, accessibility and workforce development.
• Developing a public-private collaboration so that findings of the research are shared between the US government and the industrial and academic sectors.
However, the White House did not reveal how much funding is being put forward for the project, or when it expects the supercomputer to be operational, although there is talk that it will be up and running by 2025.
GPUs not CPUs
An exaflop supercomputer is all well and good on paper, but building one will require a rethink of how these powerful machines are designed and how they consume energy, to make them more efficient.
For starters, it would need to be powered by GPUs, because CPUs alone would suck up two gigawatts of electricity. This equals the output of the Hoover Dam, according to graphics firm Nvidia.
"Unlike CPUs, GPUs rely on large numbers of small, power-efficient computing cores that can handle up to 10 times more operations per unit of energy," the company said in a blog post about the exaflop supercomputer development.
A key new technology Nvidia is developing is the high-speed NVLink interconnect, which it said will help the CPUs and GPUs inside supercomputers exchange data five to 12 times faster than PCI Express.
Nvidia touted NVLink as the "world's first" high-speed GPU interconnect, using the analogy of doing away with congestion in Los Angeles by expanding the roads from four lanes to 20.
There are other benefits, too. NVLink lets CPUs and GPUs connect in new ways to enable more flexibility in server design. NVLink is also much more energy efficient than PCI Express.
NVLink technology will play a role in the next-generation systems currently in development: the Summit system at Oak Ridge and the Sierra system at Lawrence Livermore National Laboratory. These were announced in April, marking the latest development since the US Department of Energy (DoE) threw some $325m at IBM and Nvidia to build the world's fastest supercomputers by 2017.
The Summit system is the more powerful of the two. It will not be completed until 2018, despite the DoE's original goal of 2017, but will offer 150 to 300 petaflops of computational performance when finished, Nvidia said.
That's at least five times more than Oak Ridge's Titan, currently the fastest supercomputer in the US, and roughly three times the theoretical peak of the world's current champion, the Intel-powered Tianhe-2 in China, at 55 petaflops.
Nvidia didn't say whether it will be involved in the NSCI, but did say that the initiative will extend efforts now underway at the DoE, the Department of Defense and the National Science Foundation, where Nvidia's GPUs are already playing a role.
"A key focus for NSCI is developer productivity and portability. We recently announced a free toolkit around OpenACC, which dramatically simplifies programming for parallel processors, whether it's x86 CPUs or GPUs," said the firm.
"With OpenACC, the same code can run on different exascale architectures, delivering performance portability on any system."
Groundbreaking or unrealistic?
Once completed, the US exaflop supercomputer will be required to perform complex simulations that will aid scientific research and national security projects, along with weather modelling and medical applications.
The payoff of a supercomputer efficient enough to reach exaflop speeds could be "enormous", according to Nvidia.
An exaflop computer would "have the potential to provide unprecedented insight into the workings of the human brain", or even lead to breakthroughs in personalised medicine, for example by assisting in cancer diagnoses through the analysis of X-ray images.
However, not everyone is convinced. UK supercomputer and optical computing firm Optalysys believes that, if Obama is to have a chance of achieving his target, he needs to take a novel approach, as his proposed scheme is "massively inefficient" and will cost at least £60m a year to run.
Optalysys chairman James Duez said that there are huge problems involved just in scaling the electronic processing, and that an optical system, predictably, would be much more efficient.
"The power consumption alone will be massive," he said. "An optical approach uses light rather than electricity as the processing medium, so you have the potential to run even a massive supercomputer from a domestic power supply for just a few thousand a year."
Duez explained that using the exaflop computers to analyse X-ray images is just "the tip of the iceberg".
"An optical approach is ideally suited to analysing MRI data which is currently massively inefficient, capable of processing only a small percentage of the data generated," he said.
"A patient has to sit in an MRI scanner for a long time and rely on a doctor to correctly scrutinise pictures manually.
"An optical approach could lead the way to a universal scanner capable of comparing all MRI data on a single pass with reference data to determine all possible ailments."
The US isn't alone in its bid to build faster supercomputers. China and other nations have exascale aspirations, but this move by the Obama administration is designed to get the US there first.
The US is currently lagging behind China in the supercomputer league table. China's Tianhe-2 machine leads the way with performance of 33.86 petaflop/s (quadrillions of calculations per second). The best US machine can achieve 17.59 petaflop/s. µ