WHEN YOU HEAR THE TERM 'high-performance computing', or HPC as it is better known, the images conjured in the mind are most likely of vast dark rooms illuminated only by thousands of blinking lights emitted from stacks of servers, married with the buzz of 100 fans working tirelessly to cool the towers of number-crunching machines.
These systems, which boast exceptional calculation and processing power, may appear space-age in their form, but their significance goes far beyond looking like something from a sci-fi film. HPC has one chief motive: the pursuit of advanced scientific research missions. And even the fastest and most powerful machines out there still don't seem to be enough, especially for some governments.
Take the US, for example. President Barack Obama launched an initiative in July to develop a supercomputer that will be 30 times more powerful than today's fastest machine. He signed an executive order to establish the National Strategic Computing Initiative, a project looking to bolster the country's stance on scientific computing.
The White House said that the initiative was set up to "maximise [the] benefits of HPC research, development and deployment", and will require supercomputers to reach levels of performance and power efficiency never before achieved: one exaflop, or a billion billion calculations per second.
But while the US injects energy, and a lot of money, into beating China to the top spot for the world's fastest supercomputer by exceeding the exaflop mark, HPC is taking on a more humble yet equally valuable role in the UK.
Researchers from one of the UK's most prestigious institutions, the University of Oxford, are benefiting from a new HPC system designed and integrated by big data management, storage and analytics provider OCF.
The Advanced Research Computing (ARC) resource was installed earlier this year and is not just used to show off number crunching benchmarks in the name of science. It is installed as a central cluster to support research across all four divisions at the university: Mathematical, Physical and Life Sciences; Medical Sciences; Social Sciences; and Humanities.
Lending a helping hand across a range of disciplines, the integration of HPC across the university's more traditional fields of study allows professors and students to innovate and advance research in all sectors.
"Our role at the university is to help people roll out HPC in departments that don't already use it, and if HPC is applicable there we will also provide training," said the university's head of ARC IT services, Andrew Richards.
"We are a one-stop shop around HPC, working with specific departments that use HPC and applying sensible decisions to get the results they need.
"All kinds of people use our systems: those new to the university, and also new to HPC, ask us to help them install specific applications, and - sometimes - people that have never thought of using it before come to us because they've hit a wall in their research, or have a problem, and they start to look for help."
Built by OCF, the University of Oxford's HPC cluster is run by five individuals - two who specialise in running the systems and two who work on applications, alongside Richards. It comprises Lenovo NeXtScale servers with Intel Haswell CPUs, connected by 40Gb/s InfiniBand to an existing Panasas storage system. OCF upgraded the storage system with an additional 166TB, giving a total capacity of 400TB. Existing Intel Ivy Bridge and Sandy Bridge CPUs from the university's older machine are still running and have been merged into the new cluster. Twenty Nvidia Tesla K40 GPUs were also added.
The supercomputer is also pretty power-efficient, despite the performance upgrades. Richards explained that the university can operate the 5,000-core machine on almost exactly the same power budget as the old 1,200-core machine it replaced in April.
The HPC resource supports a broad range of research projects across the university. As well as computational chemistry, engineering, financial modelling and data mining of ancient documents, the cluster is used in collaborative projects like the T2K experiment using the J-PARC accelerator in Tokai, Japan. Other research includes the Square Kilometre Array project, and anthropologists using agent-based modelling to study religious groups.
An example of a project in which the machine has helped another department is the Networked Quantum Information Technologies Hub, an initiative led by Oxford, which looks to design new forms of computers that will accelerate discoveries in science, engineering and medicine.
"Through the university's Department of Physics, we have researched the development of a quantum computer, including lots of research into what that means and how we would build it, and the mathematical analysis of how we'd handle the data, but using classic computers to make quantum computers," explained Richards.
"At first, the intention was for the physics department to build their own machine and system just for their needs and they needed data space. We convinced them to co-invest in the cluster procurement that we were doing and then we could help them build the system and carry on their research."
Richards added that the project was a huge success, with the physics department becoming the largest co-investor in the HPC cluster.
"No one uses their system 100 percent of the time and [the physics department has] realised the best part is the centralised Oxford system as they don't have to worry about hardware failure and the general management of the system. If they'd been doing it alone they wouldn't have had access to people on our team that helped in the early days of the project," he said.
"We helped them realise they could get the same result doing things differently and do the same work in a much shorter time."
The cluster is also used in collaborative projects because the university is part of Science Engineering South, a consortium of five universities working on e-infrastructure, particularly around HPC. The consortium works with commercial companies that can buy time on the machine, so the new cluster is supporting a host of different research across the region.
As a central resource for the entire university, the ARC department sees itself as the first stepping stone into HPC.
"From PhD students upwards, for example people who haven't used HPC before - they are who we really want to engage with," said Richards. "I don't see our facility as just running a big machine. We're here to help people do their research. That's our value proposition."
Richards told us that there are big challenges in deciding how much the university continues to run on-premises, and how much will be powered by the cloud.
"We will potentially use more in the cloud. We are going in that direction as it's beginning to look more interesting and promising from a tech point of view," he said.
Moving to the cloud would mean that the ARC department wouldn't have to run hardware or maintain hardware contracts, so costs would decline in that respect. The carbon tax burden would also fall, which would work in the university's favour: as tax levies increase, everyone is under pressure to cut the electricity bill.
However, while Oxford's ARC department can envisage moving to the cloud further down the line, there is concern that doing so too soon wouldn't be cost-effective.
"Once you put a lot of the workload into the cloud, meaning 80 percent of the system fully loaded all the time, it doesn't make sense to move as the cost difference is too large," concluded Richards.