THE SAN DIEGO SUPERCOMPUTER CENTER (SDSC) plans to build a supercomputer that uses massive amounts of flash-based memory to help improve processing speeds.
Using a $20 million grant from the National Science Foundation, the SDSC is building Gordon, a supercomputer designed to tackle 'critical science and societal problems now overwhelmed by the avalanche of data generated by the digital devices of our era'. The system is expected to be operational in mid-2011.
Once up and running, Gordon will feature 245 teraFLOPS (TF) of total compute power, 64TB of DRAM, 256TB of flash memory and four petabytes of disk storage, placing it pretty high in the Top 500 supercomputers list.
Although Gordon packs some impressive raw processing power, the architecture is specifically designed to cope with high performance computing (HPC) problems that involve large data sets. It will make optimal use of tiered storage to ensure that processors are not left waiting for data from slower spinning disk storage.
"We are clearly excited about the potential for Gordon," said Michael Norman, interim director of the SDSC.
"This HPC system will allow researchers to tackle a growing list of critical 'data-intensive' problems. These include the analysis of individual genomes to tailor drugs to specific patients, the development of more accurate models to predict the impact of earthquakes on buildings and other structures, and simulations that offer greater insights into what's happening to the planet's climate."
Gordon will be built primarily on Intel hardware, using the newest processors and SSD technology available in 2011. The entire system will consist of 32 'supernodes', each of which will comprise 32 compute nodes with 64GB of DRAM and 240 gigaFLOPS of processing power apiece.
Each supernode will also incorporate two I/O nodes, each with 4TB of flash memory, all interconnected via an InfiniBand network capable of 16Gbps of bi-directional bandwidth. When tied together by virtual shared memory, each supernode will have 2TB of DRAM and 8TB of flash memory, with the potential for nearly 7.7TF of compute power.
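The published figures are internally consistent, as a quick back-of-the-envelope check shows (all numbers below are taken from the article; nothing here is measured or official):

```python
# Sanity-check Gordon's published specs from the per-node figures.
COMPUTE_NODES_PER_SUPERNODE = 32
SUPERNODES = 32
DRAM_PER_NODE_GB = 64
GFLOPS_PER_NODE = 240
IO_NODES_PER_SUPERNODE = 2
FLASH_PER_IO_NODE_TB = 4

total_nodes = SUPERNODES * COMPUTE_NODES_PER_SUPERNODE            # 1024 nodes
total_tflops = total_nodes * GFLOPS_PER_NODE / 1000               # 245.76 TF, the quoted "245 TF"
total_dram_tb = total_nodes * DRAM_PER_NODE_GB / 1024             # 64 TB, as quoted
total_flash_tb = SUPERNODES * IO_NODES_PER_SUPERNODE * FLASH_PER_IO_NODE_TB  # 256 TB, as quoted

# Per-supernode aggregates quoted in the article:
supernode_dram_tb = COMPUTE_NODES_PER_SUPERNODE * DRAM_PER_NODE_GB / 1024   # 2 TB
supernode_flash_tb = IO_NODES_PER_SUPERNODE * FLASH_PER_IO_NODE_TB          # 8 TB
supernode_tflops = COMPUTE_NODES_PER_SUPERNODE * GFLOPS_PER_NODE / 1000     # 7.68 TF, "nearly 7.7 TF"

print(total_tflops, total_dram_tb, total_flash_tb, supernode_tflops)
```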
"SDSC's Gordon will be the most recent tool that can be applied to data-driven scientific exploration," explained José Muñoz, deputy director and senior science advisor for the National Science Foundation's Office of Cyberinfrastructure.
"It was conceived and designed to enable scientists and engineers - indeed, any area requiring demanding, extensive data analysis - to conduct their research unburdened by the significant latencies that impede much of today's progress."
The SDSC reckons that Gordon will be perfect for 'data-mining' tasks that involve not just large volumes of data but massive individual datasets, such as the raw data from three-dimensional seismic tomographic images used to understand and predict the impact of large-scale earthquakes on buildings and other structures along major fault lines.
In these applications, large databases could be loaded into flash memory and queried with much lower latency than if they were resident on disk.
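The idea can be illustrated in miniature (this is a generic sketch, not Gordon's actual software stack): the same query runs against an on-disk SQLite database and against a copy staged entirely in memory, which stands in for data promoted from spinning disk into flash or DRAM.

```python
import os
import sqlite3
import tempfile

# Build a small on-disk database of hypothetical sensor readings.
path = os.path.join(tempfile.mkdtemp(), "readings.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE readings (station INTEGER, magnitude REAL)")
disk.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i % 100, i * 0.001) for i in range(10_000)])
disk.commit()

# Stage a copy in memory - the analogue of loading the working set
# into flash/DRAM instead of querying it from spinning disk.
mem = sqlite3.connect(":memory:")
disk.backup(mem)

# The query is identical; only the storage tier it hits differs.
query = "SELECT COUNT(*) FROM readings WHERE station < 10"
count_mem = mem.execute(query).fetchone()[0]
count_disk = disk.execute(query).fetchone()[0]
print(count_mem, count_disk)
```

The results are the same either way; what changes is that random reads against the in-memory copy avoid seek latency entirely, which is the effect Gordon's flash tier is meant to deliver at scale.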
"Moving a physical disk-head to accomplish random I/O is so last-century," said Allan Snavely, associate director of SDSC and co-principal investigator for this innovative system.
"Indeed, Charles Babbage designed a computer based on moving mechanical parts almost two centuries ago. With respect to I/O, it's time to stop trying to move protons and just move electrons. With the aid of flash solid-state drives (SSDs), this system should do latency-bound file reads 10 times faster and more efficiently than anything done today."
Full details of the new system will be unveiled at SC09, the international conference on high performance computing, networking, storage and analysis, to be held in Portland, Oregon from 14 to 20 November. µ