HPC | News
UC San Diego Deploying 2 PFLOPS System Supporting High-Performance Virtualization
- By Dian Schaffhauser
Two universities are building high-performance research clusters based on Dell hardware; both are expected to go online in 2015. The San Diego Supercomputer Center at the University of California, San Diego is deploying Comet, a virtualized petascale supercomputer aimed at general research areas such as social sciences and genomics. Comet is funded by a $6 million National Science Foundation (NSF) grant. The Texas Advanced Computing Center (TACC) at the University of Texas at Austin is building Wrangler, a data analysis and management system for the national open science community, also funded by a $6 million NSF grant.
Comet is expected to deliver peak performance of "nearly" 2 petaFLOPS, but its design is geared toward many modest-scale jobs rather than a few very large ones. Each node will be equipped with two processors, 128 GB of traditional DRAM and 320 GB of flash memory. Comet will also include large-memory nodes and GPU-heavy nodes for visualization-intensive work.
Comet will be the first Extreme Science and Engineering Discovery Environment (XSEDE) production system to support high-performance virtualization. XSEDE is a five-year, multi-institutional, NSF-funded project that will let researchers share computing resources, data, and support on a single virtual system.
"Comet is all about [high performance computing] for the 99 percent," said San Diego's Supercomputer Center Director Michael Norman. "As the world's first virtualized HPC cluster, Comet is designed to deliver a significantly increased level of computing capacity and customizability to support data-enabled science and engineering at the campus, regional and national levels."
U Texas' Wrangler will feature Dell PowerEdge R620 and R720 rack server compute nodes and 20 petabytes of storage built on PowerEdge C8000 components. It is expected to integrate with TACC's Stampede supercomputer and, through Stampede, with XSEDE.
"Wrangler is designed from the ground up for emerging and existing applications in data intensive science," said Dan Stanzione, Wrangler's lead principal investigator and TACC deputy director. "Wrangler will be one of the largest secure, replicated storage options for the national open science community."
TACC also expects to deploy Maverick in January 2014; that system will handle remote data analysis and visualization work using HP and Nvidia components.
Dell introduced both institutions as customers during SC13, the international conference for high-performance computing taking place this week in Denver.
Dian Schaffhauser is a writer who covers technology and business for a number of publications. Contact her at email@example.com.