San Diego Supercomputer Center Tapped To Help Shape Next Gen Computing
- By Dian Schaffhauser
- 11/17/10
Intel has brought the San Diego Supercomputer Center at the University of California, San Diego into a project designed to create the next generation of computer systems. Intel is one of four entities funded to develop prototypes for the Ubiquitous High Performance Computing program run by the Defense Advanced Research Projects Agency (DARPA).
The goal of this multi-year initiative, announced in August 2010, is to develop radically new computer architectures and programming models that deliver 100 to 1,000 times more performance and are easier to program than current systems. The expectation is that the new system capabilities will provide at least 50 times greater efficiency in energy use, computing, and productivity, which in turn will slash the time needed to design and develop complex computing applications. To put this in perspective, the current crop of top supercomputers operates at the petascale level, which equates to a thousand trillion calculations per second. The next level is exascale, a million trillion calculations per second--a thousand times faster than today's machines.
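In powers of ten, those scales work out as follows (a restatement of the article's own figures, not text from the original):

$$10^{15}\ \tfrac{\text{calculations}}{\text{s}}\ (\text{petascale}) \times 1{,}000 = 10^{18}\ \tfrac{\text{calculations}}{\text{s}}\ (\text{exascale})$$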
During the first phases of the award, the Supercomputer Center's Performance Modeling and Characterization (PMaC) laboratory will analyze and map applications to run efficiently on Intel hardware. PMaC studies the factors that affect the performance of high-performance computing platforms. The kinds of applications to be analyzed in this latest project perform rapid processing of real-time sensor data, ferret out complex connections within graphs, and handle complex strategy planning.
The project will explore new circuit topologies, new chip and system architectures, and new programming techniques to reduce the amount of energy required per computation by two to three orders of magnitude.
According to Allan Snavely, associate director of the Supercomputer Center and head of the PMaC lab, the Intel team includes experts in low-power device design, optimizing compilers, expressive programming languages, and high-performance applications, which is PMaC's special expertise. "We are working to build an integrated hardware/software stack that can manage data movement with extreme efficiency," he said. "Today's crude and simplistic memory cache and prefetch policies won't work at the exascale level because of the tremendous energy costs associated with that motion. Today it takes a nanojoule to move a byte even a short distance. Multiply that byte into an exabyte--1 quintillion bytes--and one would need a nuclear plant's worth of instantaneous power to move it based on today's technology."
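Snavely's figure can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below is ours, not from the article; it assumes his stated 1 nanojoule per byte and, additionally, that the full exabyte is moved within one second:

```python
# Back-of-the-envelope check of the data-movement energy claim.
# Assumptions: 1 nanojoule per byte moved (from the quote), and the
# entire exabyte moved within one second (our assumption).

ENERGY_PER_BYTE_J = 1e-9   # 1 nanojoule, in joules
EXABYTE_BYTES = 1e18       # 1 quintillion bytes
TRANSFER_TIME_S = 1.0      # assumed one-second window

total_energy_j = ENERGY_PER_BYTE_J * EXABYTE_BYTES   # 1e9 J
power_w = total_energy_j / TRANSFER_TIME_S           # 1e9 W

print(f"Energy to move 1 EB: {total_energy_j:.0e} J")
print(f"Power over {TRANSFER_TIME_S:.0f} s: {power_w / 1e9:.1f} GW")
```

The result is about 1 gigawatt, which is indeed on the order of a large nuclear reactor's electrical output, consistent with the comparison in the quote.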
Intel's other academic partners for the project include computer science and engineering faculty at the University of Delaware and the University of Illinois at Urbana-Champaign.
The first two phases of the project extend into 2014, and a full system design and simulation is expected at the completion of those phases. Phases 3 and 4 of the project, which have not yet been awarded by DARPA, are expected to result in a full prototype system sometime in 2018.
About the Author
Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.