U Tennessee Prof Takes on Exascale Computing

A computer science professor from the University of Tennessee in Knoxville has earned a million-dollar grant to explore the next generation of high-performance computing. Jack Dongarra, who is also affiliated with the University of Manchester and Oak Ridge National Laboratory, received the three-year grant from the United States Department of Energy to better understand the changes that will be required in software for exascale supercomputers, as they're called.

This new generation of supercomputers will be capable of a quintillion floating point operations per second, or one exaflop — equivalent to a thousand petaflops. Today's fastest supercomputer, China's Tianhe-2, performs at about 34 petaflops; Cray's Titan supercomputer was benchmarked at about 18 petaflops. Exascale computing is expected to be reached much later in this decade.
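The scale of that jump can be made concrete with a little arithmetic (the Tianhe-2 figure is the rough benchmark cited above):

```python
# Units of supercomputer performance, in floating point
# operations per second (FLOPS).
PETAFLOP = 10**15   # one quadrillion operations per second
EXAFLOP = 10**18    # one quintillion operations per second

# One exaflop is a thousand petaflops.
assert EXAFLOP // PETAFLOP == 1000

# Speedup an exascale machine would represent over Tianhe-2's
# roughly 34-petaflop benchmark.
tianhe2 = 34 * PETAFLOP
speedup = EXAFLOP / tianhe2
print(round(speedup))  # roughly a 29x jump over today's fastest machine
```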

However, said Dongarra in a statement, "You can't wait for the exascale computers to be delivered and then start thinking about the software and algorithms." The challenges for what he calls "extreme computing" include programming issues, fault tolerance, and power usage.

Dongarra, who continues to teach and contributed to 46 peer-reviewed papers in 2012, is working with researchers on several problems. One project is the Parallel Runtime Scheduling and Execution Control (PaRSEC) framework, which provides libraries, a runtime system, and development tools to help developers port their applications to new kinds of environments.
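The core idea behind such a runtime system — executing each task as soon as the tasks it depends on have finished — can be sketched in miniature. This is a toy illustration with hypothetical task names, not PaRSEC's actual interface (PaRSEC itself is a C-level framework that schedules tasks across many cores and nodes):

```python
from collections import deque

def run_tasks(deps, actions):
    """Execute tasks once all of their dependencies have completed.

    deps maps each task name to the set of tasks it depends on;
    actions maps each task name to a zero-argument callable.
    A real task-based runtime does this across cores and nodes,
    overlapping computation and communication.
    """
    remaining = {task: set(d) for task, d in deps.items()}
    ready = deque(t for t, d in remaining.items() if not d)
    order = []
    while ready:
        task = ready.popleft()
        actions[task]()          # run the task's work
        order.append(task)
        # Release any task whose last dependency just finished.
        for t, d in remaining.items():
            if task in d:
                d.discard(task)
                if not d and t not in order and t not in ready:
                    ready.append(t)
    return order

# A small dependency chain with hypothetical names: factorization
# must finish before the solves, which precede the final check.
deps = {"factor": set(), "solve_lower": {"factor"},
        "solve_upper": {"solve_lower"}, "check": {"solve_upper"}}
log = []
actions = {t: (lambda t=t: log.append(t)) for t in deps}
run_tasks(deps, actions)
print(log)  # tasks ran in dependency order
```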

Dongarra is also developing an algorithm to overcome a reliability problem associated with the increasing number of processors. Currently, when one processor fails, part or all of the calculation may have to be repeated. The algorithm project aims to develop software that can survive failures and perform auto-tuning to adapt to the hardware.
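One established approach in this area, algorithm-based fault tolerance, keeps lightweight checksums alongside the data so a value lost to a failed processor can be reconstructed rather than recomputed from scratch. The sketch below is illustrative only, not the software funded by this grant:

```python
def add_checksum(values):
    """Append the sum of the values as a recovery checksum."""
    return values + [sum(values)]

def recover(protected, lost_index):
    """Reconstruct one lost value from the survivors and the
    checksum, avoiding a full recomputation."""
    checksum = protected[-1]
    survivors = [v for i, v in enumerate(protected[:-1])
                 if i != lost_index]
    return checksum - sum(survivors)

# Protect a row of data distributed across processors.
protected = add_checksum([3.0, 1.0, 4.0, 1.0])   # checksum = 9.0
# Suppose the processor holding index 2 fails; its value (4.0)
# is recoverable from the other values plus the checksum.
print(recover(protected, 2))  # prints 4.0
```

A single checksum can recover one lost value per row; tolerating multiple simultaneous failures requires additional checksum rows, at correspondingly higher memory and arithmetic cost.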

"The exascale computers are going to be dramatically different than the computers we have today," he noted. "We have to have the techniques and software to effectively use these machines on the most challenging science problems in the near future."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
