Stanford Lab Tackles Parallel Computing

Stanford University and a consortium of technology companies are announcing a joint effort to build a Pervasive Parallelism Lab. The initiative pools the efforts of many Stanford computer scientists and electrical engineers with support from Sun Microsystems, Advanced Micro Devices, Nvidia, IBM, HP, and Intel.

The announcement of the lab comes less than two months after the University of California at Berkeley and the University of Illinois at Urbana-Champaign each received multimillion-dollar grants from Microsoft and Intel to address the same parallel programming problem.

The Stanford center, with a budget of $6 million over three years, will research and develop a top-to-bottom parallel computing system, stretching from fundamental hardware to new user-friendly programming languages that will allow developers to exploit parallelism automatically.

Until recently, computers with large numbers of processors were typically found in specialized environments such as supercomputing centers. As a consequence, few programmers have learned how to design software that exploits parallelism, a gap that has raised serious concern among computer scientists that progress in computing overall could stall.

"Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years," said Bill Dally, chair of the Computer Science Department at Stanford.

Dally will participate in the lab's research, which will be directed by Kunle Olukotun, a professor of electrical engineering and computer science. Olukotun has worked for more than a decade on multicore computer architecture, in which many processors inhabit the same silicon chip.

Olukotun said he hopes that by working directly with industrial supporters, the work of lab faculty and students will reach the marketplace. He emphasized that the lab is open to other companies joining the effort; none of the participants has exclusive intellectual property rights.

To enable the research, the team's hardware experts will develop a testbed called FARM, for Flexible Architecture Research Machine. The system, which Olukotun said will be ready by the end of the summer, will blend reprogrammable chips (field-programmable gate arrays) with conventional processors.

Olukotun said he hopes the effort will pave the way for programmers to easily create powerful new software for applications such as artificial intelligence and robotics, business data analysis, and virtual worlds and gaming. Among the lab's faculty are experts in each of these areas, including Pat Hanrahan, a professor of computer science and electrical engineering whose graphics rendering expertise has earned him two Academy Awards.

"We believe in driving applications," says Hanrahan. "Among the most interesting are immersive, richly graphical, virtual worlds, both because of the unique experiences for users as well as the challenges in building such demanding parallel applications."

Stanford has already developed parallelism technologies. Olukotun collaborated with Christos Kozyrakis, an assistant professor of computer science and electrical engineering, on "transactional memory," a more efficient way for processors to share memory. Dally, for his part, has developed new "streaming" techniques that move software instructions from a compiler to parallel processors much more efficiently than conventional supercomputers do.
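To make the transactional memory idea concrete, here is a minimal sketch from the programmer's point of view. It uses GCC's experimental -fgnu-tm language extension rather than anything from the Stanford project, and the bank-transfer scenario and all names in it are illustrative. Instead of acquiring locks in a careful order, the programmer marks a region atomic, and the underlying system detects conflicting updates and retries them.

```cpp
// Sketch of the transactional-memory concept using GCC's experimental
// extension (compile with: g++ -fgnu-tm -pthread tm_sketch.cpp).
// The bank-transfer example is hypothetical, not from the Stanford lab.
#include <iostream>
#include <thread>
#include <vector>

static int balance_a = 1000;
static int balance_b = 1000;

void transfer(int amount) {
    // Both updates commit together or not at all; if two concurrent
    // transactions touch the same data, one is rolled back and retried.
    __transaction_atomic {
        balance_a -= amount;
        balance_b += amount;
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([] {
            for (int j = 0; j < 10000; ++j) transfer(1);
        });
    for (auto& t : workers) t.join();

    // If the transactions were truly atomic, money is conserved.
    std::cout << balance_a + balance_b << "\n";  // expect 2000
}
```

The appeal is that the two balance updates succeed or fail as a unit without the programmer ever reasoning about lock ordering or deadlock, which is exactly the kind of burden the Stanford work aims to lift.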

"We have a history here of trying to close this gap between parallel hardware and software," Olukotun says. "It's not enough just to put a bunch of cores on a chip. You also have to make the job of translating software to use that parallelism easier."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
