

Researchers at MIT, U of Connecticut Develop Faster, More Efficient Data Caching Designs

Researchers at the Massachusetts Institute of Technology (MIT) and the University of Connecticut have developed new approaches to data caching that have been found to reduce execution time on massively multicore chips by six to 15 percent and produce energy savings of 13 to 25 percent.

According to information from MIT, "computer chips keep getting faster because transistors keep getting smaller. But the chips themselves are as big as ever, so data moving around the chip, and between chips and main memory, has to travel just as far." Chip designers work around this problem by caching frequently used data close to processors, but as the number of processor cores increases, the cores have to share data more frequently, creating bottlenecks in the on-chip communication network. The new approaches change the way data is cached, so processors can access it faster and more efficiently while saving energy in the process.

The researchers published their findings in two separate papers. They presented the first at the most recent ACM/IEEE International Symposium on Computer Architecture and will present the second at the IEEE International Symposium on High Performance Computer Architecture. Each paper presents a different data cache design, but the two designs could potentially work together to produce even greater benefits. "The two different designs seem to be working synergistically, which would indicate that the final result of combining the two would be better than the sum of the individual parts," said Nikos Hardavellas, an assistant professor of electrical engineering and computer science at Northwestern University, in a prepared statement.

On a multicore chip, data is stored either in a core's private cache or in the last-level cache (LLC), which is shared by all cores. The most recently accessed data sits in the private cache of the core that last used it, while data that has not been accessed recently is gradually pushed out of the private cache down to the LLC. When that data is accessed again, it moves back up to the private cache. As a result, data frequently swaps back and forth between the private caches and the LLC.
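To see why that swapping is costly, here is a minimal sketch in Python of the conventional two-level hierarchy the article describes: an LRU private cache per core that demotes cold lines to a shared LLC and pulls them back on reuse. The class, capacities, and access pattern are invented for illustration and are not taken from the papers.

```python
from collections import OrderedDict

PRIVATE_CAPACITY = 4  # lines per private cache; illustrative size only

class ConventionalHierarchy:
    def __init__(self, num_cores):
        self.private = [OrderedDict() for _ in range(num_cores)]  # LRU caches
        self.llc = {}        # shared last-level cache: address -> data
        self.migrations = 0  # moves between a private cache and the LLC

    def access(self, core_id, addr):
        cache = self.private[core_id]
        if addr in cache:
            cache.move_to_end(addr)  # hit: refresh LRU position
            return cache[addr]
        # Miss: promote the line from the LLC (or memory) to the private cache.
        data = self.llc.pop(addr, f"mem[{addr}]")
        self.migrations += 1
        cache[addr] = data
        if len(cache) > PRIVATE_CAPACITY:
            # Evict the least-recently-used line down to the shared LLC.
            old_addr, old_data = cache.popitem(last=False)
            self.llc[old_addr] = old_data
            self.migrations += 1
        return data

# A working set of 6 lines cycling through a 4-line private cache thrashes:
h = ConventionalHierarchy(num_cores=2)
for _ in range(3):
    for addr in range(6):
        h.access(0, addr)
print("private<->LLC migrations:", h.migrations)
```

Because the cyclic working set is larger than the private cache, every access misses under LRU and the same lines migrate between levels over and over, which is exactly the "fruitless swapping" the first paper targets.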

The first paper describes a solution to this problem. According to information released by MIT, "when an application's working set exceeds the private-cache capacity, the MIT researchers' chip would simply split it up between the private cache and the LLC. Data stored in either place would stay put, no matter how recently it's been requested, preventing a lot of fruitless swapping." And if two cores work on the same data, that shared data would always be stored in the LLC rather than in both cores' private caches, so the cores wouldn't have to communicate constantly to keep their cached copies consistent with each other.
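The sketch below illustrates that placement idea under stated assumptions: each line is assigned to one level up front and never migrates afterward, and data shared by more than one core lives only in the LLC, so there are no per-core copies to keep consistent. The "first come, first pinned" heuristic is invented for illustration; the paper's actual policy is more sophisticated.

```python
PRIVATE_CAPACITY = 4  # illustrative private-cache size, in lines

class SplitPlacement:
    def __init__(self, num_cores):
        self.private = [dict() for _ in range(num_cores)]  # one per core
        self.llc = {}                                      # shared LLC
        self.home = {}  # addr -> ("private", core_id) or ("llc", None)

    def place(self, addr, sharers):
        """Decide once where a line lives; it never migrates afterward."""
        if len(sharers) == 1 and len(self.private[sharers[0]]) < PRIVATE_CAPACITY:
            core = sharers[0]
            self.home[addr] = ("private", core)  # fits: pin in private cache
            self.private[core][addr] = f"mem[{addr}]"
        else:
            # Shared data, or overflow beyond private capacity: pin in the LLC.
            self.home[addr] = ("llc", None)
            self.llc[addr] = f"mem[{addr}]"

    def access(self, core_id, addr):
        level, owner = self.home[addr]
        if level == "private":
            # The owner hits locally; another core would read remotely, but
            # the line stays put, so no copies ever need to be reconciled.
            return self.private[owner][addr]
        return self.llc[addr]  # shared/overflow data: one trip to the LLC

p = SplitPlacement(num_cores=2)
p.place(0, sharers=[0])     # private to core 0: pinned in its cache
p.place(1, sharers=[0, 1])  # shared by both cores: pinned in the LLC
print(p.access(0, 0), p.access(1, 1))
```

Replaying the thrashing workload from the earlier sketch against this policy produces zero migrations: lines that overflow the private cache simply stay in the LLC instead of ping-ponging.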

The second paper improves data storage further for the case in which two cores access the same data but communicate infrequently. That shared data would still be stored in the LLC, but each core would "receive its own copy in a nearby chunk of the LLC, enabling much faster data access," according to MIT.
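A rough sketch of how such replication might work, assuming one LLC slice per core with slice i physically nearest core i; the read/write interface and the invalidate-replicas-on-write behavior are illustrative assumptions, not details from the paper.

```python
class ReplicatedLLC:
    def __init__(self, num_cores):
        # One LLC slice per core; slice i is the "nearby chunk" for core i.
        self.slices = [dict() for _ in range(num_cores)]

    def read(self, core_id, addr, home_slice):
        nearby = self.slices[core_id]
        if addr in nearby:
            return nearby[addr]               # fast hit in the local slice
        data = self.slices[home_slice][addr]  # fetch from the home slice
        nearby[addr] = data                   # replicate for future reads
        return data

    def write(self, addr, home_slice, value):
        # Writes update the home copy and drop stale replicas elsewhere,
        # which stays cheap when the sharers communicate infrequently.
        self.slices[home_slice][addr] = value
        for i, s in enumerate(self.slices):
            if i != home_slice and addr in s:
                del s[addr]

llc = ReplicatedLLC(num_cores=4)
llc.slices[0][100] = "shared value"    # home copy lives in slice 0
print(llc.read(3, 100, home_slice=0))  # first read: remote fetch + replicate
print(llc.read(3, 100, home_slice=0))  # second read: hits the nearby replica
```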

The lead author of both papers was George Kurian, a graduate student in MIT's Department of Electrical Engineering and Computer Science. His co-authors were his advisor, Srini Devadas, a professor of electrical engineering and computer science at MIT, and Omer Khan, an assistant professor of electrical and computer engineering at the University of Connecticut.

About the Author

Leila Meyer is a technology writer based in British Columbia. She can be reached at [email protected].
