

Mellanox InfiniBand Interconnects Nodes of Purdue's Latest Supercomputer

Forget about gigabit performance. Purdue University needs 40 gigabit-per-second interconnectivity for its newest computer cluster, Conte. Put into full production in October, the supercomputer was recently ranked No. 33 on the Top500 list of the world's fastest supercomputers. Conte was built with 580 HP ProLiant SL250 Generation 8 servers, each with two Intel Xeon processors and two Intel Xeon Phi coprocessors. Connecting the nodes of the cluster is Mellanox Technologies' FDR-10 40 Gbps InfiniBand.

FDR-10 InfiniBand is the newest generation of the InfiniBand switched-fabric communications link. Used primarily in high-performance computing, the specification defines the connections among server and storage nodes. Mellanox's solution includes Connect-IB adapters, SwitchX-2-based switches, and cables.

"The increasing complexity of science and engineering research at Purdue is driving a need for increasingly faster and scalable computational resources," said Michael Shuey, Purdue's HPC system manager. "Mellanox's FDR InfiniBand solutions, and in particular their Connect-IB adapters, allow MPI codes to scale more readily than our previous systems. This enables more detailed simulations and helps empower Purdue scientists to push the envelope on their research in weather, bioscience, materials engineering, and more."

Projects Purdue faculty are pursuing include making jets quieter by modeling exhaust flow with as many as one billion data points, creating high-resolution images of the structure of viruses at the atomic level, and making batteries smaller, lighter, and longer-lasting through atom-scale models.

In other InfiniBand news, Microsoft has joined the trade organization that defines the InfiniBand specification and related standards.

The Redmond company has become a Steering Committee member of the InfiniBand Trade Association, alongside IBM, Intel, Oracle, and 39 other companies. Currently, much of the committee's work focuses on building out InfiniBand-related specifications for input/output architectures such as Remote Direct Memory Access (RDMA) and RDMA over Converged Ethernet (RoCE). The organization performs compliance and interoperability testing of commercial products and promotes RDMA technologies.
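As a rough illustration of the RDMA programming model these specifications cover, the sketch below uses the libibverbs API, which is common on InfiniBand systems but not specific to any IBTA member's product, to open an RDMA device and register a memory region. Registration is the step that lets the adapter move data directly to and from application buffers without copying through the kernel; queue-pair setup and the actual RDMA transfers are omitted for brevity, so this is a partial sketch rather than a working application.

/*
 * Partial sketch of the first steps of an RDMA program using libibverbs.
 * It opens the first RDMA-capable device, allocates a protection domain,
 * and registers a buffer so the adapter can read and write it directly
 * (zero-copy, kernel bypass). A real application would go on to create a
 * queue pair and exchange keys and addresses with a peer; most error
 * checks are trimmed for brevity.
 *
 * Build (assuming libibverbs is installed): gcc rdma_sketch.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    /* Open the first device (for example, a Mellanox HCA). */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open device\n");
        return 1;
    }
    printf("Opened device: %s\n", ibv_get_device_name(devices[0]));

    /* A protection domain groups resources that may be used together. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the adapter can access it directly. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("Registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Cleanup. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}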

"Microsoft's membership comes at an ideal time as mainstream data centers continue to adopt advanced technologies previously only deployed in high-performance computing applications," said Mark Atkins, chairman of the committee. "Microsoft will bring a strong enterprise data center perspective to the organization as well as viewpoints on deployments of RDMA at scale for storage and cloud applications."

Added Microsoft Windows Azure Distinguished Engineer Yousef Khalidi, "Increasing numbers of server CPUs sharing a network link, coupled with ever-increasing workload virtualization, are making unprecedented demands on modern datacenter networks. RDMA networking is a key part of the solution." He said that he expects Microsoft's participation in the trade association to "help promote RDMA and further drive specifications and standards to enable performance gains and reduce networking overhead on the CPUs in large, mainstream data centers."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
