InfiniBand Goes the Distance

InfiniBand beats TCP/IP over a dedicated WAN, government researchers find

Researchers at the Energy Department's Oak Ridge National Laboratory have shown that InfiniBand can be used to transport large data sets over a dedicated network thousands of miles in length, with a throughput unmatched by high-speed TCP/IP connections.

In a test setup, researchers were able to achieve an average throughput of 7.34 gigabits per second (Gbps) between two machines at either end of the 8,600-mile optical link. In contrast, the throughput of such traffic using a tweaked high-throughput version of the Transmission Control Protocol, called HTCP, was 1.79 Gbps at best.

Oak Ridge researcher Nageswara Rao presented a paper on the group's work, "Wide-Area Performance Profiling of 10GigE and InfiniBand Technologies," at the SC08 conference, held in Austin, Texas, last month.

Increasingly, the Energy Department labs are finding they need to move large files over long distances. Within the next few months, for instance, Europe's Large Hadron Collider will start operation, generating petabytes of data that must be carried across the Atlantic Ocean to the Energy labs and academic institutions in the United States.
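To put those figures in perspective, here is a rough back-of-the-envelope calculation, not taken from the paper, of how long a single petabyte would take to move at the average rates measured in the test, assuming the transfer is sustained without interruption:

```python
# Illustrative arithmetic only (assumed sustained transfer, decimal petabyte):
# how long one petabyte takes at the average rates reported in the test.

PETABYTE_BITS = 8 * 10**15  # one petabyte expressed in bits

for label, gbps in [("InfiniBand over the WAN", 7.34), ("Tuned HTCP", 1.79)]:
    seconds = PETABYTE_BITS / (gbps * 10**9)
    print(f"{label}: {seconds / 86400:.1f} days per petabyte")

# Prints roughly 12.6 days at the InfiniBand rate versus 51.7 days with HTCP.
```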

Rao noted, however, that difficulties abound with high-speed transfers of large data sets over WANs, including packet conversion from storage networks and the complex task of TCP/IP tuning. "The task of sustaining end-to-end throughput ... over thousands of miles still remains complex," the researchers noted in the paper.

While InfiniBand interconnects are widely used within high-performance computer systems, they aren't usually deployed to carry traffic across long distances. Usually, traffic is converted at the edge of each end-point into TCP/IP packets, sent over the WAN by 10 Gigabit Ethernet or some other protocol, and converted back to InfiniBand at the other end. A few vendors, however, such as Obsidian Research and Network Equipment Technologies, have started offering InfiniBand over Wide-Area (IBoWA) devices, which allow the traffic to remain InfiniBand for the whole journey.

Oak Ridge wanted to test how well these long-distance InfiniBand connections would work in comparison with specialized forms of TCP/IP over 10GigE.

Using Energy's experimental circuit-switched testbed network, UltraScienceNet, the researchers set up a 10 Gigabit optical link that stretched 8,600 miles round-trip between Oak Ridge (outside Knoxville, Tenn.) and Sunnyvale, Calif., via Atlanta, Chicago, and Seattle.

At each end-point, they set up Obsidian Research's Longbow XR InfiniBand switches, which run IBoWA. The network itself was a dual SONET OC-192 circuit, which could support throughput of up to 9.6 Gbps.
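Those link parameters also hint at why TCP/IP tuning over such distances is so demanding. The sketch below gives a rough, illustrative estimate of the bandwidth-delay product such a circuit presents to TCP; the fiber signal speed and propagation-only delay are assumptions for illustration, not measurements from the paper:

```python
# Rough bandwidth-delay estimate (assumptions noted below, not paper figures).
# Counts propagation delay only; switching and queuing raise the real RTT.

LINK_GBPS = 9.6                        # usable OC-192 capacity of the circuit
ROUND_TRIP_MILES = 8_600               # round-trip fiber distance of the testbed
FIBER_MILES_PER_SEC = 0.68 * 186_282   # assumed signal speed in fiber (~0.68c)

rtt_sec = ROUND_TRIP_MILES / FIBER_MILES_PER_SEC
bdp_bytes = LINK_GBPS * 1e9 * rtt_sec / 8

print(f"Estimated RTT: {rtt_sec * 1000:.0f} ms")               # about 68 ms
print(f"Bandwidth-delay product: {bdp_bytes / 2**20:.0f} MB")  # about 78 MB

# TCP windows and buffers must keep that much unacknowledged data in flight
# just to fill the pipe, before loss or competing traffic enters the picture.
```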

Overall, the researchers found that InfiniBand worked well at transferring large files across great distances over a dedicated network. For shorter distances, HTCP ruled: it could convey 9.21 Gbps over 0.2 miles, compared with 7.48 Gbps for InfiniBand. But as the distance between the two end-points grew, HTCP's performance deteriorated. In contrast, InfiniBand throughput remained largely steady as the mileage increased.
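That divergence is consistent with how loss-limited TCP behaves in general. As a purely illustrative sketch, the classic Mathis et al. approximation with an assumed loss rate, not the model or measurements from the paper, shows achievable TCP throughput falling in inverse proportion to round-trip time:

```python
# Illustrative only: the Mathis et al. approximation for loss-limited TCP
# throughput, rate ~ C * MSS / (RTT * sqrt(loss)). The loss rate here is an
# assumption chosen for illustration, not a figure from the Oak Ridge tests.

from math import sqrt

MSS_BYTES = 1460      # typical Ethernet-sized TCP segment
LOSS_RATE = 1e-7      # assumed packet-loss probability
C = sqrt(3 / 2)       # constant from the Mathis model

for rtt_ms in (1, 10, 50, 70):
    rate_bps = C * (MSS_BYTES * 8) / ((rtt_ms / 1000) * sqrt(LOSS_RATE))
    print(f"RTT {rtt_ms:3d} ms -> about {rate_bps / 1e9:.2f} Gbps")

# For a fixed loss rate, the achievable rate drops as RTT grows, which is the
# kind of effect the HTCP tuning in the test was meant to mitigate.
```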

Rao did note that HTCP was more resilient on networks that carry additional traffic. This is not surprising, as TCP/IP was designed for shared networks, those carrying traffic among multiple endpoints. Tweaking TCP/IP to take full advantage of a dedicated network, however, takes considerable work and still may not produce optimal results, Rao said.

In conclusion, the researchers found that InfiniBand "somewhat surprisingly offer[s] a potential alternate solution for wide-area data transport."

Both the Defense Department and the Energy Department's High Performance Networking Program supported the research.

About the Author

Joab Jackson is the chief technology editor of Government Computer News. You can contact Joab at [email protected].
