Internet2: A New Network for a 25th Anniversary

A Q&A with Rob Vietzke

With Internet2 in its 25th year, it seems fitting that this is the moment the organization will transition to a new network. After months of preparation and work (by some measures, years), Internet2's new network is set to be rolled out in October. Here, CT speaks with Internet2 VP of Network Services Rob Vietzke for an update on the Next Generation Infrastructure to be launched this coming month.

[Photo: Two software engineers talking next to a big screen in a modern monitoring office with a live analysis feed and charts.]

"This is the fifth generation of infrastructure for the Internet2 community, and we are delivering the most comprehensive set of network improvements in Internet2's history." —Rob Vietzke

Mary Grush: Given that the new network rollout is about to happen, how does it fit in with the 25th anniversary of Internet2?

Rob Vietzke: The timing is somewhat serendipitous. The majority of the work on the Internet2 Next Generation Infrastructure will be finished in early October, when we celebrate our 25th anniversary. This is the fifth generation of infrastructure for the Internet2 community, and we are delivering the most comprehensive set of network improvements in Internet2's history.

Grush: Can you describe the process of replacing an already advanced, powerful network with a new one? How do you establish organizing principles for such a large transition?

Vietzke: For us, the process began with the Next Generation Infrastructure project, which we started almost five years ago. We asked the community to tell us what was important. As a result, there were about 50 papers published, ranging from software enablement, to economics and agility, to greater support for data-intensive sciences and global science collaborations.

Those papers brought out five key principles: supporting access to the cloud for research and administration, supporting teams with software enablement, supporting the campus and regional ecosystem on an edge-to-edge basis, resetting the economies of scale to enable reinvestment, and supporting data-intensive research. Those five things became the template through which we did procurements and the basis for the way we designed the infrastructure.

That design and procurement phase ended about a year and a half ago. Since then, our attention has focused mainly on getting the infrastructure deployed, and on moving the traffic from the "old" network to the new network as we build the new network in parallel with the old.

Grush: How does Internet2 handle change of this scope, especially considering risk?

Vietzke: I've always thought of Internet2 as a really dynamic and even aggressive leader of new technologies and strategies. Internet2 is willing to take some risk in order to move forward on behalf of our broad community in ways that individual institutions can't.

Change is constant in what we do, and in many ways, what we are trying to do is orchestrate the biggest return on investment with the greatest benefits for our community in the context of limited resources. We focus on moving quickly and being agile — all for the purpose of serving our community and moving forward with the community's values.

Grush: What are the main differences between the current, "old" network and the new network? How would you describe the new network infrastructure?

Vietzke: Let's talk about the numbers first, though I think the real story is the new services and the ways they will enable workflows for users. Internet2 has about 15,500 miles of fiber optic cable between all the big cities in the U.S., and we connect more than 40 state and regional networks with each other and with our international partners. On the current network — soon to be the "old" network — we've traditionally had about 40 sites around the country, where we have maintained racks of equipment supporting Internet2.

Two notable things are happening with the new physical network. We'll have close to the same number of sites across the country, but we're going from about 40 core nodes to 95, or a little more than two per site. And we're going from about one terabit of capacity per site to something more like 100 terabits of capacity per site. That's a massive number of 400G ports and capacity pre-provisioned to the sites.

Grush: Is that going to mean 400G access through all the nodes?

Vietzke: Yes, we're replacing all the backbone links between cities — 95 of them, now — and all of them are going up from 100G to 400G.

To put that capacity in perspective: with the previous generation of the network, the top speed we had been able to achieve on our intercity links was 100G.

Networking has different layers to it. With the new generation of the network, in terms of the raw capacity between the cities, we can technically go up to 800G on some of those links. But generally, if we can get 800G, we'll break that down into two 400G connections.
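
[Editor's note: To put those capacity figures in rough perspective, here is a minimal, illustrative calculation based only on the approximate numbers Vietzke cites; the exact per-site provisioning is not specified in the interview.]

```python
# Rough, illustrative arithmetic for the per-site capacity jump described above.
TBPS_PER_SITE_OLD = 1      # ~1 terabit of capacity per site on the current network
TBPS_PER_SITE_NEW = 100    # ~100 terabits of capacity per site on the new network
PORT_SPEED_GBPS = 400      # new backbone links and ports run at 400G

ports_worth_per_site = TBPS_PER_SITE_NEW * 1000 / PORT_SPEED_GBPS
print(f"~{ports_worth_per_site:.0f} x 400G ports' worth of capacity per site")       # ~250
print(f"~{TBPS_PER_SITE_NEW // TBPS_PER_SITE_OLD}x increase over the current network")  # ~100x
```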

Grush: How does new equipment factor into this kind of capacity and improved performance?

Vietzke: Part of that increased capacity reflects progress in the industry, which now offers space-saving and more powerful hardware. The new hardware and the speeds achieved are really amazing.

And there's a huge environmental impact that should be mentioned here, too. We'll be going from multiple large racks at each site, down to a single rack per node, with powerful yet compact equipment — which will give us about 70 percent savings in energy costs. The economies of scale are significant, but it's also the right thing to do for the environment.

Grush: What are the new types of services, capabilities, or other exceptional efficiencies this new network will enable? Who will benefit most, or be the most excited, if you will, by the potential the new network presents?

Vietzke: There are two tightly interlocked aspects of doing an upgrade like this. One is the new services that are enabled for everyone, and the other is the greatly improved efficiencies. These stories are best told together.

Many of your readers are going to be interested in how our new network infrastructure can speed the time to science. Accelerating science happens on different fronts. Lessening the amount of time it takes an administrative group to set up a new cloud application in support of a research project is one example. There will be many aspects of the new network that will simply make it easier for IT professionals and researchers to do their work.

As another example of improved efficiency, it used to be that a researcher would call the IT department for help in getting connectivity to the cloud, and then the campus IT people would call the regional network, who would eventually contact us at Internet2… So, we'd end up with a whole series of phone calls in order to get connectivity to a commercial cloud provider for just one researcher.

In our new infrastructure, there is a Web-based portal with programming interfaces where either the campus IT support person or the researcher herself can go in directly and create the capacity to the cloud needed for a particular project. This greatly reduces the time it takes for researchers to do their work and cuts down on phone calls and coordination activity, better serving time to science.
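
[Editor's note: The interview does not name a specific API, so the sketch below is purely hypothetical; the endpoint, field names, and function are stand-ins meant only to illustrate what provisioning cloud capacity through a portal's programming interfaces might look like.]

```python
import requests

# Hypothetical endpoint for illustration only; not Internet2's actual API.
PORTAL_API = "https://portal.example.net/api/v1/cloud-connections"

def request_cloud_capacity(token, project, provider, bandwidth_mbps):
    """Ask a provisioning portal for a private connection to a cloud provider."""
    resp = requests.post(
        PORTAL_API,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "project": project,              # e.g., a research project identifier
            "provider": provider,            # e.g., "aws", "azure", or "gcp"
            "bandwidth_mbps": bandwidth_mbps,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., a connection ID and its provisioning state
```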

Another important point: when you do a very large network upgrade like this, you get to reset the economies of scale. This is appealing for campus IT groups and administrators who are always concerned about containing costs and trying to do a greater amount of work for the same dollar.

So being able to make work easier for people and resetting the economies of scale are two really significant benefits of the new network. These improvements are appreciated both by researchers and administrators.

Grush: What is changing for researchers and administrators who handle greater and greater amounts of data — those for whom data is increasingly important to their work? How will you plan for them?

Vietzke: Every year, we see that there's more and more data. Take, for just one example, the pandemic and the massive changes in traffic patterns and demands for data transfer as our campuses pivoted to remote learning (and back). There was so much more pressure on our abilities to be dynamic and agile, and on our capability for scaling as things changed. The takeaway is that, while we can be prepared to move more data, we also have to plan to be agile in the way we move capacity to emerging demands. That agility also supports the time to science needs we talked about earlier.

We have seen — just in the past couple of weeks — eight of the biggest data days in Internet2's history. When the current network was designed, in 2010, we moved 104 petabytes in a year's time. Now, we're doing that in about every 10 days. If we see consistent growth along these lines, we have to ask: What will a member institution need from Internet2 in terms of connectivity and data throughput, going forward?
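
[Editor's note: A rough, illustrative growth calculation based on the two figures Vietzke quotes; the 2010-to-2021 span and the assumption of steady growth are the editor's, not Internet2's.]

```python
# 104 PB took a year to move in 2010; the same volume now moves in ~10 days.
days_to_move_same_volume_now = 10
growth_factor = 365 / days_to_move_same_volume_now        # ~36x the 2010 annual volume

years = 2021 - 2010
implied_annual_growth = growth_factor ** (1 / years) - 1  # assumes steady year-over-year growth

print(f"~{growth_factor:.0f}x the 2010 annual volume")
print(f"~{implied_annual_growth:.0%} implied annual growth")  # roughly 39% per year
```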

Grush: What’s changing most about the way the new network will interact with the cloud?

Vietzke: One of the things we've noticed in the past 10 years is that early on, campuses that had been interested in getting connectivity to the cloud were viewing that connectivity as a static and monolithic configuration. More recently, we are seeing that individual projects now require private connectivity to cloud services. When you move from one connection to the cloud for a campus or region to many individual projects each having their own private connections to the cloud, it changes the way you must think about security, capacity, budgeting, and several other issues. So we're watching the trend towards smaller, private, discrete, and secure cloud connections, and we're anticipating even more of that dynamic as we work on the software portal and software enablement for everything from the individual researcher's needs to research administration applications.

Grush: Is Cloud Connect going to be a part of that, in the new network?

Vietzke: Yes, though Cloud Connect is something Internet2 has been doing for a few years now. It started out with a small number of connections to commercial cloud providers. But in the past couple of years we've seen, interestingly, multiple connections per campus at universities. So via Cloud Connect, we've added a self-service portal that allows campuses to control their own cloud access directly.

Grush: How will software enablement make provisioning easier, in general? Is that going to happen right away, or over time?

Vietzke: You might think of this network transition in two ways. First, we are replacing hardware to gain efficiencies and better performance or capacity. Second, and this is an area that I think excites a lot of people, with the new network we are significantly updating our software enablement layer. That is, and will be, a continuous upgrade over time, but with the new network you'll see that the software enablement layer already fits in much better with the workflows that researchers and administrators have on their campuses.

Let's look at Cloud Connect again, as an example of change at the software enablement layer. Cloud Connect is updated and improved four times a year — not every five years. It's a good example of continuous improvement in the software enablement layer.

The software enablement layer brings some other important changes as well: It changes the infrastructure from a separate, fixed resource to something that's much more dynamic and can be more tightly coupled to the workflow.

So, for instance, Cloud Connect isn't necessarily something you set up and then just leave. Some scientists are using Cloud Connect dynamically, where, as they run their data transfer software, it turns up the connection, transfers the data, and then turns the connection down. That improves the cost, it improves the security, and it improves transparency.
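
[Editor's note: A minimal sketch of the turn-up, transfer, turn-down pattern described above. The portal object, its provision/release calls, and the transfer step are hypothetical stand-ins, not a real Internet2 interface.]

```python
from contextlib import contextmanager

@contextmanager
def temporary_cloud_link(portal, project, bandwidth_mbps):
    """Turn a cloud connection up for the duration of a transfer, then turn it down."""
    conn = portal.provision(project=project, bandwidth_mbps=bandwidth_mbps)  # turn up
    try:
        yield conn
    finally:
        portal.release(conn["id"])  # turn down, so capacity (and cost) ends with the job

# Hypothetical usage:
# with temporary_cloud_link(portal, "genomics-run-42", 10_000) as conn:
#     transfer_dataset(conn, src="campus-dtn", dst="cloud-bucket")
```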

The more software enablement we do, the closer the control gets to the end users and the administrators. In our example, the smaller and more discrete nature of Cloud Connect also improves the ability to secure it. All in all, networking gets much more dynamic once we have that software layer in place.

Grush: How does Internet2 know or anticipate needed changes? How does the community help guide the network changes?

Vietzke: Growth is constant, and the pressure for efficiency is constant. In the networking world that's something we always have to pay attention to, and we use projects like network upgrades to help us reset and catch up.

In terms of strategies for the future and what's going to be important, we learn a lot from our community. We try to watch the early adopters and learn from their successes, and of course we watch adoption trends. We have advisory groups throughout Internet2, including an Architecture, Operations, and Policy Advisory Council and a Network Advisory Council. And we have several working groups to try things out at the project level. From all these voices, we can learn where the community's energy is and try to discern what's going to be important in the coming years.

Grush: Though Internet2 is a national research and education network, does the trend towards globalization factor into the new network?

Vietzke: The global aspects of science and education are evolving very quickly. This means we might be extending access to compute and cloud resources, for research or education programs that may be operating almost anywhere in the world. Currently, we collaborate with more than 100 research and education networks around the globe. And we have working groups that explore, in a global context, many of the same design, scaling, software enablement, and automation capabilities that we work with on a national basis.

Another aspect of globalization in education is the movement of students to institutions all around the world, for different reasons, from pandemic travel restrictions to research field work. Supporting the global movement of students may be getting more complex, but at the same time our network capabilities are increasingly fit to meet those challenges.

Grush: I know that Internet2 is always looking forward, and of course it will be busy, even after the network transition, working to stay ahead of the trends. But from your own point of view, once the new network is fully deployed and operational, is there some area you'd particularly like to see Internet2 put more focus on?

Vietzke: One thing we're already focusing on, that I'd like to see more of, is furthering democratization of access. In what now is beginning to seem like the distant past, Internet2 access was originally designed for universities with very high research activity that could put in place some rather imposing infrastructure.

But the flexible and agile capabilities of our new network infrastructure, along with the recognition of the need to support brilliant minds conducting research at smaller colleges and historically underserved universities, are together creating greater demands for access — democratized access.

We are exploring ways to create a working environment for everyone who wants to access the network infrastructure through the software layer. We already have key elements in place, such as InCommon and the NET+ program, that will enable collaboration and access to the advanced infrastructure that Internet2 provides.

The Next Generation Infrastructure that Internet2 is about to launch during its 25th anniversary can help solve some of the world's most difficult research problems. But that takes collaboration and teamwork that's informed by a diverse community. We've seen this teamwork in all the efforts to bring the Next Generation Infrastructure to this point, and we're excited to continue the collaboration as we begin to realize its benefits.

There is much to look forward to these next 25 years, both with the acceleration of science and the democratization of access.

[Editor's note: Photo credit: Gorodenkoff Video Productions / Shutterstock.]
