Lustre 2.10 Adds Progressive File Layouts, Project Quota Support

Lustre 2.10 was quietly released last week, adding several new features to the open source file system for high-performance computing environments.

Lustre, which has its roots in a project out of Carnegie Mellon University, is a parallel file system designed to handle petabytes of data, deliver throughput in the hundreds of gigabytes per second, and scale to thousands of clients.

The latest release adds support for progressive file layouts (PFL), which allow "file layouts to automatically adjust as size of files grow thus optimizing performance for diverse workloads," according to the project. Administrators can "set a specific progressive layout for new files created in the filesystem," and users can "specify progressive layouts on their own files for their specific needs."
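As a rough illustration of how a progressive layout is composed (the directory path, component boundaries, and stripe counts below are hypothetical, not taken from the release notes), the `lfs setstripe` tool accepts `-E` to mark where each layout component ends and `-c` to set its stripe count:

```shell
# Hypothetical example: files created under /mnt/lustre/pfl_dir get
# three layout components that widen as the file grows:
#   0 - 16 MiB    on a single OST          (-E 16M  -c 1)
#   16 - 256 MiB  striped across 4 OSTs    (-E 256M -c 4)
#   256 MiB - EOF striped across all OSTs  (-E -1   -c -1)
lfs setstripe -E 16M -c 1 -E 256M -c 4 -E -1 -c -1 /mnt/lustre/pfl_dir

# Inspect the layout that new files in the directory will inherit.
lfs getstripe /mnt/lustre/pfl_dir
```

Small files stay on one storage target, avoiding striping overhead, while large files automatically spread across more targets for bandwidth.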

Other new features include:

  • Support for project quotas (in addition to user and group quotas);
  • Multi-rail support in Lustre Networking (LNet) "to utilize multiple network interfaces on a node in parallel";
  • Improved snapshot capabilities; and
  • NRS Delay Policy, which "simulates high server load as a way of validating the resilience of Lustre under load."
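For project quotas specifically, the workflow can be sketched with the `lfs project` and `lfs setquota` commands (the project ID, limits, and mount point here are illustrative assumptions, not values from the release):

```shell
# Tag a directory tree with project ID 1001; -s sets the inherit flag
# so new files pick up the project ID, and -r recurses into subdirs.
lfs project -s -p 1001 -r /mnt/lustre/projA

# Give project 1001 a 10 GiB soft / 11 GiB hard block limit,
# independent of any user or group quotas.
lfs setquota -p 1001 -b 10G -B 11G /mnt/lustre

# Report current usage and limits for the project.
lfs quota -p 1001 /mnt/lustre
```

Unlike user and group quotas, this lets sites account for storage by project or department even when many users share the same directory tree.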

Complete details about the new release can be found on the Lustre wiki. The next major release, 2.11, is expected in January 2018.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

