HPC Server 2008 RC1 Coming This Month

At the International Supercomputing Conference being held this week in Germany, Microsoft announced that it will roll out the first release candidate (RC1) of Windows HPC Server 2008, its high-performance computing platform, in the last week of June. A system at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, running a beta of HPC Server 2008 on Dell PowerEdge hardware, debuted today at No. 23 on the June Top500 list of supercomputing sites.

As we reported previously, "hundreds" of universities have been beta testing HPC Server 2008, the successor to Microsoft's Compute Cluster Server 2003 (CCS 2003). Of these, the NCSA system, called Abe, ranks highest so far on the Top500 list, turning in 68.48 teraflops of maximal LINPACK performance (89.59 teraflops theoretical peak). It uses 2,400 quad-core Intel Xeon 2.3 GHz processors (1,200 dual-socket PowerEdge 1955 blades, for 9,600 cores total), with 4 GB of memory per processor, for 9,600 GB of total memory.
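Abe's quoted figures are internally consistent, and they also imply a LINPACK efficiency of roughly 76 percent (measured Rmax divided by theoretical Rpeak, the ratio Top500 listings report). A quick back-of-the-envelope check, using only the numbers cited above:

```python
# Figures quoted for NCSA's Abe cluster (beta Windows HPC Server 2008).
nodes = 1200             # dual-socket Dell PowerEdge 1955 blades
sockets_per_node = 2
cores_per_cpu = 4        # quad-core Intel Xeon, 2.3 GHz
gb_per_cpu = 4           # 4 GB of memory per processor

cpus = nodes * sockets_per_node      # 2,400 processors
cores = cpus * cores_per_cpu         # 9,600 cores
memory_gb = cpus * gb_per_cpu        # 9,600 GB total memory

rmax = 68.48    # measured LINPACK performance, teraflops
rpeak = 89.59   # theoretical peak, teraflops
efficiency = rmax / rpeak            # fraction of peak actually sustained

print(f"{cpus} CPUs, {cores} cores, {memory_gb} GB RAM")
print(f"LINPACK efficiency: {efficiency:.1%}")
# → 2400 CPUs, 9600 cores, 9600 GB RAM
# → LINPACK efficiency: 76.4%
```

That roughly 76 percent efficiency is the "among the highest we've seen for this class of machine" figure Pennington alludes to below.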

"Our experience with Windows HPC Server 2008 has been impressive," said Robert Pennington, deputy director of the NCSA, in a statement released to coincide with the announcement. "Deploying it was much easier than we expected, and the performance results have surpassed our expectations. When we deployed Windows on our cluster, which has more than 1,000 nodes, we went from bare metal to running the LINPACK benchmark programs in just four hours. The performance of Windows HPC Server 2008 has yielded efficiencies that are among the highest we've seen for this class of machine."

The second-largest HPC Server 2008 (beta) cluster, at Umeå University in Sweden, was announced earlier this week. That system, called Akka, comprises mixed hardware and software: 672 IBM HS21 XM blades with quad-core Xeon processors, along with IBM Cell BE blades and Power6 blades, producing 46.04 teraflops of maximal LINPACK performance (54 teraflops theoretical peak) on HPC Server 2008. A dual-boot system, Akka also runs Linux. It came in at No. 39 on the new Top500 list.

Microsoft said a download of Windows HPC Server 2008 will be available at the end of June. Further information can be found here.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/ .
