Dunwoody College of Tech Taps Private Partner To Help Expand 3D Printing Curriculum

Minnesota's Dunwoody College of Technology is expanding its additive manufacturing, or 3D printing, curriculum with the help of a private partner.

The Minneapolis-based institution has been at the forefront of manufacturing education for more than 100 years. With additive manufacturing becoming a more mainstream technique, Dunwoody decided to place greater emphasis on it in its programs.

The college, which is using Fortus 400mc and Fortus 250mc 3D production systems in its programs, has teamed with the manufacturer of those machines, Stratasys, to expand the curriculum and offer a certificate program in additive manufacturing.

"I see additive manufacturing as an essential partner to the traditional manufacturing process," said E.J. Daigle, the dean of the school's Robotics and Manufacturing Department, in a prepared statement. "Not only do we want to give our students the tools to intertwine both, but we saw a need for businesses in the industry to further their education. Stratasys has been the ideal partner for the development of our courses and curriculum."

Founded in 1914, Dunwoody College of Technology is a private, nonprofit institution serving approximately 1,400 students with about 140 faculty members. The college offers associate's and bachelor's degrees, as well as certificate programs for students and professionals.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
