Researchers Develop Methods to Detect Hacking of 3D Printers

A team of researchers from Rutgers University-New Brunswick and the Georgia Institute of Technology has developed three strategies for determining if 3D printers have been hacked.

"They will be attractive targets because 3D-printed objects and parts are used in critical infrastructures around the world, and cyberattacks may cause failures in health care, transportation, robotics, aviation and space," said Saman Aliari Zonouz, coauthor of the study and associate professor in the electrical and computer engineering department at Rutgers University-New Brunswick, in a report on the research.

Hackers could conceivably introduce defects into printed objects that are too small to detect visually but nevertheless compromise the integrity of the part, with potentially disastrous consequences. Many organizations outsource their 3D printing rather than buying expensive printers themselves, which centralizes and perhaps exacerbates the threat.

"The results could be devastating and you would have no way of tracing where the problem came from," said Mehdi Javanmard, study coauthor and assistant professor in the electrical and computer engineering department at Rutgers-New Bruswick, in a prepared satement.

"While anti-hacking software is essential, it's never 100 percent safe against cyberattacks," the university research news organization Futurity recently reported. "So the researchers looked at the physical aspects of 3D printers."

The team eventually developed three ways to detect tampering, either during or after printing an object:

  • Compare the sound of the printer as it operates to a recording of a printer creating a print known to be correct (see the sketch following this list);
  • Compare the physical movement of the printer's parts as it works to the movements of a printer as it produces a correct print; and
  • Mix gold nanorods with the printing filament, then use Raman spectroscopy and computed tomography to make sure the nanorods are dispersed throughout the object as expected.
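
To illustrate the general idea behind the first approach, here is a rough Python sketch of an acoustic comparison. It is not the researchers' implementation; the file names, the cosine-similarity measure and the 0.95 threshold are assumptions chosen for illustration. It checks a recording of a print job against a reference recording of a known-good print:

    # Illustrative sketch only: comparing a printer's acoustic signature
    # against a reference recording of a known-good print. File names and
    # the similarity threshold are hypothetical.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def acoustic_signature(path, nperseg=1024):
        """Load a WAV recording and return its averaged frequency spectrum."""
        rate, samples = wavfile.read(path)
        if samples.ndim > 1:                      # mix stereo down to mono
            samples = samples.mean(axis=1)
        _, _, spec = spectrogram(samples.astype(float), fs=rate, nperseg=nperseg)
        return spec.mean(axis=1)                  # average power per frequency bin

    def prints_match(reference_wav, observed_wav, threshold=0.95):
        """Flag a print as suspect when its spectrum diverges from the reference."""
        ref = acoustic_signature(reference_wav)
        obs = acoustic_signature(observed_wav)
        n = min(len(ref), len(obs))
        ref, obs = ref[:n], obs[:n]
        # Cosine similarity between the two averaged spectra.
        similarity = np.dot(ref, obs) / (np.linalg.norm(ref) * np.linalg.norm(obs))
        return similarity >= threshold

    # Example usage (hypothetical file names):
    # if not prints_match("known_good_print.wav", "todays_print.wav"):
    #     print("Acoustic signature deviates from the reference -- inspect the part.")

A real detector would also have to account for timing, printer model and ambient noise; this sketch only compares averaged frequency spectra.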

In the future, the group plans to find new ways to attack the printers so they can develop new defenses, transfer their methods to industry and refine the techniques they've already developed.

"These 3D printed components will be going into people, aircraft and critical infrastructure systems," said Raheem Beyah, a professor and associate chair in Georgia Tech's School of Electrical and Computer Engineering. "Malicious software installed in the printer or control computer could compromise the production process. We need to make sure that these components are produced to specification and not affected by malicious actors or unscrupulous producers."

The full study is available as a PDF here.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
