Researchers Develop Methods to Detect Hacking of 3D Printers

A team of researchers from Rutgers University-New Brunswick and the Georgia Institute of Technology has developed three strategies for determining if 3D printers have been hacked.

"They will be attractive targets because 3D-printed objects and parts are used in critical infrastructures around the world, and cyberattacks may cause failures in health care, transportation, robotics, aviation and space," said Saman Aliari Zonouz, coauthor of the study and associate professor in the electrical and computer engineering department at Rutgers University-New Brunswick, in a report on the research.

Hackers could conceivably insert tiny defects in printed objects, too small to detect visually, but that nevertheless compromise the integrity of the piece with potentially disastrous consequences. Many organizations outsource their 3D printing needs rather than buying expensive printers themselves, which centralizes and, perhaps, exacerbates the threat.

"The results could be devastating and you would have no way of tracing where the problem came from," said Mehdi Javanmard, study coauthor and assistant professor in the electrical and computer engineering department at Rutgers-New Brunswick, in a prepared statement.

"While anti-hacking software is essential, it's never 100 percent safe against cyberattacks," the university research news organization Futurity recently reported. "So the researchers looked at the physical aspects of 3D printers."

The team eventually developed three ways to detect tampering, either during or after printing an object:

  • Compare the sound of the printer as it operates to a recording of a printer creating a print known to be correct;
  • Compare the physical movement of the printer's parts as it works to the movements of a printer as it produces a correct print; and
  • Mix gold nanorods with the printing filament, then use Raman spectroscopy and computed tomography to make sure the nanorods are dispersed throughout the object as expected.
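The first method, acoustic comparison, can be illustrated with a minimal sketch. The study's actual signal-processing pipeline is not described in this article, so the approach below is an assumption: it reduces each audio capture to a coarse spectral fingerprint and flags a print run whose fingerprint diverges from a known-good reference. The function names, the binning scheme, and the similarity threshold are all hypothetical.

```python
import numpy as np

def spectral_fingerprint(signal, n_bands=64):
    """Reduce a mono audio signal to a coarse, normalized magnitude-spectrum fingerprint."""
    spectrum = np.abs(np.fft.rfft(signal))
    # Average the spectrum into n_bands coarse frequency bands.
    bands = np.array_split(spectrum, n_bands)
    fp = np.array([band.mean() for band in bands])
    norm = np.linalg.norm(fp)
    return fp / norm if norm > 0 else fp

def prints_match(reference, candidate, threshold=0.95):
    """Compare a print run's acoustic fingerprint to a known-good reference.

    Returns (match, similarity), where similarity is the cosine similarity
    of the two fingerprints and match is True when it clears the threshold.
    """
    similarity = float(np.dot(spectral_fingerprint(reference),
                              spectral_fingerprint(candidate)))
    return similarity >= threshold, similarity

# Simulated example: a 440 Hz "motor tone" as the good run, and a
# tampered run whose dominant frequency has shifted to 880 Hz.
t = np.linspace(0, 1, 8000, endpoint=False)
good_run = np.sin(2 * np.pi * 440 * t)
tampered_run = np.sin(2 * np.pi * 880 * t)

ok, _ = prints_match(good_run, good_run)        # identical runs match
bad, _ = prints_match(good_run, tampered_run)   # shifted tone is flagged
```

A real deployment would have to contend with microphone placement, ambient noise, and legitimate print-to-print variation, so any threshold would need empirical tuning against many known-good runs.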

In the future, the group plans to find new ways to attack the printers so they can develop new defenses, transfer their methods to industry and refine the techniques they've already developed.

"These 3D printed components will be going into people, aircraft and critical infrastructure systems," said Raheem Beyah, a professor and associate chair in Georgia Tech's School of Electrical and Computer Engineering. "Malicious software installed in the printer or control computer could compromise the production process. We need to make sure that these components are produced to specification and not affected by malicious actors or unscrupulous producers."

The full study is available as a PDF here.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
