Madison, Berkeley Team Develops Malware Modeling Tool

A research team from the University of Wisconsin, Madison and the University of California, Berkeley has developed virus scanning software the researchers describe as the "next generation in malware detection."

Instead of scanning for specific virus signatures, their Static Analyzer for Executables (SAFE) looks for suspicious behaviors typical of malware, such as reading an address book and sending e-mails.
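
To make the idea concrete, here is a minimal sketch of behavior-based matching. It is not SAFE's actual algorithm, and the behavior template and function names are hypothetical: a malicious behavior is modeled as an ordered pattern of operations, and a program is flagged if its extracted sequence of calls contains that pattern in order, even when unrelated calls are interleaved.

    # Hypothetical behavior template: read the address book, then send mail.
    MASS_MAILER = ["read_address_book", "send_email"]

    def exhibits_behavior(call_sequence, pattern):
        """Return True if `pattern` occurs in order within `call_sequence`."""
        it = iter(call_sequence)
        # Membership tests consume the iterator, so each step must appear
        # after the previous one -- junk calls in between do not hide it.
        return all(step in it for step in pattern)

    # A variant padded with irrelevant calls is still flagged.
    trace = ["open_window", "read_address_book", "sleep", "send_email"]
    print(exhibits_behavior(trace, MASS_MAILER))  # True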

Commercial scanners search programs for specific byte patterns, or signatures, which leaves an opening for virus writers to disguise their code. A signature for each disguised variant must then be identified and distributed to scanners on a weekly or sometimes daily basis.
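
A signature scanner, by contrast, matches a fixed byte pattern, so altering even one byte of that pattern defeats it until a new signature ships. A minimal sketch, with a made-up signature database:

    # Hypothetical signature database of known-bad byte patterns.
    SIGNATURES = {b"\xde\xad\xbe\xef"}

    def scan(binary: bytes) -> bool:
        """Return True if any known signature occurs in the binary."""
        return any(sig in binary for sig in SIGNATURES)

    original = b"\x90\x90\xde\xad\xbe\xef\x90"
    variant  = b"\x90\x90\xde\xad\xbe\xee\x90"  # one byte changed

    print(scan(original))  # True  -- matches the stored signature
    print(scan(variant))   # False -- disguised variant slips past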

"Essentially, this is an arms race," said Somesh Jha, an associate professor of computer science at the University of Wisconsin, Madison, who, with graduate student Mihai Christodorescu, helped develop the program.

"I don't think the approaches currently being used by commercial companies are going to be sustainable," Jha told the Wisconsin Business Journal.

SAFE requires updates only when viruses exhibit new behaviors, making it proactive rather than reactive. The researchers began working on SAFE after testing variants of four viruses on the Norton and McAfee antivirus scanners and finding that only the original version of each virus was caught. SAFE caught all the variants.

"[Attackers] are already becoming very sophisticated. They are using on-the-fly evasion techniques," Jha told WBJ. "As they use more sophisticated things to hide their malware, your detection has to become better and better."

About the Author

Paul McCloskey is contributing editor of Syllabus.
