New York U and Google Researchers Expose Shady Business of Pay-Per-Install

The next time you install new software, you might want to ask yourself what else is riding along. A research team from New York University and Google is reporting this week on the "shady practices" of delivering unwanted advertising and software as part of the payload of legitimate programs. In commercial pay-per-install (PPI), companies pay to have their undesirable applications bundled with other software that users actually want.

The research project, led by Damon McCoy, an assistant professor of computer science and engineering at New York U's Tandon School of Engineering, and Kurt Thomas, a research scientist at Google, developed an "analysis pipeline" to track the business dealings and software bundles that sustain four of the largest commercial PPI networks.

According to the researchers, unwanted ad injectors, browser settings hijackers and cleanup utilities dominate the software "families" that buy installs. The companies behind those families typically pay between a dime and $1.50 per installation, which they recoup by monetizing users without their consent or by charging exorbitant subscription fees. Worse, the research suggests that some of the affiliates distributing such software are active and willing participants in the schemes, even as they deny culpability in the installation of unwanted software. One operation identified as a player reported $460 million in revenue in 2014, generated through a combination of legitimate and unwanted software downloads.

Based on Google calculations, PPI networks push more than 60 million download attempts every week — nearly triple that of malware. While anti-virus and browser makers have developed defenses against unwanted software, the research found that PPI networks go out of their way to interfere with or evade detection, often using data gleaned during the install process and provided by the companies they're paying for the software ride-along.

How do you know when you've been a victim of PPI? The researchers describe the telltale signs: a barrage of on-screen advertisements, and flashing pop-ups that warn of malware and push the purchase of specialized antivirus software that is often fraudulent itself. In other scenarios, the system's default browser is hijacked and users are sent to "ad-laden pages."

The analysis of PPI appears in the paper, "Investigating Commercial Pay-Per-Install and the Distribution of Unwanted Software," which is being presented this week at the USENIX Security Symposium taking place in Austin. The paper will be openly available after the event begins.

"If you've ever downloaded a screen saver or other similar feature for your laptop, you've seen a 'terms and conditions' page pop up where you consent to the installation," New York U's McCoy explained in a statement about the research. "Buried in the text that nobody reads is information about the bundle of unwanted software programs in the package you're about to download."

What those terms and conditions do, he explained, is allow the businesses to operate legally while exploiting the trusted relationship they have with their customers. "We're hoping to expose these business practices so people are less likely to get duped into flooding their computers with programs they never wanted," McCoy said.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
