Stanford Team Enters Robot Car in DARPA Urban Race

A team of Stanford University robotics researchers will test a driverless Volkswagen Passat wagon named Junior in this fall's Urban Challenge, an unmanned car race sponsored by the Defense Advanced Research Projects Agency.

The Urban Challenge course in November will be a 60-mile test of city driving, interrupted by intersections, rights-of-way, stop signs, and lane changes.

Junior was developed by a team led by Sebastian Thrun, an associate professor of computer science and electrical engineering at Stanford. Thrun said he envisions a crash-less future in which robotic cars will save people from the hassles and dangers of modern traffic and congestion.

"There are so many aspects of society you could change if you just make cars drive themselves," he told the Los Angeles Times. He described the concept as combining the " convenience of a train with the convenience of a car."

Thrun has a good track record in DARPA's robot car races. Two years ago, the Stanford team won the 2005 Grand Challenge with a modified VW Touareg sport utility vehicle called Stanley. That course, which ran through the Nevada desert, was mostly a test of speed.

The Urban Challenge hands out more than $2 million in prize money. DARPA said it wants to encourage the "development of robotic-vehicle technology that will someday save the lives of American men and women on the battlefield."

About the Author

Paul McCloskey is contributing editor of Syllabus.
