Udacity Releases Self-Driving Car Simulator Source Code

Image Credit: GitHub.

Udacity this week released the source code to its self-driving car simulator. The simulator was originally built to teach its Self-Driving Car Engineer Nanodegree students how to use deep learning to clone driving behavior.

In the simulator, users can steer a car around a track to collect "image data and steering angles to train a neural network," according to the project overview. Students then train, validate, and test a model to drive the car autonomously around the track using Keras, a high-level neural networks library written in Python that can run on top of TensorFlow or Theano, two open source deep learning frameworks. The Unity game development platform is needed to load the project's assets.
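To make the workflow concrete, the sketch below shows a minimal Keras model of the kind students might build for this task: a small convolutional network that takes a simulator camera image and regresses a single steering angle. This is an illustrative assumption, not Udacity's reference solution; the layer sizes, the 160x320x3 input shape, and the `build_model` name are all hypothetical.

```python
# Minimal behavioral-cloning sketch (assumed architecture, not Udacity's
# actual model): a CNN that maps a camera image to one steering angle.
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Flatten, Dense

def build_model(input_shape=(160, 320, 3)):
    model = Sequential([
        Input(shape=input_shape),
        # Normalize raw pixel values to roughly [-0.5, 0.5] inside the graph
        Lambda(lambda x: x / 255.0 - 0.5),
        Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
        Flatten(),
        Dense(100, activation="relu"),
        Dense(1),  # single regression output: the steering angle
    ])
    # Steering prediction is a regression task, so mean-squared error fits
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
# Run a dummy batch of two blank frames through the untrained network
preds = model.predict(np.zeros((2, 160, 320, 3), dtype=np.float32), verbose=0)
print(preds.shape)
```

In practice, the recorded images and steering angles from the simulator would be fed to `model.fit`, and the trained model would then serve predictions back to the simulator in autonomous mode.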

When Udacity launched its Self-Driving Car Engineer Nanodegree last September, CEO Sebastian Thrun said the end goal was to open source the software for anybody to use. Since most self-driving software is developed in virtual environments, the repository serves as a resource for individuals and organizations to develop their own scenes in Unity or test out their own software — including higher ed institutions that have been ramping up their own research efforts.
