Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning

Pete Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J. Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, Leilani Gilpin, Piyush Khandelwal, Varun Kompella, Hao Chih Lin, Patrick MacAlpine, Declan Oller, Takuma Seno, Craig Sherstan, Michael D. Thomure, Houmehr Aghabozorgi, Leon Barrett, Rory Douglas, Dion Whitehead Amago, Peter Dürr, Peter Stone, Michael Spranger, Hiroaki Kitano

Nature, 2022

Abstract

Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block opponents while operating their vehicles at their traction limits. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions. Here we describe how we trained agents for Gran Turismo that can compete with the world’s best e-sports drivers. We combine state-of-the-art, model-free, deep reinforcement learning algorithms with mixed-scenario training to learn an integrated control policy that combines exceptional speed with impressive tactics. In addition, we construct a reward function that enables the agent to be competitive while adhering to racing’s important, but under-specified, sportsmanship rules. We demonstrate the capabilities of our agent, Gran Turismo Sophy, by winning a head-to-head competition against four of the world’s best Gran Turismo drivers. By describing how we trained championship-level racers, we demonstrate the possibilities and challenges of using these techniques to control complex dynamical systems in domains where agents must respect imprecisely defined human norms.
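The abstract describes a reward function that balances raw competitiveness against under-specified sportsmanship rules. One common way to realize such a design is a weighted per-step composite reward: progress and passing are rewarded, while unsafe or unsporting events incur penalties. The sketch below illustrates that general pattern; the term names, event definitions, and weights are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StepInfo:
    """Hypothetical per-step observations from the racing simulator."""
    course_progress: float    # metres advanced along the track this step
    passed_opponents: int     # net positions gained this step
    off_course: bool          # left the track boundaries
    wall_contact: bool        # scraped or hit a wall
    at_fault_collision: bool  # judged responsible for car-to-car contact

def composite_reward(info: StepInfo,
                     w_progress: float = 1.0,
                     w_pass: float = 0.5,
                     p_off: float = 2.0,
                     p_wall: float = 5.0,
                     p_collision: float = 10.0) -> float:
    """Weighted sum: reward speed and tactics, penalise norm violations."""
    r = w_progress * info.course_progress
    r += w_pass * info.passed_opponents
    if info.off_course:
        r -= p_off
    if info.wall_contact:
        r -= p_wall
    if info.at_fault_collision:
        r -= p_collision
    return r
```

Tuning the penalty weights is the crux: too small and the agent learns to ram opponents when it pays off; too large and it becomes overly timid and uncompetitive.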

Related Publications

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach

NeurIPS, 2022
Bo Liu*, Mao Ye*, Stephen Wright*, Peter Stone, Qiang Liu*

Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the…

Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

NeurIPS, 2022
James MacGlashan, Evan Archer, Alisa Devlic, Takuma Seno, Craig Sherstan, Peter R. Wurman, Peter Stone

Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons and standard RL methods provide too few tools to provide insight into the exact cause. In this paper, we show …

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

NeurIPS, 2022
Junyuan Hong, Lingjuan Lyu, Jiayu Zhou*, Michael Spranger

As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training at a low-power and cost-effective end device. Traditional outsourcing requires uploading device…

