Title: Markov Game for CV Joint Adaptive Routing in Stochastic Traffic Networks: A Scalable Learning Approach
Authors: Shan Yang, Yang Liu*
Abstract: This study proposes a learning-based approach to tackle the challenge of joint adaptive routing in stochastic traffic networks with Connected Vehicles (CVs). We introduce a Markov Routing Game (MRG) to model the adaptive routing behavior of all vehicles in such networks, thereby capturing both competitive route choices and real-time decision-making. We establish the existence of the Nash policy (i.e., the optimal joint adaptive routing policy) within the MRG, which enables vehicles to adapt optimally to real-time traffic conditions through efficient communication. To enhance scalability, we propose a homogeneity-based mean-field approximation method and, building on it, develop the Homogeneity-based Mean-Field Deep Reinforcement Learning (HMF-DRL) algorithm to learn the Nash policy within the MRG. Through numerical experiments on the Nguyen-Dupuis network, we demonstrate our algorithm’s ability to converge efficiently and learn a joint adaptive routing policy that significantly enhances traffic network efficiency. Furthermore, our study provides insights into the effects of travel demand, the penetration rate of CVs, and the level of uncertainty on the performance of the joint adaptive routing policy. This paper presents a significant step towards improving network efficiency and reducing travel time for the majority of vehicles amid uncertain traffic conditions.
Key Words: Markov Routing Game, Connected Vehicles, Joint Adaptive Routing, Mean-Field Multi-Agent Reinforcement Learning, Stochastic Traffic Network
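The abstract names a homogeneity-based mean-field approximation but does not spell out its construction. To make the general idea concrete, the sketch below illustrates the standard mean-field trick in multi-agent RL (Yang et al., 2018) that such methods build on: each agent's Q-function conditions on its own action plus a single averaged action vector of the other agents, instead of the full joint action. This is a minimal generic illustration, not the paper's HMF-DRL algorithm; the agent counts, action space, and function names are hypothetical.

```python
import numpy as np

# Hypothetical sizes for illustration only (not from the paper).
N_AGENTS, N_ACTIONS = 50, 4   # e.g., vehicles each choosing among 4 outgoing links
rng = np.random.default_rng(0)

def mean_field_action(actions: np.ndarray, i: int) -> np.ndarray:
    """One-hot encode every agent's action except agent i's, then average.

    In mean-field MARL, agent i's Q-function takes this single (N_ACTIONS,)
    vector in place of the joint action of all other agents, shrinking the
    input from exponential in the number of agents to a constant size.
    """
    one_hot = np.eye(N_ACTIONS)[actions]      # (N_AGENTS, N_ACTIONS)
    mask = np.ones(len(actions), dtype=bool)
    mask[i] = False                           # exclude agent i itself
    return one_hot[mask].mean(axis=0)         # (N_ACTIONS,)

# Toy usage: what the "mean action" looks like from agent 0's perspective.
# A deep RL version would feed (state, own_action, mean_action) to a Q-network.
actions = rng.integers(0, N_ACTIONS, size=N_AGENTS)
print("mean action seen by agent 0:", mean_field_action(actions, i=0))
```

A homogeneity-based variant, as the abstract suggests, would presumably group vehicles by shared characteristics (e.g., location or destination) before averaging, but those details are in the paper itself.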