A Variable Speed Limit (VSL) System Using an AI-Based Algorithm on I-24 in Nashville Reduced Traffic Speed Variations by 32.7 Percent to Improve Safety with Limited Impacts on Vehicle Hours of Delay (VHD).

Microsimulation Study Assessed the Safety and Mobility Impacts of Using Artificial Intelligence (AI) and a Reinforcement Learning Algorithm to Enhance the Function of a VSL Sign Network.

Date Posted
04/23/2024
Identifier
2024-B01844

Cooperative Multi-Agent Reinforcement Learning for Large Scale Variable Speed Limit Control

Summary Information

Variable Speed Limit (VSL) systems, which adjust posted speed limits dynamically in response to real-time traffic conditions, are a popular traffic management strategy with strong potential to improve transportation safety. This study introduced a Multi-Agent Reinforcement Learning (MARL) algorithm for implementing a large-scale VSL system, designed to improve VSL performance and mitigate congestion. Using a microscopic traffic simulator, researchers tested the algorithm on a seven-mile section of I-24 that included four westbound general-purpose lanes and three ramps.
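As a rough illustration of the modeling idea, each VSL controller unit can be treated as an agent that selects a posted speed limit from a small discrete set based on local traffic state. The epsilon-greedy tabular policy and the placeholder state below are hypothetical simplifications for illustration, not the study's cooperative MARL algorithm:

```python
import random

# Discrete speed limits (mi/h) matching the action space described in the methodology
ACTIONS = [30, 50, 70]

class VSLAgent:
    """One VSL controller unit modeled as an RL agent.

    The epsilon-greedy tabular policy here is an illustrative placeholder;
    the study's cooperative multi-agent training procedure is more involved.
    """

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.q = {}  # (state, action) -> estimated value, learned during training

    def act(self, state):
        # Explore with probability epsilon, otherwise pick the best-known action
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

# The methodology below places 15 VSL units at 0.5-mile intervals along the corridor
agents = [VSLAgent() for _ in range(15)]
state = "congested"  # placeholder for the RDS-derived state input
posted = [agent.act(state) for agent in agents]
print(posted)
```

Each agent posts one of the discrete limits; in the study, cooperation among the agents is what the MARL training aims to learn.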

METHODOLOGY

Researchers used data from Radar Detection Systems (RDS) commonly available on highways to shape the state input of each VSL controller unit. For the action space of the agents modeled in the MARL algorithm, researchers used a set of discrete values, including 30, 50, and 70 mi/h, as the speed limits to be posted. Finally, for the experimental implementation of the MARL algorithm in traffic simulation, researchers placed VSL controller units at 0.5-mile intervals along the test corridor, resulting in 15 VSL units in total. The simulation covered the period from 7:50 AM to 10:00 AM and mimicked recurring congestion induced by on-ramp traffic weaving behavior. Changes in the coefficient of variation of speed (CVS) were used as the safety metric. In the training scenario, the study area included VSL controller units located upstream of the on-ramp merging location, with the expectation that five units would be sufficient for the agents to learn cooperation while tackling the different traffic conditions represented by the following three scenarios:

  • Scenario A: Mainstream flow of 1,750 vehicles/lane/hour, five percent compliance rate, ten VSL controllers.
  • Scenario B: Mainstream flow of 1,850 vehicles/lane/hour, five percent compliance rate, ten VSL controllers.
  • Scenario C: Mainstream flow of 1,950 vehicles/lane/hour, five percent compliance rate, ten VSL controllers.
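The coefficient of variation of speed used as the safety metric is conventionally defined as the standard deviation of observed speeds divided by their mean. A minimal sketch of that computation, assuming the standard definition and using hypothetical speed readings rather than study data, is:

```python
import statistics

def coefficient_of_variation_of_speed(speeds_mph):
    """Compute CVS = standard deviation / mean of observed speeds.

    Assumes the conventional definition of the coefficient of variation;
    the speed samples below are hypothetical, not from the study.
    """
    mean = statistics.mean(speeds_mph)
    std = statistics.pstdev(speeds_mph)  # population standard deviation
    return std / mean

# Hypothetical speeds (mi/h) observed over a detector interval
speeds = [62, 55, 48, 70, 58, 65, 40, 52]
print(round(coefficient_of_variation_of_speed(speeds), 3))
```

A lower CVS indicates more uniform speeds across vehicles, which is associated with fewer speed-differential crash risks.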

FINDINGS

  • Scenario A was not evaluated because traffic conditions did not generate enough congestion for the treatment to be effective.
  • In Scenario B, the MARL algorithm reduced the safety metric CVS by 32.7 percent (from 0.52 to 0.35) with minor impact on mobility, increasing Vehicle Hours of Delay (VHD) by five percent.
  • In Scenario C, the safety metric CVS was reduced by 30.8 percent (from 0.52 to 0.36) with little negative effect on mobility, increasing VHD by 6.5 percent.
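The percent reductions reported above follow directly from the before-and-after CVS values; as a quick arithmetic check of the figures:

```python
def percent_reduction(before, after):
    """Relative reduction between two metric values, as a percentage."""
    return (before - after) / before * 100

# Scenario B: CVS fell from 0.52 to 0.35
print(round(percent_reduction(0.52, 0.35), 1))  # 32.7, matching the reported value
# Scenario C: CVS fell from 0.52 to 0.36
print(round(percent_reduction(0.52, 0.36), 1))  # 30.8, matching the reported value
```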

Cooperative Multi-Agent Reinforcement Learning for Large Scale Variable Speed Limit Control
Source Publication Date
06/26/2023
Author
Yuhang Zhang; Marcos Quinones-Grueiro; William Barbour; Zhiyao Zhang; Joshua Scherer; Gautam Biswas; and Daniel Work
Publisher
Prepared by Vanderbilt University and the University of Illinois Urbana-Champaign for the IEEE International Conference on Smart Computing
Other Reference Number
10.1109/SMARTCOMP58114.2023.00036