Abstract
Adaptive Cruise Control (ACC) is an essential component of Advanced Driver Assistance Systems (ADAS) that regulates a vehicle's speed to adhere to the set speed limit and maintain a safe following distance without driver intervention. This work showcases a Deep Reinforcement Learning (DRL) controller based on the Deep Deterministic Policy Gradient (DDPG) algorithm that makes acceleration and braking decisions to meet the multi-objective ACC requirements.
An ACC simulation environment, with a lead car and a following Battery Electric Vehicle (BEV), is integrated with the DRL framework, where the controller regulates the velocity of the ego vehicle (BEV). The individual driving objectives, namely tracking accuracy, ride comfort, and safety, are translated into corresponding reward function components so that the controller conforms to the ACC requirements. During training, the DRL agent interacts with the environment and learns an optimal policy by trial and error; the learned policy is then evaluated in a highway driving scenario and compared with a Model Predictive Control (MPC) controller. The DRL and MPC controllers are comparable in tracking accuracy and safety, but the DRL controller significantly outperforms MPC in ride comfort.
The MPC controller frequently exceeds the jerk limit of 2.5 m/s³, whereas the DRL controller stays well within this limit. Further, the DRL agent is evaluated under additional scenarios, including highway cruising and cut-in/cut-out maneuvers. The DRL controller proved to be robust, efficient, and optimal in its control while exhibiting strong generalization characteristics.
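To illustrate how the three driving objectives mentioned above could be encoded as reward components, the following is a minimal Python sketch assuming hypothetical weights and signal names (velocity_error, jerk, relative_distance, safe_distance); it is not the paper's exact reward formulation.

```python
def acc_reward(velocity_error, jerk, relative_distance, safe_distance,
               w_track=1.0, w_comfort=0.5, w_safety=2.0, jerk_limit=2.5):
    """Illustrative multi-objective ACC reward (hypothetical weights and signals).

    Combines the three objectives described in the abstract:
      - tracking accuracy: penalize deviation from the reference velocity
      - ride comfort:      penalize jerk, scaled by the 2.5 m/s^3 limit
      - safety:            penalize violating the safe following distance
    """
    r_track = -w_track * velocity_error ** 2                     # tracking accuracy
    r_comfort = -w_comfort * (abs(jerk) / jerk_limit) ** 2       # ride comfort
    r_safety = -w_safety if relative_distance < safe_distance else 0.0  # safety
    return r_track + r_comfort + r_safety
```

A weighted sum of this form is one common way to fold multiple ACC objectives into a single scalar reward for a DDPG agent; the actual component shapes and weights used in the paper may differ.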
Lattice | Vol 5 Issue 1