Beyond safety, can we expect a self-driving model to reduce congestion?

Hao Zhou
Mar 23, 2023


A reinforcement learning approach based on openpilot and CARLA

Introduction: we aim to train a more traffic-friendly, end-to-end longitudinal model for self-driving cars. Since it is too expensive, or even impossible, to run RL in the real world, a driving simulator such as CARLA is a natural option.

Transferring openpilot from the real world to CARLA:

Camera sensor in CARLA 0.9.10

We try to fine-tune the parameters of the openpilot model from comma.ai. In order to test and evaluate the final model in real life with the same device (a Comma Two), we decided to repeat, or mimic as closely as possible, the same procedure as openpilot. In other words, we keep everything else the same and change only the input source, which is a phone camera for openpilot in the real world but a simulated camera sensor in CARLA.
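Swapping in the simulated input source means spawning an RGB camera in CARLA and attaching it to the ego vehicle. A minimal sketch of that setup with CARLA 0.9.10's Python API follows; the resolution, FOV, and mounting position below are placeholder values for illustration, not the ones we actually used.

```python
import carla
import numpy as np

# Connect to a running CARLA 0.9.10 server (default host/port; adjust as needed).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at one of the map's predefined spawn points.
vehicle_bp = blueprint_library.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera roughly where a windshield-mounted phone would sit.
# Resolution and FOV here are placeholders; the model ultimately expects its own frame size.
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1164")
camera_bp.set_attribute("image_size_y", "874")
camera_bp.set_attribute("fov", "70")
camera_transform = carla.Transform(carla.Location(x=0.8, z=1.3))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)

# Each frame arrives as a carla.Image; convert it to a NumPy array for the model pipeline.
def on_image(image):
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))[:, :, :3]  # BGRA -> BGR
    # hand `array` to the openpilot-style preprocessing discussed below

camera.listen(on_image)
```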

It's not too hard to copy the same planning and control methods from openpilot. For the planning part, we simply take the trajectories predicted by the model and run an MPC to get the next-step target speed, aka vTarget. For the control part, a PI controller executes the planned vTarget. For reference, everything can be found in controlsd.py and plannerd.py under openpilot/selfdrive.
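To make the plan-then-control split concrete, here is a minimal sketch of the PI step in the spirit of openpilot's longitudinal control. The class, gains, and limits below are placeholders for illustration, not the actual values or code in controlsd.py.

```python
class PIController:
    """Toy PI speed controller: turns the planner's vTarget into an accel command."""

    def __init__(self, kp=0.9, ki=0.08, limit=1.0):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.i = 0.0  # integral state

    def update(self, v_target, v_ego, dt=0.01):
        error = v_target - v_ego
        # accumulate the integral term with basic anti-windup
        self.i = max(-self.limit, min(self.limit, self.i + self.ki * error * dt))
        accel = self.kp * error + self.i
        return max(-self.limit, min(self.limit, accel))


# At each control step: the planner (MPC over the model's predicted trajectory)
# supplies v_target, and the controller maps the speed error to a gas/brake command.
controller = PIController()
# accel_cmd = controller.update(v_target, v_ego)
```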

What bothers us is the image preprocessing. The preprocessing code is not straightforward to read or understand, and although we can keep the input dimensions correct, the preprocessing steps (such as some fairly involved image transformations) are all tailored to the specific camera on the Comma Two device. The simulated camera in CARLA of course has different properties, so we can't expect the same preprocessing to fit. Here we need some computer vision knowledge: we have to dig deeper and understand why those preprocessing steps are needed for the Comma Two frames, and, in turn, what preprocessing is needed for CARLA frames.
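One helpful fact is that CARLA's RGB camera is a plain pinhole model, so its intrinsics follow directly from the image size and FOV set on the blueprint. That is the starting point for building a transform that plays the role of the Comma-Two-specific one. A small sketch, with hypothetical blueprint values:

```python
import numpy as np

def carla_intrinsics(width: int, height: int, fov_deg: float) -> np.ndarray:
    """Pinhole intrinsic matrix for a CARLA RGB camera with the given blueprint settings."""
    focal = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    return np.array([[focal, 0.0,   width / 2.0],
                     [0.0,   focal, height / 2.0],
                     [0.0,   0.0,   1.0]])

# Example with hypothetical blueprint values (match whatever you set on the sensor):
K = carla_intrinsics(1164, 874, 70.0)
```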

Let's look at what OP does for image preprocessing in the code.
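In essence, the raw frame is warped from the camera's frame of reference into the model's input frame and converted to YUV before it reaches the network. Below is a minimal sketch of that kind of pipeline, assuming OpenCV; the warp matrix and model frame size are placeholders here, whereas openpilot derives the real ones from the camera intrinsics.

```python
import cv2
import numpy as np

MODEL_W, MODEL_H = 512, 256          # assumed model frame size; check openpilot for the real values
WARP = np.eye(3, dtype=np.float32)   # placeholder: openpilot builds this from the camera intrinsics

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Warp a camera frame into the model frame and convert it to planar YUV 4:2:0."""
    # Perspective-warp the raw frame into the model's reference frame.
    warped = cv2.warpPerspective(frame_rgb, WARP, (MODEL_W, MODEL_H), flags=cv2.INTER_LINEAR)
    # The network consumes YUV 4:2:0 rather than RGB.
    return cv2.cvtColor(warped, cv2.COLOR_RGB2YUV_I420)
```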

Of course, we can first ignore the different properties of CARLA images and C2 (Comma Two) images and use the same approach.

Beyond safety, we aim to learn to mitigate traffic congestion:

While most people in self-driving tech are focused on improving safety, we go beyond this goal and try to use self-driving technology to reduce traffic congestion. The idea might sound too wild to an average person, but the logic is sound and has been investigated for many years by traffic flow researchers.

A platoon of vehicles driven by a vision-based, end-to-end self-driving model
