Optimizing trajectories for highway driving with offline reinforcement learning
- 1 Department of Computer Science, University of Freiburg, Freiburg, Germany
- 2 BMW Group, Munich, Germany
- 3 IMBIT // BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany
A Corrigendum on
Optimizing trajectories for highway driving with offline reinforcement learning
by Mirchevska B, Werling M and Boedecker J (2023). Front. Future Transp. 4:1076439. doi: 10.3389/ffutr.2023.1076439
In the published article, there was an error. Algorithm 2: alo should be
A correction has been made to 3 Approach, 3.2 Decision making. This sentence previously stated:
“
The corrected sentence appears below:
“
A correction has been made to 4 MDP Formalization, 4.3 Reward. This sentence previously stated:
“For the first objective, not causing collisions and remaining within the road boundaries, we define an indicator indf signaling when the agent has failed in the following way:”
The corrected sentence appears below:
“For the first objective, not causing collisions and remaining within the road boundaries, we define an indicator f signaling when the agent has failed in the following way:”
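The indicator referred to in this correction is a binary flag that equals 1 when the agent collides or leaves the road and 0 otherwise. As a purely illustrative sketch of such a check (the function and argument names below, e.g. `collided` and `lateral_offset`, are assumptions and do not appear in the original article):

```python
def failure_indicator(collided: bool, lateral_offset: float, road_half_width: float) -> int:
    """Return 1 if the agent has failed (collision or off-road), else 0.

    Illustrative sketch only: the argument names are hypothetical and are
    not the variables used in the original article.
    """
    off_road = abs(lateral_offset) > road_half_width
    return int(collided or off_road)
```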
A correction has been made to 4 MDP Formalization, 4.3 Reward. This equation previously stated:
The corrected equation appears below:
A correction has been made to 4 MDP Formalization, 4.3 Reward. This equation previously stated:
The corrected equation appears below:
A correction has been made to 4 MDP Formalization, 4.3 Reward. This equation previously stated:
The corrected equation appears below:
A correction has been made to 6 Experiments and results, 6.3 Smoothness analysis. This equation previously stated:
The corrected equation appears below:
A correction has been made to 6 Experiments and results, 6.3 Smoothness analysis. This sentence previously stated:
“The results indicate that the best performance in terms of jerk is yielded when the reward function from Eq. 8 is used and when jw is assigned a value around 2. However, it is important to note that the performance is not very sensitive to the value chosen for jw and performs similarly well in a range of values. It is interesting to note that when the value for jw is too low, e.g., 0.5, the agent deems the jerk-related reward component less significant which results in higher jerk values.”
The corrected sentence appears below:
“The results indicate that the best performance in terms of jerk is yielded when the reward function from Eq. 8 is used and when jrw is assigned a value around 2. However, it is important to note that the performance is not very sensitive to the value chosen for jrw and performs similarly well in a range of values. It is interesting to note that when the value for jrw is too low, e.g., 0.5, the agent deems the jerk-related reward component less significant which results in higher jerk values.”
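For readers unfamiliar with the role of this weight, the sketch below shows one way a jerk-penalized reward of the kind discussed above could be assembled; the function, its arguments, and the exact form of the terms are assumptions for illustration only and do not reproduce Eq. 8 from the original article.

```python
def reward(progress: float, jerk: float, failed: bool, jerk_weight: float = 2.0) -> float:
    """Illustrative reward sketch: a progress term minus a weighted jerk penalty.

    `jerk_weight` plays the role of the jerk-reward weight discussed above:
    a value that is too small makes the jerk penalty negligible, which in
    turn permits jerkier driving. A large negative reward is returned on
    failure (collision or leaving the road); the value -10.0 is assumed,
    not taken from the original article.
    """
    if failed:
        return -10.0
    return progress - jerk_weight * abs(jerk)
```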
A correction has been made to Appendix, Trajectory generation details. This equation previously stated:
The corrected equation appears below:
A correction has been made to Appendix, Trajectory generation details. This equation previously stated:
The corrected equation appears below:
The authors apologize for these errors and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords: reinforcement learning, trajectory optimization, autonomous driving, offline reinforcement learning, continuous control
Citation: Mirchevska B, Werling M and Boedecker J (2023) Corrigendum: Optimizing trajectories for highway driving with offline reinforcement learning. Front. Future Transp. 4:1320940. doi: 10.3389/ffutr.2023.1320940
Received: 13 October 2023; Accepted: 09 November 2023;
Published: 11 December 2023.
Approved by:
Frontiers Editorial Office, Frontiers Media SA, Switzerland
Copyright © 2023 Mirchevska, Werling and Boedecker. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Branka Mirchevska, mirchevb@informatik.uni-freiburg.de