AUTHOR=Pan Zebang, Wen Guilin, Tan Zhao, Yin Shan, Hu Xiaoyan TITLE=An immediate-return reinforcement learning for the atypical Markov decision processes JOURNAL=Frontiers in Neurorobotics VOLUME=16 YEAR=2022 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.1012427 DOI=10.3389/fnbot.2022.1012427 ISSN=1662-5218 ABSTRACT=
Atypical Markov decision processes (MDPs) are decision-making problems in which the immediate return is maximized within a single state transition. Many complex dynamic problems can be regarded as atypical MDPs, e.g., football trajectory control, approximation of compound Poincaré maps, and parameter identification. However, existing deep reinforcement learning (RL) algorithms are designed to maximize long-term returns, which wastes computing resources when they are applied to atypical MDPs. These algorithms are also limited by estimation error in the value function, which leads to poor policies. To address these limitations, this paper proposes an immediate-return algorithm for atypical MDPs with continuous action spaces, built on an unbiased, low-variance target Q-value and a simplified network framework. Two examples of atypical MDPs involving uncertainty are then presented to illustrate the performance of the proposed algorithm: passing a football to a moving player and chipping a football over a human wall. Compared with existing deep RL algorithms such as deep deterministic policy gradient and proximal policy optimization, the proposed algorithm shows significant advantages in learning efficiency, the effective rate of control, and computing resource usage.
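The core idea in the abstract, that in a one-transition MDP the target Q-value can simply be the observed immediate reward (so the target is unbiased and carries no bootstrapping variance), can be illustrated with a minimal actor-critic sketch. This is NOT the paper's algorithm or network framework; it is a hypothetical numpy-only toy with a linear critic and a linear deterministic actor on a made-up quadratic reward, meant only to show a critic regressing toward `r` directly and an actor ascending the critic's gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-transition environment: reward depends only on (s, a),
# with optimum at a = 2s. Not from the paper; chosen for illustration.
def reward(s, a):
    return -(a - 2.0 * s) ** 2

# Linear-in-features critic Q(s, a) = w @ phi(s, a); features chosen so the
# toy reward is exactly representable.
def phi(s, a):
    return np.array([1.0, s, s * s, a, a * a, s * a])

w = np.zeros(6)          # critic weights
th = np.zeros(2)         # linear deterministic actor: a = th[0] + th[1] * s
alpha_c, alpha_a = 0.005, 0.02

for step in range(60000):
    s = rng.uniform(-1.0, 1.0)
    a = th[0] + th[1] * s + rng.normal(0.0, 0.3)  # exploration noise
    r = reward(s, a)

    # Critic update: the target is the immediate reward itself -- no
    # bootstrapped "r + gamma * Q(s', a')" term, since there is only
    # one transition. The target is therefore unbiased and low variance.
    err = r - w @ phi(s, a)
    w += alpha_c * err * phi(s, a)

    # Actor update: deterministic policy gradient through the critic,
    # dQ/da evaluated at the noiseless policy action.
    a_pi = th[0] + th[1] * s
    dq_da = w[3] + 2.0 * w[4] * a_pi + w[5] * s
    th += alpha_a * dq_da * np.array([1.0, s])
```

Under these assumptions the actor drifts toward the optimal mapping a = 2s, while the critic never pays the bias or variance cost of a bootstrapped target, which is the contrast the abstract draws against long-horizon methods like DDPG.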