AUTHOR=Su Binbin, Gutierrez-Farewik Elena M. TITLE=Simulating human walking: a model-based reinforcement learning approach with musculoskeletal modeling JOURNAL=Frontiers in Neurorobotics VOLUME=17 YEAR=2023 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2023.1244417 DOI=10.3389/fnbot.2023.1244417 ISSN=1662-5218 ABSTRACT=Introduction

Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like when algorithms do not incorporate a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we integrated a reinforcement learning algorithm with a musculoskeletal model comprising trunk, pelvis, and leg segments to develop control models that drive the model to walk.

Methods

We first simulated human walking without imposing a target walking speed, allowing the model to settle on a stable self-selected speed, which was 1.45 m/s. A range of other speeds was then imposed in subsequent simulations relative to this self-selected speed. All simulations were generated by solving the Markov decision process with the covariance matrix adaptation evolution strategy (CMA-ES), without any reference motion data.
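
As a rough illustration of this optimization setup, and not the authors' implementation, the sketch below uses the pycma library to search controller parameters that maximize episode return in a walking simulation without any reference motion. The `WalkingEnv` class, its observation/action dimensions, the linear policy, and the reward terms are hypothetical placeholders standing in for the musculoskeletal model and its controller.

```python
# Minimal sketch (assumed setup, not the paper's code): CMA-ES searches policy
# parameters so that episode return in a walking environment is maximized.
import numpy as np
import cma  # pip install cma


class WalkingEnv:
    """Hypothetical stand-in for a musculoskeletal walking simulation."""

    OBS_DIM, ACT_DIM = 10, 6  # e.g., joint states in, muscle excitations out

    def reset(self):
        return np.zeros(self.OBS_DIM)

    def step(self, action):
        # A real environment would integrate musculoskeletal dynamics here and
        # reward forward progress, upright posture, low effort, etc.
        obs = 0.01 * np.random.randn(self.OBS_DIM)
        reward = 1.0 - 0.01 * float(np.sum(action ** 2))
        return obs, reward, False


def rollout(weights, env, horizon=200):
    """Negative episode return for a linear policy defined by `weights`."""
    W = weights.reshape(env.ACT_DIM, env.OBS_DIM)
    obs, total = env.reset(), 0.0
    for _ in range(horizon):
        action = np.tanh(W @ obs)            # bounded muscle excitations
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return -total                            # CMA-ES minimizes


env = WalkingEnv()
es = cma.CMAEvolutionStrategy(np.zeros(env.ACT_DIM * env.OBS_DIM), 0.5,
                              {"maxiter": 50})
while not es.stop():
    candidates = es.ask()                    # sample candidate policy parameters
    es.tell(candidates, [rollout(np.asarray(w), env) for w in candidates])
    es.disp()
```

In practice, the parameter vector would encode a neural-network policy and the fitness would come from full musculoskeletal rollouts, but the ask/tell loop structure stays the same.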

Results

Simulated hip and knee kinematics agreed well with experimental observations, whereas ankle kinematics were less accurately predicted.

Discussion

Finally, we demonstrated that our reinforcement learning framework also has the potential to model and predict pathological gait resulting from muscle weakness.
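
A common way to represent muscle weakness in musculoskeletal simulation is to scale down the maximum isometric force of selected muscles before re-optimizing the controller. The sketch below shows this idea with the OpenSim Python API; the model file name, the target muscle names, and the 50% scaling factor are illustrative assumptions and may differ from the paper's specific weakness protocol.

```python
# Hedged sketch: inducing "muscle weakness" by scaling maximum isometric forces.
# Assumes the OpenSim Python bindings are installed; 'gait_model.osim', the
# muscle names, and the scaling factor are illustrative choices only.
import opensim as osim

WEAKNESS_FACTOR = 0.5                        # retain 50% of nominal strength
TARGET_MUSCLES = ("soleus_r", "gastroc_r")   # hypothetical plantarflexor names

model = osim.Model("gait_model.osim")
muscles = model.getMuscles()

for i in range(muscles.getSize()):
    muscle = muscles.get(i)
    if muscle.getName() in TARGET_MUSCLES:
        nominal = muscle.getMaxIsometricForce()
        muscle.setMaxIsometricForce(nominal * WEAKNESS_FACTOR)

model.printToXML("gait_model_weakened.osim")  # save the weakened model
```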