ORIGINAL RESEARCH article

Front. Neurorobot.
Volume 18 - 2024 | doi: 10.3389/fnbot.2024.1453571
This article is part of the Research Topic: Advanced Technology for Human Movement Rehabilitation and Enhancement

CAM-Vtrans: Real-time Sports Training Utilizing Multi-modal Robot Data

Provisionally accepted
Hong LinLin 1*, Lee Sangheang 2, Song Guanting 1
  • 1 Other, Gongqing, China
  • 2 Jeonju University, Jeonju, North Jeolla, Republic of Korea

The final, formatted version of the article will be published soon.

    Assistive robots and human-robot interaction have become integral parts of sports training. However, existing methods often fail to provide real-time, accurate feedback, and they rarely integrate comprehensive multi-modal data. To address these issues, we propose CAM-Vtrans (Cross-Attention Multi-modal Visual Transformer). By combining Visual Transformers (ViT) and vision-language models such as CLIP through cross-attention mechanisms, CAM-Vtrans fuses visual and textual information to provide athletes with accurate and timely feedback. Leveraging multi-modal robot data, CAM-Vtrans helps athletes optimize their performance while minimizing potential injury risks. This approach addresses the limitations of existing methods and improves the precision and efficiency of sports training programs.
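    The abstract describes fusing ViT image features with CLIP-style text features via cross-attention, but the paper's exact architecture is not given here. The following is a minimal, illustrative sketch of such a cross-attention fusion block in PyTorch; the class name, dimensions, and token shapes are assumptions for demonstration only, not the authors' implementation.

    ```python
    # Illustrative cross-attention fusion sketch (not the paper's code).
    # Assumes pre-extracted ViT patch tokens and CLIP text tokens projected
    # to a shared embedding dimension; all names and sizes are hypothetical.
    import torch
    import torch.nn as nn

    class CrossAttentionFusion(nn.Module):
        """Visual tokens (queries) attend over textual tokens (keys/values)."""

        def __init__(self, dim: int = 512, num_heads: int = 8):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.ffn = nn.Sequential(
                nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
            )

        def forward(self, visual_tokens: torch.Tensor,
                    text_tokens: torch.Tensor) -> torch.Tensor:
            # Cross-attention: queries from vision, keys/values from text.
            attended, _ = self.cross_attn(visual_tokens, text_tokens, text_tokens)
            x = self.norm1(visual_tokens + attended)   # residual + norm
            return self.norm2(x + self.ffn(x))         # feed-forward refinement

    if __name__ == "__main__":
        fusion = CrossAttentionFusion()
        vis = torch.randn(2, 197, 512)   # dummy ViT patch tokens (B, N, D)
        txt = torch.randn(2, 77, 512)    # dummy CLIP text tokens (B, L, D)
        print(fusion(vis, txt).shape)    # torch.Size([2, 197, 512])
    ```

    In this sketch the fused visual tokens retain their original shape, so they could feed a downstream head that scores movement quality or generates feedback; how CAM-Vtrans actually consumes the fused representation is not specified in the abstract.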

    Keywords: assistive robotics, human-machine interaction, balance control, movement recovery, Vision Transformer, CLIP, cross-attention

    Received: 23 Jun 2024; Accepted: 25 Jul 2024.

    Copyright: © 2024 LinLin, Sangheang and Guanting. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Hong LinLin, Other, Gongqing, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.