Objective: Sensory feedback for upper-limb prostheses is widely desired and studied. As important components of proprioception, position and movement feedback help users control prostheses better. Among various feedback methods, electrotactile stimulation is a promising approach for encoding the proprioceptive information of a prosthesis. This study was motivated by the need for proprioceptive information from a prosthetic wrist. The flexion-extension (FE) position and movement information of the prosthetic wrist is transmitted back to the body through multichannel electrotactile stimulation.
Approach: We developed an electrotactile scheme that encodes the FE position and movement of the prosthetic wrist and designed an integrated experimental platform. After a preliminary experiment to determine the sensory and discomfort thresholds, two proprioceptive feedback experiments were performed: a position sense experiment (Exp 1) and a movement sense experiment (Exp 2), each comprising a learning session and a test session. The success rate (SR) and discrimination reaction time (DRT) were analyzed to evaluate recognition performance, and the acceptance of the electrotactile scheme was assessed with a questionnaire.
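The encoding scheme is described here only at a high level. As a minimal sketch of one plausible realization, assuming five electrode channels spanning the FE range (spatial coding of position) and a pulse frequency proportional to angular speed (coding of movement), the mapping might look as follows; all numeric parameters are illustrative assumptions, not the values used in the study:

```python
# Illustrative assumptions: 5 electrode channels spanning a wrist FE range of
# -60 deg (flexion) to +60 deg (extension); movement speed mapped to a pulse
# frequency between 20 and 100 Hz. These are not the study's actual parameters.
N_CHANNELS = 5
FE_MIN_DEG, FE_MAX_DEG = -60.0, 60.0
FREQ_MIN_HZ, FREQ_MAX_HZ = 20.0, 100.0
MAX_SPEED_DEG_S = 120.0

def encode_fe_state(angle_deg: float, velocity_deg_s: float):
    """Map wrist FE angle and angular velocity to stimulation parameters."""
    # Spatial coding of position: activate the electrode whose sector
    # contains the current angle.
    clamped = max(min(angle_deg, FE_MAX_DEG), FE_MIN_DEG)
    norm = (clamped - FE_MIN_DEG) / (FE_MAX_DEG - FE_MIN_DEG)
    channel = min(int(norm * N_CHANNELS), N_CHANNELS - 1)

    # Frequency coding of movement: faster motion -> higher pulse rate.
    speed = min(abs(velocity_deg_s), MAX_SPEED_DEG_S) / MAX_SPEED_DEG_S
    pulse_freq_hz = FREQ_MIN_HZ + speed * (FREQ_MAX_HZ - FREQ_MIN_HZ)

    direction = ("extension" if velocity_deg_s > 0
                 else "flexion" if velocity_deg_s < 0 else "static")
    return channel, pulse_freq_hz, direction

# Example: wrist at 25 deg of extension, extending at 40 deg/s.
print(encode_fe_state(25.0, 40.0))  # -> channel 3, ~46.7 Hz, 'extension'
```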
Main results: Our results showed that the average position SRs of the five able-bodied subjects, amputee 1, and amputee 2 were 83.78, 97.78, and 84.44%, respectively. In the five able-bodied subjects, the average movement SR and the average direction-and-range SR of wrist movement were 76.25 and 96.67%, respectively. Amputee 1 and amputee 2 had movement SRs of 87.78 and 90.00% and direction-and-range SRs of 64.58 and 77.08%, respectively. The average DRT of the five able-bodied subjects was less than 1.5 s, and that of the amputees was less than 3.5 s.
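For reference, both outcome measures reduce to simple computations over per-trial logs. The sketch below uses hypothetical record fields and assumes the DRT is averaged over correct trials only; the trial data shown are made up for illustration:

```python
from statistics import mean

# Hypothetical per-trial records from a test session:
# stimulated target, subject's response, and reaction time in seconds.
trials = [
    {"target": "flex_30", "response": "flex_30", "rt_s": 1.2},
    {"target": "ext_15",  "response": "ext_30",  "rt_s": 1.8},
    {"target": "neutral", "response": "neutral", "rt_s": 0.9},
]

# Success rate (SR): fraction of trials where the response matches the target.
sr = 100.0 * sum(t["response"] == t["target"] for t in trials) / len(trials)

# Discrimination reaction time (DRT): mean reaction time of correct trials.
drt = mean(t["rt_s"] for t in trials if t["response"] == t["target"])

print(f"SR = {sr:.2f}%, DRT = {drt:.2f} s")
```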
Conclusion: The results indicate that, after a short period of learning, subjects can sense the position and movement of wrist FE. The proposed substitutive scheme thus has the potential to let amputees sense a prosthetic wrist, enhancing human-machine interaction.
Closing the prosthesis control loop by providing artificial somatosensory feedback can improve utility and user experience. Closed-loop control should also be more robust to disturbances, although this might depend on the type of feedback provided. The present study therefore investigates and compares the performance of EMG feedback and force feedback in the presence of control disturbances. Twenty able-bodied subjects and one transradial amputee performed delicate and power grasps with a prosthesis in a functional task while the control signal gain was temporarily increased (high-gain disturbance) or decreased (low-gain disturbance) without their knowledge. Three outcome measures were considered: the percentage of trials completed successfully on the first attempt (reaction to disturbance), the average number of attempts in trials where the wrong force was initially applied (adaptation to disturbance), and the average completion time of the last attempt in every trial. EMG feedback offered significantly better performance than force feedback during power grasping in terms of reaction to disturbance and completion time. During power grasping with high-gain disturbance, the median first-attempt success rate was significantly higher with EMG feedback (73.3%) than with force feedback (60%). Moreover, the median completion time for power grasps with low-gain disturbance was significantly longer with force feedback than with EMG feedback (3.64 versus 2.48 s; i.e., EMG feedback was 32% faster). Contrary to our expectations, there was no significant difference between feedback types with regard to adaptation to disturbances, and the two feedback types performed similarly in delicate grasps. The results indicate that EMG feedback outperforms force feedback in the presence of control disturbances, further demonstrating the potential of this approach to provide reliable prosthesis-user interaction.
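The core of the disturbance manipulation can be expressed compactly: grasp force is proportional to the myoelectric command, and the gain is covertly scaled. A minimal sketch, with hypothetical gain magnitudes, shows what each feedback modality would display under the disturbance (EMG feedback conveys the command, which the gain change does not alter; force feedback conveys the produced force, which it does):

```python
# Minimal sketch of the covert gain manipulation (gain values are assumptions):
# prosthesis grasp force is proportional to the myoelectric command, and the
# gain is temporarily changed without the subject's knowledge.
NOMINAL_GAIN, HIGH_GAIN, LOW_GAIN = 1.0, 1.5, 0.6  # hypothetical magnitudes

def grasp_force(emg_command: float, gain: float) -> float:
    """Force produced for a normalized EMG command in [0, 1]."""
    return gain * emg_command

def displayed_feedback(emg_command: float, force: float, mode: str) -> float:
    # EMG feedback conveys the command itself, which the gain change does not
    # alter; force feedback conveys the produced force, which it does.
    return emg_command if mode == "emg" else force

emg = 0.5  # the subject aims for a moderate grasp
for gain, label in [(NOMINAL_GAIN, "nominal"), (HIGH_GAIN, "high-gain"),
                    (LOW_GAIN, "low-gain")]:
    force = grasp_force(emg, gain)
    print(f"{label:9s} force={force:.2f}  "
          f"EMG fb={displayed_feedback(emg, force, 'emg'):.2f}  "
          f"force fb={displayed_feedback(emg, force, 'force'):.2f}")
```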
Multiple types of brain-control systems have been applied in the field of rehabilitation. Facial-expression-based brain control has been proposed as a novel class of brain–computer interface (BCI) that balances user fatigue against classification accuracy. Unfortunately, existing machine learning algorithms often fail to identify the most relevant features of electroencephalogram (EEG) signals, which limits classifier performance. To address this problem, an improved classification method is proposed for facial-expression-based BCI (FE-BCI) systems, combining a convolutional neural network (CNN) with a genetic algorithm (GA): the CNN extracts features and performs classification, while the GA selects the hyperparameters most relevant to classification. To validate the proposed algorithm, its performance was systematically evaluated, and a trained CNN-GA model was used to control an intelligent car in real time. The average accuracy across all subjects was 89.21 ± 3.79%, and the highest accuracy was 97.71 ± 2.07%. Offline and online experiments both demonstrate that the improved FE-BCI system outperforms traditional methods.
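Neither the network architecture nor the GA's search space is specified here; as a loose, runnable illustration of the CNN-GA idea, the sketch below trains a small PyTorch CNN on synthetic EEG-shaped data and runs a simple selection/crossover/mutation loop over an assumed hyperparameter space (filter count, temporal kernel size, learning rate):

```python
import random
import torch
import torch.nn as nn

# Synthetic stand-in for preprocessed EEG epochs: 200 trials, 8 channels,
# 128 time samples, 4 facial-expression classes (all shapes are assumptions).
X = torch.randn(200, 1, 8, 128)
y = torch.randint(0, 4, (200,))
X_tr, y_tr, X_va, y_va = X[:160], y[:160], X[160:], y[160:]

def build_cnn(n_filters: int, kernel_t: int, lr: float):
    """A small CNN over (channels x time) EEG epochs."""
    model = nn.Sequential(
        nn.Conv2d(1, n_filters, kernel_size=(1, kernel_t),
                  padding=(0, kernel_t // 2)),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d((1, 8)),
        nn.Flatten(),
        nn.Linear(n_filters * 8, 4),
    )
    return model, torch.optim.Adam(model.parameters(), lr=lr)

def fitness(genes):
    """Briefly train one hyperparameter set and return validation accuracy."""
    n_filters, kernel_t, lr = genes
    model, opt = build_cnn(n_filters, kernel_t, lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):  # a few full-batch epochs, for illustration only
        opt.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        opt.step()
    with torch.no_grad():
        return (model(X_va).argmax(1) == y_va).float().mean().item()

# GA over an assumed search space: filter count, temporal kernel size, lr.
SPACE = [[8, 16, 32], [5, 9, 15], [1e-3, 3e-3, 1e-2]]
def random_genes():
    return tuple(random.choice(options) for options in SPACE)

population = [random_genes() for _ in range(6)]
for generation in range(3):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:2]                 # selection: keep the two fittest
    children = []
    while len(children) < len(population) - len(parents):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
        if random.random() < 0.3:        # mutation: resample one gene
            i = random.randrange(len(SPACE))
            child[i] = random.choice(SPACE[i])
        children.append(tuple(child))
    population = parents + children

print("best (n_filters, kernel_t, lr):", max(population, key=fitness))
```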