Reinforcement Learning Based Fast Self-Recalibrating Decoder for Intracortical Brain-Machine Interface

Sensors (Basel). 2020 Sep 27;20(19):5528. doi: 10.3390/s20195528.

Abstract

Background: Because of the nonstationarity of neural recordings in intracortical brain-machine interfaces, daily supervised retraining is typically required to maintain decoder performance. This burden can be reduced by using a reinforcement learning (RL) based self-recalibrating decoder. However, quickly exploring new knowledge while maintaining good performance remains a challenge for RL-based decoders.

Methods: To address this problem, we proposed an attention-gated RL-based algorithm that combines transfer learning, mini-batch training, and a weight updating scheme to accelerate weight updating and avoid over-fitting. The proposed algorithm was tested on intracortical neural data recorded from two monkeys to decode their reaching positions and grasping gestures.
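The abstract gives no implementation details; the following is a minimal sketch of how an attention-gated, reward-modulated mini-batch update with a transferred feature layer could be structured. All class names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): reward-modulated, mini-batch
# weight update for a single-hidden-layer classifier, warm-started from a
# previous session's weights (transfer learning).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class AttentionGatedRLDecoder:
    def __init__(self, w_hidden, w_out, lr=0.05):
        # Transfer learning: reuse the hidden-layer weights trained on earlier
        # sessions; only the output layer is adapted online.
        self.w_hidden = w_hidden          # (n_features, n_hidden), kept fixed
        self.w_out = w_out.copy()         # (n_hidden, n_classes), adapted
        self.lr = lr

    def forward(self, x):
        h = np.tanh(x @ self.w_hidden)    # fixed feature mapping
        p = softmax(h @ self.w_out)       # action (class) probabilities
        return h, p

    def update(self, x_batch, y_batch):
        """One mini-batch, reward-modulated update.

        x_batch: (batch, n_features) binned firing rates.
        y_batch: (batch,) true labels, used only to compute a binary
                 reward (correct / incorrect), not as a teaching signal.
        """
        h, p = self.forward(x_batch)
        n, k = p.shape
        # Sample actions from the current policy (exploration).
        actions = np.array([rng.choice(k, p=row) for row in p])
        reward = (actions == y_batch).astype(float)       # 1 if correct, else 0
        # Attention-gated update: only the chosen action's weights change,
        # scaled by the reward prediction error.
        delta = reward - p[np.arange(n), actions]
        grad = np.zeros_like(self.w_out)
        for i in range(n):
            grad[:, actions[i]] += delta[i] * h[i]
        self.w_out += self.lr * grad / n                   # mini-batch average
        return reward.mean()

# Usage with synthetic data standing in for binned spike counts.
n_features, n_hidden, n_classes = 96, 32, 4
w_hidden = rng.normal(scale=0.1, size=(n_features, n_hidden))  # assumed pretrained
w_out = rng.normal(scale=0.1, size=(n_hidden, n_classes))      # previous session's output layer
decoder = AttentionGatedRLDecoder(w_hidden, w_out)

for _ in range(200):
    labels = rng.integers(0, n_classes, size=16)
    feats = rng.normal(size=(16, n_features)) + labels[:, None] * 0.2
    batch_accuracy = decoder.update(feats, labels)
```

Restricting online adaptation to the output layer and averaging the reward-modulated gradient over mini-batches is one plausible way to obtain the fast, over-fitting-resistant updates the abstract describes.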

Results: The decoding results showed that the proposed algorithm achieved an approximately 20% increase in classification accuracy over the non-retrained classifier and even exceeded the accuracy of the daily retrained classifier. Moreover, compared with a conventional RL method, our algorithm improved accuracy by approximately 10% and accelerated online weight updating by approximately 70 times.

Conclusions: This paper proposed a self-recalibrating decoder that achieves good, robust decoding performance with fast weight updating, which may facilitate its application in wearable devices and clinical practice.

Keywords: adaptive decoder; intracortical brain–machine interface; reinforcement learning; transfer learning.

MeSH terms

  • Algorithms
  • Animals
  • Brain-Computer Interfaces*
  • Hand / physiology
  • Haplorhini
  • Machine Learning*