Brain-machine interface control of a robot arm using actor-critic rainforcement learning

Eric A. Pohlmeyer, Babak Mahmoudi, Shijia Geng, Noeline Prins, Justin C. Sanchez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states to robot actions (94%), and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
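The decoding scheme described in the abstract can be sketched in code. The following is an illustrative simulation only, not the authors' implementation: the simulated neural states, ensemble size, learning rates, and reward values are all assumptions. It shows the key idea the abstract emphasizes, namely that an actor maps a neural state to one of two actions while a critic turns a bare success/failure signal into a TD error that incrementally adapts the actor, with no pretraining dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 20   # hypothetical neural ensemble size
N_TRIALS = 500

# Hypothetical neural data: firing-rate vectors whose mean depends on the
# intended target (0 = left, 1 = right).
means = rng.normal(0.0, 1.0, size=(2, N_UNITS))

def neural_state(target):
    """One simulated trial's neural-state vector for the given target."""
    return means[target] + rng.normal(0.0, 0.5, size=N_UNITS)

class ActorCritic:
    """Minimal actor-critic decoder for a two-target task.

    The actor maps a neural state to one of two robot actions; the critic
    estimates the expected reward of that state and supplies the TD error
    that scales the actor update. Only a binary success/failure signal is
    consumed, mirroring the 'very basic feedback' the abstract describes.
    """
    def __init__(self, n_inputs, lr_actor=0.05, lr_critic=0.05):
        self.w_actor = np.zeros(n_inputs)    # action-preference weights
        self.w_critic = np.zeros(n_inputs)   # state-value weights
        self.lr_a, self.lr_c = lr_actor, lr_critic

    def act(self, x):
        # Logistic policy over the two actions.
        p_right = 1.0 / (1.0 + np.exp(-self.w_actor @ x))
        action = 1 if rng.random() < p_right else 0
        return action, p_right

    def update(self, x, action, p_right, reward):
        # Critic: one-step TD error (single-step, bandit-style trial).
        value = self.w_critic @ x
        td_error = reward - value
        self.w_critic += self.lr_c * td_error * x
        # Actor: policy-gradient step scaled by the critic's TD error;
        # (action - p_right) * x is grad log-probability for a logistic policy.
        self.w_actor += self.lr_a * td_error * (action - p_right) * x

def run(n_trials=N_TRIALS):
    """Train online for n_trials; return accuracy over the last 100 trials."""
    agent = ActorCritic(N_UNITS)
    correct = []
    for _ in range(n_trials):
        target = rng.integers(2)
        x = neural_state(target)
        action, p = agent.act(x)
        reward = 1.0 if action == target else -1.0   # basic feedback signal
        agent.update(x, action, p, reward)
        correct.append(action == target)
    return float(np.mean(correct[-100:]))
```

Because the critic's TD error is large for unexpected outcomes early on, the actor's weights move quickly in the first few trials, which is the property the paper highlights: adaptation from experience rather than from a pre-collected training set.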

Original language: English
Title of host publication: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
Pages: 4108-4111
Number of pages: 4
DOI: 10.1109/EMBC.2012.6346870
State: Published - Dec 14 2012
Event: 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS 2012 - San Diego, CA, United States
Duration: Aug 28, 2012 to Sep 1, 2012

Other

Other: 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS 2012
Country: United States
City: San Diego, CA
Period: 8/28/12 to 9/1/12

Fingerprint

Brain-Computer Interfaces
Reinforcement learning
Brain
Learning
Robots
Haplorhini
Learning algorithms
Decoding
Supervised learning
Real time control
Callithrix
Feedback
Reinforcement (Psychology)

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Signal Processing
  • Biomedical Engineering
  • Health Informatics

Cite this

Pohlmeyer, E. A., Mahmoudi, B., Geng, S., Prins, N., & Sanchez, J. C. (2012). Brain-machine interface control of a robot arm using actor-critic rainforcement learning. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS (pp. 4108-4111). [6346870] https://doi.org/10.1109/EMBC.2012.6346870

@inproceedings{6c58a62ab09443699b85af4c02a2eeb8,
title = "Brain-machine interface control of a robot arm using actor-critic rainforcement learning",
abstract = "Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states to robot actions (94{\%}), and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.",
author = "Pohlmeyer, {Eric A.} and Babak Mahmoudi and Shijia Geng and Noeline Prins and Sanchez, {Justin C.}",
year = "2012",
month = "12",
day = "14",
doi = "10.1109/EMBC.2012.6346870",
language = "English",
isbn = "9781424441198",
pages = "4108--4111",
booktitle = "Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS",

}
