RLLib

C++ library to predict, control, and represent learnable knowledge using on/off policy reinforcement learning

Saminda Abeyruwan, Ubbo E Visser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

RLLib is a lightweight C++ template library that implements incremental, standard, and gradient temporal-difference learning algorithms in reinforcement learning. It is an optimized library for robotic applications and embedded devices that operate under fast duty cycles (e.g., ≤ 30 ms). RLLib has been tested and evaluated on RoboCup 3D soccer simulation agents, NAO V4 humanoid robots, and Tiva C Series LaunchPad microcontrollers to predict, control, learn behavior, and represent learnable knowledge.
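As a rough illustration of the kind of incremental, linear gradient temporal-difference update such a library performs on each duty cycle, the sketch below implements TDC (one of the gradient-TD algorithms) as a small C++ template with a compile-time feature size. The class name, parameters, and toy two-state chain are hypothetical assumptions made for this sketch; this is not RLLib's actual API.

// Illustrative sketch only (not RLLib's API): linear TDC, a gradient
// temporal-difference algorithm, written as a small C++ template with a
// compile-time feature count so the per-step update is allocation-free.
#include <array>
#include <cstddef>
#include <iostream>

template <typename T, std::size_t N>
class LinearTDC {
 public:
  LinearTDC(T alpha, T beta, T gamma)
      : alpha_(alpha), beta_(beta), gamma_(gamma) {}

  // One incremental update from a transition (phi, reward, phiNext).
  // Returns the TD error, which is handy for monitoring learning.
  T update(const std::array<T, N>& phi, T reward,
           const std::array<T, N>& phiNext) {
    const T delta = reward + gamma_ * dot(theta_, phiNext) - dot(theta_, phi);
    const T phiDotW = dot(w_, phi);
    for (std::size_t i = 0; i < N; ++i) {
      // Primary weights: TD step plus the gradient-correction term.
      theta_[i] += alpha_ * (delta * phi[i] - gamma_ * phiNext[i] * phiDotW);
      // Secondary weights: track the expected TD error per feature.
      w_[i] += beta_ * (delta - phiDotW) * phi[i];
    }
    return delta;
  }

  // Predicted value of a state given its feature vector.
  T predict(const std::array<T, N>& phi) const { return dot(theta_, phi); }

 private:
  static T dot(const std::array<T, N>& a, const std::array<T, N>& b) {
    T s = T(0);
    for (std::size_t i = 0; i < N; ++i) s += a[i] * b[i];
    return s;
  }
  T alpha_, beta_, gamma_;    // step sizes and discount factor
  std::array<T, N> theta_{};  // value-function weights
  std::array<T, N> w_{};      // gradient-correction weights
};

int main() {
  // Toy two-state chain with one-hot features, just to exercise the update:
  // s0 --(r=0)--> s1 --(r=1)--> s0 --> ...
  constexpr std::size_t kFeatures = 2;
  LinearTDC<double, kFeatures> tdc(0.05, 0.01, 0.9);
  const std::array<double, kFeatures> s0{1.0, 0.0}, s1{0.0, 1.0};
  for (int t = 0; t < 5000; ++t) {
    tdc.update(s0, 0.0, s1);
    tdc.update(s1, 1.0, s0);
  }
  std::cout << "V(s0) ~ " << tdc.predict(s0)
            << ", V(s1) ~ " << tdc.predict(s1) << "\n";
  return 0;
}

The fixed-size std::array storage keeps each update allocation-free, mirroring the embedded-device constraint described in the abstract; on this toy chain with a discount of 0.9 the predictions should settle near V(s0) ≈ 4.7 and V(s1) ≈ 5.3.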

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 356-364
Number of pages: 9
Volume: 9513
ISBN (Print): 9783319293387
DOI: https://doi.org/10.1007/978-3-319-29339-4_30
State: Published - 2015
Event: 19th Annual RoboCup International Symposium, 2015 - Hefei, China
Duration: Jul 23 2015 → Jul 23 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9513
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 19th Annual RoboCup International Symposium, 2015
Country: China
City: Hefei
Period: 7/23/15 → 7/23/15

Keywords

  • Gradient temporal-difference
  • Reinforcement learning
  • RLLib

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Abeyruwan, S., & Visser, U. E. (2015). RLLib: C++ library to predict, control, and represent learnable knowledge using on/off policy reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9513, pp. 356-364). Springer Verlag. https://doi.org/10.1007/978-3-319-29339-4_30
