Embodied Synaptic Plasticity With Online Reinforcement Learning.
Kaiser, Jacques; Hoff, Michael; Konle, Andreas; Vasquez Tieck, J Camilo; Kappel, David; Reichard, Daniel; Subramoney, Anand; Legenstein, Robert; Roennau, Arne; Maass, Wolfgang; Dillmann, Rüdiger.
Affiliation
  • Kaiser J; FZI Research Center for Information Technology, Karlsruhe, Germany.
  • Hoff M; FZI Research Center for Information Technology, Karlsruhe, Germany.
  • Konle A; Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
  • Vasquez Tieck JC; FZI Research Center for Information Technology, Karlsruhe, Germany.
  • Kappel D; FZI Research Center for Information Technology, Karlsruhe, Germany.
  • Reichard D; Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
  • Subramoney A; Bernstein Center for Computational Neuroscience, III Physikalisches Institut-Biophysik, Georg-August Universität, Göttingen, Germany.
  • Legenstein R; Technische Universität Dresden, Chair of Highly Parallel VLSI Systems and Neuromorphic Circuits, Dresden, Germany.
  • Roennau A; FZI Research Center for Information Technology, Karlsruhe, Germany.
  • Maass W; Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
  • Dillmann R; Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
Front Neurorobot; 13: 81, 2019.
Article in English | MEDLINE | ID: mdl-31632262
The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework allows the evaluation of biologically plausible plasticity models in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning policies within the course of simulated hours for both tasks. Preliminary parameter explorations indicate that the learning rate and the temperature driving the stochastic processes governing synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could be beneficial for increasing the functionality of SPORE on visuomotor tasks.
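The abstract does not include the update equations, so the following is only a minimal, hypothetical sketch of the kind of reward-gated synaptic-sampling dynamics it refers to: each synaptic parameter follows a Langevin-style update whose drift combines a prior term with a reward-weighted eligibility trace, and whose noise is scaled by a temperature. All names (synaptic_sampling_step, beta, temperature, eligibility) are illustrative assumptions, not the authors' API or exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def synaptic_sampling_step(theta, eligibility, reward, *,
                           beta=1e-4, temperature=0.1, prior_std=1.0, dt=1.0):
    """One Euler-Maruyama step of a reward-gated synaptic-sampling update (sketch).

    theta       -- synaptic parameters (actual weights could be, e.g., exp(theta))
    eligibility -- per-synapse eligibility traces from recent pre/post activity
    reward      -- scalar reward signal broadcast to all synapses
    beta        -- learning rate; temperature scales the exploration noise
    """
    drift_prior = -theta / prior_std**2      # pull parameters toward a Gaussian prior
    drift_reward = reward * eligibility      # reward-modulated gradient estimate
    noise = rng.normal(size=theta.shape)     # Wiener increment
    return (theta
            + beta * (drift_prior + drift_reward) * dt
            + np.sqrt(2.0 * beta * temperature * dt) * noise)

# Closed-loop usage sketch: the simulated robot produces a reward, the spiking
# network produces eligibility traces, and parameters drift online. Random
# stand-ins replace both signals here.
theta = rng.normal(scale=0.5, size=100)
for step in range(1000):
    eligibility = rng.normal(size=theta.shape)   # stand-in for real traces
    reward = float(rng.random() < 0.1)           # stand-in for task reward
    theta = synaptic_sampling_step(theta, eligibility, reward)
```

In such a scheme, lowering the temperature and learning rate over time reduces the random exploration of parameter space, which is one way the retention issue mentioned in the abstract could be addressed.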
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Front Neurorobot Year: 2019 Document type: Article Country of affiliation: Germany Country of publication: Switzerland