Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task: A closed-loop robotic simulation

Niceto R. Luque, Jesús A. Garrido, Richard R. Carrillo, Egidio D'Angelo, Eduardo Ros

Research output: Contribution to journal › Article

28 Citations (Scopus)

Abstract

The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.

Original language: English
Article number: 97
Journal: Frontiers in Computational Neuroscience
Volume: 8
Issue number: AUG
DOI: 10.3389/fncom.2014.00097
Publication status: Published - Aug 15 2014


Keywords

  • Cerebellar nuclei
  • Inferior olive
  • Learning consolidation
  • Long-term synaptic plasticity
  • Modeling

ASJC Scopus subject areas

  • Neuroscience (miscellaneous)
  • Cellular and Molecular Neuroscience

Cite this

Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task: A closed-loop robotic simulation. / Luque, Niceto R.; Garrido, Jesús A.; Carrillo, Richard R.; D'Angelo, Egidio; Ros, Eduardo.

In: Frontiers in Computational Neuroscience, Vol. 8, No. AUG, 97, 15.08.2014.

Research output: Contribution to journal › Article

@article{01b1d7480121432f886f5fd3fa4da83f,
title = "Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task: A closed-loop robotic simulation",
abstract = "The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.",
keywords = "Cerebellar nuclei, Inferior olive, Learning consolidation, Long-term synaptic plasticity, Modeling",
author = "Luque, {Niceto R.} and Garrido, {Jes{\'u}s A.} and Carrillo, {Richard R.} and Egidio D'Angelo and Eduardo Ros",
year = "2014",
month = aug,
day = "15",
doi = "10.3389/fncom.2014.00097",
language = "English",
volume = "8",
pages = "97",
journal = "Frontiers in Computational Neuroscience",
issn = "1662-5188",
publisher = "Frontiers Research Foundation",
number = "AUG",

}

TY - JOUR

T1 - Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task

T2 - A closed-loop robotic simulation

AU - Luque, Niceto R.

AU - Garrido, Jesús A.

AU - Carrillo, Richard R.

AU - D'Angelo, Egidio

AU - Ros, Eduardo

PY - 2014/8/15

Y1 - 2014/8/15

N2 - The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.

AB - The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.

KW - Cerebellar nuclei

KW - Inferior olive

KW - Learning consolidation

KW - Long-term synaptic plasticity

KW - Modeling

UR - http://www.scopus.com/inward/record.url?scp=84924201566&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84924201566&partnerID=8YFLogxK

U2 - 10.3389/fncom.2014.00097

DO - 10.3389/fncom.2014.00097

M3 - Article

AN - SCOPUS:84924201566

VL - 8

JO - Frontiers in Computational Neuroscience

JF - Frontiers in Computational Neuroscience

SN - 1662-5188

IS - AUG

M1 - 97

ER -