TY - JOUR
T1 - Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task
T2 - A closed-loop robotic simulation
AU - Luque, Niceto R.
AU - Garrido, Jesús A.
AU - Carrillo, Richard R.
AU - D'Angelo, Egidio
AU - Ros, Eduardo
PY - 2014/8/15
AB - The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can account for only limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we explore the putative role of the IO-DCN connection by endowing it with adaptable weights and assessing its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. This model thus suggests that multiple distributed learning mechanisms provide a key to explaining the complex properties of procedural learning, and it opens up new experimental questions about synaptic plasticity in the cerebellar network.
KW - Cerebellar nuclei
KW - Inferior olive
KW - Learning consolidation
KW - Long-term synaptic plasticity
KW - Modeling
UR - http://www.scopus.com/inward/record.url?scp=84924201566&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84924201566&partnerID=8YFLogxK
DO - 10.3389/fncom.2014.00097
M3 - Article
AN - SCOPUS:84924201566
VL - 8
JO - Frontiers in Computational Neuroscience
JF - Frontiers in Computational Neuroscience
SN - 1662-5188
IS - AUG
M1 - 97
ER -