Parallelization of deep networks

Michele De Filippo De Grazia, Ivilin Stoianov, Marco Zorzi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Learning multiple levels of feature detectors in Deep Belief Networks is a promising approach both for neuro-cognitive modeling and for practical applications, but it comes at the cost of high computational requirements. Here we propose a method for parallelizing unsupervised generative learning in deep networks, based on distributing the training data among multiple computational nodes in a cluster. We show that this approach significantly reduces training time at only a minor cost in performance. We also show that a layerwise convergence stopping criterion yields faster training.
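The abstract describes distributing training data across cluster nodes during unsupervised generative learning. The paper's exact synchronization scheme is not given here, so the following is only a minimal single-process sketch of the general idea, assuming a data-parallel setup in which each node runs one step of contrastive divergence (CD-1) on its own data shard and the resulting weight updates are averaged; all names and hyperparameters are illustrative.

```python
# Hypothetical sketch of data-parallel RBM training (NOT the authors' exact
# method): each simulated "node" computes a CD-1 gradient on its own shard
# of the training data, and the updates are averaged before being applied.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(W, batch):
    """One contrastive-divergence (CD-1) weight gradient on a data shard."""
    h_prob = sigmoid(batch @ W)                       # positive phase
    h_state = (rng.random(h_prob.shape) < h_prob)     # sample hidden units
    v_recon = sigmoid(h_state @ W.T)                  # reconstruction
    h_recon = sigmoid(v_recon @ W)                    # negative phase
    pos = batch.T @ h_prob
    neg = v_recon.T @ h_recon
    return (pos - neg) / batch.shape[0]

n_visible, n_hidden, n_nodes = 16, 8, 4
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
data = rng.random((400, n_visible))
shards = np.array_split(data, n_nodes)  # one shard per computational node

lr = 0.05
for epoch in range(10):
    # In a real cluster the loop below would run concurrently, one shard per
    # node, followed by a parameter exchange; here it is simulated serially.
    grads = [cd1_gradient(W, shard) for shard in shards]
    W += lr * np.mean(grads, axis=0)
```

A layerwise stopping criterion, as mentioned in the abstract, would monitor a per-layer convergence measure (e.g. reconstruction error) and halt that layer's training once it plateaus, before moving to the next layer.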

Original language: English
Title of host publication: ESANN 2012 proceedings, 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Publisher: i6doc.com publication
Pages: 621-626
Number of pages: 6
ISBN (Print): 9782874190490
Publication status: Published - 2012
Event: 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2012 - Bruges, Belgium
Duration: Apr 25, 2012 - Apr 27, 2012


ASJC Scopus subject areas

  • Information Systems
  • Artificial Intelligence


Cite this

De Grazia, M. D. F., Stoianov, I., & Zorzi, M. (2012). Parallelization of deep networks. In ESANN 2012 proceedings, 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 621-626). i6doc.com publication.