Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

Cecile Bordier, Francesco Puja, Emiliano Macaluso

Research output: Contribution to journal › Article › peer-review


The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, both of which involved subjects watching an episode of a TV series. In Exp 1, we manipulated the presentation by switching color, motion and/or sound on and off at variable intervals, whereas in Exp 2 the video was played in its original version, preserving the continuous changes of all sensory features. For both vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., whether these processes relate to modality, stimulus features and/or saliency.
We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom-up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches.
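The approach described above derives regressors from the stimulus itself and enters them into a standard GLM. As a rough illustration (not the authors' actual model), a "temporal discontinuity" statistic can be computed as the mean absolute frame-to-frame luminance change, then convolved with a canonical haemodynamic response function to form a design-matrix column; the function names and parameters below are assumptions for the sketch:

```python
import math
import numpy as np

def temporal_discontinuity(frames):
    """Mean absolute luminance change between consecutive frames.

    frames: array of shape (n_frames, height, width), grayscale.
    Returns one value per frame transition (length n_frames - 1).
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6.0):
    """Canonical double-gamma haemodynamic response (SPM-style shape)."""
    def gpdf(x, k):
        # Gamma probability density with unit scale, zero for x <= 0.
        return np.where(x > 0, x ** (k - 1) * np.exp(-x) / math.gamma(k), 0.0)
    return gpdf(t, peak) - ratio * gpdf(t, undershoot)

# Toy example: 100 random 8x8 "frames" sampled at 1 s per frame.
rng = np.random.default_rng(0)
frames = rng.random((100, 8, 8))
regressor = temporal_discontinuity(frames)

# Convolve with the HRF (sampled at 1 s) to obtain a GLM design column.
hrf = double_gamma_hrf(np.arange(0.0, 32.0, 1.0))
design_column = np.convolve(regressor, hrf)[: len(regressor)]
```

In an actual analysis, such columns for each feature (color, motion, sound, saliency) would be resampled to the fMRI acquisition rate and fitted voxel-wise alongside nuisance regressors.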

Original language: English
Pages (from-to): 213-226
Number of pages: 14
Publication status: Published - Feb 5 2013


Keywords

  • Biologically-inspired vision and audition
  • Cinematographic material
  • Data-driven
  • Multi-sensory
  • Saliency

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Neurology


