Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

Cecile Bordier, Francesco Puja, Emiliano Macaluso

Research output: Contribution to journal › Article

Abstract

The investigation of brain activity using naturalistic, ecologically valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g., independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, both of which involved subjects watching an episode of a TV series. In Exp 1, we manipulated the presentation by switching color, motion and/or sound on and off at variable intervals, whereas in Exp 2 the video was played in its original version, with all the consequent continuous changes of the different sensory features intact. For both vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to overall stimulus saliency. Results showed that activity in the occipital visual cortex and the superior temporal auditory cortex co-varied with changes of the low-level features. Visual saliency was found to further boost activity in the extra-striate visual cortex and the posterior parietal cortex, while auditory saliency enhanced activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes; for example, these processes could relate to modality, stimulus features and/or saliency.
We conclude that the combination of computational modeling and GLM enables tracking of the impact of bottom-up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches.
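The analysis pipeline the abstract describes — extract a low-level stimulus statistic from the movie, convolve it with a haemodynamic response function, and enter it as a regressor in a GLM — can be sketched as follows. This is a minimal illustration on a synthetic "movie", not the authors' actual model: the frame-difference statistic, the toy video with scene cuts, the 1 s sampling rate, and all parameter values are assumptions made for the example.

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, ratio=1/6.0):
    """Double-gamma haemodynamic response function (canonical SPM-like shape)."""
    h = (t ** (a1 - 1) * np.exp(-t) / gamma(a1)
         - ratio * t ** (a2 - 1) * np.exp(-t) / gamma(a2))
    return h / h.sum()

def temporal_discontinuity(frames):
    """Mean absolute frame-to-frame luminance change: a simple 'temporal
    discontinuity' statistic, one value per frame (0 for the first frame)."""
    d = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return np.concatenate([[0.0], d])

rng = np.random.default_rng(0)
# Toy movie: 200 frames of 16x16 luminance with abrupt "cuts" every 40 frames.
frames = rng.random((200, 16, 16)) * 0.1
for cut in range(0, 200, 40):
    frames[cut:] += 0.5 * rng.random((16, 16))   # scene change at this frame

stat = temporal_discontinuity(frames)            # stimulus statistic per frame
t = np.arange(0.0, 30.0, 1.0)                    # assume 1 s per frame/volume
regressor = np.convolve(stat, hrf(t))[:len(stat)]

# GLM: simulate a voxel time course that tracks the regressor, then fit it.
X = np.column_stack([regressor, np.ones_like(regressor)])
y = 2.0 * regressor + 0.5 + 0.01 * rng.standard_normal(len(regressor))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[0] recovers the simulated effect size (close to 2.0)
```

In the real study the regressors were resampled to the fMRI acquisition times and entered alongside the other feature and saliency statistics in one design matrix; the least-squares fit above stands in for that GLM estimation step.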

Original language: English
Pages (from-to): 213-226
Number of pages: 14
Journal: NeuroImage
Volume: 67
ISSN: 1053-8119
DOI: 10.1016/j.neuroimage.2012.11.031
Publication status: Published - Feb 5 2013


Keywords

  • Biologically-inspired vision and audition
  • Cinematographic material
  • Data-driven
  • Multi-sensory
  • Saliency

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Neurology

Cite this

Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging. In: NeuroImage, Vol. 67, 05.02.2013, p. 213-226.

