Cross-modal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals (Stein & Meredith, 1993), but little evidence exists that similar principles also operate in humans. In the present study, we explored this possibility in patients with visual neglect, that is, patients with a visuospatial impairment. To test this hypothesis, neglect patients were required to detect brief flashes of light presented at one of six spatial positions, either in a unimodal condition (i.e., only visual stimuli were presented) or in a cross-modal condition (i.e., a sound was presented simultaneously with the visual target, either at the same spatial position or at one of the remaining five possible positions). The results showed an improvement in visual detection when the visual and auditory stimuli originated from the same position in space or at a small spatial disparity (16°). In contrast, no improvement was found when the spatial separation between the visual and auditory stimuli was larger than 16°. Moreover, the improvement was larger for the visual positions most affected by the spatial impairment, i.e., the most peripheral positions in the left visual field (LVF). In conclusion, the results of the present study considerably extend our knowledge of multisensory integration by demonstrating in humans an integrated visuoauditory system with functional properties similar to those found in animals.
ASJC Scopus subject areas
- Behavioral Neuroscience
- Experimental and Cognitive Psychology
- Cognitive Neuroscience