
Group of Attention, Action and Perception (GAAP)





Group coordinator: Salvador Soto-Faraco.

Group members: Jordi Navarra, Joan López-Moliner, Montserrat Juncadella, Maya Visser, Carolina Sánchez, Philip Jaekl, Agnès Alsius, Scott C. Sinnett, Alexis Pérez, Elena Azañón, Karla Camacho, Emmanuel Biau, Antonia Najas. 

Visit our web page. 




Subjective experience tells us that understanding someone in a second language is easier in person than over the telephone. The availability of extra contextual and sensory information (including orofacial movements) provides considerable help when the acoustic signal is insufficient. However, the potential benefits of multisensory integration in L2 have received little attention as yet. The GAAP focuses on the cognitive and neural processes leading to multisensory integration, and in particular, on those related to speech perception in bilingualism. These potential benefits are addressed at the level of comprehension (sentence level) as well as at the phonological level, using behavioural and neuroimaging methods.




Subproject 1: Ongoing developments in the benefits of multisensory integration in second language processing. We are currently focusing on the detailed processes leading to integration between seen and heard speech in L2, and on its differences vis-à-vis L1. Our research on crossmodal predictive coding is now aimed at addressing the capacity for prediction in second and unknown languages, and at determining its phonological characterization. This is complementary to our research on the temporal dynamics of audiovisual speech processing, where audiovisual perceptual asymmetries based on visual saliency are now being measured for speech segments as a function of whether they do or do not belong to the observer's native repertoire. This behavioural approach is paralleled by a new fMRI study on audiovisual integration effects for native vs. non-native phonemes.

Subproject 2: Ongoing developments in domain-general vs. speech-specific aspects of multisensory integration. The stimuli used so far to test crossmodal enhancement of visual perception consist mostly of static events. We are now starting to include dynamic stimuli, more akin to the speech signal, in order to reveal potential low-level multisensory interactions more clearly. In particular, our current work addresses the dissociation between magno- and parvo-cellular processing using dynamic point-light speakers. A further line of research addresses the decisional aspects in multisensory contexts; specifically, we are now using our previous behavioural results to infer, through computational modeling, whether the underlying decision-making process rests on low-level integration or on a late decisional stage.
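One standard way to formalize the contrast between low-level integration and a late decisional combination is to compare observed audiovisual performance against the prediction of an independent-channels (probability-summation) model. The sketch below illustrates that comparison with purely hypothetical hit rates; it is not the group's actual model, only a minimal example of the logic.

```python
def prob_summation(p_a, p_v):
    """Predicted audiovisual hit rate if responses come from two
    independent unisensory channels combined only at a late
    decision stage (probability summation)."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

# Hypothetical hit rates, for illustration only
p_a = 0.60            # auditory-alone detection rate
p_v = 0.55            # visual-alone detection rate
p_av_observed = 0.90  # audiovisual detection rate

# Late-decision benchmark: 1 - (1 - 0.60)(1 - 0.55) = 0.82
p_av_late = prob_summation(p_a, p_v)

if p_av_observed > p_av_late:
    # Performance exceeding the independent-channels bound is the
    # classic signature taken to suggest low-level (early) integration.
    print("Exceeds late-decision bound: consistent with early integration")
```

If observed audiovisual performance stays at or below this bound, a late, decision-level account suffices; exceeding it motivates the early-integration interpretation.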




Figure 1. (a) Typical setup for an audiovisual shadowing experiment. (b) Close-up of the visual realization of French /ve/ (top) and /be/ (bottom), 120 ms after onset.

Other ongoing research lines, complementary to the main areas above, include rule- and statistical-learning, the role of attention in multisensory integration, and several others related to the encoding of touch. Besides ERP and fMRI, our methods include Transcranial Magnetic Stimulation (TMS), Magnetoencephalography (MEG), neuropsychology, and studies with children.