
Multisensory Integration in second language

Project coordinator: Salvador Soto-Faraco.

Project members: César Ávila, Jordi Navarra, Marco Calabresi, Maya Visser, Carolina Sánchez, Agnès Alsius, Scott C. Sinnett, Alfonso Barrós-Loscertales, Ignacio Velasco, Emmanuel Biau.


GOALS OF THE PROJECT 

The benefits of multisensory integration in perception have been repeatedly documented. In speech perception, multisensory benefits have been studied extensively in the monolingual case. Here, we address audiovisual speech perception in non-native languages, focusing on its behavioural manifestations and its physiological expression.

 

SUBPROJECT 1. Behavioural correlates of AV speech processing in L2

(a) We addressed the temporal dynamics of auditory-visual speech processing as a function of prior language experience (Navarra et al., 2010). English and Spanish monolingual speakers performed audio-visual simultaneity judgments (SJ) on audio-visual speech in both English and Spanish. Perceptual simultaneity required vision to lead sound in time, but this temporal shift was larger for L1. We propose that experience modulates the constraining role of vision in audiovisual speech processing, and hence audiovisual alignment. We further address audiovisual timing across visual saliency levels and language backgrounds (Fig. 1).


Figure 1. Empirical data (dots) from the audio-visual simultaneity judgment (SJ) task as a function of visual saliency (from high to low: /θ/, /s/, and /x/ visemes). Lines are Gaussian fits. The visual temporal shift required for perceived simultaneity decreases as the visual aspect of the phoneme becomes more salient. Insets show the point of perceptual simultaneity (PSS, top) and the Just Noticeable Difference (JND, bottom).
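For illustration, the Python sketch below shows how PSS and JND estimates of the kind plotted in Fig. 1 can be obtained by fitting a Gaussian to SJ responses. The data points and the convention of reading the JND from the fitted width are assumptions made for the example; the actual fitting procedure is the one reported in Navarra et al. (2010).

    # Sketch of a simultaneity-judgment (SJ) analysis of the kind behind Fig. 1.
    # The data below are made up; only the method (Gaussian fit, PSS = fitted
    # mean, JND read from the fitted width) reflects the text.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, amp, pss, sigma):
        # Proportion of 'simultaneous' responses as a function of SOA (ms);
        # negative SOA means vision leads the sound.
        return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

    # Hypothetical SOAs (ms) and proportions of 'simultaneous' responses
    soas = np.array([-300, -200, -100, 0, 100, 200, 300])
    p_simult = np.array([0.20, 0.55, 0.90, 0.80, 0.45, 0.20, 0.08])

    # Fit the Gaussian; starting guesses: full amplitude, small visual lead, 100 ms width
    (amp, pss, sigma), _ = curve_fit(gaussian, soas, p_simult, p0=[1.0, -50.0, 100.0])

    print(f"PSS = {pss:.0f} ms (visual lead needed for perceived simultaneity)")
    print(f"JND ~ {sigma:.0f} ms (taking the fitted width as the sensitivity estimate)")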

 

(b) Perceiving language involves predictive coding at several levels of information. We addressed prediction across sensory modalities and asked whether language-specific knowledge is a necessary condition for it. We used a speeded audio-visual matching task involving a phonological comparison, and manipulated the immediate sentence context presented before the audiovisual target. We found (Sánchez et al., submitted) that prior visual context benefits auditory processing, whereas auditory context does not speed up visual processing. This supports the notion of predictive coding across sensory modalities and, in light of the modality asymmetry, we propose that prediction operates at a phonological level where vision provides advance information about the auditory signal, but not vice versa. In addition, we found that cross-modal prediction occurs more strongly in the native language. We suggest that these predictions are based on information built up from language-specific experience, rather than from domain-general multisensory experience.
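As a minimal sketch of how the modality asymmetry in cross-modal prediction could be quantified from reaction times, the code below compares the context benefit (no-context RT minus context RT) for visual-context/auditory-target versus auditory-context/visual-target trials. The condition labels and RT values are hypothetical; only the analysis logic follows the description above.

    # Hypothetical per-subject mean RTs (ms); the logic mirrors the text:
    # compute the context benefit per modality direction, then compare directions.
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    n_subjects = 20

    rt_aud_target_no_ctx  = rng.normal(620, 40, n_subjects)
    rt_aud_target_vis_ctx = rng.normal(570, 40, n_subjects)  # visual context speeds auditory targets
    rt_vis_target_no_ctx  = rng.normal(610, 40, n_subjects)
    rt_vis_target_aud_ctx = rng.normal(605, 40, n_subjects)  # auditory context barely helps visual targets

    benefit_v_to_a = rt_aud_target_no_ctx - rt_aud_target_vis_ctx
    benefit_a_to_v = rt_vis_target_no_ctx - rt_vis_target_aud_ctx

    # Paired comparison of the two benefits (one value per subject and direction)
    t, p = ttest_rel(benefit_v_to_a, benefit_a_to_v)
    print(f"V->A benefit = {benefit_v_to_a.mean():.0f} ms, A->V benefit = {benefit_a_to_v.mean():.0f} ms")
    print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")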


SUBPROJECT 2. Physiological correlates of AV speech processing in L2

(a) We investigated the neural correlates of multisensory processing in L1 and L2 using fMRI. We tested English-Spanish and Spanish-English bilinguals (Barrós-Loscertales et al., 2009) with speech fragments (sentences) in their L1 and L2 under four conditions: (A)uditory alone, (V)isual alone, audio-visual congruent (AVc), and audio-visual incongruent (AVi). A behavioural study using a shadowing task under similar conditions revealed strong audiovisual congruency effects in L1 but not in L2 (see Fig. 2a). In the fMRI data, we searched for multisensory integration areas (AV > max[A, V] contrast) and for cross-modal congruency sites (contrast [(AVc > AVi) OR (AVi > AVc)]). Two findings emerged. First, the brain areas sensitive to multisensory integration in L1 and L2 overlap. Second, audio-visual congruency in L1 leads to greater activity in auditory association areas, whereas congruency in L2 leads to activation of visual cortex. This suggests alternative mechanisms for processing first- and second-language audiovisual speech (see Fig. 2b).


Figure 2. (a) Behavioural results of the shadowing task; bars represent the relative gain obtained in AV presentations with respect to audio alone. (b) fMRI results highlighting AV congruency-sensitive brain areas. Data are plotted as a function of L1 vs. L2, with the English and Spanish groups collapsed.
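The two contrasts named above can be illustrated voxel-wise with plain NumPy on hypothetical beta estimates; the sketch below only reproduces the contrast logic (max criterion and congruency difference), not the actual fMRI analysis pipeline, which would be run in a dedicated package such as SPM.

    # Hypothetical per-voxel condition betas: Auditory, Visual, AV congruent, AV incongruent
    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels = 1000
    beta_A   = rng.normal(1.0, 0.5, n_voxels)
    beta_V   = rng.normal(1.0, 0.5, n_voxels)
    beta_AVc = rng.normal(1.6, 0.5, n_voxels)
    beta_AVi = rng.normal(1.2, 0.5, n_voxels)

    beta_AV = (beta_AVc + beta_AVi) / 2                      # AV response collapsed over congruency

    # Multisensory integration: AV > max[A, V] ("max criterion")
    integration_mask = beta_AV > np.maximum(beta_A, beta_V)

    # Cross-modal congruency: voxels differing between AVc and AVi in either direction
    threshold = 0.5                                          # arbitrary cutoff for this sketch
    congruency_mask = np.abs(beta_AVc - beta_AVi) > threshold

    print(f"{integration_mask.sum()} voxels pass the max criterion")
    print(f"{congruency_mask.sum()} voxels show a congruency effect")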

 

(b) Event-Related Potentials. The classical Mismatch Negativity (MMN) difference wave, a signature of an early comparison process, is sensitive to phonological categories. We tested this phonological sensitivity for L2 phonemes under auditory and audiovisual conditions. Behavioural results revealed that the auditory /be/-/ve/ contrast is discriminated nearly perfectly by French (L1) speakers but not by Spanish (L2) speakers. However, the Spanish speakers' performance improves when vision is added. In the ongoing ERP study measuring the MMN, /be/ and /ve/ are presented acoustically, visually, or audiovisually. We expect that sensitivity to this contrast in L2 will be absent in auditory-only conditions but restored in audio-visual conditions.
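As a sketch of the core analysis step, the code below shows how an MMN difference wave is obtained: average the epochs for the frequent (standard) and rare (deviant) stimuli and subtract the two ERPs. The signal, sampling rate, and epoch window are synthetic assumptions, not data from the study.

    # Synthetic single-channel ERP epochs; the MMN is the deviant-minus-standard difference wave.
    import numpy as np

    rng = np.random.default_rng(2)
    sfreq = 500                                   # sampling rate in Hz (assumed)
    times = np.arange(-0.1, 0.4, 1 / sfreq)       # epoch from -100 to +400 ms

    def simulate_epochs(n_trials, mmn_amplitude):
        # Noise plus a negative-going deflection peaking around 200 ms
        noise = rng.normal(0, 2.0, (n_trials, times.size))
        deflection = mmn_amplitude * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
        return noise - deflection

    standard_epochs = simulate_epochs(800, mmn_amplitude=0.0)   # frequent stimulus, e.g. /be/
    deviant_epochs  = simulate_epochs(160, mmn_amplitude=2.5)   # rare stimulus, e.g. /ve/

    mmn = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

    peak = mmn.argmin()
    print(f"MMN peak of {mmn[peak]:.2f} (a.u.) at {times[peak] * 1000:.0f} ms")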

 

OTHER SUBPROJECTS 

  1. Searching for multisensory coincidence in speech (Alsius et al., submitted).
  2. Structural determinants of rule learning (Toro et al., in press).
  3. Cross-modal narrowing in infants (Pons et al., 2009).