
Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS

Authors: Alemi R, Wolfe J, Neumann S, Manning J, Towler W, Koirala N, Gracco VL, Deroche M


Affiliations

1 Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada. Electronic address: razieh.alemi@mail.concordia.ca.
2 Oberkotter Foundation, Oklahoma City, OK, USA.
3 Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA.
4 Haskins Laboratories, 300 George St., New Haven, CT 06511, USA.
5 Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.

Description

Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions, in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition to exhibit good language and literacy.


Keywords: Audiovisual integration; Cochlear implant; Cross-modal plasticity; Hearing loss


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/37989460/

DOI: 10.1016/j.brainresbull.2023.110817