Keyword search (4,163 papers available)

"neural decoding" keyword-tagged publications:

1. SpeechBrain-MOABB: An open-source Python library for benchmarking deep neural networks applied to EEG signals
   Authors: Borra D; Paissan F; Ravanelli M | PMID: 39265481 | Dept: ENCS
2. Decoding of Envelope vs. Fundamental Frequency During Complex Auditory Stream Segregation
   Authors: Greenlaw KM; Puschmann S; Coffey EBJ | PMID: 37215227 | Dept: PSYCHOLOGY


Title: Decoding of Envelope vs. Fundamental Frequency During Complex Auditory Stream Segregation
Authors: Greenlaw KM; Puschmann S; Coffey EBJ
Link: https://pubmed.ncbi.nlm.nih.gov/37215227/
DOI: 10.1162/nol_a_00013
Publication: Neurobiology of Language (Cambridge, Mass.)
Keywords: auditory stream segregation; hearing-in-noise; neural decoding; pitch representation; reconstruction; speech-in-noise
PMID: 37215227 | Category: | Date Added: 2023-05-22
Dept Affiliation: PSYCHOLOGY

Description:

Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound's amplitude envelope (i.e., syllabic rate or rhythm; 1-9 Hz) and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar, simultaneously presented sound streams. We show that while both lower- and higher-frequency information about the entire sound stream is represented in the brain's response, the to-be-attended sound stream is strongly enhanced only in the slower, lower-frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher-level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
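The "decoding" referred to in the abstract is commonly done with a backward (stimulus-reconstruction) model: a linear map from a short window of EEG samples back to the stimulus feature (here, the amplitude envelope), with reconstruction accuracy scored as the correlation between the reconstructed and true feature. The paper's exact pipeline is not given here, so the sketch below is only a generic illustration of that technique on simulated data; the sampling rate, lag window, ridge penalty, and the simulated signals are all assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): a slow "envelope" signal, and one EEG
# channel carrying a noisy, delayed copy of it.
fs = 64                      # assumed sampling rate (Hz)
n = fs * 60                  # 60 s of data
envelope = rng.standard_normal(n)
eeg = np.roll(envelope, 8) + 2.0 * rng.standard_normal(n)  # ~125 ms neural lag + noise

# Backward model: reconstruct envelope[t] from EEG samples at t .. t+250 ms,
# fit with ridge regression on the first half of the data.
lags = np.arange(16)         # 0-250 ms of post-stimulus lags at 64 Hz
X = np.stack([np.roll(eeg, -lag) for lag in lags], axis=1)

half = n // 2
X_tr, X_te = X[:half], X[half:]
y_tr, y_te = envelope[:half], envelope[half:]

lam = 1.0                    # ridge penalty (would be cross-validated in practice)
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(len(lags)), X_tr.T @ y_tr)

# Decoding accuracy = Pearson correlation between reconstruction and target.
recon = X_te @ w
r = np.corrcoef(recon, y_te)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```

On this toy data the decoder recovers a positive correlation driven by the single informative lag; in real EEG work the same structure is used per frequency band (e.g., an envelope band versus an F0 band), which is how band-specific representation and attentional enhancement can be compared.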





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026