
Open Access

Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees

Lookup NU author(s): Matthew Choy, Professor Tim Griffiths


Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

© 2023, The Author(s).

Objectives: Cochlear implant (CI) users show large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of the variance remains unexplained. Recent work in normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users.

Design: Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks.

Results: No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution.

Conclusion: Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
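The analysis described in the abstract amounts to a standard regression workflow: check the three predictors for collinearity, then fit a multiple linear regression of sentence-in-noise scores on spectral resolution, temporal resolution, and figure-ground performance. The Python sketch below illustrates that workflow on simulated data; the variable names, simulated values, and use of the statsmodels library are illustrative assumptions, not the authors' actual analysis code or measures.

# Hypothetical sketch (not the authors' code): predicting sentence-in-noise
# scores from spectral resolution, temporal resolution, and figure-ground
# performance with multiple linear regression, after a collinearity check.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 47  # number of CI users in the study
df = pd.DataFrame({
    "spectral": rng.normal(size=n),       # e.g., spectral resolution score (illustrative)
    "temporal": rng.normal(size=n),       # e.g., temporal resolution score (illustrative)
    "figure_ground": rng.normal(size=n),  # stochastic figure-ground score (illustrative)
})
# Simulated outcome so the example runs end to end
df["speech_in_noise"] = (
    0.4 * df["spectral"] + 0.3 * df["temporal"]
    + 0.3 * df["figure_ground"] + rng.normal(scale=0.5, size=n)
)

# Collinearity check: variance inflation factor for each predictor
X = sm.add_constant(df[["spectral", "temporal", "figure_ground"]])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print("VIFs:", vifs)

# Multiple linear regression: unique contribution of each predictor
model = sm.OLS(df["speech_in_noise"], X).fit()
print(model.summary())

With real data, significant coefficients for all three predictors and low VIFs would correspond to the pattern reported in the Results: the figure-ground task explains variance in speech-in-noise performance beyond spectral and temporal resolution.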


Publication metadata

Author(s): Choi I, Gander PE, Berger JI, Woo J, Choy MH, Hong J, Colby S, McMurray B, Griffiths TD

Publication type: Article

Publication status: Published

Journal: JARO - Journal of the Association for Research in Otolaryngology

Year: 2023

Volume: 24

Pages: 607-617

Online publication date: 07/12/2023

Acceptance date: 14/11/2023

Date deposited: 18/12/2023

ISSN (print): 1525-3961

ISSN (electronic): 1438-7573

Publisher: Springer Nature

URL: https://doi.org/10.1007/s10162-023-00918-x

DOI: 10.1007/s10162-023-00918-x




Funding

Funder reference    Funder name
DC000242 36         National Institute on Deafness and Other Communication Disorders
MR/T032553/1        MRC
W81XWH1910637       US Department of Defense
