A new software tool to optimize frequency table selection for cochlear implants
Jethanamest, Daniel; Tan, Chin-Tuan; Fitzgerald, Matthew B; Svirsky, Mario A
HYPOTHESIS: When cochlear implant (CI) users are allowed to self-select the 'most intelligible' frequency-to-electrode table, some of them choose one that differs from the default frequency table that is normally used in clinical practice. BACKGROUND: CIs reproduce the tonotopicity of normal cochleas using frequency-to-electrode tables that assign higher-frequency sounds to more basal electrodes and lower-frequency sounds to more apical electrodes. Current audiologic practice uses a default frequency-to-electrode table for most patients. However, individual differences in cochlear size, neural survival, and electrode positioning may result in different tables sounding most intelligible to different patients. No clinical tools currently exist to facilitate this aspect of fitting. METHODS: A software tool was designed that enables CI users to self-select a most intelligible frequency table. Users explore a 2-dimensional space that represents a range of different frequency tables. Unlike existing tools, this software enables users to interactively audition speech processed by different frequency tables and quickly identify a preferred one. Pilot testing was performed in 11 long-term, postlingually deaf CI users. RESULTS: The software tool was designed, developed, tested, and debugged. Patients successfully used the tool to sample frequency tables and to self-select tables deemed most intelligible, which for approximately half of the users differed from the clinical default. CONCLUSION: A software tool allowing CI users to self-select frequency-to-electrode tables may help in fitting postlingually deaf users. This novel approach may transform current methods of CI fitting. (An illustrative code sketch follows this entry.)
PMCID:2962926
PMID: 20729774
ISSN: 1537-4505
CID: 113658
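
The tool described above lets listeners move through a 2-dimensional space of candidate frequency-to-electrode tables, but the abstract does not give the mapping formulas. The sketch below shows one common way such a table can be constructed, by dividing an overall analysis range into logarithmically spaced bands, one per electrode; the function name, the log-spacing rule, and the electrode count are assumptions made for illustration, not the method used by the published tool.

    import numpy as np

    def frequency_table(low_hz, high_hz, n_electrodes=22):
        """Illustrative frequency-to-electrode table: split [low_hz, high_hz]
        into logarithmically spaced analysis bands, one per electrode. The
        log-spacing rule and parameter names are assumptions, not the mapping
        used by any particular implant or by the tool described above."""
        edges = np.geomspace(low_hz, high_hz, n_electrodes + 1)
        # Electrode 1 is treated here as the most apical (lowest-frequency) contact.
        return [(e, float(lo), float(hi))
                for e, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1)]

    # A listener exploring the tool's 2-dimensional space is, in effect, moving
    # two parameters such as these corner frequencies; each point corresponds to
    # one candidate table that can be auditioned.
    for electrode, lo, hi in frequency_table(250.0, 8000.0, n_electrodes=12):
        print(f"electrode {electrode:2d}: {lo:7.1f} - {hi:7.1f} Hz")
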
Speech perception in congenitally deaf children receiving cochlear implants in the first year of life
Tajudeen, Bobby A; Waltzman, Susan B; Jethanamest, Daniel; Svirsky, Mario A
OBJECTIVE: To investigate whether children implanted in the first year of life show higher levels of speech perception than later-implanted children when compared at the same ages, and to investigate the time course of sensitive periods for developing speech perception skills. More specifically, to determine whether faster gains in speech perception are made by children implanted before 1 year of age relative to those implanted at 2 or 3 years. STUDY DESIGN: Retrospective cohort study. SETTING: Tertiary academic referral center. PATIENTS: 117 children with congenital profound bilateral sensorineural hearing loss, with no additional identified disabilities. INTERVENTION: Cochlear implantation in the first, second, or third year of life. MAIN OUTCOME MEASURE: Development curves showing Lexical Neighborhood Test (LNT) word identification scores as a function of age. RESULTS: Children implanted within the first year of life have a mean advantage of 8.2% in LNT-easy word scores over those implanted in the second year (p < 0.001) and a 16.8% advantage in LNT-easy word scores over those implanted in the third year of life (p < 0.001). These advantages remained statistically significant after accounting for sex, residual hearing, and bilateral cochlear implant use. When speech perception scores were expressed as a function of 'hearing age' rather than chronological age, however, there were no significant differences among the 3 groups. CONCLUSION: There is a clear speech perception advantage for earlier-implanted children over later-implanted children when compared at the same age, but not when compared at the same time after implantation. Thus, the sensitive period for developing word identification seems to extend at least until age 3 years. (An illustrative code sketch follows this entry.)
PMCID:2962931
PMID: 20814343
ISSN: 1537-4505
CID: 113659
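
The key analytic step in the entry above is re-expressing scores as a function of 'hearing age' (time since implantation) rather than chronological age. The minimal sketch below illustrates that re-alignment; the field names and numbers are made up for illustration and are not the study's data.

    from dataclasses import dataclass

    @dataclass
    class Child:
        # Field names and values are illustrative, not the study's variables.
        age_at_test_months: float
        age_at_implant_months: float
        lnt_easy_percent: float

    def hearing_age_months(child: Child) -> float:
        """'Hearing age': time elapsed since implantation, i.e. the duration of
        auditory experience with the device."""
        return child.age_at_test_months - child.age_at_implant_months

    # Comparing groups at the same chronological age gives earlier-implanted
    # children a head start; re-plotting scores against hearing age removes it.
    children = [Child(60, 10, 72.0), Child(60, 20, 64.0), Child(60, 34, 55.0)]
    for c in children:
        print(f"hearing age {hearing_age_months(c):5.1f} mo -> LNT-easy {c.lnt_easy_percent:.0f}%")
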
Speech production intelligibility of early implanted pediatric cochlear implant users
Habib, Mirette G; Waltzman, Susan B; Tajudeen, Bobby; Svirsky, Mario A
OBJECTIVES: To investigate the influence of age, and age at implantation, on speech production intelligibility in prelingually deaf pediatric cochlear implant recipients. METHODS: Forty prelingually, profoundly deaf children who received cochlear implants between 8 and 40 months of age were studied. Their age at testing ranged between 2.5 and 18 years. Children were recorded repeating the 10 sentences in the Beginner's Intelligibility Test. These recordings were played back to normal-hearing listeners who were unfamiliar with deaf speech and who were instructed to write down what they heard. The listeners also rated each subject's speech production intelligibility on a 5-point rating scale. The main outcome measures were the percentage of target words correctly transcribed and the intelligibility ratings, both averaged across 3 normal-hearing listeners. RESULTS: The data showed a strong effect of age at testing, with older children being more intelligible. This effect was particularly pronounced for children implanted in the first 24 months of life, all of whom had speech production intelligibility scores of 80% or higher when they were tested at age 5.5 years or older. This was true for only 5 out of 9 children implanted at age 25-36 months. CONCLUSIONS: Profoundly deaf children who receive cochlear implants in the first 2 years of life produce highly intelligible speech before the age of 6. This is also true for most, but not all, children implanted in their third year. (An illustrative code sketch follows this entry.)
PMCID:2897907
PMID: 20472308
ISSN: 1872-8464
CID: 110684
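
The main outcome above is the percentage of target words correctly transcribed, averaged across three naive listeners. The sketch below shows a minimal version of that scoring step; the word-matching rule is a simplification and is not the scoring protocol used in the study.

    def percent_words_correct(target: str, transcription: str) -> float:
        """Crude word-level score: the fraction of target word tokens that also
        appear in the listener's transcription. Real intelligibility scoring
        rules (e.g., for homophones or morphological variants) are more detailed."""
        target_words = target.lower().split()
        heard = transcription.lower().split()
        hits = 0
        for word in target_words:
            if word in heard:
                hits += 1
                heard.remove(word)  # each transcribed word may match only once
        return 100.0 * hits / len(target_words)

    # Average the score across the three normal-hearing listeners, as in the study design.
    transcriptions = ["the boy ran down the street",
                      "a boy ran on the street",
                      "the boy ran"]
    scores = [percent_words_correct("the boy ran down the street", t) for t in transcriptions]
    print(round(sum(scores) / len(scores), 1))
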
A Model of Incomplete Adaptation to a Severely Shifted Frequency-to-Electrode Mapping by Cochlear Implant Users
Sagi, Elad; Fu, Qian-Jie; Galvin, John J 3rd; Svirsky, Mario A
In the present study, a computational model of phoneme identification was applied to data from a previous study, wherein cochlear implant (CI) users' adaptation to a severely shifted frequency allocation map was assessed regularly over 3 months of continual use. This map provided more input filters below 1 kHz, but at the expense of introducing a downward frequency shift of up to one octave relative to the CI subjects' clinical maps. At the end of the 3-month study period, it was unclear whether subjects' asymptotic speech recognition performance represented a complete or partial adaptation. To clarify the matter, the computational model was applied to the CI subjects' vowel identification data in order to estimate the degree of adaptation, and to predict performance levels with complete adaptation to the frequency shift. Two model parameters were used to quantify this adaptation: one representing the listener's ability to shift their internal representation of how vowels should sound, and the other representing the listener's uncertainty in consistently recalling these representations. Two of the three CI users could shift their internal representations towards the new stimulation pattern within 1 week, whereas one could not do so completely even after 3 months. Subjects' uncertainty for recalling these representations increased substantially with the frequency-shifted map. Although this uncertainty decreased after 3 months, it remained much larger than subjects' uncertainty with their clinically assigned maps. This result suggests that subjects could not completely remap their phoneme labels, stored in long-term memory, towards the frequency-shifted vowels. The model also predicted that even with complete adaptation, the frequency-shifted map would not have resulted in improved speech understanding. Hence, the model presented here can be used to assess adaptation, and the anticipated gains in speech perception expected from changing a given CI device parameter. (An illustrative code sketch follows this entry.)
PMCID:2820204
PMID: 19774412
ISSN: 1438-7573
CID: 106591
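
The model in the entry above quantifies adaptation with two parameters: a shift of the listener's internal vowel representations toward the new stimulation pattern, and an uncertainty term for recalling those representations. The toy one-dimensional sketch below illustrates the roles of the two parameters only; the template locations, units, and decision rule are assumptions and do not reproduce the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy internal vowel templates on an arbitrary place-of-stimulation axis (mm).
    TEMPLATES = {"i": 6.0, "a": 12.0, "u": 18.0}

    def identify(stimulus_mm, shift_mm, uncertainty_mm, n_trials=2000):
        """Nearest-template classification after moving every template by
        shift_mm (partial adaptation) and jittering each recalled template with
        Gaussian noise of SD uncertainty_mm (recall uncertainty)."""
        labels = list(TEMPLATES)
        counts = dict.fromkeys(labels, 0)
        for _ in range(n_trials):
            recalled = {v: TEMPLATES[v] + shift_mm + rng.normal(0.0, uncertainty_mm)
                        for v in labels}
            choice = min(labels, key=lambda v: abs(stimulus_mm - recalled[v]))
            counts[choice] += 1
        return {v: counts[v] / n_trials for v in labels}

    # A frequency-shifted map moves the stimulation for /a/ away from its template
    # (here from 12 mm to 9 mm). Shifting the templates by -3 mm models complete
    # adaptation; a larger uncertainty_mm degrades identification either way.
    print(identify(stimulus_mm=9.0, shift_mm=0.0, uncertainty_mm=1.5))
    print(identify(stimulus_mm=9.0, shift_mm=-3.0, uncertainty_mm=1.5))
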
A mathematical model of vowel identification by users of cochlear implants
Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi; Svirsky, Mario A
A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners' resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as the Skinner et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307-311] frequency mapping study and the general trend in the vowel results of Zeng and Galvin's [(1999). Ear Hear. 20, 60-74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing-impaired populations, and for providing a guide for clinical research and clinical practice. (An illustrative code sketch follows this entry.)
PMCID:2830268
PMID: 20136228
ISSN: 0001-4966
CID: 106597
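
The model above predicts vowel confusions from a listener's resolving power for mean formant locations along the electrode array, using a single free parameter. The sketch below is in the same spirit only: the formant values, the frequency-to-place mapping, and the use of one Gaussian noise parameter are illustrative placeholders, not the published model or its fitted values.

    import numpy as np

    rng = np.random.default_rng(1)

    # Nominal F1/F2 values in Hz for a few steady-state vowels (illustrative only).
    VOWELS = {"i": (270, 2290), "ae": (660, 1720), "u": (300, 870), "a": (730, 1090)}

    def place_mm(freq_hz, low=250.0, high=8000.0, length_mm=20.0):
        """Map a frequency to a nominal position along the array, assuming the
        analysis range is distributed logarithmically over length_mm of contacts."""
        return length_mm * np.log(freq_hz / low) / np.log(high / low)

    def confusion_matrix(sigma_mm, n_trials=2000):
        """sigma_mm is the single free parameter: the SD of Gaussian noise on the
        perceived formant places. Responses are the nearest template in (F1, F2)
        place space; the output is a row-normalized confusion matrix."""
        labels = list(VOWELS)
        templates = {v: np.array([place_mm(f) for f in VOWELS[v]]) for v in labels}
        cm = np.zeros((len(labels), len(labels)))
        for i, v in enumerate(labels):
            for _ in range(n_trials):
                percept = templates[v] + rng.normal(0.0, sigma_mm, size=2)
                j = int(np.argmin([np.linalg.norm(percept - templates[w]) for w in labels]))
                cm[i, j] += 1
        return labels, cm / n_trials

    labels, cm = confusion_matrix(sigma_mm=2.0)  # a larger sigma_mm produces more confusions
    print(labels)
    print(np.round(cm, 2))
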
Effects of semantic context and feedback on perceptual learning of speech processed through an acoustic simulation of a cochlear implant
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic representation. Transfer-appropriate processing theory suggests that such feedback was most successful because the original learning conditions were reinstated at testing: performance was facilitated when both training and testing contained degraded stimuli. In addition, the effect of semantic context on generalization was assessed by training listeners on meaningful or anomalous sentences. Training with anomalous sentences was as effective as that with meaningful sentences, suggesting that listeners were encouraged to use acoustic-phonetic information to identify speech rather than to make predictions from semantic context.
PMCID:2818425
PMID: 20121306
ISSN: 1939-1277
CID: 114806
Transfer of auditory perceptual learning with spectrally reduced speech to speech and nonspeech tasks: implications for cochlear implants
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
OBJECTIVE: The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. DESIGN: Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed-set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. RESULTS: Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a greater degree, post-test speech performance were significantly correlated with environmental sound identification. For both groups, environmental sounds characterized as having more salient temporal information were identified more often than environmental sounds characterized as having more salient spectral information. CONCLUSIONS: Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not affect the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting either that explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone. (An illustrative code sketch follows this entry.)
PMCID:2794833
PMID: 19773659
ISSN: 1538-4667
CID: 114807
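
Both Loebach et al. entries above use noise-vocoded speech as an acoustic simulation of a cochlear implant: the signal is split into a small number of frequency bands, the temporal envelope of each band modulates band-limited noise, and the modulated bands are summed. The sketch below is a deliberately crude, FFT-based illustration of that processing chain; the band spacing, envelope smoothing, and corner frequencies are assumptions and do not reproduce the stimuli used in these studies.

    import numpy as np

    def bandpass_fft(x, fs, lo, hi):
        """Crude brick-wall bandpass filter via FFT bin masking (illustration only)."""
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        spectrum[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spectrum, n=len(x))

    def envelope(x, fs, cutoff_hz=160.0):
        """Temporal envelope via half-wave rectification and a moving-average lowpass."""
        win = max(1, int(fs / cutoff_hz))
        return np.convolve(np.maximum(x, 0.0), np.ones(win) / win, mode="same")

    def noise_vocode(x, fs, n_channels=8, lo=250.0, hi=8000.0, seed=0):
        """Eight-channel noise vocoder sketch: per-band envelopes of the input
        modulate band-limited noise carriers, which are then summed."""
        rng = np.random.default_rng(seed)
        edges = np.geomspace(lo, hi, n_channels + 1)
        out = np.zeros_like(x)
        for band_lo, band_hi in zip(edges[:-1], edges[1:]):
            env = envelope(bandpass_fft(x, fs, band_lo, band_hi), fs)
            carrier = bandpass_fft(rng.standard_normal(len(x)), fs, band_lo, band_hi)
            out += env * carrier
        return out / (np.max(np.abs(out)) + 1e-12)

    # A one-second amplitude-modulated tone stands in for a speech signal here.
    fs = 16000
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 500 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t))
    vocoded = noise_vocode(signal, fs)
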
The effect of temporal gap identification on speech perception by users of cochlear implants
Sagi, Elad; Kaiser, Adam R; Meyer, Ted A; Svirsky, Mario A
PURPOSE: This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration, and the relation of this ability to speech understanding in CI users. METHOD: Sixteen NH adults and eleven postlingually deafened adults with CIs identified synthetic vowel-like stimuli that were either continuous or contained an intervening silent gap ranging from 15 to 90 ms. Cumulative d', an index of discriminability, was calculated for each participant. Consonant and CNC word identification tasks were administered to the CI group. RESULTS: Overall, the ability to identify stimuli with gaps of different duration was better for the NH group than for the CI group. Seven CI users had cumulative d' scores that were no higher than those of any NH listener, and their CNC word scores ranged from 0% to 30%. The other four CI users had cumulative d' scores within the range of the NH group, and their CNC word scores ranged from 46% to 68%. For the CI group, cumulative d' scores were significantly correlated with their speech testing scores. CONCLUSIONS: The ability to identify silent gap duration may help explain individual differences in speech perception by CI users. (An illustrative code sketch follows this entry.)
PMCID:2664850
PMID: 18806216
ISSN: 1092-4388
CID: 94927
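
The study above summarizes gap-identification ability with a cumulative d' index. One generic way to build such an index is to z-transform the proportion of 'gap heard' responses at each gap duration and sum the d' steps between adjacent durations; the sketch below does that with made-up response proportions, and is an assumption about the analysis rather than the paper's exact procedure.

    from statistics import NormalDist

    def z(p, n_trials):
        """Inverse-normal transform with a 1/(2N) correction so proportions of
        0 or 1 do not produce infinite z-scores."""
        p = min(max(p, 0.5 / n_trials), 1.0 - 0.5 / n_trials)
        return NormalDist().inv_cdf(p)

    def cumulative_d_prime(p_gap_heard, n_trials):
        """Sum the d' steps between adjacent gap durations, where each step is
        the difference in z-transformed 'gap heard' proportions. Negative steps
        (non-monotonic data) are clamped to zero so they do not reduce the index."""
        zs = [z(p, n_trials) for p in p_gap_heard]
        return sum(max(0.0, b - a) for a, b in zip(zs[:-1], zs[1:]))

    # Made-up proportions of 'gap heard' responses for the continuous stimulus
    # followed by gaps of 15, 30, 45, 60, and 90 ms.
    p_gap_heard = [0.05, 0.20, 0.45, 0.70, 0.85, 0.95]
    print(round(cumulative_d_prime(p_gap_heard, n_trials=50), 2))
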
Regarding sufficiency of authors' disclosures: Hearing levels of firefighters: risk of occupational noise-induced hearing loss assessed by cross-sectional and longitudinal data [Ear Hear 2005;26(3):327-340] [Editorial]
Ryals, Brenda M; Svirsky, Mario A
PMID: 18769274
ISSN: 1538-4667
CID: 94928
Speech perception and insertion trauma in hybrid cochlear implant users: A response to Gstottner and Arnolder [Letter]
Fitzgerald, MB; Sagi, E; Jackson, M; Shapiro, WH; Roland, JT; Waltzman, SB; Svirsky, MA
ISI:000259071900027
ISSN: 1531-7129
CID: 86665