Searched for: in-biosketch:yes person:svirsm01
Total Results: 159


A Model of Incomplete Adaptation to a Severely Shifted Frequency-to-Electrode Mapping by Cochlear Implant Users

Sagi, Elad; Fu, Qian-Jie; Galvin, John J 3rd; Svirsky, Mario A
In the present study, a computational model of phoneme identification was applied to data from a previous study, wherein cochlear implant (CI) users' adaptation to a severely shifted frequency allocation map was assessed regularly over 3 months of continual use. This map provided more input filters below 1 kHz, but at the expense of introducing a downward frequency shift of up to one octave relative to the CI subjects' clinical maps. At the end of the 3-month study period, it was unclear whether subjects' asymptotic speech recognition performance represented a complete or partial adaptation. To clarify the matter, the computational model was applied to the CI subjects' vowel identification data in order to estimate the degree of adaptation and to predict performance levels with complete adaptation to the frequency shift. Two model parameters were used to quantify this adaptation: one representing the listener's ability to shift their internal representation of how vowels should sound, and the other representing the listener's uncertainty in consistently recalling these representations. Two of the three CI users could shift their internal representations towards the new stimulation pattern within 1 week, whereas one could not do so completely even after 3 months. Subjects' uncertainty in recalling these representations increased substantially with the frequency-shifted map. Although this uncertainty decreased after 3 months, it remained much larger than subjects' uncertainty with their clinically assigned maps. This result suggests that subjects could not completely remap their phoneme labels, stored in long-term memory, onto the frequency-shifted vowels. The model also predicted that even with complete adaptation, the frequency-shifted map would not have resulted in improved speech understanding. Hence, the model presented here can be used to assess adaptation and the anticipated gains in speech perception expected from changing a given CI device parameter. (An illustrative sketch of a two-parameter model of this kind follows this record.)
PMCID:2820204
PMID: 19774412
ISSN: 1438-7573
CID: 106591
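
The abstract above describes a two-parameter account of adaptation (a shift of internal vowel representations and an uncertainty in recalling them) but, being an abstract, gives no implementation details. The Python sketch below is purely illustrative and is not the published model: it assumes each vowel is summarized by a single place-of-stimulation value, treats partial adaptation as a weighted shift of the listener's internal references toward the new map (`shift`), and treats recall uncertainty as Gaussian noise (`sigma`). All names and numeric values are hypothetical.

```python
# Illustrative sketch only (not the published model): predicts percent-correct
# vowel identification from two hypothetical parameters, "shift" (0 = no
# adaptation, 1 = full adaptation to the new map) and "sigma" (recall
# uncertainty, in arbitrary place-of-stimulation units).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical place-of-stimulation values (arbitrary units) for four vowels
# under the clinical map and under the frequency-shifted experimental map.
clinical_places = np.array([2.0, 4.0, 6.0, 8.0])
shifted_places = clinical_places + 1.5   # assumed analogue of the map shift

def percent_correct(shift, sigma, n_trials=5000):
    """Monte Carlo estimate of vowel identification accuracy.

    The listener's internal references sit between the old and new maps,
    weighted by `shift`; each presentation is compared against noisy
    recollections of those references and labeled by the nearest one.
    """
    references = (1 - shift) * clinical_places + shift * shifted_places
    correct = 0
    for _ in range(n_trials):
        vowel = rng.integers(len(shifted_places))   # stimulus under the new map
        percept = shifted_places[vowel]
        recalled = references + rng.normal(0, sigma, size=references.shape)
        response = np.argmin(np.abs(recalled - percept))
        correct += (response == vowel)
    return 100 * correct / n_trials

# Example: partial adaptation with high uncertainty vs. full, low-uncertainty adaptation.
print(percent_correct(shift=0.5, sigma=1.5))
print(percent_correct(shift=1.0, sigma=0.3))
```

In this toy version, raising `shift` with `sigma` held high improves accuracy only modestly, which mirrors the abstract's point that incomplete relabeling and residual uncertainty can be distinguished.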

Effects of semantic context and feedback on perceptual learning of speech processed through an acoustic simulation of a cochlear implant

Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic representation. Transfer-appropriate processing theory suggests that such feedback was most successful because the original learning conditions were reinstated at testing: performance was facilitated when both training and testing contained degraded stimuli. In addition, the effect of semantic context on generalization was assessed by training listeners on meaningful or anomalous sentences. Training with anomalous sentences was as effective as training with meaningful sentences, suggesting that listeners were encouraged to use acoustic-phonetic information to identify speech rather than to make predictions from semantic context.
PMCID:2818425
PMID: 20121306
ISSN: 1939-1277
CID: 114806

A mathematical model of vowel identification by users of cochlear implants

Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi; Svirsky, Mario A
A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners' resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as the frequency mapping study of Skinner et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307-311] and the general trend in the vowel results of Zeng and Galvin's [(1999). Ear Hear. 20, 60-74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing-impaired populations, and for providing a guide for clinical research and clinical practice. (An illustrative sketch of this style of model follows this record.)
PMCID:2830268
PMID: 20136228
ISSN: 0001-4966
CID: 106597
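
The model above predicts vowel confusions from how well listeners resolve mean formant locations along the electrode array, with a single free parameter. The sketch below is a loose illustration under assumed values, not the published implementation: the formant frequencies, the logarithmic frequency-to-electrode allocation, and the noise parameter are all hypothetical choices used only to show how one resolution parameter can generate a predicted confusion matrix.

```python
# Loose illustration (not the published model): one noise parameter governs how
# reliably formant-driven electrode locations are resolved, and a confusion
# matrix is predicted by nearest-template classification under that noise.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical F1/F2 values (Hz) for three vowels; real studies use measured formants.
formants = {"i": (300, 2300), "a": (750, 1200), "u": (350, 900)}

def freq_to_electrode(freq_hz, n_electrodes=22, lo=200.0, hi=8000.0):
    """Map a frequency to a fractional electrode position using an assumed
    logarithmic frequency allocation (an assumption, not a device spec)."""
    x = (np.log(freq_hz) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return np.clip(x, 0, 1) * (n_electrodes - 1)

vowels = list(formants)
templates = np.array([[freq_to_electrode(f) for f in formants[v]] for v in vowels])

def predicted_confusions(noise_sd, n_trials=4000):
    """Confusion matrix (rows = presented, cols = responded) for a given
    electrode-place resolution, expressed as proportions."""
    counts = np.zeros((len(vowels), len(vowels)))
    for i in range(len(vowels)):
        percepts = templates[i] + rng.normal(0, noise_sd, size=(n_trials, 2))
        dists = np.linalg.norm(percepts[:, None, :] - templates[None, :, :], axis=2)
        responses = dists.argmin(axis=1)
        for j in responses:
            counts[i, j] += 1
    return counts / n_trials

print(predicted_confusions(noise_sd=2.5).round(2))
```

Sweeping the noise parameter from small to large moves the predicted matrix from near-perfect identification toward chance, which is the sense in which a single parameter can span "near chance to near perfect" performance.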

Transfer of auditory perceptual learning with spectrally reduced speech to speech and nonspeech tasks: implications for cochlear implants

Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
OBJECTIVE: The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. DESIGN: Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed-set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. RESULTS: Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a greater degree, post-test speech performance were significantly correlated with environmental sound identification. For both groups, environmental sounds characterized as having more salient temporal information were identified more often than environmental sounds characterized as having more salient spectral information. CONCLUSIONS: Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting either that explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone. (An illustrative noise-vocoder sketch follows this record.)
PMCID:2794833
PMID: 19773659
ISSN: 1538-4667
CID: 114807
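
Both Loebach et al. records above rely on noise-vocoded speech as an acoustic simulation of a cochlear implant. The sketch below shows the general technique as a minimal eight-channel noise vocoder; the channel edges, filter orders, and envelope cutoff are illustrative assumptions, not the parameters used in these studies.

```python
# Minimal eight-channel noise vocoder (illustrative parameters, not the
# studies'): band-pass the signal, extract each band's envelope, use the
# envelopes to modulate band-limited noise, and sum the channels.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0, env_cut=160.0):
    rng = np.random.default_rng(0)
    # Assumed logarithmically spaced channel edges between lo and hi (Hz).
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_sos = butter(2, env_cut / (fs / 2), btype="low", output="sos")
    out = np.zeros_like(signal, dtype=float)
    for k in range(n_channels):
        band_sos = butter(4, [edges[k] / (fs / 2), edges[k + 1] / (fs / 2)],
                          btype="band", output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(band))     # rectified, smoothed envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(envelope, 0, None) * carrier
    return out / (np.max(np.abs(out)) + 1e-12)            # normalize peak level

# Example with a synthetic tone complex standing in for speech:
fs = 16000
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)
vocoded = noise_vocode(demo, fs)
```

Because only the slowly varying envelopes survive processing, such simulations preserve temporal cues while discarding fine spectral detail, which is consistent with the results above showing better transfer for temporally cued environmental sounds.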

The effect of temporal gap identification on speech perception by users of cochlear implants

Sagi, Elad; Kaiser, Adam R; Meyer, Ted A; Svirsky, Mario A
PURPOSE: This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration, and the relation of this ability to speech understanding in CI users. METHOD: Sixteen NH adults and 11 postlingually deafened adults with CIs identified synthetic vowel-like stimuli that were either continuous or contained an intervening silent gap ranging from 15 to 90 ms. Cumulative d', an index of discriminability, was calculated for each participant. Consonant and CNC word identification tasks were administered to the CI group. RESULTS: Overall, the ability to identify stimuli with gaps of different duration was better for the NH group than for the CI group. Seven CI users had cumulative d' scores that were no higher than those of any NH listener, and their CNC word scores ranged from 0% to 30%. The other four CI users had cumulative d' scores within the range of the NH group, and their CNC word scores ranged from 46% to 68%. For the CI group, cumulative d' scores were significantly correlated with speech testing scores. CONCLUSIONS: The ability to identify silent gap duration may help explain individual differences in speech perception by CI users. (An illustrative cumulative d' computation follows this record.)
PMCID:2664850
PMID: 18806216
ISSN: 1092-4388
CID: 94927
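
Cumulative d' summarizes how well a listener separates the entire set of gap durations rather than any single pair. The sketch below illustrates one common simplified approach, not necessarily the study's exact procedure: under an equal-variance Gaussian model, d' is estimated for each adjacent pair of stimulus levels from the identification responses and the pairwise values are summed. The 4x4 identification matrix is made up for demonstration.

```python
# Illustrative computation of cumulative d' from an identification matrix
# (rows = presented gap durations, columns = responded durations). This is a
# simplified adjacent-pair approach under an equal-variance Gaussian model,
# offered as a sketch rather than the study's exact procedure.
import numpy as np
from scipy.stats import norm

def cumulative_d_prime(confusions, clip=(0.01, 0.99)):
    props = confusions / confusions.sum(axis=1, keepdims=True)
    d_total = 0.0
    for i in range(len(props) - 1):
        # Proportion of "at least the longer duration" responses for the
        # longer vs. the shorter member of each adjacent stimulus pair.
        hit = np.clip(props[i + 1, i + 1:].sum(), *clip)
        fa = np.clip(props[i, i + 1:].sum(), *clip)
        d_total += norm.ppf(hit) - norm.ppf(fa)
    return d_total

# Hypothetical identification matrix for gaps of 15, 30, 60, and 90 ms.
example = np.array([[38,  8,  3,  1],
                    [10, 30,  8,  2],
                    [ 3,  9, 28, 10],
                    [ 1,  2, 11, 36]])
print(round(cumulative_d_prime(example), 2))
```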

Regarding sufficiency of authors' disclosures: Hearing levels of firefighters: risk of occupational noise-induced hearing loss assessed by cross-sectional and longitudinal data [Ear Hear 2005;26(3):327-340] [Editorial]

Ryals, Brenda M; Svirsky, Mario A
PMID: 18769274
ISSN: 1538-4667
CID: 94928

Speech perception and insertion trauma in hybrid cochlear implant users: A response to Gstottner and Arnolder [Letter]

Fitzgerald, MB; Sagi, E; Jackson, M; Shapiro, WH; Roland, JT; Waltzman, SB; Svirsky, MA
ISI:000259071900027
ISSN: 1531-7129
CID: 86665

An exploratory look at pediatric cochlear implantation: is earliest always best?

Holt, Rachael Frush; Svirsky, Mario A
OBJECTIVES: Since the advent of cochlear implants, age at implantation has declined as investigators report greater benefit the younger a child is implanted. Infants younger than 12 mos currently are excluded from Food and Drug Administration clinical trials, but have been implanted with Food and Drug Administration-approved devices. Given the chance that an infant without profound hearing loss could be implanted because of the limitations of the diagnostic measures used with this population, and the potential for additional anesthetic risks to infants younger than 1 yr old, it is prudent to evaluate benefit in the youngest cochlear implant recipients. The goals of this research were to investigate whether significant gains are made by children implanted before 1 yr old relative to those implanted at later ages, while controlling for potential covariates, and whether there is behavioral evidence for sensitive periods in spoken language development. It was expected that children implanted before age 1 yr would have more advanced spoken language skills than children implanted at later ages; that there would be a negative relationship between age at implantation and rate of spoken language development, allowing for an examination of the effects of sensitive periods in spoken language development; and that these trends would remain after accounting for participant characteristics and experiences that might influence spoken language outcomes. DESIGN: Ninety-six children with congenital profound bilateral sensorineural hearing loss and no additional identified disabilities, implanted before the age of 4 yrs, were stratified into four groups based on age at implantation. Children's spoken language development was followed for at least 2 yrs after device activation. Spoken language scores and rate of development were evaluated along with four covariates (unaided pure-tone average, communication mode, gender, and estimated family income) as a function of age at implantation. RESULTS: In general, the developmental trajectories of children implanted earlier were significantly better than those of children implanted later. However, the advantage of implanting children before 1 yr old versus waiting until the child was between 1 and 2 yrs was small and was evident only in receptive language development, not in expressive language or word recognition development. Age at implantation did not significantly influence the rate of word recognition development, but did influence the rate of both receptive and expressive language acquisition: children implanted earlier in life had faster rates of spoken language acquisition than children implanted later in life. CONCLUSIONS: Although earlier cochlear implantation generally led to better outcomes, there were few differences in outcome between the small sample of six children implanted before 12 mos of age and those implanted at 13 to 24 mos. Significant performance differences remained among the other age groups despite accounting for potential confounds. Further, oral language development progressed faster in children implanted earlier rather than later in life (up to age 4 yrs), whereas the rate of open-set speech recognition development was similar. Together, the results suggest that there is a sensitive period for spoken language during the first 4 yrs of life, but not necessarily for word recognition development during the same period.
PMCID:5494277
PMID: 18382374
ISSN: 1538-4667
CID: 94929

Information transfer analysis: a first look at estimation bias

Sagi, Elad; Svirsky, Mario A
Information transfer analysis [G. A. Miller and P. E. Nicely, J. Acoust. Soc. Am. 27, 338-352 (1955)] is a tool used to measure the extent to which speech features are transmitted to a listener, e.g., duration or formant frequencies for vowels; voicing, place, and manner of articulation for consonants. An information transfer of 100% occurs when no confusions arise between phonemes belonging to different feature categories, e.g., between voiced and voiceless consonants. Conversely, an information transfer of 0% occurs when performance is purely random. As asserted by Miller and Nicely, the maximum-likelihood estimate of information transfer is biased to overestimate its true value when the number of stimulus presentations is small. This small-sample bias is examined here for three cases: a model of random performance with pseudorandom data, a data set drawn from Miller and Nicely, and reported data from three studies of speech perception by hearing-impaired listeners. The amount of overestimation can be substantial, depending on the number of samples, the size of the confusion matrix analyzed, and the manner in which data are partitioned within it. (An illustrative computation, including a demonstration of the bias, follows this record.)
PMCID:2677320
PMID: 18529200
ISSN: 1520-8524
CID: 81060
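
Relative information transfer, as defined by Miller and Nicely, is the mutual information between stimulus and response categories divided by the stimulus entropy, estimated from a confusion matrix. The sketch below computes that maximum-likelihood estimate and then illustrates the small-sample bias discussed above by applying it to confusion matrices generated from purely random responding; the matrix size, trial counts, and repetition count are arbitrary choices for demonstration and do not reproduce the paper's analyses.

```python
# Maximum-likelihood estimate of relative information transfer from a
# confusion matrix, plus a demonstration of its small-sample bias: with purely
# random responding the true transfer is 0%, yet the estimate stays positive
# and grows as the number of trials shrinks.
import numpy as np

def relative_information_transfer(confusions):
    n = confusions.sum()
    p_ij = confusions / n
    p_i = p_ij.sum(axis=1, keepdims=True)    # stimulus probabilities
    p_j = p_ij.sum(axis=0, keepdims=True)    # response probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_ij * np.log2(p_ij / (p_i * p_j))
    mutual_info = np.nansum(terms)           # 0 * log(0) cells contribute nothing
    stimulus_entropy = -np.nansum(p_i * np.log2(p_i))
    return 100 * mutual_info / stimulus_entropy

rng = np.random.default_rng(0)
n_categories = 8
for trials_per_stimulus in (5, 20, 100, 1000):
    estimates = []
    for _ in range(200):
        # Random responding: every response category equally likely.
        responses = rng.integers(n_categories, size=(n_categories, trials_per_stimulus))
        matrix = np.zeros((n_categories, n_categories))
        for i in range(n_categories):
            for j in responses[i]:
                matrix[i, j] += 1
        estimates.append(relative_information_transfer(matrix))
    print(trials_per_stimulus, round(float(np.mean(estimates)), 1))
```

As the printed averages show, the estimated transfer for chance performance shrinks toward 0% only as the number of presentations per stimulus grows, which is the overestimation effect the abstract describes.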

Speech perception benefits of sequential bilateral cochlear implantation in children and adults: a retrospective analysis

Zeitler, Daniel M; Kessler, Megan A; Terushkin, Vitaly; Roland, Thomas J Jr; Svirsky, Mario A; Lalwani, Anil K; Waltzman, Susan B
OBJECTIVE: To examine speech perception outcomes and determine the impact of length of deafness and time between implants on performance in the sequentially implanted bilateral population. STUDY DESIGN: Retrospective review. SETTING: Tertiary academic referral center. PATIENTS: Forty-three children (age <18 yr) and 22 adults underwent sequential bilateral implantation with at least 6 months between surgeries. The mean age at the time of the second implant in children was 7.83 years, and the mean time between implants was 5.16 years. Five children received the first-side implant (C1) below 12 months of age; 16 at 12 to 23 months; 9 between the ages of 24 and 35 months; and 11 at 36 to 59 months; 2 were implanted above the age of 5 years. In adults, mean age at the second implant was 46.6 years, and mean time between implants was 5.6 years. INTERVENTION: Sequential implantation with 6 months or more between implantations. MAIN OUTCOME MEASURES: Speech perception tests were performed preoperatively before the second implantation and at 3 months postoperatively. RESULTS: Results revealed significant improvement in the second implanted ear and in the bilateral condition, regardless of time between implantations or length of deafness; however, age at first-side implantation was a contributing factor to second-ear outcome in the pediatric population. CONCLUSION: Sequential bilateral implantation leads to significantly better speech understanding. On average, patients improved regardless of length of deafness, time between implants, or age at implantation.
PMID: 18494140
ISSN: 1531-7129
CID: 79563