Searched for: person:sagie01 in-biosketch:yes
Total Results: 22


Current and planned cochlear implant research at the New York University Laboratory for Translational Auditory Research

Svirsky, Mario A; Fitzgerald, Matthew B; Neuman, Arlene; Sagi, Elad; Tan, Chin-Tuan; Ketten, Darlene; Martin, Brett
The Laboratory of Translational Auditory Research (LTAR/NYUSM) is part of the Department of Otolaryngology at the New York University School of Medicine and has close ties to the New York University Cochlear Implant Center. LTAR investigators have expertise in multiple related disciplines including speech and hearing science, audiology, engineering, and physiology. The lines of research in the laboratory deal mostly with speech perception by hearing impaired listeners, and particularly those who use cochlear implants (CIs) or hearing aids (HAs). Although the laboratory's research interests are diverse, there are common threads that permeate and tie all of its work. In particular, a strong interest in translational research underlies even the most basic studies carried out in the laboratory. Another important element is the development of engineering and computational tools, which range from mathematical models of speech perception to software and hardware that bypass clinical speech processors and stimulate cochlear implants directly, to novel ways of analyzing clinical outcomes data. If the appropriate tool to conduct an important experiment does not exist, we may work to develop it, either in house or in collaboration with academic or industrial partners. Another notable characteristic of the laboratory is its interdisciplinary nature where, for example, an audiologist and an engineer might work closely to develop an approach that would not have been feasible if each had worked singly on the project. Similarly, investigators with expertise in hearing aids and cochlear implants might join forces to study how human listeners integrate information provided by a CI and a HA. The following pages provide a flavor of the diversity and the commonalities of our research interests.
PMCID:3677062
PMID: 22668763
ISSN: 1050-0545
CID: 169712

A mathematical model of medial consonant identification by cochlear implant users

Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.
PMCID:3087396
PMID: 21476674
ISSN: 1520-8524
CID: 130913

A Model of Incomplete Adaptation to a Severely Shifted Frequency-to-Electrode Mapping by Cochlear Implant Users

Sagi, Elad; Fu, Qian-Jie; Galvin, John J 3rd; Svirsky, Mario A
In the present study, a computational model of phoneme identification was applied to data from a previous study, wherein cochlear implant (CI) users' adaptation to a severely shifted frequency allocation map was assessed regularly over 3 months of continual use. This map provided more input filters below 1 kHz, but at the expense of introducing a downward frequency shift of up to one octave in relation to the CI subjects' clinical maps. At the end of the 3-month study period, it was unclear whether subjects' asymptotic speech recognition performance represented a complete or partial adaptation. To clarify the matter, the computational model was applied to the CI subjects' vowel identification data in order to estimate the degree of adaptation, and to predict performance levels with complete adaptation to the frequency shift. Two model parameters were used to quantify this adaptation: one representing the listener's ability to shift their internal representation of how vowels should sound, and the other representing the listener's uncertainty in consistently recalling these representations. Two of the three CI users could shift their internal representations towards the new stimulation pattern within 1 week, whereas one could not do so completely even after 3 months. Subjects' uncertainty for recalling these representations increased substantially with the frequency-shifted map. Although this uncertainty decreased after 3 months, it remained much larger than subjects' uncertainty with their clinically assigned maps. This result suggests that subjects could not completely remap their phoneme labels, stored in long-term memory, towards the frequency-shifted vowels. The model also predicted that even with complete adaptation, the frequency-shifted map would not have resulted in improved speech understanding. Hence, the model presented here can be used to assess adaptation, and the anticipated gains in speech perception expected from changing a given CI device parameter.
PMCID:2820204
PMID: 19774412
ISSN: 1438-7573
CID: 106591

A mathematical model of vowel identification by users of cochlear implants

Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi; Svirsky, Mario A
A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners' resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as the Skinner et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307-311] frequency mapping study, and the general trend in the vowel results of Zeng and Galvin's [(1999). Ear Hear. 20, 60-74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing impaired populations, and for providing a guide for clinical research and clinical practice.
PMCID:2830268
PMID: 20136228
ISSN: 0001-4966
CID: 106597
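
The framework described in this abstract (mean formant-energy locations along the electrode array, with a single free parameter governing resolving power) can be illustrated with a small Monte Carlo sketch. This is not the authors' implementation; the vowel set, electrode positions, and Gaussian noise model below are hypothetical stand-ins.

```python
# Illustrative sketch (not the authors' implementation) of the model
# framework: each vowel occupies a mean formant-energy location along
# the electrode array, each percept is jittered by Gaussian noise whose
# SD is the single free parameter, and the listener reports the nearest
# vowel. The vowel set and locations below are hypothetical.
import random

random.seed(0)

# Hypothetical electrode-array positions of mean formant energy.
VOWELS = {"i": 2.0, "e": 3.5, "a": 5.0, "o": 6.5, "u": 8.0}

def predict_confusions(noise_sd, trials=5000):
    """Monte Carlo confusion matrix: counts[stimulus][response]."""
    counts = {v: {w: 0 for w in VOWELS} for v in VOWELS}
    for stim, loc in VOWELS.items():
        for _ in range(trials):
            percept = random.gauss(loc, noise_sd)
            resp = min(VOWELS, key=lambda w: abs(VOWELS[w] - percept))
            counts[stim][resp] += 1
    return counts

good = predict_confusions(noise_sd=0.3)  # fine resolution: few confusions
poor = predict_confusions(noise_sd=2.0)  # coarse resolution: many confusions
print(good["i"]["i"], poor["i"]["i"])
```

Sweeping the noise parameter moves predicted performance from near perfect to near chance, mirroring the range of listeners the model is said to capture.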

The effect of temporal gap identification on speech perception by users of cochlear implants

Sagi, Elad; Kaiser, Adam R; Meyer, Ted A; Svirsky, Mario A
PURPOSE: This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration, and the relation of this ability to speech understanding in CI users. METHOD: Sixteen NH adults and eleven postlingually deafened adults with CIs identified synthetic vowel-like stimuli that were either continuous or contained an intervening silent gap ranging from 15 to 90 ms. Cumulative d', an index of discriminability, was calculated for each participant. Consonant and CNC word identification tasks were administered to the CI group. RESULTS: Overall, the ability to identify stimuli with gaps of different duration was better for the NH group than for the CI group. Seven CI users had cumulative d' scores that were no higher than those of any NH listener, and their CNC word scores ranged from 0% to 30%. The other four CI users had cumulative d' scores within the range of the NH group, and their CNC word scores ranged from 46% to 68%. For the CI group, cumulative d' scores were significantly correlated with their speech testing scores. CONCLUSIONS: The ability to identify silent gap duration may help explain individual differences in speech perception by CI users.
PMCID:2664850
PMID: 18806216
ISSN: 1092-4388
CID: 94927
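
Cumulative d', the discriminability index used in this study, can be sketched as the sum of pairwise d' values across adjacent stimulus levels. A minimal illustration, assuming hypothetical hit and false-alarm proportions (the paper's exact procedure may differ):

```python
# Sketch of a cumulative d' computation (hypothetical data). d' for
# each pair of adjacent stimulus levels is estimated from identification
# proportions via the inverse normal CDF, then summed across pairs.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def pairwise_d_prime(p_hit, p_fa):
    """d' for one adjacent stimulus pair from hit and false-alarm rates."""
    return z(p_hit) - z(p_fa)

def cumulative_d_prime(pairs):
    """Sum of pairwise d' values across all adjacent stimulus pairs."""
    return sum(pairwise_d_prime(ph, pf) for ph, pf in pairs)

# Hypothetical (hit, false-alarm) proportions for four adjacent pairs
# of gap durations, e.g. 15 vs 30 ms, 30 vs 45 ms, and so on.
pairs = [(0.80, 0.30), (0.75, 0.35), (0.70, 0.40), (0.65, 0.45)]
print(round(cumulative_d_prime(pairs), 2))
```

A listener who cannot tell adjacent gap durations apart contributes near-zero pairwise d' values, so the cumulative score compactly summarizes resolution across the whole 15-90 ms range.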

Speech perception and insertion trauma in hybrid cochlear implant users: A response to Gstottner and Arnolder [Letter]

Fitzgerald, MB; Sagi, E; Jackson, M; Shapiro, WH; Roland, JT; Waltzman, SB; Svirsky, MA
ISI:000259071900027
ISSN: 1531-7129
CID: 86665

Information transfer analysis: a first look at estimation bias

Sagi, Elad; Svirsky, Mario A
Information transfer analysis [G. A. Miller and P. E. Nicely, J. Acoust. Soc. Am. 27, 338-352 (1955)] is a tool used to measure the extent to which speech features are transmitted to a listener, e.g., duration or formant frequencies for vowels; voicing, place and manner of articulation for consonants. An information transfer of 100% occurs when no confusions arise between phonemes belonging to different feature categories, e.g., between voiced and voiceless consonants. Conversely, an information transfer of 0% occurs when performance is purely random. As asserted by Miller and Nicely, the maximum-likelihood estimate for information transfer is biased to overestimate its true value when the number of stimulus presentations is small. This small-sample bias is examined here for three cases: a model of random performance with pseudorandom data, a data set drawn from Miller and Nicely, and reported data from three studies of speech perception by hearing impaired listeners. The amount of overestimation can be substantial, depending on the number of samples, the size of the confusion matrix analyzed, and the manner in which the data are partitioned therein.
PMCID:2677320
PMID: 18529200
ISSN: 1520-8524
CID: 81060
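
The Miller and Nicely measure itself is straightforward to compute from a confusion matrix. The sketch below shows the maximum-likelihood estimate of relative information transfer for a hypothetical voicing matrix; as the abstract notes, this estimate carries a small-sample upward bias.

```python
# Maximum-likelihood estimate of Miller & Nicely's relative information
# transfer, T(x;y) / H(x), from a stimulus-by-response count matrix.
# The counts below are hypothetical. With few trials this estimate
# overestimates the true transfer, which is the bias studied above.
from math import log2

def relative_info_transfer(matrix):
    """T(x;y) / H(x) for a stimulus (rows) by response (cols) count matrix."""
    n = sum(sum(row) for row in matrix)
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    t = 0.0
    for i, row in enumerate(matrix):
        for j, c in enumerate(row):
            if c > 0:
                # p(x,y) * log2( p(x,y) / (p(x) p(y)) )
                t += (c / n) * log2(n * c / (row_tot[i] * col_tot[j]))
    h_x = -sum(r / n * log2(r / n) for r in row_tot if r > 0)
    return t / h_x

# Hypothetical voiced/voiceless confusion counts.
voicing = [[45, 5],
           [8, 42]]
print(round(relative_info_transfer(voicing), 3))
```

A perfectly diagonal matrix yields 1.0 (100% transfer); a matrix whose rows all share the same response distribution yields 0.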

Reimplantation of hybrid cochlear implant users with a full-length electrode after loss of residual hearing [Case Report]

Fitzgerald, Matthew B; Sagi, Elad; Jackson, Michael; Shapiro, William H; Roland, J Thomas Jr; Waltzman, Susan B; Svirsky, Mario A
OBJECTIVE: To assess word recognition and pitch-scaling abilities of cochlear implant users first implanted with a Nucleus 10-mm Hybrid electrode array and then reimplanted with a full-length Nucleus Freedom array after loss of residual hearing. BACKGROUND: Although electroacoustic stimulation is a promising treatment for patients with residual low-frequency hearing, a small subset of them lose that residual hearing. It is not clear whether these patients would be better served by leaving in the 10-mm array and providing electric stimulation through it, or by replacing it with a standard full-length array. METHODS: Word recognition and pitch-scaling abilities were measured in 2 users of hybrid cochlear implants who lost their residual hearing in the implanted ear after a few months. Tests were repeated over several months, first with the 10-mm array and again after these patients were reimplanted with a full-length array. The word recognition task consisted of two 50-word consonant-nucleus-consonant (CNC) lists. In the pitch-scaling task, 6 electrodes were stimulated in pseudorandom order, and patients assigned a pitch value to the sensation elicited by each electrode. RESULTS: Shortly after reimplantation with the full electrode array, speech understanding was much better than with the 10-mm array. Patients improved their ability to perform the pitch-scaling task over time with the full array, although their performance on that task was variable, and the improvements were often small. CONCLUSION: 1) Short electrode arrays may help preserve residual hearing but may also provide less benefit than traditional cochlear implants for some patients. 2) Pitch percepts in response to electric stimulation may be modified by experience.
PMID: 18165793
ISSN: 1531-7129
CID: 76765

What matched comparisons can and cannot tell us: the case of cochlear implants

Sagi, Elad; Fitzgerald, Matthew B; Svirsky, Mario A
OBJECTIVES: To examine the conclusions and possible misinterpretations that may or may not be drawn from the 'outcome-matching method,' a study design recently used in the cochlear implant literature. In this method, subject groups are matched not only on potentially confounding variables but also on an outcome measure that is closely related to the outcome measure under analysis. For example, subjects may be matched according to their speech perception scores in quiet, and their speech perception in noise is compared. DESIGN: The present study includes two components, a simulation study and a questionnaire. In the simulation study, the outcome-matching method was applied to pseudo-randomly generated data. Simulated speech perception scores in quiet and in noise were generated for two comparison groups, in two imaginary worlds. In both worlds, comparison group A performed only slightly worse in noise than in quiet, whereas comparison group B performed significantly worse in noise than in quiet. In Imaginary World 1, comparison group A had better speech perception scores than comparison group B. In Imaginary World 2, comparison group B had better speech perception scores than comparison group A. The outcome-matching method was applied to these data twice in each imaginary world: 1) matching scores in quiet and comparing in noise, and 2) matching scores in noise and comparing in quiet. This procedure was repeated 10,000 times. The second part of the study was conducted to address the level of misinterpretation that could arise from the outcome-matching method. A questionnaire was administered to 54 students in a senior-level course on speech and hearing to assess their opinions about speech perception with two different models of cochlear implant devices. The students were instructed to fill out the questionnaire before and after reading a paper that used the outcome-matching method to examine speech perception in noise and in quiet with those two cochlear implant devices. RESULTS: When pseudorandom scores were matched in quiet, comparison group A's scores in noise were significantly better than comparison group B's scores. Results were different when scores were matched in noise: in this case, comparison group B's scores in quiet were significantly better than comparison group A's scores. Thus, the choice of outcome measure used for matching determined the result of the comparison. Additionally, results of the comparisons were identical regardless of whether they were conducted using data from Imaginary World 1 (where comparison group A is better) or from Imaginary World 2 (where comparison group B is better). After reading the paper that used the outcome-matching method, students' opinions about the two cochlear implants underwent a significant change even though, according to the simulation study, this opinion change was not warranted by the data. CONCLUSIONS: The outcome-matching method can provide important information about differences within a comparison group, but it cannot be used to determine whether a given device or clinical intervention is better than another one. Care must be used when interpreting the results of a study using the outcome-matching method.
PMID: 17609617
ISSN: 0196-0202
CID: 73808
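
The matching effect described in the simulation study can be reproduced in a few lines. The sketch below uses hypothetical score distributions, not the paper's parameters: group A degrades little in noise, group B degrades a lot, and matching subjects on the quiet score predetermines the comparison in noise regardless of which group is "better" overall.

```python
# Sketch of the outcome-matching pitfall (hypothetical distributions).
# Group A's noise scores sit close to its quiet scores; group B's fall
# well below. Matching on the quiet score then fixes the outcome of the
# comparison in noise by construction.
import random
from statistics import fmean

random.seed(1)

def matched_comparison(n=2000):
    # (quiet, noise) score pairs; noise = quiet minus a group-specific drop
    group_a = [(q, q - random.gauss(5, 2))
               for q in (random.gauss(70, 10) for _ in range(n))]
    group_b = [(q, q - random.gauss(25, 5))
               for q in (random.gauss(70, 10) for _ in range(n))]
    # "match" the groups on the quiet score, then compare in noise
    a_noise = fmean(noise for q, noise in group_a if abs(q - 70) < 2)
    b_noise = fmean(noise for q, noise in group_b if abs(q - 70) < 2)
    return a_noise, b_noise

a_noise, b_noise = matched_comparison()
print(a_noise > b_noise)  # A looks better in noise after matching in quiet
```

Swapping the roles (matching in noise, comparing in quiet) reverses the verdict on the same data, which is the core of the paper's argument.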

Identification variability as a measure of loudness: an application to gender differences

Sagi, Elad; D'Alessandro, Lisa M; Norwich, Kenneth H
It is well known that discrimination response variability increases with stimulus intensity, a relationship closely related to Weber's Law. It is also an axiom that sensation magnitude increases with stimulus intensity. Following earlier researchers such as Thurstone, Garner, and Durlach and Braida, we explored a new method of exploiting these relationships to estimate the power function exponent relating sound pressure level to loudness, using the accuracy with which listeners could identify the intensity of pure tones. The log standard deviation of the normally distributed identification errors increases linearly with stimulus range in decibels, and the slope, a, of the regression is proportional to the loudness exponent, n. Interestingly, in a demonstration experiment, the loudness exponent estimated in this way was greater for females than for males.
PMID: 17479743
ISSN: 1196-1961
CID: 147976