Searched for: in-biosketch:yes person:azadpm01
Total Results: 19

Gradual adaptation to auditory frequency mismatch

Svirsky, Mario A; Talavage, Thomas M; Sinha, Shivank; Neuburger, Heidi; Azadpour, Mahan
What is the best way to help humans adapt to a distorted sensory input? Interest in this question is more than academic. The answer may help facilitate auditory learning by people who became deaf after learning language and later received a cochlear implant (CI; a neural prosthesis that restores hearing through direct electrical stimulation of the auditory nerve). There is evidence that some cochlear implants (which provide information that is spectrally degraded to begin with) stimulate neurons with a higher characteristic frequency than the acoustic frequency of the original stimulus. In other words, the stimulus is shifted in frequency with respect to what the listener expects to hear. This frequency misalignment may have a negative influence on speech perception by CI users. However, a perfect frequency-place alignment may result in the loss of important low-frequency speech information. A trade-off may involve a gradual approach: start with correct frequency-place alignment to allow listeners to adapt to the spectrally degraded signal first, and then gradually increase the frequency shift to allow them to adapt to it over time. We used an acoustic model of a cochlear implant to measure adaptation to a frequency-shifted signal, using either the gradual approach or the "standard" approach (sudden imposition of the frequency shift). Listeners in both groups showed substantial auditory learning, as measured by increases in speech perception scores over the course of fifteen one-hour training sessions. However, the learning process was faster for listeners who were exposed to the gradual approach. These results suggest that gradual rather than sudden exposure may facilitate perceptual learning in the face of a spectrally degraded, frequency-shifted input. This article is part of a Special Issue.
PMCID: 4380802
PMID: 25445816
ISSN: 0378-5955
CID: 1474192
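
Acoustic models of this kind are typically implemented as noise-excited channel vocoders whose output bands are shifted in frequency relative to the analysis bands. The following is a minimal Python sketch of that idea, not the study's actual processing: the band edges, envelope cutoff, shift factor, and the 15-session schedule in the usage comment are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def bandpass_sos(lo, hi, fs, order=4):
    return butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

def lowpass_envelope(x, fs, cutoff=160.0):
    # Envelope by rectification and low-pass filtering.
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfiltfilt(sos, np.abs(x)), 0.0)

def shifted_noise_vocoder(signal, fs, n_channels=8,
                          analysis_range=(200.0, 7000.0), shift_factor=1.0):
    """Noise vocoder: envelopes from analysis bands modulate noise in
    output bands whose edges are multiplied by `shift_factor`
    (1.0 = matched frequency-place alignment, >1.0 = upward shift)."""
    edges = np.geomspace(*analysis_range, n_channels + 1)
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = lowpass_envelope(sosfilt(bandpass_sos(lo, hi, fs), signal), fs)
        out_lo, out_hi = lo * shift_factor, hi * shift_factor
        if out_hi >= fs / 2:          # skip bands pushed past Nyquist
            continue
        carrier = sosfilt(bandpass_sos(out_lo, out_hi, fs), noise)
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

# Gradual approach (hypothetical schedule): increase the shift a little
# on each of 15 training sessions instead of imposing it all at once.
# fs, speech = ...  (load a speech waveform here)
# for session, shift in enumerate(np.linspace(1.0, 1.5, 15), start=1):
#     processed = shifted_noise_vocoder(speech, fs, shift_factor=shift)
```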

Processing of speech temporal and spectral information by users of auditory brainstem implants and cochlear implants

Azadpour, Mahan; McKay, Colette M
OBJECTIVES: Auditory brainstem implants (ABIs) use the same processing strategy as was developed for cochlear implants (CIs). However, the cochlear nucleus (CN), the stimulation site of ABIs, is anatomically and physiologically more complex than the auditory nerve and consists of neurons with differing roles in auditory processing. The aim of this study was to evaluate the hypotheses that ABI users are less able than CI users to access the speech spectro-temporal information delivered by the existing strategies, and that the sites stimulated by different locations of CI and ABI electrode arrays differ in their encoding of temporal patterns in the stimulation. DESIGN: Six CI users and four ABI users of Nucleus implants with the ACE processing strategy participated in this study. Closed-set perception of aCa syllables (16 consonants) and bVd words (11 vowels) was evaluated via experimental processing strategies that activated one, two, or four of the electrodes of the array in a CIS manner, as well as via subjects' clinical strategies. Three single-channel strategies presented the overall temporal envelope variations of the signal on a single implant electrode located in the high-, medium-, or low-frequency region of the array (a sketch of this approach follows this entry). Implantees' ability to discriminate within-electrode temporal patterns of stimulation for phoneme perception, and their ability to make use of spectral information presented by an increased number of active electrodes, were assessed in the single- and multiple-channel strategies, respectively. Overall percentages and information transmission of phonetic features were obtained for each experimental program. RESULTS: Phoneme perception performance of three ABI users was within the range of CI users in most of the experimental strategies and improved as the number of active electrodes increased. One ABI user performed close to chance with all the single- and multiple-electrode strategies. There was no significant difference between apical, basal, and middle CI electrodes in transmitting speech temporal information, except for a trend that the voicing feature was least transmitted by the basal electrode. A similar electrode-location pattern could be observed in most ABI subjects. CONCLUSIONS: Although the number of tested ABI subjects was small, their wide range of phoneme perception performance was consistent with previous reports of overall speech perception in ABI patients. The better-performing ABI users had access to speech temporal and spectral information that was comparable to that of the average CI user. The poor-performing ABI user did not have access to within-channel speech temporal information and did not benefit from an increased number of spectral channels. The within-subject variability between different ABI electrodes was less than the variability across users in the transmission of speech temporal information. The differences in the performance of ABI users could be related to the location of their electrode array on the CN, to the anatomy and physiology of their CN, or to damage to their auditory brainstem caused by tumor or surgery.
PMID: 25010634
ISSN: 1538-4667
CID: 2689902
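
A single-channel strategy of the kind described delivers the broadband temporal envelope of the signal as pulse amplitudes on one electrode. The sketch below illustrates the general idea only; it is not the clinical ACE implementation, and the pulse rate, envelope cutoff, compression constant, and threshold/comfort current levels are hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def single_channel_envelope_strategy(signal, fs, pulse_rate=900.0,
                                     t_level=100.0, c_level=200.0):
    """Sketch of a single-channel strategy: the broadband temporal
    envelope of `signal`, sampled at the pulse rate and log-compressed
    into the electrode's threshold-to-comfort current range
    (current units here are hypothetical)."""
    # Broadband envelope: rectify, then low-pass at 200 Hz.
    sos = butter(2, 200.0, btype="lowpass", fs=fs, output="sos")
    env = np.maximum(sosfiltfilt(sos, np.abs(signal)), 1e-6)
    env = env / env.max()
    # Sample the envelope once per stimulation pulse.
    idx = (np.arange(0, len(signal) / fs, 1 / pulse_rate) * fs).astype(int)
    sampled = env[idx]
    # Logarithmic compression into the T-C range.
    compressed = np.log1p(255 * sampled) / np.log1p(255)
    return t_level + (c_level - t_level) * compressed
```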

Estimating confidence intervals for information transfer analysis of confusion matrices

Azadpour, Mahan; McKay, Colette M; Smith, Robert L
A non-parametric bootstrapping statistical method is introduced and investigated for estimating confidence intervals resulting from information transfer (IT) analysis of confusion matrices. Confidence intervals can be used to statistically compare ITs from two or more confusion matrices obtained in an experiment. Information transfer is a nonlinear measure and does not satisfy many of the assumptions underlying parametric methods. The bootstrapping method accurately estimated IT confidence intervals as long as the confusion matrices contained a sufficiently large number of presentations per stimulus category, which is also a condition for reduced bias in IT analysis.
PMID: 24606307
ISSN: 1520-8524
CID: 2689912
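
As a concrete illustration of the method's ingredients, here is a minimal Python sketch: information transfer computed from a confusion matrix of counts, with a percentile bootstrap that resamples each stimulus row as a multinomial draw from the observed row proportions. The example matrix, number of resamples, and percentile choice are hypothetical; this is a sketch of the general approach, not necessarily the authors' exact procedure.

```python
import numpy as np

def information_transfer(counts):
    """Information transfer (bits) from a confusion matrix of raw
    counts (rows = stimuli, columns = responses)."""
    p = counts / counts.sum()
    pi = p.sum(axis=1, keepdims=True)   # stimulus marginals
    pj = p.sum(axis=0, keepdims=True)   # response marginals
    nz = p > 0                          # avoid log(0) terms
    return float(np.sum(p[nz] * np.log2(p[nz] / (pi @ pj)[nz])))

def bootstrap_it_ci(counts, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap CI for IT: resample each stimulus row as a
    multinomial draw with the observed row proportions."""
    rng = np.random.default_rng(seed)
    n_per_stim = counts.sum(axis=1)
    probs = counts / n_per_stim[:, None]
    its = np.empty(n_boot)
    for b in range(n_boot):
        resampled = np.array([rng.multinomial(n, pr)
                              for n, pr in zip(n_per_stim, probs)])
        its[b] = information_transfer(resampled)
    return np.percentile(its, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical 3-stimulus confusion matrix, 40 presentations per stimulus:
m = np.array([[30, 7, 3], [5, 28, 7], [2, 6, 32]])
print(information_transfer(m), bootstrap_it_ci(m))
```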

Place specificity measured in forward and interleaved masking in cochlear implants

Azadpour, Mahan; AlJasser, Arwa; McKay, Colette M
Interleaved masking in cochlear implants is analogous to acoustic simultaneous masking and is relevant to speech processing strategies that interleave pulses on concurrently activated electrodes. In this study, the spatial decay of masking with increasing masker-probe distance was compared between forward and interleaved masking in the same group of cochlear implant users. Spatial masking patterns and the measures of place specificity were similar for forward and interleaved masking. Unlike in acoustic hearing, where simultaneous masking yields broader tuning curves, the type of masking experiment did not influence the measure of place specificity in cochlear implants.
PMID: 24116536
ISSN: 1520-8524
CID: 2689922
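
One simple way to quantify the spatial decay compared here is to fit a line to masked threshold elevation as a function of masker-probe electrode separation and compare the slopes for the two masking types. A minimal sketch; all data values below are purely hypothetical.

```python
import numpy as np

# Hypothetical masking data: threshold elevation of the probe (dB)
# at increasing masker-probe electrode separations.
separation = np.array([0, 1, 2, 3, 4])                  # electrodes
forward = np.array([6.0, 4.1, 2.7, 1.5, 0.9])           # dB, hypothetical
interleaved = np.array([5.8, 4.3, 2.5, 1.6, 1.0])       # dB, hypothetical

for label, masking in [("forward", forward), ("interleaved", interleaved)]:
    slope, intercept = np.polyfit(separation, masking, 1)
    print(f"{label}: decay of {slope:.2f} dB per electrode of separation")
```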

Overview and challenges of implantable auditory prostheses

Azadpour, Mahan
PMCID: 4202539
PMID: 25337335
ISSN: 2008-126X
CID: 2689932

Beneficial acoustic speech cues for cochlear implant users with residual acoustic hearing

Visram, Anisa S; Azadpour, Mahan; Kluk, Karolina; McKay, Colette M
This study investigated which acoustic cues within the speech signal are responsible for bimodal speech perception benefit. Seven cochlear implant (CI) users with usable residual hearing at low frequencies in the non-implanted ear participated. Sentence tests were performed in near-quiet (some noise on the CI side to reduce scores from ceiling) and in a modulated noise background, with the implant alone and with the addition, in the hearing ear, of one of four types of acoustic signals derived from the same sentences: (1) a complex tone modulated by the fundamental frequency (F0) and amplitude envelope contours; (2) a pure tone modulated by the F0 and amplitude contours; (3) a noise-vocoded signal; (4) unprocessed speech. The modulated tones provided F0 information without spectral shape information, whilst the vocoded signal presented spectral shape information without F0 information. For the group as a whole, only the unprocessed speech condition provided significant benefit over implant-alone scores, in both near-quiet and noise. This suggests that, on average, F0 or spectral cues in isolation provided limited benefit for these subjects in the tested listening conditions, and that the significant benefit observed in the full-signal condition was derived from implantees' use of a combination of these cues.
PMID: 22559377
ISSN: 1520-8524
CID: 2689942
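
The first two comparison signals are tones modulated by the sentence's F0 and amplitude contours. Below is a minimal sketch of synthesizing such a signal from precomputed contours; extracting the contours themselves (e.g., with a pitch tracker) is assumed to happen elsewhere, and the harmonic count and contour values are illustrative. Setting n_harmonics=1 gives the pure-tone variant.

```python
import numpy as np

def f0_modulated_complex(f0_contour, amp_contour, fs, n_harmonics=3):
    """Synthesize a harmonic complex whose instantaneous frequency
    follows `f0_contour` (Hz, one value per sample; 0 = unvoiced) and
    whose amplitude follows `amp_contour` (same length)."""
    # Integrate instantaneous frequency to get phase.
    phase = 2 * np.pi * np.cumsum(f0_contour) / fs
    tone = sum(np.sin(k * phase) for k in range(1, n_harmonics + 1))
    tone = tone * (f0_contour > 0)      # silence unvoiced frames
    return amp_contour * tone / n_harmonics

# Hypothetical contours: a 0.5 s vowel with a falling F0.
fs = 16000
n = fs // 2
f0 = np.linspace(220.0, 180.0, n)       # Hz, hypothetical
amp = np.hanning(n)                     # hypothetical amplitude envelope
signal = f0_modulated_complex(f0, amp, fs)
```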

A psychophysical method for measuring spatial resolution in cochlear implants

Azadpour, Mahan; McKay, Colette M
A novel psychophysical method was developed for assessing spatial resolution in cochlear implants. Spectrally flat and spectrally peaked pulse-train stimuli were generated by interleaving pulses on 11 electrodes. Spectrally flat stimuli used loudness-balanced currents, and spectrally peaked stimuli had a single spatial ripple, with the current on the middle electrode raised to create a peak while the currents on two electrodes equally spaced at a variable distance from the peak electrode were reduced to create valleys. The currents on peak and valley electrodes were adjusted to balance the overall loudness with the spectrally flat stimulus, while keeping the currents on flanking electrodes fixed. The psychometric functions obtained from percent-correct discrimination of peaked versus flat stimuli as a function of the distance between peak and valley electrodes were used to quantify spatial resolution for each of the eight subjects. The ability to resolve the spatial ripple correlated strongly with current level difference limens measured on the peak electrode. The results were consistent with a hypothesis that a factor other than spread of excitation (such as neural response variance) might underlie much of the variability in spatial resolution. Resolution ability was not correlated with phoneme recognition in quiet or sentence recognition in quiet and background noise, consistent with a hypothesis that implantees rely on cues other than fine spectral detail to identify speech, perhaps because this detail is poorly accessible or unreliable.
PMCID: 3254715
PMID: 22002609
ISSN: 1438-7573
CID: 2689952
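
The psychometric functions described here relate percent-correct discrimination to the peak-valley electrode distance. A minimal sketch of fitting a logistic function that rises from chance to 100% correct using scipy; the 50% chance level (a two-alternative task is assumed) and the data points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(d, midpoint, slope):
    """Logistic function rising from 50% (chance) to 100% correct."""
    return 50.0 + 50.0 / (1.0 + np.exp(-(d - midpoint) / slope))

# Hypothetical data: percent correct vs. peak-to-valley electrode distance.
distance = np.array([1, 2, 3, 4, 5])                     # electrodes
pct_correct = np.array([52.0, 61.0, 78.0, 91.0, 97.0])   # hypothetical

(midpoint, slope), _ = curve_fit(psychometric, distance, pct_correct,
                                 p0=[3.0, 1.0])
print(f"distance yielding 75% correct: {midpoint:.2f} electrodes")
```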

Do Humans Really Learn A^n B^n Artificial Grammars From Exemplars?

Hochmann, Jean-Remy; Azadpour, Mahan; Mehler, Jacques
An important topic in the evolution of language is what kinds of grammars can be computed by humans and other animals. Fitch and Hauser (F&H; 2004) approached this question by assessing the ability of different species to learn 2 grammars, (AB)^n and A^n B^n. A^n B^n was taken to indicate a phrase structure grammar, eliciting a center-embedded pattern. (AB)^n indicates a grammar whose strings entail only local relations between the categories of constituents. F&H's data suggest that humans, but not tamarin monkeys, learn an A^n B^n grammar, whereas both learn a simpler (AB)^n grammar (Fitch & Hauser, 2004). In their experiments, the A constituents were syllables pronounced by a female voice, whereas the B constituents were syllables pronounced by a male voice. This study proposes that what characterizes the A^n B^n exemplars is the distributional regularities of the syllables pronounced by either a male or a female rather than the underlying, more abstract patterns. This article replicates F&H's data and reports new controls using either categories similar to those in F&H or less salient ones. This article shows that distributional regularities explain the data better than grammar learning. Indeed, when familiarized with A^n B^n exemplars, participants failed to discriminate A^3 B^2 and A^2 B^3 from A^n B^n items, missing the crucial feature that the number of As must equal the number of Bs. Therefore, contrary to F&H, this study concludes that no syntactic rules implementing embedded nonadjacent dependencies were learned in these experiments. The difference between human linguistic abilities and the putative precursors in monkeys deserves further exploration.
PMID: 21585440
ISSN: 0364-0213
CID: 2689962
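
For concreteness, the two string sets and the critical foils are easy to write down. A minimal sketch using hypothetical syllable categories (F&H distinguished the categories by speaker voice, not by these particular syllables); a_n_b_n(3, 2) produces the A^3 B^2 foils that participants failed to reject.

```python
import random

# Hypothetical syllable categories standing in for the female-voice (A)
# and male-voice (B) syllables used by F&H.
A = ["ba", "di", "gu"]
B = ["ko", "mo", "ne"]

def ab_n(n):
    """(AB)^n: n adjacent A-B pairs, e.g. A B A B for n = 2."""
    return [s for _ in range(n) for s in (random.choice(A), random.choice(B))]

def a_n_b_n(n_a, n_b=None):
    """A^n B^n, or mismatched A^i B^j foils such as A^3 B^2."""
    n_b = n_a if n_b is None else n_b
    return ([random.choice(A) for _ in range(n_a)] +
            [random.choice(B) for _ in range(n_b)])

print(ab_n(2))          # grammatical (AB)^2
print(a_n_b_n(3))       # grammatical A^3 B^3
print(a_n_b_n(3, 2))    # foil A^3 B^2: violates the count constraint
```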

Phonological representations are unconsciously used when processing complex, non-speech signals

Azadpour, Mahan; Balaban, Evan
Neuroimaging studies of speech processing increasingly rely on artificial speech-like sounds whose perceptual status as speech or non-speech is assigned by simple subjective judgments; brain activation patterns are interpreted according to these status assignments. The naive perceptual status of one such stimulus, spectrally rotated speech (not consciously perceived as speech by naive subjects), was evaluated in discrimination and forced identification experiments. Discrimination of variation in spectrally rotated syllables by one group of naive subjects was strongly related to the pattern of similarities in the phonological identification of the same stimuli provided by a second, independent group of naive subjects, suggesting either that (1) naive perception of rotated syllables involves phonetic-like processing, or that (2) perception is based solely on physical acoustic similarity, and similar-sounding stimuli are assigned similar phonetic identities. Analysis of acoustic similarity (Euclidean distances between formant center frequencies) and phonetic similarity in the perception of the vowel portions of the rotated syllables revealed that discrimination was significantly and independently influenced by both acoustic and phonological information. We conclude that simple subjective assessments of artificial speech-like sounds can be misleading, as the perception of such sounds may initially and unconsciously utilize speech-like, phonological processing.
PMCID: 2292097
PMID: 18414663
ISSN: 1932-6203
CID: 2689972
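
Spectral rotation is classically implemented by low-pass filtering speech to a fixed band and amplitude-modulating it with a sinusoid at the band's upper edge, which mirrors the spectrum about the band's midpoint while preserving overall temporal and spectral complexity. A minimal sketch; the 4 kHz band edge and filter order are conventional choices for this technique and not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectrally_rotate(signal, fs, band_edge=4000.0):
    """Mirror the spectrum of `signal` about band_edge/2: low-pass to
    `band_edge`, multiply by a sinusoid at `band_edge` (which places the
    mirrored difference band back into 0..band_edge), then low-pass
    again to remove the sum band. Requires fs well above 2*band_edge."""
    sos = butter(6, band_edge, btype="lowpass", fs=fs, output="sos")
    t = np.arange(len(signal)) / fs
    carrier = np.cos(2 * np.pi * band_edge * t)
    rotated = sosfiltfilt(sos, sosfiltfilt(sos, signal) * carrier)
    return rotated / (np.max(np.abs(rotated)) + 1e-12)

# fs, speech = ...  (load a syllable here)
# rotated = spectrally_rotate(speech, fs)
# The result sounds speech-like but is not consciously heard as speech.
```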