Searched for: person:azadpm01 in-biosketch:yes
Total Results: 17


Estimating confidence intervals for information transfer analysis of confusion matrices

Azadpour, Mahan; McKay, Colette M; Smith, Robert L
A non-parametric bootstrapping method is introduced and investigated for estimating confidence intervals for information transfer (IT) measures derived from confusion matrices. Confidence intervals can be used to statistically compare ITs from two or more confusion matrices obtained in an experiment. Information transfer is a nonlinear analysis and does not satisfy many of the assumptions of parametric methods. The bootstrapping method accurately estimated IT confidence intervals as long as the confusion matrices contained a sufficiently large number of presentations per stimulus category, which is also a condition for reduced bias in IT analysis.
PMID: 24606307
ISSN: 1520-8524
CID: 2689912
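
The bootstrap procedure this abstract describes lends itself to a compact sketch. The Python below is illustrative rather than the authors' implementation: it assumes a confusion matrix of raw response counts (rows = stimuli, columns = responses), computes relative information transfer in the Miller-and-Nicely sense, and resamples each stimulus row with replacement to form a percentile confidence interval. The function names, bootstrap count, and percentile method are all assumptions.

```python
import numpy as np

def relative_info_transfer(counts):
    """Mutual information of the confusion matrix divided by
    stimulus entropy (relative IT, Miller & Nicely style)."""
    p = counts / counts.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)   # response marginals
    nz = p > 0
    mi = (p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum()
    hs = -(ps[ps > 0] * np.log2(ps[ps > 0])).sum()
    return mi / hs

def bootstrap_it_ci(counts, n_boot=2000, alpha=0.05, rng=None):
    """Non-parametric bootstrap: redraw each stimulus row's responses
    with replacement and recompute IT each time."""
    rng = rng or np.random.default_rng()
    its = np.empty(n_boot)
    for b in range(n_boot):
        rows = [rng.multinomial(row.sum(), row / row.sum()) for row in counts]
        its[b] = relative_info_transfer(np.stack(rows))
    return np.quantile(its, [alpha / 2, 1 - alpha / 2])
```

As the abstract notes, the interval is only trustworthy when each row contains enough presentations; with few trials per stimulus, both the bootstrap and the IT estimate itself are biased.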

Place specificity measured in forward and interleaved masking in cochlear implants

Azadpour, Mahan; AlJasser, Arwa; McKay, Colette M
Interleaved masking in cochlear implants is analogous to acoustic simultaneous masking and is relevant to speech processing strategies that interleave pulses on concurrently activated electrodes. In this study, the spatial decay of masking with increasing masker-probe distance was compared between forward and interleaved masking in the same group of cochlear implant users. Spatial masking patterns and the measures of place specificity were similar between forward and interleaved masking. Unlike in acoustic hearing, where broader tuning curves are obtained in simultaneous masking, the type of masking experiment did not influence the measure of place specificity in cochlear implants.
PMID: 24116536
ISSN: 1520-8524
CID: 2689922
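
The abstract does not give the place-specificity metric itself, so the following sketch is only a guess at the shape of such an analysis: it fits a line to masking (threshold shift in dB) as a function of masker-probe electrode separation and uses the slope as a specificity index, with hypothetical numbers throughout.

```python
import numpy as np

def specificity_slope(separation, masking_db):
    """Slope of masking decay vs. masker-probe separation; a steeper
    (more negative) slope implies sharper place specificity. The
    linear fit is an assumption, not the study's actual metric."""
    slope, _ = np.polyfit(separation, masking_db, 1)
    return slope

# Hypothetical threshold shifts (dB) at increasing separations (mm)
sep = np.array([0.0, 1.1, 2.2, 3.3, 4.4])
forward = np.array([6.0, 4.1, 2.5, 1.2, 0.4])
interleaved = np.array([5.8, 4.0, 2.6, 1.1, 0.5])
print(specificity_slope(sep, forward), specificity_slope(sep, interleaved))
```

Similar slopes in the two conditions would mirror the paper's finding that the type of masking did not influence place specificity.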

Overview and challenges of implantable auditory prostheses

Azadpour, Mahan
PMCID: PMC4202539
PMID: 25337335
ISSN: 2008-126X
CID: 2689932

Beneficial acoustic speech cues for cochlear implant users with residual acoustic hearing

Visram, Anisa S; Azadpour, Mahan; Kluk, Karolina; McKay, Colette M
This study investigated which acoustic cues within the speech signal are responsible for bimodal speech perception benefit. Seven cochlear implant (CI) users with usable residual hearing at low frequencies in the non-implanted ear participated. Sentence tests were performed in near-quiet (some noise on the CI side to reduce scores from ceiling) and in a modulated noise background, with the implant alone and with the addition, in the hearing ear, of one of four types of acoustic signals derived from the same sentences: (1) a complex tone modulated by the fundamental frequency (F0) and amplitude envelope contours; (2) a pure tone modulated by the F0 and amplitude contours; (3) a noise-vocoded signal; (4) unprocessed speech. The modulated tones provided F0 information without spectral shape information, whilst the vocoded signal presented spectral shape information without F0 information. For the group as a whole, only the unprocessed speech condition provided significant benefit over implant-alone scores, in both near-quiet and noise. This suggests that, on average, F0 or spectral cues in isolation provided limited benefit for these subjects in the tested listening conditions, and that the significant benefit observed in the full-signal condition was derived from implantees' use of a combination of these cues.
PMID: 22559377
ISSN: 1520-8524
CID: 2689942
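
Of the four acoustic signals, the modulated-tone conditions are easy to sketch. Assuming the F0 contour and amplitude envelope have already been extracted from the sentence (that step is omitted here), a carrier following both contours can be synthesized by phase accumulation; everything below, including the sample contours, is illustrative.

```python
import numpy as np

def f0_modulated_tone(f0_hz, env, fs=16000, n_harmonics=1):
    """Tone following a sample-rate F0 contour and amplitude envelope.
    n_harmonics=1 gives the pure-tone condition; a larger value
    approximates the complex-tone condition."""
    phase = 2 * np.pi * np.cumsum(f0_hz) / fs   # instantaneous phase
    tone = sum(np.sin(k * phase) for k in range(1, n_harmonics + 1))
    return env * tone / n_harmonics

# Illustrative contours: F0 gliding 120 -> 180 Hz under a raised-cosine envelope
fs = 16000
t = np.arange(fs) / fs
f0 = np.linspace(120, 180, fs)
env = 0.5 * (1 - np.cos(2 * np.pi * t))
pure = f0_modulated_tone(f0, env, fs)              # like condition (2)
complex_tone = f0_modulated_tone(f0, env, fs, 8)   # roughly condition (1)
```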

A psychophysical method for measuring spatial resolution in cochlear implants

Azadpour, Mahan; McKay, Colette M
A novel psychophysical method was developed for assessing spatial resolution in cochlear implants. Spectrally flat and spectrally peaked pulse train stimuli were generated by interleaving pulses on 11 electrodes. Spectrally flat stimuli used loudness-balanced currents and the spectrally peaked stimuli had a single spatial ripple with the current of the middle electrode raised to create a peak while the currents on two electrodes equally spaced at variable distance from the peak electrode were reduced to create valleys. The currents on peak and valley electrodes were adjusted to balance the overall loudness with the spectrally flat stimulus, while keeping the currents on flanking electrodes fixed. The psychometric functions obtained from percent correct discrimination of peaked and flat stimuli versus the distance between peak and valley electrodes were used to quantify spatial resolution for each of the eight subjects. The ability to resolve the spatial ripple correlated strongly with current level difference limens measured on the peak electrode. The results were consistent with a hypothesis that a factor other than spread of excitation (such as neural response variance) might underlie much of the variability in spatial resolution. Resolution ability was not correlated with phoneme recognition in quiet or sentence recognition in quiet and background noise, consistent with a hypothesis that implantees rely on cues other than fine spectral detail to identify speech, perhaps because this detail is poorly accessible or unreliable.
PMCID: PMC3254715
PMID: 22002609
ISSN: 1438-7573
CID: 2689952
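
The psychometric-function step can be sketched as a curve fit. Assuming a two-alternative task with 50% chance performance (the abstract does not state the task details), a logistic function of peak-to-valley electrode separation can be fitted to percent correct; the data and parameter choices below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(d, d50, slope):
    """Logistic rising from 50% (assumed chance level) toward 100%
    as peak-to-valley electrode separation d grows; d50 is the
    75%-correct point."""
    return 0.5 + 0.5 / (1 + np.exp(-(d - d50) / slope))

# Hypothetical proportion correct at each separation (in electrodes)
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pc = np.array([0.55, 0.62, 0.78, 0.90, 0.97])
(d50, slope), _ = curve_fit(psychometric, d, pc, p0=[3.0, 1.0])
print(f"Spatial resolution (75% point): {d50:.2f} electrodes")
```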

Do Humans Really Learn A^n B^n Artificial Grammars From Exemplars?

Hochmann, Jean-Remy; Azadpour, Mahan; Mehler, Jacques
An important topic in the evolution of language is the kinds of grammars that can be computed by humans and other animals. Fitch and Hauser (F&H; 2004) approached this question by assessing the ability of different species to learn two grammars, (AB)^n and A^n B^n. A^n B^n was taken to indicate a phrase structure grammar, eliciting a center-embedded pattern; (AB)^n indicates a grammar whose strings entail only local relations between the categories of constituents. F&H's data suggest that humans, but not tamarin monkeys, learn an A^n B^n grammar, whereas both learn a simpler (AB)^n grammar (Fitch & Hauser, 2004). In their experiments, the A constituents were syllables pronounced by a female voice, whereas the B constituents were syllables pronounced by a male voice. This study proposes that what characterizes the A^n B^n exemplars is the distributional regularities of the syllables pronounced by either a male or a female voice, rather than the underlying, more abstract patterns. This article replicates F&H's data and reports new controls using either categories similar to those in F&H or less salient ones, and shows that distributional regularities explain the data better than grammar learning. Indeed, when familiarized with A^n B^n exemplars, participants failed to discriminate A^3 B^2 and A^2 B^3 from A^n B^n items, missing the crucial feature that the number of As must equal the number of Bs. Therefore, contrary to F&H, this study concludes that no syntactic rules implementing embedded non-adjacent dependencies were learned in these experiments. The difference between human linguistic abilities and the putative precursors in monkeys deserves further exploration.
PMID: 21585440
ISSN: 0364-0213
CID: 2689962
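
The two grammars and the crucial foils are easy to make concrete. The syllable inventories below are invented stand-ins for F&H's female-voice (A) and male-voice (B) syllables; the point of the sketch is that rejecting A^3 B^2 requires tracking that the counts match, not just where each category may occur.

```python
import random

A = ["ba", "di", "yo"]   # stand-ins for the female-voice syllables
B = ["pa", "li", "mo"]   # stand-ins for the male-voice syllables

def ab_n(n):
    """(AB)^n: n adjacent A-B pairs -- only local dependencies."""
    return [s for _ in range(n) for s in (random.choice(A), random.choice(B))]

def a_m_b_k(m, k):
    """A^m B^k: m A-syllables then k B-syllables. A^n B^n is the
    special case m == k; foils such as A^3 B^2 break the equality
    while preserving the distributional layout."""
    return [random.choice(A) for _ in range(m)] + \
           [random.choice(B) for _ in range(k)]

print(ab_n(2))        # grammatical (AB)^2
print(a_m_b_k(2, 2))  # grammatical A^2 B^2
print(a_m_b_k(3, 2))  # foil A^3 B^2: same layout, unequal counts
```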

Phonological representations are unconsciously used when processing complex, non-speech signals

Azadpour, Mahan; Balaban, Evan
Neuroimaging studies of speech processing increasingly rely on artificial speech-like sounds whose perceptual status as speech or non-speech is assigned by simple subjective judgments; brain activation patterns are interpreted according to these status assignments. The naive perceptual status of one such stimulus, spectrally-rotated speech (not consciously perceived as speech by naive subjects), was evaluated in discrimination and forced identification experiments. Discrimination of variation in spectrally-rotated syllables in one group of naive subjects was strongly related to the pattern of similarities in phonological identification of the same stimuli provided by a second, independent group of naive subjects, suggesting either that (1) naive rotated syllable perception involves phonetic-like processing, or (2) that perception is solely based on physical acoustic similarity, and similar sounds are provided with similar phonetic identities. Analysis of acoustic (Euclidean distances of center frequency values of formants) and phonetic similarities in the perception of the vowel portions of the rotated syllables revealed that discrimination was significantly and independently influenced by both acoustic and phonological information. We conclude that simple subjective assessments of artificial speech-like sounds can be misleading, as perception of such sounds may initially and unconsciously utilize speech-like, phonological processing.
PMCID: PMC2292097
PMID: 18414663
ISSN: 1932-6203
CID: 2689972
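
Spectral rotation, the manipulation behind these stimuli, maps each frequency f within a band to fc - f, preserving spectrotemporal complexity while disrupting phonetic identity. The classic implementation ring-modulates a low-passed signal; the FFT-based version below is a simpler illustration, and the cutoff value is an assumption.

```python
import numpy as np

def spectrally_rotate(x, fs, fc=4000.0):
    """Invert the spectrum of x about fc/2 within the 0..fc band,
    discarding energy above fc. A crude single-frame illustration;
    real stimuli would be built more carefully (e.g., filtering
    plus ring modulation, as in Blesser's method)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = freqs <= fc
    Y = np.zeros_like(X)
    Y[band] = X[band][::-1]          # frequency f -> fc - f
    return np.fft.irfft(Y, n=len(x))
```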