Results 1 - 13 of 13
1.
Ear Hear ; 38(5): e292-e304, 2017.
Article in English | MEDLINE | ID: mdl-28353522

ABSTRACT

OBJECTIVE: This study investigated the possible impact of simulated hearing loss on speech perception in Spanish-English bilingual children. To avoid confound between individual differences in hearing-loss configuration and linguistic experience, threshold-elevating noise simulating a mild-to-moderate sloping hearing loss was used with normal-hearing listeners. The hypotheses were that: (1) bilingual children can perform similarly to English-speaking monolingual peers in quiet; (2) for both bilingual and monolingual children, noise and simulated hearing loss would have detrimental impacts consistent with their acoustic characteristics (i.e., consonants with high-frequency cues remain highly intelligible in speech-shaped noise, but suffer from simulated hearing loss more than other consonants); (3) differences in phonology and acquisition order between Spanish and English would have additional negative influence on bilingual children's recognition of some English consonants. DESIGN: Listeners were 11 English-dominant, Spanish-English bilingual children (6 to 12 years old) and 12 English-speaking, monolingual age peers. All had normal hearing and age-appropriate nonverbal intelligence and expressive English vocabulary. Listeners performed a listen-and-repeat speech perception task. Targets were 13 American English consonants embedded in vowel-consonant-vowel (VCV) syllables. VCVs were presented in quiet and in speech-shaped noise at signal-to-noise ratios (SNRs) of -5, 0, 5 dB (normal-hearing condition). For the simulated hearing-loss condition, threshold-elevating noise modeling a mild-to-moderate sloping sensorineural hearing loss profile was added to the normal-hearing stimuli for 0, 5 dB SNR, and quiet. Responses were scored for consonant correct. Individual listeners' performance was summarized for average across 13 consonants (overall) and for individual consonants. RESULTS: Groups were compared for the effects of background noise and simulated hearing loss. 
As predicted, the groups performed similarly in quiet. The simulated hearing loss had a considerable detrimental impact on both groups, even in the absence of speech-shaped noise. Contrary to our prediction, no group difference was observed at any SNR in either condition. However, although nonsignificant, the greater within-group variance for the bilingual children in the normal-hearing condition indicated a wider "normal" range than for the monolingual children. Interestingly, although it did not contribute to the group difference, bilingual children's overall consonant recognition in both conditions improved with age, whereas such a developmental trend for monolingual children was observed only in the simulated hearing-loss condition, suggesting possible effects of experience. As for the recognition of individual consonants, the influence of background noise or simulated hearing loss was similar between groups and was consistent with the prediction based on their acoustic characteristics. CONCLUSIONS: The results demonstrated that school-age, English-dominant, Spanish-English bilingual children can recognize English consonants in a background of speech-shaped noise with average accuracy similar to that of English-speaking monolingual age peers. The general impact of simulated hearing loss was also similar between bilingual and monolingual children. Thus, our hypothesis that bilingual children's English consonant recognition would suffer from background noise or simulated hearing loss more than their monolingual peers' was rejected. However, the present results raise several issues that warrant further investigation, including the possible difference in the "normal" range for bilingual and monolingual children, the influence of experience, the impact of actual hearing loss on bilingual children, and stimulus quality.
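The SNR manipulation this abstract describes (presenting targets in speech-shaped noise at fixed signal-to-noise ratios) can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' stimulus-generation code; the function name and signature are assumptions.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise level difference equals `snr_db`,
    then add it to `speech` (both are 1-D waveform arrays of equal length)."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)
```

Applying this at -5, 0, and +5 dB to the same clean VCV tokens would reproduce the kind of graded masking conditions used in the study.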


Subject(s)
Hearing Loss, Sensorineural ; Multilingualism ; Speech Perception ; Child ; Female ; Humans ; Language ; Male ; Noise ; Phonetics ; Signal-To-Noise Ratio ; United States
2.
Ear Hear ; 38(3): e180-e192, 2017.
Article in English | MEDLINE | ID: mdl-28045838

ABSTRACT

OBJECTIVES: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. DESIGN: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. RESULTS: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition.
Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. CONCLUSIONS: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.
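The gating paradigm described above (a target word revealed in progressively longer onset chunks) can be sketched as follows. The gate size and sampling rate here are illustrative assumptions; the study's actual increments are not specified in this abstract.

```python
import numpy as np

def gate_series(word, gate_ms=10, fs=16000):
    """Return progressively longer onsets of a word's waveform, grown by a
    fixed gate increment, ending with the complete word."""
    hop = max(1, int(fs * gate_ms / 1000))
    gates = [word[:n] for n in range(hop, len(word), hop)]
    gates.append(word)  # the final gate is always the full word
    return gates
```

Each gated onset would then be spliced back onto its carrier-sentence frame before presentation.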


Subject(s)
Hearing Loss ; Speech Perception ; Auditory Threshold ; Case-Control Studies ; Child ; Child, Preschool ; Female ; Humans ; Language ; Male
3.
Ear Hear ; 37(4): 492-4, 2016.
Article in English | MEDLINE | ID: mdl-26862712

ABSTRACT

This study evaluated the English version of the Computer-Assisted Speech Perception Assessment (E-CASPA) with Spanish-English bilingual children. E-CASPA has been evaluated with monolingual English speakers ages 5 years and older, but it is unknown whether a separate norm is necessary for bilingual children. Eleven Spanish-English bilingual and 12 English monolingual children (6 to 12 years old) with normal hearing participated. Responses were scored by word, phoneme, consonant, and vowel. Regardless of scoring method, performance across the three signal-to-noise ratio conditions was similar between groups, suggesting that the same norm can be used for both bilingual and monolingual children.


Subject(s)
Multilingualism ; Speech Perception ; Case-Control Studies ; Child ; Diagnosis, Computer-Assisted ; Female ; Humans ; Language ; Male ; Signal-To-Noise Ratio
4.
J Acoust Soc Am ; 127(5): 3177-88, 2010 May.
Article in English | MEDLINE | ID: mdl-21117766

ABSTRACT

In contrast to the availability of consonant confusion studies with adults, to date, no investigators have compared children's consonant confusion patterns in noise to those of adults in a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each at ages 4-5, 6-7, and 8-9 years) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although performance improved as a function of age, the overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error prone than others. These results suggested that children's use of phonetic cues does not develop at the same rate for all phonetic features.


Subject(s)
Language ; Noise/adverse effects ; Perceptual Masking ; Phonetics ; Recognition, Psychology ; Speech Acoustics ; Speech Intelligibility ; Speech Perception ; Acoustic Stimulation ; Adult ; Age Factors ; Audiometry, Speech ; Child ; Child, Preschool ; Cues ; Humans
5.
Ear Hear ; 31(3): 345-55, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20081536

ABSTRACT

OBJECTIVE: Although numerous studies have investigated the effects of single-microphone digital noise-reduction algorithms for adults with hearing loss, similar studies have not been conducted with young hearing-impaired children. The goal of this study was to examine the effects of a commonly used digital noise-reduction scheme (spectral subtraction) in children with mild to moderately severe sensorineural hearing losses. It was hypothesized that the process of spectral subtraction may alter or degrade speech signals in some way. Such degradation may have little influence on the perception of speech by hearing-impaired adults, who are likely to use contextual information under such circumstances. For young children who are still developing various language skills, however, signal degradation may have a more detrimental effect on the perception of speech. DESIGN: Sixteen children (eight 5- to 7-yr-olds and eight 8- to 10-yr-olds) with mild to moderately severe hearing loss participated in this study. All participants wore binaural behind-the-ear hearing aids in which noise-reduction processing was performed independently in 16 bands with center frequencies spaced 500 Hz apart up to 7500 Hz. Test stimuli were nonsense syllables, words, and sentences in a background of noise. For all stimuli, data were obtained in noise-reduction (NR) on and off conditions. RESULTS: In general, performance improved as a function of speech-to-noise ratio for all three speech materials. The main effect for stimulus type was significant, and post hoc comparisons of stimulus type indicated that speech recognition was higher for sentences than for both nonsense syllables and words, but no significant differences were observed between nonsense syllables and words. The main effect for NR and the two-way interaction between NR and stimulus type were not significant. Significant age group effects were observed, but the two-way interaction between NR and age group was not significant.
CONCLUSIONS: Consistent with previous findings from studies with adults, results suggest that the form of NR used in this study does not have a negative effect on the overall perception of nonsense syllables, words, or sentences across the age range (5 to 10 yrs) and speech-to-noise ratios (0, +5, and +10 dB) tested.
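Spectral subtraction, the NR scheme evaluated here, removes an estimate of the noise magnitude spectrum from each analysis frame while keeping the noisy phase. The following is a minimal single-frame sketch, not the hearing aids' 16-band implementation; the over-subtraction factor and spectral floor are assumed parameters.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=1.0, floor=0.05):
    """Subtract an estimated noise magnitude spectrum from one frame's FFT,
    keeping a spectral floor to limit musical-noise artifacts, and
    resynthesize with the original (noisy) phase."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))
```

A full system would run this frame by frame with overlap-add and update `noise_mag` during speech pauses.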


Subject(s)
Hearing Aids ; Hearing Loss, Sensorineural/physiopathology ; Hearing Loss, Sensorineural/therapy ; Language Development ; Noise/prevention & control ; Speech Perception/physiology ; Acoustic Stimulation ; Algorithms ; Auditory Threshold ; Child ; Child, Preschool ; Humans ; Phonetics ; Speech Reception Threshold Test
6.
J Acoust Soc Am ; 124(1): 576-88, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18647000

ABSTRACT

Acoustic and perceptual similarities between Japanese and American English (AE) vowels were investigated in two studies. In study 1, a series of discriminant analyses were performed to determine acoustic similarities between Japanese and AE vowels, each spoken by four native male speakers using F1, F2, and vocalic duration as input parameters. In study 2, the Japanese vowels were presented to native AE listeners in a perceptual assimilation task, in which the listeners categorized each Japanese vowel token as most similar to an AE category and rated its goodness as an exemplar of the chosen AE category. Results showed that the majority of AE listeners assimilated all Japanese vowels into long AE categories, apparently ignoring temporal differences between 1- and 2-mora Japanese vowels. In addition, not all perceptual assimilation patterns reflected context-specific spectral similarity patterns established by discriminant analysis. It was hypothesized that this incongruity between acoustic and perceptual similarity may be due to differences in distributional characteristics of native and non-native vowel categories that affect the listeners' perceptual judgments.
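Study 1's discriminant analyses classify vowel tokens from F1, F2, and vocalic duration. As a rough numpy-only stand-in, the sketch below uses a nearest-class-mean classifier rather than true linear discriminant analysis, and the feature values in the usage test are invented for illustration.

```python
import numpy as np

def nearest_mean_classify(train_X, train_y, test_X):
    """Assign each test token to the vowel category whose training-set mean is
    closest in (F1, F2, duration) feature space."""
    train_y = np.asarray(train_y)
    labels = sorted(set(train_y.tolist()))
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    # Euclidean distance from every test token to every category mean
    dists = np.linalg.norm(test_X[:, None, :] - means[None, :, :], axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]
```

In practice the features should be standardized first, since F2 (Hz) and duration (ms) live on very different scales; discriminant analysis handles this through its pooled covariance estimate.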


Subject(s)
Phonetics ; Speech Acoustics ; Speech Perception ; Asian People ; Humans ; United States
7.
J Speech Lang Hear Res ; 51(5): 1369-80, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18664693

ABSTRACT

PURPOSE: Recent studies from the authors' laboratory have suggested that reduced audibility in the high frequencies (because of the bandwidth of hearing instruments) may play a role in the delays in phonological development often exhibited by children with hearing impairment. The goal of the current study was to extend previous findings on the effect of bandwidth on fricatives/affricates to more complex stimuli. METHOD: Nine fricatives/affricates embedded in 2-syllable nonsense words were filtered at 5 and 10 kHz and presented to normal-hearing 6- to 7-year-olds who repeated words exactly as heard. Responses were recorded for subsequent phonetic and acoustic analyses. RESULTS: Significant effects of talker gender and bandwidth were found, with better performance for the male talker and the wider bandwidth condition. In contrast to previous studies, relatively small (5%) mean bandwidth effects were observed for /s/ and /z/ spoken by the female talker. Acoustic analyses of stimuli used in the previous and the current studies failed to explain this discrepancy. CONCLUSIONS: It appears likely that a combination of factors (i.e., dynamic cues, prior phonotactic knowledge, and perhaps other unidentified cues to fricative identity) may have facilitated the perception of these complex nonsense words in the current study.


Subject(s)
Hearing Aids ; Hearing Loss/complications ; Hearing Loss/therapy ; Language Development Disorders/etiology ; Phonetics ; Child ; Female ; Humans ; Language Tests ; Male ; Pitch Perception ; Psychoacoustics ; Speech Perception
8.
J Speech Lang Hear Res ; 51(6): 1480-93, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18664694

ABSTRACT

PURPOSE: K. Nishi and D. Kewley-Port (2007) trained Japanese listeners to perceive 9 American English monophthongs and showed that a protocol using all 9 vowels (fullset) produced better results than the one using only the 3 more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the 2 vowel sets would provide more effective training. METHOD: Three groups of 5 Korean listeners were trained on American English vowels for 9 days using one of the 3 protocols: fullset only, first 3 days on subset then 6 days on fullset, or first 6 days on fullset then 3 days on subset. Participants' performance was assessed by pre- and posttraining tests, as well as by a midtraining test. RESULTS: (a) Fullset training was effective for Koreans as well as Japanese, (b) no advantage was found for the 2 combined protocols over the fullset-only protocol, and (c) sustained "nonimprovement" was observed for training using one of the combined protocols. CONCLUSIONS: In using subsets for training on American English vowels, care should be taken not only in the selection of subset vowels but also in the training orders of subsets.


Subject(s)
Asian People ; Phonetics ; Speech Perception ; Teaching ; Adult ; Female ; Humans ; Linguistics/methods ; Male ; Multilingualism ; Speech Production Measurement
9.
J Speech Lang Hear Res ; 50(6): 1496-509, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18055770

ABSTRACT

PURPOSE: Studies on speech perception training have shown that adult 2nd language learners can learn to perceive non-native consonant contrasts through laboratory training. However, research on perception training for non-native vowels is still scarce, and none of the previous vowel studies trained more than 5 vowels. In the present study, the influence of training set size was investigated by training native Japanese listeners to identify American English (AE) vowels. METHOD: Twelve Japanese learners of English were trained for 9 days either on 9 AE monophthongs (fullset training group) or on the 3 more difficult vowels (subset training group). Five listeners served as controls and received no training. Performance of listeners was assessed before and after training as well as 3 months after training was completed. RESULTS: Results indicated that (a) fullset training using 9 vowels in the stimulus set improved average identification by 25%; (b) listeners in both training groups generalized improvement to untrained words and tokens spoken by novel speakers; and (c) both groups maintained improvement after 3 months. However, the subset group never improved on untrained vowels. CONCLUSIONS: Training protocols for learning non-native vowels should present a full set of vowels and should not focus only on the more difficult vowels.


Subject(s)
Asian People ; Culture ; Linguistics/methods ; Phonetics ; Speech Perception ; Teaching/methods ; Adult ; Female ; Humans ; Language ; Male ; Time Factors
10.
J Acoust Soc Am ; 122(2): 1111-29, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17672658

ABSTRACT

Cross-language perception studies report influences of speech style and consonantal context on perceived similarity and discrimination of non-native vowels by inexperienced and experienced listeners. Detailed acoustic comparisons of distributions of vowels produced by native speakers of North German (NG), Parisian French (PF) and New York English (AE) in citation (di)syllables and in sentences (surrounded by labial and alveolar stops) are reported here. Results of within- and cross-language discriminant analyses reveal striking dissimilarities across languages in the spectral/temporal variation of coarticulated vowels. As expected, vocalic duration was most important in differentiating NG vowels; it did not contribute to PF vowel classification. Spectrally, NG long vowels showed little coarticulatory change, but back/low short vowels were fronted/raised in alveolar context. PF vowels showed greater coarticulatory effects overall; back and front rounded vowels were fronted, low and mid-low vowels were raised in both sentence contexts. AE mid to high back vowels were extremely fronted in alveolar contexts, with little change in mid-low and low long vowels. Cross-language discriminant analyses revealed varying patterns of spectral (dis)similarity across speech styles and consonantal contexts that could, in part, account for AE listeners' perception of German and French front rounded vowels, and "similar" mid-high to mid-low vowels.


Subject(s)
Acoustics ; Language ; Phonetics ; Speech ; Female ; France ; Germany ; Humans ; Male ; New York ; Speech Intelligibility ; United States
11.
J Acoust Soc Am ; 118(3 Pt 1): 1751-62, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16240833

ABSTRACT

Strange et al. [J. Acoust. Soc. Am. 115, 1791-1807 (2004)] reported that North German (NG) front-rounded vowels in hVp syllables were acoustically intermediate between front and back American English (AE) vowels. However, AE listeners perceptually assimilated them as poor exemplars of back AE vowels. In this study, speaker- and context-independent cross-language discriminant analyses of NG and AE vowels produced in CVC syllables (C=labial, alveolar, velar stops) in sentences showed that NG front-rounded vowels fell within AE back-vowel distributions, due to the "fronting" of AE back vowels in alveolar/velar contexts. NG [I, e, epsilon, inverted c] were located relatively "higher" in acoustic vowel space than their AE counterparts and varied in cross-language similarity across consonantal contexts. In a perceptual assimilation task, naive listeners classified NG vowels in terms of native AE categories and rated their goodness on a 7-point scale (very foreign to very English sounding). Both front- and back-rounded NG vowels were perceptually assimilated overwhelmingly to back AE categories and judged equally good exemplars. Perceptual assimilation patterns did not vary with context, and were not always predictable from acoustic similarity. These findings suggest that listeners adopt a context-independent strategy when judging the cross-language similarity of vowels produced and presented in continuous speech contexts.


Subject(s)
Language ; Speech Acoustics ; Speech Perception/physiology ; Female ; Humans ; Male ; Phonetics ; Speech Production Measurement ; Time Factors
12.
Lang Speech ; 47(Pt 2): 139-54, 2004.
Article in English | MEDLINE | ID: mdl-15581189

ABSTRACT

This study compared the intelligibility of native and foreign-accented English speech presented in quiet and mixed with three different levels of background noise. Two native American English speakers and four native Mandarin Chinese speakers for whom English is a second language each read a list of 50 phonetically balanced sentences (Egan, 1948). The authors identified two of the Mandarin-accented English speakers as high-proficiency speakers and two as lower-proficiency speakers, based on their speech intelligibility in quiet (about 95% and 80%, respectively). Original recordings and noise-masked versions of 48 utterances were presented to monolingual American English speakers. Listeners were asked to write down the words they heard the speakers say, and intelligibility was measured as content words correctly identified. While there was a modest difference between native and high-proficiency speech in quiet (about 7%), it was found that adding noise to the signal reduced the intelligibility of high-proficiency accented speech significantly more than it reduced the intelligibility of native speech. Differences between the two groups in the three added-noise conditions ranged from about 12% to 33%. This result suggests that even high-proficiency non-native speech is less robust than native speech when it is presented to listeners under suboptimal conditions.
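Intelligibility here is scored as the proportion of content words correctly identified. A minimal sketch of that scoring is below; it matches whitespace tokens case-insensitively, whereas the actual protocol likely also handled spelling variants and morphological forms.

```python
def content_word_score(transcript, target_content_words):
    """Fraction of a sentence's content words that appear in the
    listener's written transcript (case-insensitive exact match)."""
    heard = set(transcript.lower().split())
    hits = sum(1 for w in target_content_words if w.lower() in heard)
    return hits / len(target_content_words)
```

Averaging this score over all sentences in a condition yields the percentage-correct figures quoted in the abstract.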


Subject(s)
Language Development ; Noise ; Speech Intelligibility ; Adult ; China ; England ; Female ; Humans ; Language Tests ; Linguistics ; Male ; Middle Aged
13.
J Acoust Soc Am ; 115(4): 1791-807, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15101657

ABSTRACT

Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict relative difficulties in learning to perceive (and produce) non-native phones. Cross-language spectral similarity of North German (NG) and American English (AE) vowels produced in isolated hVC(a) (di)syllables (study 1) and in hVC syllables embedded in a short sentence (study 2) was determined by discriminant analyses, to examine the extent to which acoustic similarity was predictive of perceptual similarity patterns. The perceptual assimilation of NG vowels to native AE vowel categories by AE listeners with no German language experience was then assessed directly. Both studies showed that acoustic similarity of AE and NG vowels did not always predict perceptual similarity, especially for "new" NG front rounded vowels and for "similar" NG front and back mid and mid-low vowels. Both acoustic and perceptual similarity of NG and AE vowels varied as a function of the prosodic context, although vowel duration differences did not affect perceptual assimilation patterns. When duration and spectral similarity were in conflict, AE listeners assimilated vowels on the basis of spectral similarity in both prosodic contexts.


Subject(s)
Phonetics ; Speech Perception/physiology ; Adult ; Discriminant Analysis ; Female ; Germany ; Humans ; Language ; Male ; Speech Acoustics ; Speech Production Measurement ; United States