Results 1 - 15 of 15
1.
bioRxiv ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39282463

ABSTRACT

Musical training has been associated with enhanced neural processing of sounds, as measured via the frequency following response (FFR), implying the potential for human subcortical neural plasticity. We conducted a large-scale multi-site preregistered study (n > 260) to replicate and extend the findings underpinning this important relationship. We failed to replicate any of the major findings published previously in smaller studies. Musical training was related neither to enhanced spectral encoding strength of a speech stimulus (/da/) in babble nor to a stronger neural-stimulus correlation. Similarly, the strength of neural tracking of a speech sound with a time-varying pitch was not related to either years of musical training or age of onset of musical training. Our findings provide no evidence for plasticity of early auditory responses based on musical training and exposure.

2.
JASA Express Lett ; 2(11): 114401, 2022 11.
Article in English | MEDLINE | ID: mdl-36456369

ABSTRACT

The potential binaural consequences of two envelope-based speech enhancement strategies (broadband compression and expansion) were examined. Sensitivity to interaural time differences imposed on four single-word stimuli was measured in listeners with normal hearing and sensorineural hearing loss. While there were no consistent effects of compression or expansion across all words, some potentially interesting word-specific effects were observed.


Subject(s)
Data Compression, Hearing Loss, Sensorineural, Refractive Surgical Procedures, Humans, Speech
3.
Trends Hear ; 26: 23312165221095357, 2022.
Article in English | MEDLINE | ID: mdl-35754372

ABSTRACT

While many studies have reported a loss of sensitivity to interaural time differences (ITDs) carried in the fine structure of low-frequency signals for listeners with hearing loss, relatively few data are available on the perception of ITDs carried in the envelope of high-frequency signals in this population. The relevant studies found stronger effects of hearing loss at high frequencies than at low frequencies in most cases, but small subject numbers and several confounding effects prevented strong conclusions from being drawn. In the present study, we revisited this question while addressing some of the issues identified in previous studies. Participants were ten young adults with normal hearing (NH) and twenty adults with sensorineural hearing impairment (HI) spanning a range of ages. ITD discrimination thresholds were measured for octave-band-wide "rustle" stimuli centered at 500 Hz or 4000 Hz, which were presented at 20 or 40 dB sensation level. Broadband rustle stimuli and 500-Hz pure-tone stimuli were also tested. Thresholds were poorer on average for the HI group than the NH group. The ITD deficit, relative to the NH group, was similar at low and high frequencies for most HI participants. For a small number of participants, however, the deficit was strongly frequency-dependent. These results provide new insights into the binaural perception of complex sounds and may inform binaural models that incorporate effects of hearing loss.


Subject(s)
Deafness, Hearing Loss, Acoustic Stimulation/methods, Auditory Perception, Hearing Tests, Humans, Young Adult
4.
J Acoust Soc Am ; 150(2): 1311, 2021 08.
Article in English | MEDLINE | ID: mdl-34470281

ABSTRACT

Previous studies have shown that for high-rate click trains and low-frequency pure tones, interaural time differences (ITDs) at the onset of the stimulus contribute most strongly to the overall lateralization percept (receive the largest perceptual weight). Previous studies have also shown that when these stimuli are modulated, ITDs during the rising portion of the modulation cycle receive increased perceptual weight. Baltzell, Cho, Swaminathan, and Best [(2020). J. Acoust. Soc. Am. 147, 3883-3894] measured perceptual weights for a pair of spoken words ("two" and "eight"), and found that word-initial phonemes receive larger weight than word-final phonemes, suggesting a "word-onset dominance" for speech. Generalizability of this conclusion was limited by coarse temporal resolution and a limited stimulus set. In the present study, temporal weighting functions (TWFs) were measured for four spoken words ("two," "eight," "six," and "nine"). Stimuli were partitioned into 30-ms bins, ITDs were applied independently to each bin, and lateralization judgments were obtained. TWFs were derived using a hierarchical regression model. Results suggest that "word-initial" onset dominance does not generalize across words and that TWFs depend in part on acoustic changes throughout the stimulus. Two model-based predictions were generated to account for observed TWFs, but neither could fully account for the perceptual data.
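The binning-and-regression procedure described above can be sketched in a few lines. This is a toy illustration, not the authors' hierarchical model: an independent ITD is drawn for each bin on each trial, simulated left/right judgments come from a hypothetical decision rule, and per-bin weights are recovered with a plain logistic regression (Newton iterations). All names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_bins = 2000, 8          # hypothetical trial count; one weight per 30-ms bin
itd_sd_us = 200.0                   # per-bin ITD spread (microseconds)

# True (unknown) temporal weights: early bins dominate in this toy example.
true_w = np.linspace(1.0, 0.2, n_bins)

# Each trial: independent ITD per bin; listener responds "right" (True) or "left" (False).
itds = rng.normal(0.0, itd_sd_us, size=(n_trials, n_bins))
p_right = 1.0 / (1.0 + np.exp(-itds @ true_w / 100.0))  # toy decision model
resp = rng.random(n_trials) < p_right

# Recover the temporal weighting function by logistic regression
# (plain Newton iterations; the study itself used a hierarchical model).
w = np.zeros(n_bins)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-itds @ w))
    grad = itds.T @ (resp - p)                          # score vector
    hess = (itds * (p * (1 - p))[:, None]).T @ itds     # Fisher information
    w += np.linalg.solve(hess, grad)

twf = w / w.sum()                   # normalized weights sum to 1
```

With enough trials the normalized weights track the true weighting profile, so the early-bin dominance built into the toy data is visible in `twf`.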


Subject(s)
Sound Localization, Acoustic Stimulation, Judgment, Speech
5.
J Acoust Soc Am ; 150(2): 1076, 2021 08.
Article in English | MEDLINE | ID: mdl-34470293

ABSTRACT

This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target speech level at SRT in a given condition and (b) introducing a floor in the model to limit extreme negative better-ear signal-to-noise ratios. The floor value was first set using SRTs measured with stationary and modulated noises. The model was then used to account for individual variations in SRTs found in two previously published data sets that used speech maskers. The model accounted well for the variation in SRTs across listeners with hearing loss, based solely on differences in audibility. When considering listeners with normal hearing, the model could predict the best SRTs, but not the poorer SRTs, suggesting that other factors limit performance when audibility (as measured with the audiogram) is not compromised.
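The "floor" revision described above can be illustrated with a minimal rule: the better-ear SNR used by the model in each frequency band is not allowed to fall below a fixed floor. The function name, band structure, and floor value below are hypothetical, not the published model parameters.

```python
def effective_better_ear_snr(snr_left_db, snr_right_db, floor_db=-20.0):
    """Better-ear SNR per frequency band, limited by a floor.

    snr_left_db / snr_right_db: per-band SNRs at the two ears (dB).
    floor_db: hypothetical floor preventing extreme negative values
    from dominating the prediction for severe hearing losses.
    """
    better = [max(l, r) for l, r in zip(snr_left_db, snr_right_db)]
    return [max(b, floor_db) for b in better]

# Example: a band with very poor SNR at both ears is clipped to the floor.
print(effective_better_ear_snr([-40.0, 5.0], [-35.0, 0.0]))  # [-20.0, 5.0]
```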


Subject(s)
Speech Intelligibility, Speech Perception, Auditory Threshold, Individuality, Noise/adverse effects, Speech Reception Threshold Test
6.
J Acoust Soc Am ; 147(6): 3883, 2020 06.
Article in English | MEDLINE | ID: mdl-32611137

ABSTRACT

Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes, and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 µs, and listeners were asked to indicate whether the speech token was presented from the left or right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.


Subject(s)
Sound Localization, Speech Perception, Acoustic Stimulation, Speech
7.
J Acoust Soc Am ; 147(3): 1546, 2020 03.
Article in English | MEDLINE | ID: mdl-32237845

ABSTRACT

Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. This paper found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.


Subject(s)
Deafness, Hearing Loss, Sensorineural, Hearing Loss, Speech Perception, Auditory Threshold, Hearing, Hearing Loss, Sensorineural/diagnosis, Humans, Noise/adverse effects, Perceptual Masking, Speech
8.
Neuroimage ; 200: 490-500, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31254649

ABSTRACT

Natural speech is organized according to a hierarchical structure, with individual speech sounds combining to form abstract linguistic units, and abstract linguistic units combining to form higher-order linguistic units. Since the boundaries between these units are not always indicated by acoustic cues, they must often be computed internally. Signatures of this internal computation were reported by Ding et al. (2016), who presented isochronous sequences of mono-syllabic words that combined to form phrases that combined to form sentences, and showed that cortical responses simultaneously encode boundaries at multiple levels of the linguistic hierarchy. In the present study, we designed melodic sequences that were hierarchically organized according to Western music conventions. Specifically, isochronous sequences of "sung" nonsense syllables were constructed such that syllables combined to form triads outlining individual chords, which combined to form harmonic progressions. EEG recordings were made while participants listened to these sequences with the instruction to detect when violations in the sequence structure occurred. We show that cortical responses simultaneously encode boundaries at multiple levels of a melodic hierarchy, suggesting that the encoding of hierarchical structure is not unique to speech. No effect of musical training on cortical encoding was observed.


Subject(s)
Auditory Perception/physiology, Cerebral Cortex/physiology, Functional Neuroimaging, Music, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Middle Aged, Speech Perception/physiology, Young Adult
9.
J Acoust Soc Am ; 144(5): 2662, 2018 11.
Article in English | MEDLINE | ID: mdl-30522300

ABSTRACT

While wide dynamic range compression (WDRC) is a standard feature of modern hearing aids, it can be difficult to fit compression settings to individual hearing aid users. The goal of the current study was to develop a practical test to learn the preference of individual listeners for different compression ratio (CR) settings in different listening conditions (speech-in-quiet and speech-in-noise). While it is possible to exhaustively test different CR settings, such methods can take many hours to complete, making them impractical. Bayesian optimization methods were used to find CR preferences in individual listeners in a relatively short amount of time. Using this practical preference learning test, individual differences in CR preference were examined across a relatively wide range of CR settings in different listening conditions. In experiment 1, the accuracy of the preference learning test in normal hearing listeners was verified. In experiment 2, it is shown that individual hearing-impaired listeners differ in their CR preferences, and listeners tended to prefer the CR setting identified by the preference learning test over both linear gain and the National Acoustic Laboratories' Nonlinear 2 (NAL-NL2) CR prescription based on their audiograms.
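The idea behind such a test is that Bayesian optimization fits a surrogate model to the settings tried so far and picks the next setting where improvement is most likely, so far fewer trials are needed than exhaustive testing. The toy below optimizes a hypothetical one-dimensional preference function over CR values with a Gaussian-process surrogate and expected improvement; the actual study used listener responses rather than a synthetic function, and every name and value here is illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Candidate compression ratios and a hidden "preference" function to optimize.
crs = np.linspace(1.0, 4.0, 31)
def preference(cr):                          # hypothetical noisy listener response
    return np.exp(-(cr - 2.2) ** 2) + 0.05 * rng.standard_normal()

def gp_posterior(x_obs, y_obs, x_new, ls=0.5, sf=1.0, noise=0.05):
    """Gaussian-process posterior mean/sd with an RBF kernel."""
    k = lambda a, b: sf * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = k(x_obs, x_new)
    alpha = np.linalg.solve(K, y_obs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.maximum(sf - np.einsum('ij,ij->j', Ks, v), 1e-9)
    return mu, np.sqrt(var)

# Bayesian-optimization loop: pick the next CR by expected improvement.
x, y = list(crs[[0, -1]]), [preference(c) for c in crs[[0, -1]]]
for _ in range(10):
    mu, sd = gp_posterior(np.array(x), np.array(y), crs)
    z = (mu - max(y)) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))    # expected improvement
    nxt = crs[int(np.argmax(ei))]
    x.append(nxt); y.append(preference(nxt))

best_cr = x[int(np.argmax(y))]
```

After a dozen "trials" the loop concentrates its samples near the preferred setting, which is the property that makes the clinical test fast.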


Subject(s)
Auditory Perception/physiology, Hearing Aids/trends, Noise/adverse effects, Patient Preference/statistics & numerical data, Acoustic Stimulation/methods, Adult, Aged, Algorithms, Bayes Theorem, Data Compression, Female, Fourier Analysis, Hearing Tests/methods, Humans, Individuality, Male, Middle Aged, Persons With Hearing Impairments/rehabilitation, Persons With Hearing Impairments/statistics & numerical data
10.
J Neurophysiol ; 118(6): 3144-3151, 2017 12 01.
Article in English | MEDLINE | ID: mdl-28877963

ABSTRACT

It has been suggested that cortical entrainment plays an important role in speech perception by helping to parse the acoustic stimulus into discrete linguistic units. However, the question of whether the entrainment response to speech depends on the intelligibility of the stimulus remains open. Studies addressing this question of intelligibility have, for the most part, significantly distorted the acoustic properties of the stimulus to degrade the intelligibility of the speech stimulus, making it difficult to compare across "intelligible" and "unintelligible" conditions. To avoid these acoustic confounds, we used priming to manipulate the intelligibility of vocoded speech. We used EEG to measure the entrainment response to vocoded target sentences that are preceded by natural speech (nonvocoded) prime sentences that are either valid (match the target) or invalid (do not match the target). For unintelligible speech, valid primes have the effect of restoring intelligibility. We compared the effect of priming on the entrainment response for both 3-channel (unintelligible) and 16-channel (intelligible) speech. We observed a main effect of priming, suggesting that the entrainment response depends on prior knowledge, but not a main effect of vocoding (16 channels vs. 3 channels). Furthermore, we found no difference in the effect of priming on the entrainment response to 3-channel and 16-channel vocoded speech, suggesting that for vocoded speech, entrainment response does not depend on intelligibility. NEW & NOTEWORTHY Neural oscillations have been implicated in the parsing of speech into discrete, hierarchically organized units. Our data suggest that these oscillations track the acoustic envelope rather than more abstract linguistic properties of the speech stimulus. Our data also suggest that prior experience with the stimulus allows these oscillations to better track the stimulus envelope.


Subject(s)
Cerebral Cortex/physiology, Memory, Implicit, Speech Intelligibility, Speech Perception, Adult, Female, Humans, Male
11.
Brain Res ; 1644: 203-12, 2016 08 01.
Article in English | MEDLINE | ID: mdl-27195825

ABSTRACT

Recent studies have uncovered a neural response that appears to track the envelope of speech, and have shown that this tracking process is mediated by attention. It has been argued that this tracking reflects a process of phase-locking to the fluctuations of stimulus energy, ensuring that this energy arrives during periods of high neuronal excitability. Because all acoustic stimuli are decomposed into spectral channels at the cochlea, and this spectral decomposition is maintained along the ascending auditory pathway and into auditory cortex, we hypothesized that the overall stimulus envelope is not as relevant to cortical processing as the individual frequency channels; attention may be mediating envelope tracking differentially across these spectral channels. To test this we reanalyzed data reported by Horton et al. (2013), where high-density EEG was recorded while adults attended to one of two competing naturalistic speech streams. In order to simulate cochlear filtering, the stimuli were passed through a gammatone filterbank, and temporal envelopes were extracted at each filter output. Following Horton et al. (2013), the attended and unattended envelopes were cross-correlated with the EEG, and local maxima were extracted at three different latency ranges corresponding to distinct peaks in the cross-correlation function (N1, P2, and N2). We found that the ratio between the attended and unattended cross-correlation functions varied across frequency channels in the N1 latency range, consistent with the hypothesis that attention differentially modulates envelope-tracking activity across spectral channels.
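The analysis pipeline described above (filterbank, per-channel envelope extraction, cross-correlation with the EEG) can be sketched as follows. This is an illustrative stand-in, not the published analysis: Butterworth band-pass filters approximate the gammatone filterbank, the "EEG" is synthesized from one channel's envelope, and all sampling rates, band edges, and delays are assumed for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs_audio = 8000                              # hypothetical sampling rate (Hz)
dur = 4.0
t = np.arange(int(fs_audio * dur)) / fs_audio
rng = np.random.default_rng(2)
audio = rng.standard_normal(t.size)          # stand-in for a speech stream

# Butterworth band-pass channels as a rough stand-in for a gammatone filterbank.
edges = [(100, 300), (300, 800), (800, 2000), (2000, 3800)]
envelopes = []
for lo, hi in edges:
    sos = butter(4, [lo, hi], btype='bandpass', fs=fs_audio, output='sos')
    band = sosfiltfilt(sos, audio)
    envelopes.append(np.abs(hilbert(band)))  # per-channel temporal envelope

# Toy "EEG": the second channel's envelope, delayed ~100 ms, plus noise.
delay = int(0.1 * fs_audio)
eeg = np.roll(envelopes[1], delay) + 0.3 * rng.standard_normal(t.size)

# Normalized cross-correlation between each channel envelope and the EEG at
# the known lag; the actual analysis scanned a range of lags and extracted
# local maxima in separate latency windows (N1, P2, N2).
def xcorr_at(env, sig, lag):
    a, b = env[:-lag], sig[lag:]
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

scores = [xcorr_at(env, eeg, delay) for env in envelopes]
```

In this toy setup the channel whose envelope actually drives the "EEG" stands out in `scores`, which is the per-channel comparison the reanalysis relies on.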


Subject(s)
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Electroencephalography, Evoked Potentials, Evoked Potentials, Auditory, Female, Humans, Male, Signal Processing, Computer-Assisted, Speech Acoustics, Young Adult
12.
Am J Audiol ; 25(1): 75-83, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26989823

ABSTRACT

PURPOSE: Understanding speech in background noise is difficult for many individuals; however, time constraints have limited its inclusion in the clinical audiology assessment battery. Phoneme scoring of words has been suggested as a method of reducing test time and variability. The purposes of this study were to establish a phoneme scoring rubric and use it in testing phoneme and word perception in noise in older individuals and individuals with hearing impairment. METHOD: Words were presented to 3 participant groups at 80 dB in speech-shaped noise at 7 signal-to-noise ratios (-10 to 35 dB). Responses were scored for words and phonemes correct. RESULTS: It was not surprising to find that phoneme scores were up to about 30% better than word scores. Word scoring resulted in larger hearing loss effect sizes than phoneme scoring, whereas scoring method did not significantly modify age effect sizes. There were significant effects of hearing loss and some limited effects of age; age effect sizes of about 3 dB and hearing loss effect sizes of more than 10 dB were found. CONCLUSION: Hearing loss is the major factor affecting word and phoneme recognition with a subtle contribution of age. Phoneme scoring may provide several advantages over word scoring. A set of recommended phoneme scoring guidelines is provided.


Subject(s)
Speech Audiometry/methods, Hearing Loss/physiopathology, Speech Perception, Adolescent, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Auditory Threshold, Case-Control Studies, Female, Hearing Loss/diagnosis, Humans, Male, Middle Aged, Phonetics, Signal-To-Noise Ratio, Young Adult
13.
Clin Neurophysiol ; 126(7): 1319-30, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25453611

ABSTRACT

OBJECTIVE: To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. METHODS: CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. RESULTS: The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. CONCLUSIONS: Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. SIGNIFICANCE: Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses.


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Noise, Adolescent, Adult, Evoked Potentials/physiology, Female, Humans, Male, Signal-To-Noise Ratio, Speech, Time Factors, Young Adult
14.
Clin Neurophysiol ; 125(2): 370-80, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24007688

ABSTRACT

OBJECTIVE: The purpose of this study was to determine the effects of SNR and signal level on the offset response of the cortical auditory evoked potential (CAEP). Successful listening often depends on how well the auditory system can extract target signals from competing background noise. Both signal onsets and offsets are encoded neurally and contribute to successful listening in noise. Neural onset responses to signals in noise demonstrate a strong sensitivity to signal-to-noise ratio (SNR) rather than signal level; however, the sensitivity of neural offset responses to these cues is not known. METHODS: We analyzed the offset response from two previously published datasets for which only the onset response was reported. For both datasets, CAEPs were recorded from young normal-hearing adults in response to a 1000-Hz tone. For the first dataset, tones were presented at seven different signal levels without background noise, while the second dataset varied both signal level and SNR. RESULTS: Offset responses demonstrated sensitivity to absolute signal level in quiet, SNR, and to absolute signal level in noise. CONCLUSIONS: Offset sensitivity to signal level when presented in noise contrasts with previously published onset results. SIGNIFICANCE: This sensitivity suggests a potential clinical measure of cortical encoding of signal level in noise.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation/methods, Adult, Humans, Noise, Signal-To-Noise Ratio
15.
Int J Otolaryngol ; 2012: 365752, 2012.
Article in English | MEDLINE | ID: mdl-23093964

ABSTRACT

The clinical usefulness of aided cortical auditory evoked potentials (CAEPs) remains unclear despite several decades of research. One major contributor to this ambiguity is the wide range of variability across published studies and across individuals within a given study; some results demonstrate expected amplification effects, while others demonstrate limited or no amplification effects. Recent evidence indicates that some of the variability in amplification effects may be explained by distinguishing between experiments that focused on physiological detection of a stimulus versus those that differentiate responses to two audible signals, or physiological discrimination. Herein, we ask if either of these approaches is clinically feasible given the inherent challenges with aided CAEPs. N1 and P2 waves were elicited from 12 noise-masked normal-hearing individuals using hearing-aid-processed 1000-Hz pure tones. Stimulus levels were varied to study the effect of hearing-aid-signal/hearing-aid-noise audibility relative to the noise-masked thresholds. Results demonstrate that clinical use of aided CAEPs may be justified when determining whether audible stimuli are physiologically detectable relative to inaudible signals. However, differentiating aided CAEPs elicited from two suprathreshold stimuli (i.e., physiological discrimination) is problematic and should not be used for clinical decision making until a better understanding of the interaction between hearing-aid-processed stimuli and CAEPs can be established.
