Results 1 - 5 of 5
1.
Biology (Basel); 13(6), 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38927323

ABSTRACT

Cortical auditory evoked potentials (CAEPs) indicate that noise degrades auditory neural encoding, causing decreased peak amplitude and increased peak latency. Different types of noise affect CAEP responses, with greater informational masking causing additional degradation. In noisy conditions, attention can improve target signals' neural encoding, reflected by an increased CAEP amplitude, which may be facilitated through various inhibitory mechanisms at both pre-attentive and attentive levels. While previous research has mainly focused on inhibition effects during attentive auditory processing in noise, the impact of noise on the neural response during the pre-attentive phase remains unclear. Therefore, this preliminary study aimed to assess the auditory gating response, reflective of the sensory inhibitory stage, to repeated vowel pairs presented in background noise. CAEPs were recorded via high-density EEG in fifteen normal-hearing adults in quiet and noise conditions with low and high informational masking. The difference between the average CAEP peak amplitude evoked by each vowel in the pair was compared across conditions. Scalp maps were generated to observe general cortical inhibitory networks in each condition. Significant gating occurred in quiet, while noise conditions resulted in a significantly decreased gating response. The gating function was significantly degraded in noise with less informational masking content, coinciding with a reduced activation of inhibitory gating networks. These findings illustrate the adverse effect of noise on pre-attentive inhibition related to speech perception.
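The gating measure described above, the difference between the average CAEP peak amplitudes evoked by the first and the repeated vowel, can be sketched in a few lines. This is a minimal illustration on synthetic averaged waveforms; the function names and the peak-search window are hypothetical, not taken from the study.

```python
import numpy as np

def peak_amplitude(erp, times, window=(0.05, 0.15)):
    """Largest positive deflection within a latency window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].max())

def gating_difference(erp_s1, erp_s2, times, window=(0.05, 0.15)):
    """Amplitude difference between the response to the first vowel (S1)
    and the repeated vowel (S2); larger values indicate stronger gating."""
    return peak_amplitude(erp_s1, times, window) - peak_amplitude(erp_s2, times, window)

# Synthetic averaged CAEPs: the repeated vowel evokes a suppressed peak,
# as expected when sensory gating is intact (e.g., in quiet).
times = np.linspace(0.0, 0.3, 301)
erp_s1 = np.exp(-((times - 0.1) / 0.02) ** 2)        # peak amplitude 1.0
erp_s2 = 0.4 * np.exp(-((times - 0.1) / 0.02) ** 2)  # suppressed to 0.4
print(round(gating_difference(erp_s1, erp_s2, times), 2))  # → 0.6
```

A degraded gating response, as reported in the noise conditions, would show up here as a difference near zero.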

2.
Hear Res; 437: 108853, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37441879

ABSTRACT

Bimodal hearing, in which a contralateral hearing aid is combined with a cochlear implant (CI), provides greater speech recognition benefits than using a CI alone. Factors predicting individual bimodal patient success are not fully understood. Previous studies have shown that bimodal benefits may be driven by a patient's ability to extract fundamental frequency (f0) and/or temporal fine structure cues (e.g., F1). Both of these features may be represented in frequency following responses (FFR) to bimodal speech. Thus, the goals of this study were to: 1) parametrically examine neural encoding of f0 and F1 in simulated bimodal speech conditions; 2) examine objective discrimination of FFRs to bimodal speech conditions using machine learning; 3) explore whether FFRs are predictive of perceptual bimodal benefit. Three vowels (/ε/, /i/, and /ʊ/) with identical f0 were manipulated by a vocoder (right ear) and low-pass filters (left ear) to create five bimodal simulations for evoking FFRs: Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. Results suggested that neural representation of the f0 and F1 FFR components was enhanced with increasing acoustic bandwidth in the simulated "non-implanted" ear. As spectral differences between vowels emerged in the FFRs with increased acoustic bandwidth, FFRs were more accurately classified and discriminated using a machine learning algorithm. Enhancement of f0 and F1 neural encoding with increasing bandwidth was collectively predictive of perceptual bimodal benefit on a speech-in-noise task. Given these results, FFR may be a useful tool to objectively assess individual variability in bimodal hearing.
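The machine-learning discrimination step above, classifying FFRs by their spectral content, can be sketched as follows. This is a toy reconstruction on synthetic trials, not the study's pipeline: the sampling rate, trial counts, noise level, and the use of scikit-learn's linear SVM are all assumptions for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, dur, f0 = 2000, 0.25, 100          # sampling rate (Hz), duration (s), shared f0 (Hz)
t = np.arange(int(fs * dur)) / fs

def synth_ffr(f1, n_trials=40, noise=0.5):
    """Toy FFR trials: a shared f0 component plus a condition-specific
    F1 component, buried in Gaussian noise."""
    return np.stack([np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f1 * t)
                     + noise * rng.standard_normal(t.size) for _ in range(n_trials)])

def spectral_features(trials):
    """Magnitude spectrum of each trial serves as the feature vector."""
    return np.abs(np.fft.rfft(trials, axis=1))

# Three simulated conditions that differ only in F1 content: as spectral
# differences between them grow, a linear classifier separates them easily.
X = np.vstack([spectral_features(synth_ffr(f1)) for f1 in (300, 500, 700)])
y = np.repeat([0, 1, 2], 40)
acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

With well-separated F1 components and moderate noise, cross-validated accuracy approaches ceiling, mirroring the finding that classification improved as spectral differences emerged with increasing acoustic bandwidth.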


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Humans, Speech, Speech Perception/physiology
3.
Audiol Res; 12(1): 89-94, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35200259

ABSTRACT

Speech frequency following responses (sFFRs) are increasingly used in translational auditory research. Statistically-based automated sFFR detection could aid response identification and provide a basis for stopping rules when recording responses in clinical and/or research applications. In this brief report, sFFRs were measured from 18 normal hearing adult listeners in quiet and speech-shaped noise. Two statistically-based automated response detection methods, the F-test and Hotelling's T2 (HT2) test, were compared based on detection accuracy and test time. Similar detection accuracy across statistical tests and conditions was observed, although the HT2 test time was less variable. These findings suggest that automated sFFR detection is robust for responses recorded in quiet and speech-shaped noise using either the F-test or HT2 test. Future studies evaluating test performance with different stimuli and maskers are warranted to determine if the interchangeability of test performance extends to these conditions.
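Hotelling's T2 detection, one of the two automated methods compared above, tests whether the complex Fourier coefficient of the response at a target frequency has a nonzero mean across epochs. A minimal sketch follows; the epoch layout, frequency, and noise level are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2(epochs, fs, freq):
    """One-sample Hotelling's T^2 on the complex Fourier coefficient at
    `freq`: tests whether the mean (real, imag) across epochs differs
    from zero, i.e., whether a phase-locked response is present."""
    n, n_samp = epochs.shape
    k = int(round(freq * n_samp / fs))            # FFT bin of the target frequency
    c = np.fft.rfft(epochs, axis=1)[:, k]
    X = np.column_stack([c.real, c.imag])         # n epochs x 2 components
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    F = t2 * (n - 2) / (2 * (n - 1))              # p = 2 response components
    return t2, float(f_dist.sf(F, 2, n - 2))

# A phase-locked 100 Hz component present in all 30 epochs should be detected.
rng = np.random.default_rng(0)
fs, n_samp = 2000, 500
t = np.arange(n_samp) / fs
epochs = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal((30, n_samp))
t2, p = hotelling_t2(epochs, fs, freq=100)
print(f"T2 = {t2:.1f}, p = {p:.2g}")
```

A stopping rule of the kind the report motivates would simply halt averaging once p falls below a preset criterion.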

4.
Audiol Res; 11(1): 38-46, 2021 Jan 28.
Article in English | MEDLINE | ID: mdl-33525531

ABSTRACT

Temporal acuity is the ability to differentiate between sounds based on fluctuations in the waveform envelope. The proximity of successive sounds and background noise diminishes the ability to track rapid changes between consecutive sounds. We determined whether a physiological correlate of temporal acuity is also affected by these factors. We recorded the auditory brainstem response (ABR) from human listeners using a harmonic complex (S1) followed by a brief tone burst (S2) with the latter serving as the evoking signal. The duration and depth of the silent gap between S1 and S2 were manipulated, and the peak latency and amplitude of wave V were measured. The latency of the responses decreased significantly as the duration or depth of the gap increased. The amplitude of the responses was not affected by the duration or depth of the gap. These findings suggest that changing the physical parameters of the gap affects the auditory system's ability to encode successive sounds.
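Measuring wave V peak latency and amplitude, the two dependent measures above, amounts to locating the maximum of the averaged ABR within a post-onset search window. The sketch below uses a synthetic waveform; the window bounds and peak shape are hypothetical, not the study's values.

```python
import numpy as np

def wave_v_metrics(abr, fs, search_ms=(5.0, 9.0)):
    """Peak latency (ms) and amplitude of wave V, taken as the maximum
    of the averaged ABR within a hypothetical post-onset search window."""
    i0, i1 = (int(m * fs / 1000) for m in search_ms)
    seg = abr[i0:i1]
    k = int(np.argmax(seg))
    return (i0 + k) * 1000.0 / fs, float(seg[k])

# Synthetic averaged ABR with a wave-V-like peak at 6.5 ms.
fs = 20000
t_ms = np.arange(0, 12.0, 1000.0 / fs)
abr = 0.5 * np.exp(-((t_ms - 6.5) / 0.4) ** 2)
latency_ms, amplitude = wave_v_metrics(abr, fs)
print(latency_ms, amplitude)  # → 6.5 0.5
```

The reported latency shifts with gap duration and depth would appear here as changes in the first returned value across conditions, with the second (amplitude) remaining stable.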

5.
Front Neurosci; 15: 747303, 2021.
Article in English | MEDLINE | ID: mdl-34987356

ABSTRACT

The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with a SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
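The finding that training-induced amplitude enhancement improved linear SVM classification can be illustrated with a toy contrast: identical tonal "formant" tokens at a weaker (pre-training) versus stronger (post-training) response gain. Everything below is a synthetic sketch; the gains, frequencies, and noise level are assumptions, not values from the experiment.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 4000
t = np.arange(0, 0.2, 1.0 / fs)

def token_features(freqs, gain, n_trials=30, noise=1.0):
    """Toy FFR_SWS trials: one tonal 'formant' frequency per token class,
    scaled by `gain` to mimic weaker (pre) vs. enhanced (post) responses.
    Returns magnitude spectra as features, plus class labels."""
    X, y = [], []
    for label, f in enumerate(freqs):
        for _ in range(n_trials):
            X.append(gain * np.sin(2 * np.pi * f * t)
                     + noise * rng.standard_normal(t.size))
            y.append(label)
    return np.abs(np.fft.rfft(np.asarray(X), axis=1)), np.asarray(y)

# Larger response amplitude (post-training) yields better classification.
accs = {}
for phase, gain in [("pre", 0.05), ("post", 0.25)]:
    X, y = token_features((300, 500, 700), gain)
    accs[phase] = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
print(accs)
```

The same logic underlies the reported result: enhanced post-training FFR amplitudes raise the spectral signal-to-noise ratio, which the classifier converts into higher accuracy.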
