Results 1 - 11 of 11
1.
bioRxiv ; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38352339

ABSTRACT

Auditory neural coding of speech-relevant temporal cues can be noninvasively probed using envelope following responses (EFRs), neural ensemble responses phase-locked to the stimulus amplitude envelope. EFRs emphasize different neural generators, such as the auditory brainstem or auditory cortex, by altering the temporal modulation rate of the stimulus. EFRs can be an important diagnostic tool to assess auditory neural coding deficits that go beyond traditional audiometric estimations. Existing approaches to measuring EFRs use discrete amplitude-modulated (AM) tones of varying modulation frequencies, which is time-consuming and inefficient, impeding clinical translation. Here we present a faster and more efficient framework to measure EFRs across a range of AM frequencies using stimuli that dynamically vary in modulation rate, combined with spectrally specific analyses that offer optimal spectrotemporal resolution. EFRs obtained from several species (humans, Mongolian gerbils, Fischer-344 rats, and CBA/CaJ mice) showed robust, high-SNR tracking of dynamic AM trajectories (up to 800 Hz in humans and 1.4 kHz in rodents), with a fivefold decrease in recording time and a thirtyfold increase in spectrotemporal resolution. EFR amplitudes for dynamic AM stimuli and traditional discrete AM tokens within the same subjects were highly correlated (94% variance explained) across species. Hence, we establish a time-efficient and spectrally specific approach to measuring EFRs. These results could yield novel clinical diagnostics for precision audiology by enabling rapid, objective assessment of temporal processing along the entire auditory neuraxis.
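The dynamic AM approach can be illustrated with a short sketch: an amplitude envelope whose modulation rate sweeps continuously across the measurement range, so a single recording covers many modulation frequencies. All parameters below (sampling rate, carrier, sweep trajectory, modulation depth) are illustrative assumptions, not the stimulus design used in the study.

```python
import numpy as np

def dynamic_am_stimulus(fs=48_000, dur=10.0, fc=4000.0,
                        fm_start=20.0, fm_end=800.0):
    """Tone carrier whose AM rate sweeps logarithmically from fm_start to fm_end.

    Hypothetical parameterization for illustration only; the actual stimulus
    (carrier type, sweep trajectory, modulation depth) may differ.
    """
    t = np.arange(int(fs * dur)) / fs
    # Instantaneous modulation frequency following a log sweep.
    fm = fm_start * (fm_end / fm_start) ** (t / dur)
    # Modulator phase is the cumulative integral of instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(fm) / fs
    envelope = 0.5 * (1.0 + np.sin(phase))   # 100% modulation depth, range [0, 1]
    carrier = np.sin(2 * np.pi * fc * t)
    return envelope * carrier

stim = dynamic_am_stimulus()
```

Because the modulation rate changes smoothly over time, a spectrotemporally specific analysis (e.g., tracking response energy along the known sweep trajectory) can read out EFR amplitude as a continuous function of AM frequency.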

2.
J Assoc Res Otolaryngol ; 25(1): 35-51, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38278969

ABSTRACT

PURPOSE: Frequency selectivity is a fundamental property of the peripheral auditory system; however, the invasiveness of auditory nerve (AN) experiments limits its study in the human ear. Compound action potentials (CAPs) associated with forward masking have been suggested as an alternative to assess cochlear frequency selectivity. Previous methods relied on an empirical comparison of AN and CAP tuning curves in animal models, arguably not taking full advantage of the information contained in forward-masked CAP waveforms. METHODS: To improve the estimation of cochlear frequency selectivity based on the CAP, we introduce a convolution model to fit forward-masked CAP waveforms. The model generates masking patterns that, when convolved with a unitary response, can predict the masking of the CAP waveform induced by Gaussian noise maskers. Model parameters, including those characterizing frequency selectivity, are fine-tuned by minimizing waveform prediction errors across numerous masking conditions, yielding robust estimates. RESULTS: The method was applied to click-evoked CAPs at the round window of anesthetized chinchillas using notched-noise maskers with various notch widths and attenuations. The estimated quality factor Q10 as a function of center frequency is shown to closely match the average quality factor obtained from AN fiber tuning curves, without the need for an empirical correction factor. CONCLUSION: This study establishes a moderately invasive method for estimating cochlear frequency selectivity with potential applicability to other animal species or humans. Beyond the estimation of frequency selectivity, the proposed model proved to be remarkably accurate in fitting forward-masked CAP responses and could be extended to study more complex aspects of cochlear signal processing (e.g., compressive nonlinearities).
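The core of the convolution model can be sketched in a few lines: a latency-dependent masking/excitation pattern, convolved with a unitary response (the contribution of a single AN fiber to the round-window potential), predicts the CAP waveform. The waveform shapes and parameters below are hypothetical placeholders; in the published method they are fitted to recorded waveforms across masking conditions.

```python
import numpy as np

def predict_cap(excitation_pattern, unitary_response):
    """Predict a CAP waveform as the convolution of an excitation/masking
    pattern with a unitary response, truncated to the pattern's length.

    Both input shapes here are illustrative assumptions, not fitted values.
    """
    return np.convolve(excitation_pattern, unitary_response)[:len(excitation_pattern)]

fs = 100_000                                # Hz, hypothetical sampling rate
t = np.arange(0, 0.005, 1 / fs)             # 5 ms time axis (500 samples)
# Hypothetical unitary response: a damped sinusoid.
unitary = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 1000 * t)
# Hypothetical excitation pattern: a burst of synchronized firing near 1.5 ms.
pattern = np.exp(-0.5 * ((t - 0.0015) / 0.0002) ** 2)
cap = predict_cap(pattern, unitary)
```

Fitting then amounts to adjusting the parameters that generate the masking patterns (including frequency-selectivity parameters) to minimize the prediction error against measured forward-masked CAPs.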


Subject(s)
Cochlea , Cochlear Nerve , Animals , Humans , Action Potentials , Round Window , Chinchilla
3.
J Otol ; 18(3): 152-159, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37497332

ABSTRACT

Background/purpose: With increasing access to the Internet, patients frequently use it for hearing healthcare information. No study has examined the information about hearing loss available in the Mandarin language on online video-sharing platforms. The study's primary purpose is to investigate the content, source, understandability, and actionability of hearing loss information in the one hundred most popular Mandarin-language online videos. Method: In this project, publicly accessible online videos were analyzed. One hundred of the most popular Mandarin-language videos about hearing loss were identified (51 videos on YouTube and 49 on the Bilibili video-sharing platform). They were manually coded for popularity metrics, source, and content. Each video was also rated using the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-AV) to measure understandability and actionability. Results: The video sources were classified as media (n = 36), professional (n = 39), or consumer (n = 25). The videos covered various topics, including the symptoms, consequences, and treatment of hearing loss. Overall, the videos attained adequate understandability scores (mean = 73.6%) but low actionability scores (mean = 43.4%). Conclusions: While existing online content related to hearing loss is quite diverse and largely understandable, these videos provide limited actionable information. Hearing healthcare professionals, media, and content creators can help patients better understand their conditions and make educated hearing healthcare decisions by focusing on actionability in their online videos.
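For reference, PEMAT-style scores are computed as the percentage of applicable items rated "agree" (agree = 1, disagree = 0, not-applicable items excluded). A minimal sketch of this scoring rule, with hypothetical item ratings:

```python
def pemat_score(ratings):
    """PEMAT-style score: percentage of applicable items rated 'agree'.

    ratings: list of 1 (agree), 0 (disagree), or None (not applicable).
    Simplified illustration of the scoring rule; the full PEMAT-AV instrument
    defines separate understandability and actionability item sets.
    """
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Hypothetical ratings for one video: 3 of 4 applicable items agreed.
score = pemat_score([1, 1, 0, None, 1])   # 75.0
```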

4.
Commun Biol ; 6(1): 456, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37130918

ABSTRACT

For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation). We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation. Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type. One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task. These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.


Subject(s)
Auditory Cortex , Animal Vocalization , Animals , Guinea Pigs , Noise , Sound , Auditory Perception , Callithrix
5.
Hear Res ; 429: 108697, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36696724

ABSTRACT

To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.


Subject(s)
Auditory Cortex , Humans , Auditory Cortex/physiology , Acoustic Stimulation , Auditory Perception/physiology , Auditory Pathways/physiology , Neural Networks (Computer) , Neurological Models
6.
Elife ; 11, 2022 10 13.
Article in English | MEDLINE | ID: mdl-36226815

ABSTRACT

Vocal animals produce multiple categories of calls with high between- and within-subject variability, over which listeners must generalize to accomplish call categorization. The behavioral strategies and neural mechanisms that support this ability to generalize are largely unexplored. We previously proposed a theoretical model that accomplished call categorization by detecting features of intermediate complexity that best contrasted each call category from all other categories. We further demonstrated that some neural responses in the primary auditory cortex were consistent with such a model. Here, we asked whether a feature-based model could predict call categorization behavior. We trained both the model and guinea pigs (GPs) on call categorization tasks using natural calls. We then tested categorization by the model and GPs using temporally and spectrally altered calls. Both the model and GPs were surprisingly resilient to temporal manipulations, but sensitive to moderate frequency shifts. Critically, the model predicted about 50% of the variance in GP behavior. By adopting different model training strategies and examining features that contributed to solving specific tasks, we could gain insight into possible strategies used by animals to categorize calls. Our results validate a model that uses the detection of intermediate-complexity contrastive features to accomplish call categorization.


Subject(s)
Auditory Cortex , Guinea Pigs , Animals , Auditory Cortex/physiology , Animal Vocalization/physiology , Animal Behavior/physiology , Auditory Perception/physiology , Acoustic Stimulation
7.
Hear Res ; 424: 108603, 2022 10.
Article in English | MEDLINE | ID: mdl-36099806

ABSTRACT

For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets; but in recent years, the availability of cutting-edge molecular tools and other methodologies in the mouse model have led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition, behavioral testing, recording of auditory brainstem responses and frequency-following responses, intracranial neural signals including local field potential and single unit activity, and the expression of transgenes allowing for optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich datasets at the behavioral and electrophysiological levels that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in model organisms that are best suited for answering them.


Subject(s)
Auditory Cortex , Acoustic Stimulation , Animals , Auditory Cortex/physiology , Chinchilla , Ferrets , Gerbillinae , Guinea Pigs , Hearing , Animal Models , Neurons/physiology , Rabbits , Animal Vocalization/physiology
8.
Hear Res ; 426: 108586, 2022 12.
Article in English | MEDLINE | ID: mdl-35953357

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) have substantial perceptual deficits, especially in noisy environments. Unfortunately, speech-intelligibility models have limited success in predicting the performance of listeners with hearing loss. A better understanding of the various suprathreshold factors that contribute to neural-coding degradations of speech in noisy conditions will facilitate better modeling and clinical outcomes. Here, we highlight the importance of one physiological factor that has received minimal attention to date, termed distorted tonotopy, which refers to a disruption in the mapping between acoustic frequency and cochlear place that is a hallmark of normal hearing. More so than commonly assumed factors (e.g., threshold elevation, reduced frequency selectivity, diminished temporal coding), distorted tonotopy severely degrades the neural representations of speech (particularly in noise) in single- and across-fiber responses in the auditory nerve following noise-induced hearing loss. Key results include: 1) effects of distorted tonotopy depend on stimulus spectral bandwidth and timbre, 2) distorted tonotopy increases across-fiber correlation and thus reduces information capacity to the brain, and 3) its effects vary across etiologies, which may contribute to individual differences. These results motivate the development and testing of noninvasive measures that can assess the severity of distorted tonotopy in human listeners. The development of such noninvasive measures of distorted tonotopy would advance precision-audiological approaches to improving diagnostics and rehabilitation for listeners with SNHL.


Subject(s)
Noise-Induced Hearing Loss , Sensorineural Hearing Loss , Speech Perception , Humans , Noise-Induced Hearing Loss/diagnosis , Speech Intelligibility , Speech Perception/physiology , Sensorineural Hearing Loss/diagnosis , Noise/adverse effects , Auditory Threshold/physiology
9.
J Neurosci ; 42(8): 1477-1490, 2022 02 23.
Article in English | MEDLINE | ID: mdl-34983817

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a connected speech sentence in noise from anesthetized male chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrated that temporal precision was not degraded following acoustic trauma, and furthermore that sharpness of cochlear frequency tuning was not the major factor affecting impaired peripheral coding of connected speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of NH, contributed the most to both consonant-coding and vowel-coding degradations. Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL.

SIGNIFICANCE STATEMENT: Difficulty understanding speech in noise is the primary complaint in audiology clinics and can leave people with sensorineural hearing loss (SNHL) suffering from communication difficulties that affect their professional, social, and family lives, as well as their mental health. We measured single-neuron responses from a preclinical SNHL animal model to characterize salient neural-coding deficits for naturally spoken speech in noise. We found the major mechanism affecting neural coding was not a commonly assumed factor, but rather a disruption of tonotopicity, the systematic mapping of acoustic frequency to cochlear place that is a hallmark of normal hearing.
Because the degree of distorted tonotopy varies across hearing-loss etiologies, these results have important implications for precision audiology approaches to diagnosis and treatment of SNHL.


Subject(s)
Noise-Induced Hearing Loss , Sensorineural Hearing Loss , Speech Perception , Acoustic Stimulation/methods , Animals , Auditory Threshold/physiology , Sensorineural Hearing Loss/etiology , Humans , Male , Noise , Speech , Speech Perception/physiology
10.
PLoS Comput Biol ; 17(2): e1008155, 2021 02.
Article in English | MEDLINE | ID: mdl-33617548

ABSTRACT

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which results because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. 
Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity compared to previous metrics, e.g., the correlogram peak-height, (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., that depend on modulation filter banks), and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
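The sum/difference idea behind the polarity-alternating framework can be sketched in a few lines: PSTHs to opposite-polarity stimuli are averaged to emphasize the polarity-invariant envelope component and subtracted to emphasize the polarity-sensitive temporal-fine-structure (TFS) component. This is a minimal illustration of that decomposition; the published framework builds its spectrally specific analyses on top of these components.

```python
import numpy as np

def env_tfs_from_psths(psth_pos, psth_neg):
    """Split responses to opposite-polarity stimuli into envelope and TFS parts.

    Sum emphasizes envelope coding (unchanged by polarity inversion); the
    difference emphasizes TFS coding (which inverts with stimulus polarity).
    Minimal sketch; actual estimators may add scaling and windowing.
    """
    psth_pos = np.asarray(psth_pos, dtype=float)
    psth_neg = np.asarray(psth_neg, dtype=float)
    env = 0.5 * (psth_pos + psth_neg)
    tfs = 0.5 * (psth_pos - psth_neg)
    return env, tfs

# Synthetic check: a purely polarity-sensitive response component should land
# entirely in the TFS output, leaving a flat envelope.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
psth_p = 1.0 + np.sin(2 * np.pi * 5 * t)
psth_n = 1.0 - np.sin(2 * np.pi * 5 * t)
env, tfs = env_tfs_from_psths(psth_p, psth_n)
```

Because the same decomposition applies to continuous evoked responses recorded with alternating polarity, spike-train and far-field data can then be analyzed with identical spectral tools.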


Subject(s)
Auditory Perception/physiology , Auditory Evoked Potentials/physiology , Neurological Models , Acoustic Stimulation , Animals , Chinchilla/physiology , Cochlear Nerve/physiology , Computational Biology , Animal Disease Models , Sensorineural Hearing Loss/physiopathology , Sensorineural Hearing Loss/psychology , Humans , Animal Models , Nonlinear Dynamics , Psychoacoustics , Sound , Spatio-Temporal Analysis , Speech Intelligibility/physiology , Speech Perception/physiology , Translational Biomedical Research
11.
J Assoc Res Otolaryngol ; 22(1): 51-66, 2021 02.
Article in English | MEDLINE | ID: mdl-33188506

ABSTRACT

Animal models of noise-induced hearing loss (NIHL) show a dramatic mismatch between cochlear characteristic frequency (CF, based on place of innervation) and the dominant response frequency in single auditory-nerve-fiber responses to broadband sounds (i.e., distorted tonotopy, DT). This noise trauma effect is associated with decreased frequency-tuning-curve (FTC) tip-to-tail ratio, which results from decreased tip sensitivity and enhanced tail sensitivity. Notably, DT is more severe for noise trauma than for metabolic (e.g., age-related) losses of comparable degree, suggesting that individual differences in DT may contribute to speech intelligibility differences in patients with similar audiograms. Although DT has implications for many neural-coding theories for real-world sounds, it has primarily been explored in single-neuron studies that are not viable with humans. Thus, there are no noninvasive measures to detect DT. Here, frequency following responses (FFRs) to a conversational speech sentence were recorded in anesthetized male chinchillas with either normal hearing or NIHL. Tonotopic sources of FFR envelope and temporal fine structure (TFS) were evaluated in normal-hearing chinchillas. Results suggest that FFR envelope primarily reflects activity from high-frequency neurons, whereas FFR-TFS receives broad tonotopic contributions. Representation of low- and high-frequency speech power in FFRs was also assessed. FFRs in hearing-impaired animals were dominated by low-frequency stimulus power, consistent with oversensitivity of high-frequency neurons to low-frequency power. These results suggest that DT can be diagnosed noninvasively. A normalized DT metric computed from speech FFRs provides a potential diagnostic tool to test for DT in humans. A sensitive noninvasive DT metric could be used to evaluate perceptual consequences of DT and to optimize hearing-aid amplification strategies to improve tonotopic coding for hearing-impaired listeners.
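The kind of low- versus high-frequency power comparison such a normalized DT metric could involve can be sketched as follows. The band edges, cutoff, and normalization below are assumptions for illustration, not the published metric, which is computed from speech-evoked FFRs relative to the stimulus spectrum.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x within [f_lo, f_hi) Hz, via the periodogram."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spec[band].sum()

def dt_metric(ffr, fs, cutoff=1000.0):
    """Hypothetical distorted-tonotopy index: fraction of FFR power carried by
    low-frequency content. A response dominated by low-frequency stimulus
    power (as in NIHL animals) drives this toward 1."""
    lo = band_power(ffr, fs, 70.0, cutoff)
    hi = band_power(ffr, fs, cutoff, fs / 2)
    return lo / (lo + hi)

fs = 10_000
t = np.arange(0, 1, 1 / fs)
ffr_low = np.sin(2 * np.pi * 200 * t)    # low-frequency-dominated (impaired-like)
ffr_high = np.sin(2 * np.pi * 2000 * t)  # high-frequency-dominated
```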


Subject(s)
Acoustic Stimulation/adverse effects , Cochlear Nerve , Noise-Induced Hearing Loss , Speech Perception , Animals , Chinchilla , Cochlear Nerve/injuries , Humans , Male , Nerve Conduction , Noise , Speech