Results 1 - 20 of 28
1.
Hear Res ; 441: 108924, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061267

ABSTRACT

The head-related transfer function (HRTF) describes the direction-dependent acoustic filtering by the head that occurs between a source signal in free-field space and the signal at the tympanic membrane. HRTFs contain information on sound source location via interaural differences of their magnitude or phase spectra and via the shapes of their magnitude spectra. The present study characterized HRTFs for source locations in the front horizontal plane for nine rabbits, a species commonly used in studies of the central auditory system. HRTF magnitude spectra shared several features across individuals, including a broad spectral peak at 2.6 kHz that increased gain by 12 to 23 dB depending on source azimuth, and a notch at 7.6 kHz and peak at 9.8 kHz visible for most azimuths. Overall, frequencies above 4 kHz were amplified for sources ipsilateral to the ear and progressively attenuated for frontal and contralateral azimuths. The slope of the magnitude spectrum between 3 and 5 kHz was found to be an unambiguous monaural cue for source azimuths ipsilateral to the ear. Average interaural level difference (ILD) between 5 and 16 kHz varied monotonically with azimuth over ±31 dB despite a relatively small head size. Interaural time differences (ITDs) at 0.5 kHz and 1.5 kHz also varied monotonically with azimuth, over ±358 µs and ±260 µs, respectively. Remeasurement of HRTFs after pinna removal revealed that the large pinnae of rabbits were responsible for all spectral peaks and notches in the magnitude spectra and were the main contribution to high-frequency ILDs (5-16 kHz), whereas the rest of the head was the main contribution to ITDs and low-frequency ILDs (0.2-1.5 kHz). Lastly, inter-individual differences in magnitude spectra were found to be small enough that deviations of individual HRTFs from an average HRTF were comparable in size to measurement error. Therefore, the average HRTF may be acceptable for use in neural or behavioral studies of rabbits implementing virtual acoustic space when measurement of individualized HRTFs is not possible.


Subject(s)
Ear Auricle, Sound Localization, Animals, Rabbits, Acoustic Stimulation, External Ear, Sound
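The binaural cues characterized in this abstract, ILD and ITD, can be computed directly from a pair of head-related impulse responses (HRIRs). A minimal NumPy sketch using synthetic stand-ins for measured data (the sample rate, filter shape, delay and attenuation below are illustrative assumptions, not values from the study):

```python
import numpy as np

fs = 48000  # sample rate in Hz (an assumption; measured HRIRs define their own)

# Synthetic stand-ins for measured left/right HRIRs: the right-ear response is
# the left one attenuated by half and delayed by 12 samples.
t = np.arange(256) / fs
hrir_left = np.exp(-3000 * t) * np.sin(2 * np.pi * 2600 * t)
hrir_right = 0.5 * np.roll(hrir_left, 12)

# ILD: broadband level ratio between the two ears, in dB.
ild_db = 20 * np.log10(np.linalg.norm(hrir_left) / np.linalg.norm(hrir_right))

# ITD: lag at the peak of the interaural cross-correlation.
xcorr = np.correlate(hrir_left, hrir_right, mode="full")
lag_samples = int(np.argmax(np.abs(xcorr))) - (len(hrir_right) - 1)
itd_us = 1e6 * lag_samples / fs  # negative lag here means the left ear leads
```

With these toy inputs the sketch recovers a 6 dB ILD and a 250 µs lead at the left ear, i.e. the delay and attenuation built into the synthetic HRIRs.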
2.
J Audiol Otol ; 27(4): 219-226, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37872756

ABSTRACT

BACKGROUND AND OBJECTIVES: Traditional sound field localization setups in a free-field environment closely represent real-world situations. However, they are costly and sophisticated, and it is difficult to replicate similar setups in every clinic. Hence, a cost-effective, portable, and less sophisticated virtual setup would be more feasible for assessing spatial acuity in the clinical setting. The virtual auditory space identification (VASI) test was developed to assess spatial acuity using virtual sources in a closed field. The present study compares the agreement between these two methods. SUBJECTS AND METHODS: Fifty-five individuals with normal hearing (mean age ± SD: 21 ± 3.26 years) underwent spatial acuity assessment using two paradigms: 1) the sound field paradigm (localization test) and 2) the virtual paradigm (VASI test). Location-specific and overall accuracy scores and error rates were calculated using confusion matrices for each participant in both paradigms. RESULTS: Wilcoxon signed-rank tests showed that the location-specific and overall accuracy scores for the two paradigms were not significantly different. Furthermore, the paradigms did not yield significantly different localization error rates, such as right and left intra-hemifield errors, inter-hemifield errors, and front-back errors. Spearman's correlation analysis showed mild to moderate correlations between all the measures of the two paradigms. CONCLUSIONS: These results demonstrate that the VASI test and the sound field localization test performed equally well in assessing spatial acuity.

3.
Sensors (Basel) ; 23(13)2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37447865

ABSTRACT

Head-related transfer functions (HRTFs) describe the acoustic path transfer functions between sound sources in the free field and the listener's ear canal. They make it possible to evaluate human sound perception and to create immersive virtual acoustic environments that can be reproduced over headphones or loudspeakers. HRTFs are strongly individual and can be measured with in-ear microphones worn by real subjects; standardized HRTFs can also be measured using artificial head simulators, which standardize the body dimensions. In this paper, a comparative analysis of HRTF measurement using in-ear microphones is presented. The results obtained with in-ear microphones are compared with the HRTFs measured with a standard head and torso simulator, investigating different positions of the microphones and of the sound source and employing two different types of microphones. Finally, the HRTFs of five real subjects are measured and compared with those measured by microphones in the ears of a standard mannequin.


Subject(s)
Sound Localization, Humans, Sound, Hearing, Acoustics, External Auditory Canal, Head, Auditory Perception
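The measurement principle behind this kind of study, recording a known excitation at the ear and dividing out a reference spectrum, can be sketched as frequency-domain deconvolution. The excitation signal and the "true" impulse response below are toy assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
excitation = rng.standard_normal(4096)   # stand-in for a sweep or noise burst

# A toy "true" head-related impulse response to be recovered.
hrir_true = np.zeros(256)
hrir_true[5], hrir_true[40] = 0.8, -0.3

# Simulated ear-canal microphone recording: excitation filtered by the HRIR.
recording = np.convolve(excitation, hrir_true)

# Deconvolve: divide the ear spectrum by the reference (excitation) spectrum,
# using an FFT long enough that circular convolution equals linear convolution.
n = len(recording)
hrtf = np.fft.rfft(recording) / np.fft.rfft(excitation, n)
hrir_est = np.fft.irfft(hrtf, n)[:256]   # estimated impulse response
```

Because the FFT length covers the full linear convolution, the division recovers the toy impulse response essentially exactly; with real microphone noise, regularized deconvolution would be used instead.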
4.
J Neurosci Methods ; 379: 109661, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35817307

ABSTRACT

BACKGROUND: Brain-computer interfaces (BCIs) are a promising tool for communication with patients in the completely locked-in state (CLIS). Despite the great efforts already made by the BCI research community, the cases of success are still very few, very exploratory, limited in time, and based on simple 'yes/no' paradigms. NEW METHOD: A P300-based BCI is proposed comparing two conditions, one corresponding to purely spatial auditory stimuli (AU-S) and the other to hybrid visual and spatial auditory stimuli (HVA-S). In the HVA-S condition, there is a semantic, temporal, and spatial congruence between visual and auditory stimuli. The stimuli comprise a lexicon of 7 written and spoken words. Spatial sounds are generated through the head-related transfer function. Given the good results obtained with 10 able-bodied participants, we investigated whether a patient entering CLIS could use the proposed BCI. RESULTS: The able-bodied group achieved 71.3% and 90.5% online classification accuracy for the auditory and hybrid BCIs respectively, while the patient achieved 30% and chance-level accuracy for the same conditions. Nevertheless, the patient's event-related potentials (ERPs) showed statistical discrimination between target and non-target events in different time windows. COMPARISON WITH EXISTING METHODS: The results of the control group compare favorably with the state of the art, considering a 7-class BCI controlled visual-covertly and with auditory stimuli. The integration of visual and auditory stimuli has not been tested before with CLIS patients. CONCLUSIONS: The semantic, temporal, and spatial congruence of the stimuli increased the performance of the control group, but not of the CLIS patient, which may be due to impaired attention and cognitive function. The patient's unique ERP patterns make interpretation difficult, requiring further tests/paradigms to decouple patients' responses at different levels (reflexive, perceptual, cognitive). The ERP discrimination found indicates that a simplification of the proposed approaches may be feasible.


Subject(s)
Amyotrophic Lateral Sclerosis, Brain-Computer Interfaces, Electroencephalography/methods, Evoked Potentials/physiology, Humans, Semantics
5.
Int J Psychophysiol ; 165: 92-100, 2021 07.
Article in English | MEDLINE | ID: mdl-33901512

ABSTRACT

Mismatch negativity (MMN) is an intensively studied event-related potential component that reflects pre-attentive auditory processing. Existing spatial MMN (sMMN) studies usually use loudspeakers in different locations, or deliver sound with binaural localization cues through earphones, to elicit MMN; the former is practically complicated, while the latter sounds unnatural to the subjects. In the present study, we generated head-related transfer function (HRTF)-based spatial sounds and verified that they retained the left and right spatial localization cues. We then used them as deviants to elicit sMMN with a conventional oddball paradigm. Results showed that sMMN was successfully elicited by the HRTF-based deviants in 18 of 21 healthy subjects in two separate sessions. Furthermore, the left deviants elicited higher sMMN amplitudes in the right hemisphere than in the left hemisphere, while the right deviants elicited sMMN with similar amplitudes in both hemispheres, which supports a combination of contralateral and right-hemispheric dominance in spatial auditory information processing. In addition, the sMMN in response to the right deviants showed good test-retest reliability, whereas the sMMN in response to the left deviants had weak test-retest reliability. These findings suggest that HRTF-based sMMN could be a robust paradigm for investigating spatial localization and discrimination abilities.


Subject(s)
Electroencephalography, Auditory Evoked Potentials, Acoustic Stimulation, Auditory Perception, Humans, Reproducibility of Results
6.
Eur Ann Otorhinolaryngol Head Neck Dis ; 138(5): 333-336, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33390347

ABSTRACT

OBJECTIVES: The main objective of this study was to test the feasibility of measuring the minimum audible angle over headphones with different reference positions in the horizontal plane, comparing different types of pre-recorded head-related transfer functions. The secondary objective was to assess spatial discrimination performance in simulated unilateral hearing loss by measuring the minimum audible angle under monaural conditions using headphones. MATERIALS AND METHODS: Minimum audible angle was assessed in 27 normal-hearing subjects, to test their spatial discrimination abilities, using 4 datasets of pre-recorded head-related transfer functions: 2 recorded on mannequins (KU100, KEMAR) and 2 individualized datasets (TBM, PBM). Performance was evaluated at 3 reference positions (0°, 50° and 180°) in 1 binaural and 2 monaural conditions. RESULTS: KU100 generated minimum audible angle values smaller than KEMAR in the frontal and lateral positions (P<0.005), with a suggestive difference (P<0.05) compared to TBM and PBM in the frontal and lateral planes. Comparison between binaural and monaural conditions showed significant differences in the frontal position for MON-c (contralateral) and MON-i (ipsilateral) (P<0.001), in the lateral position for MON-c only (P<0.001), and in the posterior position for MON-c and MON-i (P<0.001). CONCLUSION: This study suggests that evaluation of spatial discrimination capacity using the minimum audible angle with the KU100 head-related transfer function dataset is reliable and robust.


Subject(s)
Unilateral Hearing Loss, Sound Localization, Hearing Tests, Humans
7.
Article in English | MEDLINE | ID: mdl-35010583

ABSTRACT

Head-related transfer functions (HRTFs) play a significant role in modern acoustic experiment designs for the auralization of 3-dimensional virtual acoustic environments. This technique enables us to create close-to-real-life situations, including room-acoustic effects, background noise and multiple sources, in a controlled laboratory environment. While adult HRTF databases are widely available to the research community, datasets of children are not. To fill this gap, children aged 5-10 years were recruited among 1st and 2nd year primary school children in Aachen, Germany. Their HRTFs were measured in a hemi-anechoic chamber with a 5-degree × 5-degree resolution. Special care was taken to reduce artifacts from motion during the measurements by means of fast measurement routines. To complement the HRTF measurements with the anthropometric data needed for individualization methods, a high-resolution 3D scan of the head and upper torso of each participant was recorded. The HRTF measurement took around 3 min. The children's head movement during that time was larger than that of adult participants in comparable experiments, but was generally kept within 5 degrees of rotary and 1 cm of translatory motion; adult participants only exhibit this range of motion in longer-duration measurements. A comparison of the HRTF measurements with the KEMAR artificial head shows that it is not representative of an average child HRTF. Differences can be seen both in the spectrum and in the interaural time delay (ITD), with ITD differences of 70 µs on average and a maximum difference of 138 µs. For both spectrum and ITD, the KEMAR more closely resembles the 95th percentile of the range of the children's data. This warrants a closer look at using child-specific HRTFs in the binaural presentation of virtual acoustic environments in the future.


Subject(s)
Acoustics, Noise, Acoustic Stimulation, Adult, Child, Preschool Child, Germany, Head, Humans
8.
JMIR Serious Games ; 8(3): e17576, 2020 Sep 08.
Article in English | MEDLINE | ID: mdl-32897232

ABSTRACT

BACKGROUND: To present virtual sound sources spatially via headphones, head-related transfer functions (HRTFs) can be applied to audio signals. In this so-called binaural virtual acoustics, spatial perception may be degraded if the HRTFs deviate from the true HRTFs of the listener. OBJECTIVE: In this study, participants wearing virtual reality (VR) headsets performed a listening test on the 3D audio perception of virtual audiovisual scenes, enabling us to investigate the necessity and influence of HRTF individualization. Two hypotheses were investigated: first, that general HRTFs lead to limitations of 3D audio perception in VR, and second, that the localization model for stationary localization errors is transferable to nonindividualized HRTFs in more complex environments such as VR. METHODS: For the evaluation, 39 subjects rated individualized and nonindividualized HRTFs in an audiovisual virtual scene on the basis of 5 perceptual qualities: localizability, front-back position, externalization, tone color, and realism. The VR listening experiment consisted of 2 tests: in the first, subjects evaluated their own HRTF and the general HRTF from the Massachusetts Institute of Technology Knowles Electronics Manikin for Acoustic Research database, and in the second, their own and 2 other nonindividualized HRTFs from the Acoustics Research Institute HRTF database. For the experiment, 2 subject-specific nonindividualized HRTFs with minimal and maximal localization error deviation were selected according to the localization model in sagittal planes. RESULTS: With the Wilcoxon signed-rank test for the first test, analysis of variance for the second test, and a sample size of 78, the results were significant for all perceptual qualities except the front-back position between the subject's own HRTF and the minimally deviant nonindividualized HRTF (P=.06). CONCLUSIONS: Both hypotheses were accepted. Sounds filtered by individualized HRTFs are considered easier to localize, easier to externalize, more natural in timbre, and thus more realistic than sounds filtered by nonindividualized HRTFs.
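Binaural rendering of the kind evaluated in this study amounts to convolving a mono signal with the left- and right-ear head-related impulse responses for the desired direction. A sketch with placeholder impulse responses (real experiments would use measured datasets such as the MIT KEMAR or ARI databases named above; the delay and gain below are arbitrary assumptions):

```python
import numpy as np

fs = 48000                                   # assumed sample rate in Hz
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)       # 100 ms mono test signal

# Placeholder HRIRs: a unit impulse on the left, a delayed and attenuated one
# on the right, mimicking a source located toward the listener's left.
hrir_left = np.zeros(128); hrir_left[0] = 1.0
hrir_right = np.zeros(128); hrir_right[10] = 0.6

# Filter the mono source through each ear's impulse response.
left = np.convolve(source, hrir_left)
right = np.convolve(source, hrir_right)
binaural = np.stack([left, right], axis=1)   # (n_samples, 2), for headphones
```

With measured HRIRs the only change is loading the two impulse responses from the dataset; the convolution step is identical.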

9.
Article in English | MEDLINE | ID: mdl-32140774

ABSTRACT

Interaural time and level differences are important cues for sound localization. We wondered whether the broadband information contained in these two cues could fully explain the behavior of barn owls and responses of midbrain neurons in these birds. To tackle this problem, we developed a novel approach based on head-related transfer functions. These filters contain the complete information present at the eardrum. We selected positions in space characterized by equal broadband interaural time and level differences. Stimulation from such positions provides reduced information to the owl. We show that barn owls are able to discriminate between such positions. In many cases, but not all, the owls may have used spectral components of interaural level differences that exceeded the known behavioral resolution and variability for discrimination. Alternatively, the birds may have used template matching. Likewise, neurons in the optic tectum of the barn owl, a nucleus involved in sensorimotor integration, contained more information than is available in the broadband interaural time and level differences. Thus, these data show that more information is available and used by barn owls for sound localization than carried by broadband interaural time and level differences.


Subject(s)
Auditory Pathways/physiology, Head/physiology, Neurons/physiology, Sound Localization, Strigiformes/physiology, Visual Pathways/physiology, Acoustic Stimulation, Animals, Cues, Female, Male
10.
Front Neurosci ; 13: 1164, 2019.
Article in English | MEDLINE | ID: mdl-31802997

ABSTRACT

Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects' sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.

11.
Multisens Res ; 32(8): 745-770, 2019 01 01.
Article in English | MEDLINE | ID: mdl-31648191

ABSTRACT

The ventriloquist illusion, a change in the perceived location of an auditory stimulus when a synchronously presented but spatially discordant visual stimulus is added, has previously been shown in young healthy populations to be a robust paradigm that relies mainly on automatic processes. Here, we propose the ventriloquist illusion as a potential simple test to assess audiovisual (AV) integration in young and older individuals. We used a modified version of the illusion paradigm that was adaptive, nearly bias-free, relied on binaural stimulus representation using generic head-related transfer functions (HRTFs) instead of multiple loudspeakers, and was tested with synchronous and asynchronous presentation of AV stimuli (both tone and speech). The minimum audible angle (MAA), the smallest perceptible difference in angle between two sound sources, was compared with and without the visual stimuli in young and older adults with no or minimal sensory deficits. The illusion effect, measured by means of MAAs implemented with HRTFs, was observed with both synchronous and asynchronous visual stimuli, but only with the tone stimulus and not the speech stimulus. The patterns were similar between young and older individuals, indicating the versatility of the modified ventriloquist illusion paradigm.


Subject(s)
Aging/physiology, Auditory Perception/physiology, Illusions/physiology, Sound Localization/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Aged, Cues, Female, Humans, Male, Middle Aged, Photic Stimulation, Young Adult
12.
R Soc Open Sci ; 6(7): 190423, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31417740

ABSTRACT

As top predators, crocodilians have an acute sense of hearing that is useful for their social life and for probing their environment in hunting situations. Although previous studies suggest that crocodilians are able to localize the position of a sound source, how they do so remains largely unknown. In this study, we measured the potential monaural sound localization cues (head-related transfer functions; HRTFs) on live animals and skulls in two situations, both mimicking natural positions: basking on land and cruising at the interface between air and water. Binaural cues were also estimated by measuring the interaural level differences (ILDs) and the interaural time differences (ITDs). In both conditions, HRTF measurements show large spectral variations (greater than 10 dB) for high frequencies, depending on the azimuthal angle. These localization cues are influenced by head size and by the internal coupling of the ears. ITDs give reliable information regarding sound-source position for low frequencies, while ILDs are more suitable for frequencies higher than 1.5 kHz. Our results support the hypothesis that crocodilian head morphology is adapted to acquire reliable localization cues from sound sources both when outside the water and when only a small part of the head is above the air-water interface.

13.
Hear Res ; 365: 28-35, 2018 08.
Article in English | MEDLINE | ID: mdl-29909353

ABSTRACT

The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains.


Subject(s)
Acoustic Stimulation/methods, Cues, Loudness Perception, Sound Localization, Sound, Adult, Auditory Threshold, Female, Humans, Male, Motion, Perceptual Masking, Pressure, Time Factors, Young Adult
14.
Eur Ann Otorhinolaryngol Head Neck Dis ; 135(4): 259-264, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29731298

ABSTRACT

Sound source localization, i.e., determining the position of a sound source in 3 dimensions (azimuth, height and distance), is paramount for comfort of life. It is based on 3 types of cue: 2 binaural (interaural time difference and interaural level difference) and 1 monaural spectral cue (the head-related transfer function). These cues are complementary and vary according to the acoustic characteristics of the incident sound. The objective of this report is to update the current state of knowledge on the physical basis of spatial sound localization.


Subject(s)
Sound Localization/physiology, Humans, Physical Phenomena
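The interaural time difference cue covered by this review is often approximated analytically with Woodworth's spherical-head formula, ITD(θ) = (r/c)(θ + sin θ). A short sketch; the head radius and speed of sound are typical assumed values, not figures from the report:

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head ITD approximation, in seconds."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + np.sin(theta))

# The ITD is zero at the midline and grows monotonically toward the side;
# for an average adult head it reaches roughly 650-700 us at 90 degrees.
itd_90_us = itd_woodworth(90.0) * 1e6
```

The formula captures only the low-frequency time cue; the level-difference and spectral cues discussed in the report require measured or simulated HRTFs.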
15.
Front Neurosci ; 12: 21, 2018.
Article in English | MEDLINE | ID: mdl-29456486

ABSTRACT

Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, and spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic head-related transfer functions (HRTFs) are usually employed. The former require arduous calibration but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR; the latter obviate the need for individualized calibration but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone, or to asynchronous audiovisual stimuli, did not improve auditory source localization. These findings have important implications for human perception as well as for the development of VR systems, as they indicate that generic HRTFs may be sufficient to enable good auditory source localization in VR.

16.
Hear Res ; 356: 35-50, 2017 12.
Article in English | MEDLINE | ID: mdl-29128159

ABSTRACT

The morphology of the head and pinna shapes the spatial and frequency dependence of sound propagation that gives rise to the acoustic cues to sound source location. During early development, the physical dimensions of the head and pinna increase rapidly. Thus, the binaural (interaural time and level differences, ITD and ILD) and monaural (spectral shape) cues are also hypothesized to change rapidly. Complex interactions between the size and shape of the head and pinna limit the accuracy of simple acoustical models (e.g. spherical) and necessitate empirical measurements. Here, we measured the cues to location in the developing guinea pig, a precocial species commonly used for studies of the auditory system. We measured directional transfer functions (DTFs) and the dimensions of the head and pinna in guinea pigs from birth (P0) through adulthood. Dimensions of the head and pinna increased by 87% and 48%, respectively, reaching adult values by ∼8 weeks (P56). The monaural acoustic gain produced by the head and pinna increased with frequency and age, with maximum gains at higher frequencies (>8 kHz) reaching 10-21 dB for all ages. The center frequency of monaural spectral notches also decreased with age, from higher frequencies (∼17 kHz) at P0 to lower frequencies (∼12 kHz) in adults. In all animals, ILDs and ITDs depended on both frequency and spatial location. Over development, the maximum ILD magnitude increased from ∼15 dB at P0 to ∼30 dB in adults (at frequencies >8 kHz), while the maximum low-frequency ITD increased from ∼185 µs at P0 to ∼300 µs in adults. These results demonstrate that changes in the acoustical cues are directly related to changes in head and pinna morphology.


Subject(s)
Cues, Ear Auricle/growth & development, Head/growth & development, Hearing, Sound Localization, Acoustic Stimulation, Acoustics, Age Factors, Animals, Cephalometry, Female, Guinea Pigs, Male, Motion, Sound, Sound Spectrography, Time Factors
17.
eNeuro ; 4(6)2017.
Article in English | MEDLINE | ID: mdl-29379866

ABSTRACT

A function of the auditory system is to accurately determine the location of a sound source. The main cues for sound location are interaural time (ITD) and level (ILD) differences. Humans use both ITD and ILD to determine azimuth. Until now, the conception of sound localization in barn owls was that their facial ruff and asymmetrical ears generate a two-dimensional grid of ITD for azimuth and ILD for elevation. We show that barn owls also use ILD for azimuthal sound localization when ITDs are ambiguous. For high-frequency narrowband sounds, midbrain neurons can signal multiple locations, leading to the perception of an auditory illusion called a phantom source. Owls respond to such an illusory percept by orienting toward it instead of the true source. Acoustical measurements close to the eardrum reveal a small ILD component that changes with azimuth, suggesting that ITD and ILD information could be combined to eliminate the illusion. Our behavioral data confirm that perception was robust against ambiguities when ITD and ILD information was combined. Electrophysiological recordings of ILD sensitivity in the owl's midbrain support the behavioral findings, indicating that rival brain hemispheres drive the decision to orient to either true or phantom sources. Thus, the basis for disambiguation and reliable detection of sound source azimuth relies on similar cues across species, as similar responses to combinations of ILD and narrowband ITD have been observed in humans.


Subject(s)
Sound Localization/physiology, Strigiformes/physiology, Acoustic Stimulation, Animals, Female, Functional Laterality, Illusions/physiology, Mesencephalon/physiology, Neurons/physiology, Time Factors
18.
Trends Hear ; 20, 2016 Sep 22.
Article in English | MEDLINE | ID: mdl-27659486

ABSTRACT

Listeners use monaural spectral cues to localize sound sources in sagittal planes (along the up-down and front-back directions). How sensorineural hearing loss affects the salience of monaural spectral cues is unclear. To simulate the effects of outer-hair-cell (OHC) dysfunction and the contribution of different auditory-nerve fiber types on localization performance, we incorporated a nonlinear model of the auditory periphery into a model of sagittal-plane sound localization for normal-hearing listeners. The localization model was first evaluated in its ability to predict the effects of spectral cue modifications for normal-hearing listeners. Then, we used it to simulate various degrees of OHC dysfunction applied to different types of auditory-nerve fibers. Predicted localization performance was hardly affected by mild OHC dysfunction but was strongly degraded in conditions involving severe and complete OHC dysfunction. These predictions resemble the usually observed degradation in localization performance induced by sensorineural hearing loss. Predicted localization performance was best when preserving fibers with medium spontaneous rates, which is particularly important in view of noise-induced hearing loss associated with degeneration of this fiber type. On average across listeners, predicted localization performance was strongly related to level discrimination sensitivity of auditory-nerve fibers, indicating an essential role of this coding property for localization accuracy in sagittal planes.

19.
Front Neurosci ; 10: 363, 2016.
Article in English | MEDLINE | ID: mdl-27512366

ABSTRACT

[This corrects the article on p. 451 in vol. 8, PMID: 25688182.].

20.
Adv Exp Med Biol ; 875: 583-7, 2016.
Article in English | MEDLINE | ID: mdl-26611007

ABSTRACT

The head-related transfer function (HRTF) is an important descriptor of spatial sound field reception by the listener. In this study, we computed the HRTF of the common dolphin, Delphinus delphis. The received sound pressure level at various locations within the acoustic fats of the internal pinna, near the surface of the tympanoperiotic complex (TPC), was calculated for planar incident waves directed toward the animal. The relative amplitude of the received pressure versus the incident pressure represented the HRTF from the animal's point of view. It is of interest that (1) different locations on the surface of the TPC resulted in different HRTFs, (2) the HRTFs for the left and right ears were slightly asymmetric, and (3) the locations of the peaks of the HRTF depended on the frequency of the incident wave.


Subject(s)
Cetacea/physiology, Hearing/physiology, Acoustics, Animals, Head, Pressure, Sound