Results 1 - 20 of 1,053
1.
Music Sci ; 28(3): 478-501, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39219861

ABSTRACT

In this preliminary study, we explored the relationship between auditory imagery ability and the maintenance of tonal and temporal accuracy when singing and audiating with altered auditory feedback (AAF). Actively performing participants sang and audiated (sang mentally but not aloud) a self-selected piece in AAF conditions, including upward pitch-shifts and delayed auditory feedback (DAF), and with speech distraction. Participants with higher self-reported scores on the Bucknell Auditory Imagery Scale (BAIS) produced a tonal reference that was less disrupted by pitch shifts and speech distraction than did participants with lower scores. However, there was no observed effect of BAIS score on temporal deviation when singing with DAF. Auditory imagery ability was not related to the experience of having studied music theory formally, but was significantly related to the experience of performing. The significant effect of auditory imagery ability on tonal reference deviation remained even after partialling out the effect of experience of performing. The results indicate that auditory imagery ability plays a key role in maintaining an internal tonal center during singing but has at most a weak effect on temporal consistency. In this article, we outline future directions in understanding the multifaceted role of auditory imagery ability in singers' accuracy and expression.

2.
Article in English | MEDLINE | ID: mdl-39261125

ABSTRACT

INTRODUCTION: Hearing is essential for language acquisition and understanding the environment. Understanding how children react to auditory and visual information is essential for appropriate management in case of hearing loss. Objective and subjective assessments can diagnose hearing loss, but do not measure natural perception in children. We developed a "sensory room" for complementary assessment of children's perceptions, so as to assess behavioral responses to meaningful natural sounds and visual stimuli in an ecological environment suited to children. MATERIAL AND METHODS: Sixteen normal-hearing children and 10 with congenital hearing loss before cochlear implantation, aged 13 to 32 months, were included in this feasibility study. They perceived 18 environmental sounds and 9 visual stimuli, and their behavioral responses were coded accordingly as: stopping, looking, moving, pointing, language or emotional reactions. RESULTS: All children completed the task, demonstrating its feasibility in children. Percentage responses to auditory versus visual stimuli did not differ in normal-hearing children; those with congenital hearing loss responded like normal-hearing children to visual stimuli, but did not react to auditory stimuli. Progression in normal-hearing children's behavioral responses corresponded to cognitive and linguistic development according to age. CONCLUSION: The "sensory room" quantified children's responses to various auditory and visual stimuli, providing clinicians with measurable insight into the children's sensory perception and processing.

3.
J Exp Biol ; 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39263850

ABSTRACT

Early-life experiences with signals used in communication are instrumental in shaping an animal's social interactions. In songbirds, which use vocalizations for guiding social interactions and mate choice, recent studies show that sensory effects on development occur earlier than previously expected, even in embryos and nestlings. Here, we explore the neural dynamics underlying experience-dependent song categorization in young birds prior to the traditionally studied sensitive period of vocal learning that begins around 3 weeks post-hatch. We raised zebra finches either with their biological parents, with Bengalese finch foster parents (cross-fostered beginning at embryonic day 9), or with only the non-singing mother from 2 days post-hatch. Then, 1-5 days after fledging, we conducted behavioral experiments and extracellular recordings in the auditory forebrain to test responses to zebra finch and Bengalese finch songs. Auditory forebrain neurons in cross-fostered and isolate birds showed increases in firing rate and decreases in responsiveness and selectivity. In cross-fostered birds, decreases in responsiveness and selectivity relative to white noise were specific to conspecific song stimuli, which paralleled behavioral attentiveness to conspecific songs in those same birds. This study shows that auditory and social experience can already impact song 'type' processing in the brains of nestlings, and that brain changes at this age can portend the effects of natal experience in adults.

4.
Cell Rep ; 43(9): 114726, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39276352

ABSTRACT

The posterior dorsal striatum (pDS) plays an essential role in sensory-guided decision-making. However, it remains unclear how the antagonistic direct- and indirect-pathway striatal projection neurons (dSPNs and iSPNs) work in concert to support action selection. Here, we employed deep-brain two-photon imaging to investigate pathway-specific single-neuron and population representations during an auditory-guided decision-making task. We found that the majority of pDS projection neurons predominantly encode choice information. Both dSPNs and iSPNs comprise divergent subpopulations of comparable sizes representing competing choices, rendering a multi-ensemble balance between the two pathways. Intriguingly, this ensemble balance displays a dynamic shift during the decision period: dSPNs show a significantly stronger preference for the contraversive choice than iSPNs. This dynamic shift is further manifested in the inter-neuronal coactivity and population trajectory divergence. Our results support a balance-shift model as a neuronal population mechanism coordinating the direct and indirect striatal pathways for eliciting selected actions during decision-making.

5.
Elife ; 12, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39268817

ABSTRACT

Perceptual systems heavily rely on prior knowledge and predictions to make sense of the environment. Predictions can originate from multiple sources of information, including contextual short-term priors, based on isolated temporal situations, and context-independent long-term priors, arising from extended exposure to statistical regularities. While the effects of short-term predictions on auditory perception have been well-documented, how long-term predictions shape early auditory processing is poorly understood. To address this, we recorded magnetoencephalography data from native speakers of two languages with different word orders (Spanish: functor-initial vs Basque: functor-final) listening to simple sequences of binary sounds alternating in duration with occasional omissions. We hypothesized that, together with contextual transition probabilities, the auditory system uses the characteristic prosodic cues (duration) associated with the native language's word order as an internal model to generate long-term predictions about incoming non-linguistic sounds. Consistent with our hypothesis, we found that the amplitude of the mismatch negativity elicited by sound omissions varied orthogonally depending on the speaker's linguistic background and was most pronounced in the left auditory cortex. Importantly, listening to binary sounds alternating in pitch instead of duration did not yield group differences, confirming that the above results were driven by the hypothesized long-term 'duration' prior. These findings show that experience with a given language can shape a fundamental aspect of human perception - the neural processing of rhythmic sounds - and provide direct evidence for a long-term predictive coding system in the auditory cortex that uses auditory schemes learned over a lifetime to process incoming sound sequences.


Subject(s)
Auditory Cortex , Auditory Perception , Language , Magnetoencephalography , Humans , Female , Male , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Acoustic Stimulation , Sound , Speech Perception/physiology
6.
Sci Rep ; 14(1): 20994, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251659

ABSTRACT

Sound recognition is effortless for humans but poses a significant challenge for artificial hearing systems. Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have recently surpassed traditional machine learning in sound classification. However, current DNNs map sounds to labels using binary categorical variables, neglecting the semantic relations between labels. Cognitive neuroscience research suggests that human listeners exploit such semantic information besides acoustic cues. Hence, our hypothesis is that incorporating semantic information improves DNNs' sound recognition performance, emulating human behaviour. In our approach, sound recognition is framed as a regression problem, with CNNs trained to map spectrograms to continuous semantic representations from NLP models (Word2Vec, BERT, and CLAP text encoder). Two DNN types were trained: semDNN with continuous embeddings and catDNN with categorical labels, both with a dataset extracted from a collection of 388,211 sounds enriched with semantic descriptions. Evaluations across four external datasets confirmed the superiority of semantic labeling from semDNN compared to catDNN, preserving higher-level relations. Importantly, an analysis of human similarity ratings for natural sounds showed that semDNN approximated human listener behaviour better than catDNN, other DNNs, and NLP models. Our work contributes to understanding the role of semantics in sound recognition, bridging the gap between artificial systems and human auditory perception.
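As a concrete illustration of the regression framing described above: a semDNN-style model outputs a point in a semantic embedding space, and a label can then be recovered by nearest-neighbor search over label embeddings. The sketch below is a toy version under stated assumptions; the 3-d vectors and label names are invented stand-ins for real Word2Vec/BERT/CLAP embeddings, not data from the study.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d semantic embeddings standing in for NLP-model vectors.
label_embeddings = {
    "dog bark": [0.9, 0.1, 0.0],
    "cat meow": [0.8, 0.3, 0.1],
    "siren":    [0.0, 0.2, 0.9],
}

def nearest_label(predicted, embeddings):
    """Map a regressed embedding back to the semantically closest label."""
    return max(embeddings, key=lambda lbl: cosine(predicted, embeddings[lbl]))

# A regression output lying near "siren" in the embedding space:
print(nearest_label([0.1, 0.1, 0.8], label_embeddings))  # siren
```

The point of the continuous output is visible here: an imperfect prediction still lands near semantically related labels, whereas a categorical argmax carries no notion of "near".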


Subject(s)
Auditory Perception , Natural Language Processing , Neural Networks, Computer , Semantics , Humans , Auditory Perception/physiology , Deep Learning , Sound
7.
J Cogn ; 7(1): 69, 2024.
Article in English | MEDLINE | ID: mdl-39280724

ABSTRACT

Music making across cultures arguably involves a blend of innovation and adherence to established norms. This integration allows listeners to recognise a range of innovative, surprising, and functional elements in music, while also associating them with a certain tradition or style. In this light, musical creativity may be seen to involve the novel recombination of shared elements and rules, which can in itself give rise to new cultural conventions. Put simply, future norms rely on past knowledge and present action; this holds for music as it does for other cultural domains. A key process permeating this temporal transition, with regard to both music making and music listening, is prediction. Recent findings suggest that as we listen to music, our brain is constantly generating predictions based on prior knowledge acquired in a given enculturation context. Those predictions, in turn, can shape our appraisal of the music, in a continual perception-action loop. This dynamic process of predicting and calibrating expectations may enable shared musical realities, that is, sets of norms that are transmitted, with some modification, either vertically between generations of a given musical culture, or horizontally between peers of the same or different cultures. As music transforms through cultural evolution, so do the predictive models in our minds and the expectancy they give rise to, influenced by cultural exposure and individual experience. Thus, creativity and prediction are both fundamental and complementary to the transmission of cultural systems, including music, across generations and societies. For these reasons, prediction, creativity and cultural evolution were the central themes in a symposium we organised in 2022. The symposium aimed to study their interplay from an interdisciplinary perspective, guided by contemporary theories and methodologies. This special issue compiles research discussed during or inspired by that symposium, concluding with potential directions for the field of music cognition in that spirit.

8.
Brain Behav ; 14(9): e370011, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39295079

ABSTRACT

OBJECTIVE: The objective of this study was to determine gender-specific normative values of the masking level difference (MLD) in healthy young adults for two different measurement conditions. METHODS: One hundred young adults between the ages of 19 and 25 were included. Tympanometry, pure tone audiometry, and MLD were performed. In the first MLD measurement condition, the threshold level where the signal was out of phase and the noise was in phase (SπNo) was subtracted from the threshold level where the signal and noise were in phase (SoNo). In the second MLD measurement condition, the threshold level where the signal was in phase and the noise was out of phase (SoNπ) was subtracted from the threshold level where the signal and noise were in phase (SoNo). The mean test scores were obtained in decibels. Comparisons were made in terms of gender and conditions. RESULTS: The mean MLD for the SoNo-SπNo condition was 10.3 ± 1.99 dB. For the SoNo-SoNπ condition, the mean MLD was 6.72 ± 2.38 dB. A significant difference was determined between the MLDs under the two measurement conditions (p < .05). There was no significant difference in terms of gender (p > .05). CONCLUSION: Gender-specific mean normative values of MLD test scores in healthy young adults are presented for two different measurement conditions.
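In both conditions the MLD reduces to a subtraction of two detection thresholds: the homophasic (SoNo) threshold minus the threshold of the dichotic condition. A minimal sketch; the individual threshold values below are invented examples chosen only so that the differences match the reported group means, not data from the study.

```python
def masking_level_difference(sono_db, dichotic_db):
    """MLD in dB: SoNo threshold minus the dichotic-condition threshold.

    A positive MLD means the signal was detectable at a lower level when
    signal and noise were out of phase (binaural release from masking).
    """
    return sono_db - dichotic_db

# Hypothetical thresholds (dB) consistent with the reported means:
print(masking_level_difference(-8.0, -18.3))   # SoNo-SpiNo: ~10.3 dB
print(masking_level_difference(-8.0, -14.72))  # SoNo-SoNpi: ~6.72 dB
```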


Subject(s)
Perceptual Masking , Humans , Male , Female , Young Adult , Adult , Reference Values , Perceptual Masking/physiology , Audiometry, Pure-Tone/methods , Acoustic Impedance Tests/methods , Acoustic Impedance Tests/standards , Auditory Threshold/physiology , Sex Factors
10.
eNeuro ; 11(8), 2024 Aug.
Article in English | MEDLINE | ID: mdl-39122554

ABSTRACT

Reverberation, a ubiquitous feature of real-world acoustic environments, exhibits statistical regularities that human listeners leverage to self-orient, facilitate auditory perception, and understand their environment. Despite the extensive research on sound source representation in the auditory system, it remains unclear how the brain represents real-world reverberant environments. Here, we characterized the neural response to reverberation of varying realism by applying multivariate pattern analysis to electroencephalographic (EEG) brain signals. Human listeners (12 males and 8 females) heard speech samples convolved with real-world and synthetic reverberant impulse responses and judged whether the speech samples were in a "real" or "fake" environment, focusing on the reverberant background rather than the properties of speech itself. Participants distinguished real from synthetic reverberation with ∼75% accuracy; EEG decoding reveals a multistage decoding time course, with dissociable components early in the stimulus presentation and later in the perioffset stage. The early component predominantly occurred in temporal electrode clusters, while the later component was prominent in centroparietal clusters. These findings suggest distinct neural stages in perceiving natural acoustic environments, likely reflecting sensory encoding and higher-level perceptual decision-making processes. Overall, our findings provide evidence that reverberation, rather than being largely suppressed as a noise-like signal, carries relevant environmental information and gains representation along the auditory system. This understanding also offers various applications; it provides insights for including reverberation as a cue to aid navigation for blind and visually impaired people. It also helps to enhance realism perception in immersive virtual reality settings, gaming, music, and film production.


Subject(s)
Auditory Perception , Decision Making , Electroencephalography , Speech Perception , Humans , Male , Female , Young Adult , Adult , Decision Making/physiology , Speech Perception/physiology , Auditory Perception/physiology , Acoustic Stimulation , Environment , Brain/physiology
11.
Autism Res ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39188092

ABSTRACT

Some autistic children acquire foreign languages from exposure to screens. Such unexpected bilingualism (UB) is therefore not driven by social interaction; rather, language acquisition appears to rely on less socially mediated learning and other cognitive processes. We hypothesize that UB children may rely on other cues, such as acoustic cues, of the linguistic input. Previous research indicates enhanced pitch processing in some autistic children, often associated with language delays and difficulties in forming stable phonological categories due to sensitivity to subtle linguistic variations. We propose that repetitive screen-based input simplifies linguistic complexity, allowing focus on individual cues. This study hypothesizes that autistic UB children exhibit superior pitch discrimination compared with both autistic and non-autistic peers. From a sample of 46 autistic French-speaking children aged 9 to 16, 12 were considered UB. These children, along with 45 non-autistic children, participated in a two-alternative forced-choice pitch discrimination task. They listened to pairs of pure tones, 50% of which differed by 3% (easy), 2% (medium), or 1% (hard). A stringent comparison of performance revealed that only the autistic UB group performed above chance across all conditions for tone pairs that differed. This group demonstrated superior pitch discrimination relative to autistic and non-autistic peers. This study establishes the phenomenon of UB in autism and provides evidence for enhanced pitch discrimination in this group. Acute perception of auditory information, combined with repeated language content, may facilitate UB children's focus on phonetic features, and help acquire a language with no communicative support or motivation.

12.
Biomedica ; 44(2): 168-181, 2024 05 30.
Article in English, Spanish | MEDLINE | ID: mdl-39088526

ABSTRACT

Introduction: Hearing health is a public health concern that affects quality of life and can be disturbed by noise exposure, generating auditory and extra-auditory symptoms. Objective: To identify the hearing health status of adults living in Bogotá and its association with environmental noise exposure and individual and otological factors. Materials and methods: We conducted a cross-sectional study using a database with 10,311 records from 2014 to 2018, collected through a structured survey of noise perception and hearing screening. We performed descriptive, bivariate, and binary logistic regression analyses. Results: Of the included participants, 35.4% presented hearing impairment. In the perception component, 13.0% reported not hearing well, 28.8% had extra-auditory symptoms, 53.3% reported otological antecedents, and 69.0% reported discomfort due to extramural noise. In the logistic regression, the variables most strongly associated with hearing impairment were living in noisy areas (OR = 1.50; 95% CI: 1.34-1.69), being male (OR = 1.85; 95% CI: 1.64-2.09), increasing age (for each year of life, the risk of hearing impairment increased by 6%), and a history of extra-auditory symptoms (OR = 1.86; 95% CI: 1.66-2.08). Conclusions: Hearing impairment is multi-causal in the studied population. The factors that promote its prevalence are increasing age, being male, smoking, ototoxic medications, living in areas with high noise exposure, and extra-auditory symptoms.
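The per-year age effect quoted above ("6%") is the usual reading of an exponentiated logistic regression coefficient. A short sketch of that conversion; the coefficient is back-derived from the reported percentage for illustration and is not the study's fitted value.

```python
from math import exp, log

def odds_ratio(beta):
    """Odds ratio for a one-unit increase in a logistic-regression predictor."""
    return exp(beta)

def percent_increase(beta):
    """Percent change in the odds per one-unit increase in the predictor."""
    return (exp(beta) - 1.0) * 100.0

# ln(1.06) is the coefficient implied by a 6% increase in odds per year of age:
beta_age = log(1.06)
print(round(odds_ratio(beta_age), 2))     # 1.06
print(round(percent_increase(beta_age)))  # 6
```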



Subject(s)
Environmental Exposure , Hearing Loss, Noise-Induced , Noise , Humans , Colombia/epidemiology , Adult , Cross-Sectional Studies , Middle Aged , Male , Adolescent , Female , Noise/adverse effects , Young Adult , Hearing Loss, Noise-Induced/epidemiology , Hearing Loss, Noise-Induced/etiology , Environmental Exposure/adverse effects , Risk Factors
13.
Biomedicines ; 12(7), 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39062188

ABSTRACT

The aim of this study was to identify key proteins of synaptic transmission in the cochlear nucleus (CN) that are involved in normal hearing, acoustic stimulation, and tinnitus. A gene list was compiled from the GeneCards database using the keywords "synaptic transmission" AND "tinnitus" AND "cochlear nucleus" (Tin). For comparison, two gene lists with the keywords "auditory perception" (AP) AND "acoustic stimulation" (AcouStim) were built. The STRING protein-protein interaction (PPI) network and the Cytoscape data analyzer were used to identify the top two high-degree proteins (HDPs) and their high-score interaction proteins (HSIPs), together referred to as key proteins. The top1 key proteins of the Tin process were BDNF, NTRK1, NTRK3, and NTF3; the top2 key proteins were FOS, JUN, CREB1, EGR1, MAPK1, and MAPK3. Highly significant GO terms in the CN in tinnitus were "RNA polymerase II transcription factor complex", "late endosome", "cellular response to cadmium ion", "cellular response to reactive oxygen species", and "nerve growth factor signaling pathway", indicating changes in vesicle and cell homeostasis. In contrast to the spiral ganglion, where important changes in tinnitus are characterized by processes at the level of cells, important biological changes in the CN take place at the level of synapses and transcription.
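The degree-based selection of HDPs described above can be pictured as a simple ranking over node degrees in the PPI graph. A toy sketch; the edge list is invented for illustration and is not STRING data.

```python
from collections import Counter

def top_degree_proteins(edges, k=2):
    """Rank nodes of an undirected PPI edge list by degree, highest first."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [node for node, _ in degree.most_common(k)]

# Hypothetical interactions among proteins named in the abstract:
edges = [
    ("BDNF", "NTRK1"), ("BDNF", "NTRK3"), ("BDNF", "NTF3"),
    ("FOS", "JUN"), ("FOS", "CREB1"),
]
print(top_degree_proteins(edges))  # ['BDNF', 'FOS']
```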

14.
Exp Brain Res ; 242(9): 2207-2217, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012473

ABSTRACT

Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., the Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules in the combination of tonal and acoustic deviants, but not in the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the underlying process of RSE may elucidate the relationship between multidimensional regularity processing in music.


Subject(s)
Acoustic Stimulation , Auditory Perception , Music , Reaction Time , Humans , Female , Male , Reaction Time/physiology , Young Adult , Adult , Acoustic Stimulation/methods , Auditory Perception/physiology
15.
Sci Rep ; 14(1): 16482, 2024 07 17.
Article in English | MEDLINE | ID: mdl-39014070

ABSTRACT

Emotions have the potential to modulate human voluntary movement by modifying muscle afferent discharge, which in turn may affect kinesthetic acuity. We examined whether heart rate (HR)-related physiological changes induced by music-elicited emotions would underlie alterations in healthy young adults' ankle joint target-matching strategy, quantified by joint position sense (JPS). Participants (n = 40, 19 females, age = 25.9 ± 2.9 years) performed ipsilateral and contralateral ankle target-matching tasks with their dominant and non-dominant foot using a custom-made foot platform while listening to classical music pieces deemed to evoke happy, sad, or neutral emotions (each n = 10). Participants in the fourth group received no music during the task. Absolute (ABS), constant (CONST), and variable (VAR) target-matching errors and HR-related data were analyzed. Participants performed the contralateral target-matching task with smaller JPS errors when listening to sad vs. happy (ABS: p < 0.001, d = 1.6; VAR: p = 0.010, d = 1.2) or neutral (ABS: p < 0.001, d = 1.6; VAR: p < 0.001, d = 1.4) music. The ABS (d = 0.8) and VAR (d = 0.3) JPS errors were lower when participants performed the task with their dominant vs. non-dominant foot. JPS errors were also smaller during the ipsilateral target-matching task when participants (1) listened to sad vs. neutral (ABS: p = 0.007, d = 1.2) music, and (2) performed the target-matching with their dominant vs. non-dominant foot (p < 0.001, d = 0.4). Although emotions also induced changes in some HR-related data during the matching conditions, i.e., participants who listened to happy music had lower HR-related values when matching with their non-dominant vs. dominant foot, these changes did not correlate with JPS errors (all p > 0.05). Overall, our results suggest that music-induced emotions have the potential to affect target-matching strategy and HR-related metrics, but that changes in HR metrics do not underlie the alteration of ankle joint target-matching strategy in response to classical music-elicited emotions.
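The three error scores named above are conventional target-matching metrics: constant error is the signed mean (bias), absolute error the mean magnitude, and variable error the trial-to-trial dispersion. A minimal sketch with made-up signed errors in degrees; whether the study used a population or sample standard deviation for VAR is an assumption here (population SD is shown).

```python
from math import sqrt

def jps_errors(errors):
    """Return (ABS, CONST, VAR) for a list of signed matching errors."""
    n = len(errors)
    const = sum(errors) / n                                # signed bias
    abs_err = sum(abs(e) for e in errors) / n              # mean magnitude
    var = sqrt(sum((e - const) ** 2 for e in errors) / n)  # population SD
    return abs_err, const, var

# Hypothetical signed errors (degrees) across five matching trials:
abs_err, const, var = jps_errors([2.0, -1.0, 3.0, 0.0, 1.0])
print(abs_err, const, round(var, 3))  # 1.4 1.0 1.414
```

Note that CONST can be small while ABS is large (errors that cancel in sign), which is why the two are reported separately.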


Subject(s)
Ankle Joint , Emotions , Heart Rate , Music , Humans , Female , Male , Adult , Heart Rate/physiology , Music/psychology , Ankle Joint/physiology , Emotions/physiology , Young Adult
16.
J Alzheimers Dis ; 100(3): 945-959, 2024.
Article in English | MEDLINE | ID: mdl-38995777

ABSTRACT

Background: Understanding the nature and extent of sensorimotor decline in aging individuals and those with neurocognitive disorders (NCD), such as Alzheimer's disease, is essential for designing effective music-based interventions. Our understanding of rhythmic functions remains incomplete, particularly in how aging and NCD affect sensorimotor synchronization and adaptation to tempo changes. Objective: This study aimed to investigate how aging and NCD severity impact tapping to metronomes and music, with and without tempo changes. Methods: Patients from a memory clinic participated in a tapping task, synchronizing with metronomic and musical sequences, some of which contained sudden tempo changes. After exclusions, 51 patients were included in the final analysis. Results: Participants' Mini-Mental State Examination scores were associated with tapping consistency. Additionally, age negatively influenced consistency when synchronizing with a musical beat, whereas consistency remained stable across age when tapping with a metronome. Conclusions: The results indicate that the initial decline of attention and working memory with age may impact perception and synchronization to a musical beat, whereas progressive NCD-related cognitive decline results in more widespread sensorimotor decline, affecting tapping irrespective of audio type. These findings underline the importance of customizing rhythm-based interventions to the needs of older adults and individuals with NCD, taking into consideration their cognitive as well as their rhythmic aptitudes.


Subject(s)
Aging , Music , Humans , Male , Female , Aged , Aging/physiology , Aging/psychology , Music/psychology , Middle Aged , Neurocognitive Disorders/physiopathology , Neurocognitive Disorders/psychology , Psychomotor Performance/physiology , Aged, 80 and over , Auditory Perception/physiology , Adaptation, Physiological/physiology , Attention/physiology , Mental Status and Dementia Tests
17.
Cereb Cortex ; 34(7), 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39051660

ABSTRACT

What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing, through differential neural sensitivity to temporal and spectral modulations present in environmental sounds, that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.


Subject(s)
Acoustic Stimulation , Auditory Perception , Functional Laterality , Magnetic Resonance Imaging , Humans , Male , Female , Magnetic Resonance Imaging/methods , Functional Laterality/physiology , Adult , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Brain Mapping/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging
18.
R Soc Open Sci ; 11(5): 221181, 2024 May.
Article in English | MEDLINE | ID: mdl-39076801

ABSTRACT

Octave equivalence describes the perception that two notes separated by a doubling in frequency have a similar quality. In humans, octave equivalence is important to both music and language learning and is found cross-culturally. Cross-species studies comparing human and non-human animals can help illuminate the necessary pre-conditions to developing octave equivalence. Here, we tested whether rats (Rattus norvegicus) perceive octave equivalence using a standardized cross-species paradigm. This allowed us to disentangle competing hypotheses regarding the evolutionary roots of this phenomenon. One hypothesis is that octave equivalence is directly connected to vocal learning, but this hypothesis is only partially supported by data. According to another hypothesis, the harmonic structure of mammalian vocalizations may be more important. If rats perceive octave equivalence, this would support the importance of vocal harmonic structure. If rats do not perceive octave equivalence, this would suggest that octave equivalence evolved independently in several mammalian clades due to a more complex interplay of different factors such as, but not exclusively, the ability to vocally learn. Evidence from our study suggests that rats do perceive octave equivalence, thereby suggesting that the harmonic vocal structure found in mammals may be a key pre-requisite for octave equivalence. Stage 1 approved protocol: the study reported here was originally accepted as a Registered Report and the study design was approved in Stage 1. We hereby confirm that the completed experiment(s) have been executed and analysed in the manner originally approved with any unforeseen changes in those approved methods and analyses clearly noted. The approved Stage 1 protocol can be found at: https://osf.io/gvf7c/?view_only=76dc1840f31c4f9ab59eb93cbadb98b7.
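The 2:1 frequency relation at the core of octave equivalence can be made concrete with a few lines of arithmetic; the tone frequencies below are standard musical examples, not stimuli from the study.

```python
from math import log2

def octaves_apart(f1_hz, f2_hz, tol=1e-9):
    """Number of exact octaves between two frequencies, or None if not octave-related."""
    n = log2(f2_hz / f1_hz)
    return round(n) if abs(n - round(n)) < tol else None

print(octaves_apart(220.0, 440.0))  # 1 (A3 -> A4: frequency doubled)
print(octaves_apart(220.0, 880.0))  # 2
print(octaves_apart(220.0, 660.0))  # None (a 3:1 ratio, not an octave)
```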

19.
Psychol Sci ; 35(7): 814-824, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38889285

ABSTRACT

Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object than when it was congruent. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory- and visual-processing domains, as participants performed both across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. In sum, our study reveals the importance of audiovisual interactions in supporting meaningful perceptual experiences in naturalistic settings.


Subject(s)
Auditory Perception, Visual Perception, Humans, Auditory Perception/physiology, Young Adult, Adult, Male, Female, Visual Perception/physiology, Noise, Acoustic Stimulation
20.
Schizophr Bull ; 50(5): 1104-1116, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-38934800

ABSTRACT

BACKGROUND AND HYPOTHESIS: N-Methyl-d-aspartate receptor (NMDA-R) hypofunction has been hypothesized to contribute to circuit dysfunctions in schizophrenia (ScZ). Yet it remains to be determined whether the physiological changes observed after NMDA-R antagonist administration are consistent with the alterations in auditory gamma-band activity seen in ScZ, which depends on NMDA-R activity. STUDY DESIGN: This systematic review investigated the effects of NMDA-R antagonists on auditory gamma-band activity in preclinical (n = 15) and human (n = 3) studies and compared these data with electro-/magneto-encephalographic measurements in ScZ patients (n = 37 studies) and early-stage psychosis (n = 9 studies). The following gamma-band parameters were examined: (1) evoked spectral power, (2) intertrial phase coherence (ITPC), (3) induced spectral power, and (4) baseline power. STUDY RESULTS: Animal and human pharmacological data reported reductions, especially in evoked gamma-band power and ITPC, as well as increases and biphasic effects on gamma-band activity following NMDA-R antagonist administration. In addition, NMDA-R antagonists increased baseline gamma-band activity in preclinical studies. Reductions in ITPC and evoked gamma-band power were broadly compatible with findings in ScZ and early-stage psychosis, where the majority of studies observed decreased gamma-band spectral power and ITPC. Findings on baseline gamma-band power were inconsistent. Finally, a publication bias was observed among studies investigating auditory gamma-band activity in ScZ patients. CONCLUSIONS: Our systematic review indicates that NMDA-R antagonists may partially recreate the reductions in gamma-band spectral power and ITPC seen during auditory stimulation in ScZ. These findings are discussed in the context of current theories of altered excitation/inhibition (E/I) balance and the role of NMDA-R hypofunction in the pathophysiology of ScZ.
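The ITPC measure compared across these studies has a standard definition: the magnitude of the mean unit-length phase vector across trials, where phases are typically extracted at a given time-frequency point (e.g., via a wavelet transform). A minimal sketch of that computation, with simulated phases standing in for real EEG/MEG data:

```python
import numpy as np

def itpc(phases: np.ndarray) -> float:
    """Intertrial phase coherence: magnitude of the mean unit phase
    vector across trials. 1 = perfectly phase-locked; ~0 = random phase."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
locked = np.full(200, 0.3)                       # identical phase on every trial
random_phases = rng.uniform(-np.pi, np.pi, 200)  # uniformly random phases

print(itpc(locked))         # ~1.0 (perfect phase locking)
print(itpc(random_phases))  # near 0 (no phase locking)
```

Because ITPC discards amplitude, it isolates the phase-consistency component of the gamma-band response from evoked power, which is why the two parameters are reported separately in the review.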


Subject(s)
Gamma Rhythm, Psychotic Disorders, Receptors, N-Methyl-D-Aspartate, Schizophrenia, Humans, Schizophrenia/physiopathology, Schizophrenia/drug therapy, Psychotic Disorders/physiopathology, Psychotic Disorders/drug therapy, Gamma Rhythm/physiology, Gamma Rhythm/drug effects, Receptors, N-Methyl-D-Aspartate/antagonists & inhibitors, Magnetoencephalography, Excitatory Amino Acid Antagonists/pharmacology, Excitatory Amino Acid Antagonists/administration & dosage, Evoked Potentials, Auditory/physiology, Evoked Potentials, Auditory/drug effects, Acoustic Stimulation, Animals