Results 1 - 20 of 214
1.
Int J Audiol ; : 1-8, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39275858

ABSTRACT

OBJECTIVE: The primary objective of this study was to explore the feasibility of remotely assessing music perception in paediatric cochlear implant (CI) recipients. Pitch direction discrimination (PDD) and timbre recognition (TR) tests were administered remotely. We aimed to provide insights into the potential benefits and challenges of remote assessments. DESIGN: The study was exploratory. All participants underwent remote assessments for the PDD and TR tests. Eight participants completed both online and face-to-face tests. Parents supervising the remote tests completed the System Usability Scale (SUS). STUDY SAMPLE: A cohort of 27 children with CIs (mean age 11.19 years) participated in this study. RESULTS: In the online condition, the average PDD score was 3.29 semitones (st), the average TR score was 37.86%, and the average durations of the PDD and TR tests were 9.98 and 6.18 minutes, respectively. Face-to-face sessions yielded an average PDD score of 3.00 st, a TR score of 32.81%, and durations of 10.20 and 5.42 minutes for the PDD and TR tests, respectively. The SUS score averaged 64.04. CONCLUSION: These findings contribute to the growing body of knowledge supporting the integration of remote assessments into audiological practice.
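In outline, a PDD threshold in semitones can be measured with a 2-down/1-up adaptive staircase, a standard psychoacoustic procedure that converges on the ~70.7%-correct point. The sketch below is a minimal illustration of that logic, not the study's actual test software; the `respond` callback, step factor, and stopping rule are assumptions.

```python
import random

def pdd_staircase(respond, start_st=12.0, min_st=0.25, factor=2.0,
                  reversals_needed=6):
    """2-down/1-up staircase: respond(interval_st) presents two tones
    interval_st semitones apart (direction randomized) and returns True
    on a correct answer. Threshold = mean interval at the reversals."""
    interval, streak, last_dir, reversals = start_st, 0, 0, []
    while len(reversals) < reversals_needed:
        if respond(interval):
            streak += 1
            if streak == 2:                      # two correct -> harder
                streak = 0
                if last_dir == +1:
                    reversals.append(interval)
                last_dir = -1
                interval = max(min_st, interval / factor)
        else:                                    # one error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(interval)
            last_dir = +1
            interval = min(start_st, interval * factor)
    return sum(reversals) / len(reversals)

# Simulated listener whose accuracy grows with the pitch interval:
est = pdd_staircase(lambda st: random.random() < min(0.95, 0.55 + st / 12))
print(round(est, 2), "st")
```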

3.
Exp Brain Res ; 242(9): 2207-2217, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012473

ABSTRACT

Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and it provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., a Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting concurrent processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules for the combination of tonal and acoustic deviants, but not for the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the process underlying the RSE may elucidate the relationships between the processes handling music's multidimensional regularities.
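Evidence for coactivation in RSE paradigms is conventionally tested with Miller's race-model inequality, F_AB(t) ≤ F_A(t) + F_B(t): under a parallel race between independent detectors, the redundant-signal CDF can never exceed the sum of the single-signal CDFs. A sketch of that test on empirical CDFs follows; the synthetic data, quantile grid, and variable names are illustrative, not the study's.

```python
import numpy as np

def race_violation(rt_a, rt_b, rt_ab, qs=np.arange(0.05, 0.51, 0.05)):
    """Compare the redundant-signal ECDF to the Miller bound at RT values
    taken from the lower quantiles of the pooled data. Positive return
    values indicate violations, i.e. evidence for coactivation."""
    t = np.quantile(np.concatenate([rt_a, rt_b, rt_ab]), qs)
    ecdf = lambda x: (np.asarray(x)[:, None] <= t[None, :]).mean(axis=0)
    bound = np.minimum(ecdf(rt_a) + ecdf(rt_b), 1.0)
    return ecdf(rt_ab) - bound

rng = np.random.default_rng(0)
rt_tonal = rng.normal(520, 60, 200)       # RTs (ms), tonal deviant alone
rt_intensity = rng.normal(510, 60, 200)   # intensity deviant alone
rt_double = rng.normal(450, 55, 200)      # redundant double deviant
print(np.round(race_violation(rt_tonal, rt_intensity, rt_double), 3))
```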


Subject(s)
Acoustic Stimulation, Auditory Perception, Music, Reaction Time, Humans, Female, Male, Reaction Time/physiology, Young Adult, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology
4.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging (fMRI) while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. Our results show that the vocal stimulus engages sensorimotor aspects of auditory perception and is processed as the more salient stimulus, while the instrumental condition recruits higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.
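For orientation, the core of a co-activation pattern (CAP) analysis can be reduced to selecting the fMRI frames in which the seed is strongly active and clustering them. The toy sketch below omits the psychophysiological-interaction term that defines PPI-CAPs proper, and the array shapes, threshold, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def caps(frames, seed_ts, z_thresh=1.0, n_patterns=4):
    """Bare-bones CAP analysis: keep the frames where the (standardized)
    seed time course exceeds a threshold, then cluster those frames.
    frames:  (n_timepoints, n_voxels) standardized BOLD data
    seed_ts: (n_timepoints,) standardized seed (auditory cortex) signal"""
    active = frames[seed_ts > z_thresh]          # suprathreshold frames
    km = KMeans(n_clusters=n_patterns, n_init=10).fit(active)
    return km.cluster_centers_                   # one spatial map per CAP

rng = np.random.default_rng(0)
maps = caps(rng.normal(size=(300, 5000)), rng.normal(size=300))
print(maps.shape)  # (4, 5000): four co-activation maps
```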


Subject(s)
Auditory Perception, Magnetic Resonance Imaging, Music, Humans, Female, Male, Auditory Perception/physiology, Infant, Newborn, Singing/physiology, Infant, Premature/physiology, Brain Mapping, Acoustic Stimulation, Brain/physiology, Brain/diagnostic imaging, Voice/physiology
5.
J Clin Med ; 13(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38892853

ABSTRACT

Background: This study investigated how different hearing profiles influenced melodic contour identification (MCI) in a real-world concert setting with a live band including drums, bass, and a lead instrument. We aimed to determine the impact of various auditory assistive technologies on music perception in an ecologically valid environment. Methods: The study involved 43 participants with varying hearing capabilities: normal hearing, bilateral hearing aids, bimodal hearing, single-sided cochlear implants, and bilateral cochlear implants. Participants were exposed to melodies played on a piano or accordion, with and without an electric bass as a masker, accompanied by a basic drum rhythm. Bayesian logistic mixed-effects models were used to analyze the data. Results: The introduction of an electric bass as a masker did not significantly affect MCI performance for any hearing group when melodies were played on the piano, in contrast to its effect on accordion melodies and to previous findings. Greater challenges were observed with accordion melodies, especially when accompanied by the electric bass. Conclusions: MCI performance among hearing aid users was comparable to that of the other hearing-impaired profiles, challenging the hypothesis that they would outperform cochlear implant users. A set of short melodies inspired by Western music styles was developed for future contour identification tasks.
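A minimal sketch of how trial-level MCI data might be fitted with a Bayesian logistic mixed-effects model, here using the bambi library on synthetic data; the column names, factor levels, and model formula are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
import bambi as bmb

# Synthetic trial-level data standing in for the real dataset.
rng = np.random.default_rng(0)
n = 43 * 24                                    # participants x trials
df = pd.DataFrame({
    "subject": np.repeat([f"s{i:02d}" for i in range(43)], 24),
    "group":   np.repeat(rng.choice(["NH", "HA", "bimodal", "CI"], 43), 24),
    "masker":  np.tile(["none", "bass"], n // 2),
    "melody":  np.tile(["piano", "piano", "accordion", "accordion"], n // 4),
})
df["correct"] = rng.binomial(1, 0.6, n)        # contour identified?

# Bernoulli GLMM with by-participant random intercepts, in the spirit of
# the Bayesian logistic mixed-effects models the abstract mentions.
model = bmb.Model("correct ~ masker * melody + group + (1|subject)",
                  df, family="bernoulli")
idata = model.fit(draws=1000, chains=2)        # NUTS sampling via PyMC
```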

6.
J Cogn ; 7(1): 32, 2024.
Article in English | MEDLINE | ID: mdl-38617750

ABSTRACT

We present a novel approach to representing perceptual and cognitive knowledge, spectral knowledge representation, that is focused on the oscillatory behaviour of the brain. The model is presented in the context of a larger hypothetical cognitive architecture. The model uses literal representations of waves to describe the dynamics of neural assemblies as they process perceived input. We show how the model can be applied to representations of sound, and usefully model music perception, specifically harmonic distance. We demonstrate that the model naturally captures both pitch and chord/key distance as empirically measured by Krumhansl and Kessler, thereby providing an underlying mechanism from which their toroidal model might arise. We evaluate our model with respect to those of Milne and others.
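For context, the Krumhansl and Kessler key distances referenced here are classically derived by correlating probe-tone profiles; the short sketch below uses the published Krumhansl & Kessler (1982) ratings, and taking 1 − r as the distance is one common convention (the multidimensional scaling of these distances yields the toroidal map of key space).

```python
import numpy as np

# Krumhansl & Kessler (1982) probe-tone profiles for C major / C minor.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def key_profile(tonic, mode):
    """Profile of any key as a rotation of the C-major/C-minor template."""
    return np.roll(MAJOR if mode == "major" else MINOR, tonic)

def key_distance(k1, k2):
    """Inter-key distance as 1 - Pearson r between probe-tone profiles."""
    return 1 - np.corrcoef(key_profile(*k1), key_profile(*k2))[0, 1]

print(key_distance((0, "major"), (7, "major")))  # C vs G major: close
print(key_distance((0, "major"), (6, "major")))  # C vs F# major: distant
```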

7.
Front Psychol ; 15: 1339168, 2024.
Article in English | MEDLINE | ID: mdl-38629034

ABSTRACT

Nowadays there are multiple ways to perceive music, from attending concerts (live) to listening to recorded music through headphones (medial). In between there are many mixed modes, such as playback performances. In empirical music research, this plurality of performance forms has so far found little recognition: until now, no measuring instrument has existed that could adequately capture the differences in perception and aesthetic judgment. The purpose of our empirical investigation was to capture all dimensions relevant to such an assessment. Using 3D simulations and dynamic binaural synthesis, various live and medial situations were simulated. A qualitative survey was conducted at the Department of Audio Communication of the Technical University of Berlin (TU Berlin). With the help of the repertory grid technique, a data pool of approximately 400 attribute pairs was created and individual rating data were collected. Our first study served to create a semantic differential. In a second study, this semantic differential was evaluated. The semantic differential was developed by first applying a mixed-methods approach to qualitative analysis according to grounded theory; thereafter, a principal component analysis reduced the attribute pairs to 67 items in four components. The semantic differential consists of items concerning acoustic, visual, and audio-visual interaction, as well as items with an overarching assessment of the stimuli. The evaluation study, comprising 45 participants (23 male and 22 female, M = 42.56 years, SD = 17.16) who rated 12 stimuli each, reduced the items to 61 and resulted in 18 subscales and nine single items. Because the survey used simulations, the social component may be underrepresented. Nevertheless, the questionnaire we created enables the evaluation of music performances (especially classical concerts) in a new scope, opening many opportunities for further research. For example, in a live concert context, we observed not only that seating position influences the judgment of sound quality but also that visual elements influence immersion and felt affect. In the future, the differential could be reviewed for a larger stimulus pool, extended, or used modularly for different research questions.
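The attribute-pool reduction step can be illustrated with a standard principal component analysis. The sketch below uses synthetic data; the matrix sizes, loading cutoff, and retention rule are assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic ratings: rows = participant x stimulus cases, columns =
# bipolar attribute pairs from the repertory-grid pool, generated from
# four underlying components plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(540, 4))
weights = rng.normal(size=(4, 400)) * rng.binomial(1, 0.1, (4, 400))
ratings = latent @ weights + rng.normal(scale=0.8, size=(540, 400))

# Standardize items, extract four components, inspect loadings.
pca = PCA(n_components=4).fit(StandardScaler().fit_transform(ratings))
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# Retain items that load strongly on at least one component:
keep = np.abs(loadings).max(axis=1) > 0.5
print(keep.sum(), "items retained;",
      pca.explained_variance_ratio_.round(2), "variance explained")
```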

8.
Dev Sci ; 27(5): e13519, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38679927

ABSTRACT

The present longitudinal study investigated the hypothesis that early musical skills (as measured by melodic and rhythmic perception and memory) predict later literacy development via a mediating effect of phonology. We examined 130 French-speaking children, 31 of whom had a familial risk for developmental dyslexia (DD). Their abilities in the three domains were assessed longitudinally with a comprehensive battery of behavioral tests in kindergarten, first grade, and second grade. Using a structural equation modeling approach, we examined potential longitudinal effects from music to literacy via phonology. We then investigated how familial risk for DD may influence these relationships by testing whether atypical music processing is a risk factor for DD. Results showed that children with a familial risk for DD consistently underperformed children without familial risk in music, phonology, and literacy. A small effect of musical ability on literacy via phonology was observed, but it may have been induced by differences in stability across domains over time. Furthermore, early musical skills did not add significant predictive power for later literacy difficulties beyond phonological skills and family risk status. These findings are consistent with the idea that certain key auditory skills are shared between music and speech processing, and between DD and congenital amusia. However, they do not support the notion that music perception and memory skills can serve as a reliable early marker of DD, nor as a valuable target for reading remediation. RESEARCH HIGHLIGHTS: Music, phonology, and literacy skills of 130 children, 31 of whom had a familial risk for dyslexia, were examined longitudinally. Children with a familial risk for dyslexia consistently underperformed children without familial risk in musical, phonological, and literacy skills. Structural equation models showed a small effect of musical ability in kindergarten on literacy in second grade, via phonology in first grade. However, early musical skills did not add significant predictive power for later literacy difficulties beyond phonological skills and family risk status.
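A bare-bones sketch of the mediation structure (music → phonology → literacy) in lavaan-style syntax via the semopy library; the variable names and synthetic data are illustrative, not the study's battery or estimates.

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic stand-in data: one row per child.
rng = np.random.default_rng(1)
n = 130
music = rng.normal(size=n)                            # kindergarten music
phono = 0.4 * music + rng.normal(size=n)              # grade-1 phonology
lit = 0.5 * phono + 0.1 * music + rng.normal(size=n)  # grade-2 literacy
data = pd.DataFrame({"music_k": music, "phonology_g1": phono,
                     "literacy_g2": lit})

# Mediation model: the indirect effect is the product of the
# music_k -> phonology_g1 and phonology_g1 -> literacy_g2 paths.
desc = """
phonology_g1 ~ music_k
literacy_g2 ~ phonology_g1 + music_k
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())   # path estimates, standard errors, p-values
```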


Subject(s)
Dyslexia, Music, Humans, Dyslexia/genetics, Dyslexia/physiopathology, Longitudinal Studies, Child, Male, Female, Risk Factors, Reading, Child, Preschool, Auditory Perception/physiology
9.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679480

ABSTRACT

Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.
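The simplest of the four perspectives, relating regional activity to the familiarity ratings, might be outlined as follows; the region and trial counts and the plain Pearson correlation are illustrative assumptions.

```python
import numpy as np

# Toy outline: correlate trial-wise responses of a few regions of
# interest with the 5-point familiarity ratings (130 excerpts).
rng = np.random.default_rng(3)
ratings = rng.integers(1, 6, size=130)          # one rating per excerpt
roi_betas = rng.normal(size=(130, 8))           # excerpt x ROI responses

r = [np.corrcoef(ratings, roi_betas[:, i])[0, 1] for i in range(8)]
print(np.round(r, 2))  # regions whose activity tracks familiarity
```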


Subject(s)
Brain Mapping, Brain, Magnetic Resonance Imaging, Music, Recognition, Psychology, Humans, Music/psychology, Recognition, Psychology/physiology, Male, Female, Young Adult, Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Auditory Perception/physiology, Acoustic Stimulation/methods
10.
Audiol Res ; 14(2): 217-226, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38525681

ABSTRACT

The most prevalent sensory impairment impacting the elderly is age-related hearing loss (HL), which affects around 65% of individuals over the age of 60 years. This bilateral, symmetrical sensorineural impairment profoundly affects auditory perception, speech discrimination, and the overall understanding of auditory signals. Influenced by diverse factors, age-related HL can substantially influence an individual's quality of life and mental health and can lead to depression. Cochlear implantation (CI) stands as a standard intervention, yet despite advancements, music perception challenges persist, which can be addressed with individualized music therapy. This case report describes the journey of an 81-year-old musician through profound sensorineural hearing loss, cochlear implantation, and rehabilitative music therapy. Auditory evaluations, musical exercises, and quality of life assessments highlighted meaningful improvements in music perception, auditory skills, and overall satisfaction post-implantation. Music therapy facilitated emotional, functional, and musical levels of engagement, notably enhancing his ability to perceive melody, rhythm, and different instruments. Moreover, subjective assessments and audiograms indicated marked improvements in auditory differentiation, music enjoyment, and overall hearing thresholds. This comprehensive approach integrating bilateral CIs and music therapy showcased audiological and quality of life enhancements in an elderly individual with profound hearing loss, emphasizing the efficacy of this combined treatment approach.

11.
Cogn Neurodyn ; 18(1): 49-66, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38406195

ABSTRACT

The present study tests the hypothesis that the emotions of fear and anger are associated with distinct psychophysiological and neural circuitry, in line with the discrete emotion model and their contrasting neurotransmitter activities, despite the two emotions often being placed in the same affective group because of their similar arousal-valence scores in dimensional emotion models. EEG data were downloaded from the OpenNeuro platform (accession number ds002721). Brain connectivity estimates were obtained using both functional and effective connectivity estimators in the analysis of short (2 s) and long (6 s) EEG segments across the cortex. Discrete emotions and resting states were identified by frequency-band-specific brain network measures, and contrasting emotional states were then classified with 5-fold cross-validated Long Short-Term Memory (LSTM) networks. Logistic regression modeling was also examined to provide robust performance criteria. Overall, the best results were obtained using Partial Directed Coherence (PDC) in the Gamma (31.5-60.5 Hz) sub-bands of short EEG segments. In particular, fear and anger were classified with an accuracy of 91.79%, supporting our hypothesis. Anger was characterized by increased transitivity, decreased local efficiency, and lower modularity in the Gamma band in comparison to fear. Local efficiency refers to functional brain segregation, arising from the ability of the brain to exchange information locally. Transitivity refers to the overall probability that adjacent neural populations in the brain are interconnected, thus revealing the existence of tightly connected cortical regions. Modularity quantifies how well the brain can be partitioned into functional cortical regions. In conclusion, PDC is proposed for graph-theoretical analysis of short EEG epochs, providing robust emotional indicators sensitive to the perception of affective sounds.
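The three graph measures named in the conclusion can be computed from a thresholded connectivity matrix with networkx. A hedged sketch follows; the symmetrization and binarization steps are simplifications of whatever pipeline the study used.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

def graph_measures(pdc, threshold=0.5):
    """Transitivity, local efficiency, and modularity of a thresholded
    connectivity matrix. `pdc` is a channels x channels coupling matrix
    (e.g. gamma-band Partial Directed Coherence)."""
    adj = np.maximum(pdc, pdc.T) > threshold   # symmetrize, then binarize
    np.fill_diagonal(adj, False)
    G = nx.from_numpy_array(adj.astype(int))
    comms = greedy_modularity_communities(G)
    return {"transitivity": nx.transitivity(G),
            "local_efficiency": nx.local_efficiency(G),
            "modularity": modularity(G, comms)}

# Example on a random 32-channel matrix:
print(graph_measures(np.random.default_rng(0).uniform(size=(32, 32)),
                     threshold=0.7))
```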

12.
Eur Arch Otorhinolaryngol ; 281(7): 3475-3482, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38194096

ABSTRACT

PURPOSE: This study aimed to investigate the effects of low-frequency (LF) pitch perception on speech-in-noise and music perception performance in children with cochlear implants (CIC) and children with typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception, as well as the effects of demographic and audiological factors on the present research outcomes, were studied. METHODS: The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq) and in noise (WRSn + 10) was tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS: CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination. Melody and total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and WRSn + 10. Hearing thresholds showed significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated; both revealed significant effects on all music perception scores. Age at cochlear implantation had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION: These findings confirm the significant effects of LF pitch perception on complex listening performance. The significant speech-in-noise and music perception correlations were as promising as results from recent studies indicating significant positive effects of music training on speech-in-noise recognition in CIC.


Subject(s)
Cochlear Implants, Music, Noise, Pitch Perception, Speech Perception, Humans, Child, Male, Female, Speech Perception/physiology, Pitch Perception/physiology, Cochlear Implantation
13.
Laryngoscope ; 134(3): 1381-1387, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37665102

ABSTRACT

OBJECTIVE: Music is a highly complex acoustic stimulus in both its spectral and temporal contents. Accurate representation and delivery of high-fidelity information are essential for music perception. However, it is unclear how well bone-anchored hearing implants (BAHIs) transmit music. The study objective is to establish music perception performance baselines for BAHI users and normal hearing (NH) listeners and compare outcomes between the cohorts. METHODS: A case-controlled, cross-sectional study was conducted among 18 BAHI users and 11 NH controls. Music perception was assessed via performance on seven major musical element tasks: pitch discrimination, melodic contour identification, rhythmic clocking, basic tempo discrimination, timbre identification, polyphonic pitch detection, and harmonic chord discrimination. RESULTS: BAHI users performed comparably well on all music perception tasks with their device compared with the unilateral condition with their better-hearing ear. BAHI performance was not statistically significantly different from NH listeners' performance. BAHI users performed as well as, if not better than, NH listeners when using their non-implanted contralateral ear; there was no significant difference between the two groups except for the rhythmic clocking task (BAHI non-implanted ear 69% [95% CI: 62%-75%]; NH 56% [95% CI: 49%-63%], p = 0.02) and the basic tempo task (BAHI non-implanted ear 80% [95% CI: 65%-95%]; NH 75% [95% CI: 68%-82%], p = 0.03). CONCLUSIONS: This study represents the first comprehensive study of basic music perception performance in BAHI users. Our results demonstrate that BAHI users perform as well with their implanted ear as with their contralateral better-hearing ear and NH controls in the major elements of music perception. LEVEL OF EVIDENCE: 3. Laryngoscope, 134:1381-1387, 2024.


Subject(s)
Cochlear Implantation, Cochlear Implants, Music, Humans, Auditory Perception, Cross-Sectional Studies, Hearing, Pitch Perception
14.
Behav Res Methods ; 56(3): 1968-1983, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37221344

ABSTRACT

We describe the development and validation of a test battery to assess musical ability that taps into a broad range of music perception skills and can be administered in 10 minutes or less. In Study 1, we derived four very brief versions from the Profile of Music Perception Skills (PROMS) and examined their properties in a sample of 280 participants. In Study 2 (N = 109), we administered the version retained from Study 1, termed the Micro-PROMS, together with the full-length PROMS, finding a short-to-long-form correlation of r = .72. In Study 3 (N = 198), we removed redundant trials and examined test-retest reliability as well as convergent, discriminant, and criterion validity. Results showed adequate internal consistency (ω̄ = .73) and test-retest reliability (ICC = .83). Findings supported convergent validity of the Micro-PROMS (r = .59 with the MET, p < .01) as well as discriminant validity with short-term and working memory (r ≲ .20). Criterion-related validity was evidenced by significant correlations of the Micro-PROMS with external indicators of musical proficiency (r̄ = .37, ps < .01), and with Gold-MSI General Musical Sophistication (r = .51, p < .01). By virtue of its brevity, psychometric qualities, and suitability for online administration, the battery fills a gap in the tools available to objectively assess musical ability.
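As an illustration of the internal-consistency figure, the sketch below computes Cronbach's alpha from a persons × items score matrix on synthetic data; the abstract reports McDonald's omega, a related but distinct coefficient, so this is a simplified stand-in.

```python
import numpy as np

def cronbach_alpha(scores):
    """Classic internal-consistency coefficient for an
    n_persons x n_items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))                        # latent trait
scores = ability + rng.normal(scale=1.2, size=(200, 12))   # 12 trials
print(round(cronbach_alpha(scores), 2))
```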


Subject(s)
Music, Humans, Reproducibility of Results, Data Accuracy, Psychometrics, Test Taking Skills
15.
Behav Res Methods ; 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37957432

ABSTRACT

Auditory scene analysis (ASA) is the process through which the auditory system makes sense of complex acoustic environments by organising sound mixtures into meaningful events and streams. Although music psychology has acknowledged the fundamental role of ASA in shaping music perception, no efficient test to quantify listeners' ASA abilities in realistic musical scenarios has yet been published. This study presents a new tool for testing ASA abilities in the context of music, suitable for both normal-hearing (NH) and hearing-impaired (HI) individuals: the adaptive Musical Scene Analysis (MSA) test. The test uses a simple 'yes-no' task paradigm to determine whether the sound from a single target instrument is heard in a mixture of popular music. During the online calibration phase, 525 NH and 131 HI listeners were recruited. The level ratio between the target instrument and the mixture, the choice of target instrument, and the number of instruments in the mixture were found to be important factors affecting item difficulty, whereas stereo width (induced by inter-aural level differences) had only a minor effect. Based on a Bayesian logistic mixed-effects model, an adaptive version of the MSA test was developed. In a subsequent validation experiment with 74 listeners (20 HI), MSA scores showed acceptable test-retest reliability and moderate correlations with other music-related tests, pure-tone-average audiograms, age, musical sophistication, and working memory capacities. The MSA test is a user-friendly and efficient open-source tool for evaluating musical ASA abilities and is suitable for profiling the effects of hearing impairment on music perception.
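An adaptive test of this kind typically chooses each next item so that its predicted success probability is near a target, updating an ability posterior after every response. The sketch below is a generic illustration of that loop, not the published MSA test's actual policy or parameters.

```python
import numpy as np

def next_item(ability, difficulties, target=0.75):
    """Pick the item whose predicted success probability at the current
    ability estimate is closest to the target."""
    p = 1.0 / (1.0 + np.exp(-(ability - difficulties)))
    return int(np.argmin(np.abs(p - target)))

def update(grid, prior, difficulty, correct):
    """Grid-based Bayesian update of the ability posterior after a trial."""
    p = 1.0 / (1.0 + np.exp(-(grid - difficulty)))
    post = prior * (p if correct else 1.0 - p)
    return post / post.sum()

grid = np.linspace(-4, 4, 161)                 # ability grid
prior = np.exp(-grid**2 / 2); prior /= prior.sum()
difficulties = np.linspace(-3, 3, 30)          # e.g. from the level ratios

rng = np.random.default_rng(0)
for _ in range(10):                            # simulate ten trials
    item = next_item(grid @ prior, difficulties)
    correct = rng.random() < 0.7               # fake listener response
    prior = update(grid, prior, difficulties[item], correct)
print("ability estimate:", round(float(grid @ prior), 2))
```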

16.
Audiol Res ; 13(5): 753-766, 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37887848

ABSTRACT

Electric stimulation via a cochlear implant (CI) enables people with severe-to-profound sensorineural hearing loss to regain speech understanding and music appreciation and, thus, allows them to actively engage in social life. Three main manufacturers (Cochlear™, MED-EL™, and Advanced Bionics™ "AB") offer CI systems, challenging CI recipients and otolaryngologists with a difficult decision, as currently no comprehensive overview or meta-analysis of performance outcomes following CI implantation is available. The main goals of this scoping review were to (1) map the literature on speech and music performance outcomes and (2) determine whether studies have performed outcome comparisons between devices of different manufacturers. To this end, a literature search was conducted to find studies that address speech and music outcomes in CI recipients. From a total of 1592 papers, 188 paper abstracts were analyzed and 147 articles were found suitable for an examination of full text. From these, 42 studies were included for synthesis. A total of 16 studies used the consonant-nucleus-consonant (CNC) word recognition test in quiet at 60 dB SPL. We found that, aside from technical comparisons, very few publications compared speech outcomes across manufacturers of CI systems. However, evidence suggests that these data are available in large CI centers in Germany and the US. Future studies should therefore leverage large data cohorts to perform such comparisons, which could provide critical evaluation criteria and assist both CI recipients and otolaryngologists in making informed, performance-based decisions.

17.
Front Hum Neurosci ; 17: 1195996, 2023.
Article in English | MEDLINE | ID: mdl-37841073

ABSTRACT

Introduction: A growing body of research has investigated how performing arts training, and more specifically music training, impacts the brain. Recent meta-analytic work has identified multiple brain areas where activity varies as a function of the level of musical expertise gained through music training. However, research has also shown that musical sophistication may be high even without music training. Thus, we aim to extend previous work by investigating whether the functional connectivity of these areas relates to interindividual differences in musical sophistication, and to characterize differences in connectivity attributed to performing arts training. Methods: We analyzed resting-state functional magnetic resonance imaging (fMRI) data from n = 74 participants, of whom 37 had received performing arts training (in a musical instrument, singing, and/or acting) at university level. We used a validated, continuous measure of musical sophistication to further characterize our sample. Following standard pre-processing, fifteen brain areas were identified a priori based on meta-analytic work and used as seeds in separate seed-to-voxel analyses to examine the effect of musical sophistication across the sample, and in between-group analyses to examine the effects of performing arts training. Results: Connectivity of the bilateral superior temporal gyrus, bilateral precentral gyrus and cerebellum, and bilateral putamen, left insula, and left thalamus varied with different aspects of musical sophistication. By including measures of these aspects as covariates in post hoc analyses, we found that connectivity of the right superior temporal gyrus and left precentral gyrus relates to effects of performing arts training beyond effects of individual musical sophistication. Discussion: Our results highlight the potential role of sensory areas in active engagement with music, the potential role of motor areas in emotion processing, and the potential role of connectivity between the putamen and lingual gyrus in general musical sophistication.
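A seed-to-voxel analysis of the kind described can be sketched with nilearn. The sample dataset (downloaded on first use), the single seed coordinate, and the output filename are illustrative assumptions, not the study's fifteen seeds or pipeline.

```python
from nilearn import datasets
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker

# Public sample dataset stands in for the study's resting-state data.
func = datasets.fetch_development_fmri(n_subjects=1).func[0]

# One seed sphere near right superior temporal gyrus; the MNI coordinate
# is an illustrative choice.
seed_masker = NiftiSpheresMasker([(60, -22, 7)], radius=8, standardize=True)
brain_masker = NiftiMasker(standardize=True)

seed_ts = seed_masker.fit_transform(func)    # (n_scans, 1)
brain_ts = brain_masker.fit_transform(func)  # (n_scans, n_voxels)

# Voxel-wise correlation with the seed time course = seed-to-voxel map.
corr = (brain_ts * seed_ts).mean(axis=0)
brain_masker.inverse_transform(corr).to_filename("stg_seed_map.nii.gz")
```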

18.
Healthcare (Basel) ; 11(12)2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37372805

ABSTRACT

INTRODUCTION: Music is an intriguing but relatively under-researched intervention with many potential benefits for mechanically ventilated patients. This review aimed to assess the impact of listening to music, as a non-pharmacological intervention, on the physiological, psychological, and social responses of patients in an intensive care unit. METHODS: The literature review was conducted in the fourth quarter of 2022. The overview included papers found in Science Direct, EBSCO, PubMed, Ovid, and Scopus: original research papers published in English meeting the PICOS criteria. Articles published between 2010 and 2022 meeting the inclusion criteria were included for further analysis. RESULTS: Music significantly affects vital parameters: it decreases heart rate, blood pressure, and respiratory rate, and it reduces pain intensity. The analyses confirmed that music affects anxiety levels, reduces sleep disturbances and the occurrence of delirium, and improves cognitive function. The effectiveness of the intervention is influenced by the choice of music. CONCLUSIONS: There is evidence of the beneficial effects of music on a patient's physiological, psychological, and social responses. Music therapy is highly effective in reducing anxiety and pain and stabilizes physiological parameters (i.e., heart rate and respiratory rate) after music sessions in mechanically ventilated patients. Studies show that music reduces agitation in confused patients, improves mood, and facilitates communication.

19.
Heliyon ; 9(4): e15199, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37123947

ABSTRACT

This study presents a method to estimate the complexity of popular music drum patterns based on a core idea from predictive coding. Specifically, it postulates that the complexity of a drum pattern depends on the quantity of surprisal it causes in the listener. Surprisal, according to predictive coding theory, is a numerical measure that takes large values when the perceiver's internal model of the surrounding world fails to predict the actual stream of sensory data (i.e. when the perception surprises the perceiver), and low values if model predictions and sensory data agree. The proposed new method first approximates a listener's internal model of a popular music drum pattern (using ideas on enculturation and a Bayesian learning process). It then quantifies the listener's surprisal by evaluating the discrepancies between the predictions of the internal model and the actual drum pattern. It finally estimates drum pattern complexity from surprisal. The method was optimised and tested using a set of forty popular music drum patterns, for which empirical perceived complexity measurements are available. The new method provided complexity estimates that had a good fit with the empirical measurements (R² = .852). The method was implemented as an R script that can be used to estimate the complexity of popular music drum patterns in the future. Simulations indicate that we can expect the method to predict perceived complexity with a good fit (R² ≥ .709) in 99% of drum pattern sets randomly drawn from the Western popular music repertoire. These results suggest that surprisal indeed captures essential aspects of complexity, and that it may serve as a basis for a general theory of perceived complexity.
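The paper's implementation is an R script; as a toy rendering of the surprisal idea in Python, one can learn per-position onset probabilities from a corpus (the "enculturated" internal model) and score a pattern by its mean surprisal. All patterns and corpus proportions below are invented for illustration.

```python
import numpy as np

def learn_model(corpus):
    """'Enculturation': per-position onset probabilities estimated from a
    corpus of familiar patterns (Laplace-smoothed Bayesian estimate)."""
    corpus = np.asarray(corpus)
    return (corpus.sum(axis=0) + 1) / (corpus.shape[0] + 2)

def complexity(pattern, p_onset):
    """Mean surprisal (bits) of a binary pattern under the learned model:
    -log2 P(event) per grid position, averaged over the pattern."""
    pattern = np.asarray(pattern)
    p_event = np.where(pattern == 1, p_onset, 1 - p_onset)
    return -np.log2(p_event).mean()

# Toy corpus dominated by a four-on-the-floor kick pattern:
corpus = [[1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]] * 30 \
       + [[1,0,0,0, 1,0,1,0, 1,0,0,0, 1,0,1,0]] * 10
model = learn_model(corpus)

print(complexity([1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0], model))  # low
print(complexity([0,1,1,0, 0,1,0,1, 0,1,1,0, 0,1,0,1], model))  # high
```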

20.
Int J Psychol ; 58(5): 465-475, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37248624

ABSTRACT

Musical stimuli are widely used in emotion research and intervention studies. However, reviews have repeatedly noted that a lack of pre-evaluated musical stimuli is stalling progress in our understanding of the specific effects of varying music. Musical stimuli vary along a plethora of dimensions; of particular interest are emotional valence and tempo. We therefore aimed to evaluate the emotional valence of a set of slow and fast musical stimuli. N = 102 participants (mean age: 39.95, SD: 13.60, 61% female) rated the perceived emotional valence of 20 fast (>110 beats per minute [bpm]) and 20 slow (<90 bpm) stimuli. Moreover, we collected reports of subjective arousal for each stimulus to explore arousal's association with tempo and valence. Finally, participants completed questionnaires on demographics, mood (Profile of Mood States), personality (Ten-Item Personality Inventory), musical sophistication (Gold-MSI), and sound preferences and hearing habits (Sound Preference and Hearing Habits Questionnaire). Using mixed-effect model estimates, we identified 19 stimuli that participants rated as having positive valence and 16 stimuli that they rated as having negative valence. Higher age predicted more positive valence ratings across stimuli. Higher tempo and more extreme valence ratings were each associated with higher arousal. Higher educational attainment was also associated with higher arousal reports. The pre-evaluated stimuli can be used in future musical research.
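A sketch of the kind of mixed-effects estimate mentioned, using statsmodels on synthetic long-format ratings; the variable names, effect sizes, and model formula are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format ratings: 102 raters x 40 stimuli.
rng = np.random.default_rng(7)
n_subj, n_stim = 102, 40
tempo = np.tile(np.r_[np.zeros(20), np.ones(20)], n_subj)  # 0=slow, 1=fast
age = np.repeat(rng.integers(18, 70, n_subj), n_stim)
subj_icpt = np.repeat(rng.normal(0, 0.5, n_subj), n_stim)  # rater effects
valence = (0.3 * tempo + 0.02 * age + subj_icpt
           + rng.normal(0, 1, n_subj * n_stim))
df = pd.DataFrame({"valence": valence, "tempo": tempo, "age": age,
                   "subject": np.repeat(np.arange(n_subj), n_stim)})

# Random intercept per participant; per-stimulus effects could be added
# as variance components or crossed random effects.
fit = smf.mixedlm("valence ~ tempo + age", df, groups=df["subject"]).fit()
print(fit.summary())
```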


Subject(s)
Music, Humans, Female, Adult, Male, Music/psychology, Emotions, Arousal, Affect, Perception, Auditory Perception