Results 1 - 20 of 35
1.
Sci Rep ; 14(1): 16462, 2024 07 16.
Article in English | MEDLINE | ID: mdl-39014043

ABSTRACT

The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
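The mediation result described above is typically tested via the product of two regression coefficients (predictor to mediator, and mediator to outcome controlling for the predictor) together with a bootstrap confidence interval for the indirect effect. The sketch below illustrates only that logic; the variable names and the plain-OLS-plus-percentile-bootstrap choices are assumptions for illustration, not the authors' actual analysis pipeline.

```python
# Minimal mediation (indirect-effect) sketch, assuming three per-participant
# score arrays of equal length:
#   aptitude -> musical aptitude (predictor)
#   prosody  -> prosodic discrimination accuracy (mediator)
#   emotion  -> vocal emotion recognition accuracy (outcome)
import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    """OLS slope of y on x (single predictor with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(aptitude, prosody, emotion):
    a = slope(aptitude, prosody)                      # predictor -> mediator
    # mediator -> outcome, controlling for the predictor
    X = np.column_stack([np.ones_like(aptitude), aptitude, prosody])
    beta, *_ = np.linalg.lstsq(X, emotion, rcond=None)
    b = beta[2]
    return a * b                                      # indirect (mediated) effect

def bootstrap_ci(aptitude, prosody, emotion, n_boot=5000, alpha=0.05):
    n = len(aptitude)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample participants
        stats.append(indirect_effect(aptitude[idx], prosody[idx], emotion[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi                                     # CI excluding 0 -> evidence of mediation
```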


Subject(s)
Aptitude, Emotions, Music, Humans, Music/psychology, Male, Female, Emotions/physiology, Aptitude/physiology, Adult, Young Adult, Speech Perception/physiology, Auditory Perception/physiology, Adolescent, Recognition, Psychology/physiology, Voice/physiology
2.
Cogn Emot ; 38(1): 23-43, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37715528

ABSTRACT

There is debate within the literature as to whether emotion dysregulation (ED) in Attention-Deficit Hyperactivity Disorder (ADHD) reflects deviant attentional mechanisms or atypical perceptual emotion processing. Previous reviews have reliably examined the nature of facial, but not vocal, emotion recognition accuracy in ADHD. The present meta-analysis quantified vocal emotion recognition (VER) accuracy scores in ADHD and controls using robust variance estimation, gathered from 21 published and unpublished papers. Additional moderator analyses were carried out to determine whether the nature of VER accuracy in ADHD varied depending on emotion type. Findings revealed a medium effect size for the presence of VER deficits in ADHD, and moderator analyses showed VER accuracy in ADHD did not differ due to emotion type. These results support the theories which implicate the role of attentional mechanisms in driving VER deficits in ADHD. However, there is insufficient data within the behavioural VER literature to support the presence of emotion processing atypicalities in ADHD. Future neuro-imaging research could explore the interaction between attention and emotion processing in ADHD, taking into consideration ADHD subtypes and comorbidities.
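The meta-analytic pooling described above can be illustrated with a standard random-effects model. Note this is a simplification: the paper used robust variance estimation to handle dependent effect sizes, which the DerSimonian-Laird sketch below does not implement, and the input numbers are made up for illustration.

```python
# Simplified random-effects pooling of study effect sizes (DerSimonian-Laird).
import numpy as np

def random_effects_pool(effects, variances):
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                  # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)            # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = 1.0 / (variances + tau2)                    # random-effects weights
    theta_re = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return theta_re, se

# Hypothetical per-study effect sizes (e.g., Hedges' g) and sampling variances
g, se = random_effects_pool([0.45, 0.62, 0.30, 0.55], [0.04, 0.06, 0.05, 0.03])
print(f"pooled g = {g:.2f} +/- {1.96 * se:.2f}")
```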


Asunto(s)
Trastorno por Déficit de Atención con Hiperactividad , Voz , Humanos , Trastorno por Déficit de Atención con Hiperactividad/psicología , Emociones/fisiología , Reconocimiento en Psicología , Expresión Facial
3.
Br J Psychol ; 115(2): 206-225, 2024 May.
Article in English | MEDLINE | ID: mdl-37851369

ABSTRACT

Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition indicating that musicians excel at perceiving the melody (F0), but not the timbre of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
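The F0 (pitch contour) cue discussed above can be extracted from a recording with a standard pitch tracker. The sketch below uses librosa's pYIN implementation purely to illustrate the cue itself; it does not reproduce the parameter-specific voice morphing used in the study, and "stimulus.wav" is a placeholder file name.

```python
# Extract the F0 contour of a vocal stimulus and summarise it.
import numpy as np
import librosa

y, sr = librosa.load("stimulus.wav", sr=None)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

# f0 is NaN in unvoiced frames; nan-aware statistics summarise the contour
print("mean F0: %.1f Hz, F0 range: %.1f Hz"
      % (np.nanmean(f0), np.nanmax(f0) - np.nanmin(f0)))
```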


Asunto(s)
Música , Voz , Humanos , Estimulación Acústica , Emociones , Miedo , Reconocimiento en Psicología , Música/psicología , Percepción Auditiva
4.
Brain Sci ; 13(11)2023 Nov 07.
Article in English | MEDLINE | ID: mdl-38002523

ABSTRACT

Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
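The ERP components named above are usually quantified as mean amplitudes within fixed time windows (P200: 150-250 ms, N400: 300-500 ms, late positive potential: 500-1000 ms). The sketch below shows that extraction step only, on a simulated, already-epoched and baseline-corrected array; the array layout, sampling rate and channel indices are assumptions, not the study's pipeline.

```python
# Mean ERP amplitude per trial within a component time window.
import numpy as np

def window_mean(epochs, sfreq, tmin_epoch, win, channels):
    """epochs: (n_trials, n_channels, n_samples) array in volts.
    sfreq: sampling rate (Hz); tmin_epoch: time of first sample (s);
    win: (start, stop) window in seconds; channels: channel indices."""
    start = int(round((win[0] - tmin_epoch) * sfreq))
    stop = int(round((win[1] - tmin_epoch) * sfreq))
    return epochs[:, channels, start:stop].mean(axis=(1, 2))

# Simulated data: 40 trials, 64 channels, 1.2 s epochs (-0.2 to 1.0 s) at 500 Hz
rng = np.random.default_rng(1)
epochs = rng.normal(0, 1e-6, size=(40, 64, 600))
fronto_central = [10, 11, 12]          # hypothetical channel indices
p200 = window_mean(epochs, sfreq=500, tmin_epoch=-0.2, win=(0.15, 0.25),
                   channels=fronto_central)
print(p200.shape)                      # one mean amplitude per trial -> (40,)
```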

5.
BMC Psychiatry ; 23(1): 760, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848849

ABSTRACT

BACKGROUND: Cognitive and emotional impairment are among the core features of schizophrenia; assessment of vocal emotion recognition may facilitate the detection of schizophrenia. We explored the differences between cognitive and social aspects of emotion using vocal emotion recognition and detailed clinical characterization. METHODS: Clinical symptoms and social and cognitive functioning were assessed by trained clinical psychiatrists. A vocal emotion perception test, including an assessment of emotion recognition and emotional intensity, was conducted. One-hundred-six patients with schizophrenia (SCZ) and 230 healthy controls (HCs) were recruited. RESULTS: Considering emotion recognition, scores for all emotion categories were significantly lower in SCZ compared to HC. Considering emotional intensity, scores for anger, calmness, sadness, and surprise were significantly lower in the SCZs. Vocal recognition patterns showed a trend of unification and simplification in SCZs. A direct correlation was confirmed between vocal recognition impairment and cognition. In diagnostic tests, only the total score of vocal emotion recognition was a reliable index for the presence of schizophrenia. CONCLUSIONS: This study shows that patients with schizophrenia are characterized by impaired vocal emotion perception. Furthermore, explicit and implicit vocal emotion perception processing in individuals with schizophrenia are viewed as distinct entities. This study provides a voice recognition tool to facilitate and improve the diagnosis of schizophrenia.
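The kind of diagnostic evaluation reported above (a total recognition score as an index for the presence of schizophrenia) is commonly summarised with an ROC analysis. The sketch below is only an illustration of that idea: group labels and scores are simulated, and the study's actual scoring, thresholds and covariates are not reproduced.

```python
# ROC analysis of a total vocal emotion recognition score as a diagnostic index.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
# 1 = schizophrenia (n=106), 0 = healthy control (n=230); simulated scores
y = np.concatenate([np.ones(106), np.zeros(230)]).astype(int)
score = np.concatenate([rng.normal(18, 4, 106), rng.normal(24, 4, 230)])

# Higher scores mean better recognition, so flip the sign to treat low scores as "risk"
auc = roc_auc_score(y, -score)
fpr, tpr, thresholds = roc_curve(y, -score)
best = np.argmax(tpr - fpr)            # Youden's J picks a cut-off
print(f"AUC = {auc:.2f}, cut-off at score ~ {-thresholds[best]:.1f}")
```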


Asunto(s)
Esquizofrenia , Humanos , Esquizofrenia/diagnóstico , Emociones , Cognición , Ira , Percepción , Expresión Facial , Percepción Social
6.
J Child Lang ; : 1-11, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37391267

ABSTRACT

Infant-directed speech often has hyperarticulated features, such as point vowels whose formants are further apart than in adult-directed speech. This increased "vowel space" may reflect the caretaker's effort to speak more clearly to infants, thus benefiting language processing. However, hyperarticulation may also result from more positive valence (e.g., speaking with positive vocal emotion) often found in mothers' speech to infants. This study was designed to replicate others who have found hyperarticulation in maternal speech to their 6-month-olds, but also to examine their speech to a non-human infant (i.e., a puppy). We rated both kinds of maternal speech for their emotional valence and recorded mothers' speech to a human adult. We found that mothers produced more positively valenced utterances and some hyperarticulation in both their infant- and puppy-directed speech, compared to their adult-directed speech. This finding promotes looking at maternal speech from a multi-faceted perspective that includes emotional state.
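The "vowel space" mentioned above is typically operationalised as the area of the triangle spanned by the point vowels /i/, /a/ and /u/ in F1-F2 space. The sketch below computes that area with the shoelace formula; the formant values are rough textbook-style figures, not data from this study.

```python
# Vowel space area (Hz^2) from point-vowel formants via the shoelace formula.
def vowel_space_area(formants):
    """formants: dict vowel -> (F1, F2) in Hz for /i/, /a/, /u/."""
    (x1, y1), (x2, y2), (x3, y3) = formants["i"], formants["a"], formants["u"]
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

adult_directed  = {"i": (300, 2800), "a": (850, 1200), "u": (370, 950)}
infant_directed = {"i": (280, 3000), "a": (950, 1250), "u": (330, 850)}  # hyperarticulated

print(vowel_space_area(adult_directed), vowel_space_area(infant_directed))
# A larger area for infant-directed speech is what "hyperarticulation" refers to.
```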

7.
Cereb Cortex Commun ; 4(1): tgad002, 2023.
Article in English | MEDLINE | ID: mdl-36726795

ABSTRACT

Vocal emotion recognition, a key determinant to analyzing a speaker's emotional state, is known to be impaired following cerebellar dysfunctions. Nevertheless, its possible functional integration in the large-scale brain network subtending emotional prosody recognition has yet to be explored. We administered an emotional prosody recognition task to patients with right versus left-hemispheric cerebellar lesions and a group of matched controls. We explored the lesional correlates of vocal emotion recognition in patients through a network-based analysis by combining a neuropsychological approach for lesion mapping with normative brain connectome data. Results revealed impaired recognition among patients for neutral or negative prosody, with poorer sadness recognition performances by patients with right cerebellar lesion. Network-based lesion-symptom mapping revealed that sadness recognition performances were linked to a network connecting the cerebellum with left frontal, temporal, and parietal cortices. Moreover, when focusing solely on a subgroup of patients with right cerebellar damage, sadness recognition performances were associated with a more restricted network connecting the cerebellum to the left parietal lobe. As the left hemisphere is known to be crucial for the processing of short segmental information, these results suggest that a corticocerebellar network operates on a fine temporal scale during vocal emotion decoding.

8.
Sensors (Basel) ; 22(19)2022 Oct 06.
Article in English | MEDLINE | ID: mdl-36236658

ABSTRACT

Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields, including clinical diagnosis and intervention, social interaction research, and Human-Computer Interaction (HCI), increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification, but there is no consensus on which low-level descriptors and classifiers are optimal. Therefore, we aimed to compare the performance of machine-learning algorithms with several different feature sets. Concretely, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG) and Multinomial Logistic Regression (MLR), with 10-fold cross-validation using four openSMILE feature sets (i.e., IS-09, emobase, GeMAPS and eGeMAPS). Results indicated that SMO, MLP and LOG performed better (reaching accuracies of 87.85%, 84.00% and 83.74%, respectively) than RF, DT, MLR and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65% and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention and HCI.
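A rough scikit-learn analogue of the classifier comparison described above is sketched below. The paper used WEKA-style learners (SMO, J48, etc.) on openSMILE features; here the features are assumed to be already extracted into X (n_samples x n_features) with emotion labels y, and scikit-learn estimators stand in for the WEKA ones, so the exact accuracies will not be reproduced.

```python
# Compare several classifiers with stratified 10-fold cross-validation.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

def compare_classifiers(X, y, seed=0):
    models = {
        "SVM (SMO-like)": SVC(kernel="linear", C=1.0),
        "MLP":            MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000),
        "LogReg":         LogisticRegression(max_iter=1000),
        "RandomForest":   RandomForestClassifier(n_estimators=200, random_state=seed),
        "DecisionTree":   DecisionTreeClassifier(random_state=seed),
        "kNN":            KNeighborsClassifier(n_neighbors=5),
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
        acc = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
        print(f"{name:15s} {acc.mean():.3f} +/- {acc.std():.3f}")

# Usage: compare_classifiers(X, y) once features and labels are loaded.
```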


Asunto(s)
Aprendizaje Automático , Habla , Algoritmos , Emociones , Humanos , Redes Neurales de la Computación , Máquina de Vectores de Soporte
9.
Disabil Rehabil Assist Technol ; : 1-8, 2022 Aug 23.
Article in English | MEDLINE | ID: mdl-35997772

ABSTRACT

PURPOSE: As humans convey information about emotions through speech signals, emotion recognition via auditory information is often employed to assess a person's affective state. There are numerous ways of applying knowledge of emotional vocal expressions to system designs that adequately accommodate users' needs. Yet, little is known about how people with visual disabilities infer emotions from speech stimuli, especially via online platforms (e.g., Zoom). This study examined the degree to which they perceive emotions strongly or weakly (i.e., perceived intensity), and the degree to which their sociodemographic backgrounds affect the intensity levels of emotion they perceive when exposed to a set of emotional speech stimuli via Zoom. MATERIALS AND METHODS: A convenience sample of 30 individuals with visual disabilities participated in Zoom interviews. Participants were given a set of emotional speech stimuli and reported the intensity level of the perceived emotions on a rating scale from 1 (weak) to 8 (strong). RESULTS: When the participants were exposed to the emotional speech stimuli (calm, happy, fearful, sad, and neutral), they reported that neutral was the dominant emotion they perceived with the greatest intensity. Individual differences in the perceived intensity of emotions were also observed and were associated with sociodemographic backgrounds such as health, vision, job, and age. CONCLUSIONS: The results of this study are anticipated to contribute to fundamental knowledge useful to many stakeholders, such as voice technology engineers, user experience designers, health professionals, and social workers who support people with visual disabilities. IMPLICATIONS FOR REHABILITATION: Technologies equipped with alternative user interfaces (e.g., Siri, Alexa, and Google Voice Assistant) that meet the needs of people with visual disabilities can promote independent living and quality of life. Such technologies can also recognize emotions from users' voices, so that users can obtain services customized to their emotional needs or receive timely support for emotional challenges (e.g., early detection of onset, provision of advice, and so on). The results can also benefit health professionals (e.g., social workers) who work closely with clients who have visual disabilities (e.g., in virtual telehealth sessions), as they could gain insight into how to recognize and understand clients' emotional struggles by hearing their voices, thereby enhancing emotional intelligence. In this way they can provide better services to their clients, building strong bonds and trust between health professionals and clients with visual disabilities even when they meet virtually (e.g., via Zoom).

10.
Soc Cogn Affect Neurosci ; 17(12): 1145-1154, 2022 12 01.
Article in English | MEDLINE | ID: mdl-35522247

ABSTRACT

Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.


Asunto(s)
Percepción del Habla , Voz , Humanos , Masculino , Femenino , Electroencefalografía , Potenciales Evocados , Emociones/fisiología , Percepción Auditiva/fisiología , Percepción del Habla/fisiología
11.
Cogn Affect Behav Neurosci ; 22(5): 1030-1043, 2022 10.
Article in English | MEDLINE | ID: mdl-35474566

ABSTRACT

There is growing evidence that both the basal ganglia and the cerebellum play functional roles in emotion processing, either directly or indirectly, through their connections with cortical and subcortical structures. However, the lateralization of this complex processing in emotion recognition remains unclear. To address this issue, we investigated emotional prosody recognition in individuals with Parkinson's disease (model of basal ganglia dysfunction) or cerebellar stroke patients, as well as in matched healthy controls (n = 24 in each group). We analysed performances according to the lateralization of the predominant brain degeneration/lesion. Results showed that a right (basal ganglia and cerebellar) hemispheric dysfunction was likely to induce greater deficits than a left one. Moreover, deficits following left hemispheric dysfunction were only observed in cerebellar stroke patients, and these deficits resembled those observed after degeneration of the right basal ganglia. Additional analyses taking disease duration / time since stroke into consideration revealed a worsening of performances in patients with predominantly right-sided lesions over time. These results point to the differential, but complementary, involvement of the cerebellum and basal ganglia in emotional prosody decoding, with a probable hemispheric specialization according to the level of cognitive integration.


Asunto(s)
Enfermedad de Parkinson , Accidente Cerebrovascular , Ganglios Basales , Cerebelo , Emociones , Humanos , Accidente Cerebrovascular/complicaciones
12.
Soc Cogn Affect Neurosci ; 17(10): 890-903, 2022 10 03.
Article in English | MEDLINE | ID: mdl-35323933

ABSTRACT

Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth's neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants' age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.


Asunto(s)
Emociones , Voz , Adolescente , Adulto , Mapeo Encefálico , Niño , Emociones/fisiología , Expresión Facial , Humanos , Imagen por Resonancia Magnética , Corteza Prefrontal , Reconocimiento en Psicología/fisiología , Adulto Joven
13.
Neuroimage Clin ; 34: 102966, 2022.
Article in English | MEDLINE | ID: mdl-35182929

ABSTRACT

Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.
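The group-prediction logic behind "inter-subject pattern analysis" can be sketched as a cross-validated classifier that is trained on activation patterns from all but one participant and must predict the held-out participant's group (epilepsy vs. control). The sketch below shows only that logic; the feature extraction from the fMRI data and the authors' exact algorithm are not reproduced, and X is an assumed (n_subjects x n_features) pattern matrix.

```python
# Leave-one-subject-out decoding of group membership from activation patterns.
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def group_decoding_accuracy(X, y):
    """X: (n_subjects, n_features) activation patterns; y: 0/1 group labels."""
    pipe = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
    scores = cross_val_score(pipe, X, y, cv=LeaveOneOut(), scoring="accuracy")
    return scores.mean()   # above-chance accuracy -> patterns carry group information
```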


Asunto(s)
Epilepsia Refractaria , Epilepsia , Voz , Adolescente , Niño , Emociones/fisiología , Expresión Facial , Humanos , Voz/fisiología
14.
Cognition ; 219: 104967, 2022 02.
Article in English | MEDLINE | ID: mdl-34875400

ABSTRACT

While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were either of the same (Experiment 1) or opposite speaker gender (Experiment 2) of subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects appeared massively and proportionally reduced, with differences between morph types being no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender-correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.


Asunto(s)
Voz , Adaptación Fisiológica , Señales (Psicología) , Emociones , Femenino , Humanos , Masculino , Percepción Visual
15.
Front Neurosci ; 15: 705741, 2021.
Article in English | MEDLINE | ID: mdl-34393716

ABSTRACT

As elucidated by prior research, children with hearing loss have impaired vocal emotion recognition compared with their normal-hearing peers. Cochlear implants (CIs) have achieved significant success in facilitating hearing and speech abilities for people with severe-to-profound sensorineural hearing loss. However, due to the current limitations in neuroimaging tools, existing research has been unable to detail the neural processing for perception and the recognition of vocal emotions during early stage CI use in infant and toddler CI users (ITCI). In the present study, functional near-infrared spectroscopy (fNIRS) imaging was employed during preoperative and postoperative tests to describe the early neural processing of perception in prelingual deaf ITCIs and their recognition of four vocal emotions (fear, anger, happiness, and neutral). The results revealed that the cortical response elicited by vocal emotional stimulation on the left pre-motor and supplementary motor area (pre-SMA), right middle temporal gyrus (MTG), and right superior temporal gyrus (STG) were significantly different between preoperative and postoperative tests. These findings indicate differences between the preoperative and postoperative neural processing associated with vocal emotional stimulation. Further results revealed that the recognition of vocal emotional stimuli appeared in the right supramarginal gyrus (SMG) after CI implantation, and the response elicited by fear was significantly greater than the response elicited by anger, indicating a negative bias. These findings indicate that the development of emotional bias and the development of emotional perception and recognition capabilities in ITCIs occur on a different timeline and involve different neural processing from those in normal-hearing peers. To assess the speech perception and production abilities, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and Speech Intelligibility Rating (SIR) were used. The results revealed no significant differences between preoperative and postoperative tests. Finally, the correlates of the neurobehavioral results were investigated, and the results demonstrated that the preoperative response of the right SMG to anger stimuli was significantly and positively correlated with the evaluation of postoperative behavioral outcomes. And the postoperative response of the right SMG to anger stimuli was significantly and negatively correlated with the evaluation of postoperative behavioral outcomes.

16.
Cortex ; 142: 186-203, 2021 09.
Article in English | MEDLINE | ID: mdl-34273798

ABSTRACT

Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laughter in forty-seven patients representing all major syndromes of frontotemporal dementia, a disease spectrum characterised by severe socio-emotional dysfunction (twenty-two with behavioural variant frontotemporal dementia, twelve with semantic variant primary progressive aphasia, thirteen with nonfluent-agrammatic variant primary progressive aphasia), in relation to fifteen patients with typical amnestic Alzheimer's disease and twenty healthy age-matched individuals. We assessed cognitive labelling (identification) and valence rating (affective evaluation) of samples of spontaneous (mirthful and hostile) and volitional (posed) laughter versus two auditory control conditions (a synthetic laughter-like stimulus and spoken numbers). Neuroanatomical associations of laughter processing were assessed using voxel-based morphometry of patients' brain MR images. While all dementia syndromes were associated with impaired identification of laughter subtypes relative to healthy controls, this was significantly more severe overall in frontotemporal dementia than in Alzheimer's disease and particularly in the behavioural and semantic variants, which also showed abnormal affective evaluation of laughter. Over the patient cohort, laughter identification accuracy was correlated with measures of daily-life socio-emotional functioning. Certain striking syndromic signatures emerged, including enhanced liking for hostile laughter in behavioural variant frontotemporal dementia, impaired processing of synthetic laughter in the nonfluent-agrammatic variant (consistent with a generic complex auditory perceptual deficit) and enhanced liking for numbers ('numerophilia') in the semantic variant. Across the patient cohort, overall laughter identification accuracy correlated with regional grey matter in a core network encompassing inferior frontal and cingulo-insular cortices; and more specific correlates of laughter identification accuracy were delineated in cortical regions mediating affective disambiguation (identification of hostile and posed laughter in orbitofrontal cortex) and authenticity (social intent) decoding (identification of mirthful and posed laughter in anteromedial prefrontal cortex) (all p < .05 after correction for multiple voxel-wise comparisons over the whole brain). These findings reveal a rich diversity of cognitive and affective laughter phenotypes in canonical dementia syndromes and suggest that laughter is an informative probe of neural mechanisms underpinning socio-emotional dysfunction in neurodegenerative disease.


Asunto(s)
Demencia Frontotemporal , Risa , Enfermedades Neurodegenerativas , Afasia Progresiva Primaria no Fluente , Emociones , Demencia Frontotemporal/diagnóstico por imagen , Humanos , Imagen por Resonancia Magnética , Pruebas Neuropsicológicas
17.
Autism Res ; 14(9): 1965-1974, 2021 09.
Article in English | MEDLINE | ID: mdl-34089304

ABSTRACT

This study examined the psychometric characteristics of the Cambridge-Mindreading Face-Voice Battery for Children (CAM-C) for a sample of 333 children, ages 6-12 years with ASD (with no intellectual disability). Internal consistency was very good for the Total score (0.81 for both Faces and Voices) and respectable for the Complex emotions score (0.72 for Faces and 0.74 for Voices); however, internal consistency was lower for Simple emotions (0.65 for Faces and 0.61 for Voices). Test-retest reliability at 18 and 36 weeks was very good for the faces and voices total (0.76-0.81) and good for simple and complex faces and voices (0.53-0.75). Significant correlations were found between CAM-C Faces and scores on another measure of face-emotion recognition (Diagnostic Analysis of Nonverbal Accuracy-Second Edition), and between Faces and Voices scores and child age, IQ (except perceptual IQ and Simple Voice emotions), and language ability. Parent-reported ASD symptom severity and the Emotion Recognition scale on the SRS-2 were not related to CAM-C scores. Suggestions for future studies and further development of the CAM-C are provided. LAY SUMMARY: Facial and vocal emotion recognition are important for social interaction and have been identified as a challenge for individuals with autism spectrum disorder. Emotion recognition is an area frequently targeted by interventions. This study evaluated a measure of emotion recognition (the CAM-C) for its consistency and validity in a large sample of children with autism. The study found the CAM-C showed many strengths needed to accurately measure the change in emotion recognition during intervention.
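The internal-consistency statistic quoted above (Cronbach's alpha) is computed from an item-by-respondent score matrix; test-retest reliability is simply the correlation between the two testing sessions. The sketch below shows both formulas; the CAM-C item data are of course not included, and `scores` is any assumed (n_children x n_items) array.

```python
# Cronbach's alpha and test-retest reliability from raw score matrices.
import numpy as np

def cronbach_alpha(scores):
    """scores: ndarray (n_respondents, n_items) of item scores."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def test_retest_r(time1_totals, time2_totals):
    """Pearson correlation between total scores at the two sessions."""
    return np.corrcoef(time1_totals, time2_totals)[0, 1]
```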


Asunto(s)
Trastorno del Espectro Autista , Reconocimiento Facial , Voz , Niño , Emociones , Expresión Facial , Humanos , Psicometría , Reproducibilidad de los Resultados
18.
PeerJ ; 8: e8773, 2020.
Article in English | MEDLINE | ID: mdl-32274264

ABSTRACT

Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials are not always readily available or are not of good quality, which may be particularly important for studies with hearing-impaired listeners. Here we present a vocal emotion recognition test with pseudospeech productions from multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. The high sound quality recordings make the test suitable for use with populations of children and adults with normal or impaired hearing. Here we present normative data for vocal emotion recognition development in normal-hearing (NH) school-age children using the EmoHI test. Furthermore, we investigated cross-language effects by testing NH Dutch and English children, and the suitability of the EmoHI test for hearing-impaired populations, specifically for prelingually deaf Dutch children with cochlear implants (CIs). Our results show that NH children's performance improved significantly with age from the youngest age group onwards (4-6 years: 48.9%, on average). However, NH children's performance did not reach adult-like values (adults: 94.1%) even for the oldest age group tested (10-12 years: 81.1%). Additionally, the effect of age on NH children's development did not differ across languages. All except one CI child performed at or above chance-level showing the suitability of the EmoHI test. In addition, seven out of 14 CI children performed within the NH age-appropriate range, and nine out of 14 CI children did so when performance was adjusted for hearing age, measured from their age at CI implantation. However, CI children showed great variability in their performance, ranging from ceiling (97.2%) to below chance-level performance (27.8%), which could not be explained by chronological age alone. The strong and consistent development in performance with age, the lack of significant differences across the tested languages for NH children, and the above-chance performance of most CI children affirm the usability and versatility of the EmoHI test.
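Whether a child performs "above chance level" on a forced-choice test with three response options (happy/angry/sad, so chance = 1/3) can be checked with an exact binomial test, as sketched below. The number of trials per child in the EmoHI test is not stated here, so n_trials is a placeholder.

```python
# Exact binomial test of above-chance performance on a 3-alternative task.
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, n_categories=3, alpha=0.05):
    result = binomtest(n_correct, n_trials, p=1.0 / n_categories,
                       alternative="greater")
    return result.pvalue < alpha

# e.g., 20 correct out of 36 trials (~55.6%) is clearly above the 33.3% chance level
print(above_chance(20, 36))   # True
```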

19.
Soc Cogn Affect Neurosci ; 14(5): 529-538, 2019 05 31.
Article in English | MEDLINE | ID: mdl-31157395

ABSTRACT

Vocal expression is essential for conveying emotion during social interaction. Although vocal emotion has been explored in previous studies, little is known about how perception of different vocal emotional expressions modulates functional brain network topology. In this study, we aimed to investigate the functional brain networks under different attributes of vocal emotion by graph-theoretical network analysis. Functional magnetic resonance imaging (fMRI) experiments were performed on 36 healthy participants. We utilized the Power-264 functional brain atlas to calculate the interregional functional connectivity (FC) from fMRI data under resting state and vocal stimuli at different arousal and valence levels. The orthogonal minimal spanning trees method was used for topological filtering. Paired-sample t-tests with Bonferroni correction across all regions and arousal-valence levels were used for statistical comparisons. Our results show that the brain network exhibits significantly altered network attributes at the FC, nodal and global levels, especially under high-arousal or negative-valence vocal emotional stimuli. The alterations within and between well-known large-scale functional networks were also investigated. Through the present study, we have gained more insight into how comprehending emotional speech modulates brain networks. These findings may shed light on how the human brain processes emotional speech and how it distinguishes different emotional conditions.
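The connectivity-plus-spanning-tree filtering described above can be illustrated in simplified form. The study used the orthogonal minimal spanning trees (OMST) method; the sketch below keeps only a single maximum spanning tree of a correlation-based functional connectivity matrix, which is the basic building block of that approach, and the ROI time series are simulated.

```python
# Build a functional connectivity matrix and keep its spanning-tree backbone.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 264))            # 200 time points x 264 Power ROIs

fc = np.corrcoef(ts.T)                      # 264 x 264 functional connectivity
np.fill_diagonal(fc, 0.0)

g = nx.from_numpy_array(np.abs(fc))         # weighted graph on |correlation|
backbone = nx.maximum_spanning_tree(g, weight="weight")

print(backbone.number_of_nodes(), backbone.number_of_edges())   # 264, 263
# Graph metrics (e.g., degree, efficiency) are then computed on the filtered
# network and compared across conditions.
```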


Asunto(s)
Emociones/fisiología , Red Nerviosa/fisiología , Voz/fisiología , Adulto , Nivel de Alerta , Mapeo Encefálico , Femenino , Humanos , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Masculino , Red Nerviosa/diagnóstico por imagen , Adulto Joven
20.
Front Psychol ; 10: 184, 2019.
Article in English | MEDLINE | ID: mdl-30828312

ABSTRACT

It has repeatedly been argued that individual differences in personality influence emotion processing, but findings from both the facial and vocal emotion recognition literatures are contradictory, suggesting a lack of reliability across studies. To explore this relationship further in a more systematic manner using the Big Five Inventory, we designed two studies employing different research paradigms. Study 1 explored the relationship between personality traits and vocal emotion recognition accuracy, while Study 2 examined how personality traits relate to vocal emotion recognition speed. The combined results did not indicate a pairwise linear relationship between self-reported individual differences in personality and vocal emotion processing, suggesting that the repeatedly proposed influence of personality characteristics on vocal emotion processing may previously have been overemphasized.
