Results 1-20 of 1,997
1.
Schizophr Res ; 274: 24-32, 2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39250840

ABSTRACT

OBJECTIVE: Deficits of dyadic social interaction seem to diminish social functioning in schizophrenia. However, most previous studies have limited ecological validity due to decontextualized experimental conditions far removed from real-world interaction. In this pilot study, we therefore exposed participants to a more real-world-like situation to generate new hypotheses for research and therapeutic interventions. METHODS: Dyads of either participants with schizophrenia (n = 21) or control participants without a mental disorder (n = 21) were presented with a 5-min emotionally engaging movie. The subsequent uninstructed dyadic interaction was videotaped and analyzed by means of a semi-quantitative, software-supported behavioral analysis. RESULTS: The patients with schizophrenia showed significant abnormalities in their social interaction, such as more negative verbalizations, a more open display of negative affect, and gaze abnormalities. Their interaction behavior was mostly characterized by neutral affect, silence, and avoidance of direct eye contact. Neutral affect was associated with poorer psychosocial performance. Verbal intelligence and empathy were associated with positive interaction variables, which were also not impaired by psychotic symptom severity. CONCLUSION: In this real-world-like dyadic interaction, participants with schizophrenia showed distinct abnormalities that are relevant to psychosocial performance and consistent with a hypothesized lack of attunement to interaffective situations.

2.
Article in English | MEDLINE | ID: mdl-39262120

ABSTRACT

Cracking the non-verbal "code" of human emotions has been a chief interest of generations of scientists. Yet, despite much effort, a dictionary that clearly maps non-verbal behaviours onto meaning remains elusive. We suggest this is due to an over-reliance on language-related concepts and an under-appreciation of the evolutionary context in which a given non-verbal behaviour emerged. Indeed, work in other species emphasizes non-verbal effects (e.g. affiliation) rather than meaning (e.g. happiness) and differentiates between signals, for which communication benefits both sender and receiver, and cues, for which communication does not benefit senders. Against this backdrop, we develop a "non-verbal effecting" perspective for human research. This perspective extends the typical focus on facial expressions to a broadcasting of multisensory signals and cues that emerge from both social and non-social emotions. Moreover, it emphasizes the consequences or effects that signals and cues have for individuals and their social interactions. We believe that re-directing our attention from verbal emotion labels to non-verbal effects is a necessary step to comprehend scientifically how humans share what they feel.

3.
Sensors (Basel) ; 24(17)2024 Aug 25.
Article in English | MEDLINE | ID: mdl-39275417

ABSTRACT

Speech emotion recognition (SER) is not only a ubiquitous aspect of everyday communication but also a central focus in the field of human-computer interaction. However, SER faces several challenges, including difficulties in detecting subtle emotional nuances and the complicated task of recognizing speech emotions in noisy environments. To address these challenges effectively, we introduce a Transformer-based model called MelTrans, which is designed to distill critical clues from speech data by learning core features and long-range dependencies. At the heart of our approach is a dual-stream framework. Using the Transformer architecture as its foundation, MelTrans deciphers broad dependencies within speech mel-spectrograms, facilitating a nuanced understanding of emotional cues embedded in speech signals. Comprehensive experimental evaluations on the EmoDB (92.52%) and IEMOCAP (76.54%) datasets demonstrate the effectiveness of MelTrans. These results highlight MelTrans's ability to capture critical cues and long-range dependencies in speech data, setting a new benchmark on these specific datasets and underscoring the model's effectiveness in addressing the complex challenges posed by SER tasks.
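
The authors' MelTrans implementation is not included in this record; purely as an illustration of the general idea it describes (mel-spectrogram frames fed to a Transformer encoder for utterance-level emotion classification), a minimal PyTorch sketch follows. The class name, layer sizes, and the "clip.wav" path are assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): mel-spectrogram frames fed to a
# Transformer encoder for utterance-level speech emotion classification.
import torch
import torch.nn as nn
import librosa


def mel_frames(path, sr=16000, n_mels=80):
    """Load audio and return a (time, n_mels) log-mel spectrogram tensor."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return torch.from_numpy(librosa.power_to_db(mel).T).float()


class MelTransformerSketch(nn.Module):
    """Toy stand-in for a MelTrans-style model: linear projection,
    Transformer encoder over time frames, mean-pooled classification."""

    def __init__(self, n_mels=80, d_model=128, n_heads=4, n_layers=2, num_classes=7):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                    # x: (batch, time, n_mels)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))      # utterance-level emotion logits


# Example usage (path is a placeholder):
# logits = MelTransformerSketch()(mel_frames("clip.wav").unsqueeze(0))
```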


Subject(s)
Emotions; Speech; Humans; Emotions/physiology; Speech/physiology; Algorithms; Speech Recognition Software
4.
Sensors (Basel) ; 24(17)2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39275615

ABSTRACT

Speech emotion recognition is key to many fields, including human-computer interaction, healthcare, and intelligent assistance. While acoustic features extracted from human speech are essential for this task, not all of them contribute to emotion recognition effectively. Thus, reduced numbers of features are required within successful emotion recognition models. This work aimed to investigate whether splitting the features into two subsets based on their distribution and then applying commonly used feature reduction methods would impact accuracy. Filter reduction was employed using the Kruskal-Wallis test, followed by principal component analysis (PCA) and independent component analysis (ICA). A set of features was investigated to determine whether the indiscriminate use of parametric feature reduction techniques affects the accuracy of emotion recognition. For this investigation, data from three databases (Berlin EmoDB, SAVEE, and RAVDESS) were organized into subsets according to their distribution before applying both PCA and ICA. The results showed a reduction from 6373 features to 170 for the Berlin EmoDB database with an accuracy of 84.3%; a final size of 130 features for SAVEE, with a corresponding accuracy of 75.4%; and 150 features for RAVDESS, with an accuracy of 59.9%.
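
As a rough illustration of the pipeline this abstract describes (a per-feature Kruskal-Wallis filter followed by PCA and ICA), the sketch below runs on synthetic data; the significance threshold, component counts, and the handling of the distribution-based split are assumptions, not the study's settings.

```python
# Sketch of the described pipeline on synthetic data: Kruskal-Wallis filter
# per feature, then PCA / ICA on the retained features. Thresholds are assumed.
import numpy as np
from scipy.stats import kruskal
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 500))          # 300 utterances x 500 acoustic features
y = rng.integers(0, 7, size=300)         # 7 emotion classes

# 1) Filter: keep features whose class-wise distributions differ (Kruskal-Wallis).
keep = []
for j in range(X.shape[1]):
    groups = [X[y == c, j] for c in np.unique(y)]
    _, p = kruskal(*groups)
    if p < 0.05:
        keep.append(j)
X_filt = X[:, keep] if keep else X

# 2) Reduce: PCA for roughly Gaussian features, ICA otherwise (the paper splits
#    by feature distribution; here both are simply shown on the same matrix).
n_comp = min(50, X_filt.shape[1])
X_pca = PCA(n_components=n_comp).fit_transform(X_filt)
X_ica = FastICA(n_components=n_comp, random_state=0).fit_transform(X_filt)
print(X_pca.shape, X_ica.shape)
```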


Subject(s)
Emotions; Principal Component Analysis; Speech; Humans; Emotions/physiology; Speech/physiology; Databases, Factual; Algorithms; Pattern Recognition, Automated/methods
5.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275707

ABSTRACT

Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably the quantity and diversity of data required when deep learning techniques are used. The lack of a standard in feature selection leads to continuous development and experimentation, and choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach, developing preprocessing and feature selection stages, and constructing a dataset called EmoDSc by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images, the weighted accuracy reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.
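
The EmoDSc code itself is not part of this record; the following is only a schematic PyTorch sketch of the described fusion idea, with a 1-D CNN branch for spectral feature vectors, a 2-D CNN branch for spectrogram images, and an MLP over the concatenated embeddings. All layer sizes and input shapes are invented for illustration.

```python
# Illustrative sketch (not the EmoDSc code): a 1-D CNN branch for spectral
# feature vectors, a 2-D CNN branch for spectrogram images, fused by an MLP.
import torch
import torch.nn as nn


class FusionSketch(nn.Module):
    def __init__(self, n_spectral=193, num_classes=8):
        super().__init__()
        self.branch1d = nn.Sequential(            # spectral feature vector (1 x n_spectral)
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten())
        self.branch2d = nn.Sequential(            # spectrogram image (1 x 128 x 128)
            nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())
        self.mlp = nn.Sequential(
            nn.Linear(16 * 32 + 16 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes))

    def forward(self, spectral, spectrogram):
        z = torch.cat([self.branch1d(spectral), self.branch2d(spectrogram)], dim=1)
        return self.mlp(z)


model = FusionSketch()
logits = model(torch.randn(4, 1, 193), torch.randn(4, 1, 128, 128))
print(logits.shape)   # torch.Size([4, 8])
```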


Subject(s)
Deep Learning; Emotions; Neural Networks, Computer; Humans; Emotions/physiology; Speech/physiology; Databases, Factual; Algorithms; Pattern Recognition, Automated/methods
6.
Front Artif Intell ; 7: 1476791, 2024.
Article in English | MEDLINE | ID: mdl-39290717

ABSTRACT

[This corrects the article DOI: 10.3389/frai.2024.1386753.].

7.
J Neurol ; 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39287680

ABSTRACT

OBJECTIVE: To define the clinical usability of an affect recognition (AR) battery, the Comprehensive Affect Testing System (CATS), in an Italian sample of patients with amyotrophic lateral sclerosis (ALS). METHODS: 96 ALS patients and 116 healthy controls underwent a neuropsychological assessment including the AR subtests of the abbreviated version of the CATS (CATS-A). CATS-A AR subtests and their global score (CATS-A AR Quotient, ARQ) were assessed for factorial, convergent, and divergent validity. The diagnostic accuracy of each CATS-A AR measure in discriminating ALS patients with cognitive impairment from cognitively normal controls and patients was tested via receiver-operating characteristic analyses. Optimal cut-offs were identified for CATS-A AR measures yielding an acceptable AUC value (≥ .70). The ability of the CATS-A ARQ to discriminate between different ALS cognitive phenotypes was also tested. Gray-matter (GM) volumes of controls, ALS patients with normal ARQ scores (ALS-nARQ), and ALS patients with impaired ARQ scores (ALS-iARQ) were compared using ANCOVA models. RESULTS: CATS-A AR subtests and the ARQ proved to have moderate-to-strong convergent and divergent validity. Almost all considered CATS-A measures reached acceptable accuracy and diagnostic power (AUC range = .79-.83). The ARQ was the best diagnostic measure (sensitivity = .80; specificity = .75) and discriminated between different ALS cognitive phenotypes. Compared to ALS-nARQ patients, ALS-iARQ patients showed reduced GM volumes in the right anterior cingulate, right middle frontal, left inferior temporal, and superior occipital regions. CONCLUSIONS: The AR subtests of the CATS-A, and in particular the CATS-A ARQ, are sound measures of AR in ALS. AR deficits may be a valid marker of frontotemporal involvement in these patients.

8.
J Neurosci Methods ; 411: 110276, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39237038

ABSTRACT

BACKGROUND: Emotion is an important research area in neuroscience. Cross-subject emotion recognition based on electroencephalogram (EEG) data is challenging due to physiological differences between subjects. The domain gap, which refers to the different distributions of EEG data across subjects, has attracted great attention in cross-subject emotion recognition. COMPARISON WITH EXISTING METHODS: This study focuses on narrowing the domain gap between subjects through the emotional frequency bands and the relationship information between EEG channels. Emotional frequency band features represent the energy distribution of EEG data in different frequency ranges, while relationship information between EEG channels provides spatial distribution information about EEG data. NEW METHOD: To achieve this, this paper proposes a model called the Frequency Band Attention Graph convolutional Adversarial neural Network (FBAGAN). This model includes three components: a feature extractor, a classifier, and a discriminator. The feature extractor consists of a layer with a frequency band attention mechanism and a graph convolutional neural network. The attention mechanism effectively extracts frequency band information by assigning weights, and the graph convolutional network extracts relationship information between EEG channels by modeling the graph structure. The discriminator then helps minimize the gap in the frequency information and relationship information between the source and target domains, improving the model's ability to generalize. RESULTS: The FBAGAN model is extensively tested on the SEED, SEED-IV, and DEAP datasets. The accuracy and standard deviation are 88.17% and 4.88, respectively, on the SEED dataset, and 77.35% and 3.72 on the SEED-IV dataset. On the DEAP dataset, the model achieves 69.64% for arousal and 65.18% for valence. These results outperform most existing models. CONCLUSIONS: The experiments indicate that FBAGAN effectively addresses the challenges of transfer across the EEG channel and frequency band domains, leading to improved performance.
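
FBAGAN itself is not reproduced here; the snippet below sketches only one ingredient the abstract names, a frequency-band attention layer that re-weights per-band EEG features, and omits the graph-convolutional and adversarial components. The 62-channel/5-band shapes follow the common SEED convention but are otherwise assumptions.

```python
# Sketch of a frequency-band attention layer for per-band EEG features
# (e.g. differential entropy). The GCN and discriminator parts are omitted.
import torch
import torch.nn as nn


class BandAttention(nn.Module):
    """Input: (batch, channels, bands) per-band features, e.g. 62 x 5."""

    def __init__(self, n_bands=5):
        super().__init__()
        self.score = nn.Linear(n_bands, n_bands)

    def forward(self, x):
        # Average over electrodes, score each band, softmax to attention weights.
        w = torch.softmax(self.score(x.mean(dim=1)), dim=-1)   # (batch, bands)
        return x * w.unsqueeze(1)                              # re-weighted features


feats = torch.randn(8, 62, 5)          # 8 trials, 62 channels, 5 frequency bands
print(BandAttention()(feats).shape)    # torch.Size([8, 62, 5])
```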


Subject(s)
Brain-Computer Interfaces; Electroencephalography; Emotions; Neural Networks, Computer; Humans; Electroencephalography/methods; Emotions/physiology; Brain/physiology; Signal Processing, Computer-Assisted
9.
Heliyon ; 10(16): e36411, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253213

ABSTRACT

This study introduces a groundbreaking method to enhance the accuracy and reliability of emotion recognition systems by combining electrocardiogram (ECG) with electroencephalogram (EEG) data, using an eye-tracking gated strategy. Initially, we propose a technique to filter out irrelevant portions of emotional data by employing pupil diameter metrics from eye-tracking data. Subsequently, we introduce an innovative approach for estimating effective connectivity to capture the dynamic interaction between the brain and the heart during emotional states of happiness and sadness. Granger causality (GC) is estimated and utilized to optimize input for a highly effective pre-trained convolutional neural network (CNN), specifically ResNet-18. To assess this methodology, we employed EEG and ECG data from the publicly available MAHNOB-HCI database, using a 5-fold cross-validation approach. Our method achieved an impressive average accuracy and area under the curve (AUC) of 91.00 % and 0.97, respectively, for GC-EEG-ECG images processed with ResNet-18. Comparative analysis with state-of-the-art studies clearly shows that augmenting ECG with EEG and refining data with an eye-tracking strategy significantly enhances emotion recognition performance across various emotions.
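
To make the connectivity step more concrete, the sketch below computes a pairwise Granger-causality matrix with statsmodels on synthetic signals; such a matrix could then be rendered as an image for a CNN such as ResNet-18. The channel count, lag order, and use of the F statistic are assumptions, not the paper's exact settings.

```python
# Sketch of the effective-connectivity step: a pairwise Granger-causality
# matrix between physiological channels, on synthetic data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
signals = rng.normal(size=(4, 1000))        # 4 channels (e.g. 3 EEG + 1 ECG), 1000 samples
n = signals.shape[0]
gc = np.zeros((n, n))

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        # Test whether channel j Granger-causes channel i
        # (column order expected by statsmodels: caused, causing).
        res = grangercausalitytests(np.column_stack([signals[i], signals[j]]), maxlag=5)
        gc[i, j] = res[5][0]["ssr_ftest"][0]   # F statistic at the chosen lag

print(np.round(gc, 2))   # this matrix could be saved as an image for ResNet-18
```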

10.
Psychol Med ; : 1-9, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39246290

ABSTRACT

BACKGROUND: Altered affective state recognition is assumed to be a root cause of aggressive behavior, a hallmark of psychopathologies such as psychopathy and antisocial personality disorder. However, the two most influential models make markedly different predictions regarding the underlying mechanism. According to the integrated emotion system theory (IES), aggression reflects impaired processing of social distress cues such as fearful faces. In contrast, the hostile attribution bias (HAB) model explains aggression with a bias to interpret ambiguous expressions as angry. METHODS: In a set of four experiments, we measured processing of fearful and angry facial expressions (compared to neutral and other expressions) in a sample of 65 male imprisoned violent offenders rated using the Hare Psychopathy Checklist-Revised (PCL-R, Hare, R. D. (1991). The psychopathy checklist-revised. Toronto, ON: Multi-Health Systems) and in 60 age-matched control participants. RESULTS: There was no evidence for a fear deficit in violent offenders or for an association of psychopathy or aggression with impaired processing of fearful faces. Similarly, there was no evidence for a perceptual bias for angry faces linked to psychopathy or aggression. However, using highly ambiguous stimuli and requiring explicit labeling of emotions, violent offenders showed a categorization bias for anger and this anger bias correlated with self-reported trait aggression (but not with psychopathy). CONCLUSIONS: These results add to a growing literature casting doubt on the notion that fear processing is impaired in aggressive individuals and in psychopathy and provide support for the idea that aggression is related to a hostile attribution bias that emerges from later cognitive, post-perceptual processing stages.

11.
Orphanet J Rare Dis ; 19(1): 325, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243040

ABSTRACT

BACKGROUND: Classic galactosemia is a rare inherited metabolic disease with long-term complications, particularly in the psychosocial domain. Patients report a lower quality of social life, difficulties in interactions and social relationships, and lower mental health. We hypothesised that social cognition deficits could partially explain this psychological symptomatology. Eleven adults with galactosemia and 31 control adults participated in the study. We measured social cognition skills in cognitive and affective theory of mind and in basic and complex emotion recognition. We also explored psychosocial development and mental well-being. RESULTS: We found significant deficits on all four social cognition measures. Compared to controls, participants with galactosemia were impaired in second-order cognitive theory of mind, in affective theory of mind, and in basic and complex emotion recognition. Participants with galactosemia had a significant delay in their psychosexual development, but we found no delay in social development and no significant decrease in mental health. CONCLUSION: Social cognition processes seem impaired among our participants with galactosemia. We discuss paths that future research may follow. More research is needed to replicate and strengthen these results and to establish the links between psychosocial complications and deficits in social cognition.


Subject(s)
Galactosemias; Social Cognition; Humans; Galactosemias/psychology; Female; Male; Adult; Young Adult; Middle Aged
12.
Front Comput Neurosci ; 18: 1416494, 2024.
Article in English | MEDLINE | ID: mdl-39099770

ABSTRACT

EEG-based emotion recognition is becoming crucial in brain-computer interfaces (BCI). Currently, most research focuses on improving accuracy while neglecting the interpretability of models; we are committed to analyzing the impact of different brain regions and signal frequency bands on emotion generation based on a graph structure. Therefore, this paper proposes a method named Dual Attention Mechanism Graph Convolutional Neural Network (DAMGCN). Specifically, we utilize graph convolutional neural networks to model the brain network as a graph to extract representative spatial features. Furthermore, we employ the self-attention mechanism of the Transformer model, which allocates larger electrode channel weights and signal frequency band weights to important brain regions and frequency bands. The visualization of the attention mechanism clearly demonstrates the weight allocation learned by DAMGCN. In the performance evaluation of our model on the DEAP, SEED, and SEED-IV datasets, we achieved the best results on the SEED dataset, with an accuracy of 99.42% in subject-dependent experiments and 73.21% in subject-independent experiments. These results are superior to the accuracies of most existing models in the realm of EEG-based emotion recognition.
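
DAMGCN itself is not available in this record; as a minimal illustration of the graph-convolutional idea it builds on (electrodes as graph nodes, a normalized adjacency matrix, one propagation step), a toy PyTorch layer is sketched below. The learned adjacency and feature sizes are assumptions, and the dual attention modules are omitted.

```python
# Minimal graph-convolution sketch over EEG electrodes:
# X' = D^{-1/2} A D^{-1/2} X W, with a learnable adjacency A.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    def __init__(self, n_nodes=62, in_feats=5, out_feats=16):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.rand(n_nodes, n_nodes))
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x):                        # x: (batch, nodes, in_feats)
        a = torch.relu(self.adj)
        deg = a.sum(dim=1)
        d_inv = torch.diag(deg.clamp(min=1e-6).pow(-0.5))
        a_norm = d_inv @ a @ d_inv               # symmetric normalization
        return torch.relu(self.lin(a_norm @ x))  # propagate over the graph, then project


x = torch.randn(8, 62, 5)                        # 8 trials, 62 electrodes, 5 band features
print(SimpleGraphConv()(x).shape)                # torch.Size([8, 62, 16])
```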

13.
Dement Neurocogn Disord ; 23(3): 146-160, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39113753

ABSTRACT

Background and Purpose: The emotions of people at various stages of dementia need to be effectively utilized for prevention, early intervention, and care planning. With technology now available for understanding and addressing the emotional needs of people, this study aims to develop speech emotion recognition (SER) technology to classify emotions for people at high risk of dementia. Methods: Speech samples from people at high risk of dementia were categorized into distinct emotions via human auditory assessment, and the outcomes were annotated to guide the deep-learning method. The architecture incorporated a convolutional neural network, long short-term memory, attention layers, and Wav2Vec2, a novel feature extractor, to develop automated speech-emotion recognition. Results: Twenty-seven kinds of emotions were found in the speech of the participants. These emotions were grouped into 6 detailed emotions (happiness, interest, sadness, frustration, anger, and neutrality) and further into 3 basic emotions (positive, negative, and neutral). To improve algorithmic performance, multiple learning approaches were applied using different data sources (voice and text) and varying numbers of emotions. Ultimately, a 2-stage algorithm, with initial text-based classification followed by voice-based analysis, achieved the highest accuracy, reaching 70%. Conclusions: The diverse emotions identified in this study were attributed to the characteristics of the participants and the method of data collection. The fact that the speech of people at high risk of dementia was directed at companion robots also explains the relatively low performance of the SER algorithm. Accordingly, this study suggests the systematic and comprehensive construction of a dataset from people with dementia.
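
The abstract does not spell out how the two stages are combined; one plausible reading, sketched below with placeholder models, is a confidence-gated cascade in which the voice-based stage is consulted only when the text-based stage is uncertain. The threshold and the stub classifiers are assumptions, not the study's trained networks.

```python
# Sketch of a two-stage cascade (assumed reading of "text first, then voice").
def classify_emotion(transcript, audio_feats, text_model, voice_model, threshold=0.6):
    probs = text_model(transcript)                 # dict: emotion -> probability
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    if conf >= threshold:
        return label, "text"
    probs = voice_model(audio_feats)               # second stage: acoustic analysis
    label, _ = max(probs.items(), key=lambda kv: kv[1])
    return label, "voice"


# Toy placeholders so the sketch runs end to end.
text_stub = lambda t: {"neutral": 0.4, "sadness": 0.35, "happiness": 0.25}
voice_stub = lambda a: {"neutral": 0.2, "sadness": 0.7, "happiness": 0.1}
print(classify_emotion("example transcript", None, text_stub, voice_stub))
```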

14.
Cogn Neurodyn ; 18(4): 1689-1707, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104696

ABSTRACT

Electroencephalogram (EEG) emotion recognition plays a vital role in affective computing. A limitation of the EEG emotion recognition task is that features from multiple domains are rarely included in the analysis simultaneously because of the lack of an effective feature organization form. This paper proposes a video-level feature organization method to effectively organize the temporal, frequency, and spatial domain features. In addition, a deep neural network, the Channel Attention Convolutional Aggregation Network, is designed to explore deeper emotional information from video-level features. The network uses a channel attention mechanism to adaptively capture critical EEG frequency bands. Then the frame-level representation of each time point is obtained by multi-layer convolution. Finally, the frame-level features are aggregated through NeXtVLAD to learn time-sequence-related features. The proposed method achieves the best classification performance on the SEED and DEAP datasets. On the SEED dataset, the mean accuracy and standard deviation are 95.80% and 2.04%. On the DEAP dataset, the average accuracies with standard deviations for arousal and valence are 98.97% ± 1.13% and 98.98% ± 0.98%, respectively. The experimental results show that our approach based on video-level features is effective for EEG emotion recognition tasks.
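
As a compact, assumption-laden sketch of the described data flow (frame-level encoding of per-frame EEG feature maps followed by temporal aggregation into one video-level vector), the snippet below uses a simple mean-pool in place of NeXtVLAD and invented tensor shapes.

```python
# Frame-level encoding then temporal aggregation; mean-pool stands in for NeXtVLAD.
import torch
import torch.nn as nn

frames = torch.randn(4, 60, 62, 5)           # batch, time frames, electrodes, bands
frame_encoder = nn.Sequential(nn.Flatten(start_dim=2), nn.Linear(62 * 5, 64), nn.ReLU())
frame_feats = frame_encoder(frames)           # (4, 60, 64) frame-level representations
clip_feats = frame_feats.mean(dim=1)          # (4, 64) aggregated video-level feature
print(clip_feats.shape)
```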

15.
Front Neurosci ; 18: 1449527, 2024.
Article in English | MEDLINE | ID: mdl-39170679

ABSTRACT

Facial expression recognition (FER) plays a crucial role in affective computing, enhancing human-computer interaction by enabling machines to understand and respond to human emotions. Despite advancements in deep learning, current FER systems often struggle with challenges such as occlusions, head pose variations, and motion blur in natural environments. These challenges highlight the need for more robust FER solutions. To address these issues, we propose the Attention-Enhanced Multi-Layer Transformer (AEMT) model, which integrates a dual-branch Convolutional Neural Network (CNN), an Attentional Selective Fusion (ASF) module, and a Multi-Layer Transformer Encoder (MTE) with transfer learning. The dual-branch CNN captures detailed texture and color information by processing RGB and Local Binary Pattern (LBP) features separately. The ASF module selectively enhances relevant features by applying global and local attention mechanisms to the extracted features. The MTE captures long-range dependencies and models the complex relationships between features, collectively improving feature representation and classification accuracy. Our model was evaluated on the RAF-DB and AffectNet datasets. Experimental results demonstrate that the AEMT model achieved an accuracy of 81.45% on RAF-DB and 71.23% on AffectNet, significantly outperforming existing state-of-the-art methods. These results indicate that our model effectively addresses the challenges of FER in natural environments, providing a more robust and accurate solution. The AEMT model significantly advances the field of FER by improving the robustness and accuracy of emotion recognition in complex real-world scenarios. This work not only enhances the capabilities of affective computing systems but also opens new avenues for future research in improving model efficiency and expanding multimodal data integration.
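
The AEMT model is not reproduced here; the snippet below illustrates only the input side the abstract describes, i.e., an RGB image and its Local Binary Pattern map as the two branch inputs, using a stock scikit-image sample in place of a face image. The LBP parameters are assumptions.

```python
# Sketch of the dual-branch inputs: an RGB image and its Local Binary Pattern map.
from skimage import color, data
from skimage.feature import local_binary_pattern

rgb = data.astronaut()                                 # stand-in image (H, W, 3)
gray = (color.rgb2gray(rgb) * 255).astype("uint8")     # LBP expects an integer image
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
print(rgb.shape, lbp.shape)                            # two branch inputs: RGB and LBP
```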

16.
Stud Health Technol Inform ; 316: 924-928, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176943

ABSTRACT

In recent years, artificial intelligence and machine learning (ML) models have advanced significantly, offering transformative solutions across diverse sectors. Emotion recognition in speech has particularly benefited from ML techniques, revolutionizing its accuracy and applicability. This article proposes a method for emotion detection in Romanian speech analysis by combining two distinct approaches: semantic analysis using a GPT Transformer and acoustic analysis using openSMILE. The results showed an accuracy of 74% and a precision of almost 82%. Several system limitations were observed due to the limited and low-quality dataset. However, the work also opened a new horizon in our research by analyzing emotions to identify mental health disorders.
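
As a sketch of the combination idea (utterance-level openSMILE functionals concatenated with semantic scores from a text model), the snippet below uses the eGeMAPSv02 feature set of the opensmile Python package and replaces the GPT-based semantic analysis with a placeholder function; the audio path and transcript are examples, not data from the study.

```python
# Sketch: openSMILE acoustic functionals + placeholder semantic scores.
import numpy as np
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,      # 88 acoustic functionals
    feature_level=opensmile.FeatureLevel.Functionals,
)
acoustic = smile.process_file("speech.wav").to_numpy().ravel()  # placeholder path


def semantic_scores(transcript):
    """Placeholder for the GPT-based semantic analysis (returns class scores)."""
    return np.array([0.1, 0.7, 0.2])                  # e.g. positive / negative / neutral


features = np.concatenate([acoustic, semantic_scores("example transcript")])
print(features.shape)        # combined input vector for a downstream classifier
```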


Subject(s)
Emotions; Speech Recognition Software; Humans; Romania; Machine Learning; Semantics; Artificial Intelligence
17.
Mov Disord ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39140267

ABSTRACT

Social cognition (SC) encompasses a set of cognitive functions that enable individuals to understand and respond appropriately to social interactions. Although focused ultrasound subthalamotomy (FUS-STN) effectively treats the clinical motor features of Parkinson's disease (PD), its impact on and safety for cognitive-behavioral interactions and interpersonal awareness are unknown. This study investigated the effects of unilateral FUS-STN on facial emotion recognition (FER) and on affective and cognitive theory of mind (ToM) in PD patients from a randomized sham-controlled trial (NCT03454425). Subjects performed the SC evaluation before and 4 months after the procedure, while still under blind assessment conditions. The SC assessment included the Karolinska Directed Emotional Faces task for FER, the Reading the Mind in the Eyes (RME) test for affective ToM, and the Theory of Mind Picture Stories Task (ToM PST) (order, questions, and total score) for cognitive ToM. The active treatment group showed anecdotal-to-moderate evidence of no worsening in SC after FUS-STN. Anecdotal evidence for an improvement was found in the SC score changes from baseline to post-treatment for the active treatment group compared with sham for the RME, ToM PST order, ToM PST total, FER total, and recognition of fear, disgust, and anger. This study provides the first evidence that unilateral FUS-STN does not impair social cognitive abilities, indicating that it can be considered a safe treatment approach for this domain in PD patients. Furthermore, the results suggest FUS-STN may even lead to some improvement in social cognitive outcomes, which should be considered a preliminary finding requiring further investigation with larger sample sizes. © 2024 International Parkinson and Movement Disorder Society.

18.
Ann Med Surg (Lond) ; 86(8): 4657-4663, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39118764

ABSTRACT

This study aims to dissect the current state of emotion recognition and response mechanisms in artificial intelligence (AI) systems, exploring the progress made, challenges faced, and implications of integrating emotional intelligence into AI. This study utilized a comprehensive review approach to investigate the integration of emotional intelligence (EI) into artificial intelligence (AI) systems, concentrating on emotion recognition and response mechanisms. The review process entailed formulating research questions, systematically searching academic databases such as PubMed, Scopus, and Web of Science, critically evaluating relevant literature, synthesizing the data, and presenting the findings in a comprehensive format. The study highlights the advancements in emotion recognition models, including the use of deep learning techniques and multimodal data fusion. It discusses the challenges in emotion recognition, such as variability in human expressions and the need for real-time processing. The integration of contextual information and individual traits is emphasized as enhancing the understanding of human emotions. The study also addresses ethical concerns, such as privacy and biases in training data. The integration of emotional intelligence into AI systems presents opportunities to revolutionize human-computer interactions. Emotion recognition and response mechanisms have made significant progress, but challenges remain. Future research directions include enhancing the robustness and interpretability of emotion recognition models, exploring cross-cultural and context-aware emotion understanding, and addressing long-term emotion tracking and adaptation. By further exploring emotional intelligence in AI systems, more empathetic and responsive machines can be developed, enabling deeper emotional connections with humans.

19.
Article in English | MEDLINE | ID: mdl-39152275

ABSTRACT

Callous-unemotional (CU) traits in children and adolescents are linked to severe and persistent antisocial behavior. Based on past empirical research, several theoretical models have suggested that CU traits may be partly explained by difficulties in correctly identifying others' emotional states as well as their reduced attention to others' eyes, which could be important for both causal theory and treatment. This study tested the relationships among CU traits, emotion recognition of facial expressions and visual behavior in a sample of 52 boys referred to a clinic for conduct problems (Mage = 10.29 years; SD = 2.06). We conducted a multi-method and multi-informant assessment of CU traits through the Child Problematic Traits Inventory (CPTI), the Inventory of Callous-Unemotional (ICU), and the Clinical Assessment of Prosocial Emotions-Version 1.1 (CAPE). The primary goal of the study was to compare the utility of these methods for forming subgroups of youth that differ in their emotional processing abilities. An emotion recognition task assessed recognition accuracy (percentage of mistakes) and absolute dwell time on the eyes or mouth region for each emotion. Results from repeated measures ANOVAs revealed that low and high CU groups did not differ in emotion recognition accuracy, irrespective of the method of assessing CU traits. However, the high CU group showed reduced attention to the eyes of fearful and sad facial expressions (using the CPTI) or to all emotions (using the CAPE). The high CU group also showed a general increase in attention to the mouth area, but only when assessed by the CAPE. These findings provide evidence to support abnormalities in how those elevated on CU traits process emotional stimuli, especially when assessed by a clinical interview, which could guide appropriate assessment and more successful interventions for this group of youth.

20.
Autism Res ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092565

ABSTRACT

Face processing relies on predictive processes driven by low spatial frequencies (LSF) that convey coarse information prior to fine information conveyed by high spatial frequencies. However, autistic individuals might have atypical predictive processes, contributing to facial processing difficulties. This may be more normalized in autistic females, who often exhibit better socio-communicational abilities than males. We hypothesized that autistic females would display a more typical coarse-to-fine processing for socio-emotional stimuli compared to autistic males. To test this hypothesis, we asked adult participants (44 autistic, 51 non-autistic) to detect fearful faces among neutral faces, filtered in two orders: from coarse-to-fine (CtF) and from fine-to-coarse (FtC). Results show lower d' values and longer reaction times for fearful detection in autism compared to non-autistic (NA) individuals, regardless of the filtering order. Both groups presented shorter P100 latency after CtF compared to FtC, and larger amplitude for N170 after FtC compared to CtF. However, autistic participants presented a reduced difference in source activity between CtF and FtC in the fusiform. There was also a more spatially spread activation pattern in autistic females compared to NA females. Finally, females had faster P100 and N170 latencies, as well as larger occipital activation for FtC sequences than males, irrespective of the group. Overall, the results do not suggest impaired predictive processes from LSF in autism despite behavioral differences in fear detection. However, they do indicate reduced brain modulation by spatial frequency in autism. In addition, the findings highlight sex differences that warrant consideration in understanding autistic females.
