Results 1 - 20 of 2,534
1.
Cogn Res Princ Implic ; 9(1): 57, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39218993

ABSTRACT

Humans are often tasked with remembering new faces so that they can recognize them later. Previous studies found that memory reports for basic visual features (e.g., colors and shapes) are susceptible to systematic distortions resulting from comparison with new visual input, especially when the input is perceived as similar to the memory. The current study tested whether this similarity-induced memory bias (SIMB) also occurs with more complex face stimuli. The results showed that faces that have just been perceptually encoded into visual working memory, as well as faces retrieved from visual long-term memory, are also susceptible to SIMB. Furthermore, once induced, SIMB persisted over time across the cues through which the face memory was accessed for memory report. These results demonstrate that SIMB generalizes to more complex and practically relevant stimuli and thus suggest potential real-world implications.


Subject(s)
Facial Recognition; Memory, Short-Term; Humans; Facial Recognition/physiology; Female; Male; Young Adult; Adult; Memory, Short-Term/physiology; Generalization, Psychological/physiology; Adolescent; Memory, Long-Term/physiology
2.
Sensors (Basel) ; 24(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275593

ABSTRACT

It is estimated that 10% to 20% of road accidents are related to fatigue, with accidents caused by drowsiness up to twice as deadly as those caused by other factors. To reduce these numbers, strategies such as advertising campaigns, the implementation of driving recorders in vehicles used for road transport of goods and passengers, and the use of drowsiness detection systems in cars have been implemented. Within the latter area, the technologies used are diverse. They can be based on the measurement of signals such as steering wheel movement, vehicle position on the road, or driver monitoring. Driver monitoring is a technology that has seen little exploitation so far and can be implemented through many different approaches. This work addresses the evaluation of a multidimensional drowsiness index based on the recording of facial expressions, gaze direction, and head position, and studies the feasibility of its implementation in a low-cost electronic package. Specifically, the aim is to determine the driver's state by monitoring facial cues such as the frequency of blinking, yawning, eye opening, gaze direction, and head position. For this purpose, an algorithm capable of detecting drowsiness has been developed. Two approaches are compared: facial recognition based on Haar features and facial recognition based on Histograms of Oriented Gradients (HOG). The implementation has been carried out on a Raspberry Pi, a low-cost device that allows the creation of a prototype that can detect drowsiness and interact with peripherals such as cameras or speakers. The results show that the proposed multi-index methodology performs better in detecting drowsiness than algorithms based on single-index detection.
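The abstract names blink frequency as one of the monitored facial cues but does not give the authors' formula. A common way to derive blinks from facial landmarks — a sketch of the general technique, not the paper's implementation — is the eye aspect ratio (EAR) of Soukupová and Čech; the 0.21 threshold and two-frame minimum below are illustrative assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six landmarks p1..p6 around one eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Roughly constant while the eye is open; drops toward zero in a blink."""
    eye = np.asarray(eye, dtype=float)
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive frames
    whose EAR falls below the threshold (both values are assumptions)."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

Fed with per-frame landmark positions from either a Haar- or HOG-based detector, a blink rate per minute computed this way would form one component of a multi-index drowsiness score.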


Subject(s)
Algorithms; Automobile Driving; Humans; Facial Expression; Facial Recognition/physiology; Sleep Stages/physiology; Accidents, Traffic/prevention & control; Male; Adult; Automated Facial Recognition/methods; Female
3.
Cogn Res Princ Implic ; 9(1): 62, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39269590

ABSTRACT

Two experiments explored the search for pairs of faces in a disjunctive dual-target face search (DDTFS) task with unfamiliar face targets. Target distinctiveness was manipulated such that both faces were typical, both were distinctive, or one was typical and one distinctive. Targets were searched for in arrays of eight faces. In Experiment 1, participants completed a DDTFS block, with targets learnt over the block of trials. In Experiment 2, the dual-target block was preceded by two training blocks of single-target trials. Participants also completed the upright and inverted long-form Cambridge Face Memory Test (CFMT+). The results showed that searching for two typical faces leads to one target being prioritised at the expense of the other. The ability to search for the non-prioritised typical face was associated with scores on the CFMT+. This association disappeared when faces were learnt before completing the DDTFS task. We interpret the findings in terms of the impact of typicality on face learning, individual differences in the ability to learn faces, and the involvement of capacity-limited working memory in the search for unfamiliar faces. The findings have implications for security-related situations in which agents must search for multiple unfamiliar faces after having been shown their images.


Security officers (e.g. police officers) are often required to be on the lookout for specific individuals or suspects. The present study shows that there is a profound challenge in finding unfamiliar targets when searching for more than one face at the same time. Importantly, the nature of this challenge depends on two factors: first, the relative typicality of the faces being sought at the same time, and second, the face processing ability of the searcher. The findings have implications for the design of job roles and the recruitment of security officers tasked with searching for specific individuals.


Subject(s)
Facial Recognition; Humans; Male; Female; Facial Recognition/physiology; Young Adult; Adult; Adolescent; Recognition, Psychology/physiology; Memory, Short-Term/physiology
4.
Cognition ; 253: 105938, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39232476

ABSTRACT

Do people have accurate metacognition of non-uniformities in perceptual resolution across (i.e., eccentricity) and around (i.e., polar angle) the visual field? Despite its theoretical and practical importance, this question has not yet been empirically tested. This study investigated metacognition of perceptual resolution by guessing patterns during a degradation (i.e., loss of high spatial frequencies) localization task. Participants localized the degraded face among the nine faces that simultaneously appeared throughout the visual field: fovea (fixation at the center of the screen), parafovea (left, right, above, and below fixation at 4° eccentricity), and periphery (left, right, above, and below fixation at 10° eccentricity). We presumed that if participants had accurate metacognition, in the absence of a degraded face, they would exhibit compensatory guessing patterns based on counterfactual reasoning ("The degraded face must have been presented at locations with lower perceptual resolution, because if it were presented at locations with higher perceptual resolution, I would have easily detected it."), meaning that we would expect more guess responses for locations with lower perceptual resolution. In two experiments, we observed guessing patterns that suggest that people can monitor non-uniformities in perceptual resolution across, but not around, the visual field during tasks, indicating partial in-the-moment metacognition. Additionally, we found that global explicit knowledge of perceptual resolution is not sufficient to guide in-the-moment metacognition during tasks, which suggests a dissociation between local and global metacognition.


Subject(s)
Metacognition; Visual Fields; Humans; Visual Fields/physiology; Metacognition/physiology; Adult; Young Adult; Male; Female; Facial Recognition/physiology; Visual Perception/physiology
5.
Horm Behav ; 165: 105633, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39244875

ABSTRACT

Time of day can alter memory performance in general. Its influence on recognition memory for faces, which is important for daily encounters with new people and for testimonies, has not yet been investigated. Importantly, high levels of the stress hormone cortisol impair memory recognition, in particular for emotional material, although some studies have reported that high cortisol levels enhance recognition. Since cortisol levels in the morning are usually higher than in the evening, time of day might also influence recognition performance. In this pre-registered study with a two-day design, 51 healthy men encoded pictures of male and female faces with distinct emotional expressions on day one around noon. Memory for the faces was retrieved two days later at two consecutive testing times, either in the morning (high and moderately increased endogenous cortisol levels) or in the evening (low endogenous cortisol levels). Additionally, alertness and salivary cortisol levels at the different timepoints were assessed. As expected, cortisol levels were significantly higher in the morning group than in the evening group, while the groups did not differ in alertness. Familiarity ratings for female stimuli were significantly better when participants were tested during moderately increased endogenous cortisol levels in the morning than during low endogenous cortisol levels in the evening, a pattern previously also observed for stressed versus non-stressed participants. In addition, cortisol levels during that time in the morning were positively correlated with the recollection of face stimuli in general. Thus, recognition memory performance may depend on the time of day as well as on stimulus type, such as the difference between male and female faces. Most importantly, the results suggest that cortisol may be meaningful and worth investigating when studying the effects of time of day on memory performance. This research offers insights into both daily encounters and legally relevant domains such as testimonies.


Subject(s)
Circadian Rhythm; Hydrocortisone; Recognition, Psychology; Saliva; Humans; Male; Hydrocortisone/metabolism; Hydrocortisone/analysis; Adult; Saliva/chemistry; Saliva/metabolism; Young Adult; Recognition, Psychology/physiology; Female; Circadian Rhythm/physiology; Facial Recognition/physiology; Facial Expression; Emotions/physiology; Time Factors
6.
Drug Alcohol Depend ; 263: 111398, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39137611

ABSTRACT

BACKGROUND: Our brain uses interoceptive signals from the body to shape how we perceive emotions in others; however, whether interoceptive signals can be manipulated to alter emotional perceptions is unknown. This registered report examined whether alcohol administration triggers physiological changes that alter interoceptive signals and manipulate emotional face processing. METHODS: Participants (n=36) were administered an alcohol or placebo beverage. Cardiovascular physiology (heart rate variability, HRV) was recorded before and after administration. Participants completed a behavioral task in which emotional faces were presented in synchrony with different phases of the cardiac cycle (i.e., systole/diastole) to index how interoceptive signals amplify them. HYPOTHESES: We hypothesized that alcohol administration would disrupt the cardiac amplification of emotional face processing. We further explored whether this disruption depended on the nature and magnitude of changes in cardiovascular physiology after alcohol administration. RESULTS: We observed no main effects or interactions between alcohol administration and emotional face processing. We found that HRV at baseline negatively correlated with the cardiac amplification of emotional faces. The extent to which alcohol impacted HRV negatively correlated with the cardiac amplification of angry faces. CONCLUSIONS: This registered report failed to validate the primary hypotheses but offers some evidence that the effects of alcohol on emotional face processing, if any, could be mediated via changes in basic physiological signals that are integrated via interoceptive mechanisms. Results are interpreted within the context of interoceptive inference and could feed novel perspectives on the interplay between physiological sensitivity and interoception in the development of drug-related behaviors.


Subject(s)
Emotions; Ethanol; Facial Expression; Heart Rate; Interoception; Humans; Male; Female; Emotions/drug effects; Emotions/physiology; Heart Rate/drug effects; Heart Rate/physiology; Young Adult; Adult; Interoception/physiology; Interoception/drug effects; Ethanol/pharmacology; Facial Recognition/drug effects; Facial Recognition/physiology; Preregistration Publication
7.
J Psychiatr Res ; 178: 210-218, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39153454

ABSTRACT

Social deficits in schizophrenia have been attributed to an impaired attunement to mutual interaction, or "interaffectivity". While impairments in emotion recognition and facial expressivity in schizophrenia have been consistently reported, findings on mimicry and social synchrony are inconsistent, and previous studies have often lacked ecological validity. To investigate interaffective behavior in dyadic interactions in a real-world-like setting, 20 individuals with schizophrenia and 20 without mental disorder played a cooperative board game with a previously unacquainted healthy control participant. Facial expression analysis was conducted using Affectiva Emotion AI in iMotions 9.3. The contingency and state space distribution of emotional facial expressions were assessed using Mangold INTERACT. Psychotic symptoms, subjective stress, affectivity and game experience were evaluated through questionnaires. Due to a considerable between-group age difference, age-adjusted ANCOVA was performed. Overall, despite an unchanged subjective experience of the social interaction, individuals with schizophrenia exhibited reduced responsiveness to positive affective stimuli. Subjective game experience did not differ between groups. Descriptively, facial expressions in schizophrenia were generally more negative, with increased sadness and decreased joy. Facial mimicry was impaired in schizophrenia specifically for joyful expressions, which correlated with blunted affect as measured by the SANS. Dyadic interactions involving persons with schizophrenia were less attracted toward mutual joyful affective states. Only in the analysis unadjusted for age did individuals with schizophrenia show more angry and sad expressions in the absence of emotional stimuli from their interaction partner. These impairments in interaffective processes may contribute to social dysfunction in schizophrenia and provide new avenues for future research.


Subject(s)
Facial Expression; Schizophrenia; Social Interaction; Humans; Male; Adult; Female; Schizophrenia/physiopathology; Middle Aged; Facial Recognition/physiology; Schizophrenic Psychology; Emotions/physiology; Artificial Intelligence; Young Adult
8.
Psychiatry Res ; 340: 116143, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39167864

ABSTRACT

Facial emotion perception deficits, a possible indicator of illness progression and a transdiagnostic phenotype, were examined in patients at clinical high risk for psychosis (CHR) through a systematic review and meta-analysis of 35 studies (2567 CHR individuals, 1103 non-transitioned [CHR-NT], 212 transitioned [CHR-T], 512 first-episode psychosis [FEP], and 1936 healthy controls [HC]). CHR showed an overall impairment (g = -0.369 [95% CI, -0.485 to -0.253]) and specific impairments in detecting anger, disgust, fear, happiness, neutrality, and sadness compared to HC, but not surprise. FEP showed a greater general deficit than CHR (g = -0.378 [95% CI, -0.509 to -0.247]), and CHR-T displayed more pronounced baseline impairments than CHR-NT (g = -0.217 [95% CI, -0.365 to -0.068]). FEP exhibited a poorer ability to perceive fear, but not other individual emotions, compared to CHR. Similar performance in perceiving individual emotions was observed regardless of transition status (CHR-NT and CHR-T). However, literature comparing the perception of individual emotions among FEP, CHR-T, and CHR is limited. This study primarily characterized the general and overall impairments of facial emotion perception in CHR, which could predict transition risk, emphasizing the need for future research on multimodal parameters of emotion perception and associations with other psychiatric outcomes.
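The effect sizes in this meta-analysis are reported as Hedges' g with 95% confidence intervals. As a generic illustration of what those numbers mean (not the authors' meta-analytic code, and using made-up group statistics), g is Cohen's d scaled by a small-sample correction:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g for two independent groups, with an approximate
    95% CI via the normal approximation. Returns (g, lower, upper)."""
    # pooled standard deviation
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction factor
    g = j * d
    # approximate standard error of g
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, g - 1.96 * se, g + 1.96 * se
```

For example, a CHR group scoring half a pooled standard deviation below controls yields a negative g of roughly -0.5, comparable in magnitude to the overall deficit reported above.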


Subject(s)
Emotions; Facial Recognition; Psychotic Disorders; Humans; Psychotic Disorders/psychology; Psychotic Disorders/physiopathology; Facial Recognition/physiology; Emotions/physiology; Facial Expression; Disease Progression; Social Perception
9.
Transl Psychiatry ; 14(1): 342, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39181892

ABSTRACT

Humans can decode emotional states from the body odors of conspecifics, and this type of emotional communication is particularly relevant in conditions in which social interactions are impaired, as in depression and social anxiety. The present study aimed to explore how body odors collected under happiness and fear conditions modulate the subjective ratings, the psychophysiological response, and the neural processing of neutral faces in individuals with depressive symptoms, individuals with social anxiety symptoms, and healthy controls (N = 22 per group). To this aim, the electrocardiogram (ECG) and HD-EEG were recorded continuously. Heart rate variability (HRV) was extracted from the ECG as a measure of vagal tone; event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) were extracted from the EEG. The results revealed that HRV increased during the fear and happiness body odor conditions compared to clean air, but no group differences emerged. For the ERP data, repeated-measures ANOVA did not show any significant effects. However, the ERSP analyses revealed a late increase in delta power and a reduced beta power at both an early and a late stage of stimulus processing in response to the neutral faces presented with the emotional body odors, regardless of the presence of depressive or social anxiety symptoms. The current research offers new insights, demonstrating that emotional chemosignals serve as potent environmental cues. This represents a substantial advancement in comprehending the impact of emotional chemosignals in individuals both with and without affective disorders.
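The abstract says HRV was extracted from the ECG as a vagal-tone index but does not name the index used. RMSSD and SDNN, sketched below from a series of RR intervals (the beat-to-beat times derived from ECG R-peaks), are the standard time-domain choices, RMSSD being the one most associated with vagal tone; treating them as the study's measures is an assumption.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    a common time-domain HRV index of vagal tone."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn(rr_ms):
    """Sample standard deviation of RR intervals (ms),
    an index of overall heart rate variability."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1))
```

An increase in either index across the odor conditions relative to clean air would correspond to the HRV rise the study reports.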


Subject(s)
Cues; Electroencephalography; Emotions; Evoked Potentials; Facial Expression; Heart Rate; Humans; Male; Female; Adult; Evoked Potentials/physiology; Emotions/physiology; Young Adult; Heart Rate/physiology; Olfactory Perception/physiology; Happiness; Electrocardiography; Fear/physiology; Facial Recognition/physiology; Odorants; Mood Disorders/physiopathology; Mood Disorders/psychology; Depression/physiopathology; Depression/psychology; Anxiety/physiopathology; Anxiety/psychology
10.
Transl Psychiatry ; 14(1): 317, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095355

ABSTRACT

Several mental disorders emerge during childhood or adolescence and are often characterized by socioemotional difficulties, including alterations in emotion perception. Emotional facial expressions are processed in discrete functional brain modules whose connectivity patterns encode emotion categories, but the involvement of these neural circuits in psychopathology in youth is poorly understood. This study examined the associations between activation and functional connectivity patterns in emotion circuits and psychopathology during development. We used task-based fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC, N = 1221, 8-23 years) and conducted generalized psycho-physiological interaction (gPPI) analyses. Measures of psychopathology were derived from an independent component analysis of questionnaire data. The results showed positive associations between identifying fearful, sad, and angry faces and depressive symptoms, and a negative relationship between sadness recognition and positive psychosis symptoms. We found a positive main effect of depressive symptoms on BOLD activation in regions overlapping with the default mode network, while individuals reporting higher levels of norm-violating behavior exhibited emotion-specific lower functional connectivity within regions of the salience network and between modules that overlapped with the salience and default mode network. Our findings illustrate the relevance of functional connectivity patterns underlying emotion processing for behavioral problems in children and adolescents.


Subject(s)
Emotions; Facial Expression; Magnetic Resonance Imaging; Humans; Adolescent; Female; Male; Child; Emotions/physiology; Young Adult; Depression/physiopathology; Depression/diagnostic imaging; Depression/psychology; Brain/physiopathology; Brain/diagnostic imaging; Facial Recognition/physiology; Default Mode Network/physiopathology; Default Mode Network/diagnostic imaging; Mental Disorders/physiopathology; Mental Disorders/diagnostic imaging; Mental Disorders/psychology
11.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39123309

ABSTRACT

The functional importance of the anterior temporal lobes (ATLs) has come to prominence in two active, albeit unconnected literatures-(i) face recognition and (ii) semantic memory. To generate a unified account of the ATLs, we tested the predictions from each literature and examined the effects of bilateral versus unilateral ATL damage on face recognition, person knowledge, and semantic memory. Sixteen people with bilateral ATL atrophy from semantic dementia (SD), 17 people with unilateral ATL resection for temporal lobe epilepsy (TLE; left = 10, right = 7), and 14 controls completed tasks assessing perceptual face matching, person knowledge and general semantic memory. People with SD were impaired across all semantic tasks, including person knowledge. Despite commensurate total ATL damage, unilateral resection generated mild impairments, with minimal differences between left- and right-ATL resection. Face matching performance was largely preserved but slightly reduced in SD and right TLE. All groups displayed the familiarity effect in face matching; however, it was reduced in SD and right TLE and was aligned with the level of item-specific semantic knowledge in all participants. We propose a neurocognitive framework whereby the ATLs underpin a resilient bilateral representation system that supports semantic memory, person knowledge and face recognition.


Subject(s)
Epilepsy, Temporal Lobe; Facial Recognition; Semantics; Temporal Lobe; Humans; Male; Female; Middle Aged; Temporal Lobe/surgery; Temporal Lobe/diagnostic imaging; Temporal Lobe/pathology; Adult; Facial Recognition/physiology; Epilepsy, Temporal Lobe/surgery; Epilepsy, Temporal Lobe/psychology; Epilepsy, Temporal Lobe/physiopathology; Recognition, Psychology/physiology; Functional Laterality/physiology; Neuropsychological Tests; Memory/physiology; Aged; Face
12.
Article in English | MEDLINE | ID: mdl-39102324

ABSTRACT

Faces and bodies provide critical cues for social interaction and communication. Their structural encoding depends on configural processing, as suggested by the detrimental effect of stimulus inversion for both faces (the face inversion effect, FIE) and bodies (the body inversion effect, BIE). An occipito-temporal negative event-related potential (ERP) component peaking around 170 ms after stimulus onset (N170) is consistently elicited by human faces and bodies and is affected by the inversion of these stimuli. Although it is known that emotional expressions can boost structural encoding (resulting in larger N170 components for emotional than for neutral faces), little is known about bodily emotional expressions. Thus, the current study investigated the effects of different emotional expressions on structural encoding in combination with the FIE and BIE. Three ERP components (P1, N170, P2) were recorded using a 128-channel electroencephalogram (EEG) while participants were presented with (upright and inverted) faces and bodies conveying four possible emotions (happiness, sadness, anger, fear) or no emotion (neutral). Results demonstrated that inversion and emotional expressions independently affected the accuracy and amplitude of all ERP components (P1, N170, P2). In particular, faces showed specific effects of emotional expressions during the structural encoding stage (N170), while P2 amplitude (representing top-down conceptualisation) was modified by emotional body perception. Moreover, the task performed by participants (i.e., implicit vs. explicit processing of emotional information) differentially influenced accuracy and the ERP components. These results support integrated theories of visual perception, speaking in favour of the functional independence of the two neurocognitive pathways (one for structural encoding and one for emotional expression analysis) involved in processing social stimuli. Results are discussed highlighting the neurocognitive and computational advantages of the independence between the two pathways.


Subject(s)
Electroencephalography; Emotions; Evoked Potentials; Facial Expression; Humans; Male; Emotions/physiology; Female; Young Adult; Adult; Evoked Potentials/physiology; Facial Recognition/physiology; Photic Stimulation; Visual Perception/physiology; Kinesics
13.
J Exp Biol ; 227(17)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39119656

ABSTRACT

Visual recognition of three-dimensional signals, such as faces, is challenging because the signals appear different from different viewpoints. A flexible but cognitively challenging solution is viewpoint-independent recognition, where receivers identify signals from novel viewing angles. Here, we used same/different concept learning to test viewpoint-independent face recognition in Polistes fuscatus, a wasp that uses facial patterns to individually identify conspecifics. We found that wasps use extrapolation to identify novel views of conspecific faces. For example, wasps identify a pair of pictures of the same wasp as the 'same', even if the pictures are taken from different views (e.g. one face 0 deg rotation, one face 60 deg rotation). This result is notable because it provides the first evidence of view-invariant recognition via extrapolation in an invertebrate. The results suggest that viewpoint-independent recognition via extrapolation may be a widespread strategy to facilitate individual face recognition.


Subject(s)
Wasps; Wasps/physiology; Animals; Recognition, Psychology/physiology; Pattern Recognition, Visual/physiology; Face; Facial Recognition/physiology; Female
14.
Sci Rep ; 14(1): 19455, 2024 08 21.
Article in English | MEDLINE | ID: mdl-39169205

ABSTRACT

While alterations in both physiological responses to others' emotions as well as interoceptive abilities have been identified in autism, their relevance in altered emotion recognition is largely unknown. We here examined the role of interoceptive ability, facial mimicry, and autistic traits in facial emotion processing in non-autistic individuals. In an online Experiment 1, participants (N = 99) performed a facial emotion recognition task, including ratings of perceived emotional intensity and confidence in emotion recognition, and reported on trait interoceptive accuracy, interoceptive sensibility and autistic traits. In a follow-up lab Experiment 2 involving 100 participants, we replicated the online experiment and additionally investigated the relationship between facial mimicry (measured through electromyography), cardiac interoceptive accuracy (evaluated using a heartbeat discrimination task), and autistic traits in relation to emotion processing. Across experiments, neither interoception measures nor facial mimicry accounted for a reduced recognition of specific expressions with higher autistic traits. Higher trait interoceptive accuracy was rather associated with more confidence in correct recognition of some expressions, as well as with higher ratings of their perceived emotional intensity. Exploratory analyses indicated that those higher intensity ratings might result from a stronger integration of instant facial muscle activations, which seem to be less integrated in intensity ratings with higher autistic traits. Future studies should test whether facial muscle activity, and physiological signals in general, are correspondingly less predictive of perceiving emotionality in others in individuals on the autism spectrum, and whether training interoceptive abilities might facilitate the interpretation of emotional expressions.


Subject(s)
Autistic Disorder; Emotions; Facial Expression; Individuality; Interoception; Humans; Male; Female; Interoception/physiology; Emotions/physiology; Adult; Autistic Disorder/physiopathology; Autistic Disorder/psychology; Young Adult; Adolescent; Facial Recognition/physiology; Recognition, Psychology/physiology
15.
Cognition ; 251: 105904, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39106626

ABSTRACT

Classification performance is better for learned than unlearned stimuli. This was also reported for faces, where identity matching of unfamiliar faces is worse than for familiar faces. This familiarity advantage led to the conclusion that variability across appearances of the same identity is partly idiosyncratic and cannot be generalized from familiar to unfamiliar identities. Recent advances in machine vision challenge this claim by showing that the performance for untrained (unfamiliar) identities reached the level of trained identities as the number of identities that the algorithm is trained with increases. We therefore asked whether humans who reportedly can identify a vast number of identities, such as super recognizers, may close the gap between familiar and unfamiliar face classification. Consistent with this prediction, super recognizers classified unfamiliar faces just as well as typical participants who are familiar with the same faces, on a task that generates a sizable familiarity effect in controls. Additionally, prosopagnosics' performance for familiar faces was as bad as that of typical participants who were unfamiliar with the same faces, indicating that they struggle to learn even identity-specific information. Overall, these findings demonstrate that by studying the extreme ends of a system's ability we can gain novel insights into its actual capabilities.


Subject(s)
Facial Recognition; Recognition, Psychology; Humans; Recognition, Psychology/physiology; Facial Recognition/physiology; Male; Female; Young Adult; Adult; Prosopagnosia
16.
Neural Netw ; 179: 106573, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39096753

ABSTRACT

Recognizing expressions from dynamic facial videos can reveal more natural affective states of humans, but it becomes a more challenging task in real-world scenes due to pose variations of the face, partial occlusions, and subtle dynamic changes in emotion sequences. Existing transformer-based methods often rely on self-attention to model the global relations among spatial or temporal features, and cannot focus well on important expression-related locality structures in both spatial and temporal features for in-the-wild expression videos. To this end, we incorporate diverse graph structures into transformers and propose CDGT, a method that constructs diverse graph transformers for efficient emotion recognition from in-the-wild videos. Specifically, our method contains a spatial dual-graph transformer and a temporal hyperbolic-graph transformer. The former deploys dual-graph constrained attention to capture latent emotion-related graph geometry structures among local spatial tokens for efficient feature representation, especially for video frames with pose variations and partial occlusions. The latter adopts hyperbolic-graph constrained self-attention that explores important temporal graph structure information in hyperbolic space to model more subtle changes of dynamic emotion. Extensive experimental results on in-the-wild video-based facial expression databases show that the proposed CDGT outperforms other state-of-the-art methods.
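The abstract does not give CDGT's equations, but the core idea of graph-constrained attention can be sketched generically: restrict ordinary scaled dot-product attention so each token only attends along graph edges. The Euclidean, single-head NumPy version below (with hypothetical shapes, and none of CDGT's dual-graph or hyperbolic machinery) is only meant to show the mechanism.

```python
import numpy as np

def graph_masked_attention(x, adj, wq, wk, wv):
    """Scaled dot-product self-attention restricted to graph edges:
    token i may attend to token j only where adj[i, j] == 1.
    x: (n, d) token features; adj: (n, n) 0/1 adjacency with
    self-loops; wq/wk/wv: (d, d) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    scores = np.where(adj > 0, scores, -1e9)   # mask non-edges
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights
```

In a spatial graph transformer the adjacency would link nearby patch tokens within a frame; in a temporal one it would link tokens across frames, so attention is steered toward the locality structures the method targets.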


Subject(s)
Emotions , Facial Expression , Video Recording , Humans , Emotions/physiology , Algorithms , Neural Networks, Computer , Facial Recognition/physiology , Pattern Recognition, Automated/methods , Automated Facial Recognition/methods
17.
Soc Cogn Affect Neurosci ; 19(1)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39167473

ABSTRACT

Human facial features (eyes, nose, and mouth) allow us to communicate with others. Observing faces triggers physiological responses, including pupil dilation. Still, the relative influence of the social and motion content of a visual stimulus on pupillary reactivity has never been elucidated. A total of 30 adults aged 18-33 years were recorded with an eye tracker. We analysed event-related pupil dilation in response to stimuli distributed along a gradient of social salience (non-social to social, going from objects to avatars to real faces) and dynamism (static to micro- to macro-motion). Pupil dilation was larger in response to social stimuli (faces and avatars) than to non-social stimuli (objects), with, surprisingly, a larger response for avatars. Pupil dilation was also larger in response to macro-motion than to static stimuli. After quantifying each stimulus' real quantity of motion, we found that the greater the quantity of motion, the larger the pupil dilation. However, the slope of this relationship was not steeper for social stimuli. Overall, pupil dilation was more sensitive to the real quantity of motion than to the social component of motion, highlighting the relevance of ecological stimulation. The physiological response to faces results from specific contributions of both motion and social processing.
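A stimulus' "quantity of motion" can be approximated, for illustration, as the mean absolute pixel change between consecutive video frames. The abstract does not specify the exact metric used, so the following is a hypothetical proxy:

```python
import numpy as np

def motion_quantity(frames):
    """Estimate a stimulus' quantity of motion as the mean absolute
    pixel change between consecutive frames (frames: t x h x w array).
    A hypothetical proxy; the study's exact motion metric is not
    specified in the abstract."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # frame-to-frame change
    return float(diffs.mean())
```

Regressing per-stimulus pupil dilation on such a scalar would reproduce the kind of slope analysis described, with a static clip scoring zero and increasingly dynamic clips scoring higher.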


Subject(s)
Facial Recognition , Motion Perception , Pupil , Humans , Pupil/physiology , Young Adult , Adult , Male , Female , Adolescent , Motion Perception/physiology , Facial Recognition/physiology , Social Perception , Photic Stimulation/methods , Face/physiology , Eye-Tracking Technology
18.
J Speech Lang Hear Res ; 67(9): 3148-3162, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39196850

ABSTRACT

PURPOSE: Developmental language disorder (DLD) and autism sometimes appear as overlapping conditions in behavioral tests. There is an extensive literature on the visual scanning pattern (VSP) of faces in autistic children, but it is scarce for children with DLD. The purpose of this study was to compare the VSP of faces in young children with DLD, autistic children, and typically developing peers, assessing the effect of three variables. METHOD: Two eye-tracking experiments were designed to assess the effect of the emotion and the poser's gender (Experiment 1) and the poser's age (Experiment 2) on the VSP of participants (Experiment 1: N = 59, age range: 32-74 months; Experiment 2: N = 58, age range: 32-74 months). We operationalized the VSP in terms of attentional orientation, visual preference, and depth of processing of each sort of face. We developed two paired-preference tasks in which pairs of images of faces showing different emotions were displayed simultaneously to compete for children's attention. RESULTS: Data analysis revealed two VSP markers common to both disorders: (a) superficial processing of faces and (b) late orientation to angry and child faces. Moreover, one specific marker for each condition was also found: a typical preference for child faces in children with DLD versus a diminished preference for them in autistic children. CONCLUSIONS: Given the similarities found between children with DLD and autistic children, the difficulties of children with DLD in attending to faces have been systematically underestimated. Thus, more effort must be made to identify and respond to the needs of this population. Clinical practice may benefit from the potential of eye-tracking methodology and the analysis of the VSP to assess attention to faces in both conditions. This would also contribute to the improvement of early differential diagnosis in the long run.


Subject(s)
Attention , Autistic Disorder , Eye-Tracking Technology , Language Development Disorders , Humans , Male , Female , Attention/physiology , Child, Preschool , Language Development Disorders/psychology , Language Development Disorders/physiopathology , Autistic Disorder/psychology , Autistic Disorder/physiopathology , Child , Facial Expression , Emotions , Facial Recognition/physiology , Photic Stimulation/methods
19.
Curr Biol ; 34(17): 4047-4055.e3, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39116886

ABSTRACT

In his 1872 monograph, Charles Darwin posited that "… the habit of expressing our feelings by certain movements, though now rendered innate, had been in some manner gradually acquired."1 Nearly 150 years later, researchers are still teasing apart innate versus experience-dependent contributions to expression recognition. Indeed, studies have shown that face detection is surprisingly resilient to early visual deprivation,2,3,4,5 pointing to plasticity that extends beyond dogmatic critical periods.6,7,8 However, it remains unclear whether such resilience extends to downstream processing, such as the ability to recognize facial expressions. The extent to which innate versus experience-dependent mechanisms contribute to this ability has yet to be fully explored.9,10,11,12,13 To investigate the impact of early visual experience on facial-expression recognition, we studied children with congenital cataracts who had undergone sight-correcting treatment14,15 and tracked their longitudinal skill acquisition as they gained sight late in life. We introduce and explore two potential facilitators of late-life plasticity: the availability of newborn-like coarse visual acuity prior to treatment16 and the privileged role of motion following treatment.4,17,18 We find that early visual deprivation does not preclude partial acquisition of facial-expression recognition. While rudimentary pretreatment vision is sufficient to allow a low level of expression recognition, it does not facilitate post-treatment improvements. Additionally, only children commencing vision with high visual acuity privilege the use of dynamic cues. We conclude that skipping typical visual experience early in development and introducing high-resolution imagery late in development restricts, but does not preclude, facial-expression skill acquisition, and that the representational mechanisms driving this learning differ from those that emerge during typical visual development.


Subject(s)
Blindness , Facial Expression , Humans , Blindness/physiopathology , Child , Male , Female , Adolescent , Facial Recognition/physiology , Child, Preschool , Visual Acuity/physiology
20.
Cortex ; 179: 286-300, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39216289

ABSTRACT

In this study, we assessed whether predictability affected the early processing of facial expressions. To achieve this, we measured lateralised early- and mid-latency event-related potentials associated with visual processing. Twenty-two participants were shown pairs of bilaterally presented fearful, happy, angry, or scrambled faces. Participants were required to identify angry faces on a spatially attended side whilst ignoring happy, fearful, and scrambled faces. Each block began with the word HAPPY or FEARFUL, which informed participants of the probability with which these faces would appear. Attention effects were found for the lateralised P1, suggesting that neither emotions nor predictions relating to emotions modulate the P1 differentially. Pairwise comparisons demonstrated that, when spatially unattended, unpredicted fearful faces produced larger lateralised N170 amplitudes than predicted fearful faces and unpredicted happy faces. Finally, attention towards faces increased lateralised EPN amplitudes, as did both fearful expressions and low predictability. Thus, we demonstrate that the N170 and EPN are sensitive to top-down predictions relating to facial expressions and that low predictability appears to specifically affect the early encoding of fearful faces when unattended, possibly to initiate attentional capture.


Subject(s)
Attention , Electroencephalography , Evoked Potentials , Facial Expression , Fear , Humans , Female , Male , Fear/physiology , Young Adult , Adult , Attention/physiology , Evoked Potentials/physiology , Photic Stimulation/methods , Reaction Time/physiology , Emotions/physiology , Adolescent , Facial Recognition/physiology