Results 1 - 3 of 3
1.
Neuroimage; 263: 119631, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36113736

ABSTRACT

Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and of models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of the image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG time point. Our results show a specific role for the IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
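[Editor's note] The model comparison described here is representational similarity analysis (RSA): neural response patterns are converted to a representational dissimilarity matrix (RDM) and rank-correlated with each model RDM. A minimal, hypothetical sketch follows; it is not the authors' code, and the array names and shapes (48 stimuli x n_voxels) are assumptions for illustration.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rsa_correlation(neural_patterns, model_rdm):
    # neural_patterns: (48 stimuli x n_voxels) response patterns (assumed shape).
    # Neural RDM: pairwise correlation distance between stimulus patterns.
    neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))
    # Compare lower triangles only; RDMs are symmetric with a zero diagonal.
    idx = np.tril_indices_from(neural_rdm, k=-1)
    rho, p_value = spearmanr(neural_rdm[idx], model_rdm[idx])
    return rho, p_value

Spearman (rank) correlation is the usual choice because it assumes only a monotonic relation between model and neural dissimilarities.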


Subject(s)
Emotions; Magnetic Resonance Imaging; Humans; Emotions/physiology; Magnetic Resonance Imaging/methods; Brain Mapping/methods; Facial Expression; Electroencephalography
2.
Neuroimage; 209: 116531, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-31931156

ABSTRACT

The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating representational dissimilarity matrices (RDMs) derived from multiple pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from primary visual cortex (V1), and later time windows (starting around 190 ms) to match data from lateral occipital cortex, the fusiform face complex, and the temporal-parietal-occipital junction (TPOJ). According to model comparisons, the EEG classification results were based more on low-level visual features than on expression intensities or categories. In fMRI, the model comparisons revealed a change along the processing hierarchy, from low-level visual feature coding in V1 to coding of expression intensity in the right TPOJ. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
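[Editor's note] The pairwise-classification RDMs mentioned here are typically built by training a classifier on every pair of expression categories and using cross-validated decoding accuracy as the dissimilarity between them. A minimal sketch, assuming scikit-learn and hypothetical variable names (this is not the authors' pipeline):

import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def decoding_rdm(data, labels, n_classes=4):
    # data: (n_trials x n_features) EEG or fMRI patterns; labels: integers
    # 0..3 for neutral, happy, fearful, angry (coding assumed for illustration).
    rdm = np.zeros((n_classes, n_classes))
    for a, b in combinations(range(n_classes), 2):
        mask = np.isin(labels, [a, b])
        # Cross-validated accuracy serves as the dissimilarity between a and b:
        # the more decodable the pair, the more dissimilar the representations.
        acc = cross_val_score(LinearSVC(), data[mask], labels[mask], cv=5).mean()
        rdm[a, b] = rdm[b, a] = acc
    return rdm

Computing such an RDM per fMRI region and per EEG time window, then correlating them, is what lets the spatial and temporal data be fused in a common representational space.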


Subject(s)
Brain Mapping; Cerebral Cortex/physiology; Electroencephalography; Emotions/physiology; Facial Recognition/physiology; Magnetic Resonance Imaging; Adult; Cerebral Cortex/diagnostic imaging; Female; Humans; Male; Time Factors; Young Adult
3.
Sci Rep; 9(1): 892, 2019 Jan 29.
Article in English | MEDLINE | ID: mdl-30696943

ABSTRACT

Simple visual items and complex real-world objects are stored in visual working memory as collections of independent features, not as whole, integrated objects. Storing faces in memory might differ, however, since previous studies have reported a perceptual and memory advantage for whole faces compared to other objects. We investigated whether facial features can be integrated in a statistically optimal fashion and whether memory maintenance disrupts this integration. The observers adjusted a probe - either a whole face or isolated features (eyes or mouth region) - to match the identity of a target while viewing both stimuli simultaneously or after a 1.5-second retention period. Precision was better for the whole face than for the isolated features. Perceptual precision was higher than memory precision, as expected, and memory precision declined further as the number of memorized items increased from one to four. Interestingly, whole-face precision was better predicted by models assuming injection of memory noise followed by integration of features than by models assuming integration of features followed by memory noise. The results suggest equally weighted or optimal integration of facial features and indicate that feature information is preserved in visual working memory while remembering faces.
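[Editor's note] The model comparison here hinges on where memory noise enters relative to feature integration. Under statistically optimal (maximum-likelihood) integration, precisions (inverse variances) of independent estimates sum, so the two orderings predict different whole-face precision. A hypothetical sketch of the arithmetic, using standard cue-integration formulas rather than the authors' code:

import numpy as np

def optimal_integration_sd(sd_eyes, sd_mouth):
    # Optimal integration: precisions J = 1/sigma^2 of the features sum.
    j_whole = 1.0 / sd_eyes**2 + 1.0 / sd_mouth**2
    return np.sqrt(1.0 / j_whole)

def noise_then_integration(sd_eyes, sd_mouth, sd_mem):
    # Memory noise injected into each feature first, then optimal integration
    # (the ordering the reported results favoured).
    return optimal_integration_sd(np.hypot(sd_eyes, sd_mem),
                                  np.hypot(sd_mouth, sd_mem))

def integration_then_noise(sd_eyes, sd_mouth, sd_mem):
    # Features integrated first, memory noise added to the combined estimate.
    return np.hypot(optimal_integration_sd(sd_eyes, sd_mouth), sd_mem)

For example, with sd_eyes = sd_mouth = sd_mem = 1, the two orderings predict whole-face standard deviations of about 1.00 and 1.22 respectively; differences of this kind are what the model fits exploit.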


Subject(s)
Facial Expression; Memory; Humans; Models, Theoretical; Photic Stimulation; Recognition, Psychology; Visual Perception