1.
Q J Exp Psychol (Hove) ; : 17470218241278649, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39164830

ABSTRACT

Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition is superior for speakers previously learned with their face, compared to control learning; an effect termed the "face-benefit". Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face, or a control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. We observed that 14/30 participants (47%) showed a face-benefit for speech recognition, while 19/25 participants (76%) showed a face-benefit for voice-identity recognition. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
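As a hedged illustration of how the face-benefit described above could be quantified (this is not the authors' analysis code; the column names and the trial-table layout are assumptions): per participant and noise level, subtract recognition accuracy for control-learned speakers from accuracy for face-learned speakers.

```python
# Minimal sketch of a face-benefit computation, assuming a trial table with
# columns: participant, learning ('face' or 'control'), snr_db, correct (0/1).
import pandas as pd

def face_benefit(trials: pd.DataFrame) -> pd.DataFrame:
    """Accuracy(face-learned) - accuracy(control-learned), per participant and SNR."""
    acc = (trials
           .groupby(["participant", "snr_db", "learning"])["correct"]
           .mean()
           .unstack("learning"))
    acc["face_benefit"] = acc["face"] - acc["control"]
    return acc.reset_index()

# Hypothetical example: two participants at a single noise level.
demo = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "snr_db":      [-4, -4, -4, -4, -4, -4, -4, -4],
    "learning":    ["face", "face", "control", "control"] * 2,
    "correct":     [1, 1, 0, 1, 1, 0, 0, 0],
})
print(face_benefit(demo))
```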

2.
Hum Brain Mapp ; 42(12): 3963-3982, 2021 08 15.
Article in English | MEDLINE | ID: mdl-34043249

ABSTRACT

Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called 'face-benefit' is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit in both noise levels, for most participants (16 of 21). In high-noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. The face-benefit in high-noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face-benefit. In low-noise, the face-benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice-identity recognition in auditory-only listening conditions.
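To make the SNR values concrete, here is a hedged sketch of the kind of stimulus manipulation they imply (the function, the white-noise masker, and the sampling rate are illustrative assumptions; the study used its own speech and noise materials): the noise is scaled so that the signal-to-noise ratio in dB hits a target value before mixing.

```python
# Minimal sketch: mix a speech signal with noise at a target SNR in dB.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db, then add."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s stand-in "speech"
noise = rng.standard_normal(16000)                            # stand-in masker
hard = mix_at_snr(speech, noise, -4.0)  # high-noise condition (SNR -4 dB)
easy = mix_at_snr(speech, noise, +4.0)  # low-noise condition (SNR +4 dB)
```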


Subject(s)
Auditory Perception/physiology, Connectome, Facial Recognition/physiology, Recognition, Psychology/physiology, Temporal Lobe/physiology, Voice, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Noise, Temporal Lobe/diagnostic imaging, Young Adult
3.
Hum Brain Mapp ; 41(4): 952-972, 2020 03.
Article in English | MEDLINE | ID: mdl-31749219

ABSTRACT

Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity) might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
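For readers unfamiliar with the connectivity measure, a common approach is Pearson correlation between region-of-interest time series; the abstract does not specify the exact pipeline, so the sketch below (including the ROI names and simulated data) is purely illustrative.

```python
# Illustrative sketch: functional connectivity as Pearson correlation between
# two ROI time series (e.g., right V5/MT and right OFA). The simulated data
# and the correlation-based definition are assumptions, not the study's method.
import numpy as np

def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two 1-D BOLD time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

rng = np.random.default_rng(1)
shared = rng.standard_normal(200)                 # signal shared across regions
v5_mt = shared + 0.5 * rng.standard_normal(200)   # stand-in right V5/MT series
ofa = shared + 0.5 * rng.standard_normal(200)     # stand-in right OFA series
print(f"r = {functional_connectivity(v5_mt, ofa):.2f}")
```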


Subject(s)
Autism Spectrum Disorder/physiopathology, Cerebral Cortex/physiopathology, Connectome, Pattern Recognition, Visual/physiology, Social Perception, Speech, Adult, Autism Spectrum Disorder/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Eye-Tracking Technology, Facial Recognition/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Young Adult
4.
Neuropsychologia ; 116(Pt B): 179-193, 2018 07 31.
Article in English | MEDLINE | ID: mdl-29614253

ABSTRACT

Humans have a remarkable skill for voice-identity recognition: most of us can remember the many voices that surround us as 'unique'. In this review, we explore the computational and neural mechanisms which may support our ability to represent and recognise a unique voice-identity. We examine the functional architecture of voice-sensitive regions in the superior temporal gyrus/sulcus, and bring together findings on how these regions may interact with each other, and with additional face-sensitive regions, to support voice-identity processing. We also contrast findings from studies on neurotypicals and clinical populations which have examined the processing of familiar and unfamiliar voices. Taken together, the findings suggest that representations of familiar and unfamiliar voices might dissociate in the human brain. Such an observation does not fit well with current models for voice-identity processing, which by and large assume a common sequential analysis of the incoming voice signal, regardless of voice familiarity. We provide a revised audio-visual integrative model of voice-identity processing which brings together traditional and prototype models of identity processing. This revised model includes a mechanism of how voice-identity representations are established and provides a novel framework for understanding and examining the potential differences in familiar and unfamiliar voice processing in the human brain.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Recognition, Psychology/physiology, Voice, Acoustic Stimulation, Agnosia/pathology, Agnosia/physiopathology, Brain/anatomy & histology, Humans, Models, Biological
5.
Neuropsychologia ; 70: 281-95, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25737056

ABSTRACT

There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either individual with prosopagnosia (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.


Asunto(s)
Cara , Expresión Facial , Movimiento (Física) , Reconocimiento Visual de Modelos/fisiología , Prosopagnosia/fisiopatología , Anciano , Anciano de 80 o más Años , Estudios de Casos y Controles , Señales (Psicología) , Femenino , Humanos , Masculino , Pruebas Neuropsicológicas , Estimulación Luminosa , Tiempo de Reacción , Reconocimiento en Psicología
6.
J Exp Psychol Hum Percept Perform ; 40(6): 2266-80, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25328999

ABSTRACT

Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images, and evidence for a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face; the test image varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect, with better face matching performance in younger than in older adults. However, we observed that face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance.


Asunto(s)
Envejecimiento/psicología , Cara , Área de Dependencia-Independencia , Percepción de Movimiento , Reconocimiento Visual de Modelos , Adolescente , Adulto , Anciano , Señales (Psicología) , Aprendizaje Discriminativo , Expresión Facial , Femenino , Humanos , Masculino , Orientación , Reconocimiento en Psicología , Adulto Joven
7.
Front Hum Neurosci ; 7: 795, 2013.
Article in English | MEDLINE | ID: mdl-24324423

ABSTRACT

When interpreting other people's movements or actions, observers may not only rely on the visual cues available in the observed movement, but they may also be able to "put themselves in the other person's shoes" by engaging brain systems involved in both "mentalizing" and motor simulation. The ageing process brings changes in both perceptual and motor abilities, yet little is known about how these changes may affect the ability to accurately interpret other people's actions. Here we investigated the effect of ageing on the ability to discriminate the weight of objects based on the movements of actors lifting these objects. Stimuli consisted of videos of an actor lifting a small box weighing 0.05-0.9 kg or a large box weighing 3-18 kg. In a four-alternative forced-choice task, younger and older participants reported the perceived weight of the box in each video. Overall, older participants were less sensitive than younger participants in discriminating the perceived weight of lifted boxes, an effect that was especially pronounced in the small box condition. Weight discrimination performance was better for the large box compared to the small box in both groups, due to the greater saliency of the visual cues in this condition. These results suggest that older adults may require more salient visual cues to interpret the actions of others accurately. We discuss the potential contribution of age-related changes in visual and motor function to the observed effects and suggest that older adults' decline in the sensitivity to subtle visual cues may lead to greater reliance on visual analysis of the observed scene and its semantic context.

8.
Multisens Res ; 26(1-2): 69-94, 2013.
Article in English | MEDLINE | ID: mdl-23713200

ABSTRACT

The current study examined the role of vision in spatial updating and its potential contribution to an increased risk of falls in older adults. Spatial updating was assessed using a path integration task in fall-prone and healthy older adults. Specifically, participants conducted a triangle completion task in which they were guided along two sides of a triangular route and were then required to return, unguided, to the starting point. During the task, participants either had a clear view of their surroundings (full vision) or visuo-spatial information was reduced by means of translucent goggles (reduced vision). Path integration performance was measured by calculating the distance and angular deviation of the participant's return point relative to the starting point, as sketched below. Gait parameters for the unguided walk were also recorded. We found equivalent performance across groups on all measures in the full vision condition. In contrast, in the reduced vision condition, where participants had to rely on interoceptive cues to spatially update their position, fall-prone older adults made significantly larger distance errors than healthy older adults. However, there were no other performance differences between fall-prone and healthy older adults. These findings suggest that fall-prone older adults, compared to healthy older adults, have greater difficulty in reweighting other sensory cues for spatial updating when visual information is unreliable.
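A hedged sketch of the error measures just described (the coordinate frame, units, and the exact angular definition are assumptions about how such errors are typically scored): the distance error is the Euclidean separation between the start point and the participant's stopping point, and the angular deviation is the angle between the ideal and the walked homing direction from the turning point.

```python
# Illustrative sketch of triangle-completion error measures.
import numpy as np

def completion_errors(start, turn, stop):
    """Distance error and signed angular deviation (degrees) of the return path."""
    start, turn, stop = map(np.asarray, (start, turn, stop))
    distance_error = float(np.linalg.norm(stop - start))
    correct_dir = start - turn   # ideal homing vector from the second corner
    walked_dir = stop - turn     # actual homing vector
    ang = np.arctan2(walked_dir[1], walked_dir[0]) - np.arctan2(correct_dir[1], correct_dir[0])
    angular_deviation = float(np.degrees((ang + np.pi) % (2 * np.pi) - np.pi))
    return distance_error, angular_deviation

# Hypothetical trial: guided legs origin -> (3, 0) -> turn at (3, 4);
# the participant stops at (0.5, 0.8) instead of the origin.
dist_err, ang_dev = completion_errors((0, 0), (3, 4), (0.5, 0.8))
print(f"distance error = {dist_err:.2f} m, angular deviation = {ang_dev:.1f} deg")
```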


Subject(s)
Gait/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Vision, Low/physiopathology, Visual Perception/physiology, Accidental Falls/prevention & control, Accidental Falls/statistics & numerical data, Aged, Female, Humans, Male, Models, Biological, Perceptual Distortion/physiology, Risk Factors
9.
Perception ; 41(7): 757-73, 2012.
Article in English | MEDLINE | ID: mdl-23155729

ABSTRACT

Auditory stimuli are known to improve visual target recognition and detection when both are presented in the same spatial location. However, most studies have focused on crossmodal spatial congruency along the horizontal plane, and the effects of audio-visual spatial congruency in depth (i.e., along the depth axis) are less well understood. In the following experiments we presented a visual (face) or auditory (voice) target stimulus in a location on a spatial array which was either spatially congruent or incongruent in depth (i.e., positioned directly in front or behind) with a crossmodal stimulus. The participant's task was to determine whether a visual (Experiments 1 and 3) or auditory (Experiment 2) target was located in the foreground or background of this array. We found that both visual and auditory targets were located less accurately when crossmodal stimuli were presented from different, compared to congruent, locations in depth. Moreover, this effect was particularly pronounced for visual targets located in the periphery, whereas spatial incongruency affected the localisation of auditory targets at both positions. The relative distance of the array from the observer did not modulate this congruency effect (Experiment 3). Our results add to the growing evidence for multisensory influences on search performance and extend these findings to the localisation of targets in the depth plane.


Subject(s)
Auditory Perception/physiology, Space Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Neuropsychological Tests, Young Adult
10.
Front Aging Neurosci ; 3: 19, 2011.
Article in English | MEDLINE | ID: mdl-22207848

ABSTRACT

Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio-visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence, to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual 'blur' compared to the audio-visual 'no blur' condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

11.
PLoS One ; 6(8): e23316, 2011.
Article in English | MEDLINE | ID: mdl-21826247

ABSTRACT

In the hand laterality task participants judge the handedness of visually presented stimuli (images of hands shown in a variety of postures and views) and indicate whether they perceive a right or left hand. The task engages kinaesthetic and sensorimotor processes and is considered a standard example of motor imagery. However, in this study we find that while motor imagery holds across egocentric views of the stimuli (where the hands are likely to be one's own), it does not appear to hold across allocentric views (where the hands are likely to be another person's). First, we find that psychophysical sensitivity, d', is clearly demarcated between egocentric and allocentric views, being high for the former and low for the latter. Second, using mixed-effects methods to analyse the chronometric data, we find a high positive correlation between response times across egocentric views, suggesting a common use of motor imagery across these views. Correlations are, however, considerably lower between egocentric and allocentric views, suggesting a switch away from motor imagery across these perspectives. We relate these findings to research showing that the extrastriate body area discriminates egocentric ('self') and allocentric ('other') views of the human body and of body parts, including hands.
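As a hedged aside for readers unfamiliar with d', the standard signal-detection formula is d' = z(hit rate) - z(false-alarm rate); treating "right hand" as the signal class below is an illustrative assumption about how the left/right judgment might be scored, not the authors' stated procedure.

```python
# Minimal sketch: psychophysical sensitivity d' from hit and false-alarm rates.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with z the inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Egocentric views: accurate judgments -> high sensitivity.
print(d_prime(0.90, 0.10))  # ~2.56
# Allocentric views: near-chance judgments -> low sensitivity.
print(d_prime(0.55, 0.45))  # ~0.25
```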


Subject(s)
Functional Laterality/physiology, Adult, Body Image, Discrimination, Psychology, Humans, Male, Pattern Recognition, Visual/physiology, Photic Stimulation
12.
Exp Brain Res ; 211(1): 73-85, 2011 May.
Article in English | MEDLINE | ID: mdl-21533699

ABSTRACT

Determining the handedness of visually presented stimuli is thought to involve two separate stages: a rapid, implicit recognition of laterality followed by a confirmatory mental rotation of the matching hand. In two studies, we explore the role of the dominant and non-dominant hands in this process. In Experiment 1, participants judged stimulus laterality with either their left or right hand held behind their back, or with both hands resting in the lap. The variation in reaction times across these conditions reveals that both hands play a role in hand laterality judgments, with the hand which is not involved in the mental rotation stage causing some interference, slowing down mental rotations and making them more accurate. While this interference occurs for both lateralities in right-handed people, it occurs for the dominant hand only in left-handers. This is likely due to left-handers' greater reliance on the initial, visual recognition stage than on the later, mental rotation stage, particularly when judging hands from the non-dominant laterality. Participants' own judgments of whether the stimuli were 'self' or 'other' hands in Experiment 2 suggest a difference in strategy for hands seen from an egocentric and an allocentric perspective, with a combined visuo-sensorimotor strategy for the former and a visual-only strategy for the latter. This result is discussed with reference to recent brain imaging research showing that the extrastriate body area distinguishes between bodies and body parts in egocentric and allocentric perspective.


Subject(s)
Functional Laterality/physiology, Hand/physiology, Psychomotor Performance/physiology, Reaction Time/physiology, Visual Perception/physiology, Adolescent, Adult, Female, Humans, Male, Orientation/physiology, Photic Stimulation/methods, Posture/physiology, Young Adult