Results 1 - 20 of 20
1.
Sci Rep; 9(1): 414, 2019 Jan 23.
Article in English | MEDLINE | ID: mdl-30674913

ABSTRACT

We form very rapid personality impressions about speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues when forming social impressions. Here, we aimed to explore how personality impressions for brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), native and foreign language respectively. The results revealed that listeners across languages form very similar personality impressions irrespective of whether the voices belong to the native or the foreign language of the listener. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). In contrast, the acoustical properties that listeners pay attention to when judging others' personality vary across languages. These results provide evidence that social voice perception contains certain elements invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.


Subject(s)
Attention, Language, Personality, Social Perception, Speech Perception, Adolescent, Adult, Female, Humans, Male
2.
PLoS One; 14(1): e0211282, 2019.
Article in English | MEDLINE | ID: mdl-30653619

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0185651.].

3.
PLoS One; 13(10): e0204991, 2018.
Article in English | MEDLINE | ID: mdl-30286148

ABSTRACT

It has previously been shown that first impressions of a speaker's personality, whether accurate or not, can be judged from short utterances of vowels and greetings, as well as from prolonged sentences and readings of complex paragraphs. From these studies, it is established that listeners' judgements are highly consistent with one another, suggesting that different people judge personality traits in a similar fashion, with three key personality traits being related to measures of valence (associated with trustworthiness), dominance, and attractiveness. Yet, particularly in voice perception, limited research has established the reliability of such personality judgements across stimulus types of varying lengths. Here we investigate whether first impressions of trustworthiness, dominance, and attractiveness of novel speakers are related when a judgement is made on hearing both one word and one sentence from the same speaker. Second, we test whether the content of what is said influences the stability of personality ratings. 60 Scottish voices (30 females) were recorded reading two texts: one of ambiguous content and one with socially-relevant content. One word (~500 ms) and one sentence (~3000 ms) were extracted from each recording for each speaker. 181 participants (138 females) rated either male or female voices across both content conditions (ambiguous, socially-relevant) and both stimulus types (word, sentence) for one of the three personality traits (trustworthiness, dominance, attractiveness). Pearson correlations showed personality ratings between words and sentences were strongly correlated, with no significant influence of content. In short, when establishing an impression of a novel speaker, judgments of three key personality traits are highly related whether you hear one word or one sentence, irrespective of what is being said.
This finding is consistent with initial personality judgments serving as elucidators of approach or avoidance behaviour, without modulation by time or content. All data and sounds are available on OSF (osf.io/s3cxy).
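The word-sentence agreement reported above rests on Pearson correlations of per-speaker ratings. A minimal sketch with simulated data (the speaker count mirrors the study; the rating values themselves are made up):

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Simulated per-speaker mean trustworthiness ratings: each sentence rating
# tracks the word rating plus noise, mimicking the strong word-sentence
# agreement the study reports (60 speakers, as in the study).
random.seed(1)
word_ratings = [random.uniform(1, 7) for _ in range(60)]
sentence_ratings = [w + random.gauss(0, 0.5) for w in word_ratings]

r = pearson_r(word_ratings, sentence_ratings)
```

With noise this small relative to the rating spread, `r` lands well above 0.9, illustrating how stable first impressions across stimulus lengths show up in the statistic.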


Subject(s)
Personality, Speech, Adolescent, Adult, Female, Humans, Linear Models, Male, Trust, Young Adult
4.
PLoS One; 12(10): e0185651, 2017.
Article in English | MEDLINE | ID: mdl-29023462

ABSTRACT

When we hear a new voice we automatically form a "first impression" of the voice owner's personality; a single word is sufficient to yield ratings highly consistent across listeners. Past studies have shown correlations between personality ratings and acoustical parameters of voice, suggesting a potential acoustical basis for voice personality impressions, but its nature and extent remain unclear. Here we used data-driven voice computational modelling to investigate the link between acoustics and perceived trustworthiness in the single word "hello". Two prototypical voice stimuli were generated based on the acoustical features of voices rated low or high in perceived trustworthiness, respectively, as well as a continuum of stimuli inter- and extrapolated between these two prototypes. Five hundred listeners provided trustworthiness ratings on the stimuli via an online interface. We observed an extremely tight relationship between trustworthiness ratings and position along the trustworthiness continuum (r = 0.99). Not only were trustworthiness ratings higher for the high- than the low-prototypes, but the difference could be modulated quasi-linearly by reducing or exaggerating the acoustical difference between the prototypes, resulting in a strong caricaturing effect. The f0 trajectory, or intonation, appeared to be a parameter of particular relevance: hellos rated high in trustworthiness were characterized by a high starting f0, then a marked decrease at mid-utterance, finishing on a strong rise. These results demonstrate a strong acoustical basis for voice personality impressions, opening the door to multiple potential applications.
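The continuum construction can be sketched as linear inter-/extrapolation between the two prototype feature vectors. The features and values below are illustrative placeholders, not the study's measurements:

```python
def morph(low_proto, high_proto, alpha):
    """Linearly inter-/extrapolate between two acoustic prototypes.

    alpha = 0 gives the low-trust prototype, alpha = 1 the high-trust one;
    values outside [0, 1] extrapolate ('caricature') beyond the prototypes.
    """
    return [l + alpha * (h - l) for l, h in zip(low_proto, high_proto)]

# Hypothetical acoustic features (starting f0 in Hz, final f0 in Hz,
# duration in ms) for the two trustworthiness prototypes.
low_trust  = [110.0, 100.0, 480.0]
high_trust = [140.0, 160.0, 450.0]

midpoint   = morph(low_trust, high_trust, 0.5)
caricature = morph(low_trust, high_trust, 1.5)  # exaggerates the difference
```

The quasi-linear rating effect reported corresponds to trustworthiness judgments tracking `alpha` along exactly this kind of continuum.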


Subject(s)
Auditory Perception/physiology, Personality, Voice, Adult, Aged, Computer Simulation, Female, Humans, Male, Middle Aged
5.
Perception; 45(8): 946-963, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27081101

ABSTRACT

Vocal pitch has been found to influence judgments of perceived trustworthiness and dominance from a novel voice. However, the majority of findings arise from using only male voices and in context-specific scenarios. In two experiments, we first explore the influence of average vocal pitch on first-impression judgments of perceived trustworthiness and dominance, before establishing the existence of an overall preference for high or low pitch across genders. In Experiment 1, pairs of high- and low-pitched temporally reversed recordings of male and female vocal utterances were presented in a two-alternative forced-choice task. Results revealed a tendency to select the low-pitched voice over the high-pitched voice as more trustworthy, for both genders, and more dominant, for male voices only. Experiment 2 tested an overall preference for low-pitched voices, and whether judgments were modulated by speech content, using forward and reversed speech to manipulate context. Results revealed an overall preference for low pitch, irrespective of direction of speech, in male voices only. No such overall preference was found for female voices. We propose that an overall preference for low pitch is a default prior in male voices irrespective of context, whereas pitch preferences in female voices are more context- and situation-dependent. The present study confirms the important role of vocal pitch in the formation of first-impression personality judgments and advances understanding of the impact of context on pitch preferences across genders.
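A two-alternative forced-choice tendency like the one described can be checked against chance with an exact binomial test; one standard way to do this is sketched below, with hypothetical trial counts rather than the study's data:

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """Exact one-sided binomial probability P(X >= k) under guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical 2AFC tally: the low-pitched member of the pair was chosen
# as more trustworthy on 70 of 100 trials (illustrative numbers only).
k_low, n_trials = 70, 100
p_value = binom_p_at_least(k_low, n_trials)
above_chance = p_value < 0.05
```

A choice rate this far from 50/50 yields a p-value far below conventional thresholds, which is the logic behind calling the low-pitch tendency reliable.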


Subject(s)
Pitch Perception, Social Dominance, Social Perception, Trust, Voice, Adult, Female, Humans, Male, Sex Factors, Young Adult
6.
Neuroimage; 119: 164-74, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26116964

ABSTRACT

fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice-sensitivity, or "voice patches", along posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas including bilateral inferior prefrontal cortex and amygdalae showed small, but reliable voice-sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.


Subject(s)
Auditory Cortex/physiology, Individuality, Speech Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Dominance, Cerebral, Female, Humans, Magnetic Resonance Imaging, Male, Voice, Young Adult
7.
PLoS One; 9(8): e104916, 2014.
Article in English | MEDLINE | ID: mdl-25119267

ABSTRACT

Although gossip serves several important social functions, it has relatively infrequently been the topic of systematic investigation. In two experiments, we advance a cognitive-informational approach to gossip. Specifically, we sought to determine which informational components engender gossip. In Experiment 1, participants read brief passages about other people and indicated their likelihood to share this information. We manipulated target familiarity (celebrity, non-celebrity) and story interest (interesting, boring). While participants were more likely to gossip about celebrity than non-celebrity targets and interesting than boring stories, they were even more likely to gossip about celebrity targets embedded within interesting stories. In Experiment 2, we additionally probed participants' reactions to the stories concerning emotion, expectation, and reputation information conveyed. Analyses showed that while such information partially mediated target familiarity and story interest effects, only expectation and reputation accounted for the interactive pattern of gossip behavior. Our findings provide novel insights into the essential components and processing mechanisms of gossip.


Subject(s)
Recognition, Psychology, Social Behavior, Adolescent, Adult, Communication, Emotions, Female, Humans, Young Adult
8.
Cortex; 57: 74-91, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24815091

ABSTRACT

Does visual experience in judging intent to harm change our brain responses? And if it does, what are the mechanisms affected? We addressed these questions by studying the abilities of Closed Circuit Television (CCTV) operators, who must identify the presence of hostile intentions using only visual cues in complex scenes. We used functional magnetic resonance imaging to assess which brain processes are modulated by CCTV experience. To this end we scanned 15 CCTV operators and 15 age- and gender-matched novices while they watched 16-s CCTV videos, and asked them to report whether each clip would end in violence or not. We carried out four separate whole-brain analyses including three model-based analyses and one analysis of intersubject correlation to examine differences between the two groups. The three model analyses were based on 1) experimentally pre-defined clip activity labels of fight, confrontation, playful, and neutral behaviour, 2) participants' reports of violent outcomes during the scan, and 3) visual saliency within each clip, as pre-assessed using eye-tracking. The analyses identified greater activation in the right superior frontal gyrus for operators than novices when viewing playful behaviour, and reduced activity for operators in comparison with novices in the occipital and temporal regions, irrespective of the type of clips viewed. However, in the parahippocampal gyrus, all three model-based analyses consistently showed reduced activity for experienced CCTV operators. Activity in the anterior part of the parahippocampal gyrus (uncus) was found to increase with years of CCTV experience. The intersubject correlation analysis revealed a further effect of experience, with CCTV operators showing correlated activity in fewer brain regions (superior and middle temporal gyrus, inferior parietal lobule and the ventral striatum) than novices.
Our results indicate that long visual experience in action observation, aimed to predict harmful behaviour, modulates brain mechanisms of intent recognition.


Subject(s)
Magnetic Resonance Imaging, Recognition, Psychology/physiology, Adult, Brain Mapping, Female, Frontal Lobe/physiology, Harm Reduction, Humans, Intention, Life Change Events, Magnetic Resonance Imaging/methods, Male, Middle Aged, Parahippocampal Gyrus/physiology, Parietal Lobe/physiology, Temporal Lobe/physiology, Young Adult
9.
PLoS One; 9(3): e90779, 2014.
Article in English | MEDLINE | ID: mdl-24622283

ABSTRACT

On hearing a novel voice, listeners readily form personality impressions of that speaker. Accurate or not, these impressions are known to affect subsequent interactions; yet the underlying psychological and acoustical bases remain poorly understood. Furthermore, studies have hitherto focussed on extended speech rather than the instantaneous impressions we form at first exposure. In this paper, through a mass online rating experiment, 320 participants rated 64 sub-second vocal utterances of the word 'hello' on one of 10 personality traits. We show that: (1) personality judgements of brief utterances from unfamiliar speakers are consistent across listeners; (2) a two-dimensional 'social voice space' with axes mapping Valence (Trust, Likeability) and Dominance, each driven by differing combinations of vocal acoustics, adequately summarises ratings in both male and female voices; and (3) a positive combination of Valence and Dominance results in increased perceived male vocal Attractiveness, whereas perceived female vocal Attractiveness is largely controlled by increasing Valence. Results are discussed in relation to the rapid evaluation of personality and, in turn, the intent of others, as being driven by survival mechanisms via approach or avoidance behaviours. These findings provide empirical bases for predicting personality impressions from acoustical analyses of short utterances and for generating desired personality impressions in artificial voices.
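The two-axis structure described (extracted in the paper via principal component analysis) can be illustrated more simply with trait correlations: valence-related traits covary strongly with each other and only weakly with dominance. All numbers below are synthetic, generated so that trust and likeability share a common 'valence' factor:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

# Simulated mean ratings for 64 voices (matching the study's stimulus
# count) on three traits: trust and likeability are driven by a shared
# valence factor; dominance is generated independently.
random.seed(7)
valence   = [random.gauss(0, 1) for _ in range(64)]
trust     = [v + random.gauss(0, 0.3) for v in valence]
likeable  = [v + random.gauss(0, 0.3) for v in valence]
dominance = [random.gauss(0, 1) for _ in range(64)]

r_trust_like = pearson_r(trust, likeable)   # high: same valence axis
r_trust_dom  = pearson_r(trust, dominance)  # low: near-orthogonal axes
```

A correlation pattern like this is what makes a two-dimensional summary adequate: the many rated traits collapse onto a valence axis plus a dominance axis.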


Subject(s)
Auditory Perception, Judgment, Personality, Speech, Voice, Acoustics, Adult, Female, Humans, Male, Principal Component Analysis, Sex Factors, Visual Perception
10.
Cogn Affect Behav Neurosci; 14(1): 307-18, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23943513

ABSTRACT

It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and posterior superior temporal sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utilized brief, diminished stimuli, and the role of relevant kinematic parameters for the processing of human action remains unclear. We addressed this issue by showing extended-duration natural displays of an actor engaged in two common activities to 12 participants in an fMRI study under passive viewing conditions. Our region-of-interest analysis focused on three neural areas (hMT+, pSTS, and fusiform face area) and was accompanied by a whole-brain analysis. The kinematic properties of the actor, particularly the speed of body part motion and the distance between body parts, were related to activity in hMT+ and pSTS. Whole-brain exploratory analyses revealed additional areas in posterior cortex, frontal cortex, and the cerebellum whose activity was related to these features. These results indicate that the kinematic properties of people's movements are continually monitored during everyday activity as a step to determining actions and intent.


Subject(s)
Brain/physiology, Motion Perception/physiology, Biomechanical Phenomena, Brain Mapping, Cerebral Cortex/physiology, Female, Functional Laterality, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Young Adult
11.
Iperception; 4(4): 265-84, 2013.
Article in English | MEDLINE | ID: mdl-24349687

ABSTRACT

The superior temporal sulcus (STS) and gyrus (STG) are commonly identified to be functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies on AV integration used stimuli of short duration in explicit evaluative tasks. Importantly though, many of our AV experiences are of a long duration and ambiguous. It is unclear if the enhanced activity in audio, visual, and AV brain areas would also be synchronised over time across subjects when they are exposed to such multisensory stimuli. We used intersubject correlation to investigate which brain areas are synchronised across novices for uni- and multisensory versions of a 6-min 26-s recording of an unfamiliar, unedited Indian dance recording (Bharatanatyam). In Bharatanatyam, music and dance are choreographed together in a highly intermodal-dependent manner. Activity in the middle and posterior STG was significantly correlated between subjects and also showed significant enhancement for AV integration when the functional magnetic resonance signals were contrasted against each other using a general linear model conjunction analysis. These results extend previous studies by showing an intermediate step of synchronisation for novices: while there was a consensus across subjects' brain activity in areas relevant for unisensory processing and AV integration of related audio and visual stimuli, we found no evidence for synchronisation of higher level cognitive processes, suggesting these were idiosyncratic.
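Intersubject correlation of the kind used here is commonly computed leave-one-out: each subject's regional time course is correlated with the average time course of the remaining subjects. A minimal sketch on synthetic data (the subject count and noise levels are illustrative assumptions):

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

def isc(timeseries):
    """Leave-one-out intersubject correlation: correlate each subject's
    time course with the mean time course of all remaining subjects."""
    n = len(timeseries)
    out = []
    for i, ts in enumerate(timeseries):
        others = [timeseries[j] for j in range(n) if j != i]
        mean_others = [sum(vals) / (n - 1) for vals in zip(*others)]
        out.append(pearson_r(ts, mean_others))
    return out

# Simulated regional time courses: every subject's signal follows a shared
# stimulus-driven component plus idiosyncratic noise.
random.seed(3)
shared = [random.gauss(0, 1) for _ in range(200)]
subjects = [[s + random.gauss(0, 0.8) for s in shared] for _ in range(12)]

scores = isc(subjects)
mean_isc = sum(scores) / len(scores)
```

Regions whose activity is stimulus-driven (like the STG here) yield high ISC; regions dominated by idiosyncratic processing yield scores near zero.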

12.
Proc Natl Acad Sci U S A; 110(28): 11577-82, 2013 Jul 09.
Article in English | MEDLINE | ID: mdl-23801762

ABSTRACT

The degree of correspondence between objective performance and subjective beliefs varies widely across individuals. Here we demonstrate that functional brain network connectivity measured before exposure to a perceptual decision task covaries with individual objective (type-I performance) and subjective (type-II performance) accuracy. Increases in connectivity with type-II performance were observed in networks measured while participants directed attention inward (focus on respiration), but not in networks measured during states of neutral (resting state) or exogenous attention. Measures of type-I performance were less sensitive to the subjects' specific attentional states from which the networks were derived. These results suggest the existence of functional brain networks indexing objective performance and accuracy of subjective beliefs distinctively expressed in a set of stable mental states.


Subject(s)
Brain/physiology, Humans, Magnetic Resonance Imaging, Task Performance and Analysis
13.
Curr Biol; 23(12): 1075-80, 2013 Jun 17.
Article in English | MEDLINE | ID: mdl-23707425

ABSTRACT

Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice, a key requirement for social interactions. The human brain contains temporal voice areas (TVA) involved in an acoustic-based representation of voice identity, but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female), approximated here by averaging a large number of same-gender voices by using morphing. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices, a phenomenon not merely explained by neuronal adaptation. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity.
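The norm-based account can be sketched as distance-to-prototype in an acoustic feature space. Here simple arithmetic averaging stands in for the voice morphing used in the study, and the feature values (f0 in Hz, formant dispersion in kHz) are hypothetical:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype(voices):
    """Feature-wise average of same-gender voices, a stand-in for the
    morphing-based averaging used to build the internal prototype."""
    n = len(voices)
    return [sum(col) / n for col in zip(*voices)]

# Hypothetical (f0, formant-dispersion) features for four male voices.
male_voices = [[110.0, 4.2], [125.0, 4.6], [118.0, 4.4], [131.0, 4.8]]
male_proto = prototype(male_voices)

# Norm-based prediction: voices farther from the prototype are perceived
# as more distinctive and evoke stronger voice-sensitive cortex responses.
distances = [euclidean(v, male_proto) for v in male_voices]
most_distinctive = male_voices[distances.index(max(distances))]
```

Under this scheme, morphing a voice toward the prototype shrinks its distance (predicting reduced activity) while caricaturing away from it grows the distance (predicting enhanced activity), matching the manipulation described above.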


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Pattern Recognition, Physiological, Speech Acoustics, Acoustic Stimulation, Acoustics, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Voice/physiology
14.
Neuroimage; 59(2): 1524-33, 2012 Jan 16.
Article in English | MEDLINE | ID: mdl-21888982

ABSTRACT

Whether people with Autism Spectrum Disorders (ASDs) have a specific deficit when processing biological motion has been a topic of much debate. We used psychophysical methods to determine individual behavioural thresholds in a point-light direction discrimination paradigm for small but carefully matched groups of adults (N=10 per group) with and without ASDs. These thresholds were used to derive individual stimulus levels in an identical fMRI task, with the purpose of equalising task performance across all participants whilst inside the scanner. The results of this investigation show that despite comparable behavioural performance both inside and outside the scanner, the group with ASDs shows a different pattern of BOLD activation from the TD group in response to the same stimulus levels. Furthermore, connectivity analysis suggests that the main differences between the groups are that the TD group utilise a unitary network with information passing from temporal to parietal regions, whilst the ASD group utilise two distinct networks: one utilising motion-sensitive areas and another utilising form-selective areas. Moreover, a temporal-parietal link that is present in the TD group is missing in the ASD group. We tentatively propose that these differences may occur due to early dysfunctional connectivity in the brains of people with ASDs, which to some extent is compensated for by rewiring in high functioning adults.
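Individual behavioural thresholds of the kind used here to equate task difficulty are typically estimated with an adaptive staircase. A minimal 2-down-1-up sketch against a simulated observer follows; the staircase is one standard procedure (not necessarily the authors' exact method), and the psychometric-function parameters are invented:

```python
import math
import random

def percent_correct(level, threshold=0.5, slope=10.0):
    """Logistic 2AFC psychometric function: chance (0.5) far below
    threshold, approaching 1.0 well above it."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (level - threshold)))

def two_down_one_up(start=1.0, step=0.05, trials=600):
    """2-down-1-up staircase: two consecutive correct responses make the
    stimulus weaker, one error makes it stronger; the track settles near
    the ~70.7%-correct point of the psychometric function."""
    level, streak, history = start, 0, []
    for _ in range(trials):
        history.append(level)
        if random.random() < percent_correct(level):  # simulated response
            streak += 1
            if streak == 2:
                streak, level = 0, max(0.0, level - step)
        else:
            streak, level = 0, level + step
    # average the second half of the track as the threshold estimate
    half = history[trials // 2:]
    return sum(half) / len(half)

random.seed(11)
threshold_estimate = two_down_one_up()
```

An estimate obtained this way per participant is what allows stimulus levels to be set so that task performance is matched across groups inside the scanner.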


Subject(s)
Cerebral Cortex/physiopathology, Child Development Disorders, Pervasive/physiopathology, Magnetic Resonance Imaging, Motion Perception, Nerve Net/physiopathology, Adolescent, Adult, Child, Female, Humans, Male, Young Adult
15.
Neuroimage; 56(3): 1480-92, 2011 Jun 01.
Article in English | MEDLINE | ID: mdl-21397699

ABSTRACT

When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.


Subject(s)
Brain/physiology, Motor Skills/physiology, Music/psychology, Psychomotor Performance/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Cerebellum/physiology, Cluster Analysis, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Parahippocampal Gyrus/physiology, Parietal Lobe/physiology, Photic Stimulation, Prefrontal Cortex/physiology, Psychophysics, Temporal Lobe/physiology, Young Adult
16.
J Autism Dev Disord; 41(8): 1053-63, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21069445

ABSTRACT

The perception of intent in Autism Spectrum Disorders (ASD) often relies on synthetic animacy displays. This study tests intention perception in ASD via animacy stimuli derived from human motion. Using a forced choice task, 28 participants (14 with ASD; 14 age- and verbal-IQ-matched controls) categorized displays of Chasing, Fighting, Flirting, Following, Guarding and Playing, from two viewpoints (side, overhead) in both animacy and full video displays. Detailed analysis revealed no differences between populations in accuracy or response patterns. Collapsing across groups revealed Following and Video displays to be most accurately perceived. The stimuli and intentions used are compared to those of previous studies, and the implications of our results for the understanding of Theory of Mind in ASD are discussed.


Subject(s)
Child Development Disorders, Pervasive/psychology, Intention, Social Perception, Adolescent, Adult, Child, Humans, Male, Middle Aged, Motion Perception, Neuropsychological Tests
17.
Brain Res; 1323: 139-48, 2010 Apr 06.
Article in English | MEDLINE | ID: mdl-20153297

ABSTRACT

In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features.


Subject(s)
Auditory Perception/physiology, Emotions, Visual Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Attention, Female, Humans, Male, Music/psychology, Photic Stimulation, Time Factors
18.
Vision Res; 49(22): 2705-39, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19682485

ABSTRACT

Autism spectrum disorders (ASDs) are developmental disorders which are thought primarily to affect social functioning. However, there is now a growing body of evidence that unusual sensory processing is at least a concomitant and possibly the cause of many of the behavioural signs and symptoms of ASD. A comprehensive and critical review of the phenomenological, empirical, neuroscientific and theoretical literature pertaining to visual processing in ASD is presented, along with a brief justification of a new theory which may help to explain some of the data, and link it with other current hypotheses about the genetic and neural aetiologies of this enigmatic condition.


Subject(s)
Child Development Disorders, Pervasive/complications, Perceptual Disorders/etiology, Vision Disorders/etiology, Visual Perception/physiology, Adolescent, Adult, Child, Child Development Disorders, Pervasive/diagnosis, Child Development Disorders, Pervasive/psychology, Color Vision/physiology, Contrast Sensitivity/physiology, Humans, Models, Neurological, Models, Psychological, Motion Perception/physiology, Pattern Recognition, Visual/physiology, Young Adult
19.
Vision Res; 49(20): 2503-8, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19682487

ABSTRACT

Understanding how structure and motion information contribute to the perception of biological motion is often studied with masking techniques. Current techniques in masking point-light walkers typically rely on adding surrounding masking dots or altering phase relations between joints. Here, we demonstrate the use of novel stimuli that make it possible to determine the noise level at which the local motion cues mask the opposing configural cues without changing the number of overall points in the display. Results show improved direction discrimination when configural cues are present compared to when the identical local motion signals are present but lack configural information.


Subject(s)
Discrimination, Psychological/physiology, Motion Perception/physiology, Perceptual Masking/physiology, Confounding Factors, Epidemiologic, Cues, Female, Humans, Male, Pattern Recognition, Visual/physiology, Photic Stimulation/methods, Psychophysics, Young Adult
20.
Behav Res Methods; 40(3): 830-9, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18697679

ABSTRACT

The impression of animacy from the motion of simple shapes typically relies on synthetically defined motion patterns resulting in pseudorepresentations of human movement. Thus, it is unclear how these synthetic motions relate to actual biological agents. To clarify this relationship, we introduce a novel approach that uses video processing to reduce full-video displays of human interactions to animacy displays, thus creating animate shapes whose motions are directly derived from human actions. Furthermore, this technique facilitates the comparison of interactions in animacy displays from different viewpoints, an area that has yet to be researched. We introduce two experiments in which animacy displays were created showing six dyadic interactions from two viewpoints, incorporating cues altering the quantity of the visual information available. With a six-alternative forced choice task, results indicate that animacy displays can be created via this naturalistic technique and reveal a previously unreported advantage for viewing intentional motion from an overhead viewpoint.


Subject(s)
Cognition, Intention, Motion Perception, Humans, Movement