ABSTRACT
All talkers show some flexibility in their speech, and the ability to imitate an unfamiliar accent is a skill that shows vast individual differences. Yet the source of these individual differences, in particular whether they originate from perceptual, motor, or social/personality factors, is not yet clear. In the current study, we ask how individual differences in these factors predict individual differences in deliberate accent imitation. Participants imitated three accents, and attempts were rated for accuracy. A set of measures tracking individual differences in perceptual, motor, cognitive, personality, and demographic factors was also acquired. Imitation ability was related to differences in musical perception, vocal articulation, and the personality characteristic of "openness to experience," and was affected by attitudes towards the imitated talkers. Taken together, the results suggest that deliberate accent imitation skill is modulated not only by core perceptual and motor skills, but also by personality and affinity to the talker, suggesting that some aspects of deliberate imitation are a function of domain-general constraints on perceptual-motor systems, while others may be modulated by social context.
ABSTRACT
Talkers automatically imitate aspects of perceived speech, a phenomenon known as phonetic convergence. Talkers have previously been found to converge to auditory and visual speech information. Furthermore, talkers converge more to the speech of a conversational partner who is seen and heard, relative to one who is just heard (Dias & Rosenblum, Perception, 40, 1457-1466, 2011). A question raised by this finding is what visual information facilitates the enhancement effect. In the following experiments, we investigated the possible contributions of visible speech articulation to the visual enhancement of phonetic convergence within the noninteractive context of a shadowing task. In Experiment 1, we examined the influence of the visibility of a talker on phonetic convergence when shadowing auditory speech either in the clear or in low-level auditory noise. The results suggest that visual speech can compensate for convergence that is reduced by auditory noise masking. Experiment 2 further established that the visibility of articulatory mouth movements is important to the visual enhancement of phonetic convergence. Furthermore, the word frequency and phonological neighborhood density characteristics of the shadowed words were found to significantly predict phonetic convergence in both experiments. Consistent with previous findings (e.g., Goldinger, Psychological Review, 105, 251-279, 1998), phonetic convergence was greater when shadowing low-frequency words. Convergence was also found to be greater for low-density words, contrasting with previous predictions of the effect of phonological neighborhood density on auditory phonetic convergence (e.g., Pardo, Jordan, Mallari, Scanlon, & Lewandowski, Journal of Memory and Language, 69, 183-195, 2013). Implications of the results for a gestural account of phonetic convergence are discussed.