Results 1 - 10 of 10
1.
Neurobiol Lang (Camb) ; 5(2): 432-453, 2024.
Article in English | MEDLINE | ID: mdl-38911458

ABSTRACT

Research points to neurofunctional differences underlying fluent speech between stutterers and non-stutterers. Considerably less work has focused on processes that underlie stuttered vs. fluent speech. Additionally, most of this research has focused on speech motor processes despite contributions from cognitive processes prior to the onset of stuttered speech. We used MEG to test the hypothesis that reactive inhibitory control is triggered prior to stuttered speech. Twenty-nine stutterers completed a delayed-response task that featured a cue (prior to a go cue) signaling the imminent requirement to produce a word that was either stuttered or fluent. Consistent with our hypothesis, we observed increased beta power likely emanating from the right pre-supplementary motor area (R-preSMA)-an area implicated in reactive inhibitory control-in response to the cue preceding stuttered vs. fluent productions. Beta power differences between stuttered and fluent trials correlated with stuttering severity, and participants' percentage of trials stuttered increased exponentially with beta power in the R-preSMA. Trial-by-trial beta power modulations in the R-preSMA following the cue predicted whether a trial would be stuttered or fluent. Stuttered trials were also associated with delayed speech onset, suggesting an overall slowing or freezing of the speech motor system that may be a consequence of inhibitory control. Post-hoc analyses revealed that independently generated anticipated words were associated with greater beta power and more stuttering than researcher-assisted anticipated words, pointing to a relationship between self-perceived likelihood of stuttering (i.e., anticipation) and inhibitory control. This work offers a neurocognitive account of stuttering by characterizing cognitive processes that precede overt stuttering events.
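The trial-by-trial result above (cue-evoked beta power in the R-preSMA predicting whether the upcoming word will be stuttered or fluent) can be illustrated with a minimal sketch on simulated data. The effect size, variable names, and classifier below are invented for illustration and are not the authors' MEG pipeline.

```python
# Minimal sketch (simulated data): does cue-evoked beta power predict
# whether the upcoming trial is stuttered or fluent?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120

# Hypothetical single-trial beta power (z-scored) in the R-preSMA after the cue.
beta_power = rng.normal(0.0, 1.0, n_trials)

# Simulate the reported direction of the effect: higher beta power ->
# higher probability of stuttering on that trial.
p_stutter = 1.0 / (1.0 + np.exp(-(0.8 * beta_power - 0.2)))
stuttered = rng.binomial(1, p_stutter)          # 1 = stuttered, 0 = fluent

# Trial-by-trial classification, cross-validated.
clf = LogisticRegression()
acc = cross_val_score(clf, beta_power.reshape(-1, 1), stuttered, cv=5)
print(f"mean CV accuracy: {acc.mean():.2f}")
```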

2.
Adv Exp Med Biol ; 1455: 257-274, 2024.
Article in English | MEDLINE | ID: mdl-38918356

ABSTRACT

Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.


Subject(s)
Speech, Humans, Speech/physiology, Speech Perception/physiology, Speech Acoustics, Periodicity
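As an illustration of the temporal structure the chapter reviews, the sketch below extracts the amplitude envelope of a speech-like signal and inspects its modulation spectrum, where natural speech typically concentrates energy around the 2-8 Hz syllabic rate. The signal is synthetic and the 5 Hz modulation rate is an assumption made for the example.

```python
# Sketch: amplitude envelope and modulation spectrum of a speech-like signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000                      # audio sampling rate (Hz)
t = np.arange(0, 5.0, 1.0 / fs)

# Synthetic stand-in for speech: noise carrier modulated at a ~5 Hz "syllable" rate.
carrier = np.random.default_rng(0).normal(size=t.size)
envelope_true = 0.5 * (1.0 + np.sin(2 * np.pi * 5.0 * t))
signal = carrier * envelope_true

# Amplitude envelope via the Hilbert transform, low-pass filtered below 20 Hz.
env = np.abs(hilbert(signal))
b, a = butter(4, 20.0 / (fs / 2), btype="low")
env = filtfilt(b, a, env)

# Modulation spectrum: FFT of the mean-removed envelope.
spectrum = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
peak = freqs[np.argmax(spectrum[freqs < 20])]
print(f"dominant envelope modulation: {peak:.1f} Hz")
```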
3.
Cognition ; 245: 105737, 2024 04.
Article in English | MEDLINE | ID: mdl-38342068

ABSTRACT

Phonological statistical learning - our ability to extract meaningful regularities from spoken language - is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable but only when prior knowledge can also be used. This suggests an additional mechanism for learning based on predictions not only about when but also about what upcoming speech will be.


Subject(s)
Individuality, Speech Perception, Humans, Learning, Linguistics, Language Development, Speech
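A minimal sketch of the Saffran-style familiarization stream used in this line of work: trisyllabic pseudowords are concatenated in random order, so word boundaries are recoverable only from syllable-to-syllable transitional probabilities; syllable onsets can be kept isochronous or jittered as a stand-in for the rhythmicity manipulation. The pseudowords, timing values, and jitter range are invented for illustration.

```python
# Sketch: transitional probabilities in a Saffran-style syllable stream,
# with isochronous or jittered syllable onsets (invented example words).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku"), ("pa", "do", "ti")]

# Concatenate 200 randomly ordered words (constraints such as "no immediate
# repeats" are ignored here for brevity).
stream = [s for _ in range(200) for s in words[rng.integers(len(words))]]

# Syllable onsets: isochronous (300 ms) vs. jittered (+/- 100 ms).
iso_onsets = np.arange(len(stream)) * 0.300
jit_onsets = iso_onsets + rng.uniform(-0.100, 0.100, len(stream))

# Forward transitional probabilities P(next | current).
pair_counts, syl_counts = defaultdict(int), defaultdict(int)
for a, b in zip(stream[:-1], stream[1:]):
    pair_counts[(a, b)] += 1
    syl_counts[a] += 1
tp = {pair: c / syl_counts[pair[0]] for pair, c in pair_counts.items()}

print("within-word TP  P(pi|tu):", round(tp.get(("tu", "pi"), 0.0), 2))   # ~1.0
print("boundary TP     P(go|ro):", round(tp.get(("ro", "go"), 0.0), 2))   # ~0.25
```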
4.
J Speech Lang Hear Res ; 66(5): 1631-1638, 2023 05 09.
Article in English | MEDLINE | ID: mdl-37059075

ABSTRACT

PURPOSE: Most neural and physiological research on stuttering focuses on the fluent speech of speakers who stutter due to the difficulty associated with eliciting stuttering reliably in the laboratory. We previously introduced an approach to elicit stuttered speech in the laboratory in adults who stutter. The purpose of this study was to determine whether that approach reliably elicits stuttering in school-age children and teenagers who stutter (CWS/TWS). METHOD: Twenty-three CWS/TWS participated. A clinical interview was used to identify participant-specific anticipated and unanticipated words in CWS and TWS. Two tasks were administered: (a) a delayed word reading task in which participants read words and produced them after a 5-s delay and (b) a delayed response question task in which participants responded to examiner questions after a 5-s delay. Two CWS and eight TWS completed the reading task; six CWS and seven TWS completed the question task. Trials were coded as unambiguously fluent, ambiguous, and unambiguously stuttered. RESULTS: The method yielded, at a group level, a near-equal distribution of unambiguously stuttered and fluent utterances: 42.5% and 45.1%, respectively, in the reading task and 40.5% and 51.4%, respectively, in the question task. CONCLUSIONS: The method presented in this article elicited a comparable amount of unambiguously stuttered and fluent trials in CWS and TWS, at a group level, during two different word production tasks. The inclusion of different tasks supports the generalizability of our approach, which can be used to elicit stuttering in studies that aim to unravel the neural and physiological bases that underlie stuttered speech.


Subject(s)
Stuttering, Adult, Child, Humans, Adolescent, Speech/physiology, Schools, Speech Production Measurement, Reading
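A small sketch of the group-level trial-coding summary reported above, using hypothetical trial codes in place of the coded data.

```python
# Sketch: distribution of trial codes across a task (hypothetical codes).
from collections import Counter

trial_codes = ["stuttered"] * 40 + ["fluent"] * 45 + ["ambiguous"] * 15
counts = Counter(trial_codes)
total = sum(counts.values())
for code in ("stuttered", "fluent", "ambiguous"):
    print(f"{code:>10}: {100 * counts[code] / total:.1f}%")
```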
5.
PLoS Biol ; 20(7): e3001712, 2022 07.
Article in English | MEDLINE | ID: mdl-35793349

ABSTRACT

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory-motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory-motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.


Subject(s)
Speech Perception, Speech, Brain Mapping, Humans, Magnetic Resonance Imaging, Speech/physiology, Speech Perception/physiology
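A minimal sketch of the spatial-ICA step described above, run on simulated voxel time series rather than real fMRI data; the component count, dimensions, and noise level are arbitrary assumptions, not the study's pipeline.

```python
# Sketch: ICA decomposition of fMRI-like data into spatial networks (simulated).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_timepoints, n_voxels, n_components = 200, 500, 5

# Simulate voxel time series as a mixture of a few network time courses plus noise.
network_timecourses = rng.normal(size=(n_timepoints, n_components))
spatial_maps = rng.laplace(size=(n_components, n_voxels))   # sparse-ish maps
data = network_timecourses @ spatial_maps + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Spatial ICA: treat voxels as observations to recover independent spatial maps.
ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
estimated_maps = ica.fit_transform(data.T).T      # (components, voxels)
mixing_timecourses = ica.mixing_                  # (timepoints, components)
print(estimated_maps.shape, mixing_timecourses.shape)
```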
6.
STAR Protoc ; 3(2): 101248, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35310080

ABSTRACT

The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: the pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).


Subject(s)
Acoustic Stimulation, Speech, Humans
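A hedged sketch of the classification idea behind such a protocol: compute the phase-locking value (PLV) between the participant's produced-speech envelope and the perceived syllable rhythm, then threshold it. The 3.5-5.5 Hz band, the 4.5 Hz stimulus rate, the simulated envelopes, and the 0.5 cutoff are illustrative assumptions, not the published criteria.

```python
# Sketch: classify a participant as a high/low speech synchronizer from the
# phase-locking value (PLV) between produced and perceived syllable rhythms.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(3.5, 5.5)):
    """Phase-locking value between two signals in a narrow band."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 100                                     # envelope sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
stimulus_env = np.sin(2 * np.pi * 4.5 * t)   # 4.5 syllables/s stimulus rhythm

# Hypothetical produced-speech envelope: partially entrained to the stimulus.
rng = np.random.default_rng(3)
produced_env = np.sin(2 * np.pi * 4.5 * t + 0.4) + 0.8 * rng.normal(size=t.size)

score = plv(produced_env, stimulus_env, fs)
print(f"PLV = {score:.2f} ->", "high synchronizer" if score > 0.5 else "low synchronizer")
```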
7.
PLoS Biol ; 19(9): e3001119, 2021 09.
Article in English | MEDLINE | ID: mdl-34491980

ABSTRACT

Statistical learning (SL) is the ability to extract regularities from the environment. In the domain of language, this ability is fundamental in the learning of words and structural rules. In the absence of reliable online measures, statistical word and rule learning have been primarily investigated using offline (post-familiarization) tests, which give limited insight into the dynamics of SL and its neural basis. Here, we capitalize on a novel task that tracks the online SL of simple syntactic structures combined with computational modeling to show that online SL responds to reinforcement learning principles rooted in striatal function. Specifically, we demonstrate-on 2 different cohorts-that a temporal difference model, which relies on prediction errors, accounts for participants' online learning behavior. We then show that the trial-by-trial development of predictions through learning strongly correlates with activity in both ventral and dorsal striatum. Our results thus provide a detailed mechanistic account of language-related SL and an explanation for the oft-cited implication of the striatum in SL tasks. This work, therefore, bridges the long-standing gap between language learning and reinforcement learning phenomena.


Subject(s)
Corpus Striatum/physiology, Language Development, Probability Learning, Reinforcement, Psychology, Corpus Striatum/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Pattern Recognition, Physiological, Young Adult
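As an illustration of learning driven by prediction errors, the sketch below implements a simplified delta-rule learner rather than the authors' full temporal difference model; the learning rate, outcome probability, and trial count are invented for the example.

```python
# Sketch: a simplified prediction-error (delta-rule) learner as a stand-in for
# a temporal difference model of online statistical learning.
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.2                      # learning rate (illustrative)
n_trials = 60

# Hypothetical outcomes: 1 if the rule-consistent continuation occurs, else 0.
outcomes = rng.binomial(1, 0.8, n_trials)

value = 0.0                      # learned prediction of the rule-consistent continuation
values, prediction_errors = [], []
for outcome in outcomes:
    delta = outcome - value      # prediction error
    value += alpha * delta       # update the prediction toward the observed outcome
    values.append(value)
    prediction_errors.append(delta)

print(f"final learned value: {value:.2f}")
print("mean |prediction error|, first vs last 10 trials: "
      f"{np.mean(np.abs(prediction_errors[:10])):.2f} vs "
      f"{np.mean(np.abs(prediction_errors[-10:])):.2f}")
```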
8.
Cereb Cortex ; 31(5): 2505-2522, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33338212

ABSTRACT

Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic for anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach, using magnetoencephalography resting state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.


Subject(s)
Auditory Pathways/physiopathology, Blindness/physiopathology, Frontal Lobe/physiopathology, Visual Pathways/physiopathology, Adult, Auditory Pathways/diagnostic imaging, Auditory Pathways/physiology, Blindness/diagnostic imaging, Diffusion Tensor Imaging, Female, Frontal Lobe/diagnostic imaging, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Neuronal Plasticity/physiology, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Occipital Lobe/physiopathology, Visual Pathways/diagnostic imaging, Visual Pathways/physiology, White Matter/diagnostic imaging, White Matter/physiology, White Matter/physiopathology, Young Adult
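A minimal sketch of the general approach of clustering channels (or parcels) by the shape of their resting-state spectral profiles, run on simulated data; the rhythms, channel groupings, and two-cluster solution are assumptions for illustration, not the study's pipeline.

```python
# Sketch: clustering resting-state spectral profiles (simulated channels).
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
fs, n_channels, n_samples = 250, 40, 250 * 120    # 2 minutes of "rest" per channel

# Simulate two channel groups: one with a 10 Hz (alpha) rhythm, one with 20 Hz (beta).
data = rng.normal(size=(n_channels, n_samples))
t = np.arange(n_samples) / fs
data[:20] += 2.0 * np.sin(2 * np.pi * 10 * t)     # "occipital-like" channels
data[20:] += 2.0 * np.sin(2 * np.pi * 20 * t)     # "frontal-like" channels

# Spectral profile per channel: normalized Welch power spectrum, 1-45 Hz.
freqs, psd = welch(data, fs=fs, nperseg=2 * fs)
band = (freqs >= 1) & (freqs <= 45)
profiles = psd[:, band] / psd[:, band].sum(axis=1, keepdims=True)

# Cluster channels by the shape of their spectral profile.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print("cluster labels:", labels)
```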
9.
PLoS Biol ; 18(11): e3000895, 2020 11.
Article in English | MEDLINE | ID: mdl-33137084

ABSTRACT

A crucial aspect when learning a language is discovering the rules that govern how words are combined in order to convey meanings. Because rules are characterized by sequential co-occurrences between elements (e.g., "These cupcakes are unbelievable"), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enable rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) on participants' peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither the absence of stimulation nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that this rTMS on the parietal site hindered participants' ability to integrate "what" (stimulus identity) and "when" (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention-involving left parietal regions-integrates "what" and "when" stimulus information to facilitate rapid rule generalization.


Subject(s)
Attention/physiology, Learning/physiology, Parietal Lobe/physiology, Adult, Brain/physiology, Brain Mapping/methods, Cognition/physiology, Female, Frontal Lobe/physiology, Functional Laterality/physiology, Humans, Language, Linguistics/methods, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods, Reaction Time/physiology, Transcranial Magnetic Stimulation/methods, Young Adult
10.
Nat Neurosci ; 22(4): 627-632, 2019 04.
Article in English | MEDLINE | ID: mdl-30833700

ABSTRACT

We introduce a deceptively simple behavioral task that robustly identifies two qualitatively different groups within the general population. When presented with an isochronous train of random syllables, some listeners are compelled to align their own concurrent syllable production with the perceived rate, whereas others remain impervious to the external rhythm. Using both neurophysiological and structural imaging approaches, we show group differences with clear consequences for speech processing and language learning. When listening passively to speech, high synchronizers show increased brain-to-stimulus synchronization over frontal areas, and this localized pattern correlates with precise microstructural differences in the white matter pathways connecting frontal to auditory regions. Finally, the data expose a mechanism that underpins performance on an ecologically relevant word-learning task. We suggest that this task will help to better understand and characterize individual performance in speech processing and language learning.


Subject(s)
Brain/anatomy & histology, Brain/physiology, Language, Learning/physiology, Speech Perception/physiology, Speech, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Individuality, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Neural Pathways/anatomy & histology, Neural Pathways/physiology
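A minimal sketch of brain-to-stimulus synchronization quantified as spectral coherence between a (simulated) frontal MEG channel and the speech envelope at the syllable rate; the signals, rates, and parameters are illustrative assumptions, not the published analysis.

```python
# Sketch: brain-to-stimulus synchronization as coherence between a simulated
# frontal MEG channel and the speech envelope at the syllable rate.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs = 200
t = np.arange(0, 120, 1 / fs)
syllable_rate = 4.5                                   # syllables per second

speech_env = np.sin(2 * np.pi * syllable_rate * t)    # stimulus envelope
meg_high = 0.6 * np.sin(2 * np.pi * syllable_rate * t + 0.3) + rng.normal(size=t.size)
meg_low = rng.normal(size=t.size)                     # no entrainment

for label, meg in (("high synchronizer", meg_high), ("low synchronizer", meg_low)):
    f, cxy = coherence(meg, speech_env, fs=fs, nperseg=4 * fs)
    idx = np.argmin(np.abs(f - syllable_rate))
    print(f"{label}: coherence at {syllable_rate} Hz = {cxy[idx]:.2f}")
```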