Results 1 - 5 of 5
1.
Front Neurosci ; 17: 1235911, 2023.
Article in English | MEDLINE | ID: mdl-37841688

ABSTRACT

Listeners are routinely exposed to many different types of speech, including artificially enhanced and synthetic speech, styles which deviate to a greater or lesser extent from naturally spoken exemplars. While the impact of differing speech types on intelligibility is well studied, it is less clear how such types affect cognitive processing demands, and in particular whether the speech forms with the greatest intelligibility in noise impose a commensurately lower listening effort. The current study measured intelligibility, self-reported listening effort, and a pupillometry-based measure of cognitive load for four distinct types of speech: (i) plain, i.e., natural unmodified speech; (ii) Lombard speech, a naturally enhanced form which occurs when speaking in the presence of noise; (iii) artificially enhanced speech, which involves spectral shaping and dynamic range compression; and (iv) speech synthesized from text. In the first experiment, a cohort of 26 native listeners responded to the four speech types in three levels of speech-shaped noise. In a second experiment, 31 non-native listeners underwent the same procedure at more favorable signal-to-noise ratios, chosen because second-language listening in noise is more detrimental to intelligibility than listening in a first language. For both native and non-native listeners, artificially enhanced speech was the most intelligible and led to the lowest subjective effort ratings, while the reverse was true for synthetic speech. However, pupil data suggested that Lombard speech elicited the lowest processing demands overall. These outcomes indicate that the relationship between intelligibility and cognitive processing demands is not a simple inverse one, but is mediated by speech type. The findings of the current study motivate the search for speech modification algorithms that are optimized for both intelligibility and listening effort.
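The experiments above present each speech type in speech-shaped noise at fixed signal-to-noise ratios. As a minimal illustration of how a masker is typically scaled to reach a target SNR (not code from the study; `mix_at_snr` and the random stand-in waveforms are hypothetical), assuming NumPy:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals
    snr_db, then return the additive speech-plus-noise mixture."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain applied to the noise to hit the requested SNR in dB
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)    # stand-in for a speech waveform
noise = rng.standard_normal(16000)     # stand-in for speech-shaped noise
mix = mix_at_snr(speech, noise, -5.0)  # -5 dB SNR: noise louder than speech
```

Because the gain is derived directly from the measured power ratio, the achieved SNR matches the requested value; in an actual experiment, the inputs would be recorded speech and spectrally shaped noise rather than white-noise stand-ins.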

2.
Front Psychol ; 12: 623787, 2021.
Article in English | MEDLINE | ID: mdl-33679539

ABSTRACT

Earlier studies have shown that musically trained individuals may have an advantage in adverse listening situations compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (a sentence recall task), which reveals a post-task response, and online measures of real-time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the images of the target and competitor words as the level of the speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, but the time-course dynamics did differ between groups once the two-talker masker was added to the target signal. As the level of the two-talker masker increased, musicians showed reduced lexical competition, as indicated by gaze fixations to the competitor. The pupil dilation data showed differences mainly at one target-to-masker ratio, which does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increases. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or better sound processing.
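Fixation measures like those above are commonly summarized as the proportion of gaze samples landing on a region of interest within successive time bins. A minimal sketch of that summary step (the sample labels and bin count here are hypothetical, not the study's analysis), assuming NumPy:

```python
import numpy as np

def fixation_proportions(labels, roi, n_bins):
    """Proportion of gaze samples fixating `roi` within each of
    n_bins consecutive, roughly equal-sized time bins."""
    labels = np.asarray(labels)
    bins = np.array_split(labels, n_bins)
    return np.array([np.mean(b == roi) for b in bins])

# Toy per-sample gaze labels for one trial: 100 samples total
labels = ["other"] * 30 + ["target"] * 50 + ["competitor"] * 20
props = fixation_proportions(labels, "target", 5)
```

Curves of such per-bin proportions, averaged over trials, are what group time-course comparisons (e.g., musicians vs. non-musicians) are typically run on.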

3.
Trends Hear ; 23: 2331216519845596, 2019.
Article in English | MEDLINE | ID: mdl-31131729

ABSTRACT

Assessing effort in speech comprehension for hearing-impaired (HI) listeners is important, as effortful processing of speech can limit their hearing rehabilitation. We examined the capacity of pupil dilation to accommodate the heterogeneity present within clinical populations by studying lexical access in users with sensorineural hearing loss who perceive speech via cochlear implants (CIs). We compared the pupillary responses of 15 experienced CI users and 14 age-matched normal-hearing (NH) controls during auditory lexical decision. A growth curve analysis was applied to compare the responses between the groups. NH listeners showed a coherent pattern of pupil dilation that reflects the task demands of the experimental manipulation, and a homogeneous time course of dilation. CI listeners showed more variability in the morphology of their pupil dilation curves, potentially reflecting variable sources of effort across individuals. In follow-up analyses, we examined how speech perception, a task that relies on multiple stages of perceptual analysis, poses multiple sources of increased effort for HI listeners, such that we might not be measuring the same source of effort for HI as for NH listeners. We argue that interindividual variability among HI listeners can be clinically meaningful, attesting not only to the magnitude but also to the locus of increased effort. Understanding individual variations in effort requires experimental paradigms that (a) differentiate the task demands during speech comprehension, (b) capture the time course of pupil dilation for individual listeners, and (c) investigate the range of individual variability present within clinical and NH populations.
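The growth curve analysis mentioned above models the pupil response as a smooth function of time, typically with polynomial time terms whose coefficients are compared between groups. A much-simplified sketch of the idea (illustrative only; the synthetic trace and plain least-squares polynomial fit are assumptions, not the study's statistical model), assuming NumPy:

```python
import numpy as np

# Synthetic baseline-corrected pupil trace: dilation rises, then falls
t = np.linspace(0, 3, 91)                  # 3 s at ~30 Hz
pupil = 0.4 * t * np.exp(1 - t)            # change in mm, peaking near 1 s
pupil += np.random.default_rng(1).normal(0, 0.01, t.size)  # measurement noise

# Growth-curve-style fit: intercept, linear and quadratic time terms
coeffs = np.polyfit(t, pupil, deg=2)       # [quadratic, linear, intercept]
fitted = np.polyval(coeffs, t)
```

The negative quadratic term captures the characteristic rise-and-fall morphology; in a full analysis, such time terms would enter a mixed-effects model so that per-listener curve shapes (the heterogeneity discussed above) can be quantified.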


Subject(s)
Hearing Loss, Pupil, Adult, Aged, Auditory Perception, Cochlear Implantation, Cochlear Implants, Female, Hearing Loss/diagnosis, Hearing Loss, Sensorineural, Humans, Male, Middle Aged, Noise, Pupil/physiology, Speech Perception/physiology
4.
IEEE Trans Neural Syst Rehabil Eng ; 26(2): 392-399, 2018 02.
Article in English | MEDLINE | ID: mdl-29432110

ABSTRACT

Electroencephalographic (EEG) recordings provide objective estimates of listeners' cortical processing of sounds and of the status of their speech perception system. For profoundly deaf listeners with cochlear implants (CIs), the applications of EEG are limited because the device adds electrical artifacts to the recordings. This restricts the possibilities for neural-based metrics of speech processing by CI users, for instance to gauge cortical reorganization due to an individual's hearing loss history. This paper describes the characteristics of the CI artifact as recorded with an artificial head substitute, and reports how the artifact is affected by the properties of the acoustic input signal versus the settings of the device. METHODS: We created a brain substitute using agar, which simulates the brain's conductivity, placed it in a human skull, and performed EEG recordings with CIs from three different manufacturers. As stimuli, we used simple and complex non-speech stimuli, as well as naturally produced continuous speech. We examined the effect of manipulating device settings in both controlled experimental CI configurations and real clinical maps. RESULTS: Increasing the magnitude of the stimulation current through the device settings also increases the magnitude of the artifact. The artifact recorded in response to speech is smaller in magnitude than that for non-speech stimuli, owing to the amplitude modulations inherent in the speech signal. CONCLUSION: The CI EEG artifact for speech appears more difficult to detect than that for simple stimuli. Since the artifact differs across CI users, due to their individual clinical maps, the method presented enables insight into the individual manifestations of the artifact.
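The reported relation between stimulation current and artifact size can be illustrated with a toy RMS-magnitude computation (hypothetical signals and scaling; this is not the paper's measurement pipeline), assuming NumPy:

```python
import numpy as np

def artifact_rms(eeg_segment):
    """Root-mean-square amplitude of a recorded artifact segment."""
    x = np.asarray(eeg_segment, dtype=float)
    return np.sqrt(np.mean(x ** 2))

# Toy illustration of the trend: scaling the stimulation current scales
# the electrical artifact picked up at the recording electrodes.
t = np.linspace(0, 1, 1000)
artifact = np.sin(2 * np.pi * 900 * t)   # stand-in pulsatile artifact shape
low = artifact_rms(0.5 * artifact)       # weaker stimulation current
high = artifact_rms(2.0 * artifact)      # stronger stimulation current
```

Since RMS scales linearly with signal amplitude, a fourfold amplitude increase yields a fourfold RMS increase in this toy case; real artifact growth additionally depends on electrode geometry and the clinical map.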


Subject(s)
Acoustic Stimulation, Artifacts, Brain/physiology, Cochlear Implants, Electroencephalography/methods, Models, Neurological, Agar, Brain Mapping, Electric Conductivity, Evoked Potentials, Auditory, Humans, Models, Anatomic, Skull
5.
Front Psychol ; 7: 398, 2016.
Article in English | MEDLINE | ID: mdl-27065901

ABSTRACT

Understanding speech is effortless in ideal situations, and although adverse conditions, such as those caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is as yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners' ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected because of their substantial role in listeners' rapid limitation of the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, in particular in the presence of mismatching cues. Signal degradation reduced listeners' ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing and different sources of increased mental effort. We argue that the well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why, in ideal situations, speech perception is an undemanding task. Degradation of the signal or of the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has consequences for lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses.
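Pupillometric effort measures such as those used above are conventionally expressed relative to a pre-stimulus baseline, so that dilation reflects task-evoked effort rather than absolute pupil size. A minimal baseline-correction sketch (the sampling rate, window length, and synthetic trace are assumptions, not details from the study), assuming NumPy:

```python
import numpy as np

def baseline_correct(trace, fs, baseline_s=0.5):
    """Subtract the mean pupil size in the pre-stimulus baseline window,
    so later samples express dilation relative to that baseline."""
    n_base = int(round(baseline_s * fs))
    trace = np.asarray(trace, dtype=float)
    return trace - trace[:n_base].mean()

fs = 60                                    # assumed eye-tracker sampling rate
raw = np.concatenate([np.full(30, 4.2),    # 0.5 s baseline at 4.2 mm
                      np.full(90, 4.5)])   # dilation to 4.5 mm after onset
corrected = baseline_correct(raw, fs)
```

After correction, the baseline sits at zero and the post-onset samples directly read out the evoked dilation, which is what trial-averaged effort comparisons are computed from.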
