Results 1 - 20 of 30,656
1.
Transl Vis Sci Technol ; 13(9): 8, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39235398

ABSTRACT

Purpose: Crowding is the inability to distinguish objects in the periphery in the presence of clutter. Previous studies showed that crowding is elevated in patients with glaucoma. This could serve as an indicator of the functional visual performance of patients with glaucoma but at present appears too time-consuming and attentionally demanding. We examined visual crowding in individuals with preperimetric glaucoma to compare the potential effectiveness of eye movement-based and manual response paradigms. Methods: We assessed crowding magnitude in 10 participants with preperimetric glaucoma and 10 age-matched controls. Crowding magnitudes were assessed using four different paradigms: a conventional two-alternative forced choice (2AFC) manual, a 2AFC and a six-alternative forced choice (6AFC) eye movement, and a serial search paradigm. All paradigms measured crowding magnitude by comparing participants' orientation discrimination thresholds in isolated and flanked stimulus conditions. Moreover, assessment times and participant preferences were compared across paradigms. Results: Patients with preperimetric glaucoma exhibited elevated crowding, which was most evident in the manual-response paradigm. The serial search paradigm emerged as the fastest method for assessing thresholds, yet it could not effectively distinguish between glaucoma and control groups. The 6AFC paradigm proved challenging for both groups. Conclusions: We conclude that patients with preperimetric glaucoma demonstrate heightened binocular visual crowding. This is most effectively demonstrated via the 2AFC manual response paradigm. The additional attentional demand in eye movement paradigms rendered them less effective in the elderly population of the present study. Translational Relevance: Our findings underscore both the value and the complexity of efficiently evaluating crowding in elderly participants, including those with preperimetric glaucoma.
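To make the threshold-based definition of crowding concrete, the minimal sketch below fits a psychometric function to hypothetical 2AFC data and expresses crowding magnitude as the ratio of flanked to isolated orientation-discrimination thresholds. The Weibull parameterization, the 75%-correct criterion, and all data values are assumptions, not the authors' procedure.

```python
# Illustrative sketch (not the study's code): estimating 2AFC orientation-discrimination
# thresholds and expressing crowding as the flanked/isolated threshold ratio.
import numpy as np
from scipy.optimize import curve_fit

def weibull(x, alpha, beta, gamma=0.5, lam=0.02):
    """2AFC Weibull psychometric function (gamma = guess rate, lam = lapse rate)."""
    return gamma + (1 - gamma - lam) * (1 - np.exp(-(x / alpha) ** beta))

def threshold_75(offsets_deg, prop_correct):
    """Fit a Weibull and return the orientation offset at 75% correct."""
    (alpha, beta), _ = curve_fit(weibull, offsets_deg, prop_correct,
                                 p0=[np.median(offsets_deg), 2.0],
                                 bounds=([1e-3, 0.5], [90, 10]))
    # invert the fitted function at the 75%-correct point
    return alpha * (-np.log(1 - (0.75 - 0.5) / (1 - 0.5 - 0.02))) ** (1 / beta)

# Hypothetical per-condition data: orientation offsets (deg) and proportion correct
offsets = np.array([1, 2, 4, 8, 16, 32], dtype=float)
p_isolated = np.array([0.55, 0.62, 0.78, 0.90, 0.97, 0.99])
p_flanked  = np.array([0.52, 0.55, 0.63, 0.76, 0.90, 0.97])

crowding_magnitude = threshold_75(offsets, p_flanked) / threshold_75(offsets, p_isolated)
print(f"crowding magnitude (flanked / isolated threshold): {crowding_magnitude:.2f}")
```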


Subject(s)
Eye Movements, Glaucoma, Visual Fields, Humans, Male, Female, Middle Aged, Eye Movements/physiology, Aged, Glaucoma/physiopathology, Glaucoma/diagnosis, Visual Fields/physiology, Visual Field Tests/methods, Visual Acuity/physiology, Sensory Thresholds/physiology
2.
Cogn Sci ; 48(9): e13489, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39226191

ABSTRACT

In isolated English word reading, readers perform best when their initial eye fixation is directed to the area between the word's beginning and its center, that is, the optimal viewing position (OVP). Thus, how well readers voluntarily direct their gaze to this OVP during isolated word reading may be associated with reading performance. Using Eye Movement analysis with Hidden Markov Models, we discovered through clustering two representative eye movement patterns during lexical decisions, which focused on the OVP and the word center, respectively. Higher eye movement similarity to the OVP-focusing pattern predicted faster lexical decision times over and above cognitive abilities and lexical knowledge. However, the OVP-focusing pattern was associated with longer isolated single-letter naming times, suggesting that identifying isolated letters and multi-letter words requires conflicting visual abilities. In contrast, in both word and pseudoword naming, although clustering did not reveal an OVP-focused pattern, higher consistency of the first fixation, as measured by entropy, predicted faster naming times over and above cognitive abilities and lexical knowledge. Thus, developing a consistent eye movement pattern focused on the OVP is essential for word orthographic processing and reading fluency. This finding has important implications for interventions for reading difficulties.
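As a concrete illustration of the entropy measure of first-fixation consistency mentioned above, the sketch below bins normalized landing positions and computes Shannon entropy; the bin count and the simulated readers are assumptions, not the study's pipeline.

```python
# Illustrative sketch: lower entropy of first-fixation landing positions = more consistent
# initial fixations on the word.
import numpy as np

def landing_position_entropy(first_fix_x, n_bins=10):
    """Shannon entropy (bits) of first-fixation positions, x normalized to [0, 1]
    where 0 is the word beginning and 1 the word end."""
    counts, _ = np.histogram(first_fix_x, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical data: normalized first-fixation positions across trials for two readers
rng = np.random.default_rng(0)
consistent_reader = rng.normal(0.4, 0.05, 200).clip(0, 1)   # tight cluster near the OVP
variable_reader   = rng.uniform(0.0, 1.0, 200)              # scattered landings

print(landing_position_entropy(consistent_reader))  # low entropy
print(landing_position_entropy(variable_reader))    # high entropy
```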


Subject(s)
Eye Movements, Markov Chains, Reading, Humans, Eye Movements/physiology, Young Adult, Female, Male, Fixation, Ocular/physiology, Adult, Reaction Time/physiology, Language
3.
PLoS One ; 19(9): e0308642, 2024.
Article in English | MEDLINE | ID: mdl-39283837

ABSTRACT

Intercepting moving targets is a fundamental skill in human behavior, influencing various domains such as sports, gaming, and other activities. In these contexts, precise visual processing and motor control are crucial for adapting and navigating effectively. Nevertheless, there are still some gaps in our understanding of how these elements interact while intercepting a moving target. This study explored the dynamic interplay among eye movements, pupil size, and interceptive hand movements, with visual and motion uncertainty factors. We developed a simple visuomotor task in which participants used a joystick to interact with a computer-controlled dot that moved along two-dimensional trajectories. This virtual system provided the flexibility to manipulate the target's speed and directional uncertainty during chase trials. We then conducted a geometric analysis based on optimal angles for each behavior, enabling us to distinguish between simple tracking and predictive trajectories that anticipate future positions of the moving target. Our results revealed the adoption of a strong interception strategy as participants approached the target. Notably, the onset and amount of optimal interception strategy depended on task parameters, such as the target's speed and frequency of directional changes. Furthermore, eye-tracking data showed that participants continually adjusted their gaze speed and position, continuously adapting to the target's movements. Finally, in successful trials, pupillary responses predicted the amount of optimal interception strategy while exhibiting an inverse relationship in trials without collisions. These findings reveal key interactions among visuomotor parameters that are crucial for solving complex interception tasks.
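A minimal sketch of the kind of geometric analysis described above: it compares the observed cursor heading with a pursuit angle (aimed at the target's current position) and an interception angle (aimed at a constant-velocity extrapolation of the target). The extrapolation horizon and the 0-1 index are assumptions, not the authors' exact definitions.

```python
# Illustrative sketch: 0 = pure pursuit of the current target position,
# 1 = heading at the target's predicted future position.
import numpy as np

def angle(v):
    return np.arctan2(v[1], v[0])

def interception_index(cursor_pos, cursor_vel, target_pos, target_vel, horizon=0.5):
    pursuit_dir = angle(target_pos - cursor_pos)
    predicted = target_pos + target_vel * horizon            # constant-velocity prediction
    intercept_dir = angle(predicted - cursor_pos)
    heading = angle(cursor_vel)
    denom = intercept_dir - pursuit_dir                      # angle wrap-around ignored here
    if np.isclose(denom, 0.0):
        return 0.0
    return float(np.clip((heading - pursuit_dir) / denom, 0.0, 1.0))

# Hypothetical sample: target moving upward at (1, 0), cursor aiming ahead of it
print(interception_index(np.array([0.0, 0.0]), np.array([1.0, 0.45]),
                         np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```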


Subject(s)
Eye Movements, Psychomotor Performance, Humans, Male, Female, Psychomotor Performance/physiology, Adult, Eye Movements/physiology, Young Adult, Pupil/physiology, Motion Perception/physiology, Eye-Tracking Technology, Hand/physiology, Movement/physiology
4.
Nat Commun ; 15(1): 7964, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39261491

ABSTRACT

Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
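The sketch below illustrates the general "plug-and-play" recipe the abstract alludes to: alternating a likelihood gradient step with a denoising step that stands in for the natural-image prior. The toy linear-Gaussian encoding model and Gaussian-blur denoiser are assumptions used only to make the loop runnable; they are not the fitted RGC likelihood or the trained denoising network.

```python
# Illustrative sketch of likelihood-plus-denoiser reconstruction (not the published method).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
true_img = gaussian_filter(rng.standard_normal((32, 32)), 2)   # toy smooth "natural" image
W = rng.standard_normal((300, true_img.size)) / 32             # toy linear receptive fields
responses = W @ true_img.ravel() + 0.1 * rng.standard_normal(300)

def likelihood_grad(img):
    """Gradient of a Gaussian log-likelihood of responses given the image."""
    return (W.T @ (responses - W @ img.ravel())).reshape(img.shape)

def denoise(img, sigma=1.0):
    """Stand-in for a learned denoiser encoding the image prior."""
    return gaussian_filter(img, sigma)

img = np.zeros_like(true_img)
for _ in range(100):
    img = img + 1e-2 * likelihood_grad(img)   # data-fidelity (likelihood) step
    img = denoise(img)                        # prior / regularization step

print(np.corrcoef(img.ravel(), true_img.ravel())[0, 1])  # crude reconstruction quality
```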


Subject(s)
Eye Movements, Fixation, Ocular, Retina, Retinal Ganglion Cells, Animals, Retinal Ganglion Cells/physiology, Retina/physiology, Eye Movements/physiology, Male, Fixation, Ocular/physiology, Macaca mulatta, Bayes Theorem, Algorithms, Action Potentials/physiology, Photic Stimulation, Models, Neurological
5.
Sci Rep ; 14(1): 21461, 2024 09 13.
Article in English | MEDLINE | ID: mdl-39271749

ABSTRACT

The analysis of eye movements has proven valuable for understanding brain function and the neuropathology of various disorders. This research aims to use eye movement data analysis as a screening tool for differentiating between eight groups of pathologies, including scholar, neurologic, and postural disorders. Leveraging a dataset from 20 clinical centers, all employing AIDEAL and REMOBI eye movement technologies, this study extends prior research by considering a multi-annotation setting, incorporating information from recordings of saccade and vergence eye movement tests, and using contextual information (e.g., target signals, the latency of the eye movement relative to the target, and the confidence level of the eye movement recording quality) to improve accuracy while reducing noise interference. Additionally, we introduce a novel hybrid architecture that combines the weight-sharing property of convolution layers with the long-range capabilities of the transformer architecture to improve model efficiency and reduce the computational cost by a factor of 3.36 while remaining competitive in terms of macro F1 score. Evaluated on two diverse datasets, our method demonstrates promising results, with the most powerful discrimination being the Attention & Neurologic disorder group, reaching a macro F1 score of up to 78.8%. The results indicate the effectiveness of our approach in accurately classifying eye movement data from different pathologies and different clinical centers, thus enabling the creation of an assistive tool in the future.
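To illustrate the hybrid idea of combining weight-sharing convolutions with a transformer's long-range attention, here is a minimal PyTorch sketch for multichannel eye-movement time series. The channel count, layer sizes, and eight-class output are assumptions, not the published architecture.

```python
# Illustrative sketch of a convolution + transformer hybrid classifier.
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    def __init__(self, in_channels=4, d_model=64, n_classes=8):
        super().__init__()
        self.conv = nn.Sequential(                       # local features + 4x downsampling
            nn.Conv1d(in_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        z = self.conv(x).transpose(1, 2)  # -> (batch, time/4, d_model)
        z = self.encoder(z)               # long-range dependencies across the sequence
        return self.head(z.mean(dim=1))   # average pool over time, then classify

logits = ConvTransformerClassifier()(torch.randn(2, 4, 512))
print(logits.shape)  # torch.Size([2, 8])
```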


Subject(s)
Eye Movements, Humans, Eye Movements/physiology, Saccades/physiology, Data Analysis, Nervous System Diseases/diagnosis, Male
6.
PLoS One ; 19(9): e0309998, 2024.
Article in English | MEDLINE | ID: mdl-39241025

ABSTRACT

The subjective feeling of being the author of one's actions and the subsequent consequences is referred to as a sense of agency. Such a feeling is crucial for usability in human-computer interactions, where eye movement has been adopted, yet this area has been scarcely investigated. We examined how the temporal action-feedback discrepancy affects the sense of agency concerning eye movement. Participants conducted a visual search for an array of nine Chinese characters within a temporally-delayed gaze-contingent display, blurring the peripheral view. The relative delay between each eye movement and the subsequent window movement varied from 0 to 4,000 ms. In the control condition, the window played a recorded gaze behavior. The mean authorship rating and the proportion of "self" responses in the categorical authorship report ("self," "delayed self," and "other") gradually decreased as the temporal discrepancy increased, with "other" being rarely reported, except in the control condition. These results generally mirror those of prior studies on hand actions, suggesting that sense of agency extends beyond the effector body parts to other modalities, and two different types of sense of agency that have different temporal characteristics are simultaneously operating. The mode of fixation duration shifted as the delay increased under 200-ms delays and was divided into two modes at 200-500 ms delays. The frequency of 0-1.5° saccades exhibited an increasing trend as the delay increased. These results demonstrate the influence of perceived action-effect discrepancy on action refinement and task strategy.
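A minimal sketch of the mechanism behind the delay manipulation: gaze samples are buffered and released after a fixed lag to drive the gaze-contingent window. The sample format and polling scheme are assumptions, not the experiment code.

```python
# Illustrative sketch of a fixed-lag buffer driving a gaze-contingent window.
from collections import deque

class DelayedGazeWindow:
    def __init__(self, delay_ms):
        self.delay_ms = delay_ms
        self.buffer = deque()          # (timestamp_ms, (x, y)) samples, oldest first
        self.window_pos = (0.0, 0.0)

    def update(self, t_ms, gaze_xy):
        """Feed the current gaze sample; return the window position delayed by delay_ms."""
        self.buffer.append((t_ms, gaze_xy))
        # release every buffered sample that is at least `delay_ms` old
        while self.buffer and self.buffer[0][0] <= t_ms - self.delay_ms:
            _, self.window_pos = self.buffer.popleft()
        return self.window_pos

# Usage with a hypothetical 500 ms delay and 1 kHz gaze samples
win = DelayedGazeWindow(delay_ms=500)
for t in range(0, 1000):
    pos = win.update(t, (t * 0.1, 0.0))   # gaze drifting rightward
print(pos)  # the window lags the most recent gaze sample by ~500 ms
```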


Subject(s)
Eye Movements, Fixation, Ocular, Humans, Male, Female, Fixation, Ocular/physiology, Young Adult, Eye Movements/physiology, Adult, Time Factors, Psychomotor Performance/physiology, Saccades/physiology
7.
J Vis ; 24(9): 2, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226068

ABSTRACT

Our aim in this study was to understand how we perform visuospatial comparison tasks by analyzing ocular behavior and to examine how restrictions in macular or peripheral vision disturb ocular behavior and task performance. Two groups of 18 healthy participants with normal or corrected visual acuity performed visuospatial comparison tasks (computerized version of the elementary visuospatial perception [EVSP] test) (Pisella et al., 2013) with a gaze-contingent mask simulating either tubular vision (first group) or macular scotoma (second group). After these simulations of pathological conditions, all participants also performed the EVSP test in full view, enabling direct comparison of their oculomotor behavior and performance. In terms of oculomotor behavior, compared with the full view condition, alternation saccades between the two objects to compare were less numerous in the absence of peripheral vision, whereas the number of within-object exploration saccades decreased in the absence of macular vision. The absence of peripheral vision did not affect accuracy except for midline judgments, but the absence of central vision impaired accuracy across all visuospatial subtests. Besides confirming the crucial role of the macula for visuospatial comparison tasks, these experiments provided important insights into how sensory disorder modifies oculomotor behavior with or without consequences on performance accuracy.
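For illustration, the sketch below builds a gaze-contingent visibility mask in which a disc around the current gaze position is either kept (simulated tubular vision) or removed (simulated macular scotoma). The mask radius and raster representation are assumptions, not the EVSP implementation.

```python
# Illustrative sketch of a gaze-contingent visibility mask.
import numpy as np

def gaze_mask(shape, gaze_xy, radius_px, mode="tubular"):
    """Return a boolean mask of visible pixels around the current gaze position."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    return inside if mode == "tubular" else ~inside   # scotoma hides the central disc

mask = gaze_mask((600, 800), gaze_xy=(400, 300), radius_px=120, mode="scotoma")
print(mask.mean())  # proportion of the display left visible
```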


Subject(s)
Saccades, Scotoma, Space Perception, Visual Acuity, Humans, Male, Female, Adult, Scotoma/physiopathology, Visual Acuity/physiology, Space Perception/physiology, Saccades/physiology, Young Adult, Visual Fields/physiology, Macula Lutea, Eye Movements/physiology
8.
J Vis ; 24(9): 1, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226069

ABSTRACT

Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
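A small sketch of the two kinematic measures discussed above, path length and mean movement speed, computed from sampled 3D head positions; the sampling rate and trajectory are hypothetical.

```python
# Illustrative sketch: trajectory length and mean speed from head coordinates.
import numpy as np

def path_length_and_speed(positions, timestamps):
    """positions: (n, 3) head coordinates in metres; timestamps: (n,) seconds."""
    steps = np.diff(positions, axis=0)
    length = np.linalg.norm(steps, axis=1).sum()
    duration = timestamps[-1] - timestamps[0]
    return length, length / duration

# Hypothetical 90 Hz trajectory: walking forward with slight lateral sway
t = np.arange(0, 5, 1 / 90)
pos = np.column_stack([t * 0.4, np.zeros_like(t), np.sin(t) * 0.1])
print(path_length_and_speed(pos, t))
```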


Subject(s)
Eye Movements, Spatial Memory, Virtual Reality, Humans, Female, Male, Young Adult, Adult, Eye Movements/physiology, Spatial Memory/physiology, Space Perception/physiology, Head Movements/physiology, Photic Stimulation/methods, Visual Perception/physiology, Reaction Time/physiology
9.
Sci Rep ; 14(1): 20978, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251651

ABSTRACT

This study investigated gaze behavior during visuo-cognitive-motor tasks with a change of movement direction in glaucoma patients and healthy controls. Nineteen glaucoma patients (10 females, 9 males) and 30 healthy sighted controls (17 females, 13 males) participated in this cross-sectional study. Participants performed two visuo-cognitive-motor tasks with a change of movement direction: (i) the "Speed-Court-Test" that involved stepping on different sensors in response to a visual sign displayed on either a large or small screen (165″ and 55″, respectively); (ii) the "Trail-Walking-Test" that required walking to 15 cones labeled with numbers (1-8) or letters (A-G) in an alternately ascending order. During these tasks, the time needed for completing each task was determined and the gaze behavior (e.g., saccade duration, fixation duration) was recorded via eye tracking. Data were analyzed with repeated measures analyses of covariance (ANCOVA; GROUP × SCREEN) and one-way ANCOVA. No differences between groups were found for the time needed to complete the tasks. However, during the "Trail-Walking-Test", the fixation duration was longer for glaucoma patients than for controls (p = 0.016, ηp² = 0.131). Furthermore, during the "Speed-Court-Test", there was a screen size effect. Irrespective of group, saccade amplitudes were lower (p < 0.001, ηp² = 0.242) and fixation durations were higher (p = 0.021, ηp² = 0.125) for the small screen. Fixation durations were longer in glaucoma patients during the cognitively demanding "Trail-Walking-Test", which might indicate a strategy to compensate for their visual impairment.
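For reference, partial eta squared (the ηp² effect size reported above) can be recovered from an F statistic and its degrees of freedom; the values below are hypothetical, not the study's.

```python
# Illustrative sketch: partial eta squared from F and degrees of freedom.
def partial_eta_squared(F, df_effect, df_error):
    return (F * df_effect) / (F * df_effect + df_error)

print(round(partial_eta_squared(F=6.9, df_effect=1, df_error=46), 3))  # hypothetical values
```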


Subject(s)
Cognition, Fixation, Ocular, Glaucoma, Open-Angle, Humans, Female, Male, Cross-Sectional Studies, Middle Aged, Glaucoma, Open-Angle/physiopathology, Fixation, Ocular/physiology, Aged, Cognition/physiology, Psychomotor Performance/physiology, Case-Control Studies, Eye Movements/physiology, Adult
10.
Cogn Psychol ; 153: 101683, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39217858

ABSTRACT

The direct-lexical-control hypothesis stipulates that some aspect of a word's processing determines the duration of the fixation on that word and/or the next. Although direct lexical control is incorporated into most current models of eye-movement control in reading, the precise implementation varies, and the hypothesis's assumptions may not be feasible given that lexical processing must occur rapidly enough to influence fixation durations. Conclusive empirical evidence supporting this hypothesis is therefore lacking. In this article, we report the results of an eye-tracking experiment using the boundary paradigm in which native speakers of Chinese read sentences in which target words were either high- or low-frequency and were preceded by either a valid or an invalid preview. Eye movements were co-registered with electroencephalography, allowing standard analyses of eye-movement measures, divergence point analyses of fixation-duration distributions, and fixation-related potentials on the target words. These analyses collectively provide strong behavioral and neural evidence of early lexical processing and thus strong support for the direct-lexical-control hypothesis. We discuss the implications of the findings for our understanding of how the hypothesis might be implemented, the neural systems that support skilled reading, and the nature of eye-movement control in the reading of Chinese versus alphabetic scripts.
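The sketch below shows the core of a bootstrap divergence point analysis of fixation-duration distributions: it estimates the earliest duration at which the survival curves of two conditions reliably separate. The time grid, bootstrap count, and simulated data are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a bootstrap divergence point analysis on survival curves.
import numpy as np

def survival(durations, grid):
    return np.array([(durations > t).mean() for t in grid])

def divergence_point(cond_a, cond_b, grid=np.arange(0, 1000, 10), n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.empty((n_boot, grid.size))
    for i in range(n_boot):
        a = rng.choice(cond_a, size=cond_a.size, replace=True)
        b = rng.choice(cond_b, size=cond_b.size, replace=True)
        diffs[i] = survival(a, grid) - survival(b, grid)
    lower = np.percentile(diffs, 2.5, axis=0)      # lower 95% CI bound at each time point
    significant = lower > 0                        # survival(a) reliably above survival(b)
    return grid[np.argmax(significant)] if significant.any() else None

# Hypothetical fixation durations (ms): low-frequency words yield longer fixations
rng = np.random.default_rng(1)
low_freq = rng.gamma(9, 30, 400)    # mean ~270 ms
high_freq = rng.gamma(8, 30, 400)   # mean ~240 ms
print(divergence_point(low_freq, high_freq))
```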


Subject(s)
Electroencephalography, Eye Movements, Eye-Tracking Technology, Reading, Humans, Eye Movements/physiology, Female, Male, Fixation, Ocular/physiology, Young Adult, Adult, Language, Evoked Potentials/physiology, China, East Asian People
11.
Sensors (Basel) ; 24(16)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39204851

ABSTRACT

The impact of global population aging on older adults' health and emotional well-being is examined in this study, emphasizing innovative technological solutions to address their diverse needs. Changes in physical and mental functions due to aging, along with emotional challenges that necessitate attention, are highlighted. Gaze estimation and interactive art are utilized to develop an interactive system tailored for elderly users, where interaction is simplified through eye movements to reduce technological barriers and provide a soothing art experience. By employing multi-sensory stimulation, the system aims to evoke positive emotions and facilitate meaningful activities, promoting active aging. Named "Natural Rhythm through Eyes", it allows for users to interact with nature-themed environments via eye movements. User feedback via questionnaires and expert interviews was collected during public demonstrations in elderly settings to validate the system's effectiveness in providing usability, pleasure, and interactive experience for the elderly. Key findings include the following: (1) Enhanced usability of the gaze estimation interface for elderly users. (2) Increased enjoyment and engagement through nature-themed interactive art. (3) Positive influence on active aging through the integration of gaze estimation and interactive art. These findings underscore technology's potential to enhance well-being and quality of life for older adults navigating aging challenges.


Subject(s)
Quality of Life, Humans, Aged, Female, Male, Eye Movements/physiology, Aging/physiology, User-Computer Interface, Aged, 80 and over, Emotions/physiology, Surveys and Questionnaires, Fixation, Ocular/physiology, Art
12.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204948

ABSTRACT

This study evaluates an innovative control approach to assistive robotics by integrating brain-computer interface (BCI) technology and eye tracking into a shared control system for a mobile augmented reality user interface. Aimed at enhancing the autonomy of individuals with physical disabilities, particularly those with impaired motor function due to conditions such as stroke, the system utilizes BCI to interpret user intentions from electroencephalography signals and eye tracking to identify the object of focus, thus refining control commands. This integration seeks to create a more intuitive and responsive assistive robot control strategy. The real-world usability was evaluated, demonstrating significant potential to improve autonomy for individuals with severe motor impairments. The control system was compared with an eye-tracking-based alternative to identify areas needing improvement. Although BCI achieved an acceptable success rate of 0.83 in the final phase, eye tracking was more effective with a perfect success rate and consistently lower completion times (p<0.001). The user experience responses favored eye tracking in 11 out of 26 questions, with no significant differences in the remaining questions, and subjective fatigue was higher with BCI use (p=0.04). While BCI performance lagged behind eye tracking, the user evaluation supports the validity of our control strategy, showing that it could be deployed in real-world conditions and suggesting a pathway for further advancements.


Subject(s)
Augmented Reality, Brain-Computer Interfaces, Electroencephalography, Eye-Tracking Technology, Robotics, User-Computer Interface, Humans, Robotics/methods, Robotics/instrumentation, Electroencephalography/methods, Male, Female, Adult, Middle Aged, Young Adult, Eye Movements/physiology
13.
Sensors (Basel) ; 24(16)2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39205057

ABSTRACT

Virtual speeches are a popular form of remote multi-user communication, but they suffer from a lack of eye contact. This paper proposes an evaluation of online audience attention based on gaze tracking. Our approach uses only webcams to capture the audience's head posture, gaze time, and other features, providing a low-cost method for attention monitoring with reference value across multiple domains. We also propose a set of indices that can be used to evaluate the audience's degree of attention, compensating for the fact that the speaker cannot gauge the audience's concentration through eye contact during online speeches. We selected 96 students for a 20-minute group simulation session and used Spearman's correlation coefficient to analyze the correlation between our evaluation indicators and concentration. The results showed that each evaluation index correlated significantly with the degree of attention (p = 0.01); all students in the focused group met the thresholds set by our evaluation indicators, whereas students in the non-focused group did not. During the simulation, eye movement data and EEG signals were recorded synchronously for the second group of students. The students' EEG results were consistent with the system's evaluation, confirming its accuracy.
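As a minimal illustration of the reported analysis, the sketch below computes Spearman's rank correlation between a gaze-derived attention index and a concentration score; all data are hypothetical.

```python
# Illustrative sketch: Spearman correlation between an attention index and concentration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
concentration = rng.uniform(0, 1, 96)                      # hypothetical concentration scores
attention_index = concentration + rng.normal(0, 0.2, 96)   # hypothetical gaze-based indicator

rho, p = spearmanr(attention_index, concentration)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```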


Subject(s)
Attention, Eye Movements, Speech, Humans, Attention/physiology, Speech/physiology, Eye Movements/physiology, Electroencephalography/methods, Male, Eye-Tracking Technology, Female, User-Computer Interface
14.
Sensors (Basel) ; 24(16)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205099

ABSTRACT

Tremor is a prevalent neurological disorder characterized by involuntary shaking or trembling of body parts. This condition impairs fine motor skills and hand coordination to varying degrees and can even affect overall body mobility. As a result, tremors severely disrupt the daily lives and work of those affected, significantly limiting their physical activity space. This study developed an innovative spatial augmented reality (SAR) system aimed at assisting individuals with tremor disorders to overcome their physical limitations and expand their range of activities. The system integrates eye-tracking and Internet of Things (IoT) technologies, enabling users to smoothly control objects in the real world through eye movements. It uses a virtual stabilization algorithm for stable interaction with objects in the real environment. The study comprehensively evaluated the system's performance through three experiments: (1) assessing the effectiveness of the virtual stabilization algorithm in enhancing the system's ability to assist individuals with tremors in stable and efficient interaction with remote objects, (2) evaluating the system's fluidity and stability in performing complex interactive tasks, and (3) investigating the precision and efficiency of the system in remote interactions within complex physical environments. The results demonstrated that the system significantly improves the stability and efficiency of interactions between individuals with tremor and remote objects, reduces operational errors, and enhances the accuracy and communication efficiency of interactions.
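The virtual stabilization algorithm itself is not specified in the abstract; as one common approach, the sketch below smooths a tremor-jittered gaze cursor with exponential filtering while letting saccade-sized shifts pass through immediately. The smoothing factor and jump threshold are assumptions.

```python
# Illustrative sketch: exponential smoothing of a gaze cursor with a jump threshold.
import numpy as np

class StabilizedCursor:
    def __init__(self, alpha=0.15, jump_threshold=80.0):
        self.alpha = alpha                    # low alpha = strong smoothing of small jitter
        self.jump_threshold = jump_threshold  # pixels; larger moves are passed immediately
        self.pos = None

    def update(self, raw_xy):
        raw_xy = np.asarray(raw_xy, dtype=float)
        if self.pos is None or np.linalg.norm(raw_xy - self.pos) > self.jump_threshold:
            self.pos = raw_xy                 # deliberate saccade-sized shift: follow it
        else:
            self.pos = self.alpha * raw_xy + (1 - self.alpha) * self.pos
        return self.pos

cursor = StabilizedCursor()
for sample in [(100, 100), (103, 98), (99, 102), (400, 250), (402, 249)]:
    print(cursor.update(sample))
```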


Subject(s)
Algorithms, Augmented Reality, Tremor, Humans, Tremor/physiopathology, Male, Female, Adult, Middle Aged, User-Computer Interface, Eye Movements/physiology, Aged
15.
Dev Psychobiol ; 66(7): e22538, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39192662

ABSTRACT

Most studies of developing visual attention are conducted using screen-based tasks in which infants move their eyes to select where to look. However, real-world visual exploration entails active movements of both eyes and head to bring relevant areas in view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months while they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head-mounted eye tracker that measured eye movement toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head-centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head-centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants' head-centered FOV could not be accounted for by manual action: Held toys were more poorly centered compared with non-held toys. We discuss developmental factors-attentional, motoric, cognitive, and social-that may explain why infants increasingly adopted biased viewpoints with age.


Subject(s)
Attention, Child Development, Eye Movements, Eye-Tracking Technology, Visual Perception, Humans, Attention/physiology, Infant, Male, Female, Child Development/physiology, Visual Perception/physiology, Eye Movements/physiology, Child, Preschool, Head Movements/physiology, Head/physiology
16.
Biosensors (Basel) ; 14(8)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39194635

ABSTRACT

Over the past decades, feature-based statistical machine learning and deep neural networks have been extensively utilized for automatic sleep stage classification (ASSC). Feature-based approaches offer clear insights into sleep characteristics and require low computational power but often fail to capture the spatial-temporal context of the data. In contrast, deep neural networks can process raw sleep signals directly and deliver superior performance. However, their overfitting, inconsistent accuracy, and computational cost were the primary drawbacks that limited their end-user acceptance. To address these challenges, we developed a novel neural network model, MLS-Net, which integrates the strengths of neural networks and feature extraction for automated sleep staging in mice. MLS-Net leverages temporal and spectral features from multimodal signals, such as EEG, EMG, and eye movements (EMs), as inputs and incorporates a bidirectional Long Short-Term Memory (bi-LSTM) to effectively capture the spatial-temporal nonlinear characteristics inherent in sleep signals. Our studies demonstrate that MLS-Net achieves an overall classification accuracy of 90.4% and REM state precision of 91.1%, sensitivity of 84.7%, and an F1-Score of 87.5% in mice, outperforming other neural network and feature-based algorithms in our multimodal dataset.
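To illustrate the bi-LSTM component described above, here is a minimal PyTorch sketch that maps per-epoch multimodal features (e.g., EEG band powers, EMG RMS, eye-movement counts) to sleep-stage logits. The feature count, layer sizes, and three-stage output are assumptions, not MLS-Net itself.

```python
# Illustrative sketch of a bidirectional LSTM sleep stager over per-epoch features.
import torch
import torch.nn as nn

class BiLSTMStager(nn.Module):
    def __init__(self, n_features=12, hidden=64, n_stages=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_stages)

    def forward(self, x):        # x: (batch, n_epochs, n_features)
        out, _ = self.lstm(x)    # (batch, n_epochs, 2 * hidden), context from both directions
        return self.head(out)    # per-epoch stage logits

logits = BiLSTMStager()(torch.randn(4, 120, 12))   # 4 recordings x 120 epochs
print(logits.shape)  # torch.Size([4, 120, 3])
```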


Subject(s)
Algorithms, Electroencephalography, Neural Networks, Computer, Sleep Stages, Animals, Mice, Sleep Stages/physiology, Electromyography, Machine Learning, Signal Processing, Computer-Assisted, Eye Movements/physiology
17.
J Sports Sci ; 42(13): 1243-1258, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39155587

ABSTRACT

The majority of a football referee's time is spent assessing open-play situations, yet little is known about how referees search for information during this uninterrupted play. The aim of the current study was to examine the exploratory gaze behaviour of elite and sub-elite football referees in open-play game situations. Four elite (i.e. national) and eight sub-elite (i.e. regional) referees officiated an in-situ football match while wearing a mobile eye-tracker to assess their gaze behaviour. Both referential head and eye movements (i.e. moving gaze away from and then back to the ball) were measured. Results showed gaze behaviour was characterised overall by more referential head than eye movements (~75 vs 25%), which were of longer duration (~950 vs 460 ms). Moreover, elite referees employed faster referential movements (~640 vs 730 ms), spending less time with their gaze away from the ball (carrier) than the sub-elite referees. Crucially, both the referential head and eye movements were coordinated relative to key events in the match, in this case passes, showing that referees anticipate the passes to ensure that the referential movements did not occur during passes, rather before or after. The results further our understanding of the coordinative gaze behaviours that underpin expertise in officiating.


Subject(s)
Eye Movements, Head Movements, Soccer, Humans, Soccer/physiology, Soccer/psychology, Eye Movements/physiology, Head Movements/physiology, Adult, Male, Eye-Tracking Technology, Fixation, Ocular/physiology
18.
J Vis ; 24(8): 1, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39087937

ABSTRACT

Some locomotor tasks involve steering at high speeds through multiple waypoints within cluttered environments. Although in principle actors could treat each individual waypoint in isolation, skillful performance would seem to require them to adapt their trajectory to the most immediate waypoint in anticipation of subsequent waypoints. To date, there have been few studies of such behavior, and the evidence that does exist is inconclusive about whether steering is affected by multiple future waypoints. The present study was designed to address the need for a clearer understanding of how humans adapt their steering movements in anticipation of future goals. Subjects performed a simulated drone flying task in a forest-like virtual environment that was presented on a monitor while their eye movements were tracked. They were instructed to steer through a series of gates while the distance at which gates first became visible (i.e., lookahead distance) was manipulated between trials. When gates became visible at least 1-1/2 segments in advance, subjects successfully flew through a high percentage of gates, rarely collided with obstacles, and maintained a consistent speed. They also approached the most immediate gate in a way that depended on the angular position of the subsequent gate. However, when the lookahead distance was less than 1-1/2 segments, subjects followed longer paths and flew at slower, more variable speeds. The findings demonstrate that the control of steering through multiple waypoints does indeed depend on information from beyond the most immediate waypoint. Discussion focuses on the possible control strategies for steering through multiple waypoints.


Subject(s)
Eye Movements, Psychomotor Performance, Humans, Male, Adult, Female, Eye Movements/physiology, Psychomotor Performance/physiology, Young Adult, Automobile Driving, Motion Perception/physiology, Virtual Reality
19.
Sci Rep ; 14(1): 19028, 2024 08 16.
Article in English | MEDLINE | ID: mdl-39152193

ABSTRACT

In real-world listening situations, individuals typically utilize head and eye movements to receive and enhance sensory information while exploring acoustic scenes. However, the specific patterns of such movements have not yet been fully characterized. Here, we studied how movement behavior is influenced by scene complexity, varied in terms of reverberation and the number of concurrent talkers. Thirteen normal-hearing participants engaged in a speech comprehension and localization task, requiring them to indicate the spatial location of a spoken story in the presence of other stories in virtual audio-visual scenes. We observed delayed initial head movements when more simultaneous talkers were present in the scene. Both reverberation and a higher number of talkers extended the search period, increased the number of fixated source locations, and resulted in more gaze jumps. The period preceding the participants' responses was prolonged when more concurrent talkers were present, and listeners continued to move their eyes in the proximity of the target talker. In scenes with more reverberation, the final head position when making the decision was farther away from the target. These findings demonstrate that the complexity of the acoustic scene influences listener behavior during speech comprehension and localization in audio-visual scenes.


Subject(s)
Eye Movements, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Adult, Eye Movements/physiology, Young Adult, Sound Localization/physiology, Head Movements/physiology, Acoustic Stimulation/methods, Comprehension/physiology, Virtual Reality, Visual Perception/physiology
20.
Autism Res ; 17(8): 1640-1650, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39087850

ABSTRACT

Different empathic responses are often reported in autism but remain controversial. To investigate which component of empathy is most affected by autism, we examined the affective, cognitive, and motivational components of empathy in 25 5- to 8-year-old autistic and 27 neurotypical children. Participants were presented with visual stimuli depicting people's limbs in painful or nonpainful situations while their eye movements, pupillary responses, and verbal ratings of pain intensity and empathic concern were recorded. The results indicate an emotional overarousal and reduced empathic concern to others' pain in autism. Compared with neurotypical children, autistic children displayed larger pupil dilation accompanied by attentional avoidance to others' pain. Moreover, even though autistic children rated others in painful situations as painful, they felt less sorry than neurotypical children. Interestingly, autistic children felt more sorry in nonpainful situations compared with neurotypical children. These findings demonstrated an emotional overarousal in response to others' pain in autistic children, and provide important implications for clinical practice aiming to promote socio-emotional understanding in autistic children.


Subject(s)
Autistic Disorder, Emotions, Empathy, Pain, Humans, Empathy/physiology, Male, Child, Female, Pain/psychology, Pain/physiopathology, Autistic Disorder/psychology, Autistic Disorder/physiopathology, Autistic Disorder/complications, Emotions/physiology, Child, Preschool, Eye Movements/physiology, Pupil/physiology