Results 1 - 2 of 2
1.
Transl Vis Sci Technol; 10(12): 22, 2021 Oct 4.
Article in English | MEDLINE | ID: mdl-34661623

ABSTRACT

Purpose: Retinal implants (RIs) restore partial vision to patients with photoreceptor degeneration in the retina. The limited vision gained through an RI, however, leaves room for improvement through training regimes. Methods: Two groups of normally sighted participants were trained with either videos or still images of everyday objects in a labeling task. Object appearance was simulated to resemble RI perception. In Experiment 1, the training effect was measured as the change in performance during training, and the same labeling task was repeated after one week to test retention. In Experiment 2, with a different pool of participants, a reverse labeling task was administered before (pre-test) and after (post-test) training to test whether the training effect generalized to a different task context. Results: Both groups improved in object recognition through training, and the improvement was maintained for a week; the video group improved more (Experiment 1). Both groups also improved in object recognition on a different task, and this improvement was maintained for a week, but the video group did not show better retention than the image group (Experiment 2). Conclusions: Training with video materials yields greater improvement than training with still images under simulated RI perception, but this advantage was specific to the trained task. Translational Relevance: We recommend videos over still images as training materials for patients with RIs to improve object recognition when the task goal is highly specific. We also note that pursuing highly specific training goals risks limiting the generalization of the training effects.
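The abstract notes that object appearance was "simulated to resemble RI perception." The paper's exact simulation parameters are not given here; a common approach in simulated prosthetic-vision studies is to reduce an image to a coarse grid of phosphene-like brightness dots with limited gray levels. A minimal sketch of that general technique (grid size and level count are illustrative assumptions, not taken from the study):

```python
# Hypothetical sketch: approximating retinal-implant ("phosphene") perception
# by block-averaging a grayscale image onto a coarse grid and quantizing
# brightness. Parameters (32x32 grid, 8 levels) are illustrative only.
import numpy as np

def simulate_ri_perception(image: np.ndarray, grid: int = 32, levels: int = 8) -> np.ndarray:
    """Downsample a grayscale image to a grid x grid phosphene map
    with a limited number of brightness levels."""
    h, w = image.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    out = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            # Average brightness within each grid cell (block pooling).
            out[i, j] = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    # Quantize to a few brightness levels (limited dynamic range).
    return np.round(out / 255 * (levels - 1)) / (levels - 1) * 255

# Usage: a 256x256 random "frame" reduced to a 32x32, 8-level percept.
frame = np.random.randint(0, 256, (256, 256)).astype(float)
percept = simulate_ri_perception(frame)
```

For video training material, the same reduction would simply be applied frame by frame.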


Subject(s)
Retinal Degeneration , Visual Prostheses , Humans , Learning , Retina , Visual Perception
2.
Brain Sci; 10(7), 2020 Jul 13.
Article in English | MEDLINE | ID: mdl-32668806

ABSTRACT

In visual search, participants can incidentally learn spatial target-distractor configurations, leading to shorter search times for repeated compared to novel configurations. This contextual cueing effect is usually tested within the limited field of view provided by a two-dimensional computer monitor. Here, we present for the first time an implementation of a classic contextual cueing task (search for a T shape among L shapes) in a three-dimensional virtual environment. This allowed us to test whether the typical finding of incidental learning of repeated search configurations, manifested as shorter search times, would hold in a three-dimensional virtual reality (VR) environment. One specific question afforded by combining VR with contextual cueing was whether the effect would hold for targets outside the initial field of view (FOV), which require head movements to be found. Consistent with two-dimensional search studies, search times were reduced after the first epoch and remained stable for the remainder of the experiment. Importantly, comparable search time reductions were observed for targets both inside and outside the initial FOV. The results show that a repeated distractor-only configuration in the initial FOV can guide search toward target locations that require a head movement to be seen.
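The contextual cueing paradigm hinges on mixing "repeated" displays, whose target-distractor layout is fixed across blocks, with "novel" displays regenerated each block. A minimal sketch of such a trial structure (item count, grid size, and number of layouts per block are illustrative assumptions, not the study's parameters):

```python
# Hypothetical sketch of a contextual-cueing trial set: "repeated" displays
# reuse fixed target-distractor layouts in every block, while "novel"
# displays are freshly drawn each block. All parameters are illustrative.
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # candidate item cells

def make_layout(rng: random.Random, n_items: int = 12) -> dict:
    """One search display: a target location plus distractor locations."""
    cells = rng.sample(GRID, n_items)
    return {"target": cells[0], "distractors": cells[1:]}

def make_block(repeated_layouts: list, rng: random.Random) -> list:
    """A block interleaves the fixed repeated layouts with novel ones."""
    novel = [make_layout(rng) for _ in repeated_layouts]
    trials = [("repeated", lay) for lay in repeated_layouts] + \
             [("novel", lay) for lay in novel]
    rng.shuffle(trials)
    return trials

rng = random.Random(0)
repeated = [make_layout(rng) for _ in range(8)]  # reused in every block
block1 = make_block(repeated, rng)
block2 = make_block(repeated, rng)
```

Learning is then measured as the growing search-time advantage for repeated over novel trials across blocks; in the VR version, some target cells would fall outside the initial FOV.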
