Results 1 - 10 of 10
1.
Elife ; 11, 2022 Aug 11.
Article in English | MEDLINE | ID: mdl-35950921

ABSTRACT

Visually guided behaviors require the brain to transform ambiguous retinal images into object-level spatial representations and implement sensorimotor transformations. These processes are supported by the dorsal 'where' pathway. However, the specific functional contributions of areas along this pathway remain elusive due in part to methodological differences across studies. We previously showed that macaque caudal intraparietal (CIP) area neurons possess robust 3D visual representations, carry choice- and saccade-related activity, and exhibit experience-dependent sensorimotor associations (Chang et al., 2020b). Here, we used a common experimental design to reveal parallel processing, hierarchical transformations, and the formation of sensorimotor associations along the 'where' pathway by extending the investigation to V3A, a major feedforward input to CIP. Higher-level 3D representations and choice-related activity were more prevalent in CIP than V3A. Both areas contained saccade-related activity that predicted the direction/timing of eye movements. Intriguingly, the time course of saccade-related activity in CIP aligned with the temporally integrated V3A output. Sensorimotor associations between 3D orientation and saccade direction preferences were stronger in CIP than V3A, and moderated by choice signals in both areas. Together, the results explicate parallel representations, hierarchical transformations, and functional associations of visual and saccade-related signals at a key juncture in the 'where' pathway.
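As a reading aid for the temporal-integration result, here is a minimal sketch of modeling a downstream response as the leaky temporal integral of an upstream firing-rate signal. The time constant, time step, and synthetic step input are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: CIP-like activity as a leaky temporal integral of
# a V3A-like input. All parameters are assumptions for illustration.
import numpy as np

def leaky_integrate(x, dt, tau):
    """Integrate signal x with time step dt (s) and time constant tau (s)."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + (dt / tau) * (-y[i - 1] + x[i])
    return y

dt = 0.001                              # 1 ms time step
t = np.arange(0.0, 0.5, dt)             # 500 ms epoch
v3a_rate = 20.0 + 40.0 * (t > 0.2)      # synthetic input: step at 200 ms
cip_pred = leaky_integrate(v3a_rate, dt, tau=0.05)  # assumed 50 ms tau
```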


Subject(s)
Parietal Lobe, Saccades, Animals, Eye Movements, Macaca, Neurons/physiology, Parietal Lobe/physiology, Photic Stimulation/methods
2.
Elife ; 9, 2020 Oct 20.
Article in English | MEDLINE | ID: mdl-33078705

ABSTRACT

Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
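One plausible way to quantify the alignment of 3D orientation and saccade direction preferences across neurons is a circular correlation coefficient (Fisher and Lee, 1983); the sketch below implements it on synthetic preference angles, which are stand-ins rather than the study's data.

```python
# Hypothetical sketch: circular correlation between two sets of
# preference angles (radians). The angle arrays are synthetic.
import numpy as np

def circ_corr(a, b):
    """Fisher-Lee circular correlation between angle arrays a and b."""
    a_bar = np.angle(np.mean(np.exp(1j * a)))   # circular mean of a
    b_bar = np.angle(np.mean(np.exp(1j * b)))   # circular mean of b
    sa, sb = np.sin(a - a_bar), np.sin(b - b_bar)
    return np.sum(sa * sb) / np.sqrt(np.sum(sa**2) * np.sum(sb**2))

rng = np.random.default_rng(0)
pref_orientation = rng.uniform(0, 2 * np.pi, 100)           # synthetic prefs
pref_saccade = pref_orientation + rng.normal(0, 0.5, 100)   # aligned by construction
print(circ_corr(pref_orientation, pref_saccade))
```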


Subject(s)
Motor Neurons/physiology, Parietal Lobe/physiology, Sensory Receptor Cells/physiology, Ocular Vision, Animals, Animal Behavior, Macaca mulatta, Magnetic Resonance Imaging, Male, Orientation/physiology, Saccades
3.
J Surg Res ; 247: 150-155, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31776024

ABSTRACT

BACKGROUND: Time away from surgical practice can lead to skills decay. Research residents are thought to be especially prone to skills decay, given their limited experience and reduced exposure to clinical activities during their research years. This study took a cross-sectional approach, using virtual reality to assess differences in residents' skills at the beginning and end of their research years. We hypothesized that research residents would show measurable decay in psychomotor skills when evaluated using virtual reality. METHODS: Surgical residents (n = 28) were divided into two groups: the first group (clinical residents: n = 19) was just beginning its research time, and the second group (research residents: n = 9) had just finished at least 2 y of research. All participants performed a target-tracking task using a haptic device, and their performance was compared using Welch's t-test. RESULTS: Research residents showed a higher level of "tracking error" (1.69 ± 0.44 cm versus 1.40 ± 0.19 cm; P = 0.04) and a similar level of "path length" (62.5 ± 10.5 cm versus 62.1 ± 5.2 cm; P = 0.92) when compared with clinical residents. CONCLUSIONS: The increased "tracking error" among residents at the end of their research time suggests that fine psychomotor skills decay in residents who spend time away from clinical duties. This decay demonstrates the need for research residents to regularly participate in clinical activities, simulation, or assessments to minimize and monitor skills decay while away from clinical practice. Additional longitudinal studies may help better map learning and decay curves for residents who spend time away from clinical practice.
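For reference, the group comparison described above can be reproduced in outline with Welch's t-test, which does not assume equal variances between groups. The arrays below are simulated from the reported group sizes, means, and SDs, not the study's raw measurements.

```python
# Sketch: Welch's t-test on simulated tracking-error data matching the
# reported summary statistics (not the study's raw data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
clinical = rng.normal(1.40, 0.19, 19)   # n = 19, mean 1.40 cm, SD 0.19 cm
research = rng.normal(1.69, 0.44, 9)    # n = 9,  mean 1.69 cm, SD 0.44 cm

t, p = stats.ttest_ind(research, clinical, equal_var=False)  # Welch's test
print(f"t = {t:.2f}, p = {p:.3f}")
```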


Subject(s)
Biomedical Research/statistics & numerical data, Clinical Competence/statistics & numerical data, Internship and Residency/statistics & numerical data, Psychomotor Performance, Simulation Training/statistics & numerical data, Cross-Sectional Studies, Female, Humans, Male, Simulation Training/methods, Time Factors, Virtual Reality
4.
eNeuro ; 7(1), 2020.
Article in English | MEDLINE | ID: mdl-31836597

ABSTRACT

Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates (NHPs). Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left eye perspective cues, and right eye perspective cues were represented by three independent populations yielded two times greater precision than the monkeys. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
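A minimal sketch of the optimal-integration benchmark referenced above: under maximum-likelihood linear summation, each cue is weighted by its reliability (inverse variance), and the combined variance follows from the single-cue variances. The slant estimates and sigma values below are illustrative, not the paper's measurements.

```python
# Sketch: reliability-weighted linear cue integration. Inputs are
# illustrative single-cue estimates (deg) and their SDs.
import numpy as np

def integrate_cues(s_stereo, sigma_stereo, s_persp, sigma_persp):
    """Maximum-likelihood combination of two slant estimates."""
    w_s = sigma_stereo**-2 / (sigma_stereo**-2 + sigma_persp**-2)
    w_p = 1.0 - w_s
    s_combined = w_s * s_stereo + w_p * s_persp
    sigma_combined = np.sqrt(1.0 / (sigma_stereo**-2 + sigma_persp**-2))
    return s_combined, sigma_combined

# The combined estimate sits closer to the more reliable (stereo) cue,
# and its predicted SD is below both single-cue SDs.
print(integrate_cues(30.0, 2.0, 34.0, 6.0))
```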


Asunto(s)
Señales (Psicología) , Percepción de Movimiento , Neuronas , Orientación , Estimulación Luminosa , Percepción Visual
5.
Elife ; 8, 2019 Feb 07.
Article in English | MEDLINE | ID: mdl-30730290

ABSTRACT

Modern neuroscience research often requires the coordination of multiple processes, such as stimulus generation, real-time experimental control, and behavioral and neural measurements. The technical demands of simultaneously managing these processes with high temporal fidelity are a barrier that limits the number of labs performing such work. Here we present an open-source, network-based parallel processing framework that lowers this barrier. The Real-Time Experimental Control with Graphical User Interface (REC-GUI) framework offers multiple advantages: (i) a modular design that is agnostic to coding language(s) and operating system(s), maximizing experimental flexibility and minimizing researcher effort; (ii) simple interfacing to connect multiple measurement and recording devices; (iii) high temporal fidelity achieved by dividing task demands across CPUs; and (iv) real-time control using a fully customizable and intuitive GUI. We present applications for human, non-human primate, and rodent studies that collectively demonstrate that the REC-GUI framework facilitates technically demanding, behavior-contingent neuroscience research. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).
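The core architectural idea, separate processes exchanging small control messages over a network so each keeps its own timing budget, can be sketched as below. This is an illustrative Python socket example under assumed host, port, and message format; it is not the REC-GUI API.

```python
# Illustrative sketch, not the REC-GUI API: a control process and a
# stand-in stimulus process exchanging one JSON message over a local
# socket. Host, port, and message format are assumptions.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5555

def stimulus_process():
    """Stand-in for the rendering machine: accept one control message."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            print("stimulus received:", json.loads(conn.recv(1024).decode()))

threading.Thread(target=stimulus_process, daemon=True).start()
time.sleep(0.1)  # give the server a moment to bind

# Control-GUI side: instruct the stimulus process to start a trial.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(json.dumps({"event": "start_trial", "trial": 1}).encode())
time.sleep(0.1)  # let the daemon thread print before exit
```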


Asunto(s)
Neurociencias , Programas Informáticos , Potenciales de Acción , Animales , Reacción de Prevención , Conducta Animal , Humanos , Ratones , Primates , Reproducibilidad de los Resultados , Factores de Tiempo , Visión Ocular
6.
J Surg Res ; 233: 444-452, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30502284

ABSTRACT

BACKGROUND: This project involved the development and evaluation of a new visual bleeding feedback (VBF) system for tourniquet training. We hypothesized that dynamic VBF during junctional tourniquet training would be helpful and well received by trainees. MATERIALS AND METHODS: We designed the VBF to simulate femoral bleeding. Medical students (n = 15) and emergency medical service (EMS) members (n = 4) were randomized in a single-blind, crossover study to a VBF group or a no-feedback group. Poststudy surveys assessed the VBF's usefulness, participants' likelihood of recommending it, and participants' reported confidence on a 7-point Likert scale. Data from the different groups were compared using Wilcoxon signed-rank and rank-sum tests. RESULTS: Participants rated the helpfulness of the VBF highly (6.53/7.00) and indicated they were very likely to recommend the VBF simulator to others (6.80/7.00). Pre- and post-VBF confidence were not statistically different (P = 0.59). Likewise, tourniquet application times for the VBF and no-feedback conditions before crossover were not statistically different (P = 0.63). Although participant confidence did not change significantly from the beginning to the end of the study (P = 0.46), application time was significantly reduced (P = 0.001). CONCLUSIONS: New tourniquet learners liked our VBF prototype and found it useful. Although confidence did not change over the course of the study for any group, application times improved. Future studies building on these outcomes will allow us to continue VBF development and to incorporate other quantitative measures of task performance to elucidate the VBF's true benefit and help trainees achieve mastery in junctional tourniquet skills.
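For reference, the two nonparametric tests named in the methods can be run as sketched below; the Likert ratings and application times are simulated stand-ins for the study's survey and timing data.

```python
# Sketch: Wilcoxon signed-rank test (paired pre/post confidence) and
# rank-sum test (independent groups), on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.integers(4, 8, 19)                          # pre-VBF Likert ratings
post = np.clip(pre + rng.integers(-1, 2, 19), 1, 7)   # post-VBF ratings

w, p_paired = stats.wilcoxon(pre, post)               # paired comparison

students = rng.normal(40, 8, 15)                      # simulated times (s), n = 15
ems = rng.normal(35, 8, 4)                            # simulated times (s), n = 4
u, p_groups = stats.ranksums(students, ems)           # independent groups
print(p_paired, p_groups)
```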


Asunto(s)
Primeros Auxilios/métodos , Técnicas Hemostáticas/instrumentación , Entrenamiento Simulado/métodos , Torniquetes , Estudios Cruzados , Evaluación Educacional/estadística & datos numéricos , Auxiliares de Urgencia/educación , Retroalimentación Sensorial , Femenino , Hemorragia/terapia , Humanos , Masculino , Maniquíes , Personal Militar/educación , Método Simple Ciego , Estudiantes de Medicina , Heridas Relacionadas con la Guerra/terapia
8.
Proc Natl Acad Sci U S A ; 113(18): 5077-82, 2016 May 03.
Article in English | MEDLINE | ID: mdl-27095846

ABSTRACT

Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
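The multiplicative-separability test mentioned above is commonly implemented with a singular value decomposition of the joint tuning matrix: a separable (rank-1) profile concentrates nearly all variance in the first singular value. The sketch below uses synthetic tuning functions, not the recorded data.

```python
# Sketch: SVD-based separability index for a joint tuning matrix
# R[heading, rotation]. Tuning functions here are synthetic.
import numpy as np

headings = np.linspace(-np.pi, np.pi, 16)   # heading directions (rad)
rotations = np.linspace(-20, 20, 9)         # rotation velocities (deg/s)

f = np.exp(np.cos(headings - 0.5))          # heading tuning curve
g = 1.0 + 0.02 * rotations                  # rotation gain modulation
R = np.outer(f, g)                          # separable by construction

s = np.linalg.svd(R, compute_uv=False)
separability = s[0]**2 / np.sum(s**2)       # ~1.0 for separable tuning
print(f"fraction of variance in first component: {separability:.3f}")
```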


Asunto(s)
Señales (Psicología) , Movimientos Oculares/fisiología , Percepción de Movimiento/fisiología , Flujo Optico/fisiología , Lóbulo Parietal/fisiología , Navegación Espacial/fisiología , Animales , Macaca mulatta , Masculino , Orientación/fisiología , Rotación
9.
Elife ; 4, 2015 Feb 18.
Article in English | MEDLINE | ID: mdl-25693417

ABSTRACT

As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.
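For context, the standard motion-field decomposition (Longuet-Higgins and Prazdny, 1980) shows why a purely visual dissociation is possible: the rotational flow component is independent of scene depth, while the translational component scales with 1/Z, so depth variation (motion parallax) carries translation-specific signal. A sketch with illustrative scene parameters:

```python
# Sketch: image velocity at a normalized image point (x, y) for depth Z,
# observer translation T, and rotation W (rad/s). Scene parameters are
# illustrative.
import numpy as np

def flow(x, y, Z, T, W):
    """Motion-field equations for a pinhole camera."""
    Tx, Ty, Tz = T
    Wx, Wy, Wz = W
    u_trans = (-Tx + x * Tz) / Z                     # depends on depth
    v_trans = (-Ty + y * Tz) / Z
    u_rot = Wx * x * y - Wy * (1 + x**2) + Wz * y    # depth-independent
    v_rot = Wx * (1 + y**2) - Wy * x * y - Wz * x
    return u_trans + u_rot, v_trans + v_rot

# Two points at the same image location but different depths: their
# velocity difference isolates the translational (parallax) component.
print(flow(0.1, 0.0, Z=1.0, T=(0, 0, 1), W=(0, 0.1, 0)))
print(flow(0.1, 0.0, Z=2.0, T=(0, 0, 1), W=(0, 0.1, 0)))
```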


Asunto(s)
Lóbulo Parietal/fisiología , Visión Ocular , Animales , Macaca mulatta , Retina/fisiología
10.
Neuron ; 71(4): 750-61, 2011 Aug 25.
Article in English | MEDLINE | ID: mdl-21867889

ABSTRACT

Responses of neurons in early visual cortex change little with training and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and "naive" animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naive animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
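The population-coding argument can be sketched with linear Fisher information, I = f'ᵀ Σ⁻¹ f', under a uniform correlation structure: with heterogeneous tuning slopes, uniformly lowering the correlation changes the information only modestly when the whole population is decoded. All parameters below are illustrative.

```python
# Sketch: linear Fisher information for a population with uniform noise
# correlations. Tuning slopes and correlation values are illustrative.
import numpy as np

def fisher_info(fprime, sigma, c):
    """I = f'^T Cov^{-1} f' with uniform pairwise correlation c."""
    n = len(fprime)
    cov = sigma**2 * ((1 - c) * np.eye(n) + c * np.ones((n, n)))
    return fprime @ np.linalg.solve(cov, fprime)

rng = np.random.default_rng(3)
fprime = rng.normal(0, 1, 200)     # tuning-curve slopes at the reference
for c in (0.20, 0.10):             # e.g., naive-like vs. trained-like
    print(c, fisher_info(fprime, sigma=1.0, c=c))
```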


Asunto(s)
Interneuronas/fisiología , Aprendizaje/fisiología , Percepción/fisiología , Corteza Visual/fisiología , Animales , Discriminación en Psicología/fisiología , Electrofisiología , Interneuronas/citología , Macaca mulatta , Masculino , Modelos Neurológicos , Corteza Visual/citología