Results 1 - 20 of 289
1.
bioRxiv ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39229213

ABSTRACT

Navigating space and forming memories based on spatial experience are crucial for survival, including storing memories in an allocentric (map-like) framework and converting them into body-centered action. The hippocampus and parietal cortex (PC) comprise a network for coordinating these reference frames, though the mechanism remains unclear. We used a task that required remembering previous spatial locations in order to make a correct future action and observed that the hippocampus can encode allocentric place, while the PC encodes upcoming actions and relays this information to the hippocampus. The transformation from location to action unfolds gradually, with 'Came From' signals diminishing and future-action representations strengthening. The PC sometimes encodes previous spatial locations in a route-based reference frame and conveys this to the hippocampus. The signal for the future location appears first in the PC and then in the hippocampus, in the form of an egocentric direction to future goal locations, suggesting that the egocentric encoding recently observed in the hippocampus may originate in the PC (or another "upstream" structure). Bidirectional signaling suggests a coordinated mechanism for integrating map-like, route-centered, and person-centered spatial reference frames at the network level during navigation.

2.
Vision Res ; 223: 108462, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39111102

ABSTRACT

When observers perceive 3D relations, they represent depth and spatial locations with the ground as a reference. This frame of reference could be egocentric, that is, moving with the observer, or allocentric, that is, remaining stationary and independent of the moving observer. We tested whether the representation of relative depth and of spatial location took an egocentric or allocentric frame of reference in three experiments using a blind walking task. In Experiments 1 and 2, participants either observed a target in depth and then immediately blind walked for the previously seen distance between the target and the self, or first walked to the side or along an oblique path for 3 m and then started blind walking for the previously seen distance. The difference between the conditions was whether blind walking started from the observation point. Results showed that blind walking distance varied with the starting location. Thus, the represented distance did not seem to undergo spatial updating with the moving observer, and the frame of reference was likely allocentric. In Experiment 3, participants observed a target in space and then either immediately blind walked to the target, or blind walked to another starting point and then blind walked to the target. Results showed that the end location of blind walking differed for different starting points, which suggests that the representation of spatial location likely takes an allocentric frame of reference. Taken together, these experiments converged in suggesting that observers used an allocentric frame of reference to construct their mental space representation.


Subject(s)
Depth Perception, Space Perception, Walking, Humans, Male, Space Perception/physiology, Female, Depth Perception/physiology, Adult, Young Adult, Walking/physiology, Analysis of Variance, Distance Perception/physiology
3.
Sci Rep ; 14(1): 17534, 2024 07 30.
Article in English | MEDLINE | ID: mdl-39080430

ABSTRACT

We investigated whether distractor inhibition occurs relative to the target or to fixation in a perceptual decision-making task using a purely saccadic response. Previous research has shown that during the process of discriminating a target from a distractor, saccades made to the target deviate towards the distractor. Once discriminated, the distractor is inhibited, and trajectories deviate away from it. Saccade deviation magnitudes provide a sensitive measure of target-distractor competition that depends on the distance between them. While saccades are planned in an egocentric reference frame (locations represented relative to fixation), object-based inhibition has been shown to occur in an allocentric reference frame (objects represented relative to each other, independent of fixation). By varying the egocentric and allocentric distances of the target and distractor, we found that only egocentric distances contributed to saccade trajectory shifts towards the distractor during active decision-making. When the perceptual decision-making process was complete and the distractor was inhibited, both ego- and allocentric distances independently contributed to saccade trajectory shifts away from the distractor. This is consistent with independent spatial and object-based inhibitory mechanisms. We therefore suggest that distractor inhibition is maintained in cortical visual areas with allocentric maps, which then feed into oculomotor areas for saccade planning.
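
The independent contributions of egocentric and allocentric distance described above can be illustrated with a simple multiple regression of saccade deviation on the two distance measures. The sketch below uses simulated trials and made-up coefficients, not the authors' data or analysis pipeline.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial-level predictors (degrees of visual angle).
ego_dist = rng.uniform(2, 10, size=200)    # target-distractor separation defined relative to fixation (egocentric)
allo_dist = rng.uniform(1, 6, size=200)    # target-distractor separation defined relative to each other (allocentric)

# Simulated trajectory deviation away from the distractor (arbitrary units):
# both distances contribute independently, plus noise.
deviation = 0.4 * ego_dist + 0.25 * allo_dist + rng.normal(0, 0.5, size=200)

# Ordinary least squares: deviation ~ intercept + ego_dist + allo_dist.
X = np.column_stack([np.ones_like(ego_dist), ego_dist, allo_dist])
beta, *_ = np.linalg.lstsq(X, deviation, rcond=None)
print(dict(zip(["intercept", "b_egocentric", "b_allocentric"], beta.round(2))))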


Subject(s)
Decision Making, Saccades, Saccades/physiology, Humans, Male, Female, Adult, Young Adult, Decision Making/physiology, Ocular Fixation/physiology, Visual Perception/physiology, Attention/physiology, Photic Stimulation, Reaction Time/physiology
4.
bioRxiv ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38746251

ABSTRACT

Humans effortlessly use vision to plan and guide navigation through the local environment, or "scene". A network of three cortical regions responds selectively to visual scene information, including the occipital (OPA), parahippocampal (PPA), and medial place areas (MPA) - but how this network supports visually-guided navigation is unclear. Recent evidence suggests that one region in particular, the OPA, supports visual representations for navigation, while PPA and MPA support other aspects of scene processing. However, most previous studies tested only static scene images, which lack the dynamic experience of navigating through scenes. We used dynamic movie stimuli to test whether OPA, PPA, and MPA represent two critical kinds of navigationally-relevant information: navigational affordances (e.g., can I walk to the left, right, or both?) and ego-motion (e.g., am I walking forward or backward? turning left or right?). We found that OPA is sensitive to both affordances and ego-motion, as well as the conflict between these cues - e.g., turning toward versus away from an open doorway. These effects were significantly weaker or absent in PPA and MPA. Responses in OPA were also dissociable from those in early visual cortex, consistent with the idea that OPA responses are not merely explained by lower-level visual features. OPA responses to affordances and ego-motion were stronger in the contralateral than ipsilateral visual field, suggesting that OPA encodes navigationally relevant information within an egocentric reference frame. Taken together, these results support the hypothesis that OPA contains visual representations that are useful for planning and guiding navigation through scenes.

5.
Brain Sci ; 14(4)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38671966

ABSTRACT

Accurate comprehension of others' thoughts and intentions is crucial for smooth social interactions, and understanding their perceptual experiences serves as a fundamental basis for this high-level social cognition. However, previous research has predominantly focused on the visual modality when investigating perceptual processing from others' perspectives, leaving multisensory inputs during this process largely unexplored. By incorporating auditory stimuli into visual perspective-taking (VPT) tasks, we designed a novel experimental paradigm in which the spatial correspondence between visual and auditory stimuli held only in the altercentric, not the egocentric, reference frame. Overall, we found that when individuals engaged in explicit or implicit VPT to process visual stimuli from an avatar's viewpoint, the concomitantly presented auditory stimuli were also processed within this avatar-centered reference frame, revealing altercentric cross-modal interactions.

6.
Psychol Res ; 88(2): 476-486, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37555941

ABSTRACT

The literature proposes five distinct cognitive strategies for wayfinding decisions at intersections. Our study investigates whether those strategies rely on a generalized decision-making process, on two frame-specific processes (one in an egocentric and the other in an allocentric spatial reference frame), and/or on five strategy-specific processes. Participants took six trips along a prescribed route through five virtual mazes, each designed for decision-making by a particular strategy. We found that wayfinding accuracy on trips through a given maze correlated significantly with accuracy on trips through another maze designed for a different reference frame (r_between-frames = 0.20). Correlations were not significantly higher if the other maze was designed for the same reference frame (r_within-frames = 0.19). However, correlations between trips through the same maze were significantly higher than those between trips through different mazes designed for the same reference frame (r_within-maze = 0.52). We conclude that wayfinding decisions were based on a generalized cognitive process as well as on strategy-specific processes, while the role of frame-specific processes, if any, was comparatively small. Thus, the well-established dichotomy of egocentric versus allocentric spatial representations did not translate into a similar, observable dichotomy of decision-making.
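
The three correlation types compared above (between-frame, within-frame, within-maze) can be computed from a participants x mazes x trips accuracy table. The sketch below uses simulated data and a hypothetical frame assignment for the mazes, and it approximates the between-maze comparisons with maze-level mean accuracy; it is not the study's dataset or exact analysis.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical setup: 50 participants, 5 mazes, 6 trips per maze.
acc = rng.normal(0.7, 0.1, size=(50, 5, 6)).clip(0, 1)
frame = ["ego", "ego", "allo", "allo", "allo"]   # illustrative assignment of mazes to reference frames

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Within-maze: correlate accuracy across participants between trips through the same maze.
within_maze = np.mean([corr(acc[:, m, i], acc[:, m, j])
                       for m in range(5) for i, j in combinations(range(6), 2)])

# Maze-level accuracy (mean over trips) for between-maze comparisons.
maze_acc = acc.mean(axis=2)
pairs = list(combinations(range(5), 2))
within_frame = np.mean([corr(maze_acc[:, a], maze_acc[:, b])
                        for a, b in pairs if frame[a] == frame[b]])
between_frame = np.mean([corr(maze_acc[:, a], maze_acc[:, b])
                         for a, b in pairs if frame[a] != frame[b]])

print(f"r_within-maze={within_maze:.2f}  r_within-frames={within_frame:.2f}  r_between-frames={between_frame:.2f}")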


Subject(s)
Space Perception, User-Computer Interface, Humans, Maze Learning, Cognition
7.
bioRxiv ; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-38014023

ABSTRACT

Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, and so on, and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting the model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles and offers inspiration for building better models of visual processing in general.
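
As a toy illustration of the "delta component at zero" idea, the one-dimensional sketch below places a spike-and-slab prior on a group's common velocity and computes the posterior probability that the group is stationary from the retinal velocities of its elements. The function name, parameter values, and single-level setup are assumptions for illustration, not the authors' full hierarchical model.

import numpy as np
from scipy.stats import multivariate_normal

def stationary_posterior(v_retinal, p_stationary=0.8, tau=3.0, sigma=0.5):
    """Posterior probability that a group's common (1D) velocity is exactly zero.

    Prior on the group velocity: p_stationary * delta(0) + (1 - p_stationary) * N(0, tau^2).
    Each element's retinal velocity: group velocity + N(0, sigma^2) noise."""
    v = np.asarray(v_retinal, dtype=float)
    n = v.size
    # Marginal likelihood under the spike (group velocity == 0).
    lik_spike = multivariate_normal.pdf(v, mean=np.zeros(n), cov=sigma**2 * np.eye(n))
    # Marginal likelihood under the slab (group velocity integrated out).
    cov_slab = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
    lik_slab = multivariate_normal.pdf(v, mean=np.zeros(n), cov=cov_slab)
    post = p_stationary * lik_spike / (p_stationary * lik_spike + (1 - p_stationary) * lik_slab)
    # Posterior-mean group velocity: 0 under the spike, shrunken sample mean under the slab.
    slab_mean = (n / sigma**2) / (n / sigma**2 + 1 / tau**2) * v.mean()
    perceived_group_velocity = (1 - post) * slab_mean
    return post, perceived_group_velocity

# Elements drifting slowly and incoherently: likely a stationary frame plus noise.
print(stationary_posterior([0.2, -0.1, 0.15, 0.05]))
# Elements sharing a large common velocity: the group itself is perceived to move.
print(stationary_posterior([4.1, 3.8, 4.3, 4.0]))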

8.
Hippocampus ; 33(12): 1252-1266, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37811797

ABSTRACT

The anterior and lateral thalamus (ALT) contains head direction cells that signal the directional orientation of an individual within the environment. ALT has direct and indirect connections with the parietal cortex (PC), an area hypothesized to play a role in coordinating viewer-dependent and viewer-independent spatial reference frames. This coordination between reference frames would allow an individual to translate a remembered location into movements toward it. Thus, ALT-PC functional connectivity would be critical for moving toward remembered allocentric locations. This hypothesis was tested in rats with a place-action task that requires associating an appropriate action (left or right turn) with a spatial location. Four arms, each offset by 90°, were positioned around a central starting point. A trial began at the central starting point. After exiting a pseudorandomly selected arm, the rat had to displace the correct object covering one of two (left versus right) feeding stations to receive a reward. For one pair of arms facing opposite directions, the reward was located on the left, and for the other pair, the reward was located on the right. Thus, each reward location had a different combination of allocentric location and egocentric action. Removal of an object was scored as correct or incorrect; trials in which the rat did not displace any object were scored as "no selection" trials. After an object was removed, the rat returned to the central starting position and the maze was reset for the next trial. To investigate the role of the ALT-PC network, muscimol inactivation infusions targeted bilateral PC, bilateral ALT, or the ALT-PC network. Muscimol sessions were counterbalanced and compared to saline sessions within the same animal. All inactivations decreased accuracy, but only bilateral PC inactivation resulted in more no-selection trials, more errors, and longer response latencies on the remaining trials. Thus, the ALT-PC circuit is critical for linking an action with a spatial location for successful navigation.


Subject(s)
Parietal Lobe, Space Perception, Rats, Animals, Muscimol/pharmacology, Parietal Lobe/physiology, Reaction Time/physiology, Space Perception/physiology
9.
Sensors (Basel) ; 23(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37420924

ABSTRACT

Safety plays a key role in human-robot interaction in collaborative robot (cobot) applications. This paper provides a general procedure for guaranteeing safe workstations that accommodate human operations, robot contributions, a dynamic environment, and time-variant objects in a set of collaborative robotic tasks. The proposed methodology focuses on the contribution and mapping of reference frames. Multiple reference-frame representation agents are defined simultaneously, considering egocentric, allocentric, and route-centric perspectives. The agents are processed to provide a minimal and effective assessment of the ongoing human-robot interaction. The proposed formulation is based on the generalization and proper synthesis of multiple cooperating reference-frame agents. Accordingly, a real-time assessment of the safety-related implications can be achieved through the implementation and fast calculation of suitable quantitative safety indices. This allows the controlling parameters of the cobot to be defined and promptly regulated without the velocity limitations that are recognized as the main disadvantage of conventional approaches. A set of experiments was carried out to demonstrate the feasibility and effectiveness of the approach, using a seven-DOF anthropomorphic arm in combination with a psychometric test. The results agree with the current literature in terms of kinematic, position, and velocity aspects; rely on measurement methods based on tests administered to the operator; and introduce novel features of work-cell arrangement, including the use of virtual instrumentation. Finally, the associated analytical-topological treatment enabled the development of a safe and comfortable measure of the human-robot relation, with satisfactory experimental results compared to previous research. Nevertheless, work on robot posture, human perception, and learning technologies will have to draw on multidisciplinary fields such as psychology, gesture, communication, and the social sciences before such systems are ready for real-world applications, which pose new challenges for cobots.


Subject(s)
Robotics, Humans, Robotics/methods, Learning
10.
Anim Cogn ; 26(5): 1551-1569, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37318674

ABSTRACT

How do bottlenose dolphins visually perceive the space around them? In particular, what cues do they use as a frame of reference for left-right perception? To address this question, we examined dolphins' responses to various manipulations of the spatial relationship between the dolphin and the trainer, using gestural signs for actions given by the trainer that have different meanings when given with the left versus the right hand. When the dolphins were tested with their backs to the trainer (Experiment 1) or in an inverted position underwater (Experiments 2 and 3), correct responses from the trainer's perspective were maintained for signs instructing movement direction. In contrast, reversed responses were frequently observed for signs that required different sounds for the left and right hands. When the movement direction instructions were presented with symmetrical graphic signs such as "×" and "●", accuracy decreased in the inverted posture (Experiment 3). Furthermore, when the signs for sounds were presented from either the left or right side of the dolphin's body, performance was better when the side of the sign movement coincided with the body side on which it was presented than when it was mismatched (Experiment 4). In the final experiment, when one eye was covered with an eyecup, performance was better, as in the case of body-side presentation, when the open eye coincided with the side on which the sign movement was presented. These results indicate that dolphins used an egocentric frame for visuospatial cognition. In addition, they showed better performance when the gestural signs were presented to the right eye, suggesting a possible left-hemispheric advantage in the dolphin's visuospatial cognition.


Subject(s)
Bottle-Nosed Dolphin, Animals, Cognition, Cues
11.
Curr Biol ; 33(9): 1728-1743.e7, 2023 05 08.
Article in English | MEDLINE | ID: mdl-37075750

ABSTRACT

Animals use the geometry of their local environments to orient themselves during navigation. Single neurons in the rat postrhinal cortex (POR) appear to encode environmental geometry in an egocentric (self-centered) reference frame, such that they fire in response to the egocentric bearing and/or distance from the environment center or boundaries. One major issue is whether these neurons truly encode high-level global parameters, such as the bearing/distance of the environment centroid, or whether they are simply responsive to the bearings and distances of nearby walls. We recorded from POR neurons as rats foraged in environments with different geometric layouts and modeled their responses based on either global geometry (centroid) or local boundary encoding. POR neurons largely split into either centroid-encoding or local-boundary-encoding cells, with each group lying at one end of a continuum. We also found that distance-tuned cells tend to scale their linear tuning slopes in a very small environment, such that they lie somewhere between absolute and relative distance encoding. In addition, POR cells largely maintain their bearing preferences, but not their distance preferences, when exposed to different boundary types (opaque, transparent, drop edge), suggesting different driving forces behind the bearing and distance signals. Overall, the egocentric spatial correlates encoded by POR neurons comprise a largely robust and comprehensive representation of environmental geometry.
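
A minimal sketch of the two competing predictors contrasted above (egocentric bearing and distance to the environment centroid versus to the nearest boundary) is shown below for a square arena. The function name, arena geometry, and coordinate conventions are illustrative assumptions, not the recording or model-fitting pipeline used in the study.

import numpy as np

def egocentric_predictors(x, y, hd, side=100.0):
    """Egocentric bearing/distance to the centroid and to the nearest wall of a
    square arena [0, side] x [0, side]. hd = head direction in radians (world frame)."""
    # Centroid (global geometry) predictor.
    cx, cy = side / 2, side / 2
    d_centroid = np.hypot(cx - x, cy - y)
    bearing_centroid = (np.arctan2(cy - y, cx - x) - hd + np.pi) % (2 * np.pi) - np.pi

    # Nearest-boundary (local geometry) predictor: closest point on the four walls.
    walls = [(x, 0.0), (x, side), (0.0, y), (side, y)]
    wx, wy = min(walls, key=lambda p: np.hypot(p[0] - x, p[1] - y))
    d_wall = np.hypot(wx - x, wy - y)
    bearing_wall = (np.arctan2(wy - y, wx - x) - hd + np.pi) % (2 * np.pi) - np.pi
    return (bearing_centroid, d_centroid), (bearing_wall, d_wall)

# A rat near the east wall facing north: the centroid lies to its left, the nearest wall to its right.
print(egocentric_predictors(x=90.0, y=50.0, hd=np.pi / 2))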


Subject(s)
Cerebral Cortex, Spatial Navigation, Rats, Animals, Cerebral Cortex/physiology, Neurons/physiology, Space Perception/physiology, Spatial Navigation/physiology
12.
Hippocampus ; 33(5): 658-666, 2023 05.
Article in English | MEDLINE | ID: mdl-37013360

ABSTRACT

How do rodents' and primates' differences in visual perception impact the way the brain constructs egocentric and allocentric reference frames to represent stimuli in space? Strikingly, there are important similarities in the egocentric spatial reference frames through which cortical regions represent objects with respect to an animal's head or body in rodents and primates. These egocentric representations are suitable for navigation across species. However, while the rodent hippocampus represents allocentric place, I draw on several pieces of evidence suggesting that an egocentric reference frame is paramount in the primate hippocampus, and relates to the first-person perspective characteristic of a primate's field of view. I further discuss the link between an allocentric reference frame and a conceptual frame to suggest that an allocentric reference frame is a semantic construct in primates. Finally, I discuss how views probe memory recall and support prospective coding, and as they are based on a first-person perspective, are a powerful tool for probing episodic memory across species.


Subject(s)
Memory, Space Perception, Animals, Prospective Studies, Primates, Hippocampus
13.
J Neurol Sci ; 448: 120635, 2023 05 15.
Article in English | MEDLINE | ID: mdl-37031623

ABSTRACT

When exploring a visual scene, humans make more saccades in the horizontal direction than in any other direction. While many studies have shown that the horizontal saccade bias rotates in response to scene tilt, it is unclear whether this effect depends on saccade amplitude. We addressed this question by examining the effect of image tilt on the saccade direction distributions recorded during free viewing of natural scenes. Participants (n = 20) viewed scenes tilted at -30°, 0°, and 30°. Saccade distributions during free viewing rotated by an angle of 12.1° ± 6.7° (t(19) = 8.04, p < 0.001) in the direction of the image tilt. When we partitioned the saccades according to their amplitude, we found that small-amplitude saccades occurred mostly in the horizontal direction, while large-amplitude saccades were oriented more toward the scene tilt (p < 0.001). To further study the characteristics of small saccades and how they are affected by scene tilt, we examined the effect of image tilt on small fixational saccades made while fixating a central target amidst a larger scene and found that fixational saccade distributions did not rotate with scene tilt (-0.3° ± 1.7°; t(19) = -0.8, p = 0.39). These results suggest a combined effect of two reference frames in saccade generation: an egocentric reference frame that dominates for small saccades, biases them horizontally, and may be common across tasks, and an allocentric reference frame that biases larger saccades along the orientation of the image during free viewing.
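
The rotation of a saccade-direction distribution with image tilt can be estimated with axial circular statistics, doubling the angles so that leftward and rightward saccades count as the same horizontal axis. The sketch below uses simulated saccade angles, not the recorded eye-movement data from the study.

import numpy as np

def preferred_axis(angles_deg):
    """Mean orientation of an axial (bidirectional) distribution of saccade directions.
    Angles are doubled before averaging so that 0° and 180° map onto the same axis."""
    a = np.deg2rad(np.asarray(angles_deg))
    mean2 = np.angle(np.mean(np.exp(2j * a)))      # circular mean of the doubled angles
    return np.rad2deg(mean2) / 2.0                 # back to an orientation in (-90°, 90°]

rng = np.random.default_rng(2)
# Simulated free-viewing saccades: a horizontal bias that rotates with a 30° scene tilt.
upright = np.concatenate([rng.normal(0, 15, 500), rng.normal(180, 15, 500)])
tilted = upright + 30

rotation = preferred_axis(tilted) - preferred_axis(upright)
print(f"estimated rotation of the saccade axis: {rotation:.1f} deg")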


Subject(s)
Ocular Fixation, Saccades, Humans, Photic Stimulation/methods
14.
J Vis ; 23(1): 16, 2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36689216

ABSTRACT

Accurate memory regarding the location of an object with respect to one's own body, termed egocentric visuospatial memory, is essential for action directed toward the object. Although researchers have suggested that the brain stores information related to egocentric visuospatial memory not only in the eye-centered reference frame but also in the other egocentric (i.e., head- or body-centered or both) reference frames, experimental evidence is scarce. Here, we tested this possibility by exploiting the perceptual distortion of head/body-centered coordinates via whole-body tilt relative to gravity. We hypothesized that if the head/body-centered reference frames are involved in storing the egocentric representation of a target in memory, then reproduction would be affected by this perceptual distortion. In two experiments, we asked participants to reproduce the remembered location of a visual target relative to their head/body. Using intervening whole-body roll rotations, we manipulated the initial (target presentation) and final (reproduction of the remembered location) body orientations in space and evaluated the effect on the reproduced location. Our results showed significant biases of the reproduced target location and perceived head/body longitudinal axis in the direction of the intervening body rotation. Importantly, the amount of error was correlated across participants. These results provide experimental evidence for the neural encoding and storage of information related to egocentric visuospatial memory in the head/body-centered reference frames.


Subject(s)
Psychomotor Performance, Space Perception, Humans, Brain, Orientation, Mental Recall
15.
IEEE Trans Vis Comput Graph ; 29(1): 440-450, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36170396

ABSTRACT

Multiple-view (MV) representations enabling multi-perspective exploration of large and complex data are often employed on 2D displays. The technique also shows great potential for addressing complex analytic tasks in immersive visualization. However, the design space of MV representations in immersive visualization has not been explored in depth. In this paper, we propose a new perspective on this line of research by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics into situatedness, concerning the spatial relationship between visual representations and physical referents, and analytics, concerning cross-view data analysis tasks including filtering, refocusing, and connecting. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method to fulfill these requirements. The method mainly comprises a cylindrical paradigm for an egocentric reference frame and a force-directed method that maintains proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study comparing layouts produced by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms the high usability of the prototype system.
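
A stripped-down force-directed placement of views on a user-centered cylinder might look like the following. The force terms, constants, and function names are assumptions for illustration and do not reproduce the paper's actual layout adaptation method.

import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (np.asarray(a) + np.pi) % (2 * np.pi) - np.pi

def layout_views(referent_angles, n_iter=500, lr=0.01, k_attract=1.0, k_repel=0.2, min_sep=0.4):
    """Place views at azimuth angles (radians) on a cylinder around the user.

    Each view is attracted toward the azimuth of its physical referent and
    repelled from any other view closer than min_sep radians."""
    ref = np.asarray(referent_angles, dtype=float)
    theta = ref.copy()                                # start each view at its referent
    for _ in range(n_iter):
        force = k_attract * wrap(ref - theta)         # pull back toward the referent azimuth
        for i in range(len(theta)):
            for j in range(len(theta)):
                d = wrap(theta[i] - theta[j])
                if i != j and abs(d) < min_sep:       # push overlapping views apart
                    force[i] += k_repel * np.sign(d) * (min_sep - abs(d))
        theta = wrap(theta + lr * force)
    return theta

# Three referents clustered in front of the user: views spread apart but stay nearby.
print(np.degrees(layout_views([0.0, 0.05, -0.05])))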

16.
Sci Adv ; 8(47): eabp9814, 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36427312

ABSTRACT

Spatial cognition is central to human behavior, but the way people conceptualize space varies within and across groups for unknown reasons. Here, we found that adults from an indigenous Bolivian group used systematically different spatial reference frames on different axes, according to known differences in their discriminability: In both verbal and nonverbal tests, participants preferred allocentric (i.e., environment-based) space on the left-right axis, where spatial discriminations (like "b" versus "d") are notoriously difficult, but the same participants preferred egocentric (i.e., body-based) space on the front-back axis, where spatial discrimination is relatively easy. The results (i) establish a relationship between spontaneous spatial language and memory across axes within a single culture, (ii) challenge the claim that each language group has a predominant spatial reference frame at a given scale, and (iii) suggest that spatial thinking and language may both be shaped by spatial discrimination abilities, as they vary across cultures and contexts.

17.
Nutrients ; 14(16)2022 Aug 13.
Article in English | MEDLINE | ID: mdl-36014828

ABSTRACT

Various lifestyle factors, including diet, physical activity, and sleep, have been studied in the context of children's health. However, how these lifestyle factors contribute to the development of cognitive abilities, including spatial cognition, remains vastly understudied. One landmark in spatial cognitive development occurs between 2.5 and 3 years of age: for spatial orientation, children learn to use allocentric reference frames (using spatial relations between objects as the primary reference frame) in addition to the already acquired egocentric reference frames (using one's own body as the primary reference frame). In the current virtual reality study in a sample of 30-36-month-old toddlers (N = 57), we first demonstrated a marginally significant developmental shift in spatial orientation; specifically, allocentric task performance increased relative to egocentric performance (partial η² = 0.06). Next, we explored a variety of lifestyle factors, including diet, in relation to task performance to explain individual differences. Screen time and gestational weight gain of the mother were negatively associated with spatial task performance. The findings presented here can be used to guide future confirmatory studies on the role of lifestyle factors in the development of spatial cognition.


Subject(s)
Spatial Orientation, Space Perception, Preschool Children, Cognition, Humans, Life Style, Task Performance and Analysis
18.
J Cogn Neurosci ; 34(11): 2168-2188, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35900862

ABSTRACT

The ability to judge an object's orientation with respect to gravitational vertical relies on an egocentric reference frame that is maintained using not only vestibular cues but also contextual cues provided in the visual scene. Although much is known about how static contextual cues are incorporated into the egocentric reference frame, it is also important to understand how changes in these cues affect perception, since we move about in a world that is itself dynamic. To explore these temporal factors, we used a variant of the rod-and-frame illusion, in which participants indicated the perceived orientation of a briefly flashed rod (5-msec duration) presented before or after the onset of a tilted frame. The frame was found to bias the perceived orientation of rods presented as much as 185 msec before frame onset. To explain this postdictive effect, we propose a differential latency model, where the latency of the orientation judgment is greater than the latency of the contextual cues' initial impact on the egocentric reference frame. In a subsequent test of this model, we decreased the luminance of the rod, which is known to increase visual afferent delays and slow decision processes. This further slowing of the orientation judgment caused the frame-induced bias to affect the perceived orientation of rods presented even further in advance of the frame. These findings indicate that the brain fails to compensate for a mismatch between the timing of orientation judgments and the incorporation of visual cues into the egocentric reference frame.
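
The differential-latency account can be captured with back-of-the-envelope arithmetic: a rod flashed at time t_rod is judged at t_rod + L_judgment, while a frame appearing at t_frame begins biasing the egocentric reference at t_frame + L_context, so the frame can influence rods flashed up to L_judgment - L_context ms before it. The sketch below is a schematic of that reasoning with assumed latency values, not parameters fitted in the study.

def max_rod_lead(judgment_latency_ms, context_latency_ms):
    """Longest interval by which a rod can precede the frame and still be biased:
    the frame's contextual influence (t_frame + context latency) must arrive before
    the orientation judgment is committed (t_rod + judgment latency)."""
    return judgment_latency_ms - context_latency_ms

# Assumed latencies chosen to reproduce the ~185 ms postdictive window reported above.
print(max_rod_lead(judgment_latency_ms=285, context_latency_ms=100))   # -> 185

# A dimmer rod slows the judgment, extending the window further back in time.
print(max_rod_lead(judgment_latency_ms=335, context_latency_ms=100))   # -> 235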


Subject(s)
Illusions, Vestibule of the Labyrinth, Cues, Humans, Judgment, Space Perception, Visual Perception
19.
Brain Commun ; 4(3): fcac148, 2022.
Article in English | MEDLINE | ID: mdl-35774184

ABSTRACT

Congenital deafness modifies an individual's daily interaction with the environment and alters the fundamental perception of the external world. How congenital deafness shapes the interface between the internal and external worlds remains poorly understood. To interact efficiently with the external world, visuospatial representations of external target objects need to be effectively transformed into sensorimotor representations with reference to the body. Here, we tested the hypothesis that egocentric body-centred sensorimotor transformation is impaired in congenital deafness. Consistent with this hypothesis, we found that congenital deafness induced impairments in egocentric judgements, associating the external objects with the internal body. These impairments were due to deficient body-centred sensorimotor transformation per se, rather than the reduced fidelity of the visuospatial representations of the egocentric positions. At the neural level, we first replicated the previously well-documented critical involvement of the frontoparietal network in egocentric processing, in both congenitally deaf participants and hearing controls. However, both the strength of neural activity and the intra-network connectivity within the frontoparietal network alone could not account for egocentric performance variance. Instead, the inter-network connectivity between the task-positive frontoparietal network and the task-negative default-mode network was significantly correlated with egocentric performance: the more cross-talking between them, the worse the egocentric judgement. Accordingly, the impaired egocentric performance in the deaf group was related to increased inter-network connectivity between the frontoparietal network and the default-mode network and decreased intra-network connectivity within the default-mode network. The altered neural network dynamics in congenital deafness were observed for both evoked neural activity during egocentric processing and intrinsic neural activity during rest. Our findings thus not only demonstrate the optimal network configurations between the task-positive and -negative neural networks underlying coherent body-centred sensorimotor transformations but also unravel a critical cause (i.e. impaired body-centred sensorimotor transformation) of a variety of hitherto unexplained difficulties in sensory-guided movements the deaf population experiences in their daily life.
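
Intra- and inter-network connectivity of the kind related to egocentric performance above can be summarized from a region-by-region correlation matrix given network labels. The sketch below uses random time series and hypothetical labels ("FPN", "DMN"), not the study's parcellation or fMRI data.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical resting-state time series: 200 time points x 10 regions,
# the first 5 labeled frontoparietal (FPN) and the last 5 default-mode (DMN).
ts = rng.normal(size=(200, 10))
labels = np.array(["FPN"] * 5 + ["DMN"] * 5)
fc = np.corrcoef(ts, rowvar=False)                # region-by-region functional connectivity

def mean_connectivity(fc, labels, net_a, net_b):
    """Average correlation between regions of net_a and net_b (off-diagonal only)."""
    a, b = labels == net_a, labels == net_b
    block = fc[np.ix_(a, b)]
    if net_a == net_b:                            # intra-network: drop self-correlations
        block = block[~np.eye(block.shape[0], dtype=bool)]
    return block.mean()

print("intra-FPN    :", mean_connectivity(fc, labels, "FPN", "FPN"))
print("intra-DMN    :", mean_connectivity(fc, labels, "DMN", "DMN"))
print("inter FPN-DMN:", mean_connectivity(fc, labels, "FPN", "DMN"))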

20.
J Vis ; 22(8): 13, 2022 07 11.
Article in English | MEDLINE | ID: mdl-35857298

ABSTRACT

Visual systems exploit temporal continuity principles to achieve stable spatial perception, manifested as the serial dependence and central tendency effects. These effects are posited to reflect a smoothing process whereby past and present information is integrated over time to decrease noise and stabilize perception. Meanwhile, the basic spatial coordinate system (Cartesian versus polar) that scaffolds the integration process in two-dimensional continuous space remains unknown. The two coordinate systems are largely related to the allocentric and egocentric reference frames and presumably correspond to early and late processing stages in spatial perception. Here, four experiments consistently demonstrate that Cartesian coordinates outperform polar coordinates in characterizing the serial biases (serial dependence and the central tendency effect) in two-dimensional continuous spatial perception. The superiority of Cartesian coordinates is robust, independent of task environment (online and offline tasks), experimental length (short and long blocks), spatial context (shape of the visual mask), and response modality (keyboard and mouse). Taken together, these results indicate that the visual system relies on Cartesian coordinates for spatiotemporal integration to facilitate a stable representation of external information, supporting the involvement of an allocentric reference frame and top-down modulation in spatial perception over long time intervals.
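
One way to compare the two candidate coordinate systems is to regress the current response error on the previous-minus-current stimulus difference separately for Cartesian components (x, y) and polar components (radius, angle). The sketch below uses simulated locations with a built-in Cartesian bias and is only an illustration of that comparison, not the paper's analysis pipeline.

import numpy as np

rng = np.random.default_rng(4)

# Simulated 2D target locations and responses with a small Cartesian serial-dependence bias.
stim = rng.uniform(-1, 1, size=(500, 2))
prev = np.roll(stim, 1, axis=0)
resp = stim + 0.15 * (prev - stim) + rng.normal(0, 0.05, size=stim.shape)

def slope(x, y):
    """Least-squares slope of y on x (errors are roughly centered, so no intercept)."""
    return np.sum(x * y) / np.sum(x * x)

def wrap(a):
    """Wrap angular differences to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

err = resp - stim
diff = prev - stim

# Cartesian account: bias of each error component toward the previous location.
cart = [slope(diff[1:, i], err[1:, i]) for i in range(2)]

# Polar account: the same data expressed as radius and (wrapped) angle differences.
r_s, th_s = np.hypot(*stim.T), np.arctan2(stim[:, 1], stim[:, 0])
r_r, th_r = np.hypot(*resp.T), np.arctan2(resp[:, 1], resp[:, 0])
r_p, th_p = np.hypot(*prev.T), np.arctan2(prev[:, 1], prev[:, 0])
polar = [slope((r_p - r_s)[1:], (r_r - r_s)[1:]),
         slope(wrap(th_p - th_s)[1:], wrap(th_r - th_s)[1:])]

print("Cartesian slopes (x, y):", np.round(cart, 2))
print("Polar slopes (r, theta):", np.round(polar, 2))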


Subject(s)
Space Perception, Reaction Time/physiology, Space Perception/physiology