Results 1 - 20 of 89
1.
Int Immunopharmacol; 142(Pt A): 113099, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39265355

ABSTRACT

BACKGROUND: Immune checkpoint inhibitors (ICIs) have been widely used in the treatment of advanced cancers, but predicting their efficacy remains challenging. Traditional biomarkers are numerous but exhibit heterogeneity within populations. To make comprehensive use of ICI-related biomarkers, we aimed to conduct multidimensional feature selection and deep learning model construction. METHODS: We used statistical and machine learning methods to map features from different levels to next-generation sequencing gene expression. We integrated genes from different sources into the feature input of a deep learning model by means of a self-attention mechanism. RESULTS: We performed feature selection at the single-cell sequencing level, the PD-L1 (CD274) analysis level, the tumor mutational burden (TMB)/mismatch repair (MMR) level, and the somatic copy number alteration (SCNA) level, obtaining 96 feature genes. Based on the pan-cancer dataset, we trained a multi-task deep learning model. We tested the model on bladder urothelial carcinoma testing set 1 (AUC = 0.62, n = 298), bladder urothelial carcinoma testing set 2 (AUC = 0.66, n = 89), a non-small cell lung cancer testing set (AUC = 0.85, n = 27), and a skin cutaneous melanoma testing set (AUC = 0.71, n = 27). CONCLUSION: Our study demonstrates the potential of deep learning models for integrating multidimensional features to predict the outcome of ICI therapy. It also provides a potential methodological example for medical scenarios requiring the integration of multiple levels of features.
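The abstract above does not include implementation details. As a rough, minimal PyTorch sketch of the general idea it describes — treating selected feature genes as tokens, mixing them with self-attention, and feeding a pooled representation to multi-task prediction heads — the following assumes arbitrary layer sizes and names (none are from the paper):

    import torch
    import torch.nn as nn

    class AttentionFeatureIntegrator(nn.Module):
        """Toy sketch: each selected gene is a token; tokens are mixed with
        self-attention and a pooled summary feeds task-specific heads."""
        def __init__(self, n_genes=96, d_model=32, n_heads=4, n_tasks=2):
            super().__init__()
            self.embed = nn.Linear(1, d_model)              # per-gene scalar -> embedding
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.heads = nn.ModuleList([nn.Linear(d_model, 1) for _ in range(n_tasks)])

        def forward(self, x):                               # x: (batch, n_genes) expression
            tokens = self.embed(x.unsqueeze(-1))            # (batch, n_genes, d_model)
            mixed, _ = self.attn(tokens, tokens, tokens)    # self-attention across genes
            pooled = mixed.mean(dim=1)                      # (batch, d_model)
            return [torch.sigmoid(h(pooled)) for h in self.heads]  # one output per task

    model = AttentionFeatureIntegrator()
    outs = model(torch.randn(8, 96))                        # 8 samples, 96 feature genes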

2.
Phys Med Biol; 69(14), 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38959911

ABSTRACT

Objective.In recent years, convolutional neural networks, which typically focus on extracting spatial domain features, have shown limitations in learning global contextual information. However, frequency domain can offer a global perspective that spatial domain methods often struggle to capture. To address this limitation, we propose FreqSNet, which leverages both frequency and spatial features for medical image segmentation.Approach.To begin, we propose a frequency-space representation aggregation block (FSRAB) to replace conventional convolutions. FSRAB contains three frequency domain branches to capture global frequency information along different axial combinations, while a convolutional branch is designed to interact information across channels in local spatial features. Secondly, the multiplex expansion attention block extracts long-range dependency information using dilated convolutional blocks, while suppressing irrelevant information via attention mechanisms. Finally, the introduced Feature Integration Block enhances feature representation by integrating semantic features that fuse spatial and channel positional information.Main results.We validated our method on 5 public datasets, including BUSI, CVC-ClinicDB, CVC-ColonDB, ISIC-2018, and Luna16. On these datasets, our method achieved Intersection over Union (IoU) scores of 75.46%, 87.81%, 79.08%, 84.04%, and 96.99%, and Hausdorff distance values of 22.22 mm, 13.20 mm, 13.08 mm, 13.51 mm, and 5.22 mm, respectively. Compared to other state-of-the-art methods, our FreqSNet achieves better segmentation results.Significance.Our method can effectively combine frequency domain information with spatial domain features, enhancing the segmentation performance and generalization capability in medical image segmentation tasks.
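The exact FSRAB design is not reproduced here. As a hedged sketch of the general frequency-plus-spatial idea (a learned filter applied in the Fourier domain for global context, combined with a local convolutional branch), with all shapes and names assumed rather than taken from the paper:

    import torch
    import torch.nn as nn

    class FreqSpatialBlock(nn.Module):
        """Toy frequency/space block: learn a per-channel filter in the Fourier
        domain (global context) and add a 3x3 convolutional branch (local detail)."""
        def __init__(self, channels, h, w):
            super().__init__()
            # complex-valued frequency-domain weights stored as two real components
            self.freq_weight = nn.Parameter(torch.randn(channels, h, w // 2 + 1, 2) * 0.02)
            self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):                                  # x: (B, C, H, W)
            X = torch.fft.rfft2(x, norm="ortho")               # global frequency view
            W = torch.view_as_complex(self.freq_weight)
            x_freq = torch.fft.irfft2(X * W, s=x.shape[-2:], norm="ortho")
            return x_freq + self.spatial(x)                    # fuse global + local

    block = FreqSpatialBlock(channels=16, h=64, w=64)
    y = block(torch.randn(2, 16, 64, 64))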


Subject(s)
Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Humans; Neural Networks, Computer
3.
J Neurosci; 44(33), 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39019614

ABSTRACT

The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple object and task features, such as object size, shape, weight, and wrist orientation. However, these features have been mostly studied in isolation. In contrast, given the nonlinearity of motor control, its computations require multiple features to be incorporated concurrently. Therefore, the present study tested the hypothesis that grasp computations integrate multiple task features superadditively, in particular when these features are relevant for the same action phase. We asked male and female human participants to reach to grasp objects of different shapes and sizes with different wrist orientations. We also delayed movement onset using auditory signals that specified which effector to use. Using electroencephalography and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing different preparatory, phase-specific computations: visual object analysis to identify grasp points at abstract visual levels, and downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens to map the computations underlying sensorimotor control.
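As a generic illustration of the kind of model comparison described (not the authors' pipeline), a neural representational dissimilarity matrix can be regressed onto single-feature and conjunction model RDMs; a reliably positive conjunction weight beyond the additive terms is the signature a superadditive account predicts. The toy data and ordinary-least-squares fit below are assumptions:

    import numpy as np
    from scipy.spatial.distance import squareform

    def rdm_regression(neural_rdm, shape_rdm, size_rdm, conj_rdm):
        """Fit a neural RDM as a weighted sum of single-feature and conjunction
        model RDMs (upper triangles only); returns the fitted weights."""
        y = squareform(neural_rdm, checks=False)               # vectorise upper triangle
        X = np.column_stack([squareform(m, checks=False)
                             for m in (shape_rdm, size_rdm, conj_rdm)])
        X = np.column_stack([np.ones(len(y)), X])              # intercept
        betas, *_ = np.linalg.lstsq(X, y, rcond=None)
        return dict(zip(["intercept", "shape", "size", "conjunction"], betas))

    def sym_zero_diag(a):
        a = (a + a.T) / 2
        np.fill_diagonal(a, 0.0)
        return a

    # toy example with 12 experimental conditions
    rng = np.random.default_rng(0)
    n = 12
    neural, shape, size, conj = (sym_zero_diag(rng.random((n, n))) for _ in range(4))
    print(rdm_regression(neural, shape, size, conj))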


Asunto(s)
Fuerza de la Mano , Desempeño Psicomotor , Humanos , Masculino , Femenino , Fuerza de la Mano/fisiología , Desempeño Psicomotor/fisiología , Adulto , Adulto Joven , Percepción Visual/fisiología , Electroencefalografía , Estimulación Luminosa/métodos
4.
Front Comput Neurosci; 18: 1397819, 2024.
Article in English | MEDLINE | ID: mdl-39015744

ABSTRACT

Many studies have shown that the human visual system has two major functionally distinct cortical visual pathways: a ventral pathway, thought to be important for object recognition, and a dorsal pathway, thought to be important for spatial cognition. According to our own and others' previous studies, artificial neural networks with two segregated pathways can determine objects' identities and locations more accurately and efficiently than one-pathway artificial neural networks. In addition, we showed that these two segregated artificial cortical visual pathways can each process identity and spatial information of visual objects independently and differently. However, when using such networks to process multiple objects' identities and locations, a binding problem arises because the networks may not associate each object's identity with its location correctly. In a previous study, we constrained the binding problem by training the artificial identity pathway to retain relative location information of objects. This design uses a location map to constrain the binding problem. One limitation of that study was that we only considered two attributes of our objects (identity and location) and only one possible map (location) for binding. However, typically the brain needs to process and bind many attributes of an object, and any of these attributes could be used to constrain the binding problem. In our current study, using visual objects with multiple attributes (identity, luminance, orientation, and location) that need to be recognized, we tried to find the best map (among an identity map, a luminance map, an orientation map, or a location map) to constrain the binding problem. We found that in our experimental simulations, when visual attributes are independent of each other, a location map is always a better choice than the other kinds of maps examined for constraining the binding problem. Our findings agree with previous neurophysiological findings showing that the organization or map in many visual cortical areas is primarily retinotopic or spatial.

5.
Comput Struct Biotechnol J; 23: 2083-2096, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38803517

ABSTRACT

Understanding the structural similarity between genomes is pivotal in classification and phylogenetic analysis. As the number of known genomes rockets, alignment-free methods have gained considerable attention. Among these methods, the natural vector method stands out as it represents sequences as vectors using statistical moments, enabling effective clustering based on families in biological taxonomy. However, determining an optimal metric that combines different elements in natural vectors remains challenging due to the absence of a rigorous theoretical framework for weighting different k-mers and orders. In this study, we address this challenge by transforming the determination of optimal weights into an optimization problem and resolving it through gradient-based techniques. Our experimental results underscore the substantial improvement in classification accuracy achieved by employing these optimal weights, reaching an impressive 92.73% on the testing set, surpassing other alignment-free methods. On one hand, our method offers an outstanding metric for virus classification, and on the other hand, it provides valuable insights into feature integration within alignment-free methods.
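The abstract frames the choice of weights as an optimization problem solved with gradient-based techniques. A minimal sketch of that idea follows; the loss, constraint, and data are illustrative assumptions, not the authors' formulation:

    import torch

    def learn_feature_weights(vectors, labels, n_steps=200, lr=0.05):
        """Learn a weighting of natural-vector components by gradient descent so
        that a weighted Euclidean metric shrinks within-class distances and
        enlarges between-class distances. Weights are constrained to sum to 1."""
        vectors = torch.as_tensor(vectors, dtype=torch.float32)   # (N, D)
        labels = torch.as_tensor(labels)
        logits = torch.zeros(vectors.shape[1], requires_grad=True)
        opt = torch.optim.Adam([logits], lr=lr)
        same = (labels[:, None] == labels[None, :]).float()
        for _ in range(n_steps):
            w = torch.softmax(logits, dim=0)                       # positive, sums to 1
            scaled = vectors * w.sqrt()
            d = torch.cdist(scaled, scaled)                        # weighted distances
            loss = (same * d).sum() / same.sum() \
                 - ((1 - same) * d).sum() / (1 - same).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.softmax(logits, dim=0).detach()

    weights = learn_feature_weights(torch.randn(60, 12), torch.randint(0, 4, (60,)))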

6.
J Neurosci; 44(29), 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38789263

ABSTRACT

The intention to act influences the computations of various task-relevant features. However, little is known about the time course of these computations. Furthermore, it is commonly held that these computations are governed by conjunctive neural representations of the features. However, support for this view comes from paradigms that arbitrarily combine task features and affordances, thus requiring representations in working memory. Therefore, the present study used electroencephalography and a well-rehearsed task with features that afford minimal working memory representations to investigate the temporal evolution of feature representations and their potential integration in the brain. Female and male human participants grasped objects or touched them with a knuckle. Objects had different shapes and were made of heavy or light materials, with shape and weight being relevant for grasping but not for "knuckling." Multivariate analysis showed that representations of object shape were similar for grasping and knuckling. However, only for grasping did early shape representations reactivate at later phases of grasp planning, suggesting that sensorimotor control signals feed back to the early visual cortex. Grasp-specific representations of material/weight arose only during grasp execution, after object contact during the load phase. A trend for integrated representations of shape and material also became grasp-specific, but only briefly around movement onset. These results suggest that the brain generates action-specific representations of relevant features as required for the different subcomponents of its action computations. Our results argue against the view that goal-directed actions inevitably join all features of a task into a sustained and unified neural representation.


Asunto(s)
Electroencefalografía , Fuerza de la Mano , Movimiento , Desempeño Psicomotor , Humanos , Masculino , Femenino , Adulto , Desempeño Psicomotor/fisiología , Fuerza de la Mano/fisiología , Adulto Joven , Movimiento/fisiología , Estimulación Luminosa/métodos , Percepción Visual/fisiología , Memoria a Corto Plazo/fisiología
7.
Phys Med Biol; 69(10), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38593831

ABSTRACT

Objective. To go beyond the deficiencies of the three conventional multimodal fusion strategies (i.e., input-, feature- and output-level fusion), we propose a bidirectional attention-aware fluid pyramid feature integrated fusion network (BAF-Net) with cross-modal interactions for multimodal medical image diagnosis and prognosis. Approach. BAF-Net is composed of two identical branches to preserve the unimodal features and one bidirectional attention-aware distillation stream to progressively assimilate cross-modal complements and to learn supplementary features in both bottom-up and top-down processes. Fluid pyramid connections were adopted to integrate the hierarchical features at different levels of the network, and channel-wise attention modules were exploited to mitigate cross-modal cross-level incompatibility. Furthermore, depth-wise separable convolution was introduced to fuse the cross-modal cross-level features and greatly alleviate the increase in parameters. The generalization abilities of BAF-Net were evaluated on two clinical tasks: (1) an in-house PET-CT dataset with 174 patients for differentiation between lung cancer and pulmonary tuberculosis (LC-PTB), and (2) a public multicenter PET-CT head and neck cancer dataset with 800 patients from nine centers for overall survival prediction. Main results. On the LC-PTB dataset, improved performance was found for BAF-Net (AUC = 0.7342) compared with the input-level fusion model (AUC = 0.6825; p < 0.05), feature-level fusion model (AUC = 0.6968; p = 0.0547), and output-level fusion model (AUC = 0.7011; p < 0.05). On the H&N cancer dataset, BAF-Net (C-index = 0.7241) outperformed the input-, feature-, and output-level fusion models, with C-index increments of 2.95%, 3.77%, and 1.52% (p = 0.3336, 0.0479 and 0.2911, respectively). The ablation experiments demonstrated the effectiveness of all the designed modules with regard to all the evaluated metrics on both datasets. Significance. Extensive experiments on two datasets demonstrated better performance and robustness of BAF-Net than the three conventional fusion strategies and PET or CT unimodal networks in terms of diagnosis and prognosis.
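BAF-Net itself is not reproduced here. The sketch below only illustrates two of the ingredients named in the abstract — channel-wise attention and depth-wise separable convolution — applied to fusing a PET and a CT feature map; all layer names and sizes are assumptions:

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel weighting."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):                                   # x: (B, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))                     # global average pool
            return x * w[:, :, None, None]

    class DepthwiseSeparableFusion(nn.Module):
        """Fuse two modality feature maps with few parameters: concatenate,
        re-weight channels, then apply depthwise + pointwise convolutions."""
        def __init__(self, channels):
            super().__init__()
            c2 = channels * 2
            self.attn = ChannelAttention(c2)
            self.depthwise = nn.Conv2d(c2, c2, 3, padding=1, groups=c2)
            self.pointwise = nn.Conv2d(c2, channels, 1)

        def forward(self, pet_feat, ct_feat):
            x = self.attn(torch.cat([pet_feat, ct_feat], dim=1))
            return self.pointwise(self.depthwise(x))

    fuse = DepthwiseSeparableFusion(channels=32)
    out = fuse(torch.randn(2, 32, 28, 28), torch.randn(2, 32, 28, 28))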


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Humanos , Pronóstico , Procesamiento de Imagen Asistido por Computador/métodos , Tomografía Computarizada por Tomografía de Emisión de Positrones , Neoplasias Pulmonares/diagnóstico por imagen , Imagen Multimodal , Neoplasias de Cabeza y Cuello/diagnóstico por imagen
8.
Vision Res; 215: 108346, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38171199

ABSTRACT

We compare the recognition of foveal crowded Landolt Cs of two sizes: brief (40 ms), large, low-contrast Cs and high-contrast (1 sec) tests at the resolution limit of the visual system. In different series, the test Landolt C was surrounded by two identical distractors located symmetrically along the horizontal or by a single distractor. The distractors were Landolt Cs or rings. At the resolution limit, the critical spacing was similar in the two series and did not depend on the type of distractor. The result supports the hypothesis that crowding at the resolution limit occurs when both the test and the distractors fall into the same smallest receptive field responsible for the target recognition. For large stimuli, at almost all separations distractors of the same shape caused greater impairment than did rings, and recognition errors were non-random. The critical spacing was equal to 0.5 test diameters only in the presence of one distracting Landolt C. This result suggests that attention is involved: When one distractor is added, involuntary attention, which is directed to the centre of gravity of the stimulus, can lead to confusion of features that are present in both tests and distractors and thus to non-random errors.


Asunto(s)
Atención , Reconocimiento Visual de Modelos , Humanos , Reconocimiento en Psicología , Fóvea Central , Aglomeración
9.
Neural Netw; 169: 532-541, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37948971

ABSTRACT

A proposed method, Enhancement, Integration, and Expansion, aims to activate the representation of detailed features for occluded person re-identification. Region and context are two important and complementary features, and integrating them in an occluded environment can effectively improve the robustness of the model. Firstly, a self-enhancement module is designed. Based on the constructed multi-stream architecture, rich and meaningful feature interference is introduced in the feature extraction stage to enhance the model's ability to perceive noise. Next, a collaborative integration module similar to cascading cross-attention is proposed. By studying the intrinsic interaction patterns of regional and contextual features, it adaptively fuses features across streams and enhances the diverse and complete representation of internal information. The module is not only robust to complex occlusions but also mitigates the feature interference caused by similar appearances or scenes. Finally, a matching expansion module that enhances feature discriminability and completeness is proposed, providing more stable and accurate features for recognition. Comparisons with state-of-the-art methods on occluded and holistic datasets demonstrate the advantages of the proposed method, and extensive ablation studies confirm the effectiveness of each module.
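As a rough sketch of the cross-stream idea described (not the authors' module), a "region" stream and a "context" stream can attend to each other with cross-attention before a fused projection; dimensions and names are assumptions:

    import torch
    import torch.nn as nn

    class CrossStreamFusion(nn.Module):
        """Toy bidirectional cross-attention between a 'region' stream and a
        'context' stream, followed by a fused projection."""
        def __init__(self, d_model=64, n_heads=4):
            super().__init__()
            self.r2c = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.c2r = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.proj = nn.Linear(2 * d_model, d_model)

        def forward(self, region, context):             # both: (batch, tokens, d_model)
            r, _ = self.r2c(region, context, context)   # region queries context
            c, _ = self.c2r(context, region, region)    # context queries region
            fused = torch.cat([r.mean(1), c.mean(1)], dim=-1)
            return self.proj(fused)                     # (batch, d_model) descriptor

    fusion = CrossStreamFusion()
    desc = fusion(torch.randn(8, 6, 64), torch.randn(8, 49, 64))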


Asunto(s)
Identificación Biométrica , Redes Neurales de la Computación , Humanos
10.
Psychophysiology; 61(3): e14467, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37990794

ABSTRACT

Our sensory system is able to build a unified perception of the world which, although rich, is limited and inaccurate. Sometimes, features from different objects are erroneously combined. At the neural level, the role of the parietal cortex in feature integration is well known. However, the brain dynamics underlying correct and incorrect feature integration are less clear. To explore the temporal dynamics of feature integration, we studied the modulation of different frequency bands in trials in which feature integration was correct or incorrect. Participants responded to the color of a shape target surrounded by distractors. A calibration procedure ensured that accuracy was around 70% for each participant. To explore the role of expectancy in feature integration, we introduced an unexpected feature to the target in the last blocks of trials. Results demonstrated the contribution of several frequency bands to feature integration. Alpha and beta power were reduced for hits compared to illusions. Moreover, gamma power was overall larger during the experiment for participants who were aware of the unexpected target presented during the last blocks of trials (compared to unaware participants). These results demonstrate that feature integration is a complex process that can go wrong at different stages of information processing and is influenced by top-down expectancies.


Asunto(s)
Encéfalo , Cognición , Humanos , Lóbulo Parietal , Percepción Visual , Estimulación Luminosa/métodos
11.
Proc Biol Sci; 290(2012): 20232134, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38052443

ABSTRACT

We reveal a unique visual perception before feature-integration of colour and motion in infants. Visual perception is established by the integration of multiple features, such as colour and motion direction. The mechanism of feature integration benefits from the ongoing interplay between feedforward and feedback loops, yet our comprehension of this causal connection remains incomplete. Researchers have explored the role of recurrent processing in feature integration by studying a visual illusion called 'misbinding', wherein visual characteristics are erroneously merged, resulting in a perception distinct from the originally presented stimuli. Anatomical investigations have revealed that the neural pathways responsible for recurrent connections are underdeveloped in early infants. Therefore, there is a possibility that younger infants could potentially perceive the physically presented visual information that adults miss due to misbinding. Here, we demonstrate that infants less than half a year old showed no misbinding; thus, they perceived the physically presented visual information, while infants more than half a year old perceived incorrectly integrated visual information, showing misbinding. Our findings indicate that recurrent processing barely functions in infants younger than six months of age and that visual information that should have been originally integrated is perceived as it is without being integrated.


Asunto(s)
Ilusiones , Percepción de Movimiento , Adulto , Humanos , Lactante , Percepción Visual
12.
Percept Mot Skills; 130(6): 2430-2449, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37905513

ABSTRACT

Previous studies of illusory conjunction (IC) have mainly focused on alphabetic languages, and the IC mechanism for Chinese words, an ideographic writing system, remains poorly understood. In the present study, we aimed to investigate the dynamic changes of IC effects for Chinese words under different stimulus exposure times and spatial arrangements. We conducted two experiments with a 3 (Condition: IC, non-IC-same, non-IC-different) × 3 (Exposure time: 38 ms, 88 ms, 138 ms) within-subject design. The results showed that in the IC condition, the two characters recombined regardless of exposure time as long as they could form an orthographically correct new word, demonstrating the universality of IC. In the non-IC conditions, increasing exposure time decreased response time and significantly reduced error rate, indicating that attention played a decisive role in perceptual processing. Spatial arrangement had no impact on IC production. These findings support the feature confirmation account, suggesting that attention modulates IC through top-down feature confirmation processes. These data expand our understanding of IC mechanisms, validate the role of attention in feature confirmation, and elucidate the distinctive mechanism of Chinese word IC, which is influenced by both low-level visual processing and high-level cognitive control.


Asunto(s)
Ilusiones , Humanos , Percepción Visual/fisiología , Lenguaje , Atención/fisiología , Tiempo de Reacción , Reconocimiento Visual de Modelos
13.
Cogn Neuropsychol; 40(3-4): 186-213, 2023.
Article in English | MEDLINE | ID: mdl-37858291

ABSTRACT

Some dyslexics cannot process multiple letters simultaneously. It has been argued that this reduced visuo-attentional (VA) letter span could result from poor reading ability and experience. Here, moving away from the reading context, we showed that a dyslexic group exhibited slower visual search than a group of normal readers for "symbols," defined as graphic stimuli made up of separable visual features, but not for filled objects. Slowness in symbol visual search was explained by a reduced VA field and atypical ocular behaviour when processing those letter-like stimuli, and was associated with a reduced VA letter span and impaired elementary visuo-spatial perception. Such a basic visual search deficit can hardly be attributed to poor reading ability and experience. Moreover, because it is specific to letter-like stimuli (i.e., "symbols"), it can specifically hinder reading acquisition. Symbol visual search can easily be tested in the pre-reading phase, opening up prospects for early risk detection and prevention of VA dyslexia.


Asunto(s)
Dislexia , Percepción Visual , Humanos , Lectura , Atención , Percepción Espacial
14.
Sensors (Basel); 23(17), 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37687971

ABSTRACT

Remote sensing scene objective recognition (RSSOR) has significant application value in both military and civilian fields. Convolutional neural networks (CNNs) have greatly advanced intelligent objective recognition technology for remote sensing scenes, but most CNN-based methods for high-resolution RSSOR either use only the feature map of the last layer or directly fuse the feature maps from various layers by summation. This not only ignores the useful relationship information between adjacent layers but also leads to redundancy and loss in the feature maps, hindering further improvement of recognition accuracy. In this study, a contextual, relational attention-based recognition network (CRABR-Net) is presented. It extracts convolutional feature maps from different CNN layers, emphasizes important feature content with a simple, parameter-free attention module (SimAM), fuses adjacent feature maps through a complementary relationship feature map calculation, improves feature learning through an enhanced relationship feature map calculation, and finally uses the concatenated feature maps from different layers for RSSOR. Experimental results show that CRABR-Net exploits the relationships between different CNN layers to improve recognition performance and achieves better results than several state-of-the-art algorithms, with average accuracies on AID, UC-Merced, and RSSCN7 of up to 96.46%, 99.20%, and 95.43% under generic training ratios.
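SimAM is a published, parameter-free attention module; the widely used public formulation weights each activation by an energy-based saliency term. A minimal sketch paraphrasing that formulation (not the CRABR-Net code):

    import torch

    def simam(x, e_lambda=1e-4):
        """Parameter-free SimAM-style attention: activations far from their
        channel mean receive higher weights; e_lambda regularises the energy."""
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)       # squared deviation
        v = d.sum(dim=(2, 3), keepdim=True) / n                 # per-channel variance
        e_inv = d / (4 * (v + e_lambda)) + 0.5                  # inverse energy
        return x * torch.sigmoid(e_inv)

    y = simam(torch.randn(2, 16, 32, 32))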

15.
Foods; 12(15), 2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37569154

ABSTRACT

Real-time and accurate awareness of the grain situation is beneficial for making targeted and dynamic adjustments to cleaning parameters and strategies, leading to efficient and effective removal of impurities with minimal losses. In this study, harvested maize was employed as the raw material, and a specialized object detection network for impurity-containing maize images was developed to determine the types and distribution of impurities during cleaning operations. Building on the classic Faster Region-based Convolutional Neural Network (Faster R-CNN), EfficientNetB7 was introduced as the backbone of the feature learning network, and a cross-stage feature integration mechanism was embedded to obtain global features containing multi-scale mappings. The spatial information and semantic descriptions of feature matrices from different hierarchies could be fused through continuous convolution and upsampling operations. At the same time, taking into account the geometric properties of the objects to be detected and the images' resolution, an adaptive region proposal network (ARPN) was designed and utilized to generate candidate boxes of appropriate sizes for the detectors, which was beneficial for the capture and localization of tiny objects. The effectiveness of the proposed tiny object detection model and of each improved component was validated through ablation experiments on the constructed RGB impurity-containing image datasets.
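The ARPN and cross-stage integration are specific to the paper and not reproduced here. As a hedged sketch of the general recipe — an EfficientNetB7 feature extractor as a Faster R-CNN backbone with anchor sizes chosen for small objects — using torchvision, with all sizes and class counts illustrative:

    import torch
    import torchvision
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.anchor_utils import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    # EfficientNet-B7 feature extractor as the detection backbone
    backbone = torchvision.models.efficientnet_b7(weights=None).features
    backbone.out_channels = 2560                       # channels of the last feature map

    # small anchor sizes chosen for tiny impurities (values are illustrative)
    anchors = AnchorGenerator(sizes=((8, 16, 32, 64),),
                              aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

    model = FasterRCNN(backbone, num_classes=4,        # background + 3 impurity classes
                       rpn_anchor_generator=anchors, box_roi_pool=roi_pool)
    model.eval()
    with torch.no_grad():
        preds = model([torch.rand(3, 512, 512)])       # list of RGB tensors in [0, 1]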

16.
Sensors (Basel); 23(13), 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447638

ABSTRACT

Camouflaged object detection (COD) aims to segment camouflaged objects that blend perfectly into their surroundings. Because of the low boundary contrast between camouflaged objects and their surroundings, their detection poses a significant challenge. Despite the many excellent camouflaged object detection methods developed in recent years, issues such as boundary refinement and multi-level feature extraction and fusion still need further exploration. In this paper, we propose a novel multi-level feature integration network (MFNet) for camouflaged object detection. Firstly, we design an edge guidance module (EGM) that improves COD performance by providing additional boundary semantic information, combining high-level semantic information with low-level spatial details to model the edges of camouflaged objects. Additionally, we propose a multi-level feature integration module (MFIM), which leverages the fine local information of low-level features and the rich global information of high-level features across three adjacent levels to provide a supplementary feature representation for the current-level features, effectively integrating the full contextual semantic information. Finally, we propose a context aggregation refinement module (CARM) to efficiently aggregate and refine the cross-level features and obtain clear prediction maps. Extensive experiments on three benchmark datasets show that MFNet is an effective COD model and outperforms other state-of-the-art models on all four evaluation metrics (Sα, Eϕ, Fβw, and MAE).
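As a rough illustration of an edge-guidance idea like the one described (not the authors' EGM), high-level semantic features can be upsampled and fused with low-level spatial features to predict an edge map; channel counts are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EdgeGuidance(nn.Module):
        """Toy edge-guidance head: fuse upsampled high-level semantics with
        low-level spatial detail and predict a single-channel edge map."""
        def __init__(self, low_ch=64, high_ch=256, mid_ch=64):
            super().__init__()
            self.reduce_low = nn.Conv2d(low_ch, mid_ch, 1)
            self.reduce_high = nn.Conv2d(high_ch, mid_ch, 1)
            self.fuse = nn.Sequential(
                nn.Conv2d(2 * mid_ch, mid_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(mid_ch, 1, 1))                     # edge logits

        def forward(self, low_feat, high_feat):
            high_up = F.interpolate(self.reduce_high(high_feat),
                                    size=low_feat.shape[-2:], mode="bilinear",
                                    align_corners=False)
            x = torch.cat([self.reduce_low(low_feat), high_up], dim=1)
            return self.fuse(x)                              # supervise with edge labels

    egm = EdgeGuidance()
    edges = egm(torch.randn(2, 64, 88, 88), torch.randn(2, 256, 22, 22))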


Asunto(s)
Benchmarking , Semántica
17.
Neuroimage; 278: 120298, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37517573

ABSTRACT

Pre-stimulus alpha (α) activity can influence the perception of briefly presented, low-contrast stimuli. The underlying mechanisms are often thought to affect perception exactly at the time of presentation. In addition, it has been suggested that α cycles determine temporal windows of integration. However, in everyday situations, stimuli are usually presented for periods longer than ∼100 ms, and perception is often an integration of information across space and time; moving objects are just one example. Hence, the question is whether α activity also plays a role in temporal integration, especially when stimuli are integrated over several α cycles. Using electroencephalography (EEG), we investigated the relationship between pre-stimulus brain activity and long-lasting integration in the sequential metacontrast paradigm (SQM), in which two opposite vernier offsets, embedded in a stream of lines, are unconsciously integrated into a single percept. We show that increases in α power, even 300 ms before the stimulus, affected the probability of reporting the first offset, shown at the very beginning of the SQM. This effect was mediated by a systematic slowing of the α rhythm that followed the peak in α power. No phase effects were found. Together, our results demonstrate a cascade of neural changes, following spontaneous bursts of α activity and extending beyond a single moment, which influences the sensory representation of visual features for hundreds of milliseconds. Crucially, because feature integration in the SQM occurs before a conscious percept is elicited, this also provides evidence that α activity is linked to mechanisms regulating unconscious processing.
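As a generic illustration of the kind of measure involved (not the authors' pipeline), per-trial pre-stimulus alpha power can be estimated from the EEG segment preceding stimulus onset; the sampling rate, window length, and band below are assumptions:

    import numpy as np
    from scipy.signal import welch

    def prestimulus_alpha_power(epochs, sfreq=500.0, band=(8.0, 12.0)):
        """epochs: (n_trials, n_samples) pre-stimulus EEG from one channel.
        Returns the mean alpha-band power per trial from a Welch PSD."""
        freqs, psd = welch(epochs, fs=sfreq,
                           nperseg=min(256, epochs.shape[-1]), axis=-1)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[:, mask].mean(axis=-1)

    # toy data: 100 trials, 300 ms of pre-stimulus signal at 500 Hz
    rng = np.random.default_rng(1)
    alpha = prestimulus_alpha_power(rng.standard_normal((100, 150)))
    # e.g. median-split trials on `alpha` and compare rates of reporting the first offset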


Asunto(s)
Electroencefalografía , Inconsciencia , Humanos , Electroencefalografía/métodos , Estado de Conciencia , Ritmo alfa/fisiología , Estimulación Luminosa/métodos , Percepción Visual/fisiología
18.
J Exp Biol; 226(8), 2023 Apr 15.
Article in English | MEDLINE | ID: mdl-37066993

ABSTRACT

Spatially invariant feature detection is a property of many visual systems that rely on visual information provided by two eyes. However, how information across both eyes is integrated for invariant feature detection is not fully understood. Here, we investigated spatial invariance of looming responses in descending neurons (DNs) of Drosophila melanogaster. We found that multiple looming responsive DNs integrate looming information across both eyes, even though their dendrites are restricted to a single visual hemisphere. One DN, the giant fiber (GF), responds invariantly to looming stimuli across tested azimuthal locations. We confirmed visual information propagates to the GF from the contralateral eye, through an unidentified pathway, and demonstrated that the absence of this pathway alters GF responses to looming stimuli presented to the ipsilateral eye. Our data highlight a role for bilateral visual integration in generating consistent, looming-evoked escape responses that are robust across different stimulus locations and parameters.


Asunto(s)
Drosophila melanogaster , Drosophila , Animales , Drosophila melanogaster/fisiología , Neuronas/fisiología , Estimulación Luminosa , Reacción de Fuga/fisiología
19.
Mem Cognit; 51(5): 1076-1089, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36622505

ABSTRACT

Spatial and temporal information are two major feature dimensions of human movements. How these two types of information are represented in working memory-whether as integrated units or as individual features-influences how much information might be retained and how the retained information might be manipulated. In this study, we investigated how spatial (path/trajectory) and temporal (speed/rhythm) information of complex whole-body movements are represented in working memory under a more ecologically valid condition wherein the spatiotemporal continuity of movement sequences was considered. We found that the spatial and temporal information are not automatically integrated but share the storage capacity and compete for a common pool of cognitive resources. The finding rejects the strong form of object-based representation and supports the partial independence of spatial and temporal processing. Nevertheless, we also found that contextual factors, such as the way movements are organized and displayed, can further modulate the level of object-based representation and spatiotemporal integration.


Asunto(s)
Memoria a Corto Plazo , Memoria Espacial , Humanos
20.
Cereb Cortex; 33(4): 1440-1451, 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-35510933

ABSTRACT

Our sensory system constantly receives information from the environment and our own body. Despite our impression to the contrary, we remain largely unaware of this information and often cannot report it correctly. Although perceptual processing does not require conscious effort on the part of the observer, it is often complex, giving rise to errors such as incorrect integration of features (illusory conjunctions). In the present study, we use functional magnetic resonance imaging to study the neural bases of feature integration in a dual task that produced ~30% illusions. A distributed set of regions demonstrated increased activity for correct compared to incorrect (illusory) feature integration, with increased functional coupling between occipital and parietal regions. In contrast, incorrect feature integration (illusions) was associated with increased occipital (V1-V2) responses at early stages, reduced functional connectivity between right occipital regions and the frontal eye field at later stages, and an overall decrease in coactivation between occipital and parietal regions. These results underscore the role of parietal regions in feature integration and highlight the relevance of functional occipito-frontal interactions in perceptual processing.


Asunto(s)
Ilusiones , Humanos , Reconocimiento Visual de Modelos , Atención/fisiología , Lóbulo Parietal/diagnóstico por imagen , Lóbulo Occipital/diagnóstico por imagen , Imagen por Resonancia Magnética