Results 1 - 20 of 90
1.
Trends Cogn Sci ; 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39256075

ABSTRACT

Recent research by Lavan et al. explores how individuals form complex impressions from voices. Using electroencephalography and behavioral measures, the study identifies distinct time courses for discerning traits, with early acoustic processing preceding higher-order perception. These findings shed light on the temporal dynamics of voice-based person perception and its neural underpinnings.

2.
Elife ; 13: 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136204

ABSTRACT

A neural signature of serial dependence has been found, which mirrors the attractive bias of visual information seen in behavioral experiments.


Subject(s)
Visual Perception, Humans, Animals, Visual Perception/physiology
3.
Med Image Anal ; 98: 103305, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39168075

ABSTRACT

Three-dimensional (3D) freehand ultrasound (US) is a widely used imaging modality that allows non-invasive imaging of anatomy without radiation exposure. Surface reconstruction from US volumes is vital for acquiring the accurate anatomical structures needed for modeling, registration, and visualization. However, traditional methods cannot produce high-quality surfaces because of image noise. Despite improvements in smoothness, continuity, and resolution from deep learning approaches, research on surface reconstruction in freehand 3D US is still limited. This study introduces FUNSR, a self-supervised neural implicit surface reconstruction method that learns signed distance functions (SDFs) from US volumes. In particular, FUNSR iteratively learns the SDFs by moving 3D queries sampled around volumetric point clouds toward the surface, guided by two novel geometric constraints: a sign consistency constraint and an on-surface constraint with adversarial learning. Our approach has been thoroughly evaluated across four datasets to demonstrate its adaptability to various anatomical structures, including a hip phantom dataset, two vascular datasets, and one publicly available prostate dataset. We also show that smooth and continuous representations greatly enhance the visual appearance of US data. Furthermore, we highlight the potential of our method to improve segmentation performance and its robustness to noise distribution and motion perturbation.
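
To illustrate the query-pulling idea described above, here is a minimal, hypothetical PyTorch sketch of learning a signed distance function from a point cloud in a Neural-Pull style: queries sampled near the cloud are moved along the predicted SDF gradient toward the surface. The network size, sampling scheme, and loss are placeholder choices; FUNSR's sign-consistency and adversarial on-surface constraints are not reproduced here.

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Coordinate MLP: 3D point -> signed distance (toy stand-in, not FUNSR's network)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

def pull_loss(model, queries, cloud):
    """Move each query along the predicted SDF gradient and penalise the distance
    of the pulled point to its nearest neighbour in the (US-derived) point cloud."""
    queries = queries.requires_grad_(True)
    sdf = model(queries)
    grad = torch.autograd.grad(sdf.sum(), queries, create_graph=True)[0]
    grad = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    pulled = queries - sdf * grad                      # projection onto the implicit surface
    d = torch.cdist(pulled, cloud).min(dim=-1).values  # distance to nearest cloud point
    return d.mean()

# toy usage: a random "point cloud" with queries jittered around it
cloud = torch.rand(2000, 3)
queries = cloud + 0.05 * torch.randn_like(cloud)
model = SDFNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = pull_loss(model, queries, cloud)
loss.backward()
opt.step()
```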


Subject(s)
Three-Dimensional Imaging, Ultrasonography, Humans, Three-Dimensional Imaging/methods, Ultrasonography/methods, Imaging Phantoms, Male, Prostate/diagnostic imaging, Algorithms, Deep Learning, Neural Networks (Computer)
4.
Med Phys ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137294

ABSTRACT

BACKGROUND: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method of guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE: This study introduces a novel network that cohesively unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS: The approach combines a generation network (G) with a deformable registration network (R), optimizing them jointly for MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS: Compared with the baseline Pix2Pix method (MAE 124.95 ± 30.74 HU), the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy relative to the reference proton plans. CONCLUSIONS: This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body regions with varied anatomic changes between the corresponding MR and CT images.
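
To make the alternating scheme concrete, below is a heavily simplified, hypothetical 2D PyTorch sketch: a toy generator G synthesizes a CT from MR, a toy registration network R predicts a displacement field that warps the reference CT toward the synthetic CT, and the two sub-problems are optimized in alternation. The networks, warping, and losses are placeholders (the paper uses a UNet generator and an INR-based registration network), so this only illustrates the joint translation-registration loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyG(nn.Module):                      # stand-in for the UNet generator (MR -> sCT)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, mr):
        return self.net(mr)

class TinyR(nn.Module):                      # stand-in for the INR-based registration network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))  # 2-channel DVF in normalised coords

def warp(img, dvf):
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    flow = dvf.permute(0, 2, 3, 1)
    return F.grid_sample(img, grid + flow, align_corners=True)

mr, ct_ref = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
G, R = TinyG(), TinyR()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_r = torch.optim.Adam(R.parameters(), lr=1e-4)

for _ in range(2):                            # alternate the two sub-problems
    # registration step: align the reference CT to the current synthetic CT
    sct = G(mr).detach()
    dvf = R(ct_ref, sct)
    loss_r = F.l1_loss(warp(ct_ref, dvf), sct)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # synthesis step: match the (registered) reference CT
    with torch.no_grad():
        ct_aligned = warp(ct_ref, R(ct_ref, G(mr)))
    loss_g = F.l1_loss(G(mr), ct_aligned)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```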

5.
Trends Cogn Sci ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39025769

ABSTRACT

The quality space hypothesis about conscious experience proposes that conscious sensory states are experienced in relation to other possible sensory states. For instance, the colour red is experienced as being more like orange, and less like green or blue. Recent empirical findings suggest that subjective similarity space can be explained in terms of similarities in neural activation patterns. Here, we consider how localist, workspace, and higher-order theories of consciousness can accommodate claims about the qualitative character of experience and functionally support a quality space. We review existing empirical evidence for each of these positions, and highlight novel experimental tools, such as altering local activation spaces via brain stimulation or behavioural training, that can distinguish these accounts.

6.
Phys Med Biol ; 69(15), 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38942004

ABSTRACT

Reducing the radiation dose causes x-ray computed tomography (CT) images to suffer from heavy noise and artifacts, which inevitably interfere with subsequent clinical diagnosis and analysis. Leading works have explored diffusion models for low-dose CT imaging to avoid the structure degeneration and blurring effects of previous deep denoising models. However, most of them begin their generative processes with Gaussian noise, which carries little or no structural prior from the clean data distribution, leading to long inference times and unsatisfactory reconstruction quality. To alleviate these problems, this paper presents a Structure-Aware Diffusion model (SAD), an end-to-end self-guided learning framework for high-fidelity CT image reconstruction. First, SAD builds a nonlinear diffusion bridge between the clean and degraded data distributions, which can directly learn the implicit physical degradation prior from observed measurements. Second, SAD integrates a prompt learning mechanism and implicit neural representation into the diffusion process, where rich and diverse structure representations extracted from the degraded inputs are exploited as prompts, providing global and local structure priors to guide CT image reconstruction. Finally, we devise an efficient self-guided diffusion architecture with an iterative update strategy, which further refines the structural prompts during each generative step to drive finer image reconstruction. Extensive experiments on the AAPM-Mayo and LoDoPaB-CT datasets demonstrate that SAD achieves superior performance in terms of noise removal, structure preservation, and blind-dose generalization, with few generative steps, even only one.
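
A loose sketch of the "start from the degraded distribution" idea, under stated assumptions: instead of denoising from pure Gaussian noise, a small network learns to recover the clean image from points on a linear bridge between normal-dose and low-dose images, conditioned on a crude structure prompt (here simply the degraded input). This is not SAD's architecture, prompt-learning mechanism, or sampler; it only illustrates why few refinement steps can suffice.

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Toy conditional refiner: (bridge state, structure prompt) -> clean estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x_t, prompt):
        return self.net(torch.cat([x_t, prompt], dim=1))

def bridge_sample(x_clean, x_low, t, sigma=0.05):
    """Forward 'bridge': interpolate toward the degraded image plus a small noise term."""
    return (1 - t) * x_clean + t * x_low + sigma * t * torch.randn_like(x_clean)

model = Refiner()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x_clean, x_low = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)   # placeholder pairs
prompt = x_low                                   # crude structure prompt: the degraded input itself

t = torch.rand(4, 1, 1, 1)                       # random bridge time per sample
x_t = bridge_sample(x_clean, x_low, t)
loss = nn.functional.mse_loss(model(x_t, prompt), x_clean)
loss.backward(); opt.step()
```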


Subject(s)
Computer-Assisted Image Processing, Radiation Dosage, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Diffusion, Humans
7.
Cell Rep ; 43(5): 114172, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38703366

ABSTRACT

Changes in sound-evoked responses in the auditory cortex (ACtx) occur during learning, but how learning alters neural responses in different ACtx subregions and changes their interactions is unclear. To address these questions, we developed an automated training and widefield imaging system to longitudinally track the neural activity of all mouse ACtx subregions during a tone discrimination task. We find that responses in primary ACtx are highly informative of learned stimuli and behavioral outcomes throughout training. In contrast, representations of behavioral outcomes in the dorsal posterior auditory field, learned stimuli in the dorsal anterior auditory field, and inter-regional correlations between primary and higher-order areas are enhanced with training. Moreover, ACtx response changes vary between stimuli, and such differences display lag synchronization with the learning rate. These results indicate that learning alters functional connections between ACtx subregions, inducing region-specific modulations by propagating behavioral information from primary to higher-order areas.


Subject(s)
Auditory Cortex, Discrimination Learning, Auditory Cortex/physiology, Animals, Discrimination Learning/physiology, Mice, Acoustic Stimulation, Auditory Perception/physiology, Male, Female, Inbred C57BL Mice, Auditory Evoked Potentials/physiology
8.
Phys Med Biol ; 69(11), 2024 May 23.
Article in English | MEDLINE | ID: mdl-38697195

ABSTRACT

Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g. breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion of the reference CBCT and thereby yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy. Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management, offering richer motion information than traditional 4D-CBCTs.
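
The spatial-INR building block mentioned above can be sketched as a Fourier-encoded coordinate MLP fitted against projection data. The toy ray sampler and "projection" below are placeholders (a real implementation would integrate along cone-beam ray geometry), and the temporal INR and B-spline motion model of PMF-STINR are omitted.

```python
import math
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    """Coordinate network: Fourier-encoded 3D position -> attenuation value."""
    def __init__(self, n_freq=8, hidden=256):
        super().__init__()
        freqs = 2.0 ** torch.arange(n_freq, dtype=torch.float32) * math.pi
        self.register_buffer("freqs", freqs)
        self.net = nn.Sequential(
            nn.Linear(3 * 2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, xyz):                               # xyz in [-1, 1]^3
        ang = xyz[..., None] * self.freqs                 # (..., 3, n_freq)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)
        return self.net(enc)

inr = FourierINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

# one optimisation step against (synthetic) projection data:
# sample points along rays, average the INR along each ray, match the measurement
points_on_rays = torch.rand(128, 64, 3) * 2 - 1           # 128 rays, 64 samples each
measured = torch.rand(128)                                 # placeholder line integrals
predicted = inr(points_on_rays).squeeze(-1).mean(dim=-1)   # crude ray integral
loss = nn.functional.mse_loss(predicted, measured)
loss.backward(); opt.step()
```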


Subject(s)
Cone-Beam Computed Tomography, Computer-Assisted Image Processing, Cone-Beam Computed Tomography/methods, Humans, Computer-Assisted Image Processing/methods, Imaging Phantoms, Machine Learning
9.
Trends Cogn Sci ; 28(7): 600-613, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38763804

ABSTRACT

Our ability to perceive multiple objects is mysterious. Sensory neurons are broadly tuned, producing potential overlap in the populations of neurons activated by each object in a scene. This overlap raises questions about how distinct information is retained about each item. We present a novel signal switching theory of neural representation, which posits that neural signals may interleave representations of individual items across time. Evidence for this theory comes from new statistical tools that overcome the limitations inherent to standard time-and-trial-pooled assessments of neural signals. Our theory has implications for diverse domains of neuroscience, including attention, figure binding/scene segregation, oscillations, and divisive normalization. The general concept of switching between functions could also lend explanatory power to theories of grounded cognition.


Subject(s)
Brain, Humans, Brain/physiology, Neurological Models, Attention/physiology, Animals
10.
Psychon Bull Rev ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689188

ABSTRACT

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words activate a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source-reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To identify wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with word neighbors supported significantly better decoding than training with nonword neighbors in the period immediately following target presentation. Decoding regions included mostly right hemisphere regions in the posterior temporal lobe implicated in phonetic and lexical representation. Additionally, neighbors that aligned with target word beginnings (critical for word recognition) supported decoding, but equivalent phonological overlap with word codas did not, suggesting lexical mediation. Effective connectivity analyses showed a rich pattern of interaction between ROIs that support decoding based on training with lexical neighbors, especially driven by right posterior middle temporal gyrus. Collectively, these results evidence functional representation of wordforms in temporal lobes isolated from phonemic or semantic representations.

11.
Comput Biol Med ; 175: 108368, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38663351

ABSTRACT

BACKGROUND: The issue of using deep learning to obtain accurate gross tumor volume (GTV) and metastatic lymph node (MLN) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with limited labeling remains unsolved. METHOD: We collected MRI images from 918 patients at three hospitals to develop and validate models, and we propose SIMN, a semi-supervised framework for the fine delineation of multi-center NPC boundaries that integrates uncertainty-based implicit neural representations. The framework uses deep mutual learning between a CNN and a Transformer, incorporating dynamic thresholds. Additionally, domain-adaptive algorithms are employed to enhance performance. RESULTS: SIMN predictions have a high overlap ratio with the ground truth. With 20% of cases labeled, the average DSC in GTV and MLN are 0.7981 and 0.7804 for the internal test cohorts; 0.7217 and 0.7581 for the external test cohort Wu Zhou Red Cross Hospital; and 0.7004 and 0.7692 for the external test cohort First People Hospital of Foshan. No significant differences are found in DSC, HD95, ASD, or Recall for patients in different clinical categories. Moreover, SIMN outperformed existing classical semi-supervised methods. CONCLUSIONS: SIMN showed highly accurate GTV and MLN segmentation for NPC on multi-center MRI images under semi-supervised learning (SSL) and can transfer easily to other centers without fine-tuning. This suggests that it has the potential to act as a generalized delineation solution for heterogeneous MRI images with limited labels in clinical deployment.
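
The mutual-learning-with-dynamic-threshold idea can be sketched generically as below. This is an assumed, simplified recipe rather than the SIMN framework: the two networks are toy stand-ins for the CNN/Transformer pair, and the threshold schedule is an arbitrary linear ramp rather than SIMN's uncertainty-based rule.

```python
import torch
import torch.nn.functional as F

def mutual_step(net_a, net_b, unlabeled, step, total_steps, t0=0.6, t1=0.9):
    """One cross-pseudo-labelling step: net_b's confident predictions supervise net_a."""
    thr = t0 + (t1 - t0) * step / total_steps          # dynamic threshold, tightening over training
    with torch.no_grad():
        prob_b = torch.softmax(net_b(unlabeled), dim=1)
        conf, pseudo = prob_b.max(dim=1)
        mask = (conf > thr).float()                    # keep only confident voxels
    logits_a = net_a(unlabeled)
    loss = (F.cross_entropy(logits_a, pseudo, reduction="none") * mask).sum()
    return loss / mask.sum().clamp(min=1)

# toy usage with two tiny "segmentation networks" on random unlabeled slices
net_a = torch.nn.Conv2d(1, 2, 3, padding=1)
net_b = torch.nn.Conv2d(1, 2, 3, padding=1)
x = torch.rand(2, 1, 32, 32)
loss = mutual_step(net_a, net_b, x, step=10, total_steps=100)
loss.backward()
```

In practice the same step would be applied symmetrically (net_a also generating pseudo-labels for net_b) and combined with the supervised loss on the labeled cases.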


Subject(s)
Magnetic Resonance Imaging, Nasopharyngeal Carcinoma, Nasopharyngeal Neoplasms, Humans, Magnetic Resonance Imaging/methods, Nasopharyngeal Carcinoma/diagnostic imaging, Nasopharyngeal Neoplasms/diagnostic imaging, Male, Female, Middle Aged, Adult, Deep Learning, Algorithms, Computer-Assisted Image Interpretation/methods, Neural Networks (Computer)
12.
Phys Med Biol ; 69(10), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38593820

ABSTRACT

Objective. Limited-angle computed tomography (CT) presents a challenge due to its ill-posed nature. In such scenarios, analytical reconstruction methods often exhibit severe artifacts. To tackle this inverse problem, several supervised deep learning-based approaches have been proposed. However, they are constrained by limitations such as generalization issues and the difficulty of acquiring a large amount of paired CT images. Approach. In this work, we propose an iterative neural reconstruction framework designed for limited-angle CT. By leveraging a coordinate-based neural representation, we formulate tomographic reconstruction as a convex optimization problem involving a deep neural network. We then employ a differentiable projection layer to optimize this network by minimizing the discrepancy between the predicted and measured projection data. In addition, we introduce a prior-based weight initialization method to ensure the network starts optimization with an informed initial guess. This strategic initialization significantly improves the quality of iterative reconstruction by stabilizing the divergent behavior of ill-posed neural fields. Our method operates in a self-supervised manner, thereby eliminating the need for extensive data. Main results. The proposed method outperforms other iterative and learning-based methods. Experimental results on the XCAT and Mayo Clinic datasets demonstrate the effectiveness of our approach in restoring anatomical features as well as structures. This finding was substantiated by visual inspection and quantitative evaluation using NRMSE, PSNR, and SSIM. Moreover, we conduct a comprehensive investigation into the divergent behavior of iterative neural reconstruction, revealing its suboptimal convergence when starting from scratch. In contrast, our method consistently produced accurate images by incorporating an initial estimate as informed initialization. Significance. This work showcases the feasibility of reconstructing high-fidelity CT images from limited-angle x-ray projections. The proposed methodology introduces a novel data-free approach to enhance medical imaging, holding promise across various clinical applications.
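
A toy sketch of the two stages described above, under strong simplifications: the coordinate network is first fitted to a prior reconstruction (the informed initialization), then refined against measured projection data through a differentiable projector. The projector here is just a column sum standing in for a real differentiable Radon transform, and all data are random placeholders.

```python
import torch
import torch.nn as nn

class CoordNet(nn.Module):
    """2D coordinate MLP: (x, y) -> attenuation value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, xy):
        return self.net(xy)

H = W = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

prior_image = torch.rand(H, W)            # placeholder for an analytic (e.g. FBP) prior
measured_proj = torch.rand(W)             # placeholder limited-angle projection data

net = CoordNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# stage 1: prior-based initialization (fit the network to the prior image)
for _ in range(100):
    loss = nn.functional.mse_loss(net(grid).reshape(H, W), prior_image)
    opt.zero_grad(); loss.backward(); opt.step()

# stage 2: data-consistency refinement through the (toy) projector
for _ in range(100):
    image = net(grid).reshape(H, W)
    loss = nn.functional.mse_loss(image.sum(dim=0), measured_proj)  # vertical "projection"
    opt.zero_grad(); loss.backward(); opt.step()
```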


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Humans, Deep Learning
13.
Med Image Anal ; 95: 103173, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38657424

ABSTRACT

Quantitative susceptibility mapping (QSM) is an MRI-based technique that estimates the underlying tissue magnetic susceptibility from the phase signal. Deep learning (DL)-based methods have shown promise in handling the challenging ill-posed inverse problem of QSM reconstruction. However, they require extensive paired training data that are typically unavailable and suffer from generalization problems. Recent model-incorporated DL approaches also overlook the non-local effect of the tissue phase when applying the source-to-field forward model, due to patch-based training constraints, resulting in a discrepancy between the prediction and the measurement and subsequently suboptimal QSM reconstruction. This study proposes an unsupervised, subject-specific DL method for QSM reconstruction based on implicit neural representation (INR), referred to as INR-QSM. INR has emerged as a powerful framework for learning a high-quality continuous representation of a signal (image) by exploiting its internal information without training labels. In INR-QSM, the desired susceptibility map is represented as a continuous function of the spatial coordinates, parameterized by a fully connected neural network. The weights are learned by minimizing a loss function that includes a data fidelity term incorporating the physical model and regularization terms. Additionally, a novel phase compensation strategy is proposed to account for the non-local effect of the tissue phase in the data consistency calculation, making the physical model more accurate. Our experiments show that INR-QSM outperforms established traditional QSM reconstruction methods and a compared unsupervised DL method both qualitatively and quantitatively, and is competitive with supervised DL methods under data perturbations.
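
For concreteness, here is a sketch of a data-fidelity computation with the standard QSM dipole-kernel forward model (B0 assumed along z) plus a simple total-variation regularizer. In INR-QSM the susceptibility volume chi would come from evaluating the coordinate network on the voxel grid; the loss weighting and the paper's phase-compensation strategy are not reproduced here.

```python
import torch

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2 / |k|^2 on the FFT grid."""
    kz, ky, kx = [torch.fft.fftfreq(n) for n in shape]
    KZ, KY, KX = torch.meshgrid(kz, ky, kx, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    D = 1.0 / 3.0 - KZ**2 / k2
    D[k2 == 0] = 0.0                       # remove the singularity at k = 0
    return D

def qsm_loss(chi, field, lam=1e-3):
    """Data fidelity via the k-space dipole convolution, plus a crude TV regularizer."""
    D = dipole_kernel(chi.shape)
    field_pred = torch.fft.ifftn(D * torch.fft.fftn(chi)).real
    fidelity = torch.mean((field_pred - field) ** 2)
    tv = (chi.diff(dim=0).abs().mean() + chi.diff(dim=1).abs().mean()
          + chi.diff(dim=2).abs().mean())
    return fidelity + lam * tv

chi = torch.rand(32, 32, 32, requires_grad=True)   # would be the INR output in practice
field = torch.rand(32, 32, 32)                      # measured local tissue field (placeholder)
qsm_loss(chi, field).backward()
```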


Subject(s)
Deep Learning, Magnetic Resonance Imaging, Unsupervised Machine Learning, Humans, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer)
14.
Hum Brain Mapp ; 45(6): e26651, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38646963

ABSTRACT

Humans regularly assess the quality of their judgements, which helps them adjust their behaviours. Metacognition is the ability to accurately evaluate one's own judgements; in perceptual decisions it is assessed by comparing objective task performance with subjective confidence reports. However, assessing metacognition in preference-based decisions is difficult because such decisions depend on subjective goals rather than an objective criterion. Here, we develop a new index that integrates choice, reaction time, and confidence reports to quantify trial-by-trial metacognitive sensitivity in preference judgements. We found that the dorsomedial prefrontal cortex (dmPFC) and the right anterior insula were more activated when participants made poor metacognitive evaluations. Our study suggests a crucial role of the dmPFC-insula network in representing online metacognitive sensitivity in preferential decisions.


Subject(s)
Brain Mapping, Decision Making, Magnetic Resonance Imaging, Metacognition, Humans, Metacognition/physiology, Male, Female, Young Adult, Decision Making/physiology, Adult, Reaction Time/physiology, Prefrontal Cortex/physiology, Prefrontal Cortex/diagnostic imaging, Judgment/physiology, Cerebral Cortex/physiology, Cerebral Cortex/diagnostic imaging, Choice Behavior/physiology
15.
Elife ; 13: 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489224

ABSTRACT

How neural representations preserve information about multiple stimuli is mysterious. Because tuning of individual neurons is coarse (e.g., visual receptive field diameters can exceed perceptual resolution), the populations of neurons potentially responsive to each individual stimulus can overlap, raising the question of how information about each item might be segregated and preserved in the population. We recently reported evidence for a potential solution to this problem: when two stimuli were present, some neurons in the macaque visual cortical areas V1 and V4 exhibited fluctuating firing patterns, as if they responded to only one individual stimulus at a time (Jun et al., 2022). However, whether such an information encoding strategy is ubiquitous in the visual pathway and thus could constitute a general phenomenon remains unknown. Here, we provide new evidence that such fluctuating activity is also evoked by multiple stimuli in visual areas responsible for processing visual motion (middle temporal visual area, MT), and faces (middle fundus and anterolateral face patches in inferotemporal cortex - areas MF and AL), thus extending the scope of circumstances in which fluctuating activity is observed. Furthermore, consistent with our previous results in the early visual area V1, MT exhibits fluctuations between the representations of two stimuli when these form distinguishable objects but not when they fuse into one perceived object, suggesting that fluctuating activity patterns may underlie visual object formation. Taken together, these findings point toward an updated model of how the brain preserves sensory information about multiple stimuli for subsequent processing and behavioral action.


Subject(s)
Visual Cortex, Visual Pathways, Visual Pathways/physiology, Visual Cortex/physiology, Visual Fields, Neurons/physiology, Photic Stimulation
16.
Ann N Y Acad Sci ; 1534(1): 45-68, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38528782

ABSTRACT

This paper considers neural representation through the lens of active inference, a normative framework for understanding brain function. It delves into how living organisms employ generative models to minimize the discrepancy between predictions and observations (as scored with variational free energy). The ensuing analysis suggests that the brain learns generative models to navigate the world adaptively, not (or not solely) to understand it. Different living organisms may possess an array of generative models, spanning from those that support action-perception cycles to those that underwrite planning and imagination; namely, from explicit models that entail variables for predicting concurrent sensations, like objects, faces, or people, to action-oriented models that predict action outcomes. It then elucidates how generative models and belief dynamics might link to neural representation and the implications of different types of generative models for understanding an agent's cognitive capabilities in relation to its ecological niche. The paper concludes with open questions regarding the evolution of generative models and the development of advanced cognitive abilities, and the gradual transition from pragmatic to detached neural representations. The analysis on offer foregrounds the diverse roles that generative models play in cognitive processes and the evolution of neural representation.
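
For reference, the variational free energy invoked above is, in its standard form (not specific to this paper), the expected difference between the log of the approximate posterior q(s) and the log joint, which decomposes into a divergence term and the negative log evidence:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; D_{\mathrm{KL}}\!\big[\,q(s)\,\Vert\,p(s \mid o)\,\big] \;-\; \ln p(o)
```

Minimizing F therefore both drives q(s) toward the true posterior and bounds the model evidence, which is the sense in which organisms are said to minimize the discrepancy between predictions and observations.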


Subject(s)
Brain, Cognition, Humans, Sensation, Learning
17.
Phys Med Biol ; 69(9), 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38479004

ABSTRACT

Objective. 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly under-sampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly under-sampled data. Approach. STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model that deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis. The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as two MR datasets acquired clinically from human subjects. Its reconstruction accuracy was also compared with that of the model-based non-rigid motion estimation method (MR-MOTUS) and a deep learning-based method (TEMPEST). Main results. STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions. Compared with MR-MOTUS and TEMPEST, STINR-MR consistently reconstructed images with better image quality and fewer artifacts and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR reconstructed the tumors to a mean ± SD center-of-mass error of 0.9 ± 0.4 mm, compared to 3.4 ± 1.0 mm of the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured. Significance. STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid generalizability issues typically encountered in deep learning-based methods.
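
A minimal sketch of the temporal-INR component described above, with made-up shapes: a small MLP maps a normalized time point to weighting factors that linearly combine precomputed basis DVFs (here random placeholders for the PCA-derived fields) into a time-resolved motion field. The spatial INR and the warping of the reference frame are omitted.

```python
import torch
import torch.nn as nn

class TemporalINR(nn.Module):
    """Maps a scalar time point to weighting factors for the basis DVFs."""
    def __init__(self, n_basis=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_basis))
    def forward(self, t):                      # t: (B, 1) normalized time
        return self.net(t)                     # (B, n_basis) weighting factors

n_basis, D, H, W = 3, 16, 32, 32
basis_dvfs = torch.randn(n_basis, 3, D, H, W)  # placeholder PCA basis DVFs

temporal = TemporalINR(n_basis)
t = torch.tensor([[0.25]])                     # one time point in the cine sequence
w = temporal(t)                                # (1, n_basis)
motion_field = torch.einsum("bn,ncdhw->bcdhw", w, basis_dvfs)  # time-resolved DVF
print(motion_field.shape)                      # torch.Size([1, 3, 16, 32, 32])
```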


Subject(s)
Neoplasms, Respiration, Humans, Cine Magnetic Resonance Imaging, Three-Dimensional Imaging/methods, Motion (Physics), Imaging Phantoms, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods
18.
Brain Imaging Behav ; 18(2): 412-420, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38324234

ABSTRACT

The current study used functional magnetic resonance imaging (fMRI) and showed that state anxiety modulated extrastriate cortex activity in response to emotionally charged visual images. State anxiety and neuroimaging data from 53 individuals were subjected to an intersubject representational similarity analysis (ISRSA), wherein the geometries of the neural and behavioral data were compared. This analysis identified the extrastriate cortex (fusiform gyrus and area MT) as the sole region whose activity patterns covaried with state anxiety. Importantly, we show that this brain-behavior association is revealed when treating the state anxiety data as a multidimensional response pattern rather than a single composite score. This suggests that ISRSA using multivariate distances may be more sensitive in identifying shared geometries between self-report questionnaires and brain imaging data. Overall, our findings demonstrate that a transient state of anxiety may influence how visual information, especially information relevant to the valence dimension, is processed in the extrastriate cortex.
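
A generic sketch of the intersubject RSA computation (not the study's exact pipeline or preprocessing), with random placeholder data: subject-by-subject distance matrices are built from the multivariate anxiety responses and from one region's activity patterns, and their vectorized upper triangles are rank-correlated.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_subjects = 53
behavior = np.random.rand(n_subjects, 20)      # item-level state-anxiety responses (placeholder)
neural = np.random.rand(n_subjects, 500)       # activity pattern for one ROI (placeholder)

# condensed (upper-triangle) subject-by-subject distance vectors
behav_dist = pdist(behavior, metric="euclidean")
neural_dist = pdist(neural, metric="correlation")

rho, p = spearmanr(behav_dist, neural_dist)    # shared geometry between the two spaces
print(f"ISRSA rho = {rho:.3f}, p = {p:.3g}")
```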


Subject(s)
Magnetic Resonance Imaging, Visual Cortex, Humans, Anxiety, Brain, Neuroimaging
19.
Conscious Cogn ; 119: 103668, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38417198

ABSTRACT

How deep is the current diversity in the panoply of theories defining consciousness, and to what extent do these theories share common denominators? Here we first examine to what extent different theories are commensurable (or comparable) along particular dimensions. We posit logical (and, when applicable, empirical) commensurability as a necessary condition for identifying common denominators among different theories. By consequence, dimensions for inclusion in a set of logically and empirically commensurable theories of consciousness can be proposed. Next, we compare a limited subset of neuroscience-based theories in terms of commensurability. This analysis does not yield a denominator that might serve to define a minimally unifying model of consciousness; theories that seem akin by one denominator can be remote by another. We suggest a methodology of comparing different theories via multiple probing questions, allowing overall (dis)similarities between theories to be discerned. Despite very different background definitions of consciousness, we conclude that, if attention is paid to the search for a common methodological approach to brain-consciousness relationships, it should be possible in principle to overcome the current Babylonian confusion of tongues and eventually integrate and merge different theories.


Subject(s)
Consciousness, Neurosciences, Humans, Brain, Attention
20.
Magn Reson Med ; 92(1): 319-331, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38308149

ABSTRACT

PURPOSE: This study addresses the challenge of low resolution and signal-to-noise ratio (SNR) in diffusion-weighted images (DWI), which are pivotal for cancer detection. Traditional methods increase SNR at high b-values through multiple acquisitions, but this results in diminished image resolution due to motion-induced variations. Our research aims to enhance spatial resolution by exploiting the global structure within multicontrast DWI scans and the millimetric motion between acquisitions. METHODS: We introduce a novel approach employing a "Perturbation Network" to learn subvoxel-size motions between scans, trained jointly with an implicit neural representation (INR) network. The INR encodes the DWI as a continuous volumetric function, treating voxel intensities of the low-resolution acquisitions as discrete samples. By evaluating this function on a finer grid, our model predicts higher-resolution signal intensities at intermediate voxel locations. The Perturbation Network's motion-correction efficacy was validated through experiments on biological phantoms and in vivo prostate scans. RESULTS: Quantitative analyses revealed that super-resolution images had significantly higher structural similarity to ground-truth high-resolution images than high-order interpolation (p < 0.005). In blind qualitative experiments, 96.1% of super-resolution images were assessed to have superior diagnostic quality compared with interpolated images. CONCLUSION: High-resolution details in DWI can be obtained without high-resolution training data. A notable advantage of the proposed method is that it does not require a super-resolution training set, which is important in clinical practice because the method can easily be adapted to images with different scanner settings or body parts, whereas supervised methods do not offer such an option.
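
A sketch of the two ingredients described above, under loose assumptions: a coordinate network encodes the DWI volume as a continuous function, and a learnable per-acquisition subvoxel shift (a crude stand-in for the "Perturbation Network") aligns repeated low-resolution scans before they supervise that function. Super-resolution then amounts to evaluating the trained network on a finer coordinate grid. Shapes and data are placeholders.

```python
import torch
import torch.nn as nn

class INR(nn.Module):
    """Coordinate MLP: 3D position -> DWI intensity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, xyz):
        return self.net(xyz)

n_acq = 4
shifts = nn.Parameter(torch.zeros(n_acq, 3))            # learnable subvoxel motion per acquisition
inr = INR()
opt = torch.optim.Adam(list(inr.parameters()) + [shifts], lr=1e-3)

coords = torch.rand(n_acq, 4096, 3) * 2 - 1             # low-res voxel centres per acquisition
intensities = torch.rand(n_acq, 4096, 1)                # corresponding DWI intensities

pred = inr(coords + shifts[:, None, :])                 # shift each acquisition before sampling
loss = nn.functional.mse_loss(pred, intensities)
loss.backward(); opt.step()

# inference: query the same network on a finer grid to obtain the super-resolved volume
fine_coords = torch.rand(1, 32768, 3) * 2 - 1
with torch.no_grad():
    sr_intensities = inr(fine_coords)
```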


Subject(s)
Algorithms, Diffusion Magnetic Resonance Imaging, Imaging Phantoms, Prostate, Prostatic Neoplasms, Signal-to-Noise Ratio, Humans, Male, Diffusion Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostate/diagnostic imaging, Computer-Assisted Image Processing/methods, Computer-Assisted Image Interpretation/methods, Neural Networks (Computer), Motion (Physics), Reproducibility of Results