Results 1 - 3 of 3
1.
Comput Methods Programs Biomed; 211: 106374, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34601186

ABSTRACT

BACKGROUND AND OBJECTIVE: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS: In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features for both 3D volumes with sub-second computation times during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation, which robustly estimates the globally linear transformation that best reflects the local displacement beliefs, subject to outlier rejection. RESULTS: Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MByte and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION: We show that instance optimisation yields a significant improvement in accuracy and robustness, and that our fast self-supervised deep learning model achieves state-of-the-art accuracy on a challenging registration task in only 3 seconds.


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Motion; Ultrasonography; Ultrasonography, Interventional
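
As a rough illustration of the instance-optimisation step described in the abstract above, the sketch below turns per-control-point probability maps into expected displacements and fits a global rigid transform with iteratively re-weighted outlier rejection. The function names, the Kabsch/IRLS formulation and the weighting rule are assumptions for illustration, not the authors' implementation (which estimates a general globally linear transformation).

```python
# Minimal sketch (not the authors' code): derive expected displacements from
# per-control-point probability maps, then fit a global rigid transform with a
# simple iteratively re-weighted outlier rejection.
import numpy as np

def expected_displacements(prob_maps, candidate_disps):
    """prob_maps: (N, K) probabilities over K discretized displacements;
    candidate_disps: (K, 3) candidate 3D motion vectors.
    Returns the (N, 3) expected displacement per control point."""
    return prob_maps @ candidate_disps

def fit_rigid(points, displaced, n_iter=10):
    """Estimate R, t mapping points -> displaced (weighted Kabsch + IRLS),
    down-weighting control points whose residuals look like outliers."""
    w = np.ones(len(points))
    for _ in range(n_iter):
        mu_p = (w[:, None] * points).sum(0) / w.sum()
        mu_d = (w[:, None] * displaced).sum(0) / w.sum()
        P, D = points - mu_p, displaced - mu_d
        H = (w[:, None] * P).T @ D                      # weighted covariance
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                              # proper rotation
        t = mu_d - R @ mu_p
        res = np.linalg.norm(displaced - (points @ R.T + t), axis=1)
        w = 1.0 / np.maximum(res / (np.median(res) + 1e-6), 1.0)  # outlier rejection
    return R, t

# Usage: disp = expected_displacements(probs, candidates)
#        R, t = fit_rigid(control_points, control_points + disp)
```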
2.
Sensors (Basel); 20(5), 2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32143297

ABSTRACT

Deformable image registration remains a challenge when the images to be aligned show strong variations in appearance and large initial misalignment. A huge performance gap currently remains for fast-moving regions in videos and for strong deformations of natural objects. We present a new semantically guided, two-step deep deformation network that is particularly well suited for estimating large deformations. We combine a U-Net architecture, weakly supervised with segmentation information to extract semantically meaningful features, with multiple stages of nonrigid spatial transformer networks parameterized by low-dimensional B-spline deformations. By combining alignment and semantic loss functions with a regularization penalty that encourages smooth and plausible deformations, we achieve superior alignment quality compared to previous approaches that only considered a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and for motion tracking in cardiac magnetic resonance imaging (MRI) sequences in comparison to FlowNet and Label-Reg, two recent deep-learning registration frameworks. The models are compact, very fast at inference, and show clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
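
To make the combined objective more concrete, here is a minimal PyTorch-style sketch of a training loss that mixes an image alignment term, a weak semantic (segmentation) term and a smoothness penalty on the predicted displacement field. The specific choices (MSE similarity, soft Dice, first-order gradient regularizer) and all weights are illustrative assumptions, not the paper's exact losses.

```python
# Minimal sketch (assumed, not the paper's exact losses): combined loss for a
# weakly supervised deformable registration network.
import torch
import torch.nn.functional as F

def soft_dice(seg_warped, seg_fixed, eps=1e-6):
    # seg_*: (B, C, H, W) one-hot or soft label maps
    inter = (seg_warped * seg_fixed).sum(dim=(2, 3))
    denom = seg_warped.sum(dim=(2, 3)) + seg_fixed.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def smoothness(disp):
    # disp: (B, 2, H, W) displacement field (2D for brevity); penalize gradients
    dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]
    return dx.pow(2).mean() + dy.pow(2).mean()

def registration_loss(img_warped, img_fixed, seg_warped, seg_fixed, disp,
                      w_align=1.0, w_sem=1.0, w_reg=0.1):
    align = F.mse_loss(img_warped, img_fixed)   # alignment loss
    sem = soft_dice(seg_warped, seg_fixed)      # semantic (label-driven) loss
    reg = smoothness(disp)                      # regularization penalty
    return w_align * align + w_sem * sem + w_reg * reg
```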

3.
IEEE Trans Biomed Eng; 66(2): 302-310, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29993528

ABSTRACT

OBJECTIVE: Intra-interventional respiratory motion estimation is becoming a vital component of modern radiation therapy delivery and high-intensity focused ultrasound systems. Treatment quality could benefit tremendously from more accurate dose delivery using real-time motion tracking based on magnetic resonance (MR) or ultrasound (US) imaging. However, current practice often relies on indirect measurements of external breathing indicators, which have inherently limited accuracy. In this work, we present a new approach that is applicable to challenging real-time capable imaging modalities such as MR-Linac scanners and 3D-US by employing contrast-invariant feature descriptors. METHODS: We combine GPU-accelerated, image-based real-time tracking of sparsely distributed feature points with a dense patient-specific motion model for regularisation and sparse-to-dense interpolation within a unified optimisation framework. RESULTS: We achieve highly accurate motion predictions with landmark errors of ≈1 mm for MRI (and ≈2 mm for US) and substantial improvements over classical template-tracking strategies. CONCLUSION: Our technique models physiological respiratory motion more realistically and deals particularly well with the sliding of the lungs against the rib cage. SIGNIFICANCE: Our model-based sparse-to-dense image registration approach allows accurate and real-time respiratory motion tracking in image-guided interventions.


Subject(s)
Magnetic Resonance Imaging/methods; Radiotherapy, Image-Guided/methods; Respiratory Mechanics/physiology; Algorithms; Databases, Factual; Humans; Lung/diagnostic imaging; Lung/physiology; Movement/physiology; Thorax/diagnostic imaging; Thorax/physiology
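
As a sketch of the sparse-to-dense idea summarised above, the snippet below fits the coefficients of a low-dimensional, patient-specific motion model (e.g. PCA modes learned from prior motion fields) to sparsely tracked displacements via ridge-regularised least squares and reconstructs a dense field. The array layout, the closed-form solve and the regularisation weight are illustrative assumptions, not the authors' unified optimisation framework.

```python
# Minimal sketch (assumed layout): regularise sparse tracking results with a
# low-dimensional motion model and interpolate them to a dense field.
import numpy as np

def sparse_to_dense(sparse_idx, sparse_disp, mean_field, basis, lam=1e-2):
    """sparse_idx:  (M,)   indices of tracked points in the flattened dense grid
    sparse_disp: (M, 3) tracked displacements at those points
    mean_field:  (P, 3) mean motion field from prior scans
    basis:       (K, P, 3) motion-model modes (e.g. PCA)
    Returns the reconstructed dense (P, 3) motion field."""
    # restrict the model to the tracked locations
    A = basis[:, sparse_idx, :].reshape(basis.shape[0], -1).T   # (M*3, K)
    b = (sparse_disp - mean_field[sparse_idx]).reshape(-1)      # (M*3,)
    # ridge-regularised least squares for the K model coefficients
    coeff = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    # dense field = mean + weighted sum of model modes
    return mean_field + np.tensordot(coeff, basis, axes=(0, 0))
```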