Results 1 - 7 of 7
1.
Comput Biol Med ; 180: 108980, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39137668

ABSTRACT

Automatic tumor segmentation from positron emission tomography (PET) and computed tomography (CT) images plays a critical role in the prevention, diagnosis, and treatment of cancer in radiation oncology. However, segmenting these tumors is challenging due to the heterogeneity of grayscale levels and fuzzy boundaries. To address these issues, this paper proposes an efficient model-informed PET/CT tumor co-segmentation method that combines fuzzy C-means clustering and Bayesian classification information. To alleviate the grayscale heterogeneity of multi-modal images, a novel grayscale similar-region term is designed based on the background region information of PET and the foreground region information of CT. An edge stop function is presented to enhance the localization of fuzzy edges by incorporating the fuzzy C-means clustering strategy. To further improve segmentation accuracy, a data fidelity term is introduced based on the distribution characteristics of pixel values in PET images. Finally, experimental validation on the head and neck tumor (HECKTOR) and non-small cell lung cancer (NSCLC) datasets yielded values of 0.85, 5.32, and 0.17 for three key evaluation metrics, DSC, RVD, and HD95, respectively. These results indicate that image segmentation methods based on mathematical models perform well in handling grayscale heterogeneity and fuzzy boundaries in multi-modal images.
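The fuzzy C-means clustering named in the abstract above can be sketched in a few lines. This is a hypothetical, minimal numpy implementation of plain FCM on 1-D intensities, not the paper's full co-segmentation model; the membership and center update equations are the standard ones.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D intensities (illustrative sketch,
    not the paper's co-segmentation method)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1))            # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Toy "PET" intensities: dim background vs bright lesion pixels
x = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=0)                    # hard labels from soft memberships
```

On this toy input the two centers converge near 0.1 and 0.9 and the pixels split cleanly into the two groups.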


Subject(s)
Fuzzy Logic , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Cluster Analysis , Bayes Theorem , Algorithms , Head and Neck Neoplasms/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Carcinoma, Non-Small-Cell Lung/diagnostic imaging
2.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471170

ABSTRACT

Objective. Recently, deep learning techniques have found extensive application in the accurate, automated segmentation of tumor regions. However, owing to the variety of tumor shapes, complex types, and unpredictable spatial distributions, tumor segmentation still faces major challenges. Taking cues from deep supervision and adversarial learning, in this study we devise a cascade-based methodology incorporating multi-scale adversarial learning and difficult-region supervision to tackle these challenges. Approach. Overall, the method adheres to a coarse-to-fine strategy: it first roughly locates the target region and then refines the target object with a multi-stage cascade of binary segmentations, which converts a complex multi-class segmentation problem into multiple simpler binary ones. In addition, a multi-scale adversarial learning difficult-supervision UNet (MSALDS-UNet) is proposed as the fine-segmentation model; it applies multiple discriminators along the decoding path of the segmentation network to implement multi-scale adversarial learning, thereby enhancing segmentation accuracy. Meanwhile, MSALDS-UNet introduces a difficult-region supervision loss to effectively exploit structural information when segmenting hard-to-distinguish areas such as blurry boundaries. Main results. Thorough validation on three independent public databases (KiTS21 and MSD's Brain and Pancreas datasets) shows that our model achieves satisfactory tumor segmentation results in terms of key evaluation metrics, including the Dice similarity coefficient, Jaccard similarity coefficient, and HD95. Significance. This paper introduces a cascade approach that combines multi-scale adversarial learning and difficult-region supervision to achieve precise tumor segmentation.
It confirms that the combination improves segmentation performance, especially for small objects (our code is publicly available at https://zhengshenhai.github.io/).
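The idea of converting one multi-class segmentation problem into a cascade of simpler binary problems can be illustrated directly on label masks. This is a hypothetical numpy sketch of the decomposition only (the learned networks are omitted); `class_order` is an assumed coarse-to-fine label ordering.

```python
import numpy as np

def cascade_binary_masks(multiclass_mask, class_order):
    """Decompose a multi-class mask into a cascade of binary
    sub-problems (illustrative sketch of the strategy, not the
    paper's trained pipeline)."""
    masks = []
    remaining = np.ones_like(multiclass_mask, dtype=bool)
    for cls in class_order:
        binary = (multiclass_mask == cls) & remaining
        masks.append(binary)
        remaining &= ~binary      # later stages only see unresolved pixels
    return masks

mask = np.array([[0, 1, 1],
                 [2, 2, 0]])
stages = cascade_binary_masks(mask, class_order=[1, 2])
```

Each stage is a plain binary mask, and the stages are pairwise disjoint by construction.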


Subject(s)
Brain , Cues , Benchmarking , Databases, Factual , Pancreas , Image Processing, Computer-Assisted
3.
IEEE Trans Med Imaging ; 43(7): 2495-2508, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38386578

ABSTRACT

The accurate segmentation of brain tumors is significant in clinical practice. Convolutional neural network (CNN)-based methods have made great progress in brain tumor segmentation due to their powerful local modeling ability. However, brain tumors are frequently pattern-agnostic, i.e. variable in shape, size, and location, which cannot be effectively matched by traditional CNN-based methods with local, regular receptive fields. To address these issues, we propose a shape-scale co-awareness network (S2CA-Net) for brain tumor segmentation, which efficiently learns shape-aware and scale-aware features simultaneously to enhance pattern-agnostic representations. Three key components accomplish this co-awareness of shape and scale. The Local-Global Scale Mixer (LGSM) decouples the extraction of local and global context by adopting a CNN-Former parallel structure, which contributes to obtaining finer hierarchical features. The Multi-level Context Aggregator (MCA) enriches the scale diversity of input patches by modeling global features across multiple receptive fields. The Multi-Scale Attentive Deformable Convolution (MS-ADC) learns target deformations from the multi-scale inputs, which motivates the network to enforce feature constraints in terms of both scale and shape for optimal feature matching. Overall, LGSM and MCA enhance the scale-awareness of the network to cope with size and location variations, while MS-ADC captures deformation information for optimal shape matching. Their effective integration prompts the network to perceive variations in shape and scale simultaneously, so it can robustly handle the varying patterns of brain tumors. Experimental results on BraTS 2019, BraTS 2020, the MSD BTS task, and BraTS2023-MEN show that S2CA-Net achieves superior overall accuracy and efficiency compared to other state-of-the-art methods. Code: https://github.com/jiangyu945/S2CA-Net.
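The multi-receptive-field aggregation attributed to the MCA block above can be sketched with simple block pooling. This is a toy numpy illustration of the general idea (pool a feature map at several scales and stack the results), not the paper's actual module; the scale set is an assumption.

```python
import numpy as np

def multi_scale_context(feat, scales=(1, 2, 4)):
    """Aggregate a 2-D feature map over several receptive-field sizes
    (hypothetical sketch in the spirit of multi-level context
    aggregation, not the S2CA-Net implementation)."""
    h, w = feat.shape
    outs = []
    for s in scales:
        # block-average to the coarser scale...
        pooled = feat[:h - h % s, :w - w % s].reshape(
            h // s, s, w // s, s).mean(axis=(1, 3))
        # ...then nearest-neighbour upsample back to full resolution
        up = np.repeat(np.repeat(pooled, s, axis=0), s, axis=1)
        up = np.pad(up, ((0, h - up.shape[0]), (0, w - up.shape[1])),
                    mode="edge")
        outs.append(up)
    return np.stack(outs)               # (len(scales), H, W) context stack

ctx = multi_scale_context(np.arange(16.0).reshape(4, 4))
```

Scale 1 reproduces the input, while the coarsest scale collapses toward the global mean, giving the network views at several effective receptive fields.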


Subject(s)
Brain Neoplasms , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/diagnostic imaging , Databases, Factual , Image Interpretation, Computer-Assisted/methods
4.
Phys Med Biol ; 68(22)2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37852283

ABSTRACT

Objective. Head and neck (H&N) cancers are prevalent globally, and early, accurate detection is crucial for timely and effective treatment. However, segmentation of H&N tumors is challenging because the tumors and surrounding tissues have similar densities in CT images. Positron emission tomography (PET) images capture the metabolic activity of tissue and can distinguish lesion regions from normal tissue, but they are limited by low spatial resolution. To fully leverage the complementary information in PET and CT images, we propose a novel multi-modal method designed specifically for H&N tumor segmentation. Approach. The proposed multi-modal tumor segmentation network (LSAM) consists of two key learning modules, namely L2-norm self-attention and latent-space feature interaction, which exploit the high sensitivity of PET images and the anatomical information of CT images. These two modules contribute to a powerful 3D segmentation network based on a U-shaped structure. The segmentation method integrates complementary features from the two modalities at multiple scales, thereby improving feature interaction between modalities. Main results. We evaluated the proposed method on the public HECKTOR PET-CT dataset, and the experimental results demonstrate that it outperforms existing H&N tumor segmentation methods in terms of key evaluation metrics, including DSC (0.8457), Jaccard (0.7756), RVD (0.0938), and HD95 (11.75). Significance. The self-attention mechanism based on the L2 norm offers scalability and effectively reduces the impact of outliers on model performance, and the latent-space multi-scale feature interaction exploits the encoder learning process to achieve the best complementary effects among modalities.
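One way to read "L2-norm self-attention" is to replace the usual dot-product similarity with a (negative) squared L2 distance between queries and keys, which naturally dampens outlier tokens. The numpy sketch below shows that variant; it is a hedged interpretation of the idea named above, not the paper's exact formulation.

```python
import numpy as np

def l2_attention(q, k, v):
    """Self-attention where similarity is the negative squared L2
    distance between queries and keys (illustrative sketch, not the
    LSAM module itself)."""
    # pairwise squared distances between every query and every key
    d2 = ((q[:, None, :] - k[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / np.sqrt(q.shape[-1]))   # closer keys get larger weights
    w /= w.sum(axis=1, keepdims=True)        # rows form a softmax distribution
    return w @ v

q = k = v = np.eye(3)                        # three orthogonal toy tokens
out = l2_attention(q, k, v)
```

Each output row is a convex combination of the values, with the heaviest weight on the key closest to the query.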


Subject(s)
Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Head and Neck Neoplasms/diagnostic imaging , Benchmarking , Positron-Emission Tomography , Image Processing, Computer-Assisted
5.
Phys Med Biol ; 68(2)2023 01 09.
Article in English | MEDLINE | ID: mdl-36595252

ABSTRACT

Objective. Over the past years, convolutional neural network-based methods have dominated the field of medical image segmentation, but their main drawback is difficulty representing long-range dependencies. Recently, the Transformer has demonstrated strong performance in computer vision and has also been applied successfully to medical image segmentation thanks to its self-attention mechanism and long-range dependency encoding on images. To the best of our knowledge, only a few works focus on cross-modality image segmentation using the Transformer. Hence, the main objective of this study was to design, propose, and validate a deep learning method that extends the Transformer to multi-modality medical image segmentation. Approach. This paper proposes a novel automated multi-modal Transformer network, termed AMTNet, for 3D medical image segmentation. The network is a U-shaped architecture in which substantial changes have been made to the feature encoding, fusion, and decoding parts. The encoding part comprises 3D embedding, 3D multi-modal Transformer, and 3D co-learn down-sampling blocks. Symmetrically, the decoding part includes 3D Transformer, up-sampling, and 3D expanding blocks. In addition, an adaptive channel-interleaved Transformer feature fusion module is designed to fully fuse features of different modalities. Main results. We provide a comprehensive experimental analysis on the Prostate and BraTS2021 datasets. Our method achieves an average DSC of 0.907 and 0.851 (0.734 for ET, 0.895 for TC, and 0.924 for WT) on these two datasets, respectively, a significant improvement over state-of-the-art segmentation networks. Significance. The proposed 3D segmentation network exploits complementary features of different modalities at multiple scales during feature extraction to enrich 3D feature representations and improve segmentation efficiency. This network extends Transformer research to multi-modal medical image segmentation.
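The channel-interleaved fusion mentioned above can be pictured as alternating the channels of two modality feature stacks. This is only a schematic numpy sketch of the interleaving pattern, with the learned Transformer weighting omitted; the function name and shapes are assumptions.

```python
import numpy as np

def interleave_channels(a, b):
    """Channel-interleaved fusion of two (C, H, W) feature stacks
    (schematic sketch of the interleaving idea, not AMTNet's module)."""
    c, h, w = a.shape
    fused = np.empty((2 * c, h, w), dtype=a.dtype)
    fused[0::2] = a                 # even channels from modality A (e.g. T1)
    fused[1::2] = b                 # odd channels from modality B (e.g. T2)
    return fused

a = np.zeros((2, 2, 2))             # toy modality-A features
b = np.ones((2, 2, 2))              # toy modality-B features
fused = interleave_channels(a, b)
```

Interleaving keeps the two modalities adjacent channel-by-channel, so a subsequent convolution or attention layer sees paired features from both.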


Subject(s)
Neural Networks, Computer , Pelvis , Male , Humans , Prostate , Image Processing, Computer-Assisted
6.
Phys Med Biol ; 63(2): 025024, 2018 01 16.
Article in English | MEDLINE | ID: mdl-29265012

ABSTRACT

Medical image segmentation plays an important role in digital medical research and in therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes, applied semiautomatically and slice by slice. One slice is selected and its connected-component liver region is delineated manually to initialize the subsequent automatic segmentation. From this guiding slice, the proposed method is executed downward to the last slice and upward to the first. A segmentation energy function is proposed that combines a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level-set framework. During segmentation, the liver shape is estimated by minimizing this function, and an improved Chan-Vese model refines the shape to capture the long, narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all tested methods, ours yielded the best volumetric overlap error (VOE) of [Formula: see text], the best root mean square symmetric surface distance (RMSD) of [Formula: see text] mm, and the best maximum symmetric surface distance (MSD) of [Formula: see text] mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of [Formula: see text] mm and the best RMSD of [Formula: see text] mm on the SLIVER07 dataset. The quantitative comparison shows that the proposed liver segmentation method achieves performance competitive with state-of-the-art techniques.
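The Chan-Vese refinement referenced above evolves a level-set function using two region means. The numpy sketch below shows one drastically simplified update (region terms only, curvature regularization omitted); it illustrates the model family, not this paper's full energy with shape prior and local statistics.

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, lam=1.0):
    """One simplified Chan-Vese region-term update on a level-set
    function phi (curvature term omitted for clarity)."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0      # mean inside
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
    # pixels closer to c1 than c2 get a positive force (region grows there)
    force = lam * ((img - c2) ** 2 - (img - c1) ** 2)
    return phi + dt * force

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                       # bright square as a toy "liver"
phi = np.where(img > 0.5, 1.0, -1.0) + 0.1  # rough initial contour
for _ in range(20):
    phi = chan_vese_step(phi, img)
seg = phi > 0                             # final binary segmentation
```

On this toy image the zero level set locks onto the bright square, mirroring how the refinement pulls the contour toward homogeneous regions.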


Subject(s)
Algorithms , Liver Neoplasms/diagnostic imaging , Liver/diagnostic imaging , Models, Statistical , Tomography, X-Ray Computed/methods , Databases, Factual , Humans , Imaging, Three-Dimensional/methods , Liver/pathology , Liver Neoplasms/pathology
7.
Comput Med Imaging Graph ; 38(6): 490-507, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25047734

ABSTRACT

Shape-based 3D surface reconstruction methods for liver vessels struggle with the limited contrast of medical images and the intrinsic complexity of multi-furcation parts. In this paper, we propose an effective and robust technique, called Gap Border Pairing (GBP), to reconstruct the surface of liver vessels with complicated multi-furcations. The proposed method starts from a tree-like skeleton that is extracted from segmented liver vessel volumes and preprocessed into a number of simplified smooth branching lines. Second, for each center point of a branching line, an optimized elliptic cross-section ring (contour) is generated by fitting its actual cross-section outline based on its tangent vector. Third, a tubular surface mesh is generated for each branching line by weaving all of its adjacent rings. Then, for every multi-furcation part, a transitional regular mesh is reconstructed using GBP. An initial model is generated after reconstructing all multi-furcation parts. Finally, the model is refined with a single subdivision step, and its topology can be maintained by grouping its facets according to the skeleton, providing high-level editability. Our method can be automatically implemented in parallel if the segmented vessel volume and corresponding skeletons are provided. The experimental results show that the GBP model is sufficiently accurate in terms of the boundary deviations between the segmented volume and the model.
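The per-point step above, sampling an elliptic cross-section ring perpendicular to the centerline tangent, can be sketched as follows. This is a hypothetical numpy illustration; the semi-axes `a` and `b` stand in for values the paper fits from the actual cross-section outline.

```python
import numpy as np

def cross_section_ring(center, tangent, a=1.0, b=0.5, n=16):
    """Sample n points of an elliptic ring perpendicular to a
    centerline tangent (sketch of the per-point step; semi-axes a, b
    are assumed to be fit elsewhere)."""
    t = tangent / np.linalg.norm(tangent)
    # build an orthonormal frame (u, v) spanning the plane normal to t
    helper = np.array([1.0, 0.0, 0.0])
    if abs(t @ helper) > 0.9:            # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(t, helper); u /= np.linalg.norm(u)
    v = np.cross(t, u)
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return center + a * np.cos(ang)[:, None] * u + b * np.sin(ang)[:, None] * v

ring = cross_section_ring(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Weaving corresponding points of adjacent rings into quads then yields the tubular mesh for each branching line.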


Subject(s)
Hepatic Veins/diagnostic imaging , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Liver/blood supply , Hepatic Veins/anatomy & histology , Humans , Liver/anatomy & histology , Liver/diagnostic imaging , Radiography