Results 1 - 20 of 57
1.
Sci Rep ; 14(1): 19976, 2024 08 28.
Article in English | MEDLINE | ID: mdl-39198553

ABSTRACT

The diagnosis of early prostate cancer depends on the accurate segmentation of prostate regions in magnetic resonance imaging (MRI). However, this segmentation task is challenging due to the particularities of prostate MR images themselves and the limitations of existing methods. To address these issues, we propose MM-UNet, a U-shaped encoder-decoder network based on Mamba and CNN, for prostate segmentation in MR images. Specifically, we first propose an adaptive feature fusion module based on channel attention guidance to achieve effective fusion between adjacent hierarchical features and suppress the interference of background noise. Secondly, we propose a global context-aware module based on Mamba, which has strong long-range modeling capabilities and linear complexity, to capture global context information in images. Finally, we propose a multi-scale anisotropic convolution module built on parallel multi-scale anisotropic convolution blocks and 3D convolution decomposition. Experimental results on two public prostate MR image segmentation datasets demonstrate that the proposed method outperforms competing models and achieves state-of-the-art prostate segmentation performance. In future work, we intend to enhance the model's robustness and extend its applicability to additional medical image segmentation tasks.
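The channel-attention-guided fusion idea described in this abstract can be illustrated with a minimal NumPy sketch. This is not the paper's MM-UNet module; the squeeze-and-excitation-style gating here is a generic assumption used only to show the mechanism of suppressing background channels before fusing adjacent features:

```python
import numpy as np

def channel_attention_fuse(shallow, deep):
    """Fuse two feature maps of shape (C, H, W) with a channel-attention gate.

    Global average pooling produces one statistic per channel, a sigmoid turns
    the statistics into gates in [0, 1], and the gated deep features are added
    to the shallow ones, damping uninformative (background-dominated) channels.
    """
    assert shallow.shape == deep.shape
    # Squeeze: per-channel global average pooling -> shape (C,)
    desc = deep.mean(axis=(1, 2))
    # Excite: sigmoid gate per channel
    gate = 1.0 / (1.0 + np.exp(-desc))
    # Re-weight deep channels, then fuse with the shallow features
    return shallow + gate[:, None, None] * deep

rng = np.random.default_rng(0)
fused = channel_attention_fuse(rng.normal(size=(8, 4, 4)),
                               rng.normal(size=(8, 4, 4)))
```

In a real network the gate would be produced by small learned layers rather than the raw pooled statistic; the sketch keeps only the gating arithmetic.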


Subject(s)
Magnetic Resonance Imaging; Prostate; Prostatic Neoplasms; Humans; Male; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Prostate/diagnostic imaging; Prostate/pathology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Image Interpretation, Computer-Assisted/methods
2.
Discov Oncol ; 15(1): 323, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085488

ABSTRACT

PURPOSE/OBJECTIVE(S): Manual contouring of the prostate region in planning computed tomography (CT) images is a challenging task due to factors such as low contrast in soft tissues, inter- and intra-observer variability, and variations in organ size and shape. Consequently, automated contouring methods can offer significant advantages. In this study, we aimed to investigate automated male pelvic multi-organ contouring in multi-center planning CT images using a hybrid convolutional neural network-vision transformer (CNN-ViT). MATERIALS/METHODS: We used retrospective data from 104 localized prostate cancer patients, with delineations of the clinical target volume (CTV) and critical organs at risk (OARs) for external beam radiotherapy. We introduced a novel attention-based fusion module that merges detailed features extracted through convolution with the global features obtained through the ViT. RESULTS: The average Dice similarity coefficients (DSCs) achieved by VGG16-UNet-ViT for the prostate, bladder, rectum, right femoral head (RFH), and left femoral head (LFH) were 91.75%, 95.32%, 87.00%, 96.30%, and 96.34%, respectively. Experiments conducted on multi-center planning CT images indicate that combining the ViT structure with the CNN network resulted in superior performance for all organs compared to pure CNN and transformer architectures. Furthermore, the proposed method achieves more precise contours than state-of-the-art techniques. CONCLUSION: The results demonstrate that integrating ViT into CNN architectures significantly improves segmentation performance, showing promise as a reliable and efficient tool to facilitate prostate radiotherapy treatment planning.
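The Dice similarity coefficient reported throughout these studies is a simple overlap measure between a predicted and a reference binary mask. A minimal sketch of the standard formula, DSC = 2|A ∩ B| / (|A| + |B|):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2 x 3 masks: 2 overlapping pixels, 3 pixels per mask -> DSC = 4/6
dsc = dice_coefficient([[1, 1, 0], [0, 1, 0]], [[1, 0, 0], [0, 1, 1]])
```

A DSC of 91.75% for the prostate, as above, means the predicted and expert masks share roughly that fraction of their combined area.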

3.
Curr Med Imaging ; 20: e15734056305021, 2024.
Article in English | MEDLINE | ID: mdl-38874030

ABSTRACT

INTRODUCTION: Prostate cancer (PCa) is the second leading cause of cancer death among men in America. Worldwide, it is one of the most common cancers in men, and its annual incidence is striking. As with other diagnostic and prognostic medical systems, deep learning-based automated recognition and detection systems (i.e., computer-aided detection (CAD) systems) have gained enormous attention in PCa. METHODS: These paradigms have attained promising results, with high segmentation, detection, and classification accuracy. Numerous researchers have reported more efficient results from deep learning-based approaches than from conventional systems that utilized pathological samples. RESULTS: This research performs prostate segmentation using transfer learning-based Mask R-CNN, which is consequently helpful in prostate cancer detection. CONCLUSION: Lastly, limitations of the current work, research findings, and future prospects are discussed.


Subject(s)
Deep Learning; Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Prostatic Neoplasms/diagnostic imaging; Male; Magnetic Resonance Imaging/methods; Prostate/diagnostic imaging; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
4.
Quant Imaging Med Surg ; 14(6): 4067-4085, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38846298

ABSTRACT

Background: The segmentation of the prostate from transrectal ultrasound (TRUS) images is a critical step in the diagnosis and treatment of prostate cancer. Nevertheless, manual segmentation performed by physicians is a time-consuming and laborious task. To address this challenge, there is a pressing need for computerized algorithms capable of autonomously segmenting the prostate from TRUS images. However, automatic prostate segmentation in TRUS images has always been a challenging problem, since prostates in TRUS images have ambiguous boundaries and inhomogeneous intensity distributions. Although many prostate segmentation methods have been proposed, they still need improvement due to their lack of sensitivity to edge information. Consequently, the objective of this study is to devise a highly effective prostate segmentation method that overcomes these limitations and achieves accurate segmentation of the prostate in TRUS images. Methods: A three-dimensional (3D) edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed in this paper, which consists of an edge-aware segmentation network (EASNet) that performs the prostate segmentation and a discriminator network that distinguishes predicted prostates from real ones. The proposed EASNet is composed of an encoder-decoder-based U-Net backbone network, a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The DCM compensates for the loss of detailed information caused by the down-sampling process of the encoder. The features of the DCM are selectively enhanced by the 3D spatial and channel attention modules. Furthermore, the EEM guides shallow layers in the EASNet to focus on contour and edge information in prostates.
Finally, features from shallow layers and hierarchical features from the decoder module are fused through the GFE to predict the prostate segmentation. Results: The proposed method was evaluated on our TRUS image dataset and the open-source µRegPro dataset. Experimental results on the two datasets show that the proposed method improved the average segmentation Dice score from 85.33% to 90.06%, the Jaccard score from 76.09% to 84.11%, the Hausdorff distance (HD) from 8.59 mm to 4.58 mm, the precision from 86.48% to 90.58%, and the recall from 84.79% to 89.24%. Conclusions: A novel 3D EAGAN-based prostate segmentation method, consisting of an EASNet and a discriminator network, is proposed. Experimental results demonstrate that it achieves satisfactory results on 3D TRUS image segmentation of the prostate.
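The Hausdorff distance quoted above measures the worst-case boundary disagreement between two contours. A minimal sketch over two point sets (a brute-force pairwise computation, suitable only for small boundaries):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets of shape (N, D), (M, D).

    For each point, find its nearest neighbour in the other set; the Hausdorff
    distance is the worst such nearest-neighbour distance, in either direction.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

hd = hausdorff_distance([[0, 0], [1, 0]], [[0, 0], [0, 3]])  # -> 3.0
```

Because it takes a maximum, a single stray boundary point can dominate the score, which is why many studies also report a percentile variant.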

5.
Radiol Artif Intell ; 6(4): e230138, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38568094

ABSTRACT

Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning Systematic review registration link: osf.io/nxaev © RSNA, 2024.


Subject(s)
Deep Learning; Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Magnetic Resonance Imaging/methods; Male; Prostatic Neoplasms/diagnostic imaging; Prostate/diagnostic imaging; Prostate/anatomy & histology; Image Interpretation, Computer-Assisted/methods
6.
Comput Biol Med ; 171: 108216, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38442555

ABSTRACT

Despite being one of the most prevalent forms of cancer, prostate cancer (PCa) shows a significantly high survival rate, provided there is timely detection and treatment. Computational methods can help make this detection process considerably faster and more robust. However, some modern machine-learning approaches require accurate segmentation of the prostate gland and the index lesion. Since performing manual segmentations is a very time-consuming task and highly prone to inter-observer variability, there is a need to develop robust semi-automatic segmentation models. In this work, we leverage the large and highly diverse ProstateNet dataset, which includes 638 whole-gland and 461 lesion segmentation masks from 3 different scanner manufacturers, provided by 14 institutions, in addition to 3 other independent public datasets, to train accurate and robust segmentation models for the whole prostate gland, zones, and lesions. We show that models trained on large amounts of diverse data generalize better to data from other institutions and other manufacturers, outperforming models trained on single-institution, single-manufacturer datasets in all segmentation tasks. Furthermore, we show that lesion segmentation models trained on ProstateNet can be reliably used as lesion detection models.


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Imaging, Three-Dimensional/methods; Retrospective Studies; Algorithms; Prostatic Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods
7.
Med Image Anal ; 93: 103095, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38310678

ABSTRACT

Segmenting prostate from magnetic resonance imaging (MRI) is a critical procedure in prostate cancer staging and treatment planning. Considering the nature of labeled data scarcity for medical images, semi-supervised learning (SSL) becomes an appealing solution since it can simultaneously exploit limited labeled data and a large amount of unlabeled data. However, SSL relies on the assumption that the unlabeled images are abundant, which may not be satisfied when the local institute has limited image collection capabilities. An intuitive solution is to seek support from other centers to enrich the unlabeled image pool. However, this further introduces data heterogeneity, which can impede SSL that works under identical data distribution with certain model assumptions. Aiming at this under-explored yet valuable scenario, in this work, we propose a separated collaborative learning (SCL) framework for semi-supervised prostate segmentation with multi-site unlabeled MRI data. Specifically, on top of the teacher-student framework, SCL exploits multi-site unlabeled data by: (i) Local learning, which advocates local distribution fitting, including the pseudo label learning that reinforces confirmation of low-entropy easy regions and the cyclic propagated real label learning that leverages class prototypes to regularize the distribution of intra-class features; (ii) External multi-site learning, which aims to robustly mine informative clues from external data, mainly including the local-support category mutual dependence learning, which takes the spirit that mutual information can effectively measure the amount of information shared by two variables even from different domains, and the stability learning under strong adversarial perturbations to enhance robustness to heterogeneity. 
Extensive experiments on prostate MRI data from six different clinical centers show that our method can effectively generalize SSL on multi-site unlabeled data and significantly outperform other semi-supervised segmentation methods. Besides, we validate the extensibility of our method on the multi-class cardiac MRI segmentation task with data from four different clinical centers.
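Teacher-student frameworks like the one this abstract builds on typically maintain the teacher as an exponential moving average (EMA) of the student's weights. A generic sketch of that update, which is an assumption about the standard mean-teacher recipe rather than this paper's exact schedule:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """One mean-teacher step: teacher <- alpha * teacher + (1 - alpha) * student.

    The slowly moving teacher provides stable pseudo-labels for unlabeled
    images while the student is trained with gradient descent.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [np.array([1.0, 1.0])]
student = [np.array([0.0, 2.0])]
teacher = ema_update(teacher, student, alpha=0.9)  # -> [0.9, 1.1]
```

With alpha close to 1 the teacher averages over many past students, which is what makes its pseudo-labels on low-entropy "easy" regions comparatively reliable.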


Subject(s)
Interdisciplinary Practices; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Entropy; Magnetic Resonance Imaging
8.
Comput Biol Med ; 170: 107999, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244470

ABSTRACT

Precise prostate gland and prostate cancer (PCa) segmentations enable the fusion of magnetic resonance imaging (MRI) and ultrasound imaging (US) to guide robotic prostate biopsy systems. This precise segmentation, applied to preoperative MRI images, is crucial for accurate image registration and automatic localization of the biopsy target. Nevertheless, describing local prostate lesions in MRI remains a challenging and time-consuming task, even for experienced physicians. Therefore, this research work develops a parallel dual-pyramid network that combines convolutional neural networks (CNN) and a tokenized multi-layer perceptron (MLP) for automatic segmentation of the prostate gland and clinically significant PCa (csPCa) in MRI. The proposed network consists of two stages. The first stage focuses on prostate segmentation, while the second stage uses the partition from the previous stage to detect the cancerous regions. Both stages share a similar network architecture, combining CNN and tokenized MLP as the feature extraction backbone to create a pyramid-structured network for feature encoding and decoding. By employing CNN layers of different scales, the network generates scale-aware local semantic features, which are integrated into feature maps and input into an MLP layer from a global perspective. This facilitates the complementarity between local and global information, capturing richer semantic features. Additionally, the network incorporates an interactive hybrid attention module to enhance the perception of the target area. Experimental results demonstrate the superiority of the proposed network over other state-of-the-art image segmentation methods for segmenting the prostate gland and csPCa tissue in MRI images.


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Prostatic Neoplasms/diagnostic imaging
10.
Phys Med Biol ; 68(19)2023 09 20.
Article in English | MEDLINE | ID: mdl-37652058

ABSTRACT

Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for ultrasound-guided brachytherapy for prostate cancer. However, the current practice of manual segmentation is difficult, time-consuming, and prone to errors. To overcome these challenges, we developed an accurate prostate segmentation framework (A-ProSeg) for TRUS images. The proposed segmentation method includes three innovative steps: (1) acquiring the sequence of vertices by using an improved polygonal segment-based method with a small number of radiologist-defined seed points as prior points; (2) establishing an optimal machine learning-based method by using an improved evolutionary neural network; and (3) obtaining smooth contours of the prostate region of interest using the optimized machine learning-based method. The proposed method was evaluated on 266 patients who underwent prostate cancer brachytherapy. It achieved high performance against the ground truth, with a Dice similarity coefficient of 96.2% ± 2.4%, a Jaccard similarity coefficient of 94.4% ± 3.3%, and an accuracy of 95.7% ± 2.7%; these values are all higher than those obtained using state-of-the-art methods. A sensitivity evaluation at different noise levels demonstrated that our method achieves high robustness against changes in image quality. In addition, an ablation study was performed, demonstrating the significance of all key components of the proposed method.


Subject(s)
Brachytherapy; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Head; Machine Learning
11.
Med Image Anal ; 89: 102924, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37597316

ABSTRACT

Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay.

12.
Phys Med Biol ; 68(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37433302

ABSTRACT

Objective. Both computed tomography (CT) and magnetic resonance imaging (MRI) images are acquired for high-dose-rate (HDR) prostate brachytherapy patients at our institution. CT is used to identify catheters, and MRI is used to segment the prostate. To address scenarios of limited MRI access, we developed a novel generative adversarial network (GAN) to generate synthetic MRI (sMRI) from CT with sufficient soft-tissue contrast to provide accurate prostate segmentation without real MRI (rMRI). Approach. Our hybrid GAN, PxCGAN, was trained utilizing 58 paired CT-MRI datasets from our HDR prostate patients. Using 20 independent CT-MRI datasets, the image quality of sMRI was tested using mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). These metrics were compared with the metrics of sMRI generated using Pix2Pix and CycleGAN. The accuracy of prostate segmentation on sMRI was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean surface distance (MSD) on the prostate delineated by three radiation oncologists (ROs) on sMRI versus rMRI. To estimate inter-observer variability (IOV), these metrics were calculated between the prostate contours delineated by each RO on rMRI and the prostate delineated by the treating RO on rMRI (gold standard). Main results. Qualitatively, sMRI images show enhanced soft-tissue contrast at the prostate boundary compared with CT scans. For MAE and MSE, PxCGAN and CycleGAN have similar results, while the MAE of PxCGAN is smaller than that of Pix2Pix. The PSNR and SSIM of PxCGAN are significantly higher than those of Pix2Pix and CycleGAN (p < 0.01). The DSC for sMRI versus rMRI is within the range of the IOV, while the HD for sMRI versus rMRI is smaller than the HD for the IOV for all ROs (p ≤ 0.03). Significance. PxCGAN generates sMRI images from treatment-planning CT scans that depict enhanced soft-tissue contrast at the prostate boundary.
The accuracy of prostate segmentation on sMRI compared to rMRI is within the segmentation variation on rMRI between different ROs.
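Two of the image-quality metrics used above, MAE and PSNR, are simple pixelwise computations. A minimal sketch for images scaled to [0, 1]:

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images (lower is better)."""
    return float(np.abs(x - y).mean())

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher means closer images)."""
    mse = float(((x - y) ** 2).mean())
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
synth = np.full((4, 4), 0.5)
err = mae(ref, synth)       # 0.5
quality = psnr(ref, synth)  # 10 * log10(1 / 0.25) ≈ 6.02 dB
```

SSIM, the fourth metric, additionally compares local luminance, contrast, and structure, and is more involved than these two-line formulas.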

13.
Quant Imaging Med Surg ; 13(5): 3255-3265, 2023 May 01.
Article in English | MEDLINE | ID: mdl-37179941

ABSTRACT

Background: Accurate whole prostate segmentation on magnetic resonance imaging (MRI) is important in the management of prostatic diseases. In this multicenter study, we aimed to develop and evaluate a clinically applicable deep learning-based tool for automatic whole prostate segmentation on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI). Methods: In this retrospective study, 3-dimensional (3D) U-Net-based models in the segmentation tool were trained with 223 patients who underwent prostate MRI and subsequent biopsy from 1 hospital and validated in 1 internal testing cohort (n=95) and 3 external testing cohorts: PROSTATEx Challenge for T2WI and DWI (n=141), Tongji Hospital (n=30), and Beijing Hospital for T2WI (n=29). Patients from the latter 2 centers were diagnosed with advanced prostate cancer. The DWI model was further fine-tuned to compensate for the scanner variety in external testing. A quantitative evaluation, including Dice similarity coefficients (DSCs), 95% Hausdorff distance (95HD), and average boundary distance (ABD), and a qualitative analysis were used to evaluate the clinical usefulness. Results: The segmentation tool showed good performance in the testing cohorts on T2WI (DSC: 0.922 for internal testing and 0.897-0.947 for external testing) and DWI (DSC: 0.914 for internal testing and 0.815 for external testing with fine-tuning). The fine-tuning process significantly improved the DWI model's performance in the external testing dataset (DSC: 0.275 vs. 0.815; P<0.01). Across all testing cohorts, the 95HD was <8 mm, and the ABD was <3 mm. The DSCs in the prostate midgland (T2WI: 0.949-0.976; DWI: 0.843-0.942) were significantly higher than those in the apex (T2WI: 0.833-0.926; DWI: 0.755-0.821) and base (T2WI: 0.851-0.922; DWI: 0.810-0.929) (all P values <0.01). The qualitative analysis showed that 98.6% of T2WI and 72.3% of DWI autosegmentation results in the external testing cohort were clinically acceptable. 
Conclusions: The 3D U-Net-based segmentation tool can automatically segment the prostate on T2WI with good and robust performance, especially in the prostate midgland. Segmentation on DWI was feasible, but fine-tuning might be needed for different scanners.
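The 95% Hausdorff distance (95HD) reported in this study replaces the maximum surface distance with a percentile, making the metric robust to a few outlying boundary points. A minimal sketch over boundary point sets:

```python
import numpy as np

def hd95(a, b):
    """95th-percentile Hausdorff distance between two boundary point sets.

    Collect every point's nearest-neighbour distance to the other set, in
    both directions, and take the 95th percentile instead of the maximum,
    discarding the worst 5% of surface distances.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    nn = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(nn, 95))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
perfect = hd95(pts, pts)  # identical boundaries -> 0.0
```

A 95HD below 8 mm, as reported across the testing cohorts, means that after trimming the worst 5% of boundary points, predicted and reference surfaces stay within 8 mm of each other.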

14.
Comput Med Imaging Graph ; 107: 102241, 2023 07.
Article in English | MEDLINE | ID: mdl-37201475

ABSTRACT

In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer. This is because individual patient biology is unique, and employing a single approach for all is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has significantly increased in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only offload work, but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed-source data and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Benchmarking; Neural Networks, Computer; Algorithms; Prostatic Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
15.
Bioengineering (Basel) ; 10(4)2023 Mar 26.
Article in English | MEDLINE | ID: mdl-37106600

ABSTRACT

Segmentation of the prostate gland from magnetic resonance images is rapidly becoming a standard of care in prostate cancer radiotherapy treatment planning. Automating this process has the potential to improve accuracy and efficiency. However, the performance and accuracy of deep learning models varies depending on the design and optimal tuning of the hyper-parameters. In this study, we examine the effect of loss functions on the performance of deep-learning-based prostate segmentation models. A U-Net model for prostate segmentation using T2-weighted images from a local dataset was trained, and its performance was compared across nine different loss functions: Binary Cross-Entropy (BCE), Intersection over Union (IoU), Dice, BCE and Dice (BCE + Dice), weighted BCE and Dice (W (BCE + Dice)), Focal, Tversky, Focal Tversky, and Surface loss. Model outputs were compared using several metrics on a five-fold cross-validation set. The ranking of model performance was found to depend on the metric used to measure performance, but in general, W (BCE + Dice) and Focal Tversky performed well for all metrics (whole gland Dice similarity coefficient (DSC): 0.71 and 0.74; 95HD: 6.66 and 7.42; Ravid 0.05 and 0.18, respectively) and Surface loss generally ranked lowest (DSC: 0.40; 95HD: 13.64; Ravid -0.09). When comparing the performance of the models for the mid-gland, apex, and base parts of the prostate gland, performance was lower for the apex and base than for the mid-gland. In conclusion, we have demonstrated that the performance of a deep learning model for prostate segmentation can be affected by the choice of loss function. For prostate segmentation, it would appear that compound loss functions generally outperform single loss functions such as Surface loss.
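Two of the loss functions compared in this study can be sketched directly on probability maps. This is a minimal NumPy illustration of the standard formulas (the study's exact weights and smoothing constants are not given, so `w_bce`, `w_dice`, and the Tversky parameters below are conventional defaults, not the paper's values):

```python
import numpy as np

def bce_dice_loss(pred, truth, w_bce=0.5, w_dice=0.5, eps=1e-7):
    """Compound BCE + soft-Dice loss on probability maps in (0, 1)."""
    p = np.clip(pred, eps, 1 - eps)
    bce = -(truth * np.log(p) + (1 - truth) * np.log(1 - p)).mean()
    dice = 2 * (p * truth).sum() / (p.sum() + truth.sum() + eps)
    return w_bce * bce + w_dice * (1 - dice)

def focal_tversky_loss(pred, truth, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: weights FN/FP asymmetrically, then focuses hard cases."""
    tp = (pred * truth).sum()
    fn = ((1 - pred) * truth).sum()
    fp = (pred * (1 - truth)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

truth = np.ones(4)
good = focal_tversky_loss(np.ones(4), truth)   # perfect prediction -> ~0
bad = focal_tversky_loss(np.zeros(4), truth)   # all misses -> ~1
```

The alpha > beta setting penalizes false negatives more than false positives, which is one reason such losses suit small or faint structures like the prostate apex.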

16.
Cancers (Basel) ; 15(5)2023 Feb 25.
Article in English | MEDLINE | ID: mdl-36900261

ABSTRACT

Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men, according to the American Cancer Society. Although the survival rate for prostate cancer is high despite its very high incidence, there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold: First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, where one is used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce scores that are not significantly different, apart from nnU-Net, which consistently outperforms the others, and that models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.
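The detector-as-pre-processing idea above amounts to cropping each image to the detected gland (plus some context) before segmentation. A minimal sketch of that crop step; the `(y0, x0, y1, x1)` box convention and the `margin` of kept context are illustrative assumptions, not details from the paper:

```python
import numpy as np

def crop_to_box(image, box, margin=8):
    """Crop an image to a detector bounding box plus a safety margin.

    box = (y0, x0, y1, x1) in pixel coordinates; margin is a hypothetical
    amount of surrounding context kept for the downstream segmenter.
    """
    y0, x0, y1, x1 = box
    h, w = image.shape[:2]
    y0, x0 = max(0, y0 - margin), max(0, x0 - margin)
    y1, x1 = min(h, y1 + margin), min(w, x1 + margin)
    return image[y0:y1, x0:x1]

scan = np.zeros((100, 100))
roi = crop_to_box(scan, (30, 30, 60, 60))  # -> shape (46, 46)
```

Cropping removes background the segmenter would otherwise have to learn to ignore, which is consistent with the improved generalization reported for detector-cropped training data.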

17.
Med Phys ; 50(6): 3445-3458, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36905102

ABSTRACT

BACKGROUND: Multiparametric magnetic resonance imaging (mp-MRI) has been introduced and established as a noninvasive alternative for prostate cancer (PCa) detection and characterization. PURPOSE: To develop and evaluate a mutually communicated deep learning segmentation and classification network (MC-DSCN) based on mp-MRI for prostate segmentation and PCa diagnosis. METHODS: The proposed MC-DSCN can transfer mutual information between its segmentation and classification components so that each facilitates the other in a bootstrapping way. For the classification task, the MC-DSCN transfers the masks produced by the coarse segmentation component to the classification component to exclude irrelevant regions and facilitate classification. For the segmentation task, the model transfers the high-quality localization information learned by the classification component to the fine segmentation component to mitigate the impact of inaccurate localization on segmentation results. Consecutive MRI exams of patients were retrospectively collected from two medical centers (referred to as centers A and B). Two experienced radiologists segmented the prostate regions, and the ground truth of the classification refers to the prostate biopsy results. MC-DSCN was designed, trained, and validated using different combinations of distinct MRI sequences as input (e.g., T2-weighted and apparent diffusion coefficient), and the effect of different architectures on the network's performance was tested and discussed. Data from center A were used for training, validation, and internal testing, while the other center's data were used for external testing. Statistical analysis was performed to evaluate the performance of the MC-DSCN. The DeLong test and the paired t-test were used to assess the performance of classification and segmentation, respectively. RESULTS: In total, 134 patients were included. The proposed MC-DSCN outperforms the networks that were designed solely for segmentation or classification.
Regarding the segmentation task, the classification localization information helped to improve the IoU from 84.5% to 87.8% in center A (p < 0.01) and from 83.8% to 87.1% in center B (p < 0.01), while the area under the curve (AUC) of PCa classification improved from 0.946 to 0.991 in center A (p < 0.02) and from 0.926 to 0.955 in center B (p < 0.01) as a result of the additional information provided by prostate segmentation. CONCLUSION: The proposed architecture can effectively transfer mutual information between its segmentation and classification components so that each facilitates the other in a bootstrapping way, thus outperforming networks designed to perform only one task.
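The mask-transfer idea this abstract describes — using the coarse segmentation output to exclude irrelevant regions before classification — can be illustrated with a minimal NumPy sketch. The function name, threshold, and array shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mask_for_classification(image, coarse_prob, threshold=0.5):
    """Suppress pixels outside the coarse prostate mask before they are
    passed to a classifier (the segmentation-to-classification transfer
    described in the abstract; threshold is an assumed hyperparameter)."""
    mask = (coarse_prob >= threshold).astype(image.dtype)
    return image * mask

# Toy example: a 4x4 "image" and a coarse mask covering its center.
image = np.arange(16, dtype=float).reshape(4, 4)
coarse = np.zeros((4, 4))
coarse[1:3, 1:3] = 0.9
masked = mask_for_classification(image, coarse)
```

In the actual network this gating would be applied to feature maps inside the model rather than to raw pixels, but the principle of restricting the classifier's input to the segmented region is the same.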


Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Retrospective Studies , Sensitivity and Specificity , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Magnetic Resonance Imaging/methods
18.
J Digit Imaging ; 36(3): 947-963, 2023 06.
Article in English | MEDLINE | ID: mdl-36729258

ABSTRACT

Accurate prostate segmentation in ultrasound images is crucial for the clinical diagnosis of prostate cancer and for performing image-guided prostate surgery. However, it is challenging to accurately segment the prostate in ultrasound images due to their low signal-to-noise ratio, the low contrast between the prostate and neighboring tissues, and the diffuse or invisible boundaries of the prostate. In this paper, we develop a novel hybrid method for segmentation of the prostate in ultrasound images that generates accurate contours of the prostate from a range of datasets. Our method involves three key steps: (1) application of a principal curve-based method to obtain a data sequence comprising data coordinates and their corresponding projection index; (2) use of the projection index as training input for a fractional-order-based neural network that increases the accuracy of results; and (3) generation of a smooth mathematical map (expressed via the parameters of the neural network) that yields a smooth prostate boundary, which represents the output of the neural network (i.e., optimized vertices) and matches the ground truth contour. Experimental evaluation of our method and several other state-of-the-art segmentation methods on datasets of prostate ultrasound images generated at multiple institutions demonstrated that our method performed best. Furthermore, our method is robust: it can be applied to prostate ultrasound images obtained at multiple institutions and performs well under various evaluation metrics.
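Step (1) of the hybrid method produces, for each data point, a projection index along the principal curve. A minimal sketch of that quantity, assuming the principal curve is approximated by a polyline (the polyline stand-in, function name, and normalization are illustrative, not the paper's formulation):

```python
import numpy as np

def projection_index(points, curve):
    """Normalized arc-length position of each point's closest projection
    onto a polyline approximating the principal curve."""
    seg = np.diff(curve, axis=0)                       # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at vertices
    idx = np.empty(len(points))
    for i, p in enumerate(points):
        best_d, best_t = np.inf, 0.0
        for j, (a, d) in enumerate(zip(curve[:-1], seg)):
            # closest point on segment j, clamped to its endpoints
            t = np.clip(np.dot(p - a, d) / np.dot(d, d), 0.0, 1.0)
            dist = np.linalg.norm(p - (a + t * d))
            if dist < best_d:
                best_d, best_t = dist, cum[j] + t * seg_len[j]
        idx[i] = best_t / cum[-1]                      # normalize to [0, 1]
    return idx

# A straight 2-segment "curve" of total length 2.
curve = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pts = np.array([[0.5, 0.1], [2.0, 0.0]])
idx = projection_index(pts, curve)
```

The resulting index sequence is what the abstract describes as the training input for the fractional-order-based neural network.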


Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging , Ultrasonography , Models, Theoretical , Image Processing, Computer-Assisted/methods
19.
Med Phys ; 50(2): 906-921, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35923153

ABSTRACT

PURPOSE: Automatic segmentation of prostate magnetic resonance (MR) images is crucial for the diagnosis, evaluation, and prognosis of prostate diseases (including prostate cancer). In recent years, mainstream prostate segmentation methods have shifted to convolutional neural networks. However, owing to the complexity of the tissue structure in MR images and the limitations of existing methods in spatial context modeling, segmentation performance still has room for improvement. METHODS: In this study, we proposed a novel 3D pyramid pool Unet that benefits from a pyramid pooling structure embedded in the skip connection (SC) and from deep supervision (DS) in the up-sampling path of the 3D Unet. The parallel SC of the conventional 3D Unet repeatedly sends low-resolution information to the feature map, resulting in blurred image features. To overcome this shortcoming, we merge each decoder layer with the feature map of the same scale from the encoder and the smaller-scale feature map of the pyramid pooling encoder. This SC combines low-level details and high-level semantics at two different levels of feature maps. In addition, pyramid pooling performs multifaceted feature extraction on each image behind the convolutional layer, and DS learns hierarchical representations from the comprehensively aggregated feature maps, which improves accuracy on the task. RESULTS: Experiments on 3D prostate MR images of 78 patients demonstrated that our results were highly correlated with expert manual segmentation. The average relative volume difference and Dice similarity coefficient of the prostate volume were 2.32% and 91.03%, respectively. CONCLUSION: Quantitative experiments demonstrate that, compared with other methods, the results of our method are highly consistent with expert manual segmentation.
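The pyramid pooling operation this abstract embeds in the skip connections can be sketched in NumPy. This is a 2D, single-image simplification of the 3D operation (bin sizes, nearest-neighbour upsampling, and the divisibility assumption are illustrative choices, not the paper's exact design):

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Average-pool a (C, H, W) feature map to several coarse grids,
    upsample each back to H x W, and concatenate with the input along
    the channel axis. Assumes H and W are divisible by every bin size."""
    c, h, w = feat.shape
    outs = [feat]
    for b in bins:
        # average-pool to a b x b grid
        pooled = feat.reshape(c, b, h // b, b, w // b).mean(axis=(2, 4))
        # nearest-neighbour upsample back to H x W
        up = pooled.repeat(h // b, axis=1).repeat(w // b, axis=2)
        outs.append(up)
    return np.concatenate(outs, axis=0)

feat = np.ones((2, 4, 4))
out = pyramid_pool(feat)
```

Each coarse grid summarizes context at a different spatial scale, which is what lets the skip connection carry high-level semantics alongside the encoder's low-level details.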


Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Image Processing, Computer-Assisted
20.
Bioengineering (Basel) ; 9(8)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35892756

ABSTRACT

In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), forms the basis for targeted biopsy by allowing information from both imaging modalities to be compared at the same time. Compared with the standard clinical procedure, it provides a less invasive option for patients and increases the likelihood of sampling cancerous tissue regions for subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved in both the MRI and TRUS domains. Automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors, including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a large quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm to create a tool that can help the physician perform a targeted prostate biopsy by interacting with the graphical user interface.
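The superellipse model at the core of this approach is the curve |x/a|^n + |y/b|^n = 1. A minimal sketch of its parameterization and of a simple implicit-equation residual that an optimizer could minimize against a TRUS contour (the residual form and all names are assumptions for illustration, not the paper's actual objective):

```python
import numpy as np

def superellipse(theta, cx, cy, a, b, n):
    """Boundary points of a superellipse |x/a|^n + |y/b|^n = 1 centered
    at (cx, cy); n = 2 recovers an ordinary ellipse."""
    ct, st = np.cos(theta), np.sin(theta)
    x = cx + a * np.sign(ct) * np.abs(ct) ** (2.0 / n)
    y = cy + b * np.sign(st) * np.abs(st) ** (2.0 / n)
    return np.stack([x, y], axis=-1)

def fit_residual(params, contour):
    """Mean deviation of contour points from the implicit equation;
    a candidate objective for fitting (cx, cy, a, b, n)."""
    cx, cy, a, b, n = params
    u = np.abs((contour[:, 0] - cx) / a) ** n
    v = np.abs((contour[:, 1] - cy) / b) ** n
    return float(np.mean(np.abs(u + v - 1.0)))

# Sanity check: points generated from a 2:1 ellipse (n = 2) should give
# (near-)zero residual for the true parameters.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
pts = superellipse(theta, 0.0, 0.0, 2.0, 1.0, 2.0)
res = fit_residual([0.0, 0.0, 2.0, 1.0, 2.0], pts)
```

With five parameters against a whole contour, such a residual could be minimized by any generic optimizer; the appeal noted in the abstract is that so few degrees of freedom need no large annotated training set.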
