Results 1 - 20 of 168
1.
Heliyon ; 10(16): e36426, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253160

ABSTRACT

Objective: It is challenging to accurately distinguish atypical endometrial hyperplasia (AEH) and endometrial cancer (EC) on routine transvaginal ultrasound (TVU) examination. Our research aims to use the few-shot learning (FSL) method to identify non-atypical endometrial hyperplasia (NAEH), AEH, and EC from limited TVU images. Methods: The TVU images of pathologically confirmed NAEH, AEH, and EC patients (n = 33 per class) were split into a support set (SS, n = 3 per class) and a query set (QS, n = 30 per class). Next, we used a dual-pretrained ResNet50 V2, pretrained first on ImageNet and then on additionally collected TVU images, to extract 1 × 64 eigenvectors from the TVU images in the SS and QS. Then, Euclidean distances were calculated between each TVU image in the QS and the nine TVU images of the SS. Finally, the k-nearest neighbor (KNN) algorithm was used to diagnose the TVU images in the QS. Results: The overall accuracy and macro precision of the proposed FSL model on the QS were 0.878 and 0.882 respectively, superior to the automated machine learning models, the traditional ResNet50 V2 model, a junior sonographer, and a senior sonographer. When identifying EC, the proposed FSL model achieved the highest precision of 0.964, the highest recall of 0.900, and the highest F1-score of 0.931. Conclusions: The proposed FSL model, combining a dual-pretrained ResNet50 V2 eigenvector extractor with a KNN classifier, performed well in identifying NAEH, AEH, and EC patients from limited TVU images, showing potential for application in computer-aided disease diagnosis.
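
The distance-plus-KNN classification step described above can be sketched as follows; this is a minimal illustration in which toy 4-D embeddings stand in for the 1 × 64 ResNet50 V2 feature vectors, and all data and names are hypothetical:

```python
import numpy as np

def knn_few_shot(support_vecs, support_labels, query_vec, k=3):
    """Classify a query embedding by k-nearest-neighbor vote over
    Euclidean distances to the support-set embeddings."""
    dists = np.linalg.norm(support_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [support_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy 4-D embeddings standing in for the 1x64 feature vectors:
# three support images per class, mirroring the paper's support set.
support = np.array([
    [0.0, 0.1, 0.0, 0.1], [0.1, 0.0, 0.1, 0.0], [0.05, 0.05, 0.0, 0.1],  # NAEH
    [1.0, 1.1, 1.0, 0.9], [0.9, 1.0, 1.1, 1.0], [1.1, 0.9, 1.0, 1.0],    # AEH
    [2.0, 2.1, 1.9, 2.0], [2.1, 2.0, 2.0, 1.9], [1.9, 2.0, 2.1, 2.0],    # EC
])
labels = ["NAEH"] * 3 + ["AEH"] * 3 + ["EC"] * 3
query = np.array([1.05, 0.95, 1.0, 1.05])
print(knn_few_shot(support, labels, query))  # -> AEH
```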

2.
Article in English | MEDLINE | ID: mdl-39289317

ABSTRACT

PURPOSE: Ultrasound imaging has emerged as a promising cost-effective, portable, and non-irradiating modality for the diagnosis and follow-up of diseases. Motion analysis can be performed by segmenting anatomical structures of interest before tracking them over time. However, doing so robustly is challenging, as ultrasound images often display low contrast and blurry boundaries. METHODS: In this paper, a robust descriptor inspired by the fractal dimension is presented to locally characterize the gray-level variations of an image. This descriptor is an adaptive grid pattern whose scale varies locally with the gray-level variations of the image. Robust features are then located based on the gray-level variations; these are more likely to be consistently tracked over time despite the presence of noise. RESULTS: The method was validated on three datasets: segmentation of the left ventricle on simulated echocardiography (Dice coefficient, DC), accuracy of diaphragm motion tracking for healthy subjects (mean sum of distances, MSD) and for a scoliosis patient (root mean square error, RMSE). Results show that the method segments the left ventricle accurately (DC = 0.84) and robustly tracks the diaphragm motion for healthy subjects (MSD = 1.10 mm) and for the scoliosis patient (RMSE = 1.22 mm). CONCLUSIONS: This method has the potential to segment structures of interest according to their texture in an unsupervised fashion, as well as to help analyze the deformation of tissues. Possible applications are not limited to ultrasound images: the same principle could also be applied to other medical imaging modalities such as MRI or CT scans.
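
The fractal-dimension idea the descriptor draws on can be illustrated with classical box counting on a binary mask; this is a generic sketch of the underlying concept, not the paper's adaptive grid pattern:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Classical box-counting fractal dimension of a square binary mask:
    count occupied boxes at several scales and fit the log-log slope."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Partition the mask into s-by-s blocks and count occupied ones.
        blocks = mask.reshape(n // s, s, n // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        counts.append(occupied)
    # Slope of log(count) vs log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# A filled square should have dimension 2.
mask = np.ones((16, 16), dtype=bool)
print(round(box_count_dimension(mask), 2))  # 2.0
```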

3.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. Intravascular ultrasound (IVUS) imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or using VH-IVUS software. Since manual or VH-IVUS software-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Recently, deep learning (DL) and computer vision (CV) approaches have emerged as promising tools for automatically classifying plaques in IVUS images. With this motivation, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses a faster regional convolutional neural network (Faster RCNN)-based segmentation approach to identify diseased regions in the IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, with its hyperparameters optimally selected by the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and a deep extreme learning machine (DELM) model is utilized. The MICCAI Challenge 2011 dataset was used for the AAPC-HALODL simulation analysis. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F-score of 98.10%.
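
The final average-ensemble step, taken on its own, can be sketched as follows; the softmax outputs are hypothetical, and the actual SAE and DELM models are not reproduced here:

```python
import numpy as np

def average_ensemble(prob_a, prob_b):
    """Average-ensemble step: mean of two classifiers' class
    probabilities, then argmax for the final plaque class."""
    avg = (np.asarray(prob_a) + np.asarray(prob_b)) / 2.0
    return avg, int(np.argmax(avg))

# Hypothetical softmax outputs over three plaque classes,
# standing in for the SAE and DELM predictions.
probs, cls = average_ensemble([0.2, 0.5, 0.3], [0.1, 0.7, 0.2])
print(probs, cls)  # [0.15 0.6 0.25] 1
```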

4.
Microsc Res Tech ; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145424

ABSTRACT

Ultrasound images are susceptible to various forms of quality degradation that negatively impact diagnosis. Common degradations include speckle noise, Gaussian noise, salt-and-pepper noise, and blurring. This research proposes an accurate ultrasound image denoising strategy based on first detecting the noise type, so that a suitable denoising method can be applied for each corruption. The technique depends on convolutional neural networks to categorize the type of noise affecting an input ultrasound image. Pre-trained convolutional neural network models, including GoogleNet, VGG-19, AlexNet, and AlexNet-support vector machine (SVM), are developed and trained to perform this classification. A dataset of 782 numerically generated ultrasound images across different diseases and noise types is utilized for model training and evaluation. Results show AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. The top-performing model is then applied to real ultrasound images with different noise corruptions to demonstrate the efficacy of the proposed detect-then-denoise system. RESEARCH HIGHLIGHTS: Proposes an accurate ultrasound image denoising strategy based on detecting noise type first. Uses pre-trained convolutional neural networks to categorize noise type in input images. Evaluates GoogleNet, VGG-19, AlexNet, and AlexNet-support vector machine (SVM) models on a dataset of 782 synthetic ultrasound images. AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. Demonstrates efficacy of the proposed detect-then-denoise system on real ultrasound images.
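
For reference, the three noise corruptions named above are commonly simulated as follows; the parameters are illustrative and not those used to build the 782-image dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=0.05):
    """Additive zero-mean Gaussian noise."""
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 1)

def add_speckle(img, sigma=0.1):
    """Multiplicative speckle noise: I' = I * (1 + n)."""
    return np.clip(img * (1 + rng.normal(0, sigma, img.shape)), 0, 1)

def add_salt_pepper(img, amount=0.02):
    """Randomly set a fraction of pixels to 0 (pepper) or 1 (salt)."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0
    out[mask > 1 - amount / 2] = 1.0
    return out

img = np.full((64, 64), 0.5)
print(add_speckle(img).shape)  # (64, 64)
```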

5.
World J Clin Cases ; 12(22): 4932-4939, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39109037

ABSTRACT

BACKGROUND: Collision tumors are neoplasms in which two histologically distinct tumors coexist in the same mass without histological admixture. The incidence of collision tumors is low, and they are rare clinically. AIM: To investigate ultrasound images and the application of the ovarian-adnexal reporting and data system (O-RADS) to evaluate the risk and pathological characteristics of ovarian collision tumors. METHODS: This study retrospectively analyzed 17 cases of ovarian collision tumor diagnosed pathologically from January 2020 to December 2023. All clinical features, ultrasound images, and histopathological features were collected and analyzed. The O-RADS score was used for classification and was determined by two senior doctors in the gynecological ultrasound group. Lesions with an O-RADS score of 1-3 were classified as benign tumors, and lesions with an O-RADS score of 4 or 5 were classified as malignant tumors. RESULTS: There were 17 collision tumors detected in 16 of 6274 patients who underwent gynecological surgery. The average age of the 17 women with ovarian collision tumor was 36.7 years (range 20-68 years); one tumor occurred bilaterally and the rest occurred unilaterally. The average tumor diameter was 10 cm: three were 2-5 cm, 11 were 5-10 cm, and three were > 10 cm. Five (29.4%) tumors with an O-RADS score of 3 were endometriotic cysts with fibroma/serous cystadenoma, and the unilocular or multilocular cysts contained a small number of parenchymal components. Eleven (64.7%) tumors had an O-RADS score of 4, including two in category 4A, six in category 4B, and three in category 4C; all were multilocular cystic tumors with solid components or multiple papillary components. One (5.9%) tumor had an O-RADS score of 5: a solid mass with a small amount of pelvic effusion detected on ultrasound, whose pathology was high-grade serous cystic cancer combined with mature cystic teratoma.
There were nine (52.9%) tumors with elevated serum carbohydrate antigen (CA)125 and two (11.8%) with elevated serum CA19-9. Histological and pathological results showed that epithelial-cell-derived tumors combined with other tumors were the most common, which was different from previous results. CONCLUSION: The ultrasound images of ovarian collision tumor have certain specificity, but diagnosis by preoperative ultrasound is difficult. The combination of epithelial and mesenchymal cell tumors is one of the most common types of ovarian collision tumor. The O-RADS score of ovarian collision tumor is mostly ≥ 4, which can sensitively detect malignant tumors.

6.
Ophthalmol Ther ; 13(10): 2645-2659, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39127983

ABSTRACT

INTRODUCTION: The aim of this work is to develop a deep learning (DL) system for rapidly and accurately screening for intraocular tumor (IOT), retinal detachment (RD), vitreous hemorrhage (VH), and posterior scleral staphyloma (PSS) using ocular B-scan ultrasound images. METHODS: Ultrasound images from five clinically confirmed categories, including vitreous hemorrhage, retinal detachment, intraocular tumor, posterior scleral staphyloma, and normal eyes, were used to develop and evaluate a fine-grained classification system (the Dual-Path Lesion Attention Network, DPLA-Net). Images were derived from five centers, scanned by different sonographers, and divided into training, validation, and test sets in a ratio of 7:1:2. Two senior ophthalmologists and four junior ophthalmologists were recruited to evaluate the system's performance. RESULTS: This multi-center cross-sectional study was conducted in six hospitals in China. A total of 6054 ultrasound images were collected; 4758 images were used for the training and validation of the system, and 1296 images were used as the testing set. DPLA-Net achieved a mean accuracy of 0.943 on the testing set, and the area under the curve was 0.988 for IOT, 0.997 for RD, 0.994 for PSS, 0.988 for VH, and 0.993 for normal. With the help of DPLA-Net, the accuracy of the four junior ophthalmologists improved from 0.696 (95% confidence interval [CI] 0.684-0.707) to 0.919 (95% CI 0.912-0.926, p < 0.001), and the time used for classifying each image was reduced from 16.84 ± 2.34 s to 10.09 ± 1.79 s. CONCLUSIONS: The proposed DPLA-Net showed high accuracy for screening and classifying multiple ophthalmic diseases using B-scan ultrasound images across multiple centers. Moreover, the system can improve the efficiency of classification by ophthalmologists.
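
A 7:1:2 train/validation/test split like the one described can be sketched as follows; this illustrative split is random at the image level, so the exact counts differ slightly from the study's 4758/1296 figures, which may reflect patient- or center-level grouping:

```python
import random

def split_dataset(items, ratios=(0.7, 0.1, 0.2), seed=42):
    """Shuffle and split items into train/val/test with a 7:1:2 ratio."""
    items = items[:]
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset(list(range(6054)))
print(len(train), len(val), len(test))  # 4237 605 1212
```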

7.
Med Biol Eng Comput ; 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39215783

ABSTRACT

Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution often goes overlooked within deep learning methodologies. In this study, we integrate the ultrasound image resolution into a convolutional neural network and explore the effect of the resolution on diagnosis of kidney tumors. In the process of integrating the image resolution information, we propose two different approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second approach, the features extracted by the neural network are first dimensionally reduced and then combined with the resolution features to form new composite features. We compare these two approaches incorporating the resolution with the method without incorporating the resolution on a kidney tumor dataset of 926 images consisting of 211 images of benign kidney tumors and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without incorporating the resolution is 0.8665, and the AUCs of the two approaches incorporating the resolution are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001) respectively. This study has established end-to-end kidney tumor classification systems and has demonstrated the benefits of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish between malignant and benign kidney tumors in ultrasound images.
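
The first approach, direct concatenation of the resolution with the CNN features, amounts to the following; the feature dimension and resolution value are hypothetical:

```python
import numpy as np

def concat_resolution(cnn_features, resolution_mm):
    """First approach from the abstract: append the image resolution
    directly to the CNN feature vector before the classifier head."""
    return np.concatenate([cnn_features, [resolution_mm]])

feats = np.random.rand(256)                # features from a CNN backbone
combined = concat_resolution(feats, 0.25)  # 0.25 mm/pixel (hypothetical)
print(combined.shape)  # (257,)
```

The second approach differs only in that the CNN features would first pass through a dimensionality-reduction layer before the same concatenation.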

8.
Phys Med Biol ; 69(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-38986480

ABSTRACT

Objective. Automated detection and segmentation of breast masses in ultrasound images are critical for breast cancer diagnosis, but remain challenging due to limited image quality and complex breast tissues. This study aims to develop a deep learning-based method that enables accurate breast mass detection and segmentation in ultrasound images. Approach. A novel convolutional neural network-based framework that combines the You Only Look Once (YOLO) v5 network and the Global-Local (GOLO) strategy was developed. First, YOLOv5 was applied to locate the mass regions of interest (ROIs). Second, a Global Local-Connected Multi-Scale Selection (GOLO-CMSS) network was developed to segment the masses. GOLO-CMSS operated both on the entire images globally and on the mass ROIs locally, then integrated the two branches for a final segmentation output. In particular, in the global branch, CMSS applied Multi-Scale Selection (MSS) modules to automatically adjust the receptive fields, and Multi-Input (MLI) modules to enable fusion of shallow and deep features at different resolutions. The USTC dataset containing 28 477 breast ultrasound images was collected for training and testing. The proposed method was also tested on three public datasets: UDIAT, BUSI and TUH. The segmentation performance of GOLO-CMSS was compared with other networks and three experienced radiologists. Main results. YOLOv5 outperformed other detection models with average precisions of 99.41%, 95.15%, 93.69% and 96.42% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The proposed GOLO-CMSS showed superior segmentation performance over other state-of-the-art networks, with Dice similarity coefficients (DSCs) of 93.19%, 88.56%, 87.58% and 90.37% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The mean DSC between GOLO-CMSS and each radiologist was significantly better than that between radiologists (p < 0.001). Significance. Our proposed method can accurately detect and segment breast masses with performance comparable to that of radiologists, highlighting its great potential for clinical implementation in breast ultrasound examination.
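
The Dice similarity coefficient used to report segmentation performance above is computed from the overlap of two binary masks:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2/(3+3) = 0.666...
```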


Subject(s)
Breast Neoplasms; Deep Learning; Image Processing, Computer-Assisted; Humans; Breast Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Ultrasonography/methods; Female; Ultrasonography, Mammary/methods; Neural Networks, Computer
9.
Biomed Eng Lett ; 14(4): 785-800, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946824

ABSTRACT

The aim of this study is to propose a new diagnostic model based on "segmentation + classification" to improve routine ultrasound screening of thyroid nodules by utilizing key domain knowledge from medical diagnostic tasks. A multi-scale segmentation network based on a multi-parallel atrous spatial pyramid pooling structure is proposed. First, in the segmentation network, exact information on the underlying feature space is obtained by an Attention Gate. Second, the dilated convolution part of Atrous Spatial Pyramid Pooling (ASPP) is cascaded for multiple downsampling. Finally, a three-branch classification network combined with expert knowledge is designed, drawing on doctors' clinical diagnosis experience, to extract features from the original image of the nodule, the regional image of the nodule, and the edge image of the nodule, respectively, and to improve the classification accuracy of the model by utilizing the Coordinate Attention (CA) mechanism and cross-level feature fusion. The multi-scale segmentation network achieves 94.27%, 93.90% and 88.85% for mean pixel accuracy (mPA), Dice value (Dice) and mean intersection over union (MIoU), respectively, and the accuracy, specificity and sensitivity of the classification network reach 86.07%, 81.34% and 90.19%, respectively. Comparison tests show that this method outperforms the classical U-Net, AGU-Net and DeepLab V3+ models as well as the nnU-Net, Swin UNetr and MedFormer models that have emerged in recent years. As an auxiliary diagnostic tool, this algorithm can help physicians more accurately assess whether thyroid nodules are benign or malignant. It can provide objective quantitative indicators, reduce the bias of subjective judgment, and improve the consistency and accuracy of diagnosis. Codes and models are available at https://github.com/enheliang/Thyroid-Segmentation-Network.git.
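
The MIoU metric reported above averages the per-class intersection over union; for a single pair of binary masks it reduces to:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union (Jaccard index) of two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union

a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([1, 0, 0, 1], dtype=bool)
print(iou(a, b))  # 1/3
```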

10.
Breast Cancer Res Treat ; 207(2): 453-468, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38853220

ABSTRACT

PURPOSE: This study aims to assess the diagnostic value of ultrasound habitat sub-region radiomics feature parameters, using a fully connected neural network (FCNN) combined with the L2,1-norm, in relation to breast cancer Ki-67 status. METHODS: Ultrasound images from 528 cases of female breast cancer at the Affiliated Hospital of Xiangnan University and 232 cases of female breast cancer at the Affiliated Rehabilitation Hospital of Xiangnan University were selected for this study. We utilized deep learning methods to automatically outline the gross tumor volume and perform habitat clustering. Subsequently, habitat sub-regions were extracted to identify radiomics features, which underwent feature engineering using the L2,1-norm. A prediction model for the Ki-67 status of breast cancer patients was then developed using an FCNN. The model's performance was evaluated using accuracy, area under the curve (AUC), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), recall, and F1. In addition, calibration curves and clinical decision curves were plotted for the test set to visually assess the predictive accuracy and clinical benefit of the models. RESULTS: Based on the feature engineering using the L2,1-norm, a total of 9 core features were identified. The predictive model, constructed by the FCNN based on these 9 features, achieved the following scores: ACC 0.856, AUC 0.915, Spe 0.843, PPV 0.920, NPV 0.747, recall 0.974, and F1 0.890. Furthermore, calibration curves and clinical decision curves of the validation set demonstrated a high level of confidence in the model's performance and its clinical benefit. CONCLUSION: Habitat clustering of ultrasound images of breast cancer is effectively supported by the combined implementation of the L2,1-norm and FCNN algorithms, allowing for the accurate classification of the Ki-67 status in breast cancer patients.
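
For reference, the L2,1-norm used in the feature engineering step is the sum of the row-wise L2 norms of a matrix, a standard sparsity-inducing penalty for feature selection; a minimal sketch with a hypothetical weight matrix:

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm of a matrix: sum of the L2 norms of its rows.
    Penalizing it drives whole rows (features) to zero."""
    return float(np.sqrt((W ** 2).sum(axis=1)).sum())

W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # row norm 0 (feature dropped)
              [1.0, 0.0]])  # row norm 1
print(l21_norm(W))  # 6.0
```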


Subject(s)
Breast Neoplasms; Ki-67 Antigen; Neural Networks, Computer; Humans; Female; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/metabolism; Breast Neoplasms/pathology; Ki-67 Antigen/metabolism; Ki-67 Antigen/analysis; Middle Aged; Adult; Aged; Deep Learning; Ultrasonography, Mammary/methods; Ultrasonography/methods; ROC Curve; Biomarkers, Tumor; Radiomics
11.
BMC Med Imaging ; 24(1): 133, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840240

ABSTRACT

BACKGROUND: Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Nowadays, deep learning techniques are applied as auxiliary tools to provide predictive results that help doctors decide whether to pursue further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS: We proposed a hybrid learning approach to classify breast tumors into benign and malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and then the model was fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at a spatial level and from graphs at a geometric level. The input images are small-sized and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves the costs of labor and memory space. RESULTS: The classification AUROC of our proposed method is 0.911, 0.871 and 0.767 for BUSI, BUS and OASBUD. The balanced accuracy is 87.6%, 85.2% and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS: Our hybrid approach can learn inter-center features from multi-center data and intra-center features from local data. It shows potential in aiding doctors in early-stage breast tumor classification in ultrasound.
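
Federated pretraining across centers typically aggregates locally trained weights by sample-weighted averaging (the FedAvg scheme); the abstract does not specify the exact aggregation rule, so this is a generic sketch with hypothetical per-center sample counts:

```python
import numpy as np

def fedavg(weight_sets, n_samples):
    """Federated averaging: combine locally trained weights,
    weighted by each center's sample count."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(weight_sets, n_samples))

# Toy 2-parameter "models" from three centers (hypothetical values).
w_busi, w_bus, w_oasbud = np.array([1.0, 2.0]), np.array([2.0, 4.0]), np.array([4.0, 8.0])
avg = fedavg([w_busi, w_bus, w_oasbud], [780, 163, 100])
print(avg)
```

After such a round of aggregation, each center would continue fine-tuning from `avg` on its local data, as described in the abstract.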


Subject(s)
Breast Neoplasms; Deep Learning; Neural Networks, Computer; Ultrasonography, Mammary; Humans; Breast Neoplasms/diagnostic imaging; Female; Ultrasonography, Mammary/methods; Image Interpretation, Computer-Assisted/methods; Adult
12.
Curr Med Imaging ; 20: e15734056293608, 2024.
Article in English | MEDLINE | ID: mdl-38712376

ABSTRACT

BACKGROUND: Transorbital ultrasonography (TOS) is a promising imaging technology that can be used to characterize the structures of the optic nerve and the alterations that may occur in them as a result of increased intracranial pressure (ICP) or the presence of other disorders such as multiple sclerosis (MS) and hydrocephalus. OBJECTIVE: The primary objective of this paper is to develop a fully automated system capable of segmenting the structures associated with the optic nerve in TOS images and calculating the optic nerve sheath diameter (ONSD) and the optic nerve diameter (OND). METHODS: The segmentation method is built on a pre-trained fully convolutional neural network (FCN) model. The developed method was applied to 464 images collected from 110 subjects using four different devices. RESULTS: The automatic measurements were compared with those of a manual operator. Relative to the operator, OND and ONSD show typical errors of -0.12 ± 0.32 mm and 0.14 ± 0.58 mm, respectively. The Pearson correlation coefficient (PCC) is 0.71 for OND and 0.64 for ONSD, indicating a positive correlation between the two measurement approaches. CONCLUSION: The developed technique is automatic, and the average error (AE) achieved for the ONSD measurement is compatible with the ranges of inter-operator variability reported in the literature.
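
The Pearson correlation and mean error between automatic and manual measurements can be computed as follows; the ONSD values (in mm) are hypothetical:

```python
import numpy as np

auto = np.array([5.1, 5.6, 4.9, 6.0, 5.4])    # hypothetical automatic ONSD, mm
manual = np.array([5.0, 5.5, 5.1, 5.9, 5.2])  # hypothetical operator ONSD, mm

pcc = np.corrcoef(auto, manual)[0, 1]  # Pearson correlation coefficient
bias = np.mean(auto - manual)          # mean (constant) error vs. the operator
print(round(pcc, 2), round(bias, 2))   # 0.94 0.06
```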


Subject(s)
Deep Learning; Optic Nerve; Ultrasonography; Humans; Optic Nerve/diagnostic imaging; Ultrasonography/methods; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
13.
Cell Biochem Funct ; 42(4): e4054, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38783623

ABSTRACT

Breast cancer is one of the most dangerous conditions in clinical practice and profoundly affects women's lives. Nevertheless, existing techniques for diagnosing breast cancer are complicated, expensive, and inaccurate. Many trans-disciplinary computerized systems have recently been created to prevent human errors in both quantification and diagnosis. Ultrasonography is a crucial imaging technique for cancer detection; it is therefore essential to develop a system that enables the healthcare sector to detect breast cancer rapidly and effectively. Machine learning is widely employed in the categorization of breast cancer patterns owing to its ability to identify crucial features in complicated breast cancer datasets. However, the performance of machine learning models is limited by the absence of an effective feature enhancement strategy, and several issues remain in traditional breast cancer detection methods. Thus, a novel breast cancer detection model is designed based on machine learning approaches and ultrasound images. First, the ultrasound images used for the analysis are acquired from benchmark resources and passed to the preprocessing phase, where filtering and contrast enhancement are applied. The preprocessed images are then subjected to the segmentation phase, in which segmentation is performed using Fuzzy C-Means, active contour, and watershed algorithms. The segmented images are then provided to the pixel selection phase, where pixels are selected by the developed hybrid Conglomerated Aphid with Galactic Swarm Optimization (CAGSO) algorithm. The selected segmented pixels are then fed into the feature extraction phase to obtain shape and textural features. The acquired features are passed to the optimal weighted feature selection phase, where their weights are tuned by the developed CAGSO. Finally, the optimal weighted features are supplied to the breast cancer detection phase. The developed breast cancer detection model secured an enhanced performance rate over the classical approaches throughout the experimental analysis.


Subject(s)
Breast Neoplasms; Machine Learning; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Humans; Female; Ultrasonography; Algorithms; Image Processing, Computer-Assisted
14.
Biomed Phys Eng Express ; 10(4)2024 May 31.
Article in English | MEDLINE | ID: mdl-38781934

ABSTRACT

Congenital heart defects (CHD) are one of the serious problems that can arise during pregnancy. Early CHD detection reduces death rates and morbidity but is hampered by the relatively low detection rates (i.e., 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. As a result, non-invasive fetal ultrasound imaging has clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review paper highlights cutting-edge technologies for detecting CHD using ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing preprocessing techniques include spatial domain filters, non-linear mean filters, transform domain filters, and denoising methods based on Convolutional Neural Networks (CNN); segmentation includes thresholding-based techniques, region-growing-based techniques, edge detection techniques, Artificial Neural Network (ANN)-based segmentation methods, non-deep-learning approaches, and deep learning approaches. The paper also suggests future research directions for improving current methodologies.


Subject(s)
Deep Learning; Heart Defects, Congenital; Neural Networks, Computer; Ultrasonography, Prenatal; Humans; Heart Defects, Congenital/diagnostic imaging; Ultrasonography, Prenatal/methods; Pregnancy; Female; Image Processing, Computer-Assisted/methods; Echocardiography/methods; Algorithms; Fetal Heart/diagnostic imaging; Fetus/diagnostic imaging
15.
Ultrasound Med Biol ; 50(8): 1188-1193, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38697896

ABSTRACT

OBJECTIVE: This study investigated the reliability and validity of muscle cross-sectional area and echo intensity measurements obtained using an automatic image analysis program. METHODS: Twenty-two participants completed two data collection trials consisting of ultrasound imaging of the vastus lateralis (VL) at 10 and 12 MHz. Images were analyzed manually and with Deep Anatomical Cross-Sectional Area (DeepACSA). Reliability statistics (i.e., intraclass correlation coefficient [ICC] model 2,1; standard error of measurement expressed as a percentage of the mean [SEM%]; minimal difference [MD] values needed to be considered real) and validity statistics (i.e., constant error [CE], total error [TE], standard error of the estimate [SEE]) were calculated. RESULTS: Automatic analyses of ACSA and EI demonstrated good reliability (10 MHz: ICC2,1 = 0.83-0.90; 12 MHz: ICC2,1 = 0.87-0.88), while manual analyses demonstrated moderate to excellent reliability (10 MHz: ICC2,1 = 0.82-0.99; 12 MHz: ICC2,1 = 0.73-0.99). Automatic analyses of ACSA presented greater error at 10 MHz (CE = -0.76 cm2, TE = 4.94 cm2, SEE = 3.65 cm2) than at 12 MHz (CE = 0.17 cm2, TE = 3.44 cm2, SEE = 3.11 cm2). Analyses of EI presented greater error at 10 MHz (CE = 3.35 a.u., TE = 2.70 a.u., SEE = 2.58 a.u.) than at 12 MHz (CE = 3.21 a.u., TE = 2.61 a.u., SEE = 2.34 a.u.). CONCLUSION: The results suggest the DeepACSA program may be less reliable than manual analysis for VL ACSA but displayed similar reliability for EI. In addition, the results demonstrated that the automatic program had low error at both 10 and 12 MHz.
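
The SEM and MD statistics follow directly from the ICC and the between-subject standard deviation (SEM = SD·√(1 − ICC); MD = SEM·1.96·√2); a sketch with hypothetical numbers, not the study's data:

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def minimal_difference(sem, z=1.96):
    """Smallest change needed to be considered real at 95% confidence."""
    return sem * z * math.sqrt(2.0)

# Hypothetical values: SD = 4.0 cm^2, ICC(2,1) = 0.90
sem = sem_from_icc(4.0, 0.90)
md = minimal_difference(sem)
print(round(sem, 3), round(md, 3))  # 1.265 3.506
```

Dividing `sem` by the group mean and multiplying by 100 gives the SEM% reported in the abstract.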


Subject(s)
Ultrasonography; Humans; Reproducibility of Results; Ultrasonography/methods; Male; Adult; Female; Young Adult; Organ Size; Muscle, Skeletal/diagnostic imaging; Muscle, Skeletal/anatomy & histology; Quadriceps Muscle/diagnostic imaging; Quadriceps Muscle/anatomy & histology; Image Processing, Computer-Assisted/methods
16.
Quant Imaging Med Surg ; 14(5): 3676-3694, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38720857

ABSTRACT

Background: Thyroid nodules are commonly identified through ultrasound imaging, which plays a crucial role in the early detection of malignancy. The diagnostic accuracy, however, is significantly influenced by the expertise of radiologists, the quality of equipment, and image acquisition techniques. This variability underscores the critical need for computational tools that support diagnosis. Methods: This retrospective study evaluates an artificial intelligence (AI)-driven system for thyroid nodule assessment, integrating clinical practices from multiple prominent Thai medical centers. We included patients who underwent thyroid ultrasonography complemented by ultrasound-guided fine needle aspiration (FNA) between January 2015 and March 2021. Participants formed a consecutive series, enhancing the study's validity. A comparative analysis was conducted between the AI model's diagnostic performance and that of both an experienced radiologist and a third-year radiology resident, using a dataset of 600 ultrasound images from three distinguished Thai medical institutions, each verified with cytological findings. Results: The AI system demonstrated superior diagnostic performance, with an overall sensitivity of 80% [95% confidence interval (CI): 59.3-93.2%] and specificity of 71.4% (95% CI: 53.7-85.4%). At Siriraj Hospital, the AI achieved a sensitivity of 90.0% (95% CI: 55.5-99.8%), specificity of 100.0% (95% CI: 69.2-100%), positive predictive value (PPV) of 100.0%, negative predictive value (NPV) of 90.9%, and an overall accuracy of 95.0%, indicating the benefits of the AI's extensive training across diverse datasets. The experienced radiologist achieved a sensitivity of 40.0% (95% CI: 21.1-61.3%) and a specificity of 80.0% (95% CI: 63.6-91.6%), showing that the AI significantly outperformed the radiologist in terms of sensitivity (P=0.043) while maintaining comparable specificity. The inter-observer variability analysis indicated moderate agreement (K=0.53) between the radiologist and the resident, contrasting with fair agreement (K=0.37 and K=0.33, respectively) when each was compared with the AI system. Notably, the 95% CIs for these diagnostic indices highlight the AI system's consistent performance across different settings. Conclusions: The findings advocate for the integration of AI into clinical settings to enhance the diagnostic accuracy of radiologists in assessing thyroid nodules. The AI system, designed as a supportive tool rather than a replacement, promises to revolutionize thyroid nodule diagnosis and management by providing a high level of diagnostic precision.
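The diagnostic indices and agreement statistics reported above (sensitivity, specificity, Cohen's kappa) follow standard definitions and can be reproduced directly from prediction counts. A minimal Python sketch of those definitions (function names are illustrative, not from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' binary labels (lists of 0/1)."""
    n = len(a)
    # observed agreement: fraction of cases where the raters match
    po = sum(x == y for x, y in zip(a, b)) / n
    # expected agreement if the raters labeled independently
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)
```

With counts like TP=8, FN=2, TN=10, FP=4 this yields a sensitivity of 0.8 and a specificity of 10/14, matching how the per-site percentages above would be derived from raw counts.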

17.
Phys Med Biol ; 69(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38684166

RESUMEN

Objective. Automated biopsy needle segmentation in 3D ultrasound images can be used for biopsy navigation, but it is quite challenging due to the low ultrasound image resolution and interference with an appearance similar to the needle. For 3D medical image segmentation, deep learning networks such as convolutional neural networks and transformers have been investigated. However, these segmentation methods require large amounts of labeled data for training, have difficulty meeting real-time segmentation requirements, and involve high memory consumption. Approach. In this paper, we propose a temporal-information-based semi-supervised training framework for fast and accurate needle segmentation. First, a novel circle transformer module based on static and dynamic features is placed after the encoders to extract and fuse temporal information. Then, consistency constraints between the outputs before and after incorporating temporal information are proposed to provide semi-supervision for unlabeled volumes. Finally, the model is trained using a loss function that combines cross-entropy- and Dice similarity coefficient (DSC)-based segmentation losses with a mean-squared-error-based consistency loss. The trained model, taking a single ultrasound volume as input, is then applied to perform needle segmentation. Main results. Experimental results on three needle ultrasound datasets acquired during beagle biopsies show that our approach is superior to the most competitive mainstream temporal segmentation model and semi-supervised method, providing a higher DSC (77.1% versus 76.5%) and smaller needle tip position (1.28 mm versus 1.87 mm) and length (1.78 mm versus 2.19 mm) errors on the kidney dataset, as well as a higher DSC (78.5% versus 76.9%) and smaller needle tip position (0.86 mm versus 1.12 mm) and length (1.01 mm versus 1.26 mm) errors on the prostate dataset. Significance. The proposed method can significantly enhance needle segmentation accuracy by training with sequential images at no additional cost. This enhancement may further improve the effectiveness of biopsy navigation systems.
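The composite objective described in the abstract (supervised cross-entropy plus Dice loss, plus a mean-squared-error consistency term between predictions with and without temporal information) can be sketched on flattened probability vectors. This is an illustrative reconstruction under the assumptions stated in the comments, not the authors' implementation:

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P|+|T|), on probability/binary vectors."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def bce(pred, target, eps=1e-7):
    """Per-voxel binary cross-entropy averaged over the volume."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def mse(a, b):
    """Mean squared error, used here as the consistency term."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def total_loss(pred, target, pred_temporal, lam=1.0):
    # supervised term (cross-entropy + Dice) on labeled volumes, plus a
    # consistency term between outputs before/after temporal fusion;
    # `lam` is a hypothetical weighting hyperparameter
    return bce(pred, target) + dice_loss(pred, target) + lam * mse(pred, pred_temporal)
```

On unlabeled volumes, only the consistency term would contribute, which is what provides the semi-supervision the abstract refers to.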


Subject(s)
Imaging, Three-Dimensional , Ultrasonography , Imaging, Three-Dimensional/methods , Needles , Time Factors , Image Processing, Computer-Assisted/methods , Animals , Dogs , Humans , Supervised Machine Learning , Biopsy, Needle
18.
Ultrasound Med Biol ; 50(7): 1034-1044, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38679514

RESUMEN

Accurate pretreatment diagnosis is essential for the proper treatment and care of hepatic cystic echinococcosis (HCE). OBJECTIVE: The objective of this study was to assess the diagnostic accuracy of computer-aided diagnosis techniques in classifying HCE ultrasound images into five subtypes. METHODS: A total of 1820 HCE ultrasound images collected from 967 patients were included in the study. A multi-kernel learning method was developed to learn the texture and depth features of the ultrasound images. The combined kernel functions were built into a multi-kernel support vector machine (MK-SVM) for classification. The experimental results were evaluated using five-fold cross-validation. Finally, our approach was compared with three other machine learning algorithms: the decision tree classifier, random forest, and gradient boosting decision tree. RESULTS: Among all the methods used in the study, the MK-SVM achieved the highest accuracy of 96.6% on the fused feature set. CONCLUSION: The multi-kernel learning method effectively learns different image features from ultrasound images by utilizing various kernels. The MK-SVM method, which learns texture features and depth features with separate kernels before combining them, has significant application value in HCE classification tasks.
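The core idea of multi-kernel learning is a weighted combination of per-feature-type kernels into a single Gram matrix, which can then be fed to an SVM with a precomputed kernel. A minimal sketch (the kernel choices, weights, and feature split are illustrative assumptions, not the paper's configuration):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on one feature vector pair."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear(x, y):
    """Linear kernel on one feature vector pair."""
    return sum(a * b for a, b in zip(x, y))

def combined_gram(X_texture, X_depth, w=0.6, gamma=0.5):
    """Weighted sum of an RBF kernel on texture features and a linear kernel
    on depth features: K = w*K_texture + (1-w)*K_depth. The resulting Gram
    matrix can be passed to an SVM that accepts a precomputed kernel."""
    n = len(X_texture)
    return [[w * rbf(X_texture[i], X_texture[j], gamma)
             + (1 - w) * linear(X_depth[i], X_depth[j])
             for j in range(n)] for i in range(n)]
```

Because a non-negative weighted sum of valid kernels is itself a valid kernel, the combined matrix remains positive semi-definite and symmetric.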


Subject(s)
Echinococcosis, Hepatic , Machine Learning , Ultrasonography , Humans , Echinococcosis, Hepatic/diagnostic imaging , Ultrasonography/methods , Male , Liver/diagnostic imaging , Female , Adult , Middle Aged , Support Vector Machine , Reproducibility of Results , Algorithms , Aged , Image Interpretation, Computer-Assisted/methods
19.
BMC Med Imaging ; 24(1): 74, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38539143

RESUMEN

OBJECTIVE: The objective of this research was to create a deep learning network that utilizes multiscale images for the classification of follicular thyroid carcinoma (FTC) and follicular thyroid adenoma (FTA) through preoperative ultrasound (US). METHODS: This retrospective study involved the collection of ultrasound images from 279 patients at two tertiary-level hospitals. To address the issue of false positives caused by small nodules, we introduced a multi-rescale fusion network (MRF-Net). Four different deep learning models, namely MobileNet V3, ResNet50, DenseNet121, and MRF-Net, were studied based on the feature information extracted from ultrasound images. The performance of each model was evaluated using various metrics, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, F1 score, the receiver operating characteristic (ROC) curve, area under the curve (AUC), decision curve analysis (DCA), and the confusion matrix. RESULTS: Of the nodules examined, 193 were identified as FTA and 86 were confirmed as FTC. Among the deep learning models evaluated, MRF-Net exhibited the highest accuracy and AUC, with values of 85.3% and 84.8%, respectively. Additionally, MRF-Net demonstrated superior sensitivity and specificity compared with the other models, and achieved an F1 score of 83.08%. The DCA curves revealed that MRF-Net consistently outperformed the other models, yielding higher net benefits across various decision thresholds. CONCLUSION: The utilization of MRF-Net enables more precise discrimination between benign and malignant thyroid follicular tumors on preoperative US.
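Decision curve analysis, used above to compare the models, evaluates a classifier's net benefit at each probability threshold pt: NB = TP/n - (FP/n) * pt/(1-pt), compared against the "treat all" and "treat none" baselines. An illustrative sketch of the standard formulas (not code from the study):

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit of a model at probability threshold pt:
    NB = TP/n - FP/n * pt/(1-pt)."""
    return tp / n - fp / n * threshold / (1 - threshold)

def treat_all_net_benefit(prevalence, threshold):
    """Baseline where every case is treated: every positive is a TP and
    every negative is an FP, so NB = prev - (1-prev) * pt/(1-pt)."""
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)
```

A model is clinically useful at a given threshold when its net benefit exceeds both the treat-all baseline and zero (treat none); "higher net benefits across various decision thresholds" means MRF-Net's curve dominates the others over that threshold range.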


Subject(s)
Adenocarcinoma, Follicular , Thyroid Neoplasms , Thyroid Nodule , Humans , Retrospective Studies , Thyroid Neoplasms/diagnostic imaging , Thyroid Neoplasms/pathology , Adenocarcinoma, Follicular/diagnostic imaging , Adenocarcinoma, Follicular/pathology , Neural Networks, Computer , Thyroid Nodule/diagnostic imaging , Thyroid Nodule/pathology
20.
Article in English | MEDLINE | ID: mdl-38434146

RESUMEN

Objectives: Localized autoimmune pancreatitis is difficult to differentiate from pancreatic ductal adenocarcinoma on endoscopic ultrasound images. In recent years, deep learning methods have improved the diagnosis of diseases. We therefore developed a dedicated cross-validation framework to search for effective deep learning methodologies for distinguishing autoimmune pancreatitis from pancreatic ductal adenocarcinoma on endoscopic ultrasound images. Methods: Data from 24 patients diagnosed with localized autoimmune pancreatitis (8751 images) and 61 patients diagnosed with pancreatic ductal adenocarcinoma (20,584 images) were collected from 2016 to 2022. We applied transfer learning to a convolutional neural network called ResNet152, together with our innovative imaging method contributing to data augmentation and temporal data processing. We divided patients into five groups according to different factors for 5-fold cross-validation, creating ordered and balanced datasets for the performance evaluations. Results: ResNet152 surpassed the endoscopists in all evaluation metrics with almost all datasets. Interestingly, when the dataset was balanced according to the endoscopists' diagnostic accuracy, the area under the receiver operating characteristic curve and accuracy were highest, at 0.85 and 0.80, respectively. Conclusions: We deduce that the image features useful for ResNet152 correlate with those used by endoscopists for their diagnoses. This finding may contribute to sample-efficient dataset preparation for training convolutional neural networks for endoscopic ultrasonography imaging diagnosis.
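Because each patient contributes many images (8751 and 20,584 images from only 24 and 61 patients), the cross-validation folds must be split at the patient level so that images from one patient never appear in both training and test folds. A minimal sketch of such a grouped split (the assignment rule is a simple illustrative round-robin, not the paper's grouping factors):

```python
def grouped_folds(patient_ids, k=5):
    """Assign whole patients to folds so that no patient's images appear in
    more than one fold (avoids leakage between correlated images).
    `patient_ids` gives the patient of each image; returns per-fold lists of
    image indices."""
    patients = sorted(set(patient_ids))
    fold_of = {p: i % k for i, p in enumerate(patients)}  # round-robin over patients
    folds = [[] for _ in range(k)]
    for idx, pid in enumerate(patient_ids):
        folds[fold_of[pid]].append(idx)
    return folds
```

Each fold then serves once as the held-out test set, mirroring the 5-fold protocol described above; splitting by image instead of by patient would inflate the reported accuracy.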
