Results 1-20 of 1,731
1.
Health Informatics J ; 30(3): 14604582241288460, 2024.
Article in English | MEDLINE | ID: mdl-39305515

ABSTRACT

Importance: Medical imaging increases the workload involved in writing reports, and because reports lack a standardized format, they are not easily used as communication tools. Objective: Descriptions in reports must also be understandable during medical team-patient communication; automatically generated imaging reports with rich, understandable information can improve the quality of care. Design, setting, and participants: This study applied the image analysis theory of Panofsky and Shatford, from the perspective of image metadata, to establish a medical image interpretation template (MIIT) for automated image report generation. Main outcomes and measures: The image information included Digital Imaging and Communications in Medicine (DICOM) metadata, reporting and data systems (RADSs), and image features used in computer-aided diagnosis (CAD). The template's utility was evaluated with a questionnaire survey assessing whether the image content could be better understood. Results: Across 100 responses, exploratory factor analysis showed factor loadings greater than 0.5 for all facets, indicating construct validity, and the overall Cronbach's alpha was 0.916, indicating reliability. No significant differences were found by sex, age, or education. Conclusions and relevance: Overall, the results show that MIIT is helpful for understanding the content of medical images.
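
The reliability figure above is Cronbach's alpha, which can be computed directly from a questionnaire's item-response matrix. Below is a minimal sketch using a hypothetical 100-respondent, 12-item Likert survey; the item count and response layout are assumptions, not details from the paper.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # per-item variance
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 100 respondents x 12 items
rng = np.random.default_rng(0)
survey = rng.integers(1, 6, size=(100, 12))
print(f"alpha = {cronbach_alpha(survey):.3f}")
```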


Subject(s)
Metadata; Humans; Female; Decision Making, Shared; Middle Aged; Adult; Surveys and Questionnaires; Reproducibility of Results; Breast/diagnostic imaging
2.
Med Image Anal ; 99: 103307, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39303447

ABSTRACT

Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. There is therefore a need for automated systems that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving diagnostic accuracy and enhancing treatment. Beyond accuracy, transparency and interpretability are crucial for explaining the whys and hows of an algorithm's predictions, and conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, as closed-source or proprietary software, and the methods lack reproducibility. To promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union of 0.9364. For the transparency task, a multi-disciplinary team including expert gastroenterologists assessed each submission and evaluated the teams on open-source practices, failure-case analysis, ablation studies, and the usability and understandability of the evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis, we not only highlight advancements in polyp and surgical-instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of these methods, reduce the cancer burden, and improve patient care.
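
The challenge rankings rest on the Dice coefficient and mean Intersection over Union reported above. Here is a self-contained sketch of both metrics for binary segmentation masks (toy arrays, not challenge data):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum() + eps)
    iou = inter / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou

# Toy 4x4 masks: predicted polyp region vs. ground truth
pred = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,1,1],[0,0,0,0],[0,0,0,0]])
print(dice_and_iou(pred, truth))
```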

3.
BMC Med Imaging ; 24(1): 253, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304839

ABSTRACT

BACKGROUND: Breast cancer is one of the leading diseases affecting women worldwide. According to estimates by the National Breast Cancer Foundation, over 42,000 women in the United States are expected to die from this disease in 2024. OBJECTIVE: The prognosis of breast cancer depends on early detection of breast micronodules and the ability to distinguish benign from malignant lesions. Ultrasonography is a crucial radiological imaging technique for diagnosing the disease because it supports biopsy and lesion characterization. Because ultrasonographic diagnosis relies on the practitioner's expertise, the user's level of experience and knowledge is vital. Furthermore, computer-aided technologies can contribute significantly by reducing radiologists' workload and complementing their expertise, especially given the large patient volumes in hospital settings. METHOD: This work describes the development of a hybrid CNN system for diagnosing benign and malignant breast cancer lesions. The models InceptionV3 and MobileNetV2 serve as the foundation of the hybrid framework: features are extracted from each model and concatenated, resulting in a larger feature set, and various classifiers are then applied for the classification task. RESULTS: The model achieved its best results with the softmax classifier, with an accuracy of over 95%. CONCLUSION: Computer-aided diagnosis greatly assists radiologists and reduces their workload, and this research can serve as a foundation for other researchers building clinical solutions.
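
The hybrid pipeline described, two pretrained backbones whose pooled features are concatenated and passed to a classifier, can be sketched as follows. This is an illustrative PyTorch reconstruction under assumptions (frozen ImageNet backbones, a plain linear softmax head, 299x299 inputs); the paper's exact preprocessing, classifiers, and training details are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

# Two pretrained backbones used as frozen feature extractors
incep = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
incep.fc = nn.Identity()            # expose 2048-d pooled features
mobile = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
mobile.classifier = nn.Identity()   # expose 1280-d pooled features
for p in list(incep.parameters()) + list(mobile.parameters()):
    p.requires_grad = False
incep.eval(); mobile.eval()

head = nn.Linear(2048 + 1280, 2)    # softmax classifier on concatenated features

x = torch.randn(4, 3, 299, 299)     # InceptionV3 expects 299x299 inputs
with torch.no_grad():
    feats = torch.cat([incep(x), mobile(x)], dim=1)  # larger, combined feature set
probs = torch.softmax(head(feats), dim=1)            # benign vs. malignant
```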


Subject(s)
Breast Neoplasms; Ultrasonography, Mammary; Humans; Female; Breast Neoplasms/diagnostic imaging; Ultrasonography, Mammary/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Diagnosis, Computer-Assisted/methods
4.
Med Image Anal ; 99: 103320, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39244796

ABSTRACT

The potential of deep learning systems to provide an independent assessment and relieve radiologists' burden in screening mammography has been recognized in several studies. However, the low cancer prevalence, the need to process high-resolution images, and the need to combine information from multiple views and scales still pose technical challenges. Multi-view architectures that combine information from the four mammographic views to produce an exam-level classification score are a promising approach to the automated processing of screening mammography. However, training such architectures from exam-level labels, without relying on pixel-level supervision, requires very large datasets and may result in suboptimal accuracy. Emerging architectures such as Vision Transformers (ViT) and graph-based architectures can potentially integrate ipsilateral and contralateral breast views better than traditional convolutional neural networks, thanks to their stronger ability to model long-range dependencies. In this paper, we extensively evaluate novel transformer-based and graph-based architectures against state-of-the-art multi-view convolutional neural networks, trained in a weakly supervised setting on a mid-sized dataset, in terms of both performance and interpretability. Extensive experiments on the CSAW dataset suggest that, while transformer-based architectures outperform the others, different inductive biases lead to complementary strengths and weaknesses, as each architecture is sensitive to different signs and mammographic features. Hence, an ensemble of different architectures should be preferred over a winner-takes-all approach to achieve more accurate and robust results. Overall, the findings highlight the potential of a wide range of multi-view architectures for breast cancer classification, even on datasets of relatively modest size, although the detection of small lesions remains challenging without pixel-wise supervision or ad hoc networks.
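
A minimal sketch of the multi-view idea: a shared CNN embeds each of the four mammographic views, a transformer encoder exchanges information across views, and a pooled token yields the exam-level score. All architectural choices below (ResNet-18 backbone, learned view embeddings, mean pooling) are illustrative assumptions, not the paper's configurations.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiViewTransformer(nn.Module):
    """Exam-level classification from four mammographic views
    (L-CC, L-MLO, R-CC, R-MLO); details are illustrative."""
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d embedding per view
        self.backbone = backbone
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.view_embed = nn.Parameter(torch.zeros(1, 4, dim))  # which-view marker
        self.head = nn.Linear(dim, 1)        # exam-level malignancy logit

    def forward(self, views):                # views: (batch, 4, 3, H, W)
        b = views.shape[0]
        tokens = self.backbone(views.flatten(0, 1)).view(b, 4, -1)
        tokens = self.encoder(tokens + self.view_embed)  # cross-view attention
        return self.head(tokens.mean(dim=1)).squeeze(-1)

logits = MultiViewTransformer()(torch.randn(2, 4, 3, 224, 224))
```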

5.
Comput Methods Programs Biomed ; 256: 108379, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39217667

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence of facial fractures is rising globally, yet few studies address the diverse forms of facial fracture present in 3D images. In particular, because the direction of a facial fracture varies and the fracture has no clear outline, it is difficult to determine its exact location in 2D images. Thus, 3D image analysis is required to find the exact fracture area, but it entails heavy computational cost and, for supervised learning, expensive pixel-wise labeling. In this study, we tackle the problem of reducing the computational burden and increasing the accuracy of fracture localization by using weakly-supervised object localization without pixel-wise labeling in 3D image space. METHODS: We propose a Very Fast, High-Resolution Aggregation 3D Detection CAM (VFHA-CAM) model, which can detect various facial fractures. To better detect tiny fractures, our model uses high-resolution feature maps and employs Ablation CAM to find the exact fracture location without pixel-wise labeling, starting from a rough fracture region detected with 3D box-wise labeling. To this end, we extract important features and use only the essential ones to reduce computational complexity in 3D image space. RESULTS: Experimental findings demonstrate that VFHA-CAM surpasses state-of-the-art 2D detection methods by up to 20% in sensitivity/person and specificity/person, achieving sensitivity/person and specificity/person scores of 87% and 85%, respectively. In addition, VFHA-CAM reduces location-analysis time to 76 s without performance degradation, compared with more than 20 min for a simple Ablation CAM method. CONCLUSION: This study introduces a novel weakly-supervised object localization approach for bone fracture detection in 3D facial images. The proposed 3D detection model accurately detects various forms of facial bone fracture. The CAM algorithm adopted for fracture-area segmentation within a 3D detection box is key to quickly informing medical staff of the exact location of a facial bone fracture under weakly-supervised localization. In addition, we provide 3D visualization so that even non-experts unfamiliar with 3D CT images can identify the fracture status and location.
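
Ablation CAM, which the model builds on, scores each feature channel by how much the detection confidence drops when that channel is zeroed out. A generic sketch for a 3D feature volume follows; VFHA-CAM's feature selection and speed optimizations are not reproduced, and `score_fn` is a stand-in for the detector's scoring head.

```python
import torch
import torch.nn.functional as F

def ablation_cam(feature_maps, score_fn, target_score):
    """Ablation-CAM sketch: channel importance is the relative drop in the
    confidence score when that channel is zeroed.
    feature_maps: (C, D, H, W) activations from the last conv layer
    score_fn: maps (ablated) feature maps to a confidence score
    target_score: score with all channels intact."""
    weights = []
    for k in range(feature_maps.shape[0]):
        ablated = feature_maps.clone()
        ablated[k] = 0                                   # ablate one channel
        drop = (target_score - score_fn(ablated)) / (target_score + 1e-8)
        weights.append(float(drop))
    w = torch.tensor(weights).view(-1, 1, 1, 1)
    cam = F.relu((w * feature_maps).sum(dim=0))          # (D, H, W) saliency
    return cam / (cam.max() + 1e-8)
```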


Subject(s)
Algorithms; Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Skull Fractures/diagnostic imaging; Facial Bones/diagnostic imaging; Facial Bones/injuries; Tomography, X-Ray Computed/methods
6.
Diagnostics (Basel) ; 14(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39272675

ABSTRACT

Brain cancer is a substantial contributor to cancer mortality and is difficult to identify in a timely manner. Diagnostic accuracy depends heavily on the proficiency of radiologists and neurologists. Although computer-aided diagnosis (CAD) algorithms hold potential for early detection, most current research is limited by modest sample sizes. This meta-analysis comprehensively assesses the diagnostic test accuracy (DTA) of CAD models designed to detect brain cancer using hyperspectral imaging (HSI). We applied the QUADAS-2 criteria to select seven papers and classified the proposed methodologies by artificial intelligence method, cancer type, and publication year. To evaluate heterogeneity and diagnostic performance, we used Deeks' funnel plot, the forest plot, and accuracy charts. Our results suggest no notable variation among the investigations, and the CAD techniques examined exhibit a notable level of precision in the automated detection of brain cancer. However, the absence of external validation hinders their implementation in real-time clinical settings, highlighting the need for additional studies to validate CAD models for wider clinical applicability.

7.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. IVUS imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or with VH-IVUS software. Because manual or VH-IVUS-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Deep learning (DL) and computer vision (CV) approaches have recently emerged as promising tools for automatically classifying plaques in IVUS images. Motivated by this, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses a faster region-based convolutional neural network (Faster RCNN) segmentation approach to identify diseased regions in IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, with its hyperparameters optimally selected by the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and a deep extreme learning machine (DELM) model is applied. The MICCAI Challenge 2011 dataset was used for simulation analysis. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models, with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F-score of 98.10%.
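
The final stage is an average ensemble: class probabilities from the two classifiers are averaged before taking the arg-max. A toy sketch (the SAE and DELM models themselves are not implemented here):

```python
import numpy as np

def average_ensemble(p_sae: np.ndarray, p_delm: np.ndarray) -> np.ndarray:
    """Average-ensemble sketch: per-class probabilities from two classifiers
    (SAE and DELM in the paper) are averaged; arg-max gives the plaque class."""
    return np.argmax((p_sae + p_delm) / 2.0, axis=1)

p1 = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3]])   # toy SAE probabilities
p2 = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])   # toy DELM probabilities
print(average_ensemble(p1, p2))                      # -> [0 2]
```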

8.
Sci Rep ; 14(1): 20647, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232180

ABSTRACT

Lung cancer (LC) is a life-threatening disease worldwide, but earlier diagnosis and treatment can save lives. Early detection of malignant cells in the lungs, the organs responsible for oxygenating the body and expelling carbon dioxide, is therefore critical. Although computed tomography (CT) is the best imaging approach available in the healthcare sector, it is challenging for physicians to identify and interpret tumours on CT scans. Artificial intelligence (AI)-based LC diagnosis on CT can help radiologists reach earlier diagnoses, enhance performance, and decrease false negatives. Deep learning (DL) for detecting lymph node involvement on histopathological slides has also become popular owing to its significance for patient diagnosis and treatment. This study introduces a computer-aided diagnosis for LC utilizing the Waterwheel Plant Algorithm with DL (CADLC-WWPADL) approach. The primary aim of the CADLC-WWPADL approach is to classify and identify the existence of LC on CT scans. The CADLC-WWPADL method uses a lightweight MobileNet model for feature extraction and employs the waterwheel plant algorithm (WWPA) for hyperparameter tuning. Furthermore, a symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation demonstrates the detection performance of the CADLC-WWPADL technique. An extensive comparative study showed that the CADLC-WWPADL technique outperforms other models, with a maximum accuracy of 99.05% on the benchmark CT image dataset.


Subject(s)
Algorithms; Deep Learning; Diagnosis, Computer-Assisted; Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/diagnosis; Lung Neoplasms/pathology; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted/methods
9.
Article in English | MEDLINE | ID: mdl-39090504

ABSTRACT

PURPOSE: The integration of deep learning into image segmentation markedly improves the automation capabilities of medical diagnostic systems, reducing dependence on the clinical expertise of medical professionals. However, segmentation accuracy is still affected by various interference factors encountered during image acquisition. METHODS: To address this challenge, this paper proposes a loss function designed to mine information from specific pixels whose status changes dynamically during training. Based on the triplet concept, this dynamic change is leveraged to drive the predicted boundaries closer to the real boundaries. RESULTS: Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that the proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. It raises the Jaccard indices of neural networks by 2.42% and 2.21% on PH2 and ISIC2017, respectively, and networks trained with it generally surpass those without it in segmentation performance. CONCLUSION: This work proposes a loss function that deeply mines the information of specific pixels without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. The loss function adapts to dermoscopic images of varying quality and demonstrates higher effectiveness and robustness than other boundary loss functions, making it suitable for segmentation tasks across various neural networks.
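
Since the abstract builds on the triplet concept, a generic triplet-margin pixel loss is sketched below: anchors are embeddings of pixels near the predicted boundary, positives come from the same class, negatives from the other class. The mining strategy and indices here are placeholders; the paper's dynamic pixel-selection scheme is not reproduced.

```python
import torch
import torch.nn as nn

# Generic triplet-style pixel loss sketch: pull boundary-pixel embeddings
# toward same-class pixels and away from opposite-class pixels, nudging
# predicted boundaries toward the true ones.
triplet = nn.TripletMarginLoss(margin=1.0)

def boundary_triplet_loss(emb, anchor_idx, pos_idx, neg_idx):
    """emb: (N_pixels, D) per-pixel embeddings flattened from the decoder;
    *_idx: indices of mined anchor/positive/negative pixels."""
    return triplet(emb[anchor_idx], emb[pos_idx], emb[neg_idx])

emb = torch.randn(1024, 64, requires_grad=True)            # toy embeddings
idx = [torch.randint(0, 1024, (128,)) for _ in range(3)]   # toy mined triplets
loss = boundary_triplet_loss(emb, *idx)
loss.backward()
```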

10.
Br J Radiol ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39102827

ABSTRACT

OBJECTIVE: To determine whether adding the elastography strain ratio (SR) and a deep learning-based computer-aided diagnosis (CAD) system to breast ultrasound (US) can help reclassify Breast Imaging Reporting and Data System (BI-RADS) 3 and 4a-c categories and avoid unnecessary biopsies. METHODS: This prospective, multicenter study included 1049 masses (691 benign, 358 malignant) assigned BI-RADS 3 or 4a-c between 2020 and 2022. CAD results were dichotomized as possibly malignant vs. benign. All patients underwent SR and CAD examinations, with histopathological findings as the standard of reference. The outcome measures were the reduction of unnecessary biopsies (biopsies of benign lesions) and malignancies missed after reclassification (new BI-RADS 3) with SR and CAD. RESULTS: Following routine conventional breast US assessment, 48.6% (336 of 691 benign masses) underwent unnecessary biopsies. After reclassifying BI-RADS 4a masses (SR cut-off < 2.90, CAD dichotomized possibly benign), 25.62% (177 of 691) underwent unnecessary biopsies, corresponding to a 50.14% (177 vs. 355) reduction of unnecessary biopsies. After reclassification, only 1.72% (9 of 523 masses) in the new BI-RADS 3 group were missed malignancies. CONCLUSION: Adding SR and CAD to clinical practice shows optimal performance in reclassifying BI-RADS 4a to category 3; 50.14% of masses would benefit, while keeping the rate of undetected malignancies at an acceptable 1.72%. ADVANCES IN KNOWLEDGE: Leveraging SR in conjunction with CAD holds promise for substantially reducing the biopsy frequency associated with BI-RADS 3 and 4a lesions, conferring substantial advantages on these patients.
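
The reclassification rule itself is simple enough to state as code. The sketch below encodes the decision described in the abstract (downgrade BI-RADS 4a to 3 only when SR falls below the 2.90 cut-off and CAD reads possibly benign); function and argument names are our own.

```python
def reclassify(birads: str, strain_ratio: float, cad_benign: bool) -> str:
    """Downgrade BI-RADS 4a to 3 when both adjunct tests suggest benignity:
    SR below the study's 2.90 cut-off AND CAD dichotomized possibly benign."""
    if birads == "4a" and strain_ratio < 2.90 and cad_benign:
        return "3"            # follow-up instead of biopsy
    return birads

assert reclassify("4a", 2.1, True) == "3"
assert reclassify("4a", 3.5, True) == "4a"   # SR above cut-off: keep biopsy
assert reclassify("4b", 2.1, True) == "4b"   # rule applies to 4a only
```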

11.
Quant Imaging Med Surg ; 14(8): 5902-5914, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144019

ABSTRACT

Background: Bone age assessment (BAA) is crucial for diagnosing growth disorders and optimizing treatments. However, random error arising from observers' differing experience and the low consistency of repeated assessments harm assessment quality, so automated methods are needed. Methods: Previous research has designed localization modules in a strongly or weakly supervised fashion to aggregate part regions to better recognize subtle differences. In contrast, we sought to efficiently exchange information between multi-granularity regions for fine-grained feature learning and to directly model long-distance relationships for global understanding. The proposed method is named the "Multi-Granularity and Multi-Attention Net (2M-Net)". Specifically, we first applied the jigsaw method to generate related tasks emphasizing regions of different granularities, then trained the model on these tasks using a hierarchical sharing mechanism. In effect, the training signals from the extra tasks acted as an inductive bias, enabling 2M-Net to discover task relatedness without annotations. Next, the self-attention mechanism served as a plug-and-play module to effectively enhance feature representation. Finally, multi-scale features were applied for prediction. Results: A public dataset of 14,236 hand radiographs, provided by the Radiological Society of North America (RSNA), was used to develop and validate 2M-Net. In public benchmark testing, the mean absolute error (MAE) between the model's bone age estimates and the reviewer's was 3.98 months (3.89 months for males and 4.07 months for females). Conclusions: By using the jigsaw method to construct a multi-task learning strategy and inserting a self-attention module for efficient global modeling, we established 2M-Net, which matches the previous best method in performance.
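
The jigsaw method mentioned above generates auxiliary tasks by shuffling image tiles at a chosen granularity. A minimal sketch follows; the tile grid sizes and the downstream multi-task heads are assumptions.

```python
import torch

def jigsaw(images: torch.Tensor, n: int) -> torch.Tensor:
    """Split each image into an n x n grid and randomly permute the tiles,
    yielding an auxiliary view that emphasizes granularity-n regions; a
    sketch of the jigsaw idea, not 2M-Net's exact pipeline."""
    b, c, h, w = images.shape
    th, tw = h // n, w // n
    tiles = images.unfold(2, th, th).unfold(3, tw, tw)   # (b, c, n, n, th, tw)
    tiles = tiles.reshape(b, c, n * n, th, tw)
    perm = torch.randperm(n * n)
    tiles = tiles[:, :, perm]                            # shuffle tile order
    tiles = tiles.reshape(b, c, n, n, th, tw).permute(0, 1, 2, 4, 3, 5)
    return tiles.reshape(b, c, h, w)

x = torch.randn(2, 1, 224, 224)             # toy hand radiographs
coarse, fine = jigsaw(x, 2), jigsaw(x, 4)   # two granularities of related tasks
```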

12.
Quant Imaging Med Surg ; 14(8): 5443-5459, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144045

ABSTRACT

Background: The automated classification of histological images is crucial for cancer diagnosis. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods because few relevant images exist. This has motivated few-shot learning approaches, which bear considerable clinical importance as they are designed to overcome data scarcity in deep learning for histological image classification. Traditional methods often ignore the challenges of intraclass diversity and interclass similarity in histological images. To address this, we propose a novel mutual reconstruction network model aimed at meeting these challenges and improving few-shot classification performance on histological images. Methods: The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. The enhanced features are then input into the mutual reconstruction module, which uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted distances between query features and reconstructed query features and between support features and reconstructed support features. Results: We extensively evaluated our model on a specially created few-shot histological image dataset. In a 5-way 10-shot setup, the model achieved an impressive accuracy of 92.09%, a 23.59% improvement over the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging 5-way 1-shot setting, our model also performed well, demonstrating an 18.52% improvement over ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of the proposed method for few-shot classification of histological images. Conclusions: The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of interclass similarity and intraclass diversity. This marks a significant advancement in the automated classification of histological images.
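
At its core, mutual reconstruction rebuilds one feature set from the other via attention and classifies by reconstruction distance. A minimal sketch of that step, omitting the feature enhancement module and the paper's exact weighting:

```python
import torch

def reconstruct(queries: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
    """Reconstruct each query feature as an attention-weighted combination of
    support features (and vice versa by swapping the arguments); a minimal
    sketch of the mutual-reconstruction idea.
    queries: (Nq, D), support: (Ns, D)."""
    d = queries.shape[-1]
    attn = torch.softmax(queries @ support.T / d ** 0.5, dim=-1)  # (Nq, Ns)
    return attn @ support                                          # (Nq, D)

q, s = torch.randn(5, 64), torch.randn(10, 64)
q_hat = reconstruct(q, s)          # support-based reconstruction of queries
s_hat = reconstruct(s, q)          # query-based reconstruction of support
dist = ((q - q_hat) ** 2).sum(-1)  # distances feeding the classification score
```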

13.
Biomed Eng Online ; 23(1): 84, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39175006

ABSTRACT

This study develops a super-resolution (SR) algorithm tailored to enhancing the image quality and resolution of early cervical cancer (CC) magnetic resonance imaging (MRI) images. The proposed method is subjected to both qualitative and quantitative analyses, thoroughly investigating its performance across upscaling factors and its impact on medical image segmentation tasks. The SR algorithm for reconstructing early CC MRI images integrates complex architectures and deep convolutional kernels, and training is conducted on matched pairs of input images through a multi-input model. The findings highlight significant advantages of the proposed SR method on two distinct datasets at different upscaling factors. Specifically, at a 2× upscaling factor, the sagittal test set outperforms state-of-the-art methods on the PSNR index, second only to the hybrid attention transformer, while the axial test set outperforms state-of-the-art methods on both the PSNR and SSIM indices. At a 4× upscaling factor, both the sagittal and axial test sets achieve the best results on the PSNR and SSIM indices. The method not only effectively enhances image quality but also exhibits superior performance in medical segmentation tasks, providing a more reliable foundation for clinical diagnosis and image analysis.
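
PSNR, one of the two reported quality indices, compares reconstructed and reference images through the mean squared error. A small sketch with toy arrays (SSIM is available as `skimage.metrics.structural_similarity`):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth slice and an
    SR-reconstructed slice: 10 * log10(peak^2 / MSE)."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    return 10 * np.log10(peak ** 2 / np.mean(diff ** 2))

ref = np.random.randint(0, 256, (256, 256)).astype(np.uint8)   # toy MRI slice
rec = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, rec):.2f} dB")
```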


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Uterine Cervical Neoplasms; Uterine Cervical Neoplasms/diagnostic imaging; Humans; Female; Image Processing, Computer-Assisted/methods; Algorithms
14.
Vis Comput Ind Biomed Art ; 7(1): 21, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39167337

ABSTRACT

Medical image registration is vital for disease diagnosis and treatment because it can merge the diverse information of images captured at different times, from different angles, or with different modalities. Although several surveys have reviewed the development of medical image registration, they have not systematically summarized existing methods. To this end, a comprehensive review of these methods is provided from traditional and deep-learning-based perspectives, aiming to help audiences quickly understand the development of medical image registration. In particular, we review recent advances in retinal image registration, which has attracted little attention. In addition, current challenges in retinal image registration are discussed, and insights and prospects for future research are provided.

15.
J Imaging Inform Med ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39150595

ABSTRACT

Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, both clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing between these conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common issue of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network to extract deep features, which are then classified by a support vector machine (SVM). Our evaluation shows that the ResNet50 (TL + SVM) model achieves 97.4% accuracy under tenfold cross-validation on the test set. These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.
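
The TL + SVM recipe, freezing a pretrained ResNet50 as a feature extractor and training an SVM on its 2048-dimensional features, can be sketched as below. Data loading, preprocessing, and the paper's tuning are omitted; the toy batch and labels are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Frozen ImageNet ResNet50 yields deep features; a kernel SVM classifies them
# (e.g., as CNS-pDLBCL vs. HGG in the paper's setting).
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = nn.Identity()                  # expose pooled 2048-d features
resnet.eval()

def deep_features(batch: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return resnet(batch)               # (N, 2048)

X = deep_features(torch.randn(40, 3, 224, 224)).numpy()   # toy image batch
y = [0] * 20 + [1] * 20                                    # toy labels
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean())  # tenfold CV
```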

16.
Article in English | MEDLINE | ID: mdl-39209199

ABSTRACT

BACKGROUND & AIMS: Computer-Aided Diagnosis (CADx) assists endoscopists in differentiating between neoplastic and non-neoplastic polyps during colonoscopy. This study aimed to evaluate the impact of polyp location (proximal vs. distal colon) on the diagnostic performance of CADx for ≤5mm polyps. METHODS: We searched for studies evaluating the performance of real-time CADx alone (i.e., independently of endoscopist judgement) for predicting the histology of colorectal polyps ≤5mm. The primary endpoints were CADx sensitivity and specificity in the proximal and distal colon. Secondary outcomes were the negative predictive value (NPV), positive predictive value (PPV), and the accuracy of the CADx alone. Distal colon was limited to the rectum and sigmoid. RESULTS: We included 11 studies for analysis with a total of 7,782 <5mm polyps. CADx specificity was significantly lower in the proximal colon compared to the distal colon (62% versus 85%; Risk ratio (RR): 0.74 [95% CI: 0.72-0.84]). Conversely, sensitivity was similar (89% vs 87% (EC-1); RR: 1.00 [95% CI: 0.97-1.03]. The NPV (64% versus 93%; RR: 0.71 [95% CI: 0.64-0.79]) and accuracy (81% vs 86%; RR: 0.95 [95% CI: 0.91-0.99]) were significantly lower in the proximal than distal colon, while PPV was higher in the proximal colon (87% vs 76%; RR: 1.11 [95% CI: 1.06-1.17]). CONCLUSION: The diagnostic performance of CADx for polyps in the proximal colon is inadequate, exhibiting significantly lower specificity compared to its performance for distal polyps. While current CADx systems are suitable for use in the distal colon, they should not be employed for proximal polyps until more performant systems are developed specifically for these lesions.

17.
Cancer Med ; 13(16): e70069, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39215495

ABSTRACT

OBJECTIVE: Breast cancer is one of the leading causes of cancer among women worldwide. It can be classified as invasive ductal carcinoma (IDC) or metastatic cancer. Early detection of breast cancer is challenging due to the lack of early warning signs; specialists generally recommend a mammogram for screening. Existing approaches are not accurate enough for real-time diagnostic applications, so better and smarter cancer diagnostic approaches are required. This study aims to develop a customized machine-learning framework that gives more accurate predictions for IDC and metastatic cancer classification. METHODS: This work proposes a convolutional neural network (CNN) model for classifying IDC and metastatic breast cancer. The study utilized a large-scale dataset of microscopic histopathological images to automatically learn a hierarchical representation. RESULTS: Machine learning techniques substantially (by 15%-25%) boost the effectiveness of determining cancer vulnerability, malignancy, and mortality. The results demonstrate excellent performance, with an average accuracy of 95% in classifying metastatic cells against benign ones and 89% accuracy in detecting IDC. CONCLUSIONS: The results suggest that the proposed model improves classification accuracy and could therefore be applied effectively to classifying IDC and metastatic cancer, compared with other state-of-the-art models.


Subject(s)
Breast Neoplasms; Carcinoma, Ductal, Breast; Deep Learning; Neural Networks, Computer; Humans; Female; Breast Neoplasms/pathology; Breast Neoplasms/classification; Breast Neoplasms/diagnostic imaging; Carcinoma, Ductal, Breast/pathology; Carcinoma, Ductal, Breast/classification; Carcinoma, Ductal, Breast/diagnostic imaging; Carcinoma, Ductal, Breast/secondary; Neoplasm Metastasis
18.
Med Biol Eng Comput ; 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39215783

ABSTRACT

Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution is often overlooked in deep learning methodologies. In this study, we integrate ultrasound image resolution into a convolutional neural network and explore the effect of the resolution on the diagnosis of kidney tumors. In integrating the resolution information, we propose two approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second, the features extracted by the neural network are first reduced in dimensionality and then combined with the resolution features to form new composite features. We compare these two resolution-aware approaches with the method without resolution on a kidney tumor dataset of 926 images, comprising 211 images of benign and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without the resolution is 0.8665, while the AUCs of the two resolution-aware approaches are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001), respectively. This study establishes end-to-end kidney tumor classification systems and demonstrates the benefits of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish malignant from benign kidney tumors in ultrasound images.
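
The first approach, concatenating the resolution directly with the CNN features, can be sketched as below; the backbone, feature sizes, and encoding of resolution as a single scalar are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResolutionAwareNet(nn.Module):
    """Sketch of the abstract's first approach: the ultrasound image
    resolution (e.g., mm per pixel) is concatenated with CNN features
    before classification; layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # 512-d image features
        self.backbone = backbone
        self.classifier = nn.Linear(512 + 1, 2)   # +1 resolution feature

    def forward(self, images, resolution):        # resolution: (batch, 1)
        feats = self.backbone(images)
        return self.classifier(torch.cat([feats, resolution], dim=1))

logits = ResolutionAwareNet()(torch.randn(4, 3, 224, 224), torch.rand(4, 1))
```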

19.
Radiologie (Heidelb) ; 2024 Aug 26.
Article in German | MEDLINE | ID: mdl-39186073

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is increasingly finding its way into routine radiological work. OBJECTIVE: To present current advances and applications of AI along the entire radiological patient journey. METHODS: Systematic literature review of established AI techniques and current research projects, with reference to consensus recommendations. RESULTS: The applications of AI in radiology cover a wide range, starting with AI-supported scheduling and indication assessment and extending to AI-enhanced image acquisition and reconstruction techniques that have the potential to reduce radiation dose in computed tomography (CT) or acquisition times in magnetic resonance imaging (MRI) while maintaining comparable image quality. They include computer-aided detection and diagnosis, such as fracture recognition or nodule detection. Additionally, methods such as worklist prioritization and structured reporting facilitated by large language models enable a rethinking of the reporting process. The use of AI promises to increase the efficiency of every step of the radiology workflow and to improve diagnostic accuracy. To achieve this, seamless integration into technical workflows and solid evidence for AI systems are necessary. CONCLUSION: Applications of AI have the potential to profoundly influence the role of radiologists in the future.

20.
Sci Rep ; 14(1): 20085, 2024 08 29.
Article in English | MEDLINE | ID: mdl-39209880

ABSTRACT

Computer-aided diagnosis has been slow to develop in the field of oral ulcers, largely because of the lack of publicly available datasets. Yet oral ulcers can harbor cancerous lesions, whose mortality rate is high, so recognizing oral ulcers at an early stage in a timely and effective manner is critical. In recent years a small group of researchers has worked on this problem, but their datasets are private. To address this, this paper proposes and makes publicly available a multi-tasking oral ulcer dataset (Autooral) covering the two major tasks of lesion segmentation and classification. To the best of our knowledge, we are the first team to publicly release a multi-tasking oral ulcer dataset. In addition, we propose a novel modeling framework, HF-UNet, for segmenting oral ulcer lesion regions. Specifically, the proposed high-order focus interaction module (HFblock) captures global properties and, through high-order attention, focuses on local properties. The proposed lesion localization module (LL-M) employs a novel hybrid Sobel filter, which improves the recognition of ulcer edges. Experimental results on the proposed Autooral dataset show that HF-UNet achieves a DSC of about 0.80 for oral ulcer segmentation while inference occupies only 2029 MB of memory. The proposed method guarantees a low running load while maintaining high segmentation performance. The proposed Autooral dataset and code are available from https://github.com/wurenkai/HF-UNet-and-Autooral-dataset .
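
The lesion localization module relies on a Sobel-style edge cue. A plain Sobel gradient-magnitude sketch follows; the paper's hybrid variant is not reproduced here.

```python
import torch
import torch.nn.functional as F

def sobel_edges(gray: torch.Tensor) -> torch.Tensor:
    """Sobel gradient magnitude for a (batch, 1, H, W) grayscale image; a
    plain Sobel sketch of the edge cue behind the hybrid Sobel filter
    in the paper's lesion localization module."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()                                       # y-gradient kernel
    kernels = torch.stack([kx, ky]).unsqueeze(1)      # (2, 1, 3, 3)
    g = F.conv2d(gray, kernels, padding=1)            # x- and y-gradients
    return torch.sqrt((g ** 2).sum(dim=1, keepdim=True) + 1e-8)

edges = sobel_edges(torch.rand(1, 1, 256, 256))       # ulcer edge map
```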


Subject(s)
Oral Ulcer; Oral Ulcer/pathology; Humans; Diagnosis, Computer-Assisted/methods; Algorithms; Databases, Factual