Results 1 - 20 of 163
1.
Quant Imaging Med Surg ; 14(8): 5443-5459, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144045

ABSTRACT

Background: The automated classification of histological images is crucial for the diagnosis of cancer. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods due to the small number of relevant images. This has led to the development of few-shot learning approaches, which bear considerable clinical importance, as they are designed to overcome the challenges of data scarcity in deep learning for histological image classification. Traditional methods often ignore the challenges of intraclass diversity and interclass similarities in histological images. To address this, we propose a novel mutual reconstruction network model, aimed at meeting these challenges and improving the few-shot classification performance of histological images. Methods: The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. These enhanced features are then input into the mutual reconstruction module. This module uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted calculations of the distances between query features and reconstructed query features and between support features and reconstructed support features. Results: We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%. 
This is a 23.59% improvement in accuracy over the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging 5-way 1-shot setting, our model also performed well, demonstrating an 18.52% improvement over ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of our proposed method in the few-shot classification of histological images. Conclusions: The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. This marks a significant advancement in the automated classification of histological images.
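The reconstruction-based classification rule described above — assigning a query to the class whose support features reconstruct it best — can be sketched in a simplified form. This is a hypothetical illustration using ridge-regression reconstruction in place of the paper's attention-based enhancement and mutual reconstruction modules; `reconstruct` and `classify_query` are illustrative names, not the authors' code:

```python
import numpy as np

def reconstruct(basis, target, lam=0.1):
    """Ridge reconstruction of `target` rows from `basis` rows:
    W = argmin ||W @ basis - target||^2 + lam * ||W||^2."""
    G = basis @ basis.T + lam * np.eye(basis.shape[0])
    W = target @ basis.T @ np.linalg.inv(G)
    return W @ basis

def classify_query(query_feat, support_sets, lam=0.1):
    """Assign the query to the class whose support set reconstructs it best."""
    errors = []
    for feats in support_sets:  # one (shots, dim) array per class
        recon = reconstruct(feats, query_feat[None, :], lam)
        errors.append(np.linalg.norm(recon - query_feat))
    return int(np.argmin(errors))
```

In the paper the distance is computed in both directions (query from support and support from query) and weighted; the sketch keeps only the query-from-support direction for clarity.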

2.
J Pers Med ; 14(8)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39201984

ABSTRACT

Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining convolutional neural networks (CNN) with a pruned ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. The model employs the rectified linear unit (ReLU) activation function to enhance data analytics after removing artifacts and pectoral muscles, and the HCPELM hybridized with the CNN model improves feature extraction. The hybrid elements are convolutional and fully connected layers. Convolutional layers extract spatial features such as edges and textures, with more complex features in deeper layers. The fully connected layers combine these features in a non-linear manner to perform the final classification. The ELM performs the classification and recognition tasks, aiming for state-of-the-art performance. This hybrid classifier is used for transfer learning by freezing certain layers and modifying the architecture to reduce parameters, easing cancer detection. The HCPELM classifier was trained using the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models. HCPELM thus demonstrates superior performance in early detection and diagnosis, aiding healthcare practitioners in diagnosing breast cancer.
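The pruned-ensemble HCPELM itself is not public, but the core extreme learning machine idea the abstract relies on — a fixed random hidden layer with ReLU activations and a closed-form least-squares readout — can be sketched as follows. This is an illustrative minimal ELM, not the authors' model; the hidden size and seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, hidden=64):
    """Extreme learning machine: random fixed input weights, ReLU hidden
    layer, and output weights solved in closed form via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], hidden))  # never trained
    b = rng.normal(size=hidden)
    H = np.maximum(X @ W + b, 0.0)             # ReLU hidden activations
    beta = np.linalg.pinv(H) @ y               # least-squares readout
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.maximum(X @ W + b, 0.0) @ beta
```

Because only `beta` is fitted, and in closed form, training is a single linear solve rather than iterative backpropagation, which is what makes ELM readouts attractive on top of frozen CNN features.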

3.
Cancer Med ; 13(16): e70069, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39215495

ABSTRACT

OBJECTIVE: Breast cancer is one of the leading causes of cancer among women worldwide. It can be classified as invasive ductal carcinoma (IDC) or metastatic cancer. Early detection of breast cancer is challenging due to the lack of early warning signs. Generally, a mammogram is recommended by specialists for screening. Existing approaches are not accurate enough for real-time diagnostic applications and thus require better and smarter cancer diagnostic approaches. This study aims to develop a customized machine-learning framework that will give more accurate predictions for IDC and metastasis cancer classification. METHODS: This work proposes a convolutional neural network (CNN) model for classifying IDC and metastatic breast cancer. The study utilized a large-scale dataset of microscopic histopathological images to automatically learn a hierarchical representation. RESULTS: Using machine learning techniques significantly (15%-25%) boosts the effectiveness of determining cancer vulnerability, malignancy, and mortality. The results demonstrate excellent performance, with an average accuracy of 95% in classifying metastatic cells against benign ones and 89% in detecting IDC. CONCLUSIONS: The results suggest that the proposed model improves classification accuracy. Therefore, it could be applied effectively in classifying IDC and metastatic cancer in comparison to other state-of-the-art models.


Subject(s)
Breast Neoplasms; Carcinoma, Ductal, Breast; Deep Learning; Neural Networks, Computer; Humans; Female; Breast Neoplasms/pathology; Breast Neoplasms/classification; Breast Neoplasms/diagnostic imaging; Carcinoma, Ductal, Breast/pathology; Carcinoma, Ductal, Breast/classification; Carcinoma, Ductal, Breast/diagnostic imaging; Carcinoma, Ductal, Breast/secondary; Neoplasm Metastasis
4.
Scand J Gastroenterol ; 59(8): 925-932, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38950889

ABSTRACT

OBJECTIVES: Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors, as well as its risk classification performance. MATERIALS AND METHODS: A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas and 15 schwannomas, were examined. A total of 3283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, including very-low-, low-, intermediate- and high-risk GISTs, 2784 EUS images were used for training and three-fold cross-validation. RESULTS: For the differential diagnosis of GISTs among all SELs, the accuracy, sensitivity, specificity and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3% and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0% and 0.771, respectively. CONCLUSIONS: The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and successfully demonstrated the feasibility of GIST risk classification. This system can determine whether treatment is necessary based on EUS imaging alone, without the need for additional invasive examinations.
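The five-fold and three-fold cross-validation protocols mentioned above boil down to partitioning the image indices into disjoint folds and rotating the held-out fold. A minimal sketch of that bookkeeping (illustrative, not the study's code):

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous, near-equal folds and return
    (train_indices, test_indices) pairs for k-fold cross-validation."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]
```

In imaging studies, folds are usually split at the patient or scan level rather than the image level, so that images from one patient never appear in both train and test sets.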


Subject(s)
Deep Learning; Diagnosis, Computer-Assisted; Endosonography; Gastrointestinal Neoplasms; Gastrointestinal Stromal Tumors; ROC Curve; Humans; Diagnosis, Differential; Gastrointestinal Stromal Tumors/diagnostic imaging; Gastrointestinal Stromal Tumors/pathology; Gastrointestinal Stromal Tumors/diagnosis; Gastrointestinal Neoplasms/diagnostic imaging; Gastrointestinal Neoplasms/diagnosis; Female; Middle Aged; Male; Aged; Adult; Risk Assessment; Sensitivity and Specificity; Aged, 80 and over
5.
Article in English | MEDLINE | ID: mdl-39044036

ABSTRACT

PURPOSE: The current study explores the application of 3D U-Net architectures combined with Inception and ResNet modules for precise lung nodule detection through deep learning-based segmentation technique. This investigation is motivated by the objective of developing a Computer-Aided Diagnosis (CAD) system for effective diagnosis and prognostication of lung nodules in clinical settings. METHODS: The proposed method trained four different 3D U-Net models on the retrospective dataset obtained from AIIMS Delhi. To augment the training dataset, affine transformations and intensity transforms were utilized. Preprocessing steps included CT scan voxel resampling, intensity normalization, and lung parenchyma segmentation. Model optimization utilized a hybrid loss function that combined Dice Loss and Focal Loss. The model performance of all four 3D U-Nets was evaluated patient-wise using dice coefficient and Jaccard coefficient, then averaged to obtain the average volumetric dice coefficient (DSCavg) and average Jaccard coefficient (IoUavg) on a test dataset comprising 53 CT scans. Additionally, an ensemble approach (Model-V) was utilized featuring 3D U-Net (Model-I), ResNet (Model-II), and Inception (Model-III) 3D U-Net architectures, combined with two distinct patch sizes for further investigation. RESULTS: The ensemble of models obtained the highest DSCavg of 0.84 ± 0.05 and IoUavg of 0.74 ± 0.06 on the test dataset, compared against individual models. It mitigated false positives, overestimations, and underestimations observed in individual U-Net models. Moreover, the ensemble of models reduced average false positives per scan in the test dataset (1.57 nodules/scan) compared to individual models (2.69-3.39 nodules/scan). CONCLUSIONS: The suggested ensemble approach presents a strong and effective strategy for automatically detecting and delineating lung nodules, potentially aiding CAD systems in clinical settings. 
This approach could assist radiologists in laborious and meticulous lung nodule detection tasks in CT scans, improving lung cancer diagnosis and treatment planning.
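The hybrid loss described in Methods — Dice loss combined with focal loss — can be sketched for binary masks as follows. This is a simplified numpy illustration; the weighting `alpha` and focusing parameter `gamma` are assumptions, not values taken from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient between a soft prediction and a binary mask."""
    inter = (pred * target).sum()
    return 1.0 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Focal loss: cross-entropy down-weighted for easy, confident pixels."""
    p = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, p, 1 - p)  # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of Dice and focal terms, as in Dice+Focal training."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * focal_loss(pred, target)
```

The Dice term handles the foreground/background imbalance typical of small nodules, while the focal term concentrates the gradient on hard pixels; combining them is a common choice for nodule segmentation.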

6.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies. The primary objective is to enhance the process of diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. The use of computational approaches facilitates more objective and efficient analysis by experts. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of current advances in segmentation and classification approaches for images of follicular lymphoma. This research analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing, as described in the existing literature. The study also examines the strengths and weaknesses associated with these approaches. Additionally, this study encompasses an examination of validation procedures and an exploration of prospective future research avenues in the segmentation of neoplasias.


Subject(s)
Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Lymphoma, Follicular; Lymphoma, Follicular/diagnosis; Lymphoma, Follicular/pathology; Humans
7.
Jpn J Radiol ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38867035

ABSTRACT

PURPOSE: To assess the diagnostic accuracy of ChatGPT-4V in interpreting a set of four chest CT slices for each case of COVID-19, non-small cell lung cancer (NSCLC), and control cases, thereby evaluating its potential as an AI tool in radiological diagnostics. MATERIALS AND METHODS: In this retrospective study, 60 CT scans from The Cancer Imaging Archive, covering COVID-19, NSCLC, and control cases, were analyzed using ChatGPT-4V. A radiologist selected four CT slices from each scan for evaluation. ChatGPT-4V's interpretations were compared against the gold standard diagnoses and assessed by two radiologists. Statistical analyses focused on accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), along with an examination of the impact of pathology location and lobe involvement. RESULTS: ChatGPT-4V showed an overall diagnostic accuracy of 56.76%. For NSCLC, sensitivity was 27.27% and specificity was 60.47%. In COVID-19 detection, sensitivity was 13.64% and specificity was 64.29%. For control cases, sensitivity was 31.82%, with a specificity of 95.24%. The highest sensitivity (83.33%) was observed in cases involving all lung lobes. Chi-squared analysis indicated significant differences in sensitivity across categories and in relation to the location and lobar involvement of pathologies. CONCLUSION: ChatGPT-4V demonstrated variable diagnostic performance in chest CT interpretation, with notable proficiency in specific scenarios. This underscores the challenges cross-modal AI models like ChatGPT-4V face in radiology, pointing toward significant areas for improvement to ensure dependability. The study emphasizes the importance of enhancing these models for broader, more reliable medical use.
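The reported accuracy, sensitivity, specificity, PPV, and NPV all derive from the same confusion-matrix counts; a minimal reference implementation for binary labels:

```python
def diagnostic_metrics(y_true, y_pred):
    """Standard diagnostic-test metrics from binary ground truth/predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }
```

In the multiclass setting of the study (COVID-19 vs. NSCLC vs. control), each per-class figure is obtained by treating that class as "positive" and the rest as "negative".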

8.
Bioengineering (Basel) ; 11(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38927865

ABSTRACT

Prostate cancer is a significant health concern with high mortality rates and substantial economic impact. Early detection plays a crucial role in improving patient outcomes. This study introduces a non-invasive computer-aided diagnosis (CAD) system that leverages intravoxel incoherent motion (IVIM) parameters for the detection and diagnosis of prostate cancer (PCa). IVIM imaging enables the differentiation of water molecule diffusion within capillaries and outside vessels, offering valuable insights into tumor characteristics. The proposed approach utilizes a two-step segmentation pipeline built on three U-Net architectures to extract tumor-containing regions of interest (ROIs) from the images. The performance of the CAD system is thoroughly evaluated, considering the optimal classifier and IVIM parameters for differentiation and comparing the diagnostic value of IVIM parameters with the commonly used apparent diffusion coefficient (ADC). The results demonstrate that the combination of central zone (CZ) and peripheral zone (PZ) features with the Random Forest Classifier (RFC) yields the best performance. The CAD system achieves an accuracy of 84.08% and a balanced accuracy of 82.60%. This combination showcases high sensitivity (93.24%) and reasonable specificity (71.96%), along with good precision (81.48%) and F1 score (86.96%). These findings highlight the effectiveness of the proposed CAD system in accurately segmenting and diagnosing PCa. This study represents a significant advancement in non-invasive methods for early detection and diagnosis of PCa, showcasing the potential of IVIM parameters in combination with machine learning techniques. This developed solution has the potential to revolutionize PCa diagnosis, leading to improved patient outcomes and reduced healthcare costs.
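As a quick consistency check, the reported balanced accuracy and F1 score follow directly from the reported sensitivity, specificity, and precision:

```python
def balanced_accuracy(sensitivity, specificity):
    """Mean of the per-class recalls for a binary classifier."""
    return (sensitivity + specificity) / 2

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract
bal = balanced_accuracy(0.9324, 0.7196)  # ~0.8260, matching 82.60%
f1 = f1_score(0.8148, 0.9324)            # ~0.8696, matching 86.96%
```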

9.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928696

ABSTRACT

Alzheimer's disease (AD) is a neurological disorder that significantly impairs cognitive function, leading to memory loss and eventually death. AD progresses through three stages: early stage, mild cognitive impairment (MCI) (middle stage), and dementia. Early diagnosis of Alzheimer's disease is crucial and can improve survival rates among patients. Traditional methods for diagnosing AD through regular checkups and manual examinations are challenging. Advances in computer-aided diagnosis systems (CADs) have led to the development of various artificial intelligence and deep learning-based methods for rapid AD detection. This survey aims to explore the different modalities, feature extraction methods, datasets, machine learning techniques, and validation methods used in AD detection. We reviewed 116 relevant papers from repositories including Elsevier (45), IEEE (25), Springer (19), Wiley (6), PLOS One (5), MDPI (3), World Scientific (3), Frontiers (3), PeerJ (2), Hindawi (2), IO Press (1), and other multiple sources (2). The review is presented in tables for ease of reference, allowing readers to quickly grasp the key findings of each study. Additionally, this review addresses the challenges in the current literature and emphasizes the importance of interpretability and explainability in understanding deep learning model predictions. The primary goal is to assess existing techniques for AD identification and highlight obstacles to guide future research.

10.
Diagnostics (Basel) ; 14(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38786291

ABSTRACT

In computer-aided medical diagnosis, deep learning techniques have shown that it is possible to offer performance similar to that of experienced medical specialists in the diagnosis of knee osteoarthritis. In this study, a new deep learning (DL) software tool, called "MedKnee", is developed to assist physicians in the diagnosis of knee osteoarthritis according to the Kellgren and Lawrence (KL) score. To accomplish this task, 5000 knee X-ray images obtained from the Osteoarthritis Initiative public dataset (OAI) were divided into training, validation, and test datasets in a ratio of 7:1:2 with a balanced distribution across each KL grade. The pre-trained Xception model is used for transfer learning and then deployed in a Graphical User Interface (GUI) developed with Tkinter and Python. The suggested software was validated on an external public database, Medical Expert, and compared with a rheumatologist's diagnosis on a local database, with the involvement of a radiologist for arbitration. MedKnee achieved an accuracy of 95.36% when tested on Medical Expert-I and 94.94% on Medical Expert-II. On the local dataset, the developed tool and the rheumatologist agreed on 23 of 30 images (77%). MedKnee's satisfactory performance makes it an effective assistant for doctors in the assessment of knee osteoarthritis.
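The 7:1:2 train/validation/test split "with a balanced distribution across each KL grade" corresponds to a stratified split, i.e. applying the ratio within each class before pooling. A minimal sketch (illustrative, not the MedKnee code; the seed is an assumption):

```python
import random

def stratified_split(labels, ratios=(0.7, 0.1, 0.2), seed=0):
    """Split sample indices into train/valid/test lists, applying the
    given ratios separately within each label so classes stay balanced."""
    rng = random.Random(seed)
    by_label = {}
    for idx, lab in enumerate(labels):
        by_label.setdefault(lab, []).append(idx)
    splits = ([], [], [])
    for items in by_label.values():
        rng.shuffle(items)
        n = len(items)
        n_train = int(n * ratios[0])
        n_valid = int(n * ratios[1])
        splits[0].extend(items[:n_train])
        splits[1].extend(items[n_train:n_train + n_valid])
        splits[2].extend(items[n_train + n_valid:])
    return splits
```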

11.
Transl Cancer Res ; 13(4): 1969-1979, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38737674

ABSTRACT

Background: The consistency of Breast Imaging Reporting and Data System (BI-RADS) classification varies even among experienced radiologists and is difficult for inexperienced radiologists to master. This study aims to explore the value of computer-aided diagnosis (CAD) (the AI-SONIC breast automatic detection system) in BI-RADS training for residents. Methods: A total of 12 residents who participated in the first and second years of standardized resident training in Ningbo No. 2 Hospital from May 2020 to May 2021 were randomly divided into 3 groups (Group 1, Group 2, Group 3) for BI-RADS training. They were asked to complete 2 tests and questionnaires at the beginning and end of the training. After the first test, educational materials were given to the residents and reviewed during the breast imaging training month. Group 1 studied independently, Group 2 studied with CAD, and Group 3 was taught face-to-face by experts. The test scores and ultrasonographic descriptors of the residents were evaluated and compared with those of radiology specialists. The trainees' confidence in, and recognition of, CAD were investigated by questionnaire. Results: There was no statistically significant difference among the 3 groups in the scores of the first test (P=0.637). After training, the scores of all 3 groups improved in the second test (P=0.006). Group 2 (52±7.30) and Group 3 (54±5.16) scored significantly higher than Group 1 (38±3.65). The consistency of ultrasonographic descriptors and final assessments between the residents and senior radiologists improved (κ3 > κ2 > κ1), with κ2 and κ3 >0.4 (moderate agreement with experts) and κ1=0.225 (fair agreement with experts). The questionnaire results showed that the trainees' confidence in BI-RADS classification increased, especially in Group 2 (1.5 to 3.5) and Group 3 (1.25 to 3.75).
All trainees agreed that CAD was helpful for BI-RADS learning (Likert scale score: 4.75 out of 5) and were willing to use CAD as an aid (4.5, max. 5). Conclusions: The AI-SONIC breast automatic detection system can help residents to quickly master BI-RADS, improve the consistency between residents and experts, and help to improve the confidence of residents in the classification of BI-RADS, which may have potential value in the BI-RADS training for radiology residents. Trial Registration: Chinese Clinical Trial Registry (ChiCTR2400081672).
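The κ values above are Cohen's kappa between a resident's assessments and the senior radiologists'. For reference, a minimal two-rater implementation:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats
    )
    return (observed - expected) / (1 - expected)
```

Under the usual interpretation bands used in the abstract, κ in 0.21-0.40 is "fair" agreement and 0.41-0.60 is "moderate".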

12.
Radiol Artif Intell ; 6(3): e230318, 2024 May.
Article in English | MEDLINE | ID: mdl-38568095

ABSTRACT

Purpose To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time. Materials and Methods A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated. Results The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62. Conclusion The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency. 
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Bae in this issue.
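Interreader agreement in the study was measured with Fleiss κ, the multi-rater generalization of Cohen's kappa. A compact reference implementation, where `counts[i][j]` is the number of readers assigning case `i` to category `j`:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for fixed numbers of raters per item."""
    n_raters = sum(counts[0])
    n_items = len(counts)
    # Observed agreement: mean per-item proportion of agreeing rater pairs.
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_items) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```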


Subject(s)
Artificial Intelligence; Breast Neoplasms; Mammography; Sensitivity and Specificity; Humans; Female; Breast Neoplasms/diagnostic imaging; Middle Aged; Mammography/methods; Retrospective Studies; Radiographic Image Interpretation, Computer-Assisted/methods; Republic of Korea/epidemiology; Deep Learning; Adult; Time Factors; Algorithms; United States; Reproducibility of Results
13.
Heliyon ; 10(5): e27200, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38486759

ABSTRACT

Arrhythmia, a frequently encountered and life-threatening cardiac disorder, can manifest as a transient or isolated event. Traditional automatic arrhythmia detection methods have predominantly relied on QRS-wave signal detection. Contemporary research has focused on the utilization of wearable devices for continuous monitoring of heart rates and rhythms through single-lead electrocardiogram (ECG), which holds the potential to promptly detect arrhythmias. In this study, however, we employed a convolutional neural network (CNN) to classify distinct arrhythmias without a QRS-wave detection step. The ECG data utilized in this study were sourced from the publicly accessible PhysioNet databases. Taking into account the impact of ECG signal duration on accuracy, this study trained one-dimensional CNN models with 5-s and 10-s segments, respectively, and compared their results. The CNN model exhibited the capability to differentiate between Normal Sinus Rhythm (NSR) and various arrhythmias, including Atrial Fibrillation (AFIB), Atrial Flutter (AFL), Wolff-Parkinson-White syndrome (WPW), Ventricular Fibrillation (VF), Ventricular Tachycardia (VT), Ventricular Flutter (VFL), Mobitz II AV Block (MII), and Sinus Bradycardia (SB). Both 10-s and 5-s ECG segments exhibited comparable results, with an average classification accuracy of 97.31%. This reveals the feasibility of utilizing even shorter 5-s recordings for detecting arrhythmias in everyday scenarios. Detecting arrhythmias with a single lead aligns well with the practicality of wearable devices for daily use, and shorter detection times also suit their clinical utility in emergency situations.
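Preparing the 5-s and 10-s inputs amounts to windowing the raw ECG at the recording's sampling rate. A minimal sketch, assuming non-overlapping windows (the paper does not state whether its segments overlap):

```python
def segment_ecg(signal, fs, seconds):
    """Cut a 1-D ECG signal into non-overlapping fixed-duration segments.

    fs: sampling rate in Hz; seconds: segment duration. Any trailing
    samples that do not fill a whole segment are discarded.
    """
    win = int(fs * seconds)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
```

For example, at the 360 Hz sampling rate common in PhysioNet's MIT-BIH recordings, a 5-s segment is 1800 samples, which becomes the fixed input length of the 1-D CNN.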

14.
Radiol Imaging Cancer ; 6(2): e230029, 2024 03.
Article in English | MEDLINE | ID: mdl-38391311

ABSTRACT

Purpose To investigate the role of quantitative US (QUS) radiomics data obtained after the 1st week of radiation therapy (RT) in predicting treatment response in individuals with head and neck squamous cell carcinoma (HNSCC). Materials and Methods This prospective study included 55 participants (21 with complete response [median age, 65 years {IQR: 47-80 years}; 20 male, one female] and 34 with incomplete response [median age, 59 years {IQR: 39-79 years}; 33 male, one female]) with bulky node-positive HNSCC treated with curative-intent RT from January 2015 to October 2019. All participants received 70 Gy of radiation in 33-35 fractions over 6-7 weeks. US radiofrequency data from metastatic lymph nodes were acquired prior to and after 1 week of RT. QUS analysis resulted in five spectral maps from which mean values were extracted. We applied a gray-level co-occurrence matrix technique for textural analysis, leading to 20 QUS texture and 80 texture-derivative parameters. The response 3 months after RT was used as the end point. Model building and evaluation utilized nested leave-one-out cross-validation. Results Five delta (Δ) parameters had statistically significant differences (P < .05). The support vector machine classifier achieved a sensitivity of 71% (15 of 21), a specificity of 76% (26 of 34), a balanced accuracy of 74%, and an area under the receiver operating characteristic curve of 0.77 on the test set. For all classifiers, performance improved after the 1st week of treatment. Conclusion A QUS Δ-radiomics model using data obtained after the 1st week of RT from individuals with HNSCC predicted response 3 months after treatment completion with reasonable accuracy. Keywords: Computer-Aided Diagnosis (CAD), Ultrasound, Radiation Therapy/Oncology, Head/Neck, Radiomics, Quantitative US, Radiotherapy, Head and Neck Squamous Cell Carcinoma, Machine Learning. ClinicalTrials.gov registration no. NCT03908684. Supplemental material is available for this article.
© RSNA, 2024.
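The gray-level co-occurrence matrix (GLCM) technique used for the textural analysis above can be sketched for a single pixel offset; `contrast` is one of the standard Haralick-style features. This is a simplified illustration, not the study's pipeline:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    M[a, b] counts pixel pairs where a pixel of level a has a neighbor
    of level b at the given (row, col) offset.
    """
    dr, dc = offset
    M = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[image[r, c], image[r2, c2]] += 1
    return M / M.sum()

def contrast(P):
    """Haralick contrast: mass far from the diagonal means abrupt texture."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())
```

In the study, such texture features are computed on the QUS spectral parameter maps (rather than on the raw images), and their change after one week of RT forms the Δ-radiomics features.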


Subject(s)
Head and Neck Neoplasms; Aged; Female; Humans; Male; Middle Aged; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Neck; Prospective Studies; Radiomics; Squamous Cell Carcinoma of Head and Neck/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck/radiotherapy
15.
J Pathol Inform ; 15: 100357, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38420608

ABSTRACT

Computational Pathology (CPath) is an interdisciplinary science that advances computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.

16.
Radiol Artif Intell ; 6(2): e230327, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38197795

ABSTRACT

Tuberculosis, which primarily affects developing countries, remains a significant global health concern. Since the 2010s, the role of chest radiography has expanded in tuberculosis triage and screening beyond its traditional complementary role in the diagnosis of tuberculosis. Computer-aided diagnosis (CAD) systems for tuberculosis detection on chest radiographs have recently made substantial progress in diagnostic performance, thanks to deep learning technologies. The performance of CAD systems for tuberculosis now approximates that of human experts, presenting a potential solution to the shortage of human readers to interpret chest radiographs in low- or middle-income, high-tuberculosis-burden countries. This article provides a critical appraisal of developmental process reporting in extant CAD software for tuberculosis, based on the Checklist for Artificial Intelligence in Medical Imaging. It also explores several considerations for scaling up CAD solutions, encompassing manufacturer-independent CAD validation, economic and political aspects, and ethical concerns, as well as the potential for broadening radiography-based diagnosis to other nontuberculosis diseases. Collectively, CAD for tuberculosis will emerge as a representative deep learning application, catalyzing advances in global health and health equity. Keywords: Computer-aided Diagnosis (CAD), Conventional Radiography, Thorax, Lung, Machine Learning. Supplemental material is available for this article. © RSNA, 2024.


Subject(s)
Artificial Intelligence; Tuberculosis; Humans; Global Health; Software; Diagnosis, Computer-Assisted/methods
17.
Int J Comput Assist Radiol Surg ; 19(2): 261-272, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37594684

ABSTRACT

PURPOSE: The proposed work aims to develop an algorithm to precisely segment the lung parenchyma in thoracic CT scans. To achieve this goal, the proposed technique combines deep learning with traditional image processing algorithms. The initial step utilized a trained convolutional neural network (CNN) to generate preliminary lung masks, followed by the proposed post-processing algorithm for lung boundary correction. METHODS: First, the proposed method trained an improved 2D U-Net CNN model with Inception-ResNet-v2 as its backbone. The model was trained on 32 CT scans from two sources: the VESSEL12 grand challenge and AIIMS Delhi. The model's performance was then evaluated on a test dataset of 16 CT scans with juxta-pleural nodules obtained from AIIMS Delhi and the LUNA16 challenge, using the average volumetric dice coefficient (DSCavg), average IoU score (IoUavg), and average F1 score (F1avg) as evaluation metrics. Finally, the proposed post-processing algorithm was applied to eliminate false positives from the model's predictions and to include juxta-pleural nodules in the final lung masks. RESULTS: The trained model reported a DSCavg of 0.9791 ± 0.008, IoUavg of 0.9624 ± 0.007, and F1avg of 0.9792 ± 0.004 on the test dataset. Applying the post-processing algorithm to the predicted lung masks yielded a DSCavg of 0.9713 ± 0.007, IoUavg of 0.9486 ± 0.007, and F1avg of 0.9701 ± 0.008. The post-processing algorithm successfully included juxta-pleural nodules in the final lung masks. CONCLUSIONS: Using a CNN model, the proposed method produced precise lung parenchyma segmentation results, and the post-processing algorithm addressed false positives and negatives in the model's predictions. Overall, the proposed approach demonstrated promising results for lung parenchyma segmentation and has the potential to be valuable in the advancement of computer-aided diagnosis (CAD) systems for automatic nodule detection.
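The overlap metrics reported above can all be derived from the true-positive, false-positive, and false-negative voxel counts of a predicted mask against a ground-truth mask. The following is a minimal NumPy sketch, not the authors' code; the function name and inputs are illustrative:

```python
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray):
    """Volumetric overlap metrics between two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # voxels labeled lung in both
    fp = np.logical_and(pred, ~gt).sum()   # predicted lung, actually background
    fn = np.logical_and(~pred, gt).sum()   # missed lung voxels
    dice = 2 * tp / (2 * tp + fp + fn)     # Dice similarity coefficient
    iou = tp / (tp + fp + fn)              # intersection over union
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dice, iou, f1
```

Note that for a single pair of binary masks the Dice coefficient and the F1 score are algebraically identical, so small differences between reported DSCavg and F1avg typically reflect how the scores are aggregated (e.g., per slice versus per volume).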


Subject(s)
Deep Learning; Humans; Lung/diagnostic imaging; Thorax; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed
18.
Eur J Cancer ; 196: 113431, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37980855

ABSTRACT

BACKGROUND: Cutaneous adnexal tumors are a diverse group of tumors arising from structures of the hair appendages. Although often benign, malignant entities occur which can metastasize and lead to patients' deaths. Correct diagnosis is critical to ensure optimal treatment and the best possible patient outcome. Artificial intelligence (AI) in the form of deep neural networks has recently shown enormous potential in the field of medicine, including pathology, where we and others have found that common cutaneous tumors can be detected with high sensitivity and specificity. To become a widely applied tool, AI approaches will also need to reliably detect and distinguish less common tumor entities, including the diverse group of cutaneous adnexal tumors. METHODS: To assess the potential of AI to recognize cutaneous adnexal tumors, we selected a diverse set of these entities from five German centers. The algorithm was trained with samples from four centers and then tested on slides from the fifth center. RESULTS: The neural network was able to differentiate 14 different cutaneous adnexal tumors and distinguish them from more common cutaneous tumors (i.e., basal cell carcinoma and seborrheic keratosis). The total accuracy on the test set for classifying 248 samples into these 16 diagnoses was 89.92%. Our findings indicate that AI can distinguish rare tumors, for morphologically distinct entities even with very limited case numbers (<50) for training. CONCLUSION: This study further underlines the enormous potential of AI in pathology, which could become a standard tool to aid pathologists in routine diagnostics in the foreseeable future. The final diagnostic responsibility will remain with the pathologist.
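The evaluation protocol described in METHODS, training on four centers and testing on a fully held-out fifth, can be outlined as a leave-one-center-out split. This is an illustrative sketch, not the study's pipeline; the function name and center labels are hypothetical:

```python
import numpy as np

def center_holdout_split(centers, test_center):
    """Split sample indices so one center is held out entirely for testing.

    centers: per-sample center labels (one entry per slide).
    test_center: the label of the center reserved for evaluation.
    """
    centers = np.asarray(centers)
    test_idx = np.where(centers == test_center)[0]   # all slides from held-out site
    train_idx = np.where(centers != test_center)[0]  # slides from the other sites
    return train_idx, test_idx

# Example: seven slides across five centers, center "E" held out
centers = ["A", "B", "C", "D", "E", "E", "A"]
train_idx, test_idx = center_holdout_split(centers, "E")
```

Holding out an entire center, rather than random slides, tests robustness to site-specific variation such as staining and scanner differences.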


Subject(s)
Deep Learning; Skin Neoplasms; Humans; Artificial Intelligence; Skin Neoplasms/pathology; Algorithms; Neural Networks, Computer
19.
Med Image Anal ; 91: 103039, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37992495

ABSTRACT

Ultrasound has become the most widely used modality for thyroid nodule diagnosis, due to its portability, real-time feedback, lack of toxicity, and low cost. Recently, the computer-aided diagnosis (CAD) of thyroid nodules has attracted significant attention. However, most existing techniques can only be applied to static images with prominent features (manually selected from scanning videos) or rely on 'black boxes' that cannot provide interpretable results. In this study, we develop a user-friendly framework for the automated diagnosis of thyroid nodules in ultrasound videos, by simulating the typical diagnostic workflow used by radiologists. This process consists of two sequential part-to-whole tasks. The first interprets the characteristics of each image using prior knowledge, to obtain corresponding frame-wise TI-RADS scores. The associated embedded representations not only provide diagnostic information for radiologists but also reduce computational costs. The second task models temporal contextual information in an embedding vector sequence and selectively enhances important information to distinguish benign and malignant thyroid nodules, thereby improving the efficiency and generalizability of the proposed framework. Experimental results demonstrated that this approach outperformed other state-of-the-art video classification methods. In addition to assisting radiologists in understanding model predictions, these CAD results could further ease diagnostic workloads and improve patient care.
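The second task, selectively enhancing important frames in an embedding sequence, is commonly realized with attention-weighted temporal pooling. The sketch below is a generic illustration under that assumption, not the paper's architecture; `attention_pool` and the score vector `w` are hypothetical:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frame_embeddings, w):
    """Pool a (T, D) sequence of frame embeddings into one (D,) video vector.

    Each frame receives an attention weight from a learned score vector w,
    so informative frames contribute more to the final representation.
    """
    scores = frame_embeddings @ w      # one scalar score per frame, shape (T,)
    alpha = softmax(scores)            # attention weights, sum to 1
    return alpha @ frame_embeddings    # weighted average over time
```

A benign/malignant classifier then operates on the pooled video-level vector rather than on any single, manually selected frame.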


Subject(s)
Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; Thyroid Nodule/pathology; Sensitivity and Specificity; Diagnosis, Differential; Ultrasonography/methods; Diagnosis, Computer-Assisted/methods
20.
Bioengineering (Basel) ; 10(11)2023 Nov 07.
Article in English | MEDLINE | ID: mdl-38002413

ABSTRACT

Breast cancer is the second most common cancer in women, primarily affecting those who are middle-aged and older. The American Cancer Society reported that the average lifetime risk of a woman developing breast cancer is about 13%, and that this incidence rate has increased by 0.5% per year in recent years. A biopsy is done when screening tests and imaging results show suspicious breast changes. Advancements in computer-aided system capabilities and performance have fueled research using histopathology images in cancer diagnosis. Advances in machine learning and deep neural networks have tremendously increased the number of studies developing computerized detection and classification models. Because the performance of deep networks is dataset-dependent and often tuned by trial and error, reported results vary across the literature. This work comprehensively reviews the studies published between 2010 and 2022 regarding commonly used public-domain datasets and the methodologies used in preprocessing, segmentation, feature engineering, machine-learning approaches, classifiers, and performance metrics.
