Results 1 - 6 of 6
1.
BMC Med Imaging ; 24(1): 165, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956579

ABSTRACT

BACKGROUND: Pneumoconiosis significantly affects patients' quality of life and survival because it is difficult to stage and carries a poor prognosis. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis, based on a multi-stage joint deep learning approach using chest X-ray radiographs of pneumoconiosis patients. METHODS: A total of 498 chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital. The dataset was randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using the U-Net model, and staging was predicted using a convolutional neural network classification model. We first used EfficientNet for multi-class staging diagnosis, but the results showed that stage I/II pneumoconiosis was difficult to diagnose. Therefore, based on clinical practice, we further improved the model using a ResNet-34 multi-stage joint method. RESULTS: On the 498 collected cases, the EfficientNet classification model achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) score of 0.889. The classification model using the ResNet-34 multi-stage joint approach achieved an accuracy of 89%, with an area under the curve (AUC) of 0.98 and a high QWK score of 0.94. CONCLUSIONS: The diagnostic accuracy of pneumoconiosis staging was significantly improved by the innovative combined multi-stage approach, which provides a reference for clinical application and pneumoconiosis screening.
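The pipeline's first step, histogram equalization, can be sketched in a few lines of numpy. This is an illustrative implementation of the classic algorithm; the paper does not publish its code:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Spread an 8-bit grayscale image's intensities over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero value of the CDF
    if cdf[-1] == cdf_min:             # constant image: nothing to equalize
        return img.copy()
    # Classic equalization mapping: rescale the CDF to [0, 255] and use it as a LUT
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

The same transform is available as `cv2.equalizeHist` in OpenCV; writing it out makes the CDF-as-lookup-table idea explicit.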


Subject(s)
Deep Learning; Pneumoconiosis; Humans; Pneumoconiosis/diagnostic imaging; Pneumoconiosis/pathology; Male; Middle Aged; Female; Radiography, Thoracic/methods; Aged; Adult; Neural Networks, Computer; China; Diagnosis, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/methods
2.
Asian Pac J Cancer Prev ; 25(5): 1795-1802, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809652

ABSTRACT

BACKGROUND: Skin cancer diagnosis challenges dermatologists due to its complex visual variations across diagnostic categories. Convolutional neural networks (CNNs), specifically the EfficientNet B0-B7 series, have shown superiority in multiclass skin cancer classification. This study addresses the limitations of visual examination by presenting a preprocessing pipeline tailored to EfficientNet models. Leveraging transfer learning with pre-trained ImageNet weights, the research aims to enhance diagnostic accuracy in an imbalanced multiclass classification context. METHODS: The study develops a specialized image preprocessing pipeline involving image scaling, dataset augmentation, and artifact removal tailored to the nuances of EfficientNet models. Transfer learning fine-tunes the EfficientNet B0-B7 CNNs with pre-trained ImageNet weights. Rigorous evaluation employs key metrics, such as precision, recall, accuracy, F1 score, and confusion matrices, to assess the impact of transfer learning and fine-tuning on each EfficientNet variant's performance in classifying diverse skin cancer categories. RESULTS: The research showcases the effectiveness of the tailored preprocessing pipeline for EfficientNet models. Transfer learning and fine-tuning significantly enhance the models' ability to discern diverse skin cancer categories. The evaluation of eight EfficientNet models (B0-B7) for skin cancer classification reveals distinct performance patterns across cancer classes. While the majority class, benign keratosis, achieves high accuracy (>87%), challenges arise in accurately classifying the eczema classes. Melanoma, despite its minority representation (2.42% of images), attains an average accuracy of 80.51% across all models. Performance in predicting warts/molluscum (90.7%) and psoriasis (84.2%) instances still leaves room for improvement, highlighting the need for targeted gains in accurately identifying specific skin cancer types.
CONCLUSION: The study utilizes EfficientNets B0-B7 with transfer learning from ImageNet weights. The best performance is observed with EfficientNet-B7, achieving a top-1 accuracy of 84.4% and a top-5 accuracy of 97.1% while being 8.4 times smaller than the leading comparable CNN. Detailed per-class classification accuracies from the confusion matrices affirm its proficiency, signaling the potential of EfficientNets for precise dermatological image analysis.
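The per-class figures quoted above come from confusion matrices; as a reminder of how precision and recall fall out of one, here is a minimal numpy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Precision and recall per class from a confusion matrix
    (rows = true class, columns = predicted class).
    Assumes every class appears at least once in both rows and columns."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)    # column sums = all predictions of that class
    recall = tp / cm.sum(axis=1)       # row sums = all true members of that class
    return precision, recall
```

For an imbalanced dataset like this one (melanoma at 2.42% of images), these per-class views are far more informative than overall accuracy, which is dominated by the majority class.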


Subject(s)
Neural Networks, Computer; Skin Neoplasms; Humans; Skin Neoplasms/pathology; Skin Neoplasms/classification; Image Processing, Computer-Assisted/methods; Deep Learning
3.
BMC Med Inform Decis Mak ; 24(1): 37, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321416

ABSTRACT

Diabetic retinopathy (DR) is the most common eye complication in people with diabetes. It can cause blurred vision or even total blindness, so early detection is essential to prevent or alleviate its impact. However, because symptoms may not be noticeable in the early stages of DR, it is difficult for doctors to identify them. Numerous predictive models based on machine learning (ML) and deep learning (DL) have therefore been developed to determine all stages of DR. However, existing DR classification models either cannot classify every DR stage or rely on a computationally heavy approach. Common metrics such as accuracy, F1 score, precision, recall, and AUC-ROC are not reliable for assessing DR grading because they do not account for two key factors: the severity of the discrepancy between the assigned and predicted grades, and the ordered nature of the DR grading scale. This research proposes computationally efficient ensemble methods for DR classification. These methods leverage pre-trained model weights, reducing training time and resource requirements. In addition, data augmentation techniques are used to address data limitations, enrich features, and improve generalization. This combination offers a promising approach for accurate and robust DR grading. In particular, we take advantage of transfer learning using models trained on DR data and employ CLAHE for image enhancement and Gaussian blur for noise reduction. We propose a three-layer classifier that incorporates dropout and ReLU activation, a design that aims to minimize overfitting while effectively extracting features and assigning DR grades. We prioritize the Quadratic Weighted Kappa (QWK) metric due to its sensitivity to label discrepancies, which is crucial for an accurate DR diagnosis. This combined approach achieves state-of-the-art QWK scores (0.901, 0.967, and 0.944) on the EyePACS, APTOS, and Messidor datasets.
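The QWK metric the authors prioritize can be written directly from its definition: disagreement is weighted by the squared distance between grades, so confusing grade 0 with grade 4 costs far more than confusing grade 0 with grade 1. A minimal numpy sketch (illustrative, not the authors' code; `sklearn.metrics.cohen_kappa_score` with `weights="quadratic"` computes the same quantity):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_grades: int) -> float:
    """Agreement on an ordinal scale: 1 = perfect, 0 = chance-level."""
    observed = np.zeros((n_grades, n_grades))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    observed /= observed.sum()
    # Expected agreement if grades were assigned independently at random
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_grades, n_grades))
    weights = (i - j) ** 2 / (n_grades - 1) ** 2   # quadratic penalty
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Because the weights grow quadratically with grade distance, a model that is "off by one" on the five-point DR scale is penalized far less than one that misses a severe case entirely, which matches the clinical cost of the errors.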


Subject(s)
Diabetes Mellitus; Diabetic Retinopathy; Physicians; Humans; Diabetic Retinopathy/diagnosis; Algorithms; Machine Learning; Image Interpretation, Computer-Assisted/methods
4.
Multimed Tools Appl ; : 1-23, 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37362692

ABSTRACT

Coronavirus disease (COVID-19) is one of the most devastating pandemics of the twenty-first century. Effective and rapid screening of infected patients can reduce mortality and even the contagion rate. Chest X-ray radiography is one of the effective screening techniques for COVID-19. In this paper, we propose a deep learning approach for automatic and effective COVID-19 screening through chest X-ray (CXR) imaging. Despite the success of state-of-the-art deep learning models for COVID-19 detection, they can suffer from several problems, such as large memory and computational requirements, overfitting, and high variance. To alleviate these issues, we apply transfer learning to EfficientNet models. We then fine-tune the whole network to select the optimal hyperparameters. Furthermore, in the preprocessing step, we apply an intensity-normalization method followed by data augmentation techniques to address class imbalance in the dataset. The proposed approach performs well in detecting patients affected by COVID-19, achieving accuracy rates of 99.0% and 98% on the training and testing datasets, respectively. A comparative study over a publicly available dataset against recently published deep-learning architectures attests to the proposed approach's performance.
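The preprocessing the abstract describes, intensity normalization followed by augmentation, might look like the numpy sketch below. The flip and rotation transforms are assumptions for illustration; the paper does not specify which augmentations it uses:

```python
import numpy as np

def normalize_intensity(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization of a single image."""
    img = img.astype(np.float32)
    std = img.std()
    return (img - img.mean()) / std if std > 0 else img - img.mean()

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simple label-preserving transforms, used to oversample scarce classes."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                     # random horizontal flip
    return np.rot90(img, k=rng.integers(0, 4))   # random 90-degree rotation
```

Applying such transforms only to the minority class at training time is one common way to rebalance a dataset without collecting new images.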

5.
Sensors (Basel) ; 23(2)2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36679655

ABSTRACT

Defects or cracks in roads, building walls, floors, and product surfaces can degrade the completeness of a product and become an impediment to quality control. Machine learning can detect defects effectively without human experts; however, low-power computing devices cannot afford the required computation. In this paper, we propose a crack detection system accelerated by edge computing. Our system consists of two components: Rsef and Rsef-Edge. Rsef is a real-time segmentation method based on effective feature extraction that performs crack image segmentation by optimizing conventional deep learning models. We then construct the edge-based system, Rsef-Edge, to significantly decrease Rsef's inference time, even on low-power IoT devices. As a result, we show both fast inference and good accuracy even in a low-powered computing environment.
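The abstract does not say how Rsef-Edge cuts inference cost, but post-training weight quantization is one common technique for fitting a model onto low-power devices; a minimal numpy sketch of symmetric int8 quantization (an assumption for illustration, not the authors' method):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus one scale factor (4x smaller storage)."""
    scale = max(np.abs(w).max() / 127.0, 1e-12)  # guard the all-zero case
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for (or during) inference."""
    return q.astype(np.float32) * scale
```

The round trip loses at most half a quantization step per weight, which is usually an acceptable trade for the memory and bandwidth savings on an IoT-class device.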


Subject(s)
Deep Learning; Humans; Machine Learning; Quality Control
6.
Multimed Tools Appl ; 81(19): 27737-27781, 2022.
Article in English | MEDLINE | ID: mdl-35368855

ABSTRACT

Glaucoma is the leading cause of irreversible blindness worldwide, and its best remedy is early and timely detection. Optical coherence tomography (OCT) has become the most commonly used imaging modality for detecting glaucomatous damage in recent years. Deep learning on OCT helps predict glaucoma more accurately and less tediously. This experimental study aims to perform glaucoma prediction using eight different ImageNet-pretrained models on OCT images of glaucoma. A thorough investigation evaluates these models' performance on various efficiency metrics to discover the best-performing model. Each network is tested with three different optimizers, namely Adam, Root Mean Squared Propagation (RMSProp), and Stochastic Gradient Descent (SGD), to find the best results. An attempt is made to improve the models' performance using transfer learning and fine-tuning. The models were initially trained and tested on a private database of 4220 images (2110 normal and 2110 glaucomatous OCT scans). Based on the results, the four best-performing models were shortlisted and then tested on the well-recognized public Mendeley dataset. Experimental results illustrate that VGG16 with the RMSProp optimizer attains promising performance, with 95.68% accuracy. The work concludes that ImageNet-pretrained models are a good alternative for a computer-based automatic glaucoma screening system. This fully automated system has great potential to distinguish normal from glaucomatous OCT scans automatically. It helps detect this condition efficiently in suspected patients for better diagnosis, avoiding vision loss, and reduces the time and involvement required of senior ophthalmologists (experts).
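The three optimizers compared, SGD, RMSProp, and Adam, differ only in their parameter-update rules; a minimal numpy sketch on the toy objective f(x) = x² makes the differences concrete (illustrative only, with assumed hyperparameters):

```python
import numpy as np

def sgd_step(x, grad, lr=0.1):
    """Plain gradient descent: step directly against the gradient."""
    return x - lr * grad

def rmsprop_step(x, grad, s, lr=0.01, beta=0.9, eps=1e-8):
    """Scale the step by a running average of squared gradients."""
    s = beta * s + (1 - beta) * grad ** 2
    return x - lr * grad / (np.sqrt(s) + eps), s

def adam_step(x, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """RMSProp plus momentum, with bias correction for the running averages."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(x) = x**2 (gradient 2x) with each rule for 50 steps
x_sgd = x_rms = x_adam = 5.0
s = m = v = 0.0
for t in range(1, 51):
    x_sgd = sgd_step(x_sgd, 2 * x_sgd)
    x_rms, s = rmsprop_step(x_rms, 2 * x_rms, s)
    x_adam, m, v = adam_step(x_adam, 2 * x_adam, m, v, t)
```

Which rule converges fastest depends on the loss surface and hyperparameters, which is why studies like this one test each network under all three.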
