Results 1 - 20 of 132
1.
Front Oncol ; 14: 1424546, 2024.
Article in English | MEDLINE | ID: mdl-39228981

ABSTRACT

Objective: The research aims to develop an advanced and precise lung cancer screening model based on Convolutional Neural Networks (CNN). Methods: Based on the health medical big data platform of Shandong University, we developed a VGG16-based CNN lung cancer screening model. This model was trained using Computed Tomography (CT) scan data of patients from Pingyi Traditional Chinese Medicine Hospital in Shandong Province, from January to February 2023. Data augmentation techniques, including random resizing, cropping, horizontal flipping, color jitter, random rotation, and normalization, were applied to improve model generalization. We used five-fold cross-validation to robustly assess performance. The model was fine-tuned with an SGD optimizer (learning rate 0.001, momentum 0.9, and L2 regularization) and a learning rate scheduler. Dropout layers were added to prevent the model from relying too heavily on specific neurons, enhancing its ability to generalize. Early stopping was implemented when validation loss did not decrease over 10 epochs. In addition, we evaluated the model's performance with Area Under the Curve (AUC), classification accuracy, Positive Predictive Value (PPV), Negative Predictive Value (NPV), sensitivity, specificity, and F1 score. External validation used an independent dataset from the same hospital, covering January to February 2022. Results: The training and validation loss and accuracy over iterations show that both accuracy metrics peak at over 0.9 by iteration 15, prompting early stopping to prevent overfitting. Based on five-fold cross-validation, the ROC curves for the VGG16-based CNN model demonstrate an AUC of 0.963 ± 0.004, highlighting its excellent diagnostic capability. Confusion matrices yield average metrics of classification accuracy 0.917 ± 0.004, PPV 0.868 ± 0.015, NPV 0.931 ± 0.003, sensitivity 0.776 ± 0.01, specificity 0.962 ± 0.005, and F1 score 0.819 ± 0.008. External validation confirmed the model's robustness across different patient populations and imaging conditions. Conclusion: The VGG16-based CNN lung cancer screening model constructed in this study can effectively identify lung tumors, demonstrating reliability and effectiveness in real-world medical settings and providing strong theoretical and empirical support for its use in lung cancer screening.
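The training recipe described above maps onto standard PyTorch components. A minimal sketch of that configuration follows; the weight-decay value, scheduler step size, and two-class head are assumptions, since the abstract names the techniques but not every setting:

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Augmentation pipeline named in the abstract: random resize/crop,
    # horizontal flip, color jitter, rotation, normalization
    # (ImageNet statistics assumed).
    train_tfms = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.RandomRotation(15),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    model = models.vgg16(weights="IMAGENET1K_V1")
    model.classifier[6] = nn.Linear(4096, 2)  # assumed tumor / no-tumor head

    # SGD with the reported settings; weight_decay supplies the L2 term.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                                momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

    # Early stopping: halt when validation loss has not improved for 10 epochs.
    best_loss, stale, patience = float("inf"), 0, 10
    def early_stop(val_loss):
        global best_loss, stale
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0
        else:
            stale += 1
        return stale >= patience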

2.
Int Dent J ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39232939

ABSTRACT

BACKGROUND: During preclinical training, dental students take radiographs of acrylic (plastic) blocks containing extracted patient teeth. With the digitisation of medical records, a central archiving system was created to store and retrieve all X-ray images, regardless of whether they were images of teeth on acrylic blocks or images from patients. In the early stage of the digitisation process, and due to the immaturity of the data management system, numerous images were mixed up and stored in random locations within a unified archiving system, including patient record files. Filtering out and expunging the undesired training images is imperative, as manually searching for such images is problematic. Hence, the aim of this study was to differentiate intraoral images from artificial images on acrylic blocks. METHODS: An artificial intelligence (AI) solution to automatically differentiate between intraoral radiographs taken of patients and those taken of acrylic blocks was utilised in this study. The concept of transfer learning was applied to a dataset provided by a Dental Hospital. RESULTS: An accuracy score, F1 score, and recall score of 98.8%, 99.2%, and 100%, respectively, were achieved using a VGG16 pre-trained model. These results improved on those obtained initially with a baseline model, which achieved an accuracy score, F1 score, and recall score of 96.5%, 97.5%, and 98.9%, respectively. CONCLUSIONS: The proposed system using transfer learning was able to accurately identify "fake" radiographic images and distinguish them from real intraoral images.
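A transfer-learning setup of this kind typically freezes the pretrained VGG16 convolutional base and trains only a small binary head. A hedged sketch (the two-class head is the only change; everything else is stock torchvision):

    import torch.nn as nn
    from torchvision import models

    # Freeze the pretrained feature extractor; train only the new head that
    # separates patient radiographs from acrylic-block radiographs.
    vgg = models.vgg16(weights="IMAGENET1K_V1")
    for p in vgg.features.parameters():
        p.requires_grad = False
    vgg.classifier[6] = nn.Linear(4096, 2)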

3.
MethodsX ; 13: 102901, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39247156

ABSTRACT

Interaction and communication are more difficult for people with speech and hearing disabilities, who may face communication barriers with others. Sign language helps reduce this communication gap. Prior solutions using techniques such as Convolutional Neural Networks, Support Vector Machines, and K-Nearest Neighbors have either demonstrated low accuracy or have not been implemented as real-time working systems. This system addresses both issues effectively. This work addresses the difficulties of classifying the characters in Indian Sign Language (ISL) and can identify a total of 23 hand poses of the ISL. The system uses a pre-trained VGG16 Convolutional Neural Network (CNN) with an attention mechanism. The model is trained using the Adam optimizer and cross-entropy loss function. The results demonstrate the effectiveness of transfer learning for ISL classification, achieving an accuracy of 97.5% with VGG16 and 99.8% with VGG16 plus the attention mechanism.
• Enabling quick and accurate sign language recognition with the trained VGG16 model and attention mechanism (see the sketch below).
• The system does not require external gloves or sensors, eliminating the need for physical hardware and reducing costs.
• Real-time processing makes the system more helpful for people with speaking and hearing disabilities, making it easier for them to communicate with others.
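The abstract does not specify the attention variant, so the sketch below assumes a simple spatial-attention gate that reweights the VGG16 feature maps before pooling; all layer sizes beyond the 23 ISL classes are illustrative:

    import torch
    import torch.nn as nn
    from torchvision import models

    class SpatialAttention(nn.Module):
        # Sigmoid-gated 1x1-conv attention map over VGG16 feature maps.
        def __init__(self, channels=512):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)
        def forward(self, x):                 # x: (B, 512, 7, 7)
            return x * torch.sigmoid(self.score(x))

    class VGG16Attention(nn.Module):
        def __init__(self, num_classes=23):   # 23 ISL hand poses
            super().__init__()
            self.features = models.vgg16(weights="IMAGENET1K_V1").features
            self.attn = SpatialAttention()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(512, num_classes)
        def forward(self, x):
            x = self.pool(self.attn(self.features(x))).flatten(1)
            return self.fc(x)

    model = VGG16Attention()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr assumed
    criterion = nn.CrossEntropyLoss()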

4.
Heliyon ; 10(14): e33941, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39108897

ABSTRACT

In the grain industry, identifying seed purity is a crucial task because it is an important factor in evaluating seed quality. For rice seeds, this attribute enables the minimization of unexpected influences of other varieties on rice yield, nutrient composition, and price. In practice, however, seeds of one variety are often mixed with seeds from other varieties. This study proposes a novel method for automatically identifying the purity of a specific rice variety using hybrid machine learning algorithms. The core concept involves leveraging deep learning architectures to extract pertinent features from raw data, followed by the application of machine learning algorithms for classification. Several experiments were conducted to evaluate the performance of the proposed model through practical implementation. The results show that the novel method substantially outperformed existing methods, demonstrating its potential for effective rice seed purity identification systems.
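The described hybrid pipeline, a deep network as feature extractor feeding a classical classifier, can be sketched as follows; pairing VGG16 features with an SVM is an illustrative choice, as the abstract does not name its exact combination:

    import torch
    from torchvision import models
    from sklearn.svm import SVC

    # Pretrained CNN supplies the features; a classical classifier decides.
    backbone = models.vgg16(weights="IMAGENET1K_V1").features.eval()

    @torch.no_grad()
    def extract(batch):                        # batch: (B, 3, 224, 224)
        fmap = backbone(batch)                 # (B, 512, 7, 7)
        return fmap.mean(dim=(2, 3)).numpy()   # global-average-pooled features

    # X_train: preprocessed seed images; y_train: 0 = pure variety, 1 = admixture.
    # clf = SVC(kernel="rbf").fit(extract(X_train), y_train)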

5.
Stud Health Technol Inform ; 316: 1145-1150, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176583

ABSTRACT

Advances in general-purpose computers have enabled the generation of high-quality synthetic medical images that the human eye cannot distinguish from real images. To analyse the efficacy of the generated medical images, this study proposed a modified VGG16-based algorithm to recognise AI-generated medical images. Initially, 10,000 synthetic medical skin lesion images were generated using a Generative Adversarial Network (GAN), providing a set of images for comparison to real images. Then, an enhanced VGG16-based algorithm was developed to classify real images vs AI-generated images. Following hyperparameter tuning and training, the optimal approach can classify the images with 99.82% accuracy. Several other metrics were used to evaluate the efficacy of the proposed network. The complete dataset used in this study is available online to the research community for future research.


Subject(s)
Deep Learning, Humans, Algorithms, Skin Diseases/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Skin Neoplasms/diagnostic imaging
6.
Heliyon ; 10(12): e33447, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39027426

ABSTRACT

The identification of pepper leaf diseases is crucial for ensuring the safety and quality of pepper yield. However, existing methods heavily rely on manual diagnosis, resulting in inefficiencies and inaccuracies. In this study, we propose a lightweight convolutional neural network (CNN) model for recognizing pepper leaf diseases and subsequently develop an application based on this model. To begin with, we acquired various images depicting healthy leaves as well as leaves affected by viral diseases, brown spots, and leaf mold. It is noteworthy that these images were captured against a background of human palms, which is commonly encountered in field conditions. The proposed CNN model adopts the GGM-VGG16 architecture, incorporating Ghost modules, global average pooling, and multi-scale convolution. Following training with the collected image dataset, the model was deployed on a mobile terminal, where an application for pepper leaf disease recognition was developed using Android Studio. Experimental results indicate that the proposed model achieved 100% accuracy on images with a human palm background, while also demonstrating satisfactory performance on images with other backgrounds, achieving an accuracy of 87.38%. Furthermore, the developed application has a compact size of only 12.84 MB and exhibits robust performance in recognizing pepper leaf diseases.
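The Ghost module mentioned above produces part of its output channels with cheap depthwise convolutions instead of full ones, which is what keeps the model light. A minimal sketch of one such block (the channel split and kernel sizes are illustrative, and an even out_ch is assumed):

    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        # Half the output channels from an ordinary 1x1 conv; the other
        # half generated cheaply by a depthwise conv over those channels.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            primary = out_ch // 2
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, primary, 1, bias=False),
                nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
            self.cheap = nn.Sequential(
                nn.Conv2d(primary, out_ch - primary, 3, padding=1,
                          groups=primary, bias=False),
                nn.BatchNorm2d(out_ch - primary), nn.ReLU(inplace=True))
        def forward(self, x):
            p = self.primary(x)
            return torch.cat([p, self.cheap(p)], dim=1)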

7.
Front Plant Sci ; 15: 1402835, 2024.
Article in English | MEDLINE | ID: mdl-38988642

ABSTRACT

The agricultural sector is pivotal to food security and economic stability worldwide. Corn holds particular significance in the global food industry, especially in developing countries where agriculture is a cornerstone of the economy. However, corn crops are vulnerable to various diseases that can significantly reduce yields. Early detection and precise classification of these diseases are crucial to prevent damage and ensure high crop productivity. This study leverages the VGG16 deep learning (DL) model to classify corn leaves into four categories: healthy, blight, gray spot, and common rust. Despite the efficacy of DL models, they often face challenges related to the explainability of their decision-making processes. To address this, Layer-wise Relevance Propagation (LRP) is employed to enhance the model's transparency by generating intuitive and human-readable heat maps of input images. The proposed VGG16 model, augmented with LRP, outperformed previous state-of-the-art models in classifying corn leaf diseases. Simulation results demonstrated that the model not only achieved high accuracy but also provided interpretable results, highlighting critical regions in the images used for classification. By generating human-readable explanations, this approach ensures greater transparency and reliability in model performance, aiding farmers in improving their crop yields.
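Relevance heat maps of the kind described can be generated with an off-the-shelf LRP implementation. A hedged sketch using Captum, assuming a VGG16 fine-tuned with a four-class head (healthy, blight, gray spot, common rust) and that all layers are covered by Captum's built-in LRP rules:

    import torch.nn as nn
    from torchvision import models
    from captum.attr import LRP

    model = models.vgg16(weights="IMAGENET1K_V1")
    model.classifier[6] = nn.Linear(4096, 4)  # four corn-leaf classes
    model.eval()

    lrp = LRP(model)
    # image: (1, 3, 224, 224) preprocessed corn-leaf tensor
    # relevance = lrp.attribute(image, target=predicted_class)
    # The relevance tensor is then rendered as a heat map over the leaf.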

8.
Sci Rep ; 14(1): 17615, 2024 07 30.
Article in English | MEDLINE | ID: mdl-39080324

ABSTRACT

The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find brain tumours or tissue abnormalities. Using region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM, and transfer learning, this work offers an effective method for the detection of brain tumours, with the goal of helping doctors make highly accurate diagnoses. A transfer learning-based model is proposed that offers high sensitivity and accuracy scores for brain tumour detection when segmentation is done using R-CNN masks. To train the model, the Inception V3, VGG-16, and ResNet-50 architectures were utilised. The Brain MRI Images for Brain Tumour Detection dataset was used to develop this method. Performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis compares the proposed model operating with the three distinct architectures: VGG-16, Inception V3, and ResNet-50. The VGG-16-based variant of the proposed model was also benchmarked against related works. Using this approach, an accuracy and sensitivity of around 99% were obtained, considerably higher than existing efforts.
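Grad-CAM, one component of the pipeline above, weights the last convolutional feature maps by the gradients of the target class score. A compact sketch (the hook bookkeeping and layer index are implementation choices, not from the paper):

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.vgg16(weights="IMAGENET1K_V1").eval()
    acts, grads = {}, {}
    layer = model.features[28]  # last conv layer of VGG-16
    layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    def grad_cam(image, target_class):
        score = model(image)[0, target_class]
        model.zero_grad()
        score.backward()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
        cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted feature maps
        return F.interpolate(cam[None], size=image.shape[-2:],
                             mode="bilinear")[0, 0]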


Subject(s)
Brain Neoplasms, Magnetic Resonance Imaging, Neural Networks, Computer, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Image Interpretation, Computer-Assisted/methods, Algorithms, Sensitivity and Specificity
9.
BMC Med Imaging ; 24(1): 176, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030496

ABSTRACT

Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach utilizing pre-trained models (VGG16, ResNet50, and InceptionV3) combined within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. Our proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques like SMOTE and Gaussian Blur are applied to address class imbalance, enhancing model training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, which was collected from the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases over a period of three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with precision and recall rates notably high across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
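Freezing the early convolutional layers while fine-tuning the deeper ones, and rebalancing with SMOTE, can be sketched as follows; the split point and the application of SMOTE to flattened feature vectors are assumptions:

    from torchvision import models
    from imblearn.over_sampling import SMOTE

    model = models.vgg16(weights="IMAGENET1K_V1")
    # Freeze roughly the first two-thirds of the convolutional stack;
    # the deeper layers stay trainable for fine-tuning.
    for p in model.features[:17].parameters():
        p.requires_grad = False

    # SMOTE oversamples the under-represented classes.
    # X: (n_samples, n_features) extracted features; y: class labels.
    # X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)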


Subject(s)
Deep Learning, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed/methods, Neural Networks, Computer
10.
Sensors (Basel) ; 24(13)2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39001200

ABSTRACT

Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. Diagnosis is difficult since it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. It is essential to obtain an early diagnosis of ALL in order to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After collecting blood samples from the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model that is capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset including 6512 original and segmented images from 89 individuals. Two input channels are used for feature extraction in the fusion model: the original images and the segmented images. VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two output feature sets are merged, and dense layers are used for the categorization of leukemia. The suggested fusion model obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, placing it in an excellent position for the categorization of leukemia. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models. Consequently, this proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (Beta Version) has been developed in this study. This application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
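The two-channel fusion described, VGG16 over the original PBS image and DenseNet-121 over the segmented one with the features concatenated into dense layers, might look like the following; the pooling choice, head sizes, and class count are assumptions:

    import torch
    import torch.nn as nn
    from torchvision import models

    class FusionModel(nn.Module):
        def __init__(self, num_classes=2):  # class count assumed
            super().__init__()
            self.vgg = models.vgg16(weights="IMAGENET1K_V1").features
            self.dense = models.densenet121(weights="IMAGENET1K_V1").features
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.head = nn.Sequential(
                nn.Linear(512 + 1024, 256), nn.ReLU(),
                nn.Linear(256, num_classes))
        def forward(self, original, segmented):
            f1 = self.pool(self.vgg(original)).flatten(1)     # (B, 512)
            f2 = self.pool(self.dense(segmented)).flatten(1)  # (B, 1024)
            return self.head(torch.cat([f1, f2], dim=1))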


Subject(s)
Deep Learning, Internet of Things, Humans, Precursor Cell Lymphoblastic Leukemia-Lymphoma/diagnosis, Artificial Intelligence, Leukemia/diagnosis, Leukemia/classification, Leukemia/pathology, Algorithms, Image Processing, Computer-Assisted/methods, Neural Networks, Computer
11.
Diagnostics (Basel) ; 14(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39001228

ABSTRACT

In this research, we introduce a network that can identify pneumonia, COVID-19, and tuberculosis using X-ray images of patients' chests. The study emphasizes tuberculosis, COVID-19, and healthy lung conditions, discussing how advanced neural networks, like VGG16 and ResNet50, can improve the detection of lung issues from images. To prepare the images for the model's input requirements, we enhanced them through data augmentation techniques for training purposes. We evaluated the model's performance by analyzing the precision, recall, and F1 scores across training, validation, and testing datasets. The results show that the ResNet50 model outperformed VGG16 in accuracy and resilience, displaying superior ROC AUC values in both validation and test scenarios. Particularly impressive were ResNet50's precision and recall rates, nearing 0.99 for all conditions in the test set. On the other hand, VGG16 also performed well during testing, detecting tuberculosis with a precision of 0.99 and a recall of 0.93. Our study highlights the performance of our deep learning method by showcasing the effectiveness of ResNet50 over traditional approaches like VGG16. This approach enhances classification accuracy through data augmentation and class balancing, positioning it as an advancement in state-of-the-art deep learning applications in imaging. By enhancing the accuracy and reliability of diagnosing ailments such as COVID-19 and tuberculosis, our models have the potential to transform care and treatment strategies, highlighting their role in clinical diagnostics.

12.
Respiration ; : 1-14, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39047695

ABSTRACT

INTRODUCTION: Exacerbations of chronic obstructive pulmonary disease (COPD) have a significant impact on hospitalizations, morbidity, and mortality of patients. This study aimed to develop a model for predicting acute exacerbation in COPD patients (AECOPD) based on deep-learning (DL) features. METHODS: We performed a retrospective study on 219 patients with COPD who underwent inspiratory and expiratory HRCT scans. By recording the acute respiratory events of the previous year, these patients were further divided into a non-AECOPD group and an AECOPD group according to the presence of acute exacerbation events. Sixty-nine quantitative CT (QCT) parameters of emphysema and airway were calculated by NeuLungCARE software, and 2,000 DL features were extracted by the VGG-16 method. The logistic regression method was employed to identify AECOPD patients, and an external validation cohort of 29 patients was used to assess the robustness of the results. RESULTS: Model 3-B achieved an area under the receiver operating characteristic curve (AUC) of 0.933 and 0.865 in the testing cohort and external validation cohort, respectively. Model 3-I obtained an AUC of 0.895 in the testing cohort and an AUC of 0.774 in the external validation cohort. Model 7-B, which combined clinical characteristics, QCT parameters, and DL features, achieved the best performance, with an AUC of 0.979 in the testing cohort and robust predictability (AUC of 0.932) in the external validation cohort. Likewise, model 7-I achieved an AUC of 0.938 and 0.872 in the testing cohort and external validation cohort, respectively. CONCLUSIONS: DL features extracted from HRCT scans can effectively predict the acute exacerbation phenotype in COPD patients.
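The prediction stage reduces to a logistic regression over the concatenated handcrafted and deep features. A sketch under the abstract's feature counts (the standardization step is an assumption):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # qct: (n_patients, 69) quantitative CT parameters
    # dl:  (n_patients, 2000) VGG-16 features from the HRCT scans
    # y:   1 = AECOPD, 0 = non-AECOPD
    def fit_aecopd_model(qct, dl, y):
        X = np.hstack([qct, dl])
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return clf.fit(X, y)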

13.
Sensors (Basel) ; 24(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38894210

ABSTRACT

In hazardous environments like mining sites, mobile inspection robots play a crucial role in condition monitoring (CM) tasks, particularly by collecting various kinds of data, such as images. However, the sheer volume of collected image samples and existing noise pose challenges in processing and visualizing thermal anomalies. Recognizing these challenges, our study addresses the limitations of industrial big data analytics for mobile robot-generated image data. We present a novel, fully integrated approach involving a dimension reduction procedure. This includes a semantic segmentation technique utilizing the pre-trained VGG16 CNN architecture for feature selection, followed by random forest (RF) and extreme gradient boosting (XGBoost) classifiers for the prediction of the pixel class labels. We also explore unsupervised learning using the PCA-K-means method for dimension reduction and classification of unlabeled thermal defects based on anomaly severity. Our comprehensive methodology aims to efficiently handle image-based CM tasks in hazardous environments. To validate its practicality, we applied our approach in a real-world scenario, and the results confirm its robust performance in processing and visualizing thermal data collected by mobile inspection robots. This affirms the effectiveness of our methodology in enhancing the overall performance of CM processes.
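The unsupervised branch, PCA for dimension reduction followed by k-means over the thermal descriptors, is a few lines in scikit-learn; the component and cluster counts below are assumptions:

    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline

    # X: (n_samples, n_features) descriptors of unlabeled thermal defects
    severity_model = make_pipeline(PCA(n_components=10),
                                   KMeans(n_clusters=3, n_init="auto"))
    # labels = severity_model.fit_predict(X)  # anomaly-severity clusters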

14.
Ultrasound Med Biol ; 50(9): 1361-1371, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38910034

ABSTRACT

BACKGROUND: Ultrasound image examination has become the preferred choice for diagnosing metabolic dysfunction-associated steatotic liver disease (MASLD) due to its non-invasive nature. Computer-aided diagnosis (CAD) technology can assist doctors in avoiding deviations in the detection and classification of MASLD. METHOD: We propose a hybrid model that integrates the pre-trained VGG16 network with an attention mechanism and a stacking ensemble learning model. It performs multi-scale feature aggregation based on the self-attention mechanism and fuses multiple classification models (logistic regression, random forest, support vector machine) via stacking ensemble learning. The proposed hybrid method achieves four-way classification of normal, mild, moderate, and severe fatty liver based on ultrasound images. RESULT AND CONCLUSION: Our proposed hybrid model reaches an accuracy of 91.34% and exhibits superior robustness against interference, outperforming traditional neural network algorithms. Experimental results show that, compared with the pre-trained VGG16 model, adding the self-attention mechanism improves the accuracy by 3.02%. Using the stacking ensemble learning model as a classifier further increases the accuracy to 91.34%, exceeding any single classifier such as LR (89.86%), SVM (90.34%), and RF (90.73%). The proposed hybrid method can effectively improve the efficiency and accuracy of MASLD ultrasound image detection.
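The stacking stage can be reproduced with scikit-learn's StackingClassifier over the three named base learners; using logistic regression as the meta-learner is an assumption:

    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    # Base learners named in the abstract; the inputs would be the
    # attention-weighted VGG16 features of the ultrasound images.
    stack = StackingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression())
    # stack.fit(X_train, y_train)  # classes: normal/mild/moderate/severe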


Subject(s)
Algorithms, Neural Networks, Computer, Ultrasonography, Humans, Ultrasonography/methods, Liver/diagnostic imaging, Fatty Liver/diagnostic imaging, Machine Learning, Image Interpretation, Computer-Assisted/methods
15.
BMC Med Imaging ; 24(1): 156, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38910241

ABSTRACT

Parkinson's disease (PD) is challenging for clinicians to accurately diagnose in the early stages. Quantitative measures of brain health can be obtained safely and non-invasively using medical imaging techniques like magnetic resonance imaging (MRI) and single photon emission computed tomography (SPECT). Accurate diagnosis of PD requires powerful machine learning and deep learning models together with effective medical imaging tools for assessing neurological health. This study proposes four deep learning models and a hybrid model for the early detection of PD. For the simulation study, two standard datasets are chosen. To further improve the performance of the models, grey wolf optimization (GWO) is used to automatically fine-tune their hyperparameters. The GWO-VGG16, GWO-DenseNet, GWO-DenseNet + LSTM, GWO-InceptionV3, and GWO-VGG16 + InceptionV3 models are applied to the T1/T2-weighted and SPECT DaTscan datasets. All the models performed well and obtained near or above 99% accuracy. The highest accuracy of 99.94% and AUC of 99.99% are achieved by the hybrid model (GWO-VGG16 + InceptionV3) for the T1/T2-weighted dataset, and 100% accuracy and 99.92% AUC are recorded for the GWO-VGG16 + InceptionV3 model on the SPECT DaTscan dataset.
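Grey wolf optimization treats candidate hyperparameter vectors as a pack converging on the three best solutions found so far. A bare-bones sketch of the update rule (population size, iteration count, and the example search space are placeholders):

    import numpy as np

    def gwo(fitness, bounds, n_wolves=8, n_iters=20):
        # Minimize `fitness` over the box `bounds`, an array of shape (dim, 2).
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = len(bounds)
        wolves = lo + np.random.rand(n_wolves, dim) * (hi - lo)
        for t in range(n_iters):
            scores = np.array([fitness(w) for w in wolves])
            alpha, beta, delta = wolves[np.argsort(scores)[:3]]
            a = 2 - 2 * t / n_iters  # exploration factor decays linearly
            for i in range(n_wolves):
                x = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = np.random.rand(dim), np.random.rand(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += leader - A * np.abs(C * leader - wolves[i])
                wolves[i] = np.clip(x / 3, lo, hi)
        return wolves[np.argmin([fitness(w) for w in wolves])]

    # e.g. tune (learning rate, dropout); hypothetical fitness = validation loss
    # best = gwo(val_loss_of, np.array([[1e-5, 1e-2], [0.1, 0.5]]))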


Subject(s)
Algorithms, Deep Learning, Magnetic Resonance Imaging, Parkinson Disease, Tomography, Emission-Computed, Single-Photon, Humans, Parkinson Disease/diagnostic imaging, Tomography, Emission-Computed, Single-Photon/methods, Magnetic Resonance Imaging/methods, Male, Female
16.
Diagnostics (Basel) ; 14(12)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38928647

ABSTRACT

This study evaluates the efficacy of several Convolutional Neural Network (CNN) models for the classification of hearing loss in patients using preprocessed auditory brainstem response (ABR) image data. Specifically, we employed six CNN architectures (VGG16, VGG19, DenseNet121, DenseNet-201, AlexNet, and InceptionV3) to differentiate between patients with hearing loss and those with normal hearing. A dataset comprising 7990 preprocessed ABR images was utilized to assess the performance and accuracy of these models. Each model was systematically tested to determine its capability to accurately classify hearing loss. A comparative analysis of the models focused on metrics of accuracy and computational efficiency. The results indicated that the AlexNet model exhibited superior performance, achieving an accuracy of 95.93%. The findings from this research suggest that deep learning models, particularly AlexNet in this instance, hold significant potential for automating the diagnosis of hearing loss using ABR graph data. Future work will aim to refine these models to enhance their diagnostic accuracy and efficiency, fostering their practical application in clinical settings.

17.
Heliyon ; 10(10): e30957, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38803954

ABSTRACT

A self-driving car is necessary to implement traffic intelligence because it can vastly enhance both the safety of driving and the comfort of the driver by adjusting to the circumstances of the road ahead. Road hazards such as potholes can be a big challenge for autonomous vehicles, increasing the risk of crashes and vehicle damage. Real-time identification of road potholes is required to solve this issue. To this end, various approaches have been tried, including notifying the appropriate authorities, utilizing vibration-based sensors, and engaging in three-dimensional laser imaging. Unfortunately, these approaches have several drawbacks, such as large initial expenditures and the possibility of being discovered. Transfer learning is considered a potential answer to the pressing necessity of automating the process of pothole identification. A Convolutional Neural Network (CNN) is constructed to categorize potholes effectively using the VGG-16 pre-trained model as a transfer learning model throughout the training process. A Super-Resolution Generative Adversarial Network (SRGAN) is suggested to enhance the image's overall quality. Experiments conducted with the suggested approach of classifying road potholes revealed a high accuracy rate of 97.3%, and its effectiveness was tested using various criteria. The developed transfer learning technique obtained the best accuracy rate compared to many other deep learning algorithms.

18.
Bioengineering (Basel) ; 11(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38790279

ABSTRACT

Brain cancer is a life-threatening disease requiring close attention. Early and accurate diagnosis using non-invasive medical imaging is critical for successful treatment and patient survival. However, manual diagnosis by radiologist experts is time-consuming and has limitations in processing large datasets efficiently. Therefore, efficient systems capable of analyzing vast amounts of medical data for early tumor detection are urgently needed. Deep learning (DL) with deep convolutional neural networks (DCNNs) emerges as a promising tool for understanding diseases like brain cancer through medical imaging modalities, especially MRI, which provides detailed soft tissue contrast for visualizing tumors and organs. DL techniques have become more and more popular in current research on brain tumor detection. Unlike traditional machine learning methods requiring manual feature extraction, DL models are adept at handling complex data like MRIs and excel in classification tasks, making them well-suited for medical image analysis applications. This study presents a novel Dual DCNN model that can accurately classify cancerous and non-cancerous MRI samples. Our Dual DCNN model uses two well-performing DL models, InceptionV3 and DenseNet121. Features are extracted from these models by appending a global max pooling layer. The extracted features are then used to train the model with the addition of five fully connected layers, which finally classify MRI samples as cancerous or non-cancerous. The fully connected layers are retrained to learn the extracted features for better accuracy. The technique achieves accuracy, precision, recall, and F1-scores of 99%, 99%, 98%, and 99%, respectively. Furthermore, this study compares the Dual DCNN's performance against various well-known DL models, including DenseNet121, InceptionV3, ResNet architectures, EfficientNetB2, SqueezeNet, VGG16, AlexNet, and LeNet-5, with different learning rates. This study indicates that our proposed approach outperforms these established models.

19.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
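Federated training of the kind described reduces, in its simplest form, to FedAvg: each site fine-tunes the shared modified VGG16 locally and returns only weights, never images, which the server averages. A minimal sketch; the client interface and sample-size weighting are assumptions:

    import copy
    import torch

    def fed_avg(global_model, clients, rounds=10):
        # clients: objects exposing train_locally(model) -> (state_dict, n_samples)
        for _ in range(rounds):
            states, sizes = [], []
            for c in clients:                   # e.g. one per participating site
                local = copy.deepcopy(global_model)
                sd, n = c.train_locally(local)  # raw MRI data never leaves the client
                states.append(sd)
                sizes.append(n)
            total = sum(sizes)
            avg = {k: sum(sd[k].float() * (n / total)
                          for sd, n in zip(states, sizes))
                   for k in states[0]}
            global_model.load_state_dict(avg)
        return global_model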


Subject(s)
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Machine Learning, Image Interpretation, Computer-Assisted/methods
20.
J Prosthodont ; 33(7): 645-654, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38566564

ABSTRACT

PURPOSE: The study aimed to compare the performance of four pre-trained convolutional neural networks in recognizing seven distinct prosthodontic scenarios involving the maxilla, as a preliminary step in developing an artificial intelligence (AI)-powered prosthesis design system. MATERIALS AND METHODS: Seven distinct classes, including cleft palate, dentulous maxillectomy, edentulous maxillectomy, reconstructed maxillectomy, completely dentulous, partially edentulous, and completely edentulous, were considered for recognition. Utilizing transfer learning and fine-tuned hyperparameters, four AI models (VGG16, Inception-ResNet-V2, DenseNet-201, and Xception) were employed. The dataset, consisting of 3541 preprocessed intraoral occlusal images, was divided into training, validation, and test sets. Model performance metrics encompassed accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve (AUC), and confusion matrix. RESULTS: VGG16, Inception-ResNet-V2, DenseNet-201, and Xception demonstrated comparable performance, with maximum test accuracies of 0.92, 0.90, 0.94, and 0.95, respectively. Xception and DenseNet-201 slightly outperformed the other models, particularly compared with Inception-ResNet-V2. Precision, recall, and F1 scores exceeded 90% for most classes in Xception and DenseNet-201, and the average AUC values for all models ranged between 0.98 and 1.00. CONCLUSIONS: While DenseNet-201 and Xception demonstrated superior performance, all models consistently achieved diagnostic accuracy exceeding 90%, highlighting their potential in dental image analysis. This AI application could support work assignment based on difficulty level and enable the development of an automated diagnosis system at patient admission. It also facilitates prosthesis design by integrating necessary prosthesis morphology, oral function, and treatment difficulty. Furthermore, it tackles dataset size challenges in model optimization, providing valuable insights for future research.


Subject(s)
Maxilla, Neural Networks, Computer, Prosthodontics, Humans, Maxilla/diagnostic imaging, Prosthodontics/methods, Artificial Intelligence