Results 1 - 20 of 62
1.
Biomed Eng Lett ; 14(5): 1069-1077, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220025

ABSTRACT

Multiclass classification of brain tumors from magnetic resonance (MR) images is challenging due to high inter-class similarities. To this end, convolutional neural networks (CNNs) have been widely adopted in recent studies. However, conventional CNN architectures fail to capture the small lesion patterns of brain tumors. To tackle this issue, we propose a global transformer network, dubbed GT-Net, for multiclass brain tumor classification. GT-Net mainly comprises a global transformer module (GTM) introduced on top of a backbone network. A generalized self-attention block (GSB) is proposed to capture feature inter-dependencies across not only the spatial dimension but also the channel dimension, facilitating the extraction of detailed tumor lesion information while suppressing less important information. Further, multiple GSB heads are used in the GTM to leverage global feature dependencies. We evaluate GT-Net on a benchmark dataset with several backbone networks, and the results demonstrate the effectiveness of the GTM. Comparison with state-of-the-art methods further validates the superiority of our model.
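
The abstract does not include implementation details, but a minimal PyTorch sketch of an attention block that models inter-dependencies across both spatial and channel dimensions, in the spirit of the GSB described above, could look like this (module name, shapes, and the residual weighting are illustrative assumptions, not the authors' code):

```python
# Minimal sketch of an attention block that models both spatial and channel
# inter-dependencies. All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class GeneralizedSelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # --- spatial attention: each position attends to every other position ---
        q = self.query(x).flatten(2).transpose(1, 2)           # (b, hw, c//8)
        k = self.key(x).flatten(2)                              # (b, c//8, hw)
        v = self.value(x).flatten(2)                            # (b, c, hw)
        spatial = torch.softmax(q @ k, dim=-1)                  # (b, hw, hw)
        spatial_out = (v @ spatial.transpose(1, 2)).view(b, c, h, w)
        # --- channel attention: each channel attends to every other channel ---
        feat = x.flatten(2)                                     # (b, c, hw)
        channel = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)  # (b, c, c)
        channel_out = (channel @ feat).view(b, c, h, w)
        # combine both dependency maps with a residual connection
        return x + self.gamma * (spatial_out + channel_out)


if __name__ == "__main__":
    block = GeneralizedSelfAttention(channels=64)
    features = torch.randn(2, 64, 16, 16)    # backbone feature map
    print(block(features).shape)             # torch.Size([2, 64, 16, 16])
```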

2.
Front Neuroinform ; 18: 1403732, 2024.
Article in English | MEDLINE | ID: mdl-39139696

ABSTRACT

Introduction: Brain diseases, particularly the classification of gliomas and brain metastases and the prediction of hemorrhagic transformation (HT) in strokes, pose significant challenges in healthcare. Existing methods, relying predominantly on clinical data or imaging-based techniques such as radiomics, often fall short of satisfactory classification accuracy. They fail to adequately capture the nuanced features crucial for accurate diagnosis and are often hindered by noise and an inability to integrate information across scales. Methods: We propose a novel approach, termed M3, that combines mask attention mechanisms with multi-scale feature fusion for multimodal brain disease classification, aiming to extract features highly relevant to the disease. The extracted features are dimensionally reduced using Principal Component Analysis (PCA) and then classified with a Support Vector Machine (SVM) to obtain the predictive results. Results: Our methodology underwent rigorous testing on multi-parametric MRI datasets for both brain tumors and strokes. The results demonstrate a significant improvement in addressing critical clinical challenges, including the classification of gliomas and brain metastases and the prediction of hemorrhagic transformation in stroke. Ablation studies further validate the effectiveness of our attention mechanism and feature fusion modules. Discussion: These findings underscore the potential of our approach to meet and exceed current clinical diagnostic demands, offering promising prospects for enhancing healthcare outcomes in the diagnosis and treatment of brain diseases.
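
As one illustration of the final classification stage described above (deep features reduced with PCA and classified with an SVM), the following scikit-learn sketch uses random stand-in features; the feature dimension and SVM settings are assumptions:

```python
# Sketch of the final classification stage: deep features reduced with PCA,
# then classified with an SVM. The feature matrix is random stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))        # 300 cases x 512 deep features (placeholder)
y = rng.integers(0, 2, size=300)       # 0 = glioma, 1 = metastasis (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```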

3.
J Neurosci Methods ; 410: 110227, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39038716

ABSTRACT

BACKGROUND: Accurately diagnosing brain tumors from MRI scans is crucial for effective treatment planning. While traditional methods heavily rely on radiologist expertise, the integration of AI, particularly Convolutional Neural Networks (CNNs), has shown promise in improving accuracy. However, the lack of transparency in AI decision-making processes presents a challenge for clinical adoption. METHODS: Recent advancements in deep learning, particularly the utilization of CNNs, have facilitated the development of models for medical image analysis. In this study, we employed the EfficientNetB0 architecture and integrated explainable AI techniques to enhance both accuracy and interpretability. Grad-CAM visualization was utilized to highlight significant areas in MRI scans influencing classification decisions. RESULTS: Our model achieved a classification accuracy of 98.72% across four categories of brain tumors (Glioma, Meningioma, No Tumor, Pituitary), with precision and recall exceeding 97% for all categories. The incorporation of explainable AI techniques was validated through visual inspection of Grad-CAM heatmaps, which aligned well with established diagnostic markers in MRI scans. CONCLUSION: The AI-enhanced EfficientNetB0 framework with explainable AI techniques significantly improves brain tumor classification accuracy to 98.72%, offering clear visual insights into the decision-making process. This method enhances diagnostic reliability and trust, demonstrating substantial potential for clinical adoption in medical diagnostics.
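
A hedged sketch of how Grad-CAM heatmaps can be produced over a torchvision EfficientNetB0 backbone is shown below; the target layer, preprocessing, and class selection are assumptions rather than the paper's exact code:

```python
# Sketch of Grad-CAM over an EfficientNetB0 backbone (torchvision).
# Target layer choice and preprocessing are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.efficientnet_b0(weights=None)   # in practice: fine-tuned weights
model.eval()

activations, gradients = {}, {}
target_layer = model.features[-1]               # last convolutional block

def fwd_hook(_, __, output):
    activations["value"] = output
    # capture the gradient flowing back into this feature map
    output.register_hook(lambda grad: gradients.update(value=grad))

target_layer.register_forward_hook(fwd_hook)

image = torch.randn(1, 3, 224, 224)             # placeholder for a preprocessed MRI slice
scores = model(image)
scores[0, scores.argmax()].backward()           # gradient of the predicted class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]) heatmap to overlay on the MRI slice
```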


Subject(s)
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Meningioma/diagnostic imaging, Glioma/diagnostic imaging, Neuroimaging/methods, Neuroimaging/standards, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer
4.
Electromagn Biol Med ; : 1-15, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39081005

ABSTRACT

Efficient and accurate classification of brain tumor categories remains a critical challenge in medical imaging. While existing techniques have made strides, their reliance on generic features often leads to suboptimal results. To overcome these issues, a Multimodal Contrastive Domain Sharing Generative Adversarial Network for Improved Brain Tumor Classification Based on Efficient Invariant Feature Centric Growth Analysis (MCDS-GNN-IBTC-CGA) is proposed in this manuscript. Here, the input images are gathered from a brain tumor dataset. The input images are then preprocessed using a Range-Doppler Matched Filter (RDMF) to improve image quality. Next, Ternary Pattern and Discrete Wavelet Transform (TPDWT) is employed for feature extraction, focusing on white-matter, gray-matter, edge-correlation, and depth features. The proposed method leverages the Multimodal Contrastive Domain Sharing Generative Adversarial Network (MCDS-GNN) to categorize brain tumor images into Glioma, Meningioma, and Pituitary tumors. Finally, the Coati Optimization Algorithm (COA) optimizes the MCDS-GNN weight parameters. The proposed MCDS-GNN-IBTC-CGA is empirically evaluated using accuracy, specificity, sensitivity, precision, F1-score, and Mean Square Error (MSE). MCDS-GNN-IBTC-CGA attains 12.75%, 11.39%, 13.35%, 11.42%, and 12.98% greater accuracy than existing state-of-the-art techniques, namely MRI brain tumor categorization using parallel deep convolutional neural networks (PDCNN-BTC), attention-guided convolutional neural network for brain tumor categorization (AGCNN-BTC), intelligent-driven deep residual learning for brain tumor categorization (DCRN-BTC), fully convolutional neural networks for brain tumor classification (FCNN-BTC), and Convolutional Neural Network and Multi-Layer Perceptron based brain tumor classification (CNN-MLP-BTC), respectively.


The proposed MCDS-GNN-IBTC-CGA method starts by cleaning brain tumor images with RDMF and extracting features using TPDWT, focusing on color and texture. Subsequently, the MCDS-GNN artificial intelligence system categorizes tumors into types like Glioma and Meningioma. To enhance accuracy, COA fine-tunes the MCDS-GNN parameters. Ultimately, this approach aids in more effective diagnosis and treatment of brain tumors.
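
The exact RDMF and ternary-pattern steps are not specified in the abstract; the following PyWavelets sketch only illustrates the discrete-wavelet-transform part of such a feature-extraction stage, with sub-band statistics chosen for illustration:

```python
# Illustrative sketch of the wavelet part of the feature-extraction step.
# The ternary-pattern component and the exact sub-band statistics are
# assumptions; this is not the authors' pipeline.
import numpy as np
import pywt

def dwt_features(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Return simple statistics of the 2-D DWT sub-bands of a grayscale slice."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)      # approximation + detail bands
    stats = []
    for band in (cA, cH, cV, cD):
        stats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.asarray(stats)

slice_ = np.random.rand(256, 256)                      # placeholder MRI slice
print(dwt_features(slice_).shape)                      # (12,) feature vector
```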

5.
Cureus ; 16(6): e61483, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38952601

ABSTRACT

This research study explores the effectiveness of a machine learning image classification model in the accurate identification of various types of brain tumors. The types of tumors under consideration in this study are gliomas, meningiomas, and pituitary tumors. These are some of the most common types of brain tumors and pose significant challenges in terms of accurate diagnosis and treatment. The machine learning model that is the focus of this study is built on the Google Teachable Machine platform (Alphabet Inc., Mountain View, CA). Google Teachable Machine is a machine learning image classification platform built on TensorFlow, a popular open-source platform for machine learning. The Google Teachable Machine model was specifically evaluated for its ability to differentiate between normal brains and the aforementioned types of tumors in MRI images. MRI images are a common tool in the diagnosis of brain tumors, but the challenge lies in the accurate classification of the tumors. This is where the machine learning model comes into play: the model is trained to recognize patterns in the MRI images that correspond to the different types of tumors. The performance of the machine learning model was assessed using several metrics, including precision, recall, and F1 score, generated from a confusion matrix analysis and performance graphs. A confusion matrix is a table that is often used to describe the performance of a classification model. Precision is a measure of the model's ability to correctly identify positive instances among all instances it identified as positive. Recall, on the other hand, measures the model's ability to correctly identify positive instances among all actual positive instances. The F1 score combines precision and recall, providing a single metric for model performance. The results of the study were promising: the Google Teachable Machine model demonstrated high performance, with accuracy, precision, recall, and F1 scores ranging between 0.84 and 1.00. This suggests that the model is highly effective in accurately classifying the different types of brain tumors. This study provides insights into the potential of machine learning models in the accurate classification of brain tumors. The findings lay the groundwork for further research in this area and have implications for the diagnosis and treatment of brain tumors. The study also highlights the potential of machine learning in enhancing the field of medical imaging and diagnosis. With the increasing complexity and volume of medical data, machine learning models like the one evaluated in this study could play a crucial role in improving the accuracy and efficiency of diagnoses. Furthermore, the study underscores the importance of continued research and development in this field to further refine these models and overcome any potential limitations or challenges. Overall, the study contributes to the field of medical imaging and machine learning and sets the stage for future research and advancements in this area.
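
For readers unfamiliar with these metrics, the following scikit-learn sketch shows how precision, recall, and F1 scores are derived from a confusion matrix; the labels and predictions are made up for illustration:

```python
# Sketch of deriving precision, recall, and F1 from a confusion matrix.
# Labels and predictions below are invented for illustration.
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

labels = ["normal", "glioma", "meningioma", "pituitary"]
y_true = ["glioma", "glioma", "normal", "pituitary", "meningioma", "normal"]
y_pred = ["glioma", "meningioma", "normal", "pituitary", "meningioma", "normal"]

print(confusion_matrix(y_true, y_pred, labels=labels))
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0
)
for name, p, r, f in zip(labels, precision, recall, f1):
    print(f"{name}: precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```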

6.
Front Neuroinform ; 18: 1414925, 2024.
Article in English | MEDLINE | ID: mdl-38957549

ABSTRACT

Background: The Rotation Invariant Vision Transformer (RViT) is a novel deep learning model tailored for brain tumor classification using MRI scans. Methods: RViT incorporates rotated patch embeddings to enhance the accuracy of brain tumor identification. Results: Evaluation on the Brain Tumor MRI Dataset from Kaggle demonstrates RViT's superior performance with sensitivity (1.0), specificity (0.975), F1-score (0.984), Matthew's Correlation Coefficient (MCC) (0.972), and an overall accuracy of 0.986. Conclusion: RViT outperforms the standard Vision Transformer model and several existing techniques, highlighting its efficacy in medical imaging. The study confirms that integrating rotational patch embeddings improves the model's capability to handle diverse orientations, a common challenge in tumor imaging. The specialized architecture and rotational invariance approach of RViT have the potential to enhance current methodologies for brain tumor detection and extend to other complex imaging tasks.
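
The exact rotated-patch-embedding mechanism of RViT is not described in the abstract; the sketch below shows one plausible way to make ViT-style patch embeddings orientation-robust by averaging embeddings over four 90-degree rotations (an assumption, not the authors' design):

```python
# Sketch of orientation-robust patch embeddings: embed the image at four
# 90-degree rotations and average the token sequences. Illustrative only;
# the actual RViT mechanism may differ.
import torch
import torch.nn as nn


class RotationAveragedPatchEmbed(nn.Module):
    def __init__(self, in_ch=1, embed_dim=64, patch=16):
        super().__init__()
        # standard ViT-style patch embedding: non-overlapping conv projection
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        embeds = []
        for k in range(4):                                # 0, 90, 180, 270 degrees
            rotated = torch.rot90(x, k, dims=(2, 3))
            tokens = self.proj(rotated).flatten(2).transpose(1, 2)  # (b, n, d)
            embeds.append(tokens)
        return torch.stack(embeds).mean(dim=0)            # orientation-averaged tokens


tokens = RotationAveragedPatchEmbed()(torch.randn(2, 1, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 64])
```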

7.
Biomedicines ; 12(7)2024 Jun 23.
Article in English | MEDLINE | ID: mdl-39061969

ABSTRACT

Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks significantly capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, pituitary, and healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.

8.
Neuro Oncol ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38912846

ABSTRACT

The 2016 and 2021 World Health Organization (WHO) Classifications of Central Nervous System (CNS) tumors have resulted in a major improvement in the classification of IDH-mutant gliomas. With more effective treatments, many patients experience prolonged survival. However, treatment guidelines are often still based on information from historical series comprising both patients with IDH-wildtype and IDH-mutant tumors. They provide recommendations for radiotherapy and chemotherapy for so-called high-risk patients, usually based on residual tumor after surgery and age over 40 years. More up-to-date studies give better insight into the clinical, radiological, and molecular factors associated with outcome in patients with IDH-mutant glioma. These insights should be used today for risk stratification and treatment decisions. In many patients with an IDH-mutant grade 2 or grade 3 glioma, postponing radiotherapy and chemotherapy is safe if the patient is carefully monitored, and will not jeopardize overall outcome. With the INDIGO trial showing patient benefit from the IDH inhibitor vorasidenib, there is a sizable population in which it seems reasonable to try this class of agents before recommending radio-chemotherapy, with its delayed adverse event profile affecting quality of survival. Ongoing trials should help to further identify the patients who benefit from this treatment.

9.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring various approaches based on image processing, machine learning, and deep learning. Furthermore, our study aims to review existing methodologies, discuss their advantages and limitations, and highlight recent advancements in this field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, our proposed study highlights the challenges related to segmentation and classification techniques and datasets having various MRI modalities to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subject(s)
Brain Neoplasms, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Deep Learning, Image Interpretation, Computer-Assisted/methods, Brain/diagnostic imaging, Machine Learning, Image Processing, Computer-Assisted/methods, Neuroimaging/methods
10.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. This approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model architecture benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately and highlighting the transformative potential of federated learning and transfer learning for brain tumor classification using MRI images.
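
A minimal sketch of the federated-averaging idea underlying such decentralized training is given below; the toy model, equal client weighting, and the absence of secure aggregation are simplifying assumptions:

```python
# Minimal sketch of federated averaging (FedAvg) over client model weights.
# Client data never leaves the client; only state dicts are averaged.
import copy
import torch
import torch.nn as nn

def federated_average(client_states):
    """Average a list of state_dicts element-wise (equal client weighting)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        stacked = torch.stack([state[key].float() for state in client_states])
        avg[key] = stacked.mean(dim=0)
    return avg

# toy "clients": same architecture, locally trained copies (here just fresh inits)
base = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))
clients = [copy.deepcopy(base) for _ in range(3)]
global_state = federated_average([c.state_dict() for c in clients])
base.load_state_dict(global_state)                     # updated global model
```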


Subject(s)
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Machine Learning, Image Interpretation, Computer-Assisted/methods
11.
Front Oncol ; 14: 1363756, 2024.
Article in English | MEDLINE | ID: mdl-38746679

ABSTRACT

Objectives: The diagnosis and treatment of brain tumors have greatly benefited from extensive research in traditional radiomics, leading to improved efficiency for clinicians. With the rapid development of cutting-edge technologies, especially deep learning, further improvements in accuracy and automation are expected. In this study, we explored a hybrid deep learning scheme that integrates several advanced techniques to achieve reliable diagnosis of primary brain tumors with enhanced classification performance and interpretability. Methods: This study retrospectively included 230 patients with primary brain tumors, including 97 meningiomas, 66 gliomas, and 67 pituitary tumors, from the First Affiliated Hospital of Yangtze University. The effectiveness of the proposed scheme was validated on the included data and on a commonly used public dataset. Based on super-resolution reconstruction and dynamic learning rate annealing strategies, we compared the classification results of several deep learning models. The multi-classification performance was further improved by combining feature transfer and machine learning. Classification performance metrics included accuracy (ACC), area under the curve (AUC), sensitivity (SEN), and specificity (SPE). Results: In the deep learning tests conducted on the two datasets, the DenseNet121 model achieved the highest classification performance, with five-test accuracies of 0.989 ± 0.006 and 0.967 ± 0.013, and AUCs of 0.999 ± 0.001 and 0.994 ± 0.005, respectively. In the hybrid deep learning tests, LightGBM, a promising classifier, achieved accuracies of 0.989 and 0.984, improving on the 0.987 and 0.965 of the original deep learning scheme. Sensitivities for both datasets were 0.985, specificities were 0.988 and 0.984, respectively, and relatively desirable receiver operating characteristic (ROC) curves were obtained. In addition, model visualization studies further verified the reliability and interpretability of the results. Conclusions: These results illustrate that deep learning models combining several advanced technologies can reliably improve the performance, automation, and interpretability of primary brain tumor diagnosis, which is crucial for further brain tumor diagnostic research and individualized treatment.
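
The feature-transfer-plus-machine-learning step can be illustrated as below, with DenseNet121 features feeding a LightGBM classifier; the placeholder data and hyperparameters are assumptions, not the study's configuration:

```python
# Sketch of "feature transfer + machine learning": pooled DenseNet121 features
# feed a LightGBM classifier. Data are random placeholders.
import torch
import numpy as np
from torchvision import models
from lightgbm import LGBMClassifier

backbone = models.densenet121(weights=None)
backbone.classifier = torch.nn.Identity()      # keep the 1024-d pooled features
backbone.eval()

with torch.no_grad():
    images = torch.randn(32, 3, 224, 224)      # placeholder MRI slices
    feats = backbone(images).numpy()           # (32, 1024) transferred features

labels = np.random.randint(0, 3, size=32)      # meningioma / glioma / pituitary (toy)
clf = LGBMClassifier(n_estimators=100, learning_rate=0.1)
clf.fit(feats, labels)
print(clf.predict(feats[:5]))
```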

12.
Diagnostics (Basel) ; 14(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38786294

ABSTRACT

Deep learning (DL) networks have shown attractive performance in medical image processing tasks such as brain tumor classification. However, they are often criticized as mysterious "black boxes": the opaqueness of the model and its reasoning process makes it difficult for health workers to decide whether to trust the prediction outcomes. In this study, we develop an interpretable multi-part attention network (IMPA-Net) for brain tumor classification to enhance the interpretability and trustworthiness of classification outcomes. The proposed model not only predicts the tumor grade but also provides a global explanation for model interpretability and a local explanation as justification for the proffered prediction. The global explanation is represented as a group of feature patterns that the model learns to distinguish high-grade glioma (HGG) and low-grade glioma (LGG) classes. The local explanation interprets the reasoning process of an individual prediction by calculating the similarity between the prototypical parts of the image and a group of pre-learned task-related features. Experiments conducted on the BraTS2017 dataset demonstrate that IMPA-Net is a verifiable model for the classification task. Of the feature patterns, 86% were assessed by two radiologists to be valid for representing task-relevant medical features. The model shows a classification accuracy of 92.12%, and 81.17% of its predictions were evaluated as trustworthy based on local explanations. Our interpretable model is a trustworthy model that can be used as a decision aid for glioma classification. Compared with black-box CNNs, it allows health workers and patients to understand the reasoning process and trust the prediction outcomes.
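
A conceptual sketch of a prototype-based local explanation, where part features are compared with pre-learned class prototypes, is shown below; the shapes, cosine similarity, and evidence pooling are illustrative assumptions rather than IMPA-Net's actual design:

```python
# Conceptual sketch of a prototype-based local explanation: similarity of an
# image's part features to pre-learned class prototypes supports the prediction.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
part_features = torch.randn(9, 128)        # features of 9 image parts (e.g. patches)
prototypes = torch.randn(10, 128)          # 10 learned prototypes (5 HGG, 5 LGG)
prototype_class = torch.tensor([0] * 5 + [1] * 5)   # 0 = HGG, 1 = LGG

# cosine similarity between every part and every prototype
sim = F.cosine_similarity(part_features.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
best_per_prototype = sim.max(dim=0).values           # how well each prototype is matched

# class evidence = summed similarity of that class's prototypes
evidence = torch.stack([best_per_prototype[prototype_class == c].sum() for c in (0, 1)])
print("HGG vs LGG evidence:", evidence.tolist())
print("predicted class:", int(evidence.argmax()))
```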

13.
Front Physiol ; 15: 1349111, 2024.
Article in English | MEDLINE | ID: mdl-38665597

ABSTRACT

Deep learning is a very important technique in clinical diagnosis and therapy in the present world. The Convolutional Neural Network (CNN) is a recent development in deep learning that is used in computer vision. Our medical investigation focuses on the identification of brain tumours. To improve brain tumour classification performance, a Balanced binary Tree CNN (BT-CNN), framed in a binary tree-like structure, is proposed. It has two distinct modules: the convolution group and the depthwise separable convolution group. The convolution group achieves lower time but higher memory usage, while the opposite is true for the depthwise separable convolution group. This balanced binary tree-inspired CNN balances both groups to achieve maximum performance in terms of time and space. The proposed model, along with state-of-the-art models like CNN-KNN and the models proposed by Musallam et al., Saikat et al., and Amin et al., is evaluated on public datasets. Before the data are fed into the model, the images are pre-processed using CLAHE, denoising, cropping, and scaling. The pre-processed dataset is partitioned into training and testing sets using 5-fold cross-validation. The proposed model is trained and its performance compared with the state-of-the-art models. The proposed model reported an average training accuracy of 99.61%, higher than the other models. The proposed model achieved 96.06% test accuracy, whereas the other models achieved 68.86%, 85.8%, 86.88%, and 90.41%, respectively. Further, the proposed model obtained the lowest standard deviation of training and test accuracies across all folds, making it stable across datasets.
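
The trade-off between the two block types can be illustrated with the following PyTorch sketch comparing a standard convolution with a depthwise separable convolution; channel counts are illustrative:

```python
# Sketch contrasting the two building blocks mentioned above: a standard
# convolution versus a depthwise separable convolution (depthwise + pointwise).
# Parameter counts illustrate the time/memory trade-off.
import torch
import torch.nn as nn

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)

depthwise_separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),   # depthwise: per-channel filter
    nn.Conv2d(64, 128, kernel_size=1),                        # pointwise: channel mixing
)

count = lambda m: sum(p.numel() for p in m.parameters())
print("standard conv params:           ", count(standard))             # 73,856
print("depthwise separable conv params:", count(depthwise_separable))  # 8,960

x = torch.randn(1, 64, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape
```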

14.
Front Neurosci ; 18: 1288274, 2024.
Article in English | MEDLINE | ID: mdl-38440396

ABSTRACT

Brain tumors can be classified into many different types based on their shape, texture, and location. Accurate diagnosis of brain tumor type can help doctors develop appropriate treatment plans to save patients' lives. Therefore, it is crucial to improve the accuracy of this classification system to assist doctors in treatment. We propose a deep feature fusion method based on convolutional neural networks to enhance the accuracy and robustness of brain tumor classification while mitigating the risk of over-fitting. Firstly, the extracted features of three pre-trained models, including ResNet101, DenseNet121, and EfficientNetB0, are adjusted to ensure that the shape of the extracted features is the same for all three models. Secondly, the three models are fine-tuned to extract features from brain tumor images. Thirdly, pairwise summation of the extracted features is carried out to achieve feature fusion. Finally, classification of brain tumors is performed on the fused features. The public datasets Figshare (Dataset 1) and Kaggle (Dataset 2) are used to verify the reliability of the proposed method. Experimental results demonstrate that fusing ResNet101 and DenseNet121 features achieves the best performance, with classification accuracies of 99.18% and 97.24% on the Figshare and Kaggle datasets, respectively.
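
A minimal sketch of this pairwise-summation fusion with two torchvision backbones is given below; the projection dimension and classifier head are assumptions, not the paper's exact architecture:

```python
# Sketch of the fusion idea above: two backbones produce feature vectors of the
# same length, which are summed element-wise before the classifier head.
import torch
import torch.nn as nn
from torchvision import models

class FusedClassifier(nn.Module):
    def __init__(self, num_classes=4, dim=512):
        super().__init__()
        self.resnet = models.resnet101(weights=None)
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, dim)           # 2048 -> dim
        self.densenet = models.densenet121(weights=None)
        self.densenet.classifier = nn.Linear(self.densenet.classifier.in_features, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        fused = self.resnet(x) + self.densenet(x)   # element-wise (pairwise) summation
        return self.head(fused)

logits = FusedClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```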

15.
Heliyon ; 10(3): e25468, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38352765

ABSTRACT

Brain tumors are a diverse group of neoplasms that are challenging to detect and classify due to their varying characteristics. Deep learning techniques have proven to be effective in tumor classification. However, there is a lack of studies that compare these techniques using a common methodology. This work aims to analyze the performance of convolutional neural networks in the classification of brain tumors. We propose a network consisting of a few convolutional layers, batch normalization, and max-pooling. Then, we explore recent deep architectures, such as VGG, ResNet, EfficientNet, and ConvNeXt. The study relies on two magnetic resonance imaging datasets with over 3000 images of three types of tumors (gliomas, meningiomas, and pituitary tumors), as well as images without tumors. We determine the optimal hyperparameters of the networks using the training and validation sets. The training and test sets are used to assess the performance of the models from different perspectives, including training from scratch, data augmentation, transfer learning, and fine-tuning. The experiments are performed using the TensorFlow and Keras libraries in Python. We compare the accuracy of the models and analyze their complexity based on the capacity of the networks, their training times, and image throughput. Several networks achieve high accuracy rates on both datasets, with the best model achieving 98.7% accuracy, which is on par with state-of-the-art methods. The average precision for each type of tumor is 94.3% for gliomas, 93.8% for meningiomas, 97.9% for pituitary tumors, and 95.3% for images without tumors. VGG is the largest model with over 171 million parameters, whereas MobileNet and EfficientNetB0 are the smallest with 3.2 and 5.9 million parameters, respectively. These two networks are also the fastest to train, at 23.7 and 25.4 seconds per epoch, respectively. On the other hand, ConvNeXt is the slowest model at 58.2 seconds per epoch. Our custom model obtained the highest image throughput with 234.37 images per second, followed by MobileNet with 226 images per second. ConvNeXt obtained the smallest throughput with 97.35 images per second. ResNet, MobileNet, and EfficientNet are the most accurate networks, with MobileNet and EfficientNet demonstrating superior performance in terms of complexity. Most models achieve the best accuracy using transfer learning followed by a fine-tuning step. However, data augmentation generally does not contribute to increasing the accuracy of the models.
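
The transfer-learning-then-fine-tuning recipe can be sketched in TensorFlow/Keras as follows; the backbone choice, image size, and hyperparameters are placeholders rather than the study's exact setup:

```python
# Sketch of transfer learning followed by fine-tuning in TensorFlow/Keras.
# Dataset loading, image size, and hyperparameters are placeholders.
from tensorflow import keras

base = keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                   # stage 1: frozen backbone

model = keras.Sequential([
    base,
    keras.layers.Dropout(0.3),
    keras.layers.Dense(4, activation="softmax"),         # glioma/meningioma/pituitary/none
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train the new head

# stage 2: fine-tune the whole network with a small learning rate
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```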

16.
Diagnostics (Basel) ; 14(4)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38396422

ABSTRACT

Brain tumors can have fatal consequences, affecting many body functions. For this reason, it is essential to detect brain tumor types accurately and at an early stage to start the appropriate treatment process. Although convolutional neural networks (CNNs) are widely used in disease detection from medical images, they face the problem of overfitting in the training phase on limited labeled and insufficiently diverse datasets. The existing studies use transfer learning and ensemble models to overcome these problems. When the existing studies are examined, it is evident that there is a lack of models and weight ratios that will be used with the ensemble technique. With the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm determined the optimum weights for combining the five most successful CNN models with the ensemble technique. The results across three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20 F1-score; Dataset 2, 98.77% accuracy and 98.92 F1-score; and Dataset 3, 99.92% accuracy and 99.92 F1-score. We achieved successful performances on three brain tumor datasets, showing that the proposed framework is reliable in classification. As a result, the proposed framework outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance.
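
The weighted-ensemble step can be sketched as below, combining per-model softmax outputs with a weight vector (found by particle swarm optimization in the paper; fixed here for illustration):

```python
# Sketch of a weighted ensemble: per-model softmax probabilities are combined
# with weights (in the paper found by PSO; fixed here) and argmax gives the class.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes, n_models = 5, 4, 5

# placeholder softmax outputs of five fine-tuned CNNs, shape (models, samples, classes)
probs = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))

weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])        # would come from PSO
weights = weights / weights.sum()

ensemble = np.tensordot(weights, probs, axes=([0], [0]))  # (samples, classes)
print("ensemble predictions:", ensemble.argmax(axis=1))
```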

17.
Acta Neuropathol Commun ; 12(1): 9, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38229158

ABSTRACT

DNA methylation analysis has become a powerful tool in neuropathology. Although DNA methylation-based classification usually shows high accuracy, certain samples cannot be classified and remain clinically challenging. We aimed to gain insight into these cases from a clinical perspective. To address this, central nervous system (CNS) tumors were subjected to DNA methylation profiling and classified according to their calibrated score using the DKFZ brain tumor classifier (V11.4) as "≥ 0.84" (score ≥ 0.84), "0.3-0.84" (score 0.3-0.84), or "< 0.3" (score < 0.3). Histopathology, patient characteristics, DNA input amount, and tumor purity were correlated. Clinical outcome parameters were time to treatment decision, progression-free survival, and overall survival. In 1481 patients, the classifier identified 69 (4.6%) tumors with an unreliable score ("< 0.3"). Younger age (P < 0.01) and lower tumor purity (P < 0.01) compromised accurate classification. A clinical impact was demonstrated, as unclassifiable cases ("< 0.3") had a longer time to treatment decision (P < 0.0001). In a subset of glioblastomas, these cases experienced an increased time to adjuvant treatment start (P < 0.001) and unfavorable survival (P < 0.025). Although DNA methylation profiling adds an important contribution to CNS tumor diagnostics, clinicians should be aware of a potentially longer time to treatment initiation, especially in malignant brain tumors.


Subject(s)
Brain Neoplasms, Central Nervous System Neoplasms, Humans, DNA Methylation, Prognosis, Retrospective Studies, Central Nervous System Neoplasms/diagnosis, Central Nervous System Neoplasms/genetics, Brain Neoplasms/diagnosis, Brain Neoplasms/genetics, Brain Neoplasms/pathology
18.
Brain Pathol ; 34(3): e13228, 2024 May.
Article in English | MEDLINE | ID: mdl-38012085

ABSTRACT

The current state-of-the-art analysis of central nervous system (CNS) tumors through DNA methylation profiling relies on the tumor classifier developed by Capper and colleagues, which centrally harnesses DNA methylation data provided by users. Here, we present a distributed-computing-based approach for CNS tumor classification that achieves performance comparable to centralized systems while safeguarding privacy. We utilize the t-distributed stochastic neighbor embedding (t-SNE) model for dimensionality reduction and visualization of tumor classification results in two-dimensional graphs in a distributed approach across multiple sites (DistSNE). DistSNE provides an intuitive web interface (https://gin-tsne.med.uni-giessen.de) for user-friendly local data management and federated methylome-based tumor classification calculations for multiple collaborators in a DataSHIELD environment. The freely accessible web interface supports convenient data upload, result review, and summary report generation. Importantly, the increased sample size achieved through distributed access to additional datasets allows DistSNE to improve cluster analysis and enhance predictive power. Collectively, DistSNE enables simple and fast classification of CNS tumors using large-scale methylation data from distributed sources, while maintaining privacy and allowing easy and flexible network expansion to other institutes. This approach holds great potential for advancing human brain tumor classification and fostering collaborative precision medicine in neuro-oncology.
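
The dimensionality-reduction step named above can be sketched with scikit-learn's t-SNE as follows; the random placeholder matrix stands in for methylome beta values:

```python
# Sketch of embedding high-dimensional methylation profiles into 2-D with t-SNE
# for visualization. The data here are random placeholders, not methylome values.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
profiles = rng.normal(size=(200, 5000))      # 200 samples x 5000 CpG beta values (toy)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(profiles)
print(embedding.shape)                        # (200, 2) coordinates for a 2-D scatter plot
```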


Subject(s)
Brain Neoplasms, Central Nervous System Neoplasms, Humans, DNA Methylation, Central Nervous System Neoplasms/genetics, Brain Neoplasms/genetics
19.
J Biomol Struct Dyn ; : 1-12, 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37979152

ABSTRACT

There has been an abrupt increase in brain tumor (BT)-related medical cases during the past ten years. The BT is the tenth most common type of tumor, affecting millions of people. The cure rate can, however, rise if it is found early. MRI is a crucial tool when evaluating BT diagnosis and treatment options. However, segmenting the tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to the development of numerous automatic segmentation and classification approaches. However, most need improvement since they are limited to 2D images. This article therefore proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. Tumor segmentation uses a spatial and channel attention-based three-dimensional U-shaped network (SC3DUNet) to segment the tumor lesions from the preprocessed data. After that, feature extraction is performed with a dilated convolution-based Visual Geometry Group-19 (DCVGG-19) network, making the classification task more manageable. The optimal features are selected from the extracted feature sets using the diagonal linear uniform and tangent flight-included butterfly optimization algorithm. Finally, the proposed system applies a deep neural network with optimal hyperparameters to classify the tumor classes. The experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than existing state-of-the-art mechanisms.

20.
Diagnostics (Basel) ; 13(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37892055

ABSTRACT

Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations like restricted depth, vanishing gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip connections-based residual network (ESRNet), leveraging the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing gradient problem. Additionally, the ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping. Through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in terms of accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. Therefore, the proposed ESRNet showcases exceptional performance and efficiency in BTC, holding the potential to revolutionize clinical diagnosis and treatment planning.
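
The skip-connection mechanism the abstract describes (direct addition of the input tensor to the convolutional output) corresponds to a standard residual block; a minimal PyTorch sketch is given below, with sizes chosen for illustration:

```python
# Sketch of a residual block whose skip connection adds the input tensor
# directly to the convolutional output (identity mapping), preserving
# gradient flow. Sizes are illustrative, not ESRNet's exact configuration.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),                  # stabilizing batch normalization
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + x)             # skip connection: identity mapping

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)                      # torch.Size([1, 64, 56, 56])
```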
