Results 1 - 20 of 34,910
1.
Sci Rep ; 14(1): 22811, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354013

ABSTRACT

The objective was to assess the precision and reproducibility of spatial penalty-based intravoxel incoherent motion (IVIM) methods in comparison to conventional bi-exponential (BE) model-based IVIM methods. IVIM-MRI (11 b-values; 0-800 s/mm²) of forty patients (N = 40; age = 17.7 ± 5.9 years; male:female = 30:10) with biopsy-proven osteosarcoma was acquired on a 1.5 Tesla scanner at three time-points: (i) baseline, (ii) after 1 cycle, and (iii) after 3 cycles of neoadjuvant chemotherapy. The diffusion coefficient (D), perfusion coefficient (D*), and perfusion fraction (f) were estimated at the three time-points in whole tumor and healthy muscle tissue using five methodologies: (1) BE with three-parameter fitting (BE), (2) segmented BE with two-parameter fitting (BESeg-2), (3) segmented BE with one-parameter fitting (BESeg-1), (4) BE with an adaptive total-variation penalty (BE + TV), and (5) BE with an adaptive Huber penalty (BE + HPF). The within-subject coefficient of variation (wCV) and between-subject coefficient of variation (bCV) of IVIM parameters were measured in healthy and tumor tissue. To assess precision and reproducibility, intra-scan comparisons of wCV and bCV among the five IVIM methods were performed using the Friedman test followed by the Wilcoxon signed-rank (WSR) post-hoc test. Experimental results demonstrated that BE + TV and BE + HPF showed significantly (p < 10⁻³) lower wCV and bCV for D (wCV: 24-32%; bCV: 22-31%) than the BE method (wCV: 38-49%; bCV: 36-46%) across the three time-points in healthy muscle and tumor. BE + TV and BE + HPF also demonstrated significantly (p < 10⁻³) lower wCV and bCV for estimating D* (wCV: 89-108%; bCV: 83-102%) and f (wCV: 55-60%; bCV: 56-60%) than the BE, BESeg-2, and BESeg-1 methods (D* wCV: 102-122%; D* bCV: 98-114%; f wCV: 96-130%; f bCV: 94-125%) in both tumor and healthy tissue across the three time-points.
The spatial penalty-based IVIM analysis methods BE + TV and BE + HPF demonstrated lower variability and improved precision and reproducibility in the current clinical setting.
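For reference, the conventional three-parameter bi-exponential (BE) model that the penalty-based methods are compared against can be fitted per voxel as in this minimal sketch; the b-values and parameter values below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_biexp(b, f, d_star, d):
    """Normalized IVIM signal: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# Illustrative 11-point b-value scheme spanning 0-800 s/mm^2
b_values = np.array([0, 10, 20, 50, 80, 100, 200, 400, 600, 700, 800], dtype=float)

def fit_ivim(signal, b=b_values):
    """Three-parameter BE fit of (f, D*, D); bounds keep D* above D,
    since the perfusion compartment decays much faster."""
    p0 = [0.1, 0.02, 0.001]
    bounds = ([0.0, 1e-3, 1e-5], [1.0, 1.0, 5e-3])
    popt, _ = curve_fit(ivim_biexp, b, signal, p0=p0, bounds=bounds)
    return popt  # f, D*, D

# Synthetic noise-free voxel with f=0.15, D*=0.02, D=0.0012 (mm^2/s)
sig = ivim_biexp(b_values, 0.15, 0.02, 0.0012)
f_hat, dstar_hat, d_hat = fit_ivim(sig)
```

The spatial-penalty variants add a regularization term across neighboring voxels on top of this per-voxel objective, which is what stabilizes D* and f.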


Subjects
Diffusion Magnetic Resonance Imaging , Osteosarcoma , Humans , Male , Diffusion Magnetic Resonance Imaging/methods , Female , Reproducibility of Results , Adolescent , Osteosarcoma/diagnostic imaging , Adult , Young Adult , Child , Image Processing, Computer-Assisted/methods , Bone Neoplasms/diagnostic imaging , Motion , Image Interpretation, Computer-Assisted/methods
2.
Sci Rep ; 14(1): 22797, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354009

ABSTRACT

Brain tumors, characterized by uncontrolled cell growth in the central nervous system, present substantial challenges in medical diagnosis and treatment. Early and accurate detection is essential for effective intervention. This study aims to enhance the detection and classification of brain tumors in Magnetic Resonance Imaging (MRI) scans using an innovative framework combining Vision Transformer (ViT) and Gated Recurrent Unit (GRU) models. We utilized primary MRI data from Bangabandhu Sheikh Mujib Medical College Hospital (BSMMCH) in Faridpur, Bangladesh. Our hybrid ViT-GRU model extracts essential features via ViT and identifies relationships between these features using GRU, addressing class imbalance and outperforming existing diagnostic methods. We extensively preprocessed the dataset, trained the model using various optimizers (SGD, Adam, AdamW), and evaluated it through rigorous 10-fold cross-validation. Additionally, we incorporated Explainable Artificial Intelligence (XAI) techniques (attention maps, SHAP, and LIME) to enhance the interpretability of the model's predictions. On the primary dataset, BrTMHD-2023, the ViT-GRU model achieved precision, recall, and F1-score of 97%. The highest accuracies obtained with the SGD, Adam, and AdamW optimizers were 81.66%, 96.56%, and 98.97%, respectively. Our model outperformed existing transfer learning models by 1.26%, as validated through comparative analysis and cross-validation. The proposed model also performed well on a separate brain tumor Kaggle dataset, outperforming existing research on that dataset with 96.08% accuracy. The proposed ViT-GRU framework significantly improves the detection and classification of brain tumors in MRI scans. The integration of XAI techniques enhances the model's transparency and reliability, fostering trust among clinicians and facilitating clinical application.
Future work will expand the dataset and apply findings to real-time diagnostic devices, advancing the field.
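The abstract does not give implementation details, but the core idea of feeding ViT token features through a GRU to relate them sequentially can be sketched with a single hand-written GRU cell; the dimensions, random weights, and token count below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step: update gate z, reset gate r, candidate state n."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])
    return (1 - z) * h + z * n

d_feat, d_hid = 768, 128  # typical ViT token width; hidden size is illustrative
W = {k: rng.normal(0, 0.02, (d_hid, d_feat)) for k in "zrn"}
U = {k: rng.normal(0, 0.02, (d_hid, d_hid)) for k in "zrn"}
b = {k: np.zeros(d_hid) for k in "zrn"}

tokens = rng.normal(size=(197, d_feat))  # e.g. 196 patch tokens + [CLS] from a ViT
h = np.zeros(d_hid)
for x in tokens:                          # GRU scans the token sequence
    h = gru_cell(x, h, W, U, b)
# h now summarizes relationships between the ViT features and can feed a classifier head
```

In a real pipeline the weights would be learned jointly with (or on top of) a pretrained ViT backbone rather than drawn at random.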


Subjects
Brain Neoplasms , Magnetic Resonance Imaging , Humans , Bangladesh , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Brain Neoplasms/pathology , Artificial Intelligence , Algorithms , Image Interpretation, Computer-Assisted/methods
3.
Sci Rep ; 14(1): 22754, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354128

ABSTRACT

Accurate and unbiased classification of breast lesions is pivotal for early diagnosis and treatment, and a deep learning approach can effectively represent and utilize the digital content of images for more precise medical image analysis. Breast ultrasound imaging is useful for detecting and distinguishing benign masses from malignant masses. Based on the different ways in which benign and malignant tumors affect neighboring tissues, i.e., the pattern of growth and border irregularities, the degree of penetration into adjacent tissue, and tissue-level changes, we investigated the relationship between breast cancer imaging features and the roles of intra- and peri-lesional tissues, and their impact on refining the performance of deep learning classification. The novelty of the proposed approach lies in considering both the features extracted from the tissue inside the tumor (obtained by an erosion operation) and those from the lesion plus surrounding tissue (obtained by a dilation operation) for classification. This study uses these new features and three pre-trained deep neural networks to address the challenge of breast lesion classification in ultrasound images. To improve classification accuracy and model interpretability, the proposed model leverages transfer learning to accelerate the training process. Three modern pre-trained CNN architectures (MobileNetV2, VGG16, and EfficientNetB7) are used for transfer learning and fine-tuned for optimization. Because neural networks can produce erroneous outputs in the presence of noisy images, variations in input data, or adversarial attacks, the proposed system uses the BUS-BRA database (two classes, benign and malignant) for training and testing and the unseen BUSI database (two classes, benign and malignant) for testing. Extensive experiments recorded accuracy and AUC as performance metrics.
The results indicate that the proposed system outperforms existing breast cancer detection algorithms reported in the literature. AUC values of 1.00 were obtained for VGG16 and EfficientNetB7 in the dilation cases. The proposed approach will facilitate this challenging and time-consuming classification task.
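The erosion/dilation idea described above can be sketched with standard morphological operations; the number of iterations and the toy lesion mask are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def inner_and_outer_rois(lesion_mask, iterations=5):
    """Eroded mask keeps only tissue strictly inside the tumor; dilated mask
    adds the peri-lesional rim. Each region then feeds the CNN feature extractor."""
    inner = binary_erosion(lesion_mask, iterations=iterations)
    outer = binary_dilation(lesion_mask, iterations=iterations)
    return inner, outer

# Toy lesion: a filled disk in a 64x64 ultrasound-sized crop
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
inner, outer = inner_and_outer_rois(mask)
```

Since erosion is contained in the original mask and the mask is contained in its dilation, the three regions are strictly nested, which is what lets the classifier compare inside-tumor features against lesion-plus-surroundings features.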


Subjects
Breast Neoplasms , Deep Learning , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Breast Neoplasms/classification , Breast Neoplasms/diagnosis , Female , Neural Networks, Computer , Ultrasonography, Mammary/methods , Breast/diagnostic imaging , Breast/pathology , Image Interpretation, Computer-Assisted/methods , Algorithms
4.
Skin Res Technol ; 30(9): e70040, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39221858

ABSTRACT

BACKGROUND: Skin cancer is one of the most common diseases in humans. Early detection and treatment are essential to reduce the harm caused by malignant lesions. Deep learning techniques are supplementary tools to assist clinical experts in detecting and localizing skin lesions. Vision transformer (ViT)-based multiclass image classification provides fairly accurate detection and is gaining popularity due to its legitimate multiclass prediction capabilities. MATERIALS AND METHODS: In this research, we propose a new ViT and Gradient-Weighted Class Activation Mapping (GradCAM) based architecture, named ViT-GradCAM, for detecting and classifying skin lesions by the spreading ratio on the lesion's surface area. The proposed system is trained and validated on the HAM10000 dataset, covering seven skin lesion classes. The database comprises 10,015 dermatoscopic images of varied sizes. Data preprocessing and data augmentation techniques are applied to overcome class imbalance and improve the model's performance. RESULT: The proposed ViT-based algorithm classifies the dermatoscopic images into seven classes with an accuracy of 97.28%, precision of 98.51%, recall of 95.2%, and F1 score of 94.6%. The proposed ViT-GradCAM obtains better and more accurate detection and classification than other state-of-the-art deep learning-based skin lesion detection models. The architecture of ViT-GradCAM is extensively visualized to highlight the actual pixels in essential regions associated with skin-specific pathologies. CONCLUSION: This research proposes an alternate solution to the challenges of detecting and classifying skin lesions using ViTs and GradCAM, which play a significant role in detecting and classifying skin lesions accurately rather than relying solely on opaque deep learning models.
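GradCAM itself reduces to a weighted, rectified sum of feature maps: each map is weighted by the spatial mean of the class score's gradient with respect to it. A minimal sketch, where random arrays stand in for a trained network's activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """GradCAM heatmap from feature maps and their gradients, both (K, H, W):
    alpha_k = spatial mean of gradient k; cam = ReLU(sum_k alpha_k * A_k)."""
    alphas = gradients.mean(axis=(1, 2))              # one weight per feature map
    cam = np.tensordot(alphas, activations, axes=1)   # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

rng = np.random.default_rng(1)
acts = rng.random((8, 14, 14))      # e.g. a ViT token grid reshaped to 14x14
grads = rng.normal(size=(8, 14, 14))
heatmap = grad_cam(acts, grads)     # upsample and overlay on the dermatoscopic image
```

In practice the heatmap is bilinearly upsampled to the input resolution and overlaid on the lesion image to highlight the pixels driving the prediction.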


Subjects
Algorithms , Deep Learning , Dermoscopy , Skin Neoplasms , Humans , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/classification , Skin Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Databases, Factual , Skin/diagnostic imaging , Skin/pathology
5.
PLoS One ; 19(9): e0310107, 2024.
Article in English | MEDLINE | ID: mdl-39264929

ABSTRACT

BACKGROUND: Regional wall motion abnormality (RWMA) serves as an early indicator of myocardial infarction (MI), the leading cause of mortality worldwide. Accurate and early detection of RWMA is vital for the successful treatment of MI. Current automated echocardiography analyses typically concentrate on peak values from left ventricular (LV) displacement curves, based on LV contour annotations or key frames during the heart's systolic or diastolic phases within a single echocardiographic cycle. This approach may overlook the rich motion field features available in multi-cycle cardiac data, which could enhance RWMA detection. METHODS: In this research, we put forward an innovative approach to detect RWMA by harnessing motion information across multiple echocardiographic cycles and multiple views. Our methodology combines U-Net-based segmentation with optical flow algorithms for detailed cardiac structure delineation, and temporal convolutional networks (ConvNets) to extract nuanced motion features. We apply a variety of machine learning and deep learning classifiers to both A2C and A4C view echocardiograms to enhance detection accuracy. A three-phase algorithm, developed on the HMC-QU dataset, incorporates U-Net for segmentation, followed by optical flow to extract cardiac wall motion field features. A temporal ConvNet, inspired by the Temporal Segment Network (TSN), is then applied to interpret these motion field features, independent of traditional cardiac parameter curves or specific key-phase frame inputs. RESULTS: Employing five-fold cross-validation, our SVM classifier demonstrated high performance, with a sensitivity of 93.13%, specificity of 83.61%, precision of 88.52%, and an F1 score of 90.39%. Compared with other studies using the HMC-QU dataset, these figures stand out, underlining our method's effectiveness.
The classifier also attained an overall accuracy of 89.25% and Area Under the Curve (AUC) of 95%, reinforcing its potential for reliable RWMA detection in echocardiographic analysis. CONCLUSIONS: This research not only demonstrates a novel technique but also contributes a more comprehensive and precise tool for early myocardial infarction diagnosis.
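The reported metrics follow the standard confusion-matrix definitions; the counts below are illustrative, not the study's actual confusion matrix:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: detected RWMA among true RWMA cases
    specificity = tn / (tn + fp)   # correctly cleared normal segments
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, f1, accuracy

# Illustrative counts only
sens, spec, prec, f1, acc = classification_metrics(tp=149, fp=20, tn=102, fn=11)
```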


Subjects
Algorithms , Echocardiography , Machine Learning , Myocardial Infarction , Humans , Echocardiography/methods , Myocardial Infarction/diagnostic imaging , Myocardial Infarction/diagnosis , Neural Networks, Computer , Heart Ventricles/diagnostic imaging , Heart Ventricles/physiopathology , Male , Deep Learning , Image Interpretation, Computer-Assisted/methods , Female
6.
Sci Rep ; 14(1): 21984, 2024 09 20.
Article in English | MEDLINE | ID: mdl-39304708

ABSTRACT

The analysis and interpretation of cytopathological images are crucial in modern medical diagnostics. However, manually locating and identifying relevant cells in vast amounts of image data can be a daunting task. This challenge is particularly pronounced in developing countries, where there may be a shortage of medical expertise for such tasks. Because acquiring large amounts of high-quality labeled data remains difficult, many researchers have begun to use semi-supervised learning methods to learn from unlabeled data. Although current semi-supervised learning models partially address the issue of limited labeled data, they are inefficient in exploiting unlabeled samples. To address this, we introduce a new AI-assisted semi-supervised scheme, the Reliable-Unlabeled Semi-Supervised Segmentation (RU3S) model. This model integrates the ResUNet-SE-ASPP-Attention (RSAA) model, which includes the Squeeze-and-Excitation (SE) network, Atrous Spatial Pyramid Pooling (ASPP) structure, Attention module, and ResUNet architecture. Our model leverages unlabeled data effectively, improving accuracy significantly. A novel confidence filtering strategy is introduced to make better use of unlabeled samples, addressing the scarcity of labeled data. Experimental results show a 2.0% improvement in mIoU over the current state-of-the-art semi-supervised segmentation model ST, demonstrating our approach's effectiveness on this medical problem.
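The abstract does not specify the confidence filtering strategy in detail; a common minimal form keeps only unlabeled samples whose predicted probability clears a threshold, and those samples re-enter training with their argmax as a pseudo-label. A sketch with an illustrative threshold:

```python
import numpy as np

def filter_pseudo_labels(probs, tau=0.9):
    """Keep only predictions whose max softmax probability reaches tau.
    probs: (N, C) class probabilities for N unlabeled samples."""
    confidence = probs.max(axis=1)
    keep = confidence >= tau
    return np.argmax(probs[keep], axis=1), keep

probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> kept as pseudo-label 0
    [0.40, 0.35, 0.25],   # ambiguous -> discarded
    [0.05, 0.93, 0.02],   # confident -> kept as pseudo-label 1
])
pseudo_labels, keep = filter_pseudo_labels(probs)
```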


Subjects
Artificial Intelligence , Humans , Supervised Machine Learning , Algorithms , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
7.
BMC Med Imaging ; 24(1): 253, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304839

ABSTRACT

BACKGROUND: Breast cancer is one of the most prevalent diseases worldwide. According to estimates by the National Breast Cancer Foundation, over 42,000 women are expected to die from this disease in 2024. OBJECTIVE: The prognosis of breast cancer depends on the early detection of breast micronodules and the ability to distinguish benign from malignant lesions. Ultrasonography is a crucial radiological imaging technique for diagnosing the illness because it allows for biopsy guidance and lesion characterization. Because ultrasonographic diagnosis relies on the practitioner's expertise, the user's level of experience and knowledge is vital. Furthermore, computer-aided technologies contribute significantly by potentially reducing the workload of radiologists and complementing their expertise, especially in hospital settings with large patient volumes. METHOD: This work describes the development of a hybrid CNN system for diagnosing benign and malignant breast cancer lesions. The InceptionV3 and MobileNetV2 models serve as the foundation of the hybrid framework. Features from these models are extracted and concatenated, resulting in a larger feature set. Finally, various classifiers are applied to the classification task. RESULTS: The model achieved its best results with the softmax classifier, with an accuracy of over 95%. CONCLUSION: Computer-aided diagnosis greatly assists radiologists and reduces their workload. This research can therefore serve as a foundation for other researchers to build clinical solutions.
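The concatenation step can be sketched as follows. The 2048- and 1280-dimensional widths are the standard pooled output sizes of InceptionV3 and MobileNetV2; the batch, random features, and untrained softmax head are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Pooled feature vectors from the two backbones (global average pooling outputs)
f_inception = rng.normal(size=(4, 2048))    # batch of 4 ultrasound crops
f_mobilenet = rng.normal(size=(4, 1280))

# Concatenating yields a richer joint feature set for the downstream classifier
fused = np.concatenate([f_inception, f_mobilenet], axis=1)   # (4, 3328)

# Toy benign-vs-malignant softmax head (weights illustrative, untrained)
W_head = rng.normal(0, 0.01, (3328, 2))
probs = softmax(fused @ W_head)
```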


Subjects
Breast Neoplasms , Ultrasonography, Mammary , Humans , Female , Breast Neoplasms/diagnostic imaging , Ultrasonography, Mammary/methods , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
8.
Breast Cancer Res ; 26(1): 137, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304962

ABSTRACT

Breast cancer is the most common malignant tumor among women worldwide and remains one of their leading causes of death; its incidence and mortality rates are continuously rising. In recent years, with the rapid advancement of deep learning (DL) technology, DL has demonstrated significant potential in breast cancer diagnosis, prognosis evaluation, and treatment response prediction. This paper reviews relevant research progress and applies DL models to image enhancement, segmentation, and classification based on large-scale datasets from TCGA and multiple centers. We employed foundational models such as ResNet50, Transformer, and HoVer-Net to investigate the performance of DL models in breast cancer diagnosis, treatment, and prognosis prediction. The results indicate that DL techniques have significantly improved diagnostic accuracy and efficiency, particularly in predicting breast cancer metastasis and clinical prognosis. Furthermore, the study emphasizes the crucial role of robust databases in developing highly generalizable models. Future research will focus on addressing challenges related to data management, model interpretability, and regulatory compliance, ultimately aiming to provide more precise clinical treatment and prognostic evaluation programs for breast cancer patients.


Subjects
Breast Neoplasms , Deep Learning , Humans , Breast Neoplasms/pathology , Breast Neoplasms/therapy , Breast Neoplasms/diagnosis , Breast Neoplasms/diagnostic imaging , Female , Prognosis , Image Interpretation, Computer-Assisted/methods
9.
J Pathol Clin Res ; 10(5): e12395, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39294925

ABSTRACT

The gold standard for enrollment and endpoint assessment in metabolic dysfunction-associated steatohepatitis clinical trials is histologic assessment of a liver biopsy performed on glass slides. However, obtaining evaluations from several expert pathologists on glass is challenging, as shipping the slides around the country or around the world is time-consuming and carries the hazard of slide breakage. This study demonstrated that pathologic assessment of disease activity in steatohepatitis, performed using digital images on the AISight whole-slide image management system, yields results comparable to those obtained using glass slides. The accuracy of scoring for steatohepatitis (nonalcoholic fatty liver disease activity score ≥4 with ≥1 for each feature and absence of atypical features suggestive of other liver disease) performed on the system was evaluated against scoring conducted on glass slides. Both methods were assessed for overall percent agreement with a consensus "ground truth" score (defined as the median score of a panel of three pathologists reading glass slides). Each case was also read by three different pathologists, once on glass and once digitally, with a minimum 2-week washout period between the modalities. The average agreement across three pathologists of digital scoring with ground truth was noninferior to the average agreement of glass scoring with ground truth (noninferiority margin: -0.05; difference: -0.001; 95% CI: -0.027 to 0.026; p < 0.0001). For each pathologist, there was similar average agreement of digital and glass reads with the glass ground truth (pathologist A, 0.843 and 0.849; pathologist B, 0.633 and 0.605; pathologist C, 0.755 and 0.780). Here, we demonstrate that the accuracy of digital reads for steatohepatitis using digital images is equivalent to glass reads in the context of a clinical trial scored using the Clinical Research Network scoring system.
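Overall percent agreement, the comparison statistic used here, is simply the fraction of cases where a reader's call matches the consensus ground truth. A minimal sketch with illustrative reads (not the study's data):

```python
import numpy as np

def overall_percent_agreement(reads, ground_truth):
    """Fraction of cases where a reader's call matches the consensus call
    (e.g. binary steatohepatitis yes/no per case)."""
    reads = np.asarray(reads)
    ground_truth = np.asarray(ground_truth)
    return float((reads == ground_truth).mean())

truth   = [1, 1, 0, 1, 0, 0, 1, 0]   # illustrative consensus calls
digital = [1, 1, 0, 1, 0, 1, 1, 0]   # one disagreement
glass   = [1, 0, 0, 1, 0, 0, 1, 0]   # one disagreement

agree_digital = overall_percent_agreement(digital, truth)
agree_glass = overall_percent_agreement(glass, truth)
# Noninferiority then asks whether (digital - glass) stays above a
# pre-specified margin such as -0.05 across the study population
difference = agree_digital - agree_glass
```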


Subjects
Non-alcoholic Fatty Liver Disease , Humans , Non-alcoholic Fatty Liver Disease/pathology , Clinical Trials as Topic , Reproducibility of Results , Biopsy , Liver/pathology , Image Interpretation, Computer-Assisted/methods , Observer Variation
10.
Sci Rep ; 14(1): 22533, 2024 09 28.
Article in English | MEDLINE | ID: mdl-39342030

ABSTRACT

Recent developments have highlighted the critical role that computer-aided diagnosis (CAD) systems play in analyzing whole-slide digital histopathology images for detecting gastric cancer (GC). We present a novel framework for gastric histology classification and segmentation (GHCS) that offers modest yet meaningful improvements over existing CAD models for GC classification and segmentation. Our methodology achieves marginal improvements over conventional deep learning (DL) and machine learning (ML) models by adaptively focusing on pertinent image characteristics. The proposed model performs well on normalized images and is robust in handling variability and generalizing to different datasets, which we anticipate will translate into better results across varied data. At the heart of the proposed GHCS framework is an expectation-maximization Naïve Bayes classifier that uses an updated Gaussian mixture model. The effectiveness of our classifier is demonstrated by experimental validation on two publicly available datasets, which produced classification accuracies of 98.87% and 97.28% on validation sets and 98.47% and 97.31% on test sets. Our framework shows a slight but consistent improvement over existing techniques in gastric histopathology image classification, as demonstrated by comparative analysis; this may be attributed to its better capture of critical features of gastric histopathology images. Furthermore, using an improved fuzzy c-means method, our study produces good results in GC histopathology image segmentation, outperforming state-of-the-art segmentation models with a Dice coefficient of 65.21% and a Jaccard index of 60.24%.
The model's interpretability is complemented by Grad-CAM visualizations, which help understand the decision-making process and increase the model's trustworthiness for end-users, especially clinicians.
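The Dice coefficient and Jaccard index reported for the segmentation results are both computed from mask overlap; a minimal sketch on toy masks:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient = 2*|A and B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def jaccard(pred, target):
    """Jaccard index = |A and B| / |A or B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

# Toy masks: two 4x4 squares offset by one pixel (overlap is a 3x3 square)
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
target = np.zeros((8, 8), dtype=bool); target[3:7, 3:7] = True
d = dice(pred, target)      # 2*9 / (16 + 16)
j = jaccard(pred, target)   # 9 / (16 + 16 - 9)
```

Note that Dice is always at least as large as Jaccard for the same pair of masks, which is why the two reported percentages differ.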


Subjects
Diagnosis, Computer-Assisted , Stomach Neoplasms , Stomach Neoplasms/pathology , Stomach Neoplasms/classification , Stomach Neoplasms/diagnostic imaging , Humans , Diagnosis, Computer-Assisted/methods , Deep Learning , Image Processing, Computer-Assisted/methods , Machine Learning , Bayes Theorem , Algorithms , Image Interpretation, Computer-Assisted/methods
11.
JCO Clin Cancer Inform ; 8: e2300180, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39292984

ABSTRACT

PURPOSE: Emerging evidence suggests that the use of artificial intelligence can assist in the timely detection of prostate cancer and in optimizing the therapeutic approach. The conventional perspective treats radiomics, encompassing segmentation and the extraction of radiomic features, as an independent and sequential process, but it is not necessary to adhere to this viewpoint. In this study, we show that besides generating masks from which radiomic features can be extracted, prostate segmentation and reconstruction models provide valuable information in their feature space, which can improve the quality of radiomic signature models for disease aggressiveness classification. MATERIALS AND METHODS: We performed 2,244 experiments with deep learning features extracted from 13 different models trained on different anatomic zones and characterized how modeling decisions, such as deep feature aggregation and dimensionality reduction, affect performance. RESULTS: While models using deep features from the full gland together with radiomic features consistently improved disease aggressiveness prediction, others were detrimental. Our results suggest that the use of deep features can be beneficial, but an appropriate and comprehensive assessment is necessary to ensure that their inclusion does not harm predictive performance. CONCLUSION: The study findings reveal that incorporating deep features derived from autoencoder models trained to reconstruct the full prostate gland (both zonal models show worse performance than radiomics-only models), combined with radiomic features, often leads to a statistically significant increase in model performance for disease aggressiveness classification.
Additionally, the results also demonstrate that the choice of feature selection is key to achieving good performance, with principal component analysis (PCA) and PCA + relief being the best approaches and that there is no clear difference between the three proposed latent representation extraction techniques.


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Image Processing, Computer-Assisted/methods , Algorithms , Prognosis , Image Interpretation, Computer-Assisted/methods , Radiomics
12.
Zhonghua Zhong Liu Za Zhi ; 46(9): 855-861, 2024 Sep 23.
Article in Chinese | MEDLINE | ID: mdl-39293988

ABSTRACT

Bone and soft tissue tumors occur in the musculoskeletal system, and malignant tumors of bone and soft tissue account for 0.2% of all human malignant tumors; if not diagnosed and treated in a timely manner, patients risk a poor prognosis. Image interpretation plays an increasingly important role in the diagnosis of bone and soft tissue tumors. Artificial intelligence (AI) can be applied in clinical practice to integrate large amounts of multidimensional data, derive models, predict outcomes, and improve treatment decisions. Deep learning, a widely employed AI technique, predominantly utilizes convolutional neural networks (CNNs), which are built through repeated training on datasets and iterative parameter adjustment. Deep learning-based AI models have been successfully applied to various aspects of bone and soft tissue tumors, including but not limited to image segmentation, tumor detection, classification, grading and staging, chemotherapy response evaluation, and recurrence and prognosis prediction. This paper provides a comprehensive review of the principles and current state of AI in the medical image-based diagnosis and treatment of bone and soft tissue tumors, and explores the present challenges and future prospects in this field.


Subjects
Artificial Intelligence , Bone Neoplasms , Neural Networks, Computer , Soft Tissue Neoplasms , Humans , Bone Neoplasms/diagnostic imaging , Bone Neoplasms/therapy , Bone Neoplasms/diagnosis , Soft Tissue Neoplasms/diagnostic imaging , Soft Tissue Neoplasms/therapy , Soft Tissue Neoplasms/diagnosis , Deep Learning , Prognosis , Image Interpretation, Computer-Assisted/methods
13.
Sci Rep ; 14(1): 21348, 2024 09 12.
Article in English | MEDLINE | ID: mdl-39266642

ABSTRACT

Segmentation of multiple sclerosis (MS) lesions on brain MRI scans is crucial for diagnosis and for disease and treatment monitoring, but it is a time-consuming task. Although several automated algorithms have been proposed, there is still no consensus on the most effective method. Here, we applied a consensus-based framework to improve lesion segmentation on T1-weighted and FLAIR scans. The framework is designed to combine publicly available state-of-the-art deep learning models by running multiple segmentation tasks before merging the outputs of each algorithm. To assess the effectiveness of the approach, we applied it to MRI datasets from two different centers, a private and a public dataset with 131 and 30 MS patients respectively, each with manually segmented lesion masks available. No further training was performed for any of the included algorithms. Overlap and detection scores improved, with Dice increasing by 4-8% and precision by 3-4% for the private and public datasets respectively. High agreement was obtained between estimated and true lesion load (ρ = 0.92 and ρ = 0.97) and lesion count (ρ = 0.83 and ρ = 0.94). Overall, this framework ensures accurate and reliable results, exploiting complementary features and overcoming some of the limitations of individual algorithms.
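The abstract does not state the exact merging rule; one common way to merge the outputs of several segmentation models is per-voxel majority voting, sketched here on toy binary masks:

```python
import numpy as np

def consensus_mask(masks, min_votes=None):
    """Merge binary lesion masks from several models by voxel-wise voting.
    A voxel is lesion if at least min_votes models (default: strict majority)
    mark it as lesion."""
    masks = np.asarray(masks, dtype=bool)
    if min_votes is None:
        min_votes = masks.shape[0] // 2 + 1
    return masks.sum(axis=0) >= min_votes

# Toy 2x3 "scans" segmented by three models
m1 = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
m2 = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
m3 = np.array([[1, 1, 1], [0, 0, 0]], dtype=bool)
fused = consensus_mask([m1, m2, m3])   # keep voxels where >= 2 of 3 models agree
```

Voting suppresses false positives unique to a single model while keeping lesions that several models detect, which matches the precision gains the abstract reports.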


Subjects
Algorithms , Brain , Magnetic Resonance Imaging , Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Female , Consensus , Male , Image Processing, Computer-Assisted/methods , Adult , Deep Learning , Image Interpretation, Computer-Assisted/methods , Middle Aged
14.
Asian Pac J Cancer Prev ; 25(9): 3327-3336, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39348561

ABSTRACT

Objective: The three steps of brain image processing - preprocessing, segmentation, and classification - are becoming increasingly important in patient care. The aim of this article is to present a proposed method for these three steps, with emphasis on the preprocessing step, which includes noise removal and contrast enhancement. Methods: The fast and adaptive bidimensional empirical mode decomposition and the anisotropic diffusion equation, as well as a modified combination of top-hat and bottom-hat transforms, are used for noise reduction and contrast enhancement. Fast C-means clustering on the enhanced image is used to detect tumors, with the tumor cluster corresponding to the maximum centroid. Finally, ensemble learning is used for classification. Result: Magnetic resonance images were selected from the Figshare brain tumor dataset. The optimal parameters for both noise reduction and contrast enhancement are investigated using a tumor image contaminated with Gaussian noise. The results are evaluated against state-of-the-art results and quantitative performance metrics to demonstrate the advantage of the proposed approach. The fast C-means algorithm is applied to detect tumors in twelve enhanced images. The detected tumors were compared to the ground truth and showed an accuracy and specificity of 99% each, and a sensitivity and precision of 90% each. Six statistical features are retrieved from 150 enhanced images using wavelet packet coefficients at level 4 of the Daubechies 4 wavelet function. These features are used to develop the classifier model using ensemble learning, yielding training and testing accuracies of 96.7% and 76.7%, respectively. When this model is applied to classify the twelve detected tumor images, the accuracy is 75%; there are three misclassified images, all of which belong to the pituitary disease group.
Conclusion: Based on the research, it appears that the proposed approach could lead to the development of computer-aided diagnosis (CADx) software that physicians can use as a reference for the treatment of rain tumor. OBJECTIVE: The three steps of brain image processing ­ preprocessing, segmentation, and classification are becoming increasingly important in patient care. The aim of this article is to present a proposed method in the mentioned three-steps, with emphasis on the preprocessing step, which includes noise removal and contrast enhancement. METHODS: The fast and adaptive bidimensional empirical mode decomposition and the anisotropic diffusion equation as well as the modified combination of top-hat and bottom-hat transforms are used for noise reduction and contrast enhancement. Fast C-means clustering with enhanced image is used to detect tumors and the tumor cluster corresponds to the maximum centroid. Finally, Ensemble learning is used for classification. RESULT: The Figshare brain tumor dataset contains magnetic resonance images used for data selection. The optimal parameters for both noise reduction and contrast enhancement are investigated using a tumor contaminated with Gaussian noise. The results are evaluated against state-of-the-art results and qualitative performance metrics to demonstrate the dominance of the proposed approach. The fast C-means algorithm is applied to detect tumors using twelve enhanced images. The detected tumors were compared to the ground truth and showed an accuracy and specificity of 99% each, and a sensitivity and precision of 90% each. Six statistical features are retrieved from 150 enhanced images using wavelet packet coefficients at level 4 of the Daubechies 4 wavelet function. These features are used to develop the classifier model using ensemble learning to create a model with training and testing accuracy of 96.7% and 76.7%, respectively. 
When this model is applied to classify twelve detected tumor images, the accuracy is 75%; there are three misclassified images, all of which belong to the pituitary disease group. CONCLUSION: Based on the research, it appears that the proposed approach could lead to the development of computer-aided diagnosis (CADx) software that physicians can use as a reference for the treatment of rain tumor.
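The top-hat/bottom-hat contrast enhancement step described above can be sketched in a minimal form. This is a plain grayscale top-hat plus bottom-hat enhancement with a square structuring element, not the authors' exact modified combination; the helper names and the 3x3 window size are assumptions for illustration:

```python
import numpy as np

def _erode(img, k=3):
    # grayscale erosion: minimum over a k x k neighborhood (edge-padded)
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)])
    return windows.min(axis=0)

def _dilate(img, k=3):
    # grayscale dilation: maximum over a k x k neighborhood (edge-padded)
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)])
    return windows.max(axis=0)

def tophat_bottomhat_enhance(img, k=3):
    opening = _dilate(_erode(img, k), k)   # opening = dilation of erosion
    closing = _erode(_dilate(img, k), k)   # closing = erosion of dilation
    tophat = img - opening                 # bright details smaller than the window
    bottomhat = closing - img              # dark details smaller than the window
    # boost bright structures, suppress dark ones
    return img + tophat - bottomhat
```

On a flat image the enhancement is the identity; isolated bright details narrower than the structuring element are amplified, which is the mechanism that raises tumor-to-background contrast before clustering.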


Subjects
Algorithms; Brain Neoplasms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods
15.
J Nucl Med ; 65(10): 1526-1532, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39266287

ABSTRACT

Tumor hypoxia, an integral biomarker to guide radiotherapy, can be imaged with 18F-fluoromisonidazole (18F-FMISO) hypoxia PET. One major obstacle to its broader application is the lack of standardized interpretation criteria. We sought to develop and validate practical interpretation criteria and a dedicated training protocol for nuclear medicine physicians to interpret 18F-FMISO hypoxia PET. Methods: We randomly selected 123 patients with human papillomavirus-positive oropharyngeal cancer enrolled in a phase II trial who underwent 123 18F-FDG PET/CT and 134 18F-FMISO PET/CT scans. Four independent nuclear medicine physicians with no 18F-FMISO experience read the scans. Interpretation by a fifth nuclear medicine physician with over 2 decades of 18F-FMISO experience was the reference standard. Performance was evaluated after initial instruction and subsequent dedicated training. Scans were considered positive for hypoxia by visual assessment if 18F-FMISO uptake was greater than floor-of-mouth uptake. Additionally, SUVmax was determined to evaluate whether quantitative assessment using tumor-to-background ratios could be helpful to define hypoxia positivity. Results: Visual assessment produced a mean sensitivity and specificity of 77.3% and 80.9%, with fair interreader agreement (κ = 0.34), after initial instruction. After dedicated training, mean sensitivity and specificity improved to 97.6% and 86.9%, with almost perfect agreement (κ = 0.86). Quantitative assessment with an estimated best SUVmax ratio threshold of more than 1.2 to define hypoxia positivity produced a mean sensitivity and specificity of 56.8% and 95.9%, respectively, with substantial interreader agreement (κ = 0.66), after initial instruction. After dedicated training, mean sensitivity improved to 89.6% whereas mean specificity remained high at 95.3%, with near-perfect interreader agreement (κ = 0.86). 
Conclusion: Nuclear medicine physicians without 18F-FMISO hypoxia PET reading experience demonstrate much improved interreader agreement with dedicated training using specific interpretation criteria.


Subjects
Misonidazole; Positron Emission Tomography Computed Tomography; Humans; Misonidazole/analogs & derivatives; Positron Emission Tomography Computed Tomography/methods; Reproducibility of Results; Male; Female; Middle Aged; Observer Variation; Tumor Hypoxia; Aged; Oropharyngeal Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods
16.
Curr Oncol ; 31(9): 5057-5079, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39330002

ABSTRACT

Multi-task learning (MTL) methods are widely applied in breast imaging for lesion area perception and classification to assist in breast cancer diagnosis and personalized treatment. A typical MTL paradigm is the shared-backbone network architecture, which can lead to information-sharing conflicts and result in the decline, or even failure, of the main task's performance. Extracting richer lesion features while alleviating information-sharing conflicts has therefore become a significant challenge for breast cancer classification. This study proposes a novel Multi-Feature Fusion Multi-Task (MFFMT) model to address this issue. First, to better capture the local and global feature relationships of lesion areas, a Contextual Lesion Enhancement Perception (CLEP) module is designed, which integrates channel attention mechanisms with detailed spatial positional information to extract more comprehensive lesion features. Second, a novel Multi-Feature Fusion (MFF) module is presented, which extracts the differential features that distinguish lesion-specific characteristics from the semantic features used for tumor classification, while also enhancing their shared feature information. Experimental results on two public breast ultrasound imaging datasets validate the effectiveness of the proposed method. Additionally, a comprehensive study of the impact of various factors on the model's performance is conducted to provide a deeper understanding of the working mechanism of the proposed framework.
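The channel-attention component mentioned for the CLEP module can be illustrated with a minimal squeeze-and-excitation-style sketch in NumPy. The paper's actual module, weight shapes, and reduction ratio are not specified here; `w1` and `w2` are hypothetical projection weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    # fmap: feature map of shape (C, H, W)
    # w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights
    squeeze = fmap.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # per-channel gate in (0, 1)
    return fmap * excite[:, None, None]                    # rescale each channel
```

The gate lets the network emphasize lesion-relevant channels and suppress the rest; in a trained model `w1`/`w2` are learned, and spatial positional information would be fused alongside this channel branch.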


Subjects
Breast Neoplasms; Deep Learning; Humans; Breast Neoplasms/diagnostic imaging; Female; Ultrasonography, Mammary/methods; Image Interpretation, Computer-Assisted/methods
18.
Sci Rep ; 14(1): 22260, 2024 09 27.
Article in English | MEDLINE | ID: mdl-39333699

ABSTRACT

Brain tumors pose a serious threat to public health, impacting thousands of individuals directly or indirectly worldwide. Timely and accurate detection of these tumors is crucial for effective treatment and for enhancing patients' quality of life. Magnetic resonance imaging (MRI) is the most widely used brain imaging technique, but the precise identification of brain tumors in MRI images is challenging due to diverse anatomical structures. This paper introduces an innovative approach, an ensemble attention mechanism, to address this challenge. Initially, the approach uses two networks to extract intermediate- and final-level feature maps from MobileNetV3 and EfficientNetB7, gathering relevant feature maps from the different models at different levels. The technique then incorporates a co-attention mechanism into the intermediate and final feature-map levels of both networks and ensembles them, directing attention to specific regions to extract global-level features at different levels. Ensembling the attentive feature maps enables the precise detection of various feature patterns within brain tumor images at the model, local, and global levels, which improves the classification process. The proposed system achieved an accuracy of 98.94% on the Figshare dataset and 98.48% on the BraTS 2019 dataset, which is superior to other methods. Thus, it is robust and suitable for brain tumor detection in healthcare systems.
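A reduced sketch of the final ensembling idea, combining class predictions from two backbones, is shown below. Note this simplification averages softmax outputs with a fixed weight, whereas the paper ensembles attentive feature maps via co-attention; the function names and the weight `w` are assumptions:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_a, logits_b, w=0.5):
    # logits_a, logits_b: (batch, num_classes) outputs of two backbones
    # weighted average of class probabilities, then argmax per sample
    probs = w * softmax(logits_a) + (1 - w) * softmax(logits_b)
    return probs.argmax(axis=-1)
```

Setting `w` toward 0 or 1 recovers a single backbone's prediction; the paper's co-attention fusion instead lets the combination weights depend on the image content.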


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods
19.
Sci Rep ; 14(1): 22422, 2024 09 28.
Article in English | MEDLINE | ID: mdl-39341859

ABSTRACT

Breast cancer, a prevalent and life-threatening disease, necessitates early detection for effective intervention and improved patient outcomes. This paper focuses on the critical problem of identifying breast cancer using a model called Attention U-Net. The model is applied to the Breast Ultrasound Image dataset (BUSI), comprising 780 breast images categorized into three distinct groups: 437 benign, 210 malignant, and 133 normal cases. The proposed model leverages the attention-driven U-Net's encoder blocks to capture hierarchical features effectively. The model comprises four decoder blocks, a pivotal component of the U-Net architecture responsible for expanding the encoded feature representation obtained from the encoder blocks and reconstructing spatial information. Four attention gates are incorporated strategically to enhance feature localization during decoding, a design that facilitates accurate segmentation of breast tumors in ultrasound images and accurate delineation of tumor borders. The experimental findings demonstrate outstanding performance, achieving an overall accuracy of 0.98, precision of 0.97, recall of 0.90, and a Dice score of 0.92. This research aims to advance automated breast cancer segmentation algorithms, emphasizing the importance of early detection in boosting diagnostic capabilities and enabling prompt, targeted medical interventions.
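The Dice score reported above (0.92) is, in its standard form, twice the overlap between predicted and ground-truth masks divided by the sum of their sizes. A minimal sketch for binary masks (the smoothing constant `eps` is an assumption to avoid division by zero on empty masks):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    # pred, gt: binary segmentation masks of the same shape
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # 2 * |A ∩ B| / (|A| + |B|), smoothed by eps
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```

Identical masks score approximately 1.0 and disjoint masks approximately 0, which is why Dice is preferred over plain accuracy for small lesions, where the background dominates the pixel count.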


Subjects
Breast Neoplasms; Humans; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Ultrasonography, Mammary/methods; Algorithms; Image Interpretation, Computer-Assisted/methods; Databases, Factual; Image Processing, Computer-Assisted/methods
20.
BMC Med Imaging ; 24(1): 258, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333903

ABSTRACT

OBJECTIVE: Alzheimer's disease (AD) is a neurological illness that significantly impacts individuals' daily lives. In the intelligent diagnosis of AD, 3D networks require large computational resources and storage space for training, leading to increased model complexity and training time, whereas 2D slice analysis may overlook the 3D structural information of MRI and can result in information loss. APPROACH: We propose a multi-slice attention fusion and multi-view personalized fusion lightweight network for automated AD diagnosis. It incorporates a multi-branch lightweight backbone to extract features from the sagittal, axial, and coronal views of MRI, respectively. In addition, we introduce a novel multi-slice attention fusion module, which utilizes a combination of global and local channel attention mechanisms to ensure consistent classification across multiple slices. A multi-view personalized fusion module is also tailored to assign appropriate weights to the three views, taking into account the varying significance of each view in achieving accurate classification results. To enhance the performance of the multi-view personalized fusion module, we utilize a label consistency loss to guide the model's learning process, encouraging the acquisition of more consistent and stable representations across all three views. MAIN RESULTS: The suggested strategy is efficient in lowering the number of parameters and FLOPs, with only 3.75M and 4.45G respectively, and accuracy improved by 10.5% to 14% across three tasks. Moreover, in the classification tasks of AD vs. CN, AD vs. MCI, and MCI vs. CN, the accuracy of the proposed method is 95.63%, 86.88%, and 85.00%, respectively, which is superior to existing methods.
CONCLUSIONS: The results show that the proposed approach not only excels in resource utilization, but also significantly outperforms the four comparison methods in terms of accuracy and sensitivity, particularly in detecting early-stage AD lesions. It can precisely capture and accurately identify subtle brain lesions, providing crucial technical support for early intervention and treatment.
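The multi-view personalized fusion and label consistency ideas described above can be sketched minimally: softmax-normalized weights combine the per-view outputs, and a consistency term penalizes disagreement among the views. The paper's exact loss and fusion architecture are not specified here; the mean-squared-deviation form of the consistency loss is one plausible choice, and the function names are assumptions:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse_views(view_logits, view_scores):
    # view_logits: list of (num_classes,) arrays, one per view
    #              (e.g. sagittal, axial, coronal)
    # view_scores: learned per-view importance scores
    w = softmax(np.asarray(view_scores, dtype=float))  # personalized view weights
    return sum(wi * logits for wi, logits in zip(w, view_logits))

def label_consistency_loss(view_probs):
    # penalize disagreement among per-view class-probability vectors
    mean = np.mean(view_probs, axis=0)
    return float(np.mean([np.mean((p - mean) ** 2) for p in view_probs]))
```

With equal scores the fusion reduces to a plain average; training would push the consistency loss toward zero so that all three views support the same label.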


Subjects
Alzheimer Disease; Magnetic Resonance Imaging; Alzheimer Disease/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional; Aged; Neural Networks, Computer