Results 1 - 20 of 90
1.
Front Physiol; 15: 1416912, 2024.
Article in English | MEDLINE | ID: mdl-39175612

ABSTRACT

Introduction: The cardiothoracic ratio (CTR) derived from postero-anterior chest X-ray (P-A CXR) images is one of the most commonly used cardiac measurements and an indicator for the initial evaluation of cardiac diseases. However, the heart is less readily observable on P-A CXR images than the lung fields, so radiologists usually determine the CTR's right and left heart border points manually from the left and right lung fields adjacent to the heart. Such manual CTR measurement requires experienced radiologists and is time-consuming and laborious. Methods: This article therefore proposes a novel, fully automatic CTR calculation method based on lung fields extracted from P-A CXR images using convolutional neural networks (CNNs), which removes the need for heart segmentation and avoids the errors it introduces. First, lung field mask images are extracted from the P-A CXR images with pre-trained CNNs. Second, a novel graphics-based method localizes the heart's right and left border points from the two-dimensional projection morphology of the lung field masks. Results: The mean x-axis distance errors of the CTR's four key points on the test sets T1 (21 static 512 × 512 P-A CXR images) and T2 (13 dynamic 512 × 512 P-A CXR images), across the various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on T1 and T2 across the four proposed models are 0.0208 and 0.0180, respectively. Discussion: Our proposed model matches the CTR-calculation performance of the previous CardioNet model while avoiding heart segmentation and taking less time. It is therefore practical and feasible and may become an effective tool for the initial evaluation of cardiac diseases.
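The abstract does not spell out the final ratio computation. As a rough illustration only, the sketch below estimates a CTR from a binary lung-field mask, assuming (unlike the paper's graphics-based key-point localization) that the widest horizontal gap between the two lung fields stands in for the cardiac width and the overall lung-field span for the thoracic width; all names and details are hypothetical.

```python
import numpy as np

def cardiothoracic_ratio(lung_mask: np.ndarray) -> float:
    """Rough CTR estimate from a binary lung-field mask (H x W array of 0/1).

    Assumption (not the paper's exact method): the widest gap between the
    two lung fields approximates the cardiac width, and the overall
    lung-field span approximates the thoracic width.
    """
    h, w = lung_mask.shape
    mid = w // 2                      # crude mid-line split into two lung fields
    _, xs = np.nonzero(lung_mask)
    thoracic_width = xs.max() - xs.min()

    cardiac_width = 0
    for row in range(h):
        right_cols = np.nonzero(lung_mask[row, :mid])[0]   # image-left lung
        left_cols = np.nonzero(lung_mask[row, mid:])[0]    # image-right lung
        if right_cols.size and left_cols.size:
            gap = (left_cols.min() + mid) - right_cols.max()
            cardiac_width = max(cardiac_width, gap)

    return cardiac_width / thoracic_width
```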

2.
J Xray Sci Technol; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38995761

ABSTRACT

BACKGROUND: Chest X-rays (CXR) are widely used to facilitate the diagnosis and treatment of critically ill and emergency patients in clinical practice. Accurate hemi-diaphragm detection on postero-anterior (P-A) CXR images is crucial for assessing the diaphragm function of these vulnerable populations and providing precision healthcare. OBJECTIVE: An effective and accurate hemi-diaphragm detection method for P-A CXR images is therefore urgently needed. METHODS: This paper proposes a hemi-diaphragm detection method for P-A CXR images based on a convolutional neural network (CNN) and graphics. First, we develop a robust, standard CNN model of pathological lungs, trained on human P-A CXR images of normal cases and abnormal cases with multiple lung diseases, to extract lung fields from P-A CXR images. Second, we propose a novel graphics-based method that localizes the cardiophrenic angle from the two-dimensional projection morphology of the left and right lungs to detect the hemi-diaphragm. RESULTS: The mean errors of the four key hemi-diaphragm points in lung field masks extracted from static P-A CXR images with five different segmentation models are 9.05, 7.19, 7.92, 7.27, and 6.73 pixels, respectively. For lung field masks extracted from dynamic P-A CXR images with the same segmentation models, the mean errors are 5.50, 7.07, 4.43, 4.74, and 6.24 pixels, respectively. CONCLUSION: The proposed method can effectively detect the hemi-diaphragm and may become an effective tool for assessing diaphragm function in these vulnerable populations for precision healthcare.
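The paper's graphics-based localization of cardiophrenic-angle key points is not reproduced in the abstract; as a loose illustration only, the toy sketch below picks one bottom point per lung field from a binary lung mask, with all names and conventions being assumptions.

```python
import numpy as np

def diaphragm_key_points(lung_mask: np.ndarray):
    """Toy localization of one bottom key point per lung field.

    Assumption: the lowest point of each lung field's bottom contour is a
    rough stand-in for a hemi-diaphragm key point; the paper's 2-D
    projection-morphology method is not reproduced here.
    """
    h, w = lung_mask.shape
    mid = w // 2
    points = {}
    for name, offset, half in (("right_lung", 0, lung_mask[:, :mid]),
                               ("left_lung", mid, lung_mask[:, mid:])):
        ys, xs = np.nonzero(half)
        if ys.size == 0:
            continue
        lowest_row = ys.max()                    # bottom of this lung field
        cols_at_bottom = xs[ys == lowest_row]
        points[name] = (int(lowest_row), int(cols_at_bottom.mean()) + offset)
    return points
```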

3.
Network; 1-32, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753162

ABSTRACT

The chest X-ray is one of the most widely used diagnostic imaging techniques for identifying a variety of lung and bone-related conditions. Recent developments in deep learning have demonstrated several successful cases of illness diagnosis from chest X-rays; however, issues of stability and class imbalance still need to be resolved. This manuscript therefore proposes multi-class lung disease classification in chest X-ray images using a multilayer perceptron neural network boosted by a hybrid manta-ray foraging and volcano eruption algorithm (MPNN-Hyb-MRF-VEA). The input chest X-ray images are taken from the Covid-Chest X-ray dataset. Anisotropic diffusion Kuwahara filtering (ADKF) is used to enhance image quality and reduce noise, and term frequency-inverse document frequency (TF-IDF)-based feature extraction is used to capture significant discriminative features. A multilayer perceptron neural network (MPNN) serves as the classification model for four classes: COVID-19, pneumonia, tuberculosis (TB), and normal. The hybrid manta-ray foraging and volcano eruption algorithm (Hyb-MRF-VEA) is introduced to further optimize and fine-tune the MPNN's parameters. The proposed methodology is implemented and evaluated on the Python platform, and it provides 23.21%, 12.09%, and 5.66% higher accuracy than existing methods such as NFM, SVM, and CNN, respectively.

4.
Sensors (Basel); 24(9), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a classification challenge because of their visual similarity, which can lead to confusion among radiologists. To mitigate these issues, we created an automated system backed by a large data hub that combines 17 chest X-ray datasets, totaling 71,096 images, with the aim of classifying ten different disease classes. Because it combines various resources, this large collection contains noise and annotations, class imbalances, data redundancy, and similar problems. We applied several image pre-processing techniques, such as resizing, de-annotation, CLAHE, and filtering, to eliminate noise and artifacts, and used elastic deformation augmentation to generate a balanced dataset. We then developed DeepChestGNN, a novel medical image classification model that uses a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is highly flexible in working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
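The pre-processing steps named in the abstract (resizing, CLAHE, filtering) can be illustrated with OpenCV; the parameter values below are assumptions, not the paper's settings, and de-annotation is omitted.

```python
import cv2
import numpy as np

def preprocess_cxr(path: str, size: int = 224) -> np.ndarray:
    """Illustrative pre-processing: resize + CLAHE + light denoising.

    Clip limit, tile grid, and filter kernel are assumed values; the paper
    does not report them in the abstract.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                     # contrast-limited equalization
    img = cv2.medianBlur(img, 3)               # simple noise filtering
    return img.astype(np.float32) / 255.0      # normalize to [0, 1]
```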


Subject(s)
Lung Diseases, Neural Networks (Computer), Humans, Lung Diseases/diagnostic imaging, Lung Diseases/diagnosis, Computer-Assisted Image Processing/methods, Deep Learning, Algorithms, Lung/diagnostic imaging, Lung/pathology
5.
Int J Neural Syst; 34(6): 2450032, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38624267

ABSTRACT

Deep learning technology has been successfully applied to chest X-ray (CXR) images of COVID-19 patients. However, owing to the characteristics of COVID-19 pneumonia and X-ray imaging, deep learning methods still face many challenges, such as lower imaging quality, fewer training samples, complex radiological features, and irregular shapes. To address these challenges, this study first introduces an extended NSNP-like (ENSNP-like) neuron model and then proposes a multitask adversarial network architecture based on ENSNP-like neurons for chest X-ray images of COVID-19, called MAE-Net. MAE-Net serves two tasks: (i) converting low-quality CXR images into high-quality images; and (ii) classifying CXR images of COVID-19. Its adversarial architecture uses two generators and two discriminators, and two new loss functions are introduced to guide the optimization of the network. MAE-Net is tested on four benchmark COVID-19 CXR image datasets and compared with eight deep learning models. The experimental results show that the proposed MAE-Net can enhance conversion quality and improve the accuracy of image classification.


Subject(s)
COVID-19, Deep Learning, Neural Networks (Computer), Humans, Neurons/physiology, Thoracic Radiography, Neurological Models, Nonlinear Dynamics
6.
Quant Imaging Med Surg; 14(3): 2539-2555, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545066

ABSTRACT

Background: Disease diagnosis in chest X-ray images has predominantly relied on convolutional neural networks (CNNs). However, Vision Transformer (ViT) offers several advantages over CNNs, as it excels at capturing long-term dependencies, exploring correlations, and extracting features with richer semantic information. Methods: We adapted ViT for chest X-ray image analysis by making the following three key improvements: (I) employing a sliding window approach in the image sequence feature extraction module to divide the input image into blocks to identify small and difficult-to-detect lesion areas; (II) introducing an attention region selection module in the encoder layer of the ViT model to enhance the model's ability to focus on relevant regions; and (III) constructing a parallel patient metadata feature extraction network on top of the image feature extraction network to integrate multi-modal input data, enabling the model to synergistically learn and expand image-semantic information. Results: The experimental results showed the effectiveness of our proposed model, which had an average area under the curve value of 0.831 in diagnosing 14 common chest diseases. The metadata feature network module effectively integrated patient metadata, further enhancing the model's accuracy in diagnosis. Our ViT-based model had a sensitivity of 0.863, a specificity of 0.821, and an accuracy of 0.834 in diagnosing these common chest diseases. Conclusions: Our model has good general applicability and shows promise in chest X-ray image analysis, effectively integrating patient metadata and enhancing diagnostic capabilities.
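The parallel metadata branch described in (III) can be sketched as a simple late-fusion head; the feature dimensions, layer sizes, and class count below are assumptions (768 is the usual ViT-Base embedding width), and the sliding-window patching and attention region selection modules are not reproduced.

```python
import torch
import torch.nn as nn

class ImageMetadataFusion(nn.Module):
    """Minimal sketch of fusing image features with patient metadata."""

    def __init__(self, img_dim: int = 768, meta_dim: int = 8, num_classes: int = 14):
        super().__init__()
        self.meta_net = nn.Sequential(          # parallel metadata branch
            nn.Linear(meta_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(img_dim + 64, num_classes)

    def forward(self, img_feats: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        meta_feats = self.meta_net(metadata)
        fused = torch.cat([img_feats, meta_feats], dim=1)
        return self.classifier(fused)           # multi-label logits

# Usage with dummy tensors (batch of 4 studies).
model = ImageMetadataFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 8))
```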

7.
Comput Biol Med; 171: 108121, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38382388

ABSTRACT

Predicting inpatient length of stay (LoS) is important for hospitals aiming to improve service efficiency and management capabilities. Patient medical records are strongly associated with LoS, but the diverse modalities, heterogeneity, and complexity of these data make it challenging to leverage them effectively in a model that accurately predicts LoS. To address this challenge, this study establishes a novel data-fusion model, termed DF-Mdl, that integrates heterogeneous clinical data to predict the LoS of inpatients between hospital admission and discharge. Multi-modal data such as demographic data, clinical notes, laboratory test results, and medical images are used, with an individual "basic" sub-model applied to each data modality. Specifically, a convolutional neural network (CNN) model, termed CRXMDL, is designed for chest X-ray (CXR) image data; two long short-term memory networks extract features from long text data; and a novel attention-embedded 1D convolutional neural network extracts useful information from numerical data. These basic models are then integrated to form the data-fusion model (DF-Mdl) for inpatient LoS prediction. The proposed method attains the best R2 and EVAR values of 0.6039 and 0.6042 among competitors for LoS prediction on the Medical Information Mart for Intensive Care (MIMIC)-IV test dataset. The empirical results indicate better performance than other state-of-the-art (SOTA) methods, demonstrating the effectiveness and feasibility of the proposed approach.
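R2 and EVAR (explained variance) are standard regression metrics; a minimal example of computing them with scikit-learn on hypothetical LoS values (not the paper's MIMIC-IV results):

```python
import numpy as np
from sklearn.metrics import explained_variance_score, r2_score

# Hypothetical lengths of stay in days; placeholder values for illustration.
y_true = np.array([3.0, 5.5, 2.0, 10.0, 7.5])
y_pred = np.array([3.4, 5.0, 2.5, 9.1, 8.0])

print("R2  :", round(r2_score(y_true, y_pred), 4))
print("EVAR:", round(explained_variance_score(y_true, y_pred), 4))
```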


Subject(s)
Inpatients, Learning, Humans, Length of Stay, Hospitalization, Critical Care
8.
BMC Med Imaging; 24(1): 1, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38166813

ABSTRACT

Deep learning is now a highly significant technology in clinical diagnostics and treatment, and the convolutional neural network (CNN) is a central deep learning approach in computer vision. Our medical study concerns COVID-19 detection. Previous researchers attempted to increase detection accuracy, but at the cost of high model complexity. In this paper, we aim to achieve better accuracy with little training space and time so that the model can be easily deployed on edge devices. We propose a new CNN design with three stages: pre-processing, which first removes the black padding at the image borders; convolution, which employs filter banks; and feature extraction, which uses deep convolutional layers with skip connections. To train the model, chest X-ray images are partitioned into three sets: learning (0.7), validation (0.1), and testing (0.2). The models are then evaluated on the training and test data. The LMNet, CoroNet, CVDNet, and Deep GRU-CNN models are the other four models used in the same experiment. The proposed model achieved 99.47% and 98.91% accuracy on training and testing, respectively, along with precision, recall, specificity, and F1-score of 97.54%, 98.19%, 99.49%, and 97.86%, respectively. Compared with the other models, the proposed model obtained nearly equivalent accuracy and similar metrics while greatly reducing model complexity. Moreover, the proposed model is found to be less prone to overfitting than the other models.
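The 0.7/0.1/0.2 split described above can be reproduced generically with scikit-learn; the file names and labels below are placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder file list and binary labels for illustration only.
paths = np.array([f"cxr_{i:04d}.png" for i in range(1000)])
labels = np.random.randint(0, 2, size=1000)

# 70% learning, 10% validation, 20% testing, mirroring the abstract's split.
train_p, rest_p, train_y, rest_y = train_test_split(
    paths, labels, train_size=0.7, stratify=labels, random_state=42)
val_p, test_p, val_y, test_y = train_test_split(
    rest_p, rest_y, train_size=1 / 3, stratify=rest_y, random_state=42)

print(len(train_p), len(val_p), len(test_p))   # 700 100 200
```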


Subject(s)
COVID-19, Humans, COVID-19/diagnostic imaging, X-Rays, Thorax, Neural Networks (Computer)
9.
Comput Struct Biotechnol J; 24: 53-65, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38093971

ABSTRACT

Background and Objective: Severe courses of COVID-19 can lead to long-term complications. The post-acute phase of COVID-19 refers to persistent or newly appearing symptoms, a problem that is becoming more relevant with the increasing number of patients who have contracted COVID-19 and the emergence of new virus variants. In such cases, preventive treatment with corticosteroids can be applied; however, not everyone benefits from the treatment, and it can have severe side effects. To date, no study has analyzed who benefits from the treatment. Methods: This work introduces a novel approach to recommending corticosteroid (CS) treatment for patients in the post-acute phase. We used a novel combination of clinical data, including blood tests, spirometry, and X-ray images, from 273 patients. Such data are very challenging to collect, especially from patients in the post-acute phase of COVID-19, and to our knowledge no similar dataset exists in the literature. Moreover, we propose a unique methodology that combines machine learning and deep learning models based on Vision Transformer (ViT) and InceptionNet, together with preprocessing techniques and pretraining strategies, to deal with the specific characteristics of our data. Results: The experiments showed that combining clinical data with CXR images achieves 8% higher accuracy than analysis of CXR images alone. The proposed method reached 80.0% accuracy (78.7% balanced accuracy) and a ROC-AUC of 0.89. Conclusions: The introduced system for CS treatment prediction using our neural network and learning algorithm is unique in this field of research. We have shown the efficiency of using mixed data and validated it on real-world data. The paper also identifies factors that could be used to predict long-term complications. Additionally, the system has been deployed in a hospital environment as a recommendation tool, which supports the clinical applicability of the proposed methodology.

10.
Bioengineering (Basel); 10(11), 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38002438

ABSTRACT

The detection of Coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational and multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score were used. On the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for diagnosis of various other diseases.

11.
Comput Med Imaging Graph; 108: 102277, 2023 09.
Article in English | MEDLINE | ID: mdl-37567045

ABSTRACT

The chest X-ray is commonly employed in the diagnosis of thoracic diseases. Over the years, numerous approaches have been proposed to address the issue of automatic diagnosis based on chest X-rays. However, the limited availability of labeled data for related diseases remains a significant challenge to achieving accurate diagnoses. This paper focuses on the diagnostic problem of thoracic diseases and presents a novel deep reinforcement learning framework. The framework incorporates prior knowledge to guide the learning process of the diagnostic agents, and the model parameters can be continually updated as more data become available, mimicking a person's learning process. Specifically, our approach offers two key contributions: (1) prior knowledge can be acquired from pre-trained models using old data or similar data from other domains, effectively reducing the dependence on target-domain data; and (2) the reinforcement learning framework enables the diagnostic agent to be as exploratory as a human, leading to improved diagnostic accuracy through continuous exploration. This method also effectively addresses the challenge of learning models with limited data, enhancing the model's generalization capability. We evaluate the performance of our approach on the well-known NIH ChestX-ray14 and CheXpert datasets and achieve competitive results, and considerable progress has also been made toward clinical application. The source code for our approach can be accessed at the following URL: https://github.com/NeaseZ/MARL.


Subject(s)
Learning, Thoracic Diseases, Humans, Thoracic Diseases/diagnostic imaging, Thorax, Software
12.
Open Life Sci; 18(1): 20220665, 2023.
Article in English | MEDLINE | ID: mdl-37589001

ABSTRACT

Dermoscopic medical images are affected by hair artefacts and illumination challenges, and chest X-ray images present their own difficulties arising from image acquisition conditions, all of which complicate clinical segmentation. This study proposes a novel deep convolutional neural network (CNN)-integrated methodology for medical image segmentation of chest X-ray and dermoscopic clinical images. The technique merges CNN-based segmentation with an architectural comparison of U-Net and fully convolutional network (FCN) schemas, using loss functions based on Jaccard distance and binary cross-entropy under optimized stochastic gradient descent with Nesterov momentum. Digital imaging in the clinical setting strongly supports diagnosis and the choice of the best treatment for a patient's condition, even though medical digital images are subject to noise, quality, disturbance, and precision issues depending on how the enhanced images are segmented by the optimized process. Finally, a threshold technique is applied to the outputs during the pre- and post-processing stages to increase the contrast of the developed images. The data sources are the well-known PH2 database for melanoma lesion segmentation and chest X-ray images, chosen because they vary in hair artefacts and illumination. The experimental outcomes outperform other U-Net and FCN architectures of CNNs. Predictions produced by the model on test images were post-processed with the threshold technique to remove the blurry boundaries around the predicted lesions. The experimental results showed that the present model is more efficient than the existing U-Net and FCN baselines on the segmented images, with sensitivity = 0.9913, accuracy = 0.9883, and Dice coefficient = 0.0246.
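A minimal Keras sketch of the loss setup the abstract names (Jaccard distance plus binary cross-entropy, optimized with SGD + Nesterov momentum); the equal weighting of the two terms, the smoothing constant, and the learning-rate settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def jaccard_distance(y_true, y_pred, smooth=1.0):
    """Jaccard (IoU) distance loss for binary masks of shape (B, H, W, 1)."""
    intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
    union = K.sum(y_true + y_pred, axis=[1, 2, 3]) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

def combined_loss(y_true, y_pred):
    """Binary cross-entropy + Jaccard distance with assumed equal weighting."""
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)  # (B, H, W)
    return K.mean(bce, axis=[1, 2]) + jaccard_distance(y_true, y_pred)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                                    nesterov=True)
# model.compile(optimizer=optimizer, loss=combined_loss, metrics=["accuracy"])
```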

13.
BMC Med Imaging; 23(1): 83, 2023 06 15.
Article in English | MEDLINE | ID: mdl-37322450

ABSTRACT

BACKGROUND: The medical profession is facing an excessive workload, which has led to the development of various Computer-Aided Diagnosis (CAD) systems as well as Mobile-Aid Diagnosis (MAD) systems. These technologies enhance the speed and accuracy of diagnoses, particularly in areas with limited resources or remote regions during the pandemic. The primary purpose of this research is to predict and diagnose COVID-19 infection from chest X-ray images by developing a mobile-friendly deep learning framework with the potential for deployment on portable devices such as mobile phones or tablets, especially in situations where the workload of radiology specialists may be high. Moreover, this could improve the accuracy and transparency of population screening to assist radiologists during the pandemic. METHODS: In this study, the mobile-network ensemble model called COV-MobNets is proposed to classify positive COVID-19 X-ray images from negative ones and to assist in diagnosing COVID-19. The proposed model is an ensemble combining two lightweight and mobile-friendly models: MobileViT, based on a transformer structure, and MobileNetV3, based on a convolutional neural network. Hence, COV-MobNets can extract the features of chest X-ray images in two different ways to achieve better and more accurate results. In addition, data augmentation techniques were applied to the dataset to avoid overfitting during training. The COVIDx-CXR-3 benchmark dataset was used for training and evaluation. RESULTS: The classification accuracy of the improved MobileViT and MobileNetV3 models on the test set reached 92.5% and 97%, respectively, while the accuracy of the proposed model (COV-MobNets) reached 97.75%. The sensitivity and specificity of the proposed model reached 98.5% and 97%, respectively. Experimental comparison shows that the results are more accurate and balanced than those of other methods. CONCLUSION: The proposed method can distinguish between positive and negative COVID-19 cases more accurately and quickly. It demonstrates that utilizing two automatic feature extractors with different structures in an overall COVID-19 diagnosis framework can lead to improved performance, enhanced accuracy, and better generalization to new or unseen data. As a result, the proposed framework can be used as an effective method for computer-aided and mobile-aided diagnosis of COVID-19. The code is publicly available at https://github.com/MAmirEshraghi/COV-MobNets.
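A soft-voting ensemble of a MobileViT and a MobileNetV3 backbone can be sketched with the timm library; the specific model variants, the untrained weights, and the probability-averaging rule are assumptions, since the abstract does not state how the two outputs are combined.

```python
import torch
import timm

# Assumed model variants for illustration; not the paper's trained weights.
mobilevit = timm.create_model("mobilevit_s", pretrained=False, num_classes=2)
mobilenet = timm.create_model("mobilenetv3_large_100", pretrained=False,
                              num_classes=2)
mobilevit.eval()
mobilenet.eval()

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    """Average the softmax outputs of the two backbones (soft voting)."""
    with torch.no_grad():
        p1 = torch.softmax(mobilevit(x), dim=1)
        p2 = torch.softmax(mobilenet(x), dim=1)
    return (p1 + p2) / 2.0

probs = ensemble_predict(torch.randn(1, 3, 256, 256))  # dummy CXR batch
```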


Subject(s)
COVID-19, Deep Learning, Humans, COVID-19/diagnostic imaging, COVID-19 Testing, X-Rays, SARS-CoV-2
14.
Healthcare (Basel); 11(11), 2023 May 26.
Article in English | MEDLINE | ID: mdl-37297701

ABSTRACT

Pneumonia has been directly responsible for a huge number of deaths all across the globe. It shares visual features with other respiratory diseases, such as tuberculosis, which can make the two difficult to distinguish. Moreover, there is significant variability in the way chest X-ray images are acquired and processed, which can affect image quality and consistency and makes it challenging to develop robust algorithms that accurately identify pneumonia in all types of images. Hence, there is a need for robust, data-driven algorithms trained on large, high-quality datasets and validated using a range of imaging techniques and expert radiological analysis. In this research, a deep-learning-based model is demonstrated for differentiating between normal and severe cases of pneumonia. The proposed system comprises eight pre-trained models: ResNet50, ResNet152V2, DenseNet121, DenseNet201, Xception, VGG16, EfficientNet, and MobileNet. These eight pre-trained models were evaluated on two chest X-ray datasets containing 5856 and 112,120 images, respectively. The best accuracy is obtained with the MobileNet model, with values of 94.23% and 93.75% on the two datasets. Key hyperparameters, including batch size, number of epochs, and choice of optimizer, were considered during the comparative interpretation of these models to determine the most appropriate one.

15.
Soft comput; 1-22, 2023 May 27.
Article in English | MEDLINE | ID: mdl-37362273

ABSTRACT

COVID-19, a highly infectious respiratory disease caused by the SARS-CoV-2 virus, has killed millions of people across many countries. To enable quick and accurate diagnosis of COVID-19, chest X-ray (CXR) imaging has been commonly utilized. However, identifying the infection manually from radiographic images is considered extremely difficult because of the time commitment and significant risk of human error. Emerging artificial intelligence (AI) techniques promise precise and automated COVID-19 detection tools. Convolutional neural networks (CNNs), a well-performing deep learning strategy, have gained substantial favor among AI approaches for COVID-19 classification. This research reviews and critically assesses the preprints and published studies that diagnose COVID-19 from CXR images using CNNs and other deep learning methodologies. The review focuses on the methodology, algorithms, and preprocessing techniques used in various deep learning architectures, as well as the datasets and performance studies of the architectures used for prediction and diagnosis. It concludes with a list of future research directions in COVID-19 imaging classification.

16.
Curr Med Imaging; 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37170972

ABSTRACT

AIMS: COVID-19 has become a worldwide epidemic disease and a new challenge for all mankind, and chest X-ray images have shown potential advantages for its diagnosis. We propose a lightweight and effective convolutional neural network framework, named AMResNet, for the diagnosis of COVID-19 from chest X-ray images. BACKGROUND: COVID-19 has become a worldwide epidemic disease, and chest X-ray images offer potential advantages for its diagnosis. OBJECTIVE: To develop a lightweight and effective convolutional neural network framework based on chest X-ray images for the diagnosis of COVID-19. METHOD: By introducing a channel attention mechanism and an image spatial information attention mechanism, better performance can be achieved without increasing the number of model parameters. RESULT: On the collected datasets, we achieved an average accuracy rate of more than 92%, and the sensitivity and specificity for specific disease categories were also above 90%. CONCLUSION: The proposed convolutional neural network framework can serve as a novel artificial intelligence method for diagnosing COVID-19 or other diseases from medical images.
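The channel and spatial attention mechanisms mentioned in METHOD can be illustrated with a generic CBAM-style block in PyTorch; this is not AMResNet's actual design, which the abstract does not detail, and all sizes below are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention block (CBAM-style sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global average pooling.
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention from channel-wise mean and max maps.
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial_conv(sa_in))
        return x * sa

out = ChannelSpatialAttention(64)(torch.randn(2, 64, 32, 32))
```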

17.
BMC Med Imaging; 23(1): 62, 2023 05 09.
Article in English | MEDLINE | ID: mdl-37161392

ABSTRACT

BACKGROUND: This study was conducted to alleviate a common difficulty in chest X-ray image diagnosis: The attention region in a convolutional neural network (CNN) does not often match the doctor's point of focus. The method presented herein, which guides the area of attention in CNN to a medically plausible region, can thereby improve diagnostic capabilities. METHODS: The model is based on an attention branch network, which has excellent interpretability of the classification model. This model has an additional new operation branch that guides the attention region to the lung field and heart in chest X-ray images. We also used three chest X-ray image datasets (Teikyo, Tokushima, and ChestX-ray14) to evaluate the CNN attention area of interest in these fields. Additionally, after devising a quantitative method of evaluating improvement of a CNN's region of interest, we applied it to evaluation of the proposed model. RESULTS: Operation branch networks maintain or improve the area under the curve to a greater degree than conventional CNNs do. Furthermore, the network better emphasizes reasonable anatomical parts in chest X-ray images. CONCLUSIONS: The proposed network better emphasizes the reasonable anatomical parts in chest X-ray images. This method can enhance capabilities for image interpretation based on judgment.


Subject(s)
Heart, Thorax, Humans, X-Rays, Thorax/diagnostic imaging, Neural Networks (Computer)
18.
Big Data; 2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37074075

ABSTRACT

Pneumonia, caused by microorganisms, is a severely contagious disease that damages one or both lungs. Early detection and treatment are favored because untreated pneumonia can lead to major complications in the elderly (>65 years) and children (<5 years). The objectives of this work are to develop several models to evaluate large collections of chest X-ray images (XRIs), to determine whether the images show signs of pneumonia, and to compare the models by their accuracy, precision, recall, loss, and area under the ROC curve (AUC) scores. Enhanced convolutional neural network (CNN), VGG-19, ResNet-50, and ResNet-50 with fine-tuning are the deep learning (DL) algorithms employed in this study; the transfer learning models and the enhanced CNN model are trained on a big data set to identify pneumonia. The data set, obtained from Kaggle and expanded to include further records, contains 5863 chest XRIs organized into three folders (train, val, test). Such data are produced every day from personnel records and Internet of Medical Things devices. According to the experimental findings, the ResNet-50 model showed the lowest accuracy, 82.8%, while the enhanced CNN model showed the highest accuracy, 92.4%. Owing to its high accuracy, the enhanced CNN was regarded as the best model in this study. The techniques developed here outperformed popular ensemble techniques, and the models showed better results than those generated by cutting-edge methods. The implication of our study is that DL models can detect the progression of pneumonia, improving overall diagnostic accuracy and giving patients new hope for speedy treatment. Since the enhanced CNN and ResNet-50 with fine-tuning showed the highest accuracy compared with the other algorithms, it was concluded that these techniques can be effectively used to identify pneumonia after fine-tuning.
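A typical two-stage ResNet-50 transfer-learning recipe (train a new head, then unfreeze the top of the backbone and fine-tune) can be sketched in Keras; the image size, head layout, learning rates, and number of unfrozen layers are assumptions, not the study's settings.

```python
import tensorflow as tf

# Stage 1: frozen ImageNet backbone with a new binary classification head.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2: unfreeze the top of the backbone and fine-tune at a lower rate.
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```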

19.
J Imaging; 9(2), 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36826951

ABSTRACT

Radiomic analysis allows the detection of imaging biomarkers that support decision-making processes in clinical environments, from diagnosis to prognosis. The original set of radiomic features is frequently augmented with high-level features such as wavelet transforms. However, several wavelet families (so-called kernels) can generate different multi-resolution representations of the original image, and it is not yet clear which of them produces the most salient images. In this study, an in-depth analysis compares different wavelet kernels and evaluates their impact on the predictive capabilities of radiomic models. A dataset of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost models were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build the predictive models. To evaluate the models' generalization capability on unseen data, a test phase was then performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and both were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.
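The four wavelet kernels named above are available in PyWavelets under the identifiers 'bior1.5', 'coif1', 'haar', and 'sym2'. The sketch below extracts simple per-sub-band statistics as a stand-in for the study's full radiomics pipeline; the feature choices and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_features(image: np.ndarray, kernel: str, level: int = 1) -> dict:
    """First-order statistics of 2-D wavelet sub-bands for one kernel."""
    coeffs = pywt.wavedec2(image, wavelet=kernel, level=level)
    approx, details = coeffs[0], coeffs[1:]
    feats = {
        f"{kernel}_LL_mean": float(np.mean(approx)),
        f"{kernel}_LL_std": float(np.std(approx)),
    }
    for lvl, (ch, cv, cd) in enumerate(details, start=1):
        for name, band in zip(("LH", "HL", "HH"), (ch, cv, cd)):
            feats[f"{kernel}_{name}{lvl}_energy"] = float(np.sum(band ** 2))
    return feats

image = np.random.rand(256, 256)          # placeholder for a CXR image
for kernel in ("bior1.5", "coif1", "haar", "sym2"):
    print(kernel, len(wavelet_features(image, kernel)), "features")
```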

20.
J Digit Imaging; 36(3): 988-1000, 2023 06.
Article in English | MEDLINE | ID: mdl-36813978

ABSTRACT

COVID-19 has claimed millions of lives since its outbreak in December 2019, and the damage continues, so it is urgent to develop new technologies to aid its diagnosis. However, the state-of-the-art deep learning methods often rely on large-scale labeled data, limiting their clinical application in COVID-19 identification. Recently, capsule networks have achieved highly competitive performance for COVID-19 detection, but they require expensive routing computation or traditional matrix multiplication to deal with the capsule dimensional entanglement. A more lightweight capsule network is developed to effectively address these problems, namely DPDH-CapNet, which aims to enhance the technology of automated diagnosis for COVID-19 chest X-ray images. It adopts depthwise convolution (D), point convolution (P), and dilated convolution (D) to construct a new feature extractor, thus successfully capturing the local and global dependencies of COVID-19 pathological features. Simultaneously, it constructs the classification layer by homogeneous (H) vector capsules with an adaptive, non-iterative, and non-routing mechanism. We conduct experiments on two publicly available combined datasets, including normal, pneumonia, and COVID-19 images. With a limited number of samples, the parameters of the proposed model are reduced by 9x compared to the state-of-the-art capsule network. Moreover, our model has faster convergence speed and better generalization, and its accuracy, precision, recall, and F-measure are improved to 97.99%, 98.05%, 98.02%, and 98.03%, respectively. In addition, experimental results demonstrate that, contrary to the transfer learning method, the proposed model does not require pre-training and a large number of training samples.
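The depthwise-pointwise-dilated (D-P-D) idea behind the feature extractor can be sketched in PyTorch; the channel widths, dilation rate, and the homogeneous-capsule classification head are not specified in the abstract, so this block is a generic illustration rather than DPDH-CapNet itself.

```python
import torch
import torch.nn as nn

class DPDBlock(nn.Module):
    """Depthwise + pointwise + dilated convolution feature-extractor block."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)   # per-channel filtering
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.dilated = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                 padding=dilation, dilation=dilation,
                                 bias=False)                    # wider receptive field
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.pointwise(self.depthwise(x)))
        return self.act(self.bn(self.dilated(x)))

features = DPDBlock(3, 32)(torch.randn(1, 3, 224, 224))  # dummy CXR batch
```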


Subject(s)
COVID-19, Humans, COVID-19/diagnostic imaging, COVID-19 Testing, X-Rays