Results 1 - 20 of 3,212
1.
Sci Rep ; 14(1): 21459, 2024 09 13.
Article in English | MEDLINE | ID: mdl-39271825

ABSTRACT

Data augmentation is a technique usually deployed to mitigate the performance limitations of training a neural network model on a limited dataset, especially in the medical domain. This paper presents a study on the effects of applying different rotation settings to augment cardiac volumes from the Multi-modality Whole Heart Segmentation dataset, in order to improve segmentation performance. The study compares conventional 2D (slice-wise) rotation, primarily on the axial axis, 3D (volume-wise) rotation, and our proposed rotation setting, which takes into account the possible alignment of the heart according to its anatomy. The study suggests two key considerations: that 2D slice-wise rotation should be avoided when using 3D data for segmentation, due to the intrinsic structural correlation between subsequent slices, and that 3D rotations may help improve segmentation performance on data previously unseen by the model.
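A minimal sketch of the two augmentation styles being compared, assuming SciPy is available; the angle ranges and axis order are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_2d_slicewise(volume, angle):
    """Rotate each axial slice in-plane only (the conventional 2D setting)."""
    # axes=(1, 2) rotates within each (y, x) slice, leaving the z axis fixed.
    return rotate(volume, angle, axes=(1, 2), reshape=False, order=1)

def rotate_3d(volume, angles):
    """Apply rotations about all three anatomical planes in sequence."""
    out = rotate(volume, angles[0], axes=(1, 2), reshape=False, order=1)
    out = rotate(out, angles[1], axes=(0, 2), reshape=False, order=1)
    out = rotate(out, angles[2], axes=(0, 1), reshape=False, order=1)
    return out

rng = np.random.default_rng(0)
vol = rng.random((64, 128, 128))                 # toy (z, y, x) cardiac volume
aug = rotate_3d(vol, rng.uniform(-15, 15, size=3))  # assumed ±15° range
```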


Subject(s)
Heart , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Heart/diagnostic imaging , Heart/anatomy & histology , Neural Networks, Computer , Algorithms , Image Processing, Computer-Assisted/methods
2.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275561

ABSTRACT

Potholes and other road surface damages pose significant risks to vehicles and traffic safety. The current methods of in situ visual inspection for potholes or cracks are inefficient, costly, and hazardous. Therefore, there is a pressing need to develop automated systems for assessing road surface conditions, aiming to efficiently and accurately reconstruct, recognize, and locate potholes. In recent years, various methods utilizing (a) computer vision, (b) three-dimensional (3D) point clouds, or (c) smartphone data have been employed to map road surface quality conditions. Machine learning and deep learning techniques have increasingly enhanced the performance of these methods. This review aims to provide a comprehensive overview of cutting-edge computer vision and machine learning algorithms for pothole detection. It covers topics such as sensing systems for acquiring two-dimensional (2D) and 3D road data, classical algorithms based on 2D image processing, segmentation-based algorithms using 3D point cloud modeling, machine learning, deep learning algorithms, and hybrid approaches. The review highlights that hybrid methods combining traditional image processing and advanced machine learning techniques offer the highest accuracy in pothole detection. Machine learning approaches, particularly deep learning, demonstrate superior adaptability and detection rates, while traditional 2D and 3D methods provide valuable baseline techniques. By reviewing and evaluating existing vision-based methods, this paper clarifies the current landscape of pothole detection technologies and identifies opportunities for future research and development. Additionally, insights provided by this review can inform the design and implementation of more robust and effective systems for automated road surface condition assessment, thereby contributing to enhanced roadway safety and infrastructure management.

3.
Sensors (Basel) ; 24(17)2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39275725

ABSTRACT

This paper comprehensively reviews hardware acceleration techniques and the deployment of convolutional neural networks (CNNs) for analyzing electroencephalogram (EEG) signals across various application areas, including emotion classification, motor imagery, epilepsy detection, and sleep monitoring. Previous reviews of EEG have mainly focused on software solutions and often overlook key challenges associated with hardware implementation, such as scenarios that require small size, low power, high security, and high accuracy. This paper discusses the challenges and opportunities of hardware acceleration for wearable EEG devices by focusing on these aspects. Specifically, this review classifies EEG signal features into five groups and discusses hardware implementation solutions for each category in detail, providing insights into the most suitable hardware acceleration strategies for various application scenarios. In addition, it explores the complexity of efficient CNN architectures for EEG signals, including techniques such as pruning, quantization, tensor decomposition, knowledge distillation, and neural architecture search. To the best of our knowledge, this is the first systematic review that combines CNN hardware solutions with EEG signal processing. By offering a comprehensive analysis of current challenges and a roadmap for future research, this paper provides a new perspective on the ongoing development of hardware-accelerated EEG systems.
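As an illustration of one of the compression techniques surveyed, here is a minimal NumPy sketch of uniform 8-bit post-training weight quantization; the weights are random stand-ins, not EEG-model parameters:

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a per-tensor scale and zero point."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0
    zero_point = np.round(-w_min / scale) - 128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights for accuracy checks."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(64, 32).astype(np.float32)   # toy conv-layer weights
q, s, z = quantize_int8(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s, z)).max())
```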


Subject(s)
Electroencephalography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Electroencephalography/methods , Electroencephalography/instrumentation , Humans , Wearable Electronic Devices , Epilepsy/diagnosis , Epilepsy/physiopathology
4.
MethodsX ; 13: 102910, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39280760

ABSTRACT

The prevalence of diabetic retinopathy (DR) among the geriatric population poses significant challenges for early detection and management. Optical coherence tomography angiography (OCTA) combined with deep learning presents a promising avenue for improving diagnostic accuracy in this vulnerable demographic. In this method, we propose an approach utilizing OCTA images and deep learning algorithms to detect diabetic retinopathy in geriatric patients. We collected 262 OCTA scans of 179 elderly individuals, both with and without diabetes, and trained a deep-learning model to classify retinopathy severity levels. Convolutional neural network (CNN) models (Inception V3, ResNet-50, ResNet50V2, VGGNet-16, VGGNet-19, DenseNet121, DenseNet201, and EfficientNetV2B0) are trained to extract features and classify them. Here we demonstrate (see the sketch after this list):
•The potential of OCTA and deep learning in enhancing geriatric eye care at the earliest stage.
•The importance of technological advancements in addressing age-related ocular diseases and providing reliable assistance to clinicians for DR classification.
•The efficacy of this approach in accurately identifying diabetic retinopathy stages, thereby facilitating timely interventions and preventing vision loss in the elderly population.
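A minimal transfer-learning sketch in the spirit of the CNN comparison above, using Keras; the frozen InceptionV3 backbone, the 5-class severity head, and the input size are assumptions rather than the authors' exact configuration:

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                       # freeze ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # assumed DR severity levels
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(octa_train, validation_data=octa_val, epochs=...)  # hypothetical datasets
```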

5.
J Bone Oncol ; 48: 100626, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39290649

ABSTRACT

Objective: Bone tumors, known for their infrequent occurrence and diverse imaging characteristics, require precise differentiation into benign and malignant categories. Existing diagnostic approaches depend heavily on laborious and variable manual delineation of tumor regions. Deep learning methods, particularly convolutional neural networks (CNNs), have emerged as a promising solution to these issues. This paper introduces an enhanced deep-learning model based on AlexNet to classify femoral bone tumors accurately. Methods: This study involved 500 femoral tumor patients from July 2020 to January 2023, with 500 imaging cases (335 benign and 165 malignant). A CNN was employed for automated classification. The model framework encompassed training and testing stages, with 8 layers (5 convolutional and 3 fully connected) and ReLU activation. The essential architectural modification was the addition of Batch Normalization (BN) after the first and second convolutional filters. Comparative experiments with various existing methods were conducted to assess algorithm performance in tumor staging. Evaluation metrics encompassed accuracy, precision, sensitivity, specificity, F-measure, ROC curves, and AUC values. Results: The analysis of precision, sensitivity, specificity, and F1 score demonstrates that the method introduced in this paper offers several advantages, including a low feature dimension and robust generalization (accuracy of 98.34 %, sensitivity of 97.26 %, specificity of 95.74 %, and F1 score of 96.37). These findings underscore its exceptional overall detection capability. Notably, the various algorithms compared generally exhibit similar classification performance, but the algorithm presented in this paper stands out with a higher AUC value (AUC = 0.848), signifying enhanced sensitivity and more robust specificity. Conclusion: This study presents an optimized AlexNet model for classifying femoral bone tumor images based on convolutional neural networks. The algorithm demonstrates higher accuracy, precision, sensitivity, specificity, and F1-score than other methods, and the AUC value further confirms its strong performance in terms of sensitivity and specificity. This research contributes to medical image classification by offering an efficient automated classification solution, and holds potential to advance the application of artificial intelligence in bone tumor classification.
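A minimal PyTorch sketch of the described architecture (5 convolutional + 3 fully connected layers with ReLU, and Batch Normalization after the first two convolutions); the channel widths follow standard AlexNet and are assumptions beyond what the abstract states:

```python
import torch
import torch.nn as nn

class BNAlexNet(nn.Module):
    def __init__(self, num_classes=2):              # benign vs. malignant
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.BatchNorm2d(64),
            nn.ReLU(inplace=True), nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 192, 5, padding=2), nn.BatchNorm2d(192),
            nn.ReLU(inplace=True), nn.MaxPool2d(3, 2),
            nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = BNAlexNet()(torch.randn(1, 3, 224, 224))   # -> shape (1, 2)
```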

6.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292381

ABSTRACT

Accurate and fast extraction of step parameters from video recordings of gait allows richer information to be obtained from clinical tests such as the Timed Up and Go. Current deep-learning methods are promising but lack the accuracy required for many clinical use cases. Extracting step parameters often depends on landmarks (keypoints) extracted on the feet. We hypothesize that such keypoints can be determined from video recordings, with an accuracy relevant for clinical practice, by combining an existing general-purpose pose estimation method (OpenPose) with custom convolutional neural networks (convnets) specifically trained to identify keypoints on the heel. The combined method finds keypoints on the posterior and lateral aspects of the heel in side-view and frontal-view images, from which step length and step width can be determined for calibrated cameras. Six candidate convnets were evaluated, combining three standard architectures as feature-extraction networks (backbones) with two different networks for predicting keypoints on the heel (head networks). Using transfer learning, the backbone networks were pre-trained on the ImageNet dataset, and the combined networks (backbone + head) were fine-tuned on data from 184 trials of older, unimpaired adults. The data were recorded at three different locations and consisted of 193k side-view images and 110k frontal-view images. We evaluated the six models using the absolute distance on the floor between predicted and manually labelled keypoints. For the best-performing convnet, the median error was 0.55 cm and the 75th percentile was below 1.26 cm using data from the side-view camera. The predictions are overall accurate but show some outliers. The results indicate potential for future clinical use by automating a key step in marker-less gait parameter extraction.
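A minimal sketch of the backbone + head pattern described above: an ImageNet-pretrained feature extractor with a small convolutional head predicting a heel-keypoint heatmap. The specific backbone (ResNet-18) and head design are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class HeelKeypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        # Keep everything up to the last residual stage (stride-32 features).
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(                  # 1-channel heel heatmap
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

heatmap = HeelKeypointNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 8, 8)
# Fine-tuning would regress this heatmap against Gaussian targets centred on
# the manually labelled heel keypoints.
```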

7.
Int J Mol Sci ; 25(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39273622

ABSTRACT

Glycation stress (GS), induced by advanced glycation end-products (AGEs), significantly impacts aging processes. This study introduces a new model of GS in Caenorhabditis elegans, created by feeding worms Escherichia coli OP50 cultured in a glucose-enriched medium, which simulates human dietary glycation better than previous single protein-glucose cross-linking methods. Utilizing WormCNN, a deep learning model, we assessed the health status and calculated the Healthy Aging Index (HAI) of worms with or without GS. Our results demonstrated accelerated aging in the GS group, evidenced by increased autofluorescence and altered expression of the key aging regulator genes daf-2 and daf-16. Additionally, we observed elevated pharyngeal pumping rates in AGEs-fed worms, suggesting an addictive response similar to human dietary patterns. This study highlights the profound effects of GS on worm aging and underscores the critical role of computer vision in accurately assessing health status and aiding the establishment of disease models. The findings provide insights into glycation-induced aging and offer a comprehensive approach to studying the effects of dietary glycation on aging processes.


Subject(s)
Caenorhabditis elegans Proteins , Caenorhabditis elegans , Glycation End Products, Advanced , Animals , Caenorhabditis elegans/metabolism , Caenorhabditis elegans/genetics , Glycation End Products, Advanced/metabolism , Caenorhabditis elegans Proteins/metabolism , Caenorhabditis elegans Proteins/genetics , Healthy Aging/metabolism , Aging/metabolism , Stress, Physiological , Forkhead Transcription Factors/metabolism , Forkhead Transcription Factors/genetics , Glycosylation , Glucose/metabolism , Disease Models, Animal , Receptor, Insulin
8.
Neural Netw ; 179: 106496, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39285609

ABSTRACT

Filter pruning has achieved remarkable success in reducing memory consumption and speeding up inference for convolutional neural networks (CNNs). Some prior works, such as heuristic methods, attempt to search for suitable sparse structures during the pruning process, which can be expensive and time-consuming. In this paper, an efficient cross-layer importance evaluation (CIE) method is proposed to automatically calculate proportional relationships among convolutional layers. First, every layer is pruned separately by grid sampling to obtain the model's accuracy at each sampling point. Contribution matrices are then built to describe the importance of each layer to model accuracy. Finally, a binary search is used to find the optimal sparse structure for a target pruning ratio. Extensive experiments on multiple representative image classification tasks demonstrate that the proposed method achieves better compression performance at a small time cost compared to existing pruning algorithms. For instance, it reduces FLOPs by more than 50% with losses of only 0.93% and 0.43% in top-1 and top-5 accuracy for ResNet50, respectively. At the cost of only 0.24% accuracy loss, the pruned VGG19 model's parameters are compressed by 27.23× and throughput increases by 2.46×. Overall, CIE benefits the deployment of CNN models on edge devices in terms of both efficiency and accuracy.
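A minimal sketch of the final search step, assuming per-filter importance scores and a toy FLOPs model; binary search finds the importance threshold that meets a target FLOPs budget:

```python
import numpy as np

def flops_after_pruning(importance, flops_per_filter, threshold):
    """Toy cost model: filters whose importance falls below the threshold are removed."""
    kept = importance >= threshold
    return float(np.sum(flops_per_filter[kept]))

def search_threshold(importance, flops_per_filter, target_ratio, iters=50):
    """Binary-search the threshold whose remaining-FLOPs ratio meets the target."""
    total = flops_per_filter.sum()
    lo, hi = importance.min(), importance.max()
    for _ in range(iters):
        mid = (lo + hi) / 2
        if flops_after_pruning(importance, flops_per_filter, mid) / total > target_ratio:
            lo = mid              # still too expensive: prune more aggressively
        else:
            hi = mid
    return hi

rng = np.random.default_rng(1)
imp = rng.random(1000)                      # per-filter importance scores (toy)
cost = rng.uniform(1e6, 5e6, size=1000)     # per-filter FLOPs (toy)
thr = search_threshold(imp, cost, target_ratio=0.5)   # keep ~50% of FLOPs
```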


Subject(s)
Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Humans
9.
IEEE Trans Comput Soc Syst ; 11(1): 247-266, 2024 Feb.
Article in English | MEDLINE | ID: mdl-39239536

ABSTRACT

An adaptive interpretable ensemble model based on a three-dimensional convolutional neural network (3DCNN) and a genetic algorithm (GA), i.e., 3DCNN+EL+GA, was proposed to differentiate subjects with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and to identify, in a data-driven way, the discriminative brain regions significantly contributing to the classifications. In addition, the discriminative brain sub-regions were located at a voxel level within these brain regions, using a gradient-based attribution method designed for CNNs. Besides disclosing the discriminative brain sub-regions, the testing results on datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS) indicated that 3DCNN+EL+GA outperformed other state-of-the-art deep learning algorithms and that the identified discriminative brain regions (e.g., the rostral hippocampus, caudal hippocampus, and medial amygdala) were linked to emotion, memory, language, and other essential brain functions impaired early in the AD process. Future research is needed to examine the generalizability of the proposed method and ideas for discerning discriminative brain regions for other brain disorders, such as severe depression, schizophrenia, autism, and cerebrovascular diseases, using neuroimaging.
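A minimal 3D-CNN sketch in the spirit of the ensemble's base classifiers; the layer sizes and input volume shape are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
    nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(64, 2),                       # e.g., AD vs. MCI
)
logits = model(torch.randn(1, 1, 96, 112, 96))   # toy T1-weighted MRI volume
```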

10.
Diagnostics (Basel) ; 14(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39272624

ABSTRACT

The application of artificial intelligence (AI) in electrocardiography is revolutionizing cardiology and providing essential insights into the consequences of the COVID-19 pandemic. This comprehensive review explores AI-enhanced ECG (AI-ECG) applications in risk prediction and diagnosis of heart diseases, with a dedicated chapter on COVID-19-related complications. Introductory concepts on AI and machine learning (ML) are explained to provide a foundational understanding for those seeking knowledge, supported by examples from the literature and current practices. We analyze AI and ML methods for arrhythmias, heart failure, pulmonary hypertension, mortality prediction, cardiomyopathy, mitral regurgitation, hypertension, pulmonary embolism, and myocardial infarction, comparing their effectiveness from both medical and AI perspectives. Special emphasis is placed on AI applications in COVID-19 and cardiology, including detailed comparisons of different methods, identifying the most suitable AI approaches for specific medical applications and analyzing their strengths, weaknesses, accuracy, clinical relevance, and key findings. Additionally, we explore AI's role in the emerging field of cardio-oncology, particularly in managing chemotherapy-induced cardiotoxicity and detecting cardiac masses. This comprehensive review serves as both an insightful guide and a call to action for further research and collaboration in the integration of AI in cardiology, aiming to enhance precision medicine and optimize clinical decision-making.

11.
Diagnostics (Basel) ; 14(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39272696

ABSTRACT

The aim of this research is to develop an automated diagnosis system for the prediction of rheumatoid arthritis (RA) based on artificial intelligence (AI) and quantum computing, using hand radiographs and thermal images. The hand radiographs and thermal images were segmented using a UNet++ model and a color-based k-means clustering technique, respectively. Attributes from the segmented regions were generated using the Speeded-Up Robust Features (SURF) feature extractor, and classification was performed using k-star and Hoeffding classifiers. For the ground truth and the predicted test image, the study utilizing UNet++ segmentation achieved a pixel-wise accuracy of 98.75%, an intersection over union (IoU) of 0.87, and a Dice coefficient of 0.86, indicating a high level of similarity. The custom RA X-ray and thermal imaging network (RA-XTNet) surpassed all other models for the detection of RA, with classification accuracies of 90% and 93% for the X-ray and thermal imaging modalities, respectively. Furthermore, the study employed a quantum support vector machine (QSVM), a quantum computing approach, which yielded accuracies of 93.75% and 87.5% for the detection of RA from hand X-ray and thermal images. In addition, a vision transformer (ViT) was employed to classify RA, obtaining an accuracy of 80% for hand X-rays and 90% for thermal images. Based on these performance measures, the RA-XTNet model can be used as an effective automated diagnostic method to diagnose RA accurately and rapidly from hand radiographs and thermal images.
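A minimal sketch of color-based k-means segmentation as it might be applied to the thermal images; the cluster count and the rule for selecting the hand cluster are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_color_segment(image_rgb, n_clusters=3):
    """Cluster pixels by color and return a per-pixel label map."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

thermal = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # toy image
label_map = kmeans_color_segment(thermal)
# One cluster (e.g., the warmest colors) would then be kept as the hand region.
```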

12.
Heliyon ; 10(16): e36112, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253141

ABSTRACT

Implementing diabetes surveillance systems is paramount to mitigating the risk of substantial medical expenses. Currently, blood glucose is measured by minimally invasive methods that involve extracting a small blood sample and transmitting it to a blood glucose meter, which individuals find uncomfortable. The present study introduces an Explainable Artificial Intelligence (XAI) system that aims to create an intelligible machine capable of explaining expected outcomes and decision models. To this end, we analyze abnormal glucose levels using a Bi-directional Long Short-Term Memory (Bi-LSTM) network and a Convolutional Neural Network (CNN). The glucose levels are acquired through glucose oxidase (GOD) strips placed on the human body. The signal data are then converted to spectrogram images and classified as low, average, or abnormal glucose levels. The labeled spectrogram images are used to train the individualized monitoring model. The proposed XAI model for tracking real-time glucose levels uses an XAI-driven architecture in its feature processing. The model's effectiveness is evaluated by analyzing its performance and several evaluation metrics derived from the confusion matrix. The results demonstrate that the proposed model effectively identifies individuals with elevated glucose levels.
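A minimal sketch of the signal-to-spectrogram step, assuming SciPy; the sampling rate and window settings are illustrative, not the study's values:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                                       # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)  # toy trace

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
log_spec = 10 * np.log10(Sxx + 1e-12)            # dB-scaled spectrogram "image"
# log_spec (freq x time) can be saved as an image and fed to the CNN/Bi-LSTM,
# labelled low / average / abnormal per the corresponding glucose reading.
```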

13.
J Stomatol Oral Maxillofac Surg ; : 102048, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39244033

ABSTRACT

INTRODUCTION: In orthodontic treatments, accurately assessing the upper airway volume and morphology is essential for proper diagnosis and planning. Cone beam computed tomography (CBCT) is used for assessing upper airway volume through manual, semi-automatic, and automatic airway segmentation methods. This study evaluates upper airway segmentation accuracy by comparing the results of an automatic model and a semi-automatic method against the gold standard manual method. MATERIALS AND METHODS: An automatic segmentation model was trained using the MONAI Label framework to segment the upper airway from CBCT images. An open-source program, ITK-SNAP, was used for semi-automatic segmentation. The accuracy of both methods was evaluated against manual segmentations. Evaluation metrics included Dice Similarity Coefficient (DSC), Precision, Recall, 95% Hausdorff Distance (HD), and volumetric differences. RESULTS: The automatic segmentation group averaged a DSC score of 0.915±0.041, while the semi-automatic group scored 0.940±0.021, indicating clinically acceptable accuracy for both methods. Analysis of the 95% HD revealed that semi-automatic segmentation (0.997±0.585) was more accurate and closer to manual segmentation than automatic segmentation (1.447±0.674). Volumetric comparisons revealed no statistically significant differences between automatic and manual segmentation for total, oropharyngeal, and velopharyngeal airway volumes. Similarly, no significant differences were noted between the semi-automatic and manual methods across these regions. CONCLUSION: It has been observed that both automatic and semi-automatic methods, which utilise open-source software, align effectively with manual segmentation. Implementing these methods can aid in decision-making by allowing faster and easier upper airway segmentation with comparable accuracy in orthodontic practice.
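A minimal sketch of the Dice Similarity Coefficient used above to compare the automatic and semi-automatic masks against manual segmentation; the masks below are random stand-ins:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(0)
manual = rng.random((128, 128, 64)) > 0.5        # toy ground-truth airway mask
auto = manual.copy()
auto[rng.random(auto.shape) < 0.05] ^= True      # perturb 5% of voxels
print(f"DSC: {dice(auto, manual):.3f}")          # ≈ 0.95 for this toy case
```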

14.
J Bone Oncol ; 48: 100629, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39257652

ABSTRACT

Objective: This study aims to explore the application of radiographic imaging and image recognition algorithms, particularly AlexNet and ResNet, in classifying the malignancy of spinal bone tumors. Methods: We selected a cohort of 580 patients diagnosed with primary spinal osseous tumors who underwent treatment at our hospital between January 2016 and December 2023; 1532 images (679 of benign tumors, 853 of malignant tumors) were extracted from this imaging dataset, with training and validation following a 2:1 ratio. All patients underwent X-ray examinations as part of their diagnostic workup. This study employed convolutional neural networks (CNNs) to categorize spinal bone tumor images according to their malignancy, using AlexNet and ResNet models. These models were fine-tuned through training on a database of bone tumor images representing the different categories. Results: The performance of AlexNet and ResNet in classifying spinal bone tumor malignancy was extensively evaluated on this dataset. AlexNet exhibited commendable training efficiency, with each epoch taking an average of 3 s, and a classification accuracy of approximately 95.6 %. ResNet achieved a higher accuracy of 96.2 % after an extended training period, signifying its proficiency in distinguishing the malignancy of spinal bone tumors. These results illustrate AlexNet's clear advantage in training efficiency despite its lower classification accuracy, while the ResNet model is preferable when accuracy is favored in diagnosing spinal bone tumor malignancy, albeit at the cost of longer training times, with each epoch taking an average of 32 s. Conclusion: Integrating deep learning and CNN-based image recognition technology offers a promising solution for classifying bone tumors. This research underscores the potential of these models to enhance diagnosis and treatment, benefiting both patients and medical professionals, and highlights the significance of selecting appropriate models, such as ResNet, to improve accuracy in image recognition tasks.

15.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227538

ABSTRACT

Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing grayscale variations in liver tissue across computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases, and accurate differentiation between these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to detect HCC, metastasis, and healthy liver tissue with enhanced diagnostic accuracy. The system incorporates automatic segmentation and classification: the liver lesion segmentation model is implemented with a residual attention U-Net, and a 9-layer CNN implements the lesion classification model, whose input combines the segmentation results with the original images. The dataset included 300 patients, 223 of whom were used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, comprising 20 HCC cases, 27 with metastasis, and 30 healthy. The system achieved a mean Dice score of 87.65 % in segmentation and a mean accuracy of 93.97 % in classification, both in the test phase. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
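A minimal sketch of the classifier-input construction described above, in which the predicted lesion mask is stacked with the original CT image as a two-channel input; shapes are illustrative:

```python
import torch

ct_slice = torch.randn(1, 1, 256, 256)                     # toy grayscale CT image
lesion_mask = (torch.rand(1, 1, 256, 256) > 0.9).float()   # toy U-Net output

classifier_input = torch.cat([ct_slice, lesion_mask], dim=1)  # (1, 2, 256, 256)
# A 9-layer CNN taking 2 input channels would then classify the slice as
# HCC, metastasis, or healthy tissue.
```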

16.
Sci Rep ; 14(1): 20637, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232043

ABSTRACT

Skin cancer (SC) is an important medical condition that necessitates prompt identification to ensure timely treatment. Although visual evaluation by dermatologists is considered the most reliable method, it is subjective and laborious. Deep learning-based computer-aided diagnostic (CAD) platforms have become valuable tools for supporting dermatologists. Nevertheless, current CAD tools frequently depend on CNNs with large numbers of layers and hyperparameters, single-CNN-model methodologies, and large feature spaces, and they exclusively utilise spatial image information, which restricts their effectiveness. This study presents SCaLiNG, an innovative CAD tool specifically developed to address and surpass these constraints. SCaLiNG leverages a collection of three compact CNNs and Gabor wavelets (GW) to acquire a comprehensive feature vector of spatial-textural-frequency attributes. SCaLiNG gathers a wide range of image details by decomposing the images into multiple directional sub-bands using GW and then training several CNNs on those sub-bands and the original image. SCaLiNG then fuses the attributes taken from the various CNNs trained on the actual images and the GW-derived sub-bands; this fusion improves diagnostic accuracy through a thorough representation of attributes. Furthermore, SCaLiNG applies a feature selection approach that further enhances performance by choosing the most distinguishing features. Experimental findings indicate that SCaLiNG achieves a classification accuracy of 0.9170 in categorising SC subcategories, surpassing conventional single-CNN models. This outstanding performance underlines SCaLiNG's ability to aid dermatologists in swiftly and precisely recognising and classifying SC, thereby enhancing patient outcomes.
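A minimal sketch of the Gabor-wavelet decomposition step, assuming OpenCV; the filter parameters and number of orientations are illustrative assumptions:

```python
import cv2
import numpy as np

def gabor_subbands(gray, n_orientations=4):
    """Filter a grayscale image with a small bank of directional Gabor kernels."""
    subbands = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations       # filter orientation
        kernel = cv2.getGaborKernel(
            ksize=(21, 21), sigma=4.0, theta=theta,
            lambd=10.0, gamma=0.5, psi=0)
        subbands.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return subbands

image = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # toy lesion image
bands = gabor_subbands(image)     # 4 directional sub-band images; each sub-band
# and the original image would then feed its own compact CNN before fusion.
```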


Subject(s)
Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/pathology , Deep Learning , Diagnosis, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Algorithms
17.
Sci Rep ; 14(1): 20711, 2024 09 05.
Article in English | MEDLINE | ID: mdl-39237689

ABSTRACT

Tuberculosis (TB) is the leading cause of mortality among infectious diseases globally. Effectively managing TB requires early identification of individuals with TB disease. Resource-constrained settings often lack skilled professionals for interpreting the chest X-rays (CXRs) used in TB diagnosis. To address this challenge, we developed "DecXpert", a novel Computer-Aided Detection (CAD) software solution based on deep neural networks for early TB diagnosis from CXRs, aiming to detect subtle abnormalities that may be overlooked by human interpretation alone. This study was conducted on the largest cohort to date: the performance of the CAD software (DecXpert version 1.4) was validated against the gold-standard molecular diagnostic technique, GeneXpert MTB/RIF, analyzing data from 4363 individuals across 12 primary health care centers and one tertiary hospital in North India. DecXpert demonstrated 88% sensitivity (95% CI 0.85-0.93) and 85% specificity (95% CI 0.82-0.91) for active TB detection. Incorporating demographics, DecXpert achieved an area under the curve of 0.91 (95% CI 0.88-0.94), indicating robust diagnostic performance. Our findings establish DecXpert's potential as an accurate, efficient AI solution for early identification of active TB cases. Deployed as a screening tool in resource-limited settings, DecXpert could enable early identification of individuals with TB disease and facilitate effective TB management where skilled radiological interpretation is limited.
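A minimal sketch of how the reported screening metrics follow from a 2×2 confusion matrix; the counts below are hypothetical, not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 table: CAD prediction vs. GeneXpert MTB/RIF reference.
tp, fn, tn, fp = 88, 12, 85, 15
sens, spec = sensitivity_specificity(tp, fn, tn, fp)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")   # 0.88, 0.85
```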


Subject(s)
Software , Humans , India/epidemiology , Female , Male , Adult , Middle Aged , Diagnosis, Computer-Assisted/methods , Tuberculosis/diagnosis , Tuberculosis/diagnostic imaging , Tuberculosis, Pulmonary/diagnostic imaging , Tuberculosis, Pulmonary/diagnosis , Sensitivity and Specificity , Young Adult , Adolescent , Radiography, Thoracic/methods , Aged
18.
J Comput Chem ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223071

ABSTRACT

Predicting protein-ligand binding affinity is a crucial and challenging task in structure-based drug discovery. With the accumulation of complex structures and binding affinity data, various machine-learning scoring functions, particularly those based on deep learning, have been developed for this task, exhibiting superiority over their traditional counterparts. This work proposes a fusion model that sequentially connects a graph neural network (GNN) and a convolutional neural network (CNN) to predict protein-ligand binding affinity. In this model, the intermediate outputs of the GNN layers, serving as supplementary descriptors of atomic chemical environments at different levels, are concatenated with the input features of the CNN. The model demonstrates a noticeable improvement in performance on the CASF-2016 benchmark compared to its constituent CNN models. The generalization ability of the model is evaluated by setting a series of thresholds for ligand extended-connectivity fingerprint similarity or protein sequence similarity between the training and test sets. A masking experiment reveals that the model can capture key interaction regions. Furthermore, the fusion model is applied to a virtual screening task for a novel target, PI5P4Kα, where the fusion strategy significantly improves the ability of the constituent CNN model to identify active compounds. This work offers a novel approach to enhancing the accuracy of deep learning models in predicting binding affinity through fusion strategies.
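A minimal sketch of the fusion idea: per-atom embeddings from successive GNN layers are pooled and appended to the feature grid the CNN consumes. The message-passing scheme, tensor sizes, and pooling are all assumptions standing in for the paper's architecture:

```python
import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # Simple message passing: aggregate transformed neighbour features.
        return torch.relu(adj @ self.lin(h))

n_atoms, dim = 30, 16
h = torch.randn(n_atoms, dim)                    # initial atom features
adj = torch.eye(n_atoms)                         # toy normalized adjacency

layers = nn.ModuleList(TinyGNNLayer(dim) for _ in range(3))
pooled = []
for layer in layers:
    h = layer(h, adj)
    pooled.append(h.mean(dim=0))                 # graph-level summary per layer

grid = torch.randn(1, 8, 24, 24, 24)             # toy 3D grid features for the CNN
extra = torch.cat(pooled).view(1, -1, 1, 1, 1).expand(-1, -1, 24, 24, 24)
cnn_input = torch.cat([grid, extra], dim=1)      # (1, 8 + 3*16, 24, 24, 24)
```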

19.
Dent Mater J ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39231691

ABSTRACT

This project aimed to develop an artificial intelligence program tailored to cephalometric images. The program employs a convolutional neural network with 6 convolutional layers and 2 affine layers, and identifies 18 key points on the skull to compute various angles essential for diagnosis. Using a custom-built desktop computer with a moderately priced graphics processing unit, cephalogram images were resized to 800×800 pixels. Training data comprised 833 images, augmented 100 times; an additional 179 images were used for testing. Due to the complexity of training with full-size images, training was divided into two steps. The first step reduced images to 128×128 pixels and recognized all 18 points; in the second step, 100×100-pixel blocks were extracted from the original images for individual point training. The program then measured six angles, achieving an average error of 3.1 pixels across the 18 points, with SNA and SNB angles showing an average difference of less than 1°.
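A minimal sketch of the two-step scheme: a coarse pass on a downscaled image proposes each landmark, then a 100×100 patch around the proposal is cropped from the full-resolution image for per-point refinement; the border-padding behaviour is an assumption:

```python
import numpy as np

def crop_patch(image, center_xy, size=100):
    """Crop a size x size patch around (x, y), padding at the borders."""
    half = size // 2
    pad = np.pad(image, half, mode="edge")       # guard against edge landmarks
    x, y = int(center_xy[0]) + half, int(center_xy[1]) + half
    return pad[y - half:y + half, x - half:x + half]

full = np.random.rand(800, 800)                  # toy 800x800 cephalogram
coarse_xy = (412.0, 375.0)                       # hypothetical step-1 estimate
patch = crop_patch(full, coarse_xy)              # 100x100 input for step 2
```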

20.
J World Fed Orthod ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39232889

ABSTRACT

BACKGROUND: The purpose of this study was to compare the success of various convolutional neural network (CNN) models trained with handwriting samples in predicting patient cooperation. METHODS: A total of 237 patients (147 female and 90 male, mean age 14.94 ± 2.4 years) undergoing fixed orthodontic treatment were included in the study. In the 12th month of treatment, participants were divided into two groups based on the patient cooperation scale: cooperative or noncooperative. Handwriting samples were then obtained for each patient, and artificial neural network models were used to classify the patients as cooperative or noncooperative from the collected data. The accuracy, precision, recall, and F1-scores of nine different CNN models were compared. RESULTS: By overall success rate, InceptionResNetV2 (accuracy: 72.0%, F1-score: 0.649) and NASNetMobile (accuracy: 70.0%, F1-score: 0.417) were the two most effective CNN models. The two models with the lowest success rates were DenseNet121 (accuracy: 59.0%, F1-score: 0.424) and ResNet50V2 (accuracy: 46.0%, F1-score: 0.286). The success rates of the other five models were comparable. CONCLUSIONS: Artificial intelligence models trained with handwriting samples are not sufficiently accurate for clinical application in cooperation prediction.
