Results 1 - 20 of 46
1.
Ann Med Surg (Lond); 86(7): 3917-3923, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38989161

ABSTRACT

Introduction: In this cross-sectional study, the authors explored the knowledge, attitudes, and practices related to artificial intelligence (AI) among medical students in Sudan. With AI increasingly impacting healthcare, understanding its integration into medical education is crucial. This study aimed to assess the current state of AI awareness, perceptions, and practical experience among medical students in Sudan; to evaluate their familiarity with AI by examining their attitudes toward its application in medicine; to identify the factors influencing knowledge levels; and to explore the practical implementation of AI in the medical field. Method: A web-based survey was distributed to medical students in Sudan via social media platforms and e-mail during October 2023. The survey included questions on demographic information, knowledge of AI, attitudes toward its applications, and practical experiences. Descriptive statistics, χ2 tests, logistic regression, and correlations were computed using SPSS version 26.0. Results: Of the 762 participants, the majority exhibited a basic understanding of AI, but detailed knowledge of its applications was limited. Positive attitudes toward the importance of AI in diagnosis, radiology, and pathology were prevalent. However, practical application of these methods was infrequent, with only a minority of participants having hands-on experience. Factors influencing knowledge included the lack of a formal curriculum and gender disparities. Conclusion: This study highlights the need for comprehensive AI education in medical training programs in Sudan. While participants displayed positive attitudes, there was a notable gap in practical experience. Addressing these gaps through targeted educational interventions is crucial for preparing future healthcare professionals to navigate the evolving landscape of AI in medicine. Recommendations: Policy efforts should focus on integrating AI education into the medical curriculum to ensure readiness for the technological advancements shaping the future of healthcare.

2.
Front Endocrinol (Lausanne); 14: 1025749, 2023.
Article in English | MEDLINE | ID: mdl-37033240

ABSTRACT

Objective: To develop and validate an artificial intelligence diagnostic system based on X-ray imaging data for diagnosing vertebral compression fractures (VCFs). Methods: In total, 1904 patients who underwent X-ray at four independent hospitals were enrolled retrospectively (n=1847) and prospectively (n=57). The participants were separated into a development cohort, a prospective test cohort, and three external test cohorts. The proposed model used a transfer learning method based on the ResNet-18 architecture. The diagnostic performance of the model was evaluated using receiver operating characteristic (ROC) curve analysis and validated using a prospective validation set and three external sets. The performance of the model was compared with that of radiologists at three levels of musculoskeletal expertise: expert, competent, and trainee. Results: The diagnostic accuracy for identifying compression fractures was 0.850 in the testing set, 0.829 in the prospective set, and ranged from 0.757 to 0.832 in the three external validation sets. In the human and deep learning (DL) collaboration dataset, the areas under the ROC curve (AUCs) for acute, chronic, and pathological compression fractures were as follows: 0.780, 0.809, and 0.734 for the DL model; 0.573, 0.618, and 0.541 for the trainee radiologist; 0.701, 0.782, and 0.665 for the competent radiologist; 0.707, 0.732, and 0.667 for the expert radiologist; 0.722, 0.744, and 0.610 for the DL model with the trainee; 0.767, 0.779, and 0.729 for the DL model with the competent radiologist; and 0.801, 0.825, and 0.751 for the DL model with the expert radiologist. Conclusions: Our study offers a high-accuracy multi-class deep learning model that could assist community-based hospitals in improving the diagnostic accuracy of VCFs.
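
A minimal sketch of the kind of ResNet-18 transfer learning named in the abstract, in PyTorch; the class count, head, and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical fine-tuning setup: an ImageNet-pretrained ResNet-18
# repurposed for multi-class VCF classification on X-ray images.
NUM_CLASSES = 4  # e.g., normal / acute / chronic / pathological (assumption)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step; images is (N, 3, H, W), labels is (N,)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```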


Subject(s)
Bone Diseases, Metabolic; Deep Learning; Fractures, Compression; Spinal Fractures; Humans; Artificial Intelligence; Spinal Fractures/diagnostic imaging; Fractures, Compression/diagnostic imaging; Retrospective Studies
3.
JACC Asia; 3(1): 1-14, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36873752

ABSTRACT

Percutaneous coronary intervention has become a standard treatment strategy for patients with coronary artery disease, with continuous, rapid progress in technology and techniques. The application of artificial intelligence, and deep learning in particular, is currently boosting the development of interventional solutions, improving the efficiency and objectivity of diagnosis and treatment. The ever-growing amount of data and computing power, together with cutting-edge algorithms, paves the way for the integration of deep learning into clinical practice, which has revolutionized the interventional workflow in image processing, interpretation, and navigation. This review discusses the development of deep learning algorithms and their corresponding evaluation metrics, together with their clinical applications. Advanced deep learning algorithms create new opportunities for precise diagnosis and tailored treatment with a high degree of automation, reduced radiation, and enhanced risk stratification. Generalization, interpretability, and regulatory issues remain challenges that need to be addressed through joint efforts from the multidisciplinary community.

4.
Clin Transl Radiat Oncol; 39: 100590, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36935854

ABSTRACT

Head and neck radiotherapy induces significant toxicity, and its efficacy and tolerance vary widely across patients. Advancements in radiotherapy delivery techniques, along with the increased quality and frequency of image guidance, offer a unique opportunity to individualize radiotherapy based on imaging biomarkers, with the aim of improving radiation efficacy while reducing its toxicity. Various artificial intelligence models integrating clinical data and radiomics have shown encouraging results for predicting toxicity and cancer control outcomes in head and neck cancer radiotherapy. Clinical implementation of these models could enable individualized, risk-based therapeutic decision making, but the reliability of the current studies is limited. Understanding, validating, and expanding these models to larger multi-institutional datasets, and testing them in the context of clinical trials, is needed to ensure safe clinical implementation. This review summarizes the current state of the art of machine learning models for predicting head and neck cancer radiotherapy outcomes.

5.
J Pathol Inform; 14: 100192, 2023.
Article in English | MEDLINE | ID: mdl-36818020

ABSTRACT

Treatment of patients with oesophageal and gastric cancer (OeGC) is guided by disease stage, patient performance status, and preferences. Lymph node (LN) status is one of the strongest prognostic factors for OeGC patients. However, survival varies between patients with the same disease stage and LN status. We recently showed that LN size in patients with OeGC might also have prognostic value, making delineation of LNs essential for size estimation and the extraction of other imaging biomarkers. We hypothesized that a machine learning workflow would be able to: (1) find digital H&E-stained slides containing LNs, (2) create a scoring system providing degrees of certainty for the results, and (3) delineate LNs in those images. To train and validate the pipeline, we used 1695 H&E slides from the OE02 trial. The dataset was divided into training (80%) and validation (20%) sets. The model was tested on an external dataset of 826 H&E slides from the OE05 trial. A U-Net architecture was used to generate prediction maps, from which predefined features were extracted. These features were subsequently used to train an XGBoost model to determine whether a region truly contained a LN. With our method, the balanced accuracy of LN detection was 0.93 on the validation dataset and 0.83 on the test dataset, compared with 0.81 on both datasets when using the standard method of thresholding U-Net predictions to arrive at a binary mask. Our method allowed for the creation of an "uncertain" category and partly limited false-positive predictions on the external dataset. The mean Dice score was 0.73 (0.60) per image and 0.66 (0.48) per LN for the validation (test) datasets. Our pipeline detects images with LNs more accurately than conventional methods, and high-throughput delineation of LNs can facilitate future analyses of LN content in large datasets.
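
The two-stage design described here (U-Net probability maps feeding an XGBoost region classifier) can be sketched as follows; the feature set and model settings are illustrative assumptions, not the published pipeline:

```python
import numpy as np
from xgboost import XGBClassifier

def region_features(prob_map: np.ndarray) -> np.ndarray:
    """Summarize a U-Net probability map for one candidate region with
    simple statistics (hypothetical features, for illustration only)."""
    return np.array([
        prob_map.mean(),
        prob_map.max(),
        (prob_map > 0.5).mean(),  # fraction of pixels above threshold
        prob_map.std(),
    ])

def train_region_classifier(prob_maps, labels):
    """prob_maps: 2D probability maps from the segmentation network;
    labels: 1 if the region truly contains a lymph node, else 0."""
    X = np.stack([region_features(p) for p in prob_maps])
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X, np.asarray(labels))
    return clf
```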

6.
Comput Struct Biotechnol J; 21: 1487-1497, 2023.
Article in English | MEDLINE | ID: mdl-36851914

ABSTRACT

One of the key features of intrinsically disordered regions (IDRs) is their ability to interact with a broad range of partner molecules. Multiple types of interacting IDRs have been identified, including molecular recognition fragments (MoRFs), short linear sequence motifs (SLiMs), and protein-, nucleic acid- and lipid-binding regions. Prediction of binding IDRs in protein sequences has gained momentum in recent years. We survey 38 predictors of binding IDRs that target interactions with a diverse set of partners, such as peptides, proteins, RNA, DNA, and lipids. We offer a historical perspective and highlight key events that fueled efforts to develop these methods. These tools rely on a diverse range of predictive architectures that include scoring functions, regular expressions, traditional and deep machine learning, and meta-models. Recent efforts focus on the development of deep neural network-based architectures and on extending coverage to RNA-, DNA- and lipid-binding IDRs. We analyze the availability of these methods and show that providing implementations and webservers results in much higher rates of citation and use. We also make several recommendations: take advantage of modern deep network architectures, develop tools that bundle predictions of multiple and different types of binding IDRs, and work on algorithms that model the structures of the resulting complexes.

7.
Heliyon; 9(2): e13601, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852052

ABSTRACT

The prevalence of cardiovascular diseases is increasing around the world. At the same time, technology is evolving, and cardiac activity can now be monitored with low-cost sensors anywhere at any time. This subject is being actively researched, and different methods can automatically identify these diseases, helping patients and healthcare professionals with treatment. This paper presents a systematic review of disease identification, classification, and recognition with ECG sensors. The review focused on studies published between 2017 and 2022 in different scientific databases, including PubMed Central, Springer, Elsevier, Multidisciplinary Digital Publishing Institute (MDPI), IEEE Xplore, and Frontiers, resulting in the quantitative and qualitative analysis of 103 scientific papers. The study demonstrated that different datasets are available online with data related to various diseases. Several ML/DL-based models were identified in the research, with convolutional neural networks and support vector machines being the most frequently applied algorithms. This review can help identify the techniques that can be used in a system that promotes patient autonomy.
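
As a concrete illustration of the model family the review found most common, here is a toy 1D convolutional classifier for fixed-length ECG segments; the architecture and sizes are assumptions for illustration, not drawn from any reviewed study:

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Toy 1D CNN for single-lead ECG segments, e.g., (N, 1, 3600)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling: length-independent
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

logits = ECGClassifier()(torch.randn(8, 1, 3600))  # batch of 8 segments
```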

8.
Ophthalmol Sci; 3(2): 100258, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36685715

ABSTRACT

Purpose: Rare disease diagnosis is challenging in medical image-based artificial intelligence due to a natural class imbalance in datasets, leading to biased prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs). Design: Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning. Participants: Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with confirmed genetic diagnosis of 1 of 36 IRD genes. Methods: A StyleGAN2 model is trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks are trained for classification using different synthetically augmented datasets, including real IRD images plus 1800 and 3600 synthetic images, and a fully rebalanced dataset. We also perform an experiment with only synthetic data. All models are compared against a baseline convolutional neural network trained only on real data. Main Outcome Measures: We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's kappa (κ). Results: An average true recognition rate of 63% and a fake recognition rate of 47% were obtained from the Visual Turing Test; thus, a considerable proportion of the synthetic images were classified as real by clinical experts. Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN did not simply memorize real images and was able to generalize. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC or κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], RB-κ = 0.52 [0.50-0.54]). The model trained on synthetic data alone (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]). Conclusions: Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation does not deliver improvements in classification performance. However, synthetic data alone deliver performance similar to real data and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
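
The rebalancing experiment amounts to topping up each minority class with synthetic images until class counts match; a sketch of that bookkeeping (the function and the example gene counts are hypothetical):

```python
from collections import Counter

def synthetic_quota(labels: list) -> dict:
    """Number of synthetic images per class needed to fully rebalance a
    dataset, i.e., bring every class up to the majority-class count."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Example with 3 imbalanced IRD gene classes (illustrative counts):
quota = synthetic_quota(["ABCA4"] * 500 + ["USH2A"] * 120 + ["RPGR"] * 60)
# -> {'ABCA4': 0, 'USH2A': 380, 'RPGR': 440}
```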

9.
Ophthalmol Sci; 3(2): 100254, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36691594

ABSTRACT

Objective: To develop automated algorithms for the detection of posterior vitreous detachment (PVD) using OCT imaging. Design: Evaluation of a diagnostic test or technology. Subjects: Overall, 42 385 consecutive OCT images (865 volumetric OCT scans) obtained with the Heidelberg Spectralis from 865 eyes of 464 patients at an academic retina clinic between October 2020 and December 2021 were retrospectively reviewed. Methods: We developed a customized computer vision algorithm based on image filtering and edge detection to detect the posterior vitreous cortex for the determination of PVD status. A second deep learning (DL) image classification model based on convolutional neural networks and the ResNet-50 architecture was also trained to identify PVD status from OCT images. The training dataset consisted of 674 OCT volume scans (33 026 OCT images), while the validation testing set consisted of 73 OCT volume scans (3577 OCT images). Overall, 118 OCT volume scans (5782 OCT images) were used as a separate external testing dataset. Main Outcome Measures: Accuracy, sensitivity, specificity, F1-scores, and areas under the receiver operating characteristic curve (AUROCs) were measured to assess the performance of the automated algorithms. Results: Both the customized computer vision algorithm and the DL model were largely in agreement with the PVD status labeled by trained graders. The DL approach achieved an accuracy of 90.7% and an F1-score of 0.932, with a sensitivity of 100% and a specificity of 74.5%, for PVD detection from an OCT volume scan. The AUROC was 89% at the image level and 96% at the volume level for the DL model. The customized computer vision algorithm attained an accuracy of 89.5% and an F1-score of 0.912, with a sensitivity of 91.9% and a specificity of 86.1%, on the same task. Conclusions: Both the computer vision algorithm and the DL model applied to OCT imaging enabled reliable detection of PVD status, demonstrating the potential for OCT-based automated PVD status classification to assist with vitreoretinal surgical planning. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
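
The first algorithm relies on classical image filtering and edge detection; a generic OpenCV sketch of that family of operations (the filter and threshold parameters are illustrative, not the authors' values):

```python
import cv2
import numpy as np

def vitreous_edge_map(oct_bscan: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: denoise an OCT B-scan, then extract an edge
    map that a rule-based step could scan for the posterior vitreous
    cortex to decide PVD status."""
    blurred = cv2.GaussianBlur(oct_bscan, (5, 5), 1.5)
    return cv2.Canny(blurred, 30, 90)

# Usage:
# edges = vitreous_edge_map(cv2.imread("bscan.png", cv2.IMREAD_GRAYSCALE))
```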

10.
Heliyon; 9(1): e12945, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36699283

ABSTRACT

Rationale and objectives: Selecting regions of interest (ROIs) for left atrial appendage (LAA) filling defect assessment can be time-consuming and prone to subjectivity. This study aimed to develop and validate a novel artificial intelligence (AI), deep learning (DL)-based framework for automatic filling defect assessment on CT images of patients with clinical and subclinical atrial fibrillation (AF). Materials and methods: A total of 443,053 CT images were used for DL model development and testing. Images were analyzed by the AI framework and by expert cardiologists/radiologists. LAA segmentation performance was evaluated using the Dice coefficient. The agreement between manual and automatic LAA ROI selections was evaluated using intraclass correlation coefficient (ICC) analysis. Receiver operating characteristic (ROC) curve analysis was used to assess filling defects based on the computed LAA to ascending aorta Hounsfield unit (HU) ratios. Results: A total of 210 patients (Group 1: subclinical AF, n = 105; Group 2: clinical AF with stroke, n = 35; Group 3: AF for catheter ablation, n = 70) were enrolled. LAA volume segmentation achieved Dice scores of 0.931-0.945. LAA ROI selection demonstrated excellent agreement (ICC ≥0.895, p < 0.001) with manual selection on the test sets. The automatic framework achieved an excellent AUC score of 0.979 in filling defect assessment. The ROC-derived optimal HU ratio threshold for filling defect detection was 0.561. Conclusion: The novel AI-based framework could accurately segment the LAA region and select ROIs while effectively avoiding trabeculae for filling defect assessment, achieving close-to-expert performance. This technique may help preemptively detect the potential thromboembolic risk for AF patients.
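
Two quantities in this abstract are easy to make concrete: the Dice coefficient used for segmentation evaluation and the LAA-to-aorta HU-ratio rule with the reported 0.561 cutoff. A minimal sketch (array and variable names are placeholders):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def filling_defect(laa_hu: float, aorta_hu: float, cutoff: float = 0.561) -> bool:
    """Flag a filling defect when the LAA-to-ascending-aorta HU ratio
    falls below the ROC-derived cutoff reported in the abstract."""
    return (laa_hu / aorta_hu) < cutoff
```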

11.
J Clin Exp Hepatol; 13(1): 149-161, 2023.
Article in English | MEDLINE | ID: mdl-36647407

ABSTRACT

Artificial intelligence (AI) refers to computer-based algorithms designed to support or emulate human intelligence. AI in hepatology has shown tremendous promise for planning appropriate management and thereby improving treatment outcomes. The field is at a very early phase, with limited clinical use. AI tools such as machine learning, deep learning, and 'big data' are continuously evolving and are presently being applied to clinical and basic research. In this review, we summarize various AI applications in hepatology, their pitfalls, and AI's future implications. Different AI models and algorithms are under study, using clinical, laboratory, endoscopic, and imaging parameters to diagnose and manage liver diseases and mass lesions. AI has helped to reduce human error and improve treatment protocols. Further research and validation are required before future clinical use of AI in hepatology.

12.
Ophthalmol Sci; 3(1): 100222, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36325476

ABSTRACT

Purpose: Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design: Retrospective observational cohort study. Participants: All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from the Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods' respective data requirements, the CNN dataset included 54 373 samples from 7472 patients, and the RNN dataset included 24 430 samples from 1809 patients. Methods: The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models' performance in predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) or the last-observed VF (for the RNN) as its prediction. Main Outcome Measures: PMAE in predictions. Results: The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, close to the values reported in the original studies. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model. PMAE values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions: Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration be explicitly taken into account when developing and evaluating future deep learning models.
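
Both PMAE and the no-change baseline are simple to state precisely; a sketch assuming the standard 52-point 24-2 test pattern (other details are assumptions):

```python
import numpy as np

def pmae(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Pointwise mean absolute error in dB between visual fields,
    each of shape (n_fields, 52) for the 24-2 pattern."""
    return float(np.mean(np.abs(predicted - actual)))

def no_change_pmae(reference_vf: np.ndarray, future_vf: np.ndarray) -> float:
    """No-change baseline: predict the future VF as identical to the
    baseline VF (CNN comparison) or last-observed VF (RNN comparison)."""
    return pmae(reference_vf, future_vf)
```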

13.
Ophthalmol Sci; 3(1): 100240, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36561353

ABSTRACT

Objective: To demonstrate that deep learning (DL) methods can produce robust prediction of gene expression profile (GEP) in uveal melanoma (UM) based on digital cytopathology images. Design: Evaluation of a diagnostic test or technology. Subjects, Participants, and Controls: Deidentified smeared cytology slides, stained with hematoxylin and eosin, obtained from fine-needle aspirates of UM. Methods: Digital whole-slide images were generated from fine-needle aspiration biopsies of UM tumors that underwent GEP testing. A multistage DL system was developed with automatic region-of-interest (ROI) extraction from digital cytopathology images, an attention-based neural network, ROI feature aggregation, and slide-level data augmentation. Main Outcome Measures: The ability of our DL system to predict GEP at the slide (patient) level. Data were partitioned at the patient level (73% training; 27% testing). Results: In total, our study included 89 whole-slide images from 82 patients and 121 388 unique ROIs. The testing set included 24 slides from 24 patients (12 class 1 tumors; 12 class 2 tumors; 1 slide per patient). Our DL system for GEP prediction achieved an area under the receiver operating characteristic curve of 0.944, an accuracy of 91.7%, a sensitivity of 91.7%, and a specificity of 91.7% on a slide-level analysis. The incorporation of slide-level feature aggregation and data augmentation produced a more predictive DL model (P = 0.0031). Conclusions: Our current work established a complete pipeline for GEP prediction in UM tumors: from automatic ROI extraction from digital cytopathology whole-slide images to slide-level predictions. Our DL system demonstrated robust performance and, if validated prospectively, could serve as an image-based alternative to GEP testing.
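
Attention-based aggregation of many ROI feature vectors into a slide-level prediction is a standard multiple-instance learning construction; a minimal sketch (feature dimensions are assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Aggregate N ROI feature vectors into one slide-level prediction
    via learned attention weights (multiple-instance learning style)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(dim, 2)  # GEP class 1 vs. class 2

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, dim) for a single slide
        weights = torch.softmax(self.score(roi_feats), dim=0)  # (num_rois, 1)
        slide_embedding = (weights * roi_feats).sum(dim=0)     # (dim,)
        return self.classifier(slide_embedding)                # (2,)

logits = AttentionPooling()(torch.randn(300, 512))  # 300 ROIs -> 2 logits
```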

14.
Comput Struct Biotechnol J; 21: 158-167, 2023.
Article in English | MEDLINE | ID: mdl-36544468

ABSTRACT

While deep learning (DL) has brought a revolution to the protein structure prediction field, an important question remains: how can this revolution be translated into advances in structure-based drug discovery? Because the lessons from the recent GPCR Dock challenge were inconclusive, primarily due to the size of the dataset, in this work we further elaborated on 70 diverse GPCR complexes bound to either small molecules or peptides to investigate best-practice modeling and docking strategies for GPCR drug discovery. Our quantitative analysis shows that substantial improvements in docking and virtual screening have been made possible by the advances in DL-based protein structure prediction, relative to the results expected from the best combination of pre-DL tools. The success rate of docking against DL-based model structures approaches that of cross-docking against experimental structures, showing over 30% improvement compared with the best pre-DL protocols. This level of performance could be achieved only when two modeling points were handled properly: (1) correct functional-state modeling of the receptors and (2) receptor-flexible docking. The best-practice modeling strategies and the model confidence estimation metric suggested in this work may serve as a guideline for future computer-aided GPCR drug discovery.

15.
Comput Struct Biotechnol J; 21: 238-250, 2023.
Article in English | MEDLINE | ID: mdl-36544476

ABSTRACT

The process of designing biomolecules, in particular proteins, is witnessing a rapid change in available tooling and approaches, moving from design through physicochemical force fields to quickly producing plausible, complex sequences via end-to-end differentiable statistical models. To achieve conditional and controllable protein design, researchers at the interface of artificial intelligence and biology leverage advances in natural language processing (NLP) and computer vision techniques, coupled with advances in computing hardware, to learn patterns from growing biological databases, curated annotations thereof, or both. Once learned, these patterns can be used to provide novel insights into mechanistic biology and the design of biomolecules. However, navigating and understanding the practical applications of the many recent protein design tools is complex. To facilitate this, we (1) document recent advances in deep learning (DL)-assisted protein design from the last three years, (2) present a practical pipeline that allows one to go from de novo-generated sequences to their predicted properties and web-powered visualization within minutes, and (3) leverage it to suggest a generated protein sequence that might be used to engineer a biosynthetic gene cluster to produce a molecular glue-like compound. Lastly, we discuss challenges and highlight opportunities for the protein design field.

16.
Ophthalmol Sci; 3(1): 100233, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36545260

ABSTRACT

Purpose: To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS), to detect primary open-angle glaucoma (POAG) and to identify the salient areas of the photographs most important for each model's decision-making process. Design: Evaluation of a diagnostic technology. Subjects, Participants, and Controls: Overall, 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods: DeiT models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3), and Reading Center determinations based on discs (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on the OHTS and 5 external datasets. Main Outcome Measures: Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from the DeiT against 3 gradient-weighted class activation map strategies. Results: Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). DeiT AUROC was consistently higher than that of ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than in the ResNet-50 models. The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification, whereas the same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions: Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
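
Both architectures compared in this study are available off the shelf; a minimal sketch of instantiating them for 2-class POAG detection with the timm library (the model variant and head size are assumptions):

```python
import timm
import torch

deit = timm.create_model("deit_base_patch16_224", pretrained=True, num_classes=2)
resnet = timm.create_model("resnet50", pretrained=True, num_classes=2)

x = torch.randn(4, 3, 224, 224)  # batch of resized fundus photographs
print(deit(x).shape, resnet(x).shape)  # torch.Size([4, 2]) for each model
```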

17.
Ophthalmol Sci; 2(4): 100165, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36531583

ABSTRACT

Purpose: To evaluate the performance of a deep learning (DL) algorithm for retinopathy of prematurity (ROP) screening in Nepal and Mongolia. Design: Retrospective analysis of prospectively collected clinical data. Participants: Clinical information and fundus images were obtained from infants in 2 ROP screening programs in Nepal and Mongolia. Methods: Fundus images were obtained using the Forus 3nethra neo (Forus Health) in Nepal and the RetCam Portable (Natus Medical, Inc.) in Mongolia. The overall severity of ROP was determined from the medical record using the International Classification of ROP (ICROP). The presence of plus disease was determined independently in each image using a reference standard diagnosis. The Imaging and Informatics for ROP (i-ROP) DL algorithm was trained on images from the RetCam to classify plus disease and to assign a vascular severity score (VSS) from 1 through 9. Main Outcome Measures: Area under the receiver operating characteristic curve and area under the precision-recall curve for the presence of plus disease or type 1 ROP, and the association between VSS and ICROP disease category. Results: The prevalence of type 1 ROP was higher in Mongolia (14.0%) than in Nepal (2.2%; P < 0.001) in these datasets. In Mongolia (RetCam images), the area under the receiver operating characteristic curve for examination-level plus disease detection was 0.968, and the area under the precision-recall curve was 0.823. In Nepal (Forus images), these values were 0.999 and 0.993, respectively. The ROP VSS was associated with ICROP classification in both datasets (P < 0.001). At the population level, the median VSS was higher in Mongolia (2.7; interquartile range [IQR], 1.3-5.4) than in Nepal (1.9; IQR, 1.2-3.4; P < 0.001). Conclusions: These data provide preliminary evidence of the effectiveness of the i-ROP DL algorithm for ROP screening in neonatal populations in Nepal and Mongolia using multiple camera systems and are useful for consideration in future clinical implementation of artificial intelligence-based ROP screening in low- and middle-income countries.
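
The two reported evaluation metrics map directly onto standard scikit-learn calls; a sketch with placeholder labels and scores (average precision serves as the usual estimator of area under the precision-recall curve):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Placeholders: y_true = 1 if plus disease present; y_score = model output.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.6])

auroc = roc_auc_score(y_true, y_score)           # area under the ROC curve
aupr = average_precision_score(y_true, y_score)  # area under the PR curve
```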

18.
Inf Sci (N Y); 592: 389-401, 2022 May.
Article in English | MEDLINE | ID: mdl-36532848

ABSTRACT

Chest X-ray (CXR) imaging is a low-cost, easy-to-use imaging alternative that can be used to diagnose/screen pulmonary abnormalities due to infectious diseases: Covid-19, Pneumonia, and Tuberculosis (TB). Not limited to the binary decisions (with respect to healthy cases) reported in the state-of-the-art literature, we also consider non-healthy CXR screening using a lightweight deep neural network (DNN) with a reduced number of epochs and parameters. On three diverse, publicly accessible, and fully categorized datasets, for non-healthy versus healthy CXR screening, the proposed DNN produced the following accuracies: 99.87% on Covid-19 versus healthy, 99.55% on Pneumonia versus healthy, and 99.76% on TB versus healthy. When considering screening among the non-healthy classes, we obtained the following accuracies: 98.89% on Covid-19 versus Pneumonia, 98.99% on Covid-19 versus TB, and 100% on Pneumonia versus TB. To analyze precisely how well the proposed DNN worked, we compared it with well-known DNNs such as ResNet50, ResNet152V2, MobileNetV2, and InceptionV3. Our results are comparable with the current state of the art, and as the proposed network is lightweight, it could potentially be used for mass screening in resource-constrained regions.
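
The abstract's emphasis is on a small parameter budget; a sketch of what a lightweight CXR classifier in that spirit might look like (layer sizes are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LightCXRNet(nn.Module):
    """Small CNN for grayscale chest X-rays, e.g., (N, 1, 224, 224)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

print(sum(p.numel() for p in LightCXRNet().parameters()))  # ~1.3k parameters
```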

19.
Ophthalmol Sci; 2(3): 100180, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36245759

ABSTRACT

Objective: We aimed to develop a deep learning (DL)-based algorithm for early glaucoma detection based on color fundus photographs that provides information on retinal nerve fiber layer (RNFL) defects and thickness by learning the mapping between fundus photographs and spectral-domain OCT (SD-OCT) thickness maps. Design: Development and evaluation of an artificial intelligence detection tool. Subjects: Pretraining paired data of color fundus photographs and SD-OCT images from 189 healthy participants and 371 patients with early glaucoma were used. Methods: A variational autoencoder (VAE) network architecture was used for training, and the correlation between fundus photographs and RNFL thickness distribution was learned by the deep neural network. The reference standard was defined as a vertical cup-to-disc ratio of ≥0.7, other typical changes in glaucomatous optic neuropathy, and RNFL defects. Convergence indicates that the VAE has learned a distribution that enables the production of corresponding synthetic OCT scans. Main Outcome Measures: Similarly to wide-field OCT scanning, the proposed model can extract RNFL thickness analysis results. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to assess signal strength and structural similarity of the color fundus images converted to an RNFL thickness distribution model, and the differences between the model-generated and original images were quantified. Results: We developed and validated a novel DL-based algorithm that extracts thickness information from the color space of fundus images, similarly to OCT images, and uses this information to regenerate RNFL thickness distribution images. The generated thickness map was sufficient for clinical glaucoma detection, and the generated images were similar to the ground truth (PSNR: 19.31 decibels; SSIM: 0.44). The inference results were similar to the original OCT-generated images in terms of the ability to predict RNFL thickness distribution. Conclusions: The proposed technique may aid clinicians in early glaucoma detection, especially when only color fundus photographs are available.
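
The two image-similarity metrics reported here (PSNR and SSIM) are available in scikit-image; a sketch comparing a generated thickness map against its ground truth (the arrays are synthetic placeholders):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256))  # placeholder OCT-derived map
generated = ground_truth + 0.05 * rng.standard_normal((256, 256))

psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
ssim = structural_similarity(ground_truth, generated, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```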

20.
Ophthalmol Sci; 2(2): 100127, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36249690

ABSTRACT

Purpose: Advances in artificial intelligence have produced a few predictive models in glaucoma, including a logistic regression model predicting glaucoma progression to surgery. However, uncertainty exists regarding how to integrate the wealth of information in free-text clinical notes. The purpose of this study was to predict glaucoma progression requiring surgery using deep learning (DL) approaches on data from electronic health records (EHRs), including features from structured clinical data and from natural language processing of clinical free-text notes. Design: Development of a DL predictive model in an observational cohort. Participants: Adult patients with glaucoma at a single center treated from 2008 through 2020. Methods: Ophthalmology clinical notes of patients with glaucoma were identified from EHRs. Available structured data included patient demographic information, diagnosis codes, prior surgeries, and clinical information including intraocular pressure, visual acuity, and central corneal thickness. In addition, words from patients' first 120 days of notes were mapped to ophthalmology domain-specific neural word embeddings trained on PubMed ophthalmology abstracts. Word embeddings and structured clinical data were used as inputs to DL models to predict subsequent glaucoma surgery. Main Outcome Measures: Evaluation metrics included area under the receiver operating characteristic curve (AUC) and F1 score (the harmonic mean of positive predictive value and sensitivity) on a held-out test set. Results: Seven hundred forty-eight of 4512 patients with glaucoma underwent surgery. The model that incorporated both structured clinical features and input features from clinical notes achieved an AUC of 73% and an F1 of 40%, compared with only structured clinical features (AUC, 66%; F1, 34%) and only clinical free-text features (AUC, 70%; F1, 42%). All models outperformed predictions from a glaucoma specialist's review of clinical notes (F1, 29.5%). Conclusions: We can successfully predict which patients with glaucoma will need surgery using DL models on unstructured EHR text. Models incorporating free-text data outperformed those using only structured inputs. Future predictive models using EHRs should make use of information from within clinical free-text notes to improve predictive performance. Additional research is needed to investigate optimal methods of incorporating imaging data into future predictive models as well.
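
One simple way to fuse averaged note embeddings with structured clinical features, in the spirit of the models described (all dimensions and names are assumptions, not the study's implementation):

```python
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    """Concatenate an averaged word-embedding vector for clinical notes
    with a structured-feature vector, then predict surgery risk."""
    def __init__(self, embed_dim: int = 200, structured_dim: int = 20):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + structured_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for "progresses to surgery"
        )

    def forward(self, note_embed: torch.Tensor, structured: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([note_embed, structured], dim=-1))

risk_logit = FusionPredictor()(torch.randn(4, 200), torch.randn(4, 20))
```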
