Results 1 - 20 of 59
1.
Front Physiol ; 15: 1412985, 2024.
Article in English | MEDLINE | ID: mdl-39156824

ABSTRACT

In recent years, deep learning-based semantic segmentation has been widely applied to medical image segmentation, leading to the development of numerous models. Convolutional neural networks (CNNs) have achieved milestone results in medical image analysis. In particular, deep neural networks based on U-shaped architectures with skip connections have been extensively employed across medical imaging tasks. U-Net, characterized by its encoder-decoder architecture, pioneering skip connections, and multi-scale features, has served as the foundational architecture for many variants. However, U-Net cannot fully exploit the encoder's information in the decoder. U-Net++ links intermediate feature maps of different dimensions through nested, dense skip connections, but this only partially alleviates the problem and greatly increases the number of model parameters. In this paper, a novel BFNet is proposed that feeds all feature maps from the encoder into every layer of the decoder and reconnects them with the corresponding encoder layer. This allows the decoder to better learn the positional information of segmentation targets and improves the learning of boundary information and abstract semantics in the corresponding encoder layer. Our proposed method improves accuracy by 1.4 percentage points. Besides enhancing accuracy, BFNet also reduces the number of network parameters. These advantages are demonstrated on our dataset. We also discuss how different loss functions influence the model and some possible improvements.
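As an aside, the channel-count consequence of feeding every encoder level into a decoder layer (versus a plain U-Net's single skip connection) can be sketched in a few lines. The channel sizes below are illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical channel bookkeeping for a plain U-Net decoder versus a
# BFNet-like decoder that receives feature maps from *every* encoder level.
# ENCODER_CHANNELS is an assumed, illustrative configuration.

ENCODER_CHANNELS = [64, 128, 256, 512]  # encoder level i -> channel count


def unet_decoder_in_channels(level: int) -> int:
    """Plain U-Net: a decoder level concatenates the upsampled deeper
    features with one skip connection from the matching encoder level."""
    deeper = ENCODER_CHANNELS[level + 1] if level + 1 < len(ENCODER_CHANNELS) else 0
    return deeper + ENCODER_CHANNELS[level]


def bfnet_decoder_in_channels(level: int) -> int:
    """BFNet-style: a decoder level sees feature maps from all encoder
    levels (each resampled to the current resolution) plus the upsampled path."""
    deeper = ENCODER_CHANNELS[level + 1] if level + 1 < len(ENCODER_CHANNELS) else 0
    return deeper + sum(ENCODER_CHANNELS)


if __name__ == "__main__":
    for lvl in range(3):
        print(lvl, unet_decoder_in_channels(lvl), bfnet_decoder_in_channels(lvl))
```

The sketch only tracks tensor widths; it shows why such a design must compress the concatenated maps (e.g. with 1x1 convolutions) to keep the parameter count down.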

2.
Front Plant Sci ; 15: 1349209, 2024.
Article in English | MEDLINE | ID: mdl-38993936

ABSTRACT

Counting nematodes is a labor-intensive and time-consuming task, yet it is a pivotal step in various quantitative nematological studies: preparation of initial and final population densities in pot, micro-plot, and field trials for different management-related objectives, including sampling and locating nematode infestation foci. Nematologists have long battled with the complexities of nematode counting, leading to several research initiatives aimed at automating this process. However, these endeavors have primarily focused on identifying single-class objects within individual images. To enhance the practicality of this technology, there is a pressing need for an algorithm that can not only detect but also classify multiple classes of objects concurrently. This study tackles this challenge by developing a user-friendly graphical user interface (GUI) that comprises multiple deep learning algorithms, allowing simultaneous recognition and categorization of nematode eggs and second-stage juveniles of Meloidogyne spp. In total, 650 images of eggs and 1339 images of juveniles were generated using two distinct imaging systems, yielding 8655 eggs and 4742 Meloidogyne juveniles annotated with bounding boxes and segmentation masks, respectively. The deep learning models were developed by leveraging the convolutional neural network (CNN) architecture YOLOv8x. Our results showed that the models correctly identified eggs as eggs and Meloidogyne juveniles as Meloidogyne juveniles in 94% and 93% of instances, respectively. The models demonstrated a correlation coefficient higher than 0.70 between predictions and observations on unseen images. Our study showcases the potential utility of these models in practical applications. The GUI is made freely available to the public through the author's GitHub repository (https://github.com/bresilla/nematode_counting).
While this study currently focuses on one genus, there are plans to expand the GUI's capabilities to include other economically significant genera of plant parasitic nematodes. Achieving these objectives, including enhancing the models' accuracy on different imaging systems, may necessitate collaboration among multiple nematology teams and laboratories, rather than being the work of a single entity. With the increasing interest among nematologists in harnessing machine learning, the authors are confident in the potential development of a universal automated nematode counting system accessible to all. This paper aims to serve as a framework and catalyst for initiating global collaboration toward this important goal.
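The correlation between model predictions and observations reported above is a standard Pearson coefficient; it can be sketched as follows (the counts below are made-up illustrative data, not the study's results):

```python
# Pearson correlation between predicted and observed counts,
# computed from first principles with the standard library.
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Illustrative per-image counts (hypothetical, not from the paper):
predicted = [12, 30, 45, 52, 70]
observed = [10, 33, 40, 55, 75]
```

Calling `pearson_r(predicted, observed)` on these toy counts gives a value well above the 0.70 threshold the study reports.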

3.
Cogn Neurodyn ; 18(3): 907-918, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826653

ABSTRACT

EEG is the most common test for diagnosing seizures, as it provides information about the electrical activity of the brain. Automatic seizure detection is a challenging task because conventional methods suffer from inefficient feature selection, high computational complexity and time, and limited accuracy. The situation calls for a practical framework that detects seizures more effectively. Hence, this study proposes a modified Blackman band-pass filter with greedy particle swarm optimization (MBBF-GPSO) combined with a convolutional neural network (CNN) for effective seizure detection. Unwanted signals (noise) are eliminated by the MBBF, which possesses better stopband attenuation, and only the optimized features are selected using GPSO. To enhance GPSO's ability to obtain optimal solutions, time- and frequency-domain features are extracted to complement it. Through this process, optimized features are obtained by MBBF-GPSO. A CNN layer is then employed to produce the classification output using the objective function; the CNN is chosen for its ability to automatically learn distinct features for each class. These advantages enable the proposed system to achieve better seizure-detection performance, as confirmed through performance and comparative analyses.
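A Blackman-windowed FIR band-pass filter of the kind the MBBF stage builds on can be sketched as a classic windowed-sinc design; the tap count and cutoff frequencies below are illustrative assumptions, not the paper's parameters:

```python
# Windowed-sinc FIR band-pass filter shaped by a Blackman window.
# Frequencies are normalised (cycles/sample, 0..0.5).
import math


def blackman(n, N):
    """Blackman window coefficient at sample n of an N-tap window."""
    return (0.42
            - 0.5 * math.cos(2 * math.pi * n / (N - 1))
            + 0.08 * math.cos(4 * math.pi * n / (N - 1)))


def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)


def bandpass_fir(num_taps, f_lo, f_hi):
    """Band-pass taps: difference of two ideal low-pass (sinc) filters,
    multiplied by the Blackman window to tame ripple."""
    m = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        k = n - m
        ideal = 2 * f_hi * sinc(2 * f_hi * k) - 2 * f_lo * sinc(2 * f_lo * k)
        taps.append(ideal * blackman(n, num_taps))
    return taps


def gain_at(taps, f):
    """Magnitude of the filter's frequency response at normalised frequency f."""
    re = sum(t * math.cos(2 * math.pi * f * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(2 * math.pi * f * n) for n, t in enumerate(taps))
    return math.hypot(re, im)
```

With, say, `bandpass_fir(201, 0.05, 0.15)`, the response is near unity mid-band and deeply attenuated at DC and in the upper stopband, which is the property the abstract highlights for noise removal.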

5.
Front Neurol ; 14: 1217796, 2023.
Article in English | MEDLINE | ID: mdl-37941573

ABSTRACT

Background: Rapid and accurate triage of acute ischemic stroke (AIS) is essential for early revascularization and improved patient outcomes. Response to acute reperfusion therapies varies significantly with the patient-specific cerebrovascular anatomy that governs cerebral blood flow. We present an end-to-end machine learning approach for automatic stroke triage. Methods: Employing a validated convolutional neural network (CNN) segmentation model for image processing, we extract each patient's cerebrovasculature and its morphological features from baseline non-invasive angiography scans. These features are used to automatically detect the presence and site of occlusion and, for the first time, to estimate collateral circulation without manual intervention. We then use the extracted cerebrovascular features, along with commonly used clinical and imaging parameters, to predict the 90-day functional outcome for each patient. Results: The CNN model achieved a segmentation accuracy of 94% based on the Dice similarity coefficient (DSC). The automatic stroke detection algorithm had a sensitivity and specificity of 92% and 94%, respectively. The models for occlusion site detection and automatic collateral grading reached 96% and 87.2% accuracy, respectively. Incorporating the automatically extracted cerebrovascular features significantly improved the 90-day outcome prediction accuracy from 0.63 to 0.83. Conclusion: The fast, automatic, and comprehensive model presented here can improve stroke diagnosis, aid collateral assessment, and enhance prognostication for treatment decisions using cerebrovascular morphology.
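The Dice similarity coefficient (DSC) used above to score segmentation overlap can be sketched in a few lines (binary masks are represented here as flat 0/1 lists for simplicity):

```python
# Dice similarity coefficient between two binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
def dice(mask_a, mask_b):
    """Dice score between two equal-length binary masks (flat lists of 0/1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2 * inter / total if total else 1.0
```

A DSC of 94% therefore means the predicted and reference vessel masks overlap almost completely relative to their combined size.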

6.
Network ; 34(4): 250-281, 2023.
Article in English | MEDLINE | ID: mdl-37534974

ABSTRACT

The rapid advancement of technologies such as stream processing, deep learning, and artificial intelligence plays a prominent and vital role in heart rate prediction models. However, existing methods cannot handle high-dimensional datasets or exploit deep feature learning to improve performance. Therefore, this work proposes a real-time heart rate prediction model using K-nearest neighbours (KNN) together with principal component analysis (PCA) and a weighted random forest for feature fusion (the KPCA-WRF approach), alongside a deep CNN feature-learning framework. Feature selection from the fused features was optimized by ant colony optimization (ACO) and particle swarm optimization (PSO) to enhance the fused features selected from the deep CNN. The optimized features were then reduced to a low-dimensional representation using the PCA algorithm. Salient heart rate features are identified by capturing the nearest similar data points with the KNN algorithm. The fused features were then classified to aid the training process, with weights assigned to the tuned hyperparameters (feature matrix forms). The weighted feature representations are passed through the random forest algorithm in K-fold cross-validation iterations.
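The KNN step, finding the nearest similar data points and voting on a label, can be sketched as follows (the training points, labels, and k are illustrative, not the study's data):

```python
# Minimal K-nearest-neighbours classifier: majority vote among the
# k training points closest (Euclidean distance) to the query.
import math
from collections import Counter


def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours of query."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]


# Illustrative 2-D feature points with hypothetical labels:
train = [((0, 0), "low"), ((0, 1), "low"),
         ((5, 5), "high"), ((5, 6), "high"), ((6, 5), "high")]
```

In the paper's pipeline the feature vectors would be the PCA-reduced fused features rather than raw 2-D points.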


Subject(s)
Artificial Intelligence, Support Vector Machine, Heart Rate, Algorithms, Machine Learning
7.
JHEP Rep ; 5(4): 100664, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36908748

ABSTRACT

Background & Aims: Patterns of liver HBV antigen expression have been described but not quantified at single-cell resolution. We applied quantitative techniques to liver biopsies from individuals with chronic hepatitis B and evaluated sampling heterogeneity, effects of disease stage and nucleos(t)ide (NUC) treatment, and correlations between liver and peripheral viral biomarkers. Methods: Hepatocytes positive for HBV core and HBsAg were quantified using a novel four-plex immunofluorescence assay and image analysis. Biopsies were analysed from HBeAg-positive (n = 39) and HBeAg-negative (n = 75) participants before and after NUC treatment. To evaluate sampling effects, duplicate biopsies collected at the same time point were compared. Serum or plasma samples were evaluated for levels of HBV DNA, HBsAg, hepatitis B core-related antigen (HBcrAg), and HBV RNA. Results: Diffusely distributed individual HBV core+ cells and foci of HBsAg+ cells were the most common staining patterns. Hepatocytes positive for both HBV core and HBsAg were rare. Paired biopsies revealed large local variation in HBV staining within participants, which was confirmed in a large liver resection. NUC treatment was associated with a >100-fold lower median frequency of HBV core+ cells in HBeAg-positive and HBeAg-negative participants, whereas reductions in HBsAg+ cells were not statistically significant. The frequency of HBV core+ hepatocytes was lower in HBeAg-negative participants than in HBeAg-positive participants at all time points evaluated. Total HBV+ hepatocyte burden correlated with HBcrAg, HBV DNA, and HBV RNA only in baseline HBeAg-positive samples. Conclusions: Reductions in HBV core+ hepatocytes were associated with HBeAg-negative status and NUC treatment. Variation in HBV positivity within individual livers was extensive. Correlations between the liver and the periphery were found only between biomarkers likely indicative of cccDNA (HBV core+ and HBcrAg, HBV DNA, and RNA).
Impact and Implications: HBV infects liver hepatocyte cells, and its genome can exist in two forms that express different sets of viral proteins: a circular genome called cccDNA that can express all viral proteins, including the HBV core and HBsAg proteins, or a linear fragment that inserts into the host genome typically to express HBsAg, but not HBV core. We used new techniques to determine the percentage of hepatocytes expressing the HBV core and HBsAg proteins in a large set of liver biopsies. We find that abundance and patterns of expression differ across patient groups and even within a single liver and that NUC treatment greatly reduces the number of core-expressing hepatocytes.

8.
JACC Asia ; 3(1): 1-14, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36873752

ABSTRACT

Percutaneous coronary intervention has become a standard treatment strategy for patients with coronary artery disease, with continuous, ebullient progress in technology and techniques. The application of artificial intelligence, and deep learning in particular, is currently boosting the development of interventional solutions, improving the efficiency and objectivity of diagnosis and treatment. The ever-growing amount of data and computing power, together with cutting-edge algorithms, paves the way for the integration of deep learning into clinical practice, which has revolutionized the interventional workflow in image processing, interpretation, and navigation. This review discusses the development of deep learning algorithms and their corresponding evaluation metrics, together with their clinical applications. Advanced deep learning algorithms create new opportunities for precise diagnosis and tailored treatment, with a high degree of automation, reduced radiation, and enhanced risk stratification. Generalization, interpretability, and regulatory issues remain challenges that need to be addressed through joint efforts from the multidisciplinary community.

9.
Clin Transl Radiat Oncol ; 39: 100590, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36935854

ABSTRACT

Head and neck radiotherapy induces significant toxicity, and its efficacy and tolerance vary widely across patients. Advancements in radiotherapy delivery techniques, along with the increased quality and frequency of image guidance, offer a unique opportunity to individualize radiotherapy based on imaging biomarkers, with the aim of improving radiation efficacy while reducing its toxicity. Various artificial intelligence models integrating clinical data and radiomics have shown encouraging results in predicting toxicity and cancer control outcomes in head and neck cancer radiotherapy. Clinical implementation of these models could enable individualized, risk-based therapeutic decision making, but the reliability of the current studies is limited. Understanding, validating, and expanding these models to larger multi-institutional datasets, and testing them in the context of clinical trials, are needed to ensure safe clinical implementation. This review summarizes the current state of the art of machine learning models for the prediction of head and neck cancer radiotherapy outcomes.

10.
Ophthalmol Sci ; 3(2): 100254, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36691594

ABSTRACT

Objective: To develop automated algorithms for the detection of posterior vitreous detachment (PVD) using OCT imaging. Design: Evaluation of a diagnostic test or technology. Subjects: Overall, 42 385 consecutive OCT images (865 volumetric OCT scans) obtained with Heidelberg Spectralis from 865 eyes from 464 patients at an academic retina clinic between October 2020 and December 2021 were retrospectively reviewed. Methods: We developed a customized computer vision algorithm based on image filtering and edge detection to detect the posterior vitreous cortex for the determination of PVD status. A second deep learning (DL) image classification model based on convolutional neural networks and the ResNet-50 architecture was also trained to identify PVD status from OCT images. The training dataset consisted of 674 OCT volume scans (33 026 OCT images), while the validation testing set consisted of 73 OCT volume scans (3577 OCT images). Overall, 118 OCT volume scans (5782 OCT images) were used as a separate external testing dataset. Main Outcome Measures: Accuracy, sensitivity, specificity, F1-scores, and areas under the receiver operating characteristic curve (AUROCs) were measured to assess the performance of the automated algorithms. Results: Both the customized computer vision algorithm and DL model results were largely in agreement with the PVD status labeled by trained graders. The DL approach achieved an accuracy of 90.7% and an F1-score of 0.932 with a sensitivity of 100% and a specificity of 74.5% for PVD detection from an OCT volume scan. The AUROC was 89% at the image level and 96% at the volume level for the DL model. The customized computer vision algorithm attained an accuracy of 89.5% and an F1-score of 0.912 with a sensitivity of 91.9% and a specificity of 86.1% on the same task.
Conclusions: Both the computer vision algorithm and the DL model applied on OCT imaging enabled reliable detection of PVD status, demonstrating the potential for OCT-based automated PVD status classification to assist with vitreoretinal surgical planning. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
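The sensitivity, specificity, and F1 figures above all derive from a binary confusion matrix; a minimal sketch (labels here are hypothetical, with 1 meaning PVD present):

```python
# Sensitivity, specificity, and F1 from binary ground-truth and
# predicted labels via the four confusion-matrix cells.
def classification_metrics(y_true, y_pred):
    """Returns (sensitivity, specificity, F1) for binary 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)   # recall on positives
    specificity = tn / (tn + fp)   # recall on negatives
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```

Note how the DL model's 100% sensitivity / 74.5% specificity pattern corresponds to zero false negatives but a sizeable false-positive count in this matrix.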

11.
Heliyon ; 9(1): e12802, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36704286

ABSTRACT

Despite their stochastic and uncertain nature, wind and solar resources are the most abundant energy resources used in the development of microgrid systems. In microgrid systems and distribution networks, the uncertainty of both solar and wind resources causes power quality and system stability issues. This randomness is managed through the careful development of a power prediction model. Fuzzy-based solar PV and wind prediction models can manage this randomness and uncertainty more efficiently. However, this method has several drawbacks: its performance is limited when the historical wind and solar datasets are very large, and it involves many membership functions for the fuzzy input and output variables as well as multiple fuzzy rules. The hybrid fuzzy-PSO intelligent prediction approach overcomes the fuzzy system's limitations and hence improves the prediction model's performance. The fuzzy-PSO hybrid forecast model is developed in MATLAB using a particle swarm optimization (PSO) implementation from the Global Optimization Toolbox. In this paper, an error correction factor (ECF), which depends on the validation and forecast data of both the wind and solar prediction models, is introduced as a new fuzzy input variable to improve prediction accuracy. The impact of the ECF is observed in fuzzy, fuzzy-PSO, and fuzzy-GA wind and solar PV power forecasting models. The hybrid fuzzy-PSO prediction model of wind and solar power generation is substantially more accurate than the fuzzy and fuzzy-GA forecasting models. The rest of this paper is organized as follows: Section II analyses the raw solar and wind resource data; Section III formulates the fuzzy-PSO prediction model; Section IV presents the results and discussion; Section V concludes. References and abbreviations are presented at the end of the paper.
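A minimal particle swarm optimiser of the kind used to tune the fuzzy model can be sketched as follows; the swarm size, inertia/acceleration constants, and the sphere test function are illustrative assumptions, not the paper's setup:

```python
# Minimal global-best particle swarm optimisation (PSO).
import random


def pso_minimize(f, dim, iters=200, swarm=20, seed=0):
    """Minimise f over R^dim with a basic global-best PSO.
    Returns (best_position, best_value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val


sphere = lambda p: sum(x * x for x in p)  # toy objective with minimum 0 at origin
```

In the paper, the objective would instead score forecast error over the historical data, with particle positions encoding the fuzzy membership-function parameters.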

12.
JID Innov ; 3(1): 100150, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36655135

ABSTRACT

Artificial intelligence (AI) has recently made great advances in image classification and malignancy prediction in the field of dermatology. However, understanding the applicability of AI in clinical dermatology practice remains challenging owing to the variability of models, image data, database characteristics, and variable outcome metrics. This systematic review aims to provide a comprehensive overview of dermatology literature using convolutional neural networks. Furthermore, the review summarizes the current landscape of image datasets, transfer learning approaches, challenges, and limitations within current AI literature and current regulatory pathways for approval of models as clinical decision support tools.

13.
Comput Struct Biotechnol J ; 21: 644-654, 2023.
Article in English | MEDLINE | ID: mdl-36659917

ABSTRACT

N6-methyladenine (6mA) plays a critical role in various epigenetic processes, including DNA replication, DNA repair, silencing, and transcription, and in diseases such as cancer. To understand such epigenetic mechanisms, 6mA has been detected by high-throughput technologies on a genome-wide scale at single-base resolution, together with conventional methods such as immunoprecipitation, mass spectrometry, and capillary electrophoresis, but these experimental approaches are time-consuming and laborious. To address these problems, we have developed a CNN-based 6mA site predictor, named CNN6mA, which proposes two new architectures: a position-specific 1-D convolutional layer and a cross-interactive network. In the position-specific 1-D convolutional layer, position-specific filters with different window sizes are applied to an inquiry sequence instead of sharing the same filters over all positions, in order to extract position-specific features at different levels. The cross-interactive network explores the relationships between all the nucleotide patterns within the inquiry sequence. Consequently, CNN6mA outperformed the existing state-of-the-art models in many species and produced a contribution score vector that intelligibly interprets the prediction mechanism. The source code and web application for CNN6mA are freely accessible at https://github.com/kuratahiroyuki/CNN6mA.git and http://kurata35.bio.kyutech.ac.jp/CNN6mA/, respectively.
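The contrast between an ordinary shared-filter 1-D convolution and the position-specific variant described above can be sketched numerically (the toy kernels below are illustrative, not trained weights):

```python
# Shared-filter versus position-specific 1-D convolution (valid padding,
# stride 1) over a numerically encoded sequence.
def shared_conv1d(seq, kernel):
    """Ordinary 1-D convolution: the same kernel slides over every position."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]


def position_specific_conv1d(seq, kernels):
    """CNN6mA-style idea: a distinct kernel per output position, so each
    position can learn its own local pattern. len(kernels) must equal
    the number of output positions (len(seq) - k + 1)."""
    k = len(kernels[0])
    return [sum(seq[i + j] * kernels[i][j] for j in range(k))
            for i in range(len(seq) - k + 1)]
```

For example, with `seq = [1, 2, 3, 4]` a shared kernel `[1, 0]` produces the same linear pattern at every position, while `[[1, 0], [0, 1], [1, 1]]` lets each output position respond differently.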

14.
J Clin Exp Hepatol ; 13(1): 149-161, 2023.
Article in English | MEDLINE | ID: mdl-36647407

ABSTRACT

Artificial intelligence (AI) is the computer-mediated design of algorithms to support or mimic human intelligence. AI in hepatology has shown tremendous promise for planning appropriate management and thereby improving treatment outcomes. The field is in a very early phase with limited clinical use. AI tools such as machine learning, deep learning, and 'big data' are in a continuous phase of evolution, presently being applied in clinical and basic research. In this review, we summarize various AI applications in hepatology, their pitfalls, and AI's future implications. Different AI models and algorithms are under study, using clinical, laboratory, endoscopic, and imaging parameters to diagnose and manage liver diseases and mass lesions. AI has helped to reduce human error and improve treatment protocols. Further research and validation are required for the future use of AI in hepatology.

15.
Ophthalmol Sci ; 3(1): 100222, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36325476

ABSTRACT

Purpose: Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design: Retrospective observational cohort study. Participants: All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods' respective needs, the CNN data set included 54 373 samples from 7472 patients, and the RNN data set included 24 430 samples from 1809 patients. Methods: The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models' performances on predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) and the last-observed VF (for the RNN) for its prediction. Main Outcome Measures: PMAE in predictions. Results: The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, which were close to the original studies' reported values. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model. 
Pointwise mean absolute error values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions: Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration be explicitly taken into account when developing and evaluating future deep learning models.
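Pointwise mean absolute error (PMAE) as used above is simply the mean absolute difference over the VF test points, in decibels; a minimal sketch with hypothetical sensitivity values:

```python
# Pointwise mean absolute error between a predicted and an observed
# visual field, each a flat list of pointwise sensitivities (dB).
def pmae(predicted_vf, actual_vf):
    """Mean of |predicted - actual| over all VF test points."""
    return sum(abs(p - a) for p, a in zip(predicted_vf, actual_vf)) / len(actual_vf)
```

Because PMAE averages over all points, a model that simply repeats a stable baseline can score well overall while missing localized progression, which is exactly the bias the study reports.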

16.
J Bus Res ; 156: 113480, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36506475

ABSTRACT

Vaccination offers health, economic, and social benefits. However, three major issues (vaccine quality, demand forecasting, and trust among stakeholders) persist in the vaccine supply chain (VSC), leading to inefficiencies. The COVID-19 pandemic has exacerbated weaknesses in the VSC while presenting opportunities to apply digital technologies to manage it. For the first time, this study establishes an intelligent VSC management system that provides decision support for VSC management during the COVID-19 pandemic. The system combines blockchain, the internet of things (IoT), and machine learning, which together effectively address the three issues in the VSC. The transparency of blockchain ensures trust among stakeholders. Real-time monitoring of vaccine status by the IoT ensures vaccine quality. Machine learning predicts vaccine demand and conducts sentiment analysis on vaccine reviews to help companies improve vaccine quality. The study also discusses the implications for the management of supply chains, businesses, and government.

17.
Soft comput ; 27(11): 7513-7523, 2023.
Article in English | MEDLINE | ID: mdl-36475038

ABSTRACT

The outbreak of coronavirus disease 2019 (COVID-19) occurred at the end of 2019, and it continued to be a source of misery for millions of people and companies well into 2020. As the globe recovers from the epidemic and attempts to return to a level of normalcy, there is a surge of concern among all persons, especially those who wish to resume in-person activities. Studies show that wearing a face mask greatly decreases the likelihood of viral transmission and provides a sense of security. However, manually tracking compliance with this regulation is not feasible; technology is key here. We present a deep learning-based system that can detect instances of improper face mask use. Our system uses a dual-stage convolutional neural network architecture to recognize masked and unmasked faces. This will aid in tracking safety breaches, promoting face mask use, and maintaining a safe working environment. In this paper, we propose a variant of a multi-face detection model with the potential to target a group of people and identify whether or not each person is wearing a mask.

18.
Comput Struct Biotechnol J ; 21: 238-250, 2023.
Article in English | MEDLINE | ID: mdl-36544476

ABSTRACT

The process of designing biomolecules, in particular proteins, is witnessing a rapid change in available tooling and approaches, moving from design through physicochemical force fields to producing plausible, complex sequences fast via end-to-end differentiable statistical models. To achieve conditional and controllable protein design, researchers at the interface of artificial intelligence and biology leverage advances in natural language processing (NLP) and computer vision techniques, coupled with advances in computing hardware, to learn patterns from growing biological databases, curated annotations thereof, or both. Once learned, these patterns can be used to provide novel insights into mechanistic biology and the design of biomolecules. However, navigating and understanding the practical applications of the many recent protein design tools is complex. To facilitate this, we 1) document recent advances in deep learning (DL) assisted protein design from the last three years, 2) present a practical pipeline that allows one to go from de novo-generated sequences to their predicted properties and web-powered visualization within minutes, and 3) leverage it to suggest a generated protein sequence which might be used to engineer a biosynthetic gene cluster to produce a molecular glue-like compound. Lastly, we discuss challenges and highlight opportunities for the protein design field.

19.
Ophthalmol Sci ; 3(1): 100233, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36545260

ABSTRACT

Purpose: To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS), to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design: Evaluation of a diagnostic technology. Subjects, Participants, and Controls: Overall 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods: Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3), and Reading Center determinations based on discs (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and 5 external datasets. Main Outcome Measures: Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results: Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than in the ResNet-50 models.
The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions: Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
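AUROC as reported above equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counting half); a minimal sketch with hypothetical scores:

```python
# AUROC via the Mann-Whitney interpretation: fraction of
# (positive, negative) pairs where the positive outscores the negative.
def auroc(scores, labels):
    """AUROC for binary labels (1 = positive) and real-valued scores."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.91 therefore means a glaucomatous photograph outscores a healthy one in 91% of such pairs, independent of any single decision threshold.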

20.
Ophthalmol Sci ; 2(4): 100197, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36531577

ABSTRACT

Purpose: A deep learning model was developed to detect nonexudative macular neovascularization (neMNV) using OCT B-scans. Design: Retrospective review of a prospective, observational study. Participants: Normal control eyes and patients with age-related macular degeneration (AMD) with and without neMNV. Methods: Swept-source OCT angiography (SS-OCTA) imaging (PLEX Elite 9000, Carl Zeiss Meditec, Inc) was performed using the 6 × 6-mm scan pattern. Individual B-scans were annotated to distinguish between drusen and the double-layer sign (DLS) associated with the neMNV. The machine learning model was tested on a dataset graded by humans, and model performance was compared with the human graders. Main Outcome Measures: Intersection over Union (IoU) score was measured to evaluate segmentation network performance. Area under the receiver operating characteristic curve values, sensitivity, specificity, and positive predictive value (PPV) and negative predictive value (NPV) were measured to assess the performance of the final classification performance. Chance-corrected agreement between the algorithm and the human grader determinations was measured with Cohen's kappa. Results: A total of 251 eyes from 210 patients, including 182 eyes with DLS and 115 eyes with drusen, were used for model training. Of 125 500 B-scans, 6879 B-scans were manually annotated. A vision transformer segmentation model was built to extract DLS and drusen from B-scans. The extracted prediction masks from all B-scans in a volume were projected to an en face image, and an eye-level projection map was obtained for each eye. A binary classification algorithm was established to identify eyes with neMNV from the projection map. The algorithm achieved 82%, 90%, 79%, and 91% sensitivity, specificity, PPV, and NPV, respectively, on a separate test set of 100 eyes that were evaluated by human graders in a previous study. 
The area under the curve value was calculated as 0.91 (95% confidence interval, 0.85-0.98). The results of the algorithm showed excellent agreement with the senior human grader (kappa = 0.83, P < 0.001) and moderate agreement with the junior grader consensus (kappa = 0.54, P < 0.001). Conclusions: Our network (code is available at https://github.com/uw-biomedical-ml/double_layer_vit) was able to detect the presence of neMNV from structural B-scans alone by applying a purely transformer-based model.
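The Intersection-over-Union (IoU) score used above to evaluate the segmentation network can be sketched in a few lines (binary masks represented as flat 0/1 lists):

```python
# Intersection over Union between two binary masks:
# IoU = |A ∩ B| / |A ∪ B|.
def iou(mask_a, mask_b):
    """IoU between two equal-length binary masks (flat lists of 0/1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

IoU penalizes both missed and spurious pixels through the union term, which makes it stricter than the Dice coefficient for the same overlap.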
