Results 1 - 20 of 55
1.
JMIR Form Res ; 8: e59914, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293049

ABSTRACT

BACKGROUND: Labeling color fundus photos (CFP) is an important step in the development of artificial intelligence screening algorithms for the detection of diabetic retinopathy (DR). Most studies use the International Classification of Diabetic Retinopathy (ICDR) to assign labels to CFP, plus the presence or absence of macular edema (ME). Images can be grouped as referable or nonreferable according to these classifications. There is little guidance in the literature about how to collect and use metadata as a part of the CFP labeling process. OBJECTIVE: This study aimed to improve the quality of the Multimodal Database of Retinal Images in Africa (MoDRIA) by determining whether the availability of metadata during the image labeling process influences the accuracy, sensitivity, and specificity of image labels. MoDRIA was developed as one of the inaugural research projects of the Mbarara University Data Science Research Hub, part of the Data Science for Health Discovery and Innovation in Africa (DS-I Africa) initiative. METHODS: This is a crossover assessment with 2 groups and 2 phases. Each group had 10 randomly assigned labelers who provided an ICDR score and the presence or absence of ME for each of the 50 CFP in a test image set, both with and without metadata including blood pressure, visual acuity, glucose, and medical history. Sensitivity and specificity for referable retinopathy were based on ICDR scores, and for ME were calculated using a 2-sided t test. Sensitivity and specificity for ICDR scores and ME with and without metadata were compared for each participant using the Wilcoxon signed rank test. Statistical significance was set at P<.05. RESULTS: The sensitivity for identifying referable DR with metadata was 92.8% (95% CI 87.6-98.0) compared with 93.3% (95% CI 87.6-98.9) without metadata, and the specificity was 84.9% (95% CI 75.1-94.6) with metadata compared with 88.2% (95% CI 79.5-96.8) without metadata.
The sensitivity for identifying the presence of ME was 64.3% (95% CI 57.6-71.0) with metadata, compared with 63.1% (95% CI 53.4-73.0) without metadata, and the specificity was 86.5% (95% CI 81.4-91.5) with metadata compared with 87.7% (95% CI 83.9-91.5) without metadata. The sensitivity and specificity of the ICDR score and the presence or absence of ME were calculated for each labeler with and without metadata. No findings were statistically significant. CONCLUSIONS: The sensitivity and specificity scores for the detection of referable DR were slightly better without metadata, but the difference was not statistically significant. We cannot draw definitive conclusions about the impact of metadata on the sensitivity and specificity of image labels in our study. Given the importance of metadata in clinical situations, we believe that metadata may benefit labeling quality. A more rigorous study to determine the sensitivity and specificity of CFP labels with and without metadata is recommended.
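As a rough illustration of the per-labeler evaluation described above, referable/nonreferable labels can be scored against a reference standard to obtain sensitivity and specificity. The labels below are invented for illustration, not the study's data.

```python
def sens_spec(pred, truth):
    """Sensitivity and specificity for binary labels (1 = referable DR)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 0, 0, 0, 1, 0]        # hypothetical reference standard
with_meta = [1, 1, 0, 0, 0, 1, 1, 0]    # hypothetical labels given with metadata
sens, spec = sens_spec(with_meta, truth)  # → 0.75, 0.75
```

The study then compared the paired per-labeler values with and without metadata using a Wilcoxon signed rank test (e.g., `scipy.stats.wilcoxon` on the two lists of per-labeler sensitivities).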


Subject(s)
Diabetic Retinopathy , Metadata , Humans , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Uganda , Female , Male , Cross-Over Studies , Databases, Factual , Middle Aged , Fundus Oculi , Adult , Sensitivity and Specificity , Retina/diagnostic imaging , Retina/pathology
2.
Telemed J E Health ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39082066

ABSTRACT

Objective: To determine the cost-effectiveness of a new telemedicine optometric-based screening program for diabetic retinopathy (DR) compared with traditional assessment models in a universal European public health system. Methods: A new teleophthalmology program for DR based on the assessment of retinographies (3-field Joslin Vision Network by a certified optometrist and a reading center [IOBA-RC]) was designed. This program was first conducted in a rural area 40 km from the referral hospital (Medina de Rioseco, Valladolid, Spain). Its cost-effectiveness was compared with telemedicine based on evaluations by primary care physicians and general ophthalmologists, and with face-to-face examinations conducted by ophthalmologists. A decision tree model was developed to simulate the cost-effectiveness of the models, considering public and private costs. Effectiveness was measured in terms of quality of life. Results: A total of 261 patients with type 2 diabetes were included (42 had significant DR and required specific surveillance by the RC; 219 were undiagnosed). The sensitivity and specificity of DR detection were 100% and 74.1%, respectively. The telemedicine-based DR optometric screening model demonstrated similar utility to models based on physicians and general ophthalmologists and to traditional face-to-face evaluations (0.845) at a lower cost per patient (€51.23, €71.65, and €86.46, respectively). Conclusions: The telemedicine-based optometric screening program for DR in an RC demonstrated cost savings even in a developed country with a universal health care system. These results support the expansion of this kind of teleophthalmology program not only for screening but also for the follow-up of diabetic patients.

3.
Med Image Anal ; 97: 103242, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38901099

ABSTRACT

OBJECTIVE: The development of myopia is usually accompanied by changes in retinal vessels, optic disc, optic cup, fovea, and other retinal structures, as well as the length of the ocular axis. Accurate registration of retinal images is therefore very important for the extraction and analysis of retinal structural changes. However, the registration of retinal images with myopia development faces a series of challenges, due to the unique curved surface of the retina, as well as the changes in fundus curvature caused by ocular axis elongation. Therefore, our goal is to improve the registration accuracy of retinal images with myopia development. METHOD: In this study, we propose a 3D spatial model for pairs of retinal images with myopia development. In this model, we introduce a novel myopia development model that simulates the changes in ocular axis length and fundus curvature due to the development of myopia. We also consider the distortion model of the fundus camera during the imaging process. Based on the 3D spatial model, we further implement a registration framework, which utilizes corresponding points in the pair of retinal images to achieve registration via 3D pose estimation. RESULTS: The proposed method is quantitatively evaluated on a publicly available dataset without myopia development and our Fundus Image Myopia Development (FIMD) dataset. The proposed method is shown to perform more accurate and stable registration than state-of-the-art methods, especially for retinal images with myopia development. SIGNIFICANCE: To the best of our knowledge, this is the first retinal image registration method for the study of myopia development. It significantly improves the registration accuracy of retinal images with myopia development. The FIMD dataset we constructed has been made publicly available to promote research in related fields.


Subject(s)
Imaging, Three-Dimensional , Myopia , Retina , Humans , Myopia/diagnostic imaging , Imaging, Three-Dimensional/methods , Retina/diagnostic imaging , Algorithms , Image Interpretation, Computer-Assisted/methods
4.
Math Biosci Eng ; 21(2): 1938-1958, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38454669

ABSTRACT

Retinal vessel segmentation plays a vital role in the clinical diagnosis of ophthalmic diseases. Despite convolutional neural networks (CNNs) excelling in this task, challenges persist, such as restricted receptive fields and information loss from downsampling. To address these issues, we propose a new multi-fusion network with grouped attention (MAG-Net). First, we introduce a hybrid convolutional fusion module instead of the original encoding block to learn more feature information by expanding the receptive field. Additionally, the grouped attention enhancement module uses high-level features to guide low-level features and facilitates detailed information transmission through skip connections. Finally, the multi-scale feature fusion module aggregates features at different scales, effectively reducing information loss during decoder upsampling. To evaluate the performance of MAG-Net, we conducted experiments on three widely used retinal datasets: DRIVE, CHASE and STARE. The results demonstrate strong segmentation accuracy, specificity and Dice coefficients. Specifically, MAG-Net achieved segmentation accuracy values of 0.9708, 0.9773 and 0.9743, specificity values of 0.9836, 0.9875 and 0.9906 and Dice coefficients of 0.8576, 0.8069 and 0.8228, respectively. These results demonstrate that our method outperforms existing segmentation methods.
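The metrics reported above (accuracy, specificity, Dice coefficient) can all be computed from a predicted binary vessel mask and its ground truth. A minimal sketch; the tiny flattened masks are placeholders, not DRIVE/CHASE/STARE data.

```python
def seg_metrics(pred, truth):
    """Accuracy, specificity, and Dice coefficient for flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    acc = (tp + tn) / len(truth)
    spec = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return acc, spec, dice

pred  = [1, 1, 0, 0, 0, 1]   # toy predicted vessel pixels
truth = [1, 0, 0, 0, 1, 1]   # toy ground-truth vessel pixels
acc, spec, dice = seg_metrics(pred, truth)
```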


Subject(s)
Learning , Retinal Vessels , Retinal Vessels/diagnostic imaging , Neural Networks, Computer , Image Processing, Computer-Assisted
5.
Graefes Arch Clin Exp Ophthalmol ; 262(8): 2389-2401, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38358524

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative condition that primarily affects brain tissue. Because the retina and brain share the same embryonic origin, visual deficits have been reported in AD patients. Artificial intelligence (AI) has recently received a lot of attention due to its immense power to process images, detect image hallmarks, and make clinical decisions (such as diagnosis) based on images. Since retinal changes have been reported in AD patients, AI is being proposed to process retinal images for the prediction, diagnosis, and prognosis of AD. The purpose of this review was therefore to discuss the use of AI trained on retinal images of AD patients. According to previous research, AD patients experience changes in retinal thickness and retinal vessel density, which can occasionally occur before the onset of the disease's clinical symptoms. AI and machine vision can detect and use these changes for disease prediction, diagnosis, and prognosis. As a result, not only have unique algorithms been developed for this condition, but databases such as the Retinal OCTA Segmentation dataset (ROSE) have also been constructed for this purpose. The achievement of high accuracy, sensitivity, and specificity in classifying retinal images between AD and healthy groups is one of the major breakthroughs in using retinal-image-based AI for AD. Remarkably, researchers have been able to identify individuals with a positive family history of AD based on the properties of their eyes. In conclusion, the growing application of AI in medicine promises a future role in processing many aspects of care for patients with AD, but cohort studies are needed to determine whether it can help follow up healthy persons at risk of AD for a quicker diagnosis or assess the prognosis of patients with AD.


Subject(s)
Alzheimer Disease , Artificial Intelligence , Retina , Humans , Alzheimer Disease/diagnosis , Retina/diagnostic imaging , Retina/pathology , Retinal Diseases/diagnosis , Tomography, Optical Coherence/methods , Retinal Vessels/pathology , Retinal Vessels/diagnostic imaging , Algorithms
6.
Comput Biol Med ; 168: 107633, 2024 01.
Article in English | MEDLINE | ID: mdl-37992471

ABSTRACT

Recent deep learning methods based on convolutional neural networks (CNNs) have advanced medical image analysis and expedited automatic retinal artery/vein (A/V) classification. However, these CNN-based approaches face two challenges: (1) specific tubular structures and subtle variations in appearance, contrast, and geometry, which tend to be lost as network depth increases; and (2) limited well-labeled data for supervised segmentation of retinal vessels, which may hinder the effectiveness of deep learning methods. To address these issues, we propose a novel semi-supervised point consistency network (SPC-Net) for retinal A/V classification. SPC-Net consists of an A/V classification (AVC) module and a multi-class point consistency (MPC) module. The AVC module adopts an encoder-decoder segmentation network to generate the A/V prediction probability map for supervised learning. The MPC module introduces point set representations to adaptively generate point set classification maps of the arteriovenous skeleton, whose prediction flexibility and consistency (i.e., point consistency) effectively alleviate arteriovenous confusion. In addition, we propose a consistency regularization between the predicted A/V classification probability maps and the point set representation maps for unlabeled data to exploit the inherent segmentation perturbation of the point consistency, reducing the need for annotated data. We validate our method on two typical public datasets (DRIVE, HRF) and a private dataset (TR280) with different resolutions. Extensive qualitative and quantitative experimental results demonstrate the effectiveness of our proposed method for supervised and semi-supervised learning.


Subject(s)
Cardiovascular System , Retinal Artery , Retinal Artery/diagnostic imaging , Retinal Vessels , Retina , Neural Networks, Computer , Image Processing, Computer-Assisted
7.
Front Neurol ; 14: 1168836, 2023.
Article in English | MEDLINE | ID: mdl-37492851

ABSTRACT

Background and purpose: As a common feature of cerebral small vessel disease (cSVD), white matter lesions (WMLs) can lead to reduced brain function. A convenient, cheap, and non-intrusive method to detect WMLs could substantially benefit patient management in community screening, especially when magnetic resonance imaging (MRI) is unavailable or contraindicated. Therefore, this study aimed to develop a model that incorporates clinical laboratory data and retinal images, using deep learning, to predict the severity of WMLs. Methods: Two hundred fifty-nine patients with any kind of neurological disease were enrolled in our study. Demographic data, retinal images, MRI, and laboratory data were collected. The patients were assigned to the absent/mild and moderate-severe WMLs groups according to the Fazekas scoring system. Retinal images were acquired by fundus photography. A ResNet deep learning framework was used to analyze the retinal images, and a clinical-laboratory signature was generated from the laboratory data. Two prediction models were developed: a combined model including demographic data, the clinical-laboratory signature, and the retinal images, and a clinical model including only demographic data and the clinical-laboratory signature. Results: Approximately one-quarter of the patients (25.6%) had moderate-severe WMLs. The left and right retinal images predicted moderate-severe WMLs with areas under the curve (AUCs) of 0.73 and 0.94, respectively. The clinical-laboratory signature predicted moderate-severe WMLs with an AUC of 0.73. The combined model showed good performance in predicting moderate-severe WMLs with an AUC of 0.95, while the clinical model had an AUC of 0.78.
Conclusion: Combining retinal images from conventional fundus photography with clinical laboratory data is a reliable and convenient approach to predicting the severity of WMLs and is helpful for the management and follow-up of patients with WMLs.
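The AUCs quoted above have a simple rank interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch with made-up scores (not the study's outputs):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC as the probability a positive outranks a negative (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores: moderate-severe WML cases (positive) vs. absent/mild cases.
auc_value = auc([0.9, 0.4], [0.5, 0.1])  # → 0.75
```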

8.
Int Ophthalmol ; 43(10): 3569-3586, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37291412

ABSTRACT

BACKGROUND: The eyes are among the most important parts of the human body, as they are directly connected to the brain and help us perceive imagery in daily life, yet eye diseases are often ignored and underestimated until it is too late. Manual diagnosis of eye disorders by a physician can be very costly and time-consuming. OBJECTIVE: To tackle this, a novel method named EyeCNN is proposed for identifying eye diseases from retinal images using EfficientNet B3. METHODS: A dataset of retinal imagery covering three diseases (diabetic retinopathy, glaucoma, and cataract) was used to train 12 convolutional networks; EfficientNet B3 was the top-performing model of the 12, with a testing accuracy of 94.30%. RESULTS: After preprocessing of the dataset and training of the models, various experiments were performed to see where our model stands. The evaluation was performed using well-defined measures, and the final model was deployed on the Streamlit server as a prototype for public usage. The proposed model has the potential to help diagnose eye diseases early, which can facilitate timely treatment. CONCLUSION: The use of EyeCNN for classifying eye diseases has the potential to aid ophthalmologists in diagnosing conditions accurately and efficiently. This research may also lead to a deeper understanding of these diseases and to new treatments. The EyeCNN web server can be accessed at ( https://abdulrafay97-eyecnn-app-rd9wgz.streamlit.app/ ).


Subject(s)
Cataract , Diabetic Retinopathy , Glaucoma , Humans , Retina , Neural Networks, Computer , Diabetic Retinopathy/diagnosis , Glaucoma/diagnosis
9.
Med Image Anal ; 87: 102805, 2023 07.
Article in English | MEDLINE | ID: mdl-37104995

ABSTRACT

Unsupervised anomaly detection (UAD) detects anomalies by learning the distribution of normal data without labels and therefore has wide application in medical imaging, alleviating the burden of collecting annotated medical data. Current UAD methods mostly learn the normal data by reconstructing the original input, but often fail to consider prior information with semantic meaning. In this paper, we first propose a universal unsupervised anomaly detection framework, SSL-AnoVAE, which utilizes a self-supervised learning (SSL) module to provide more fine-grained semantics depending on the anomalies to be detected in retinal images. We also explore the relationship between the data transformation adopted in the SSL module and the quality of anomaly detection for retinal images. Moreover, to take full advantage of the proposed SSL-AnoVAE and apply it toward clinical use in computer-aided diagnosis of retina-related diseases, we further propose to stage and segment the anomalies detected by SSL-AnoVAE in an unsupervised manner. Experimental results demonstrate the effectiveness of our proposed method for unsupervised anomaly detection, staging, and segmentation on both retinal optical coherence tomography images and color fundus photograph images.


Subject(s)
Diagnosis, Computer-Assisted , Retinal Diseases , Humans , Fundus Oculi , Retinal Diseases/diagnostic imaging , Semantics , Tomography, Optical Coherence , Image Processing, Computer-Assisted
10.
Life (Basel) ; 12(10)2022 Oct 15.
Article in English | MEDLINE | ID: mdl-36295045

ABSTRACT

Background: The aim of this study was to assess the performance of regional graders and artificial intelligence algorithms across retinal cameras with different specifications in classifying an image as gradable or ungradable. Methods: Study subjects were included from a community-based nationwide diabetic retinopathy screening program in Thailand. Various non-mydriatic fundus cameras were used for image acquisition, including Kowa Nonmyd, Kowa Nonmyd α-DⅢ, Kowa Nonmyd 7, Kowa Nonmyd WX, Kowa VX 10 α, Kowa VX 20 and Nidek AFC 210. All retinal photographs were graded by deep learning algorithms and human graders and compared with a standard reference. Results: Images were divided into two categories, gradable and ungradable. Four thousand eight hundred fifty-two participants with 19,408 fundus images were included, of which 15,351 (79.09%) were gradable and the remaining 4057 (20.90%) were ungradable. Conclusions: The deep learning (DL) algorithm demonstrated better sensitivity, specificity and kappa than the human graders for all eight types of non-mydriatic fundus cameras. The DL system showed more consistent diagnostic performance than the human graders across images of varying quality and camera types.

11.
J Imaging ; 8(10)2022 Sep 22.
Article in English | MEDLINE | ID: mdl-36286352

ABSTRACT

Hypertensive retinopathy severity classification is proportionally related to tortuosity severity grading, yet no existing tortuosity severity scale enables a computer-aided system to classify the tortuosity severity of a retinal image. This work aimed to introduce a machine learning model that can automatically identify the tortuosity severity of a retinal image and hence contribute to an automated grading system for hypertensive or diabetic retinopathy. First, tortuosity is quantified using fourteen tortuosity measurement formulas for the retinal images of the AV-Classification dataset to create the tortuosity feature set. Second, manual labeling is performed and reviewed by two ophthalmologists to construct a tortuosity severity ground truth grading for each image in the dataset. Finally, the feature set is used to train and validate the machine learning models (J48 decision tree, ensemble rotation forest, and distributed random forest). The best-performing learned model is used as the tortuosity severity classifier to identify the tortuosity severity (normal, mild, moderate, or severe) of any given retinal image. The distributed random forest model reported the highest accuracy (99.4%) compared to the J48 decision tree and rotation forest models, with the lowest root mean square error (0.0000192) and mean average error (0.0000182). The proposed tortuosity severity grading matched the ophthalmologists' judgment. Moreover, optimizing vessel segmentation and vessel segment extraction, together with the created feature set, increased the accuracy of the automatic tortuosity severity detection model.
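One of the simplest formulas commonly used among tortuosity measures is the arc-to-chord ratio: the length of the vessel centerline divided by the distance between its endpoints (1.0 for a perfectly straight segment). A sketch under that assumption, with made-up centerline points:

```python
import math

def arc_chord_tortuosity(points):
    """Arc length of a sampled vessel centerline divided by its chord length."""
    arc = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]   # straight segment → tortuosity 1.0
bent = [(0, 0), (1, 1), (2, 0)]       # bent segment → tortuosity sqrt(2)
t_straight = arc_chord_tortuosity(straight)
t_bent = arc_chord_tortuosity(bent)
```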

12.
Front Neurol ; 13: 949805, 2022.
Article in English | MEDLINE | ID: mdl-35968300

ABSTRACT

Purpose: To assess the value of automatic disc-fovea angle (DFA) measurement using the DeepLabv3+ segmentation model. Methods: A total of 682 normal fundus image datasets were collected from the Eye Hospital of Nanjing Medical University. The following parts of the images were labeled and subsequently reviewed by ophthalmologists: optic disc center, macular center, optic disc area, and virtual macular area. A total of 477 normal fundus images were used to train the DeepLabv3+, U-Net, and PSPNet models, which were used to obtain the optic disc area and virtual macular area. Then, the coordinates of the optic disc center and macular center were obtained using the minimum enclosing circle technique. Finally, the DFA was calculated. Results: In this study, 205 normal fundus images were used to test the models. The experimental results showed that the errors in automatic DFA measurement using the DeepLabv3+, U-Net, and PSPNet segmentation models were 0.76°, 1.4°, and 2.12°, respectively. The mean intersection over union (MIoU), mean pixel accuracy (MPA), average error in the center of the optic disc, and average error in the center of the virtual macula obtained using the DeepLabv3+ model were 94.77%, 97.32%, 10.94 pixels, and 13.44 pixels, respectively. Automatic DFA measurement using DeepLabv3+ produced smaller errors than the other segmentation models, so the DeepLabv3+ segmentation model was chosen to measure the DFA automatically. Conclusions: DeepLabv3+-based automatic segmentation techniques can produce accurate and rapid DFA measurements.
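Once the two center coordinates are available, the DFA reduces to the inclination of the disc-to-macula line relative to the horizontal. A sketch of that final step; the pixel coordinates are hypothetical, and the exact sign/orientation convention is an assumption, not taken from the paper:

```python
import math

def disc_fovea_angle(disc_center, macula_center):
    """Absolute angle (degrees) between the disc-fovea line and the horizontal."""
    dx = macula_center[0] - disc_center[0]
    dy = macula_center[1] - disc_center[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# Hypothetical centers in image pixel coordinates (x, y).
dfa = disc_fovea_angle((420.0, 512.0), (890.0, 560.0))
```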

13.
Rev. mex. ing. bioméd ; 43(2): 1246, May.-Aug. 2022. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1409795

ABSTRACT

Deep learning (DL) techniques achieve high performance in the detection of illnesses in retina images, but most models are trained on different databases to solve one specific task. Consequently, no single model currently exists for the detection/segmentation of a variety of retinal illnesses. This research uses transfer learning (TL) to take advantage of prior knowledge generated during model training for illness detection in order to segment lesions with encoder-decoder convolutional neural networks (CNNs), where the encoders are classical models such as VGG-16 and ResNet50 or variants with attention modules. We show that it is possible to use a general methodology with a single fundus image database for the detection/segmentation of a variety of retinal diseases, achieving state-of-the-art results. In practice, such a model could be more valuable, since it can be trained with a more realistic database containing a broad spectrum of diseases to detect/segment illnesses without sacrificing performance. TL can help achieve fast convergence if the samples in the main task (classification) and sub-tasks (segmentation) are similar; if this requirement is not fulfilled, the parameters are effectively trained from scratch.



14.
J Clin Med ; 11(13)2022 Jul 02.
Article in English | MEDLINE | ID: mdl-35807135

ABSTRACT

Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as those using deep learning techniques, which are applied to solve complex problems in medical imaging, particularly the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, resolving diagnostic doubts among specialists and enabling early treatment to delay the onset of blindness. However, the images are obtained by different cameras, in distinct locations, and from various population groups, and are centered on multiple parts of the retina. Further limitations include the small amount of data and the lack of segmentation of the optic papilla and cup. This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with disc and cup segmentations made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable, and the quality and presentation of images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.

15.
Diagnostics (Basel) ; 12(6)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35741192

ABSTRACT

Glaucoma is a group of eye conditions that damage the optic nerve, the health of which is vital for good eyesight. This damage is often caused by higher-than-normal pressure in the eye. In the past few years, applications of artificial intelligence and data science have increased rapidly in medicine, especially in imaging. In particular, deep learning tools have been successfully applied, in some cases obtaining results superior to those obtained by humans. In this article, we present a novel soft ensemble model based on the K-NN algorithm that combines the class-membership probabilities obtained by several deep learning models. Three models of different natures (CNNs, CapsNets and convolutional autoencoders) were selected to promote diversity. The latent spaces of these models are combined using the local information provided by the true sample labels, and the K-NN algorithm is applied to determine the final decision. The results obtained on two different datasets of retinal images show that the proposed ensemble model improves the diagnostic capability of both the individual models and the state-of-the-art results.

16.
J Clin Med ; 11(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35628812

ABSTRACT

BACKGROUND: Coronary heart disease (CHD) is the leading cause of death worldwide, constituting a growing health and social burden. People with cardiometabolic disorders are more likely to develop CHD. Retinal image analysis is a novel and noninvasive method to assess microvascular function. We aim to investigate whether retinal images can be used for CHD risk estimation for people with cardiometabolic disorders. METHODS: We have conducted a case-control study at Shenzhen Traditional Chinese Medicine Hospital, where 188 CHD patients and 128 controls with cardiometabolic disorders were recruited. Retinal images were captured within two weeks of admission. The retinal characteristics were estimated by the automatic retinal imaging analysis (ARIA) algorithm. Risk estimation models were established for CHD patients using machine learning approaches. We divided CHD patients into a diabetes group and a non-diabetes group for sensitivity analysis. A ten-fold cross-validation method was used to validate the results. RESULTS: The sensitivity and specificity were 81.3% and 88.3%, respectively, with an accuracy of 85.4% for CHD risk estimation. The risk estimation model for CHD with diabetes performed better than the model for CHD without diabetes. CONCLUSIONS: The ARIA algorithm can be used as a risk assessment tool for CHD for people with cardiometabolic disorders.

17.
Med Image Anal ; 77: 102340, 2022 04.
Article in English | MEDLINE | ID: mdl-35124367

ABSTRACT

Automatic artery/vein (A/V) classification, the basic prerequisite for quantitative analysis of the retinal vascular network, has been actively investigated in recent years using both conventional and deep learning based methods. Topological connection relationships and vessel width information, which have proved effective in improving A/V classification performance for conventional methods, have not yet been exploited by deep learning based methods. In this paper, we propose a novel Topology and Width Aware Generative Adversarial Network (TW-GAN), which, for the first time, integrates topology connectivity and vessel width information into a deep learning framework for A/V classification. To improve topology connectivity, a topology-aware module is proposed, which contains a topology ranking discriminator based on ordinal classification to rank the topological connectivity level of the ground-truth mask, the generated A/V mask and an intentionally shuffled mask. In addition, a topology preserving triplet loss is proposed to extract high-level topological features and to further narrow the feature distance between the predicted A/V mask and the ground-truth mask. Moreover, to enhance the model's perception of vessel width, a width-aware module is proposed to predict width maps for the dilated/non-dilated ground-truth masks. Extensive empirical experiments demonstrate that the proposed framework effectively increases the topological connectivity of the segmented A/V masks and achieves state-of-the-art A/V classification performance on the publicly available AV-DRIVE and HRF datasets. Source code and data annotations are available at https://github.com/o0t1ng0o/TW-GAN.


Subject(s)
Retinal Artery, Humans, Image Processing, Computer-Assisted/methods, Retina
18.
Sensors (Basel) ; 22(4)2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35214351

ABSTRACT

Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can extend glaucoma screening to larger populations using retinal images. Low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired by both retinography and mobile devices. The goal was to verify whether results similar to those of these methods could be achieved with images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2, and Inception ResNet V2 models, whose activation maps were produced and analysed to support the glaucoma classifiers' predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR), a frequent indicator used by specialists. For this reason, the U-Net architecture was additionally used, with the Inception ResNet V2 and Inception V3 models as backbones, to segment the optic structures and estimate the CDR. For both tasks, the models' performance approached that of state-of-the-art methods, and the classification results on a low-quality private dataset illustrate the advantage of using cheaper lenses.
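Since the CDR criterion is central here, a minimal sketch of how a cup-to-disc ratio can be derived from binary segmentation masks, using vertical diameters (the helper names and the exact measurement convention are our assumptions, not the paper's code):

```python
def vertical_diameter(mask):
    """Vertical extent, in rows, of the positive region of a binary mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return (rows[-1] - rows[0] + 1) if rows else 0

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Ratio of cup to disc vertical diameters; 0.0 when the disc is absent."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else 0.0
```

A segmentation network such as the U-Net variants above would produce the two masks; a CDR well above the clinically typical range then flags the image for specialist review.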


Subject(s)
Deep Learning, Glaucoma, Optic Disk, Computers, Handheld, Diagnostic Techniques, Ophthalmological, Glaucoma/diagnostic imaging, Humans
19.
J Digit Imaging ; 35(2): 281-301, 2022 04.
Article in English | MEDLINE | ID: mdl-35013827

ABSTRACT

Hypertensive retinopathy (HR) refers to changes in the morphological diameter of the retinal vessels due to persistent high blood pressure. Early detection of such changes helps prevent blindness, or even death due to stroke. These changes can be quantified by computing the arteriovenous ratio (AVR) and the tortuosity severity in the retinal vasculature. This paper presents a decision support system for detecting and grading HR using morphometric analysis of the retinal vasculature, particularly by measuring the AVR and retinal vessel tortuosity. In the first step, the retinal blood vessels are segmented and classified as arteries and veins. Then, the widths of arteries and veins are measured within a region of interest around the optic disk. Next, a new iterative method is proposed to compute the AVR from the caliber measurements of arteries and veins using the Parr-Hubbard and Knudtson methods. Moreover, a retinal vessel tortuosity severity index is computed for each image using 14 tortuosity severity metrics. Finally, a hybrid decision support system is proposed for the detection and grading of HR using the AVR and the tortuosity severity index. Furthermore, we present a new publicly available retinal vessel morphometry (RVM) dataset to evaluate the proposed methodology. The RVM dataset contains 504 retinal images with pixel-level annotations for vessel segmentation, artery/vein classification, and optic disk localization. Image-level labels for the vessel tortuosity index and HR grade are also available. The proposed methods of iterative AVR measurement, tortuosity indexing, and HR grading are evaluated on the new RVM dataset. The results indicate that the proposed method outperforms existing methods. The presented methodology is a novel advancement in the automated detection and grading of HR and can potentially be used as a clinical decision support system.
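The Knudtson approach mentioned above combines vessel calibers pairwise, widest with narrowest, using branching coefficients (commonly 0.88 for arterioles and 0.95 for venules) until a single summary caliber remains; the AVR is then CRAE/CRVE. A hedged sketch under those assumptions — not the paper's iterative method:

```python
import math

def knudtson_combine(widths, coeff):
    """Iteratively pair the widest with the narrowest caliber until one remains."""
    w = sorted(widths)
    while len(w) > 1:
        combined = []
        while len(w) >= 2:
            narrow, wide = w.pop(0), w.pop(-1)
            combined.append(coeff * math.sqrt(narrow ** 2 + wide ** 2))
        combined.extend(w)  # an odd leftover caliber carries over unchanged
        w = sorted(combined)
    return w[0]

def avr(artery_widths, vein_widths):
    """Arteriovenous ratio: central retinal artery vs. vein equivalent."""
    crae = knudtson_combine(artery_widths, 0.88)  # arteriolar coefficient
    crve = knudtson_combine(vein_widths, 0.95)    # venular coefficient
    return crae / crve
```

With equal artery and vein calibers this reduces to 0.88/0.95 ≈ 0.93; markedly lower AVR values indicate the arteriolar narrowing associated with HR.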


Subject(s)
Hypertensive Retinopathy, Optic Disk, Humans, Hypertensive Retinopathy/diagnostic imaging, Image Processing, Computer-Assisted/methods, Retinal Vessels/diagnostic imaging
20.
Front Med (Lausanne) ; 8: 750396, 2021.
Article in English | MEDLINE | ID: mdl-34820394

ABSTRACT

From diagnosing cardiovascular diseases to analyzing the progression of diabetic retinopathy, accurate retinal artery/vein (A/V) classification is critical. Promising approaches, ranging from conventional graph-based methods to recent convolutional neural network (CNN)-based models, have been proposed. However, the inability of traditional graph-based methods to utilize the deep hierarchical features extracted by CNNs, and the limitations of current CNN-based methods in incorporating vessel topology information, hinder their effectiveness. In this paper, we propose a new CNN-based framework, VTG-Net (vessel topology graph network), for retinal A/V classification that incorporates vessel topology information. VTG-Net exploits retinal vessel topology along with CNN features to improve A/V classification accuracy. Specifically, we transform vessel features extracted by a CNN in the image domain into a graph representation that preserves the vessel topology. Then, by exploiting a graph convolutional network (GCN), the model learns CNN features and vessel topological features simultaneously. The final prediction is obtained by fusing the CNN and GCN outputs. Using the publicly available AV-DRIVE dataset and an in-house dataset, we verify the high performance of VTG-Net for retinal A/V classification over state-of-the-art methods (with ~2% improvement in accuracy on the AV-DRIVE dataset).
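The graph side of such a model can be pictured as standard graph convolution over a vessel-topology adjacency matrix, with a late fusion of CNN and GCN scores. The following is a toy, plain-Python sketch of that generic idea (a single unweighted propagation step with averaging fusion; the real VTG-Net uses learned weights, deep CNN features, and a different fusion head):

```python
def matmul(a, b):
    """Plain-Python matrix multiplication."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def gcn_layer(adj, feats, weight):
    """One propagation step: row-normalize (A + I), then aggregate and project."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    norm = [[v / sum(row) for v in row] for row in a_hat]
    return matmul(matmul(norm, feats), weight)

def fuse(cnn_scores, gcn_scores):
    """Late fusion of per-vessel-segment scores by simple averaging."""
    return [(c + g) / 2 for c, g in zip(cnn_scores, gcn_scores)]
```

Here each graph node would stand for a vessel segment, edges encode topological connections between segments, and the fused scores give the final artery-vs-vein decision.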
