Results 1 - 20 of 54
1.
BMJ Open ; 14(9): e081398, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237272

ABSTRACT

OBJECTIVES: Despite global research on early detection of age-related macular degeneration (AMD), not enough is being done for large-scale screening. Automated analysis of retinal images captured via smartphone presents a potential solution; however, to our knowledge, such an artificial intelligence (AI) system has not been evaluated. The study aimed to assess the performance of an AI algorithm in detecting referable AMD on images captured with a portable fundus camera. DESIGN, SETTING: A retrospective image database from the Age-Related Eye Disease Study (AREDS) and the target device was used. PARTICIPANTS: The algorithm was trained on two distinct data sets with macula-centric images: initially on 108,251 images (55% referable AMD) from AREDS and then fine-tuned on 1108 images (33% referable AMD) captured on Asian eyes using the target device. The model was designed to indicate the presence of referable AMD (intermediate and advanced AMD). Following the first training step, the test set consisted of 909 images (49% referable AMD). For the fine-tuning step, the test set consisted of 238 images (34% referable AMD). The reference standard for the AREDS data set was fundus image grading by the central reading centre, and for the target device, it was consensus image grading by specialists. OUTCOME MEASURES: Area under the receiver operating characteristic curve (AUC), sensitivity and specificity of the algorithm. RESULTS: Before fine-tuning, the deep learning (DL) algorithm exhibited a test set (from AREDS) sensitivity of 93.48% (95% CI: 90.8% to 95.6%), specificity of 82.33% (95% CI: 78.6% to 85.7%) and AUC of 0.965 (95% CI: 0.95 to 0.98). After fine-tuning, the DL algorithm displayed a test set (from the target device) sensitivity of 91.25% (95% CI: 82.8% to 96.4%), specificity of 84.18% (95% CI: 77.5% to 89.5%) and AUC of 0.947 (95% CI: 0.911 to 0.982). CONCLUSION: The DL algorithm shows promising results in detecting referable AMD from a portable smartphone-based imaging system. This approach can potentially bring effective and affordable AMD screening to underserved areas.
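The outcome measures above (sensitivity, specificity and AUC with 95% CIs) can be illustrated with a minimal sketch on simulated labels and scores; the study's actual images, model and confidence-interval method are not reproduced here, and the Wilson score interval below is only one common choice.

```python
# Minimal sketch of sensitivity, specificity and ROC AUC with 95% CIs,
# computed on made-up referable-AMD labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=909)                              # 1 = referable AMD (simulated)
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 909), 0, 1)   # simulated model output
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
auc = roc_auc_score(y_true, scores)
print(f"sensitivity = {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f})")
print(f"specificity = {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f})")
print(f"AUC = {auc:.3f}")
```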


Subject(s)
Algorithms, Deep Learning, Macular Degeneration, Smartphone, Humans, Macular Degeneration/diagnosis, Macular Degeneration/diagnostic imaging, Retrospective Studies, Aged, Fundus Oculi, Female, Sensitivity and Specificity, Photography/instrumentation, Male, ROC Curve, Middle Aged, Mass Screening/methods, Mass Screening/instrumentation
2.
Ophthalmol Glaucoma ; 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39277171

ABSTRACT

PURPOSE: This study assesses the diagnostic efficacy of the offline Medios Artificial Intelligence (AI) glaucoma software in a primary eyecare setting, using non-mydriatic fundus images from Remidio's Fundus-on-Phone (FOP NM-10). AI results were compared with tele-ophthalmologists' diagnoses and with a glaucoma specialist's assessment for those participants referred to a tertiary eyecare hospital. DESIGN: Prospective, cross-sectional study. PARTICIPANTS: 303 participants from 6 satellite vision centers of a tertiary eye hospital. METHODS: At the vision center, participants underwent comprehensive eye evaluations, including clinical history, visual acuity measurement, slit lamp examination, intraocular pressure measurement, and fundus photography using the FOP NM-10 camera. Medios AI-Glaucoma software analysed 42-degree disc-centric fundus images, categorizing them as normal, glaucoma, or suspect. Tele-ophthalmologists, who were glaucoma fellows with a minimum of 3 years of ophthalmology training and 1 year of glaucoma fellowship training, were masked to the AI results and remotely diagnosed subjects based on the history and disc appearance. All participants labelled as disc suspects or glaucoma by the AI or tele-ophthalmologists underwent further comprehensive glaucoma evaluation at the base hospital, including clinical examination, Humphrey visual field analysis (HFA), and optical coherence tomography (OCT). AI and tele-ophthalmologist diagnoses were then compared with the glaucoma specialist's diagnosis. MAIN OUTCOME MEASURES: Sensitivity and specificity of Medios AI. RESULTS: Of 303 participants, 299 with at least one eye of sufficient image quality were included in the study; the remaining 4 participants did not have sufficient image quality in either eye. Medios AI identified 39 participants (13%) with referable glaucoma. The AI exhibited a sensitivity of 0.91 (95% CI: 0.71-0.99) and specificity of 0.93 (95% CI: 0.89-0.96) in detecting referable glaucoma (definite perimetric glaucoma) when compared with the tele-ophthalmologists. The agreement between the AI and the glaucoma specialist was 80.3%, surpassing the 55.3% agreement between the tele-ophthalmologists and the glaucoma specialist among the participants referred to the base hospital. Both the AI and the tele-ophthalmologists relied on fundus photographs for their diagnoses, whereas the glaucoma specialist's assessments at the base hospital were aided by additional tools such as HFA and OCT. Furthermore, the AI had fewer false-positive referrals (2 out of 10) than the tele-ophthalmologists (9 out of 10). CONCLUSION: The Medios offline AI exhibited promising sensitivity and specificity in detecting referable glaucoma from remote vision centers in southern India when compared with tele-ophthalmologists. It also demonstrated better agreement with the glaucoma specialist's diagnosis for referable glaucoma participants.

3.
Indian J Ophthalmol ; 72(8): 1162-1167, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39078960

ABSTRACT

PURPOSE: This study aimed to determine the generalizability of an artificial intelligence (AI) algorithm trained on an ethnically diverse dataset to screen for referable diabetic retinopathy (RDR) in the Armenian population, which was unseen during AI development. METHODS: This study comprised 550 patients with diabetes mellitus requiring diabetic retinopathy (DR) screening who visited polyclinics in Armenia over 10 months. The Medios AI-DR algorithm was developed using a robust, diverse, ethnically balanced dataset with no inherent bias and was deployed offline on a smartphone-based fundus camera. The algorithm analyzed the retinal images captured using the target device for the presence of RDR (i.e., moderate non-proliferative diabetic retinopathy (NPDR) and/or clinically significant diabetic macular edema (CSDME), or more severe disease) and sight-threatening DR (STDR, i.e., severe NPDR and/or CSDME, or more severe disease). The AI output was compared with the consensus or majority image grading of three expert graders according to the International Clinical Diabetic Retinopathy severity scale. RESULTS: In the 478 subjects included in the analysis, the algorithm achieved a high classification sensitivity of 95.30% (95% CI: 91.9%-98.7%) and a specificity of 83.89% (95% CI: 79.9%-87.9%) for the detection of RDR. The sensitivity for STDR detection was 100%. CONCLUSION: The study showed that the Medios AI-DR algorithm yields good accuracy in screening for RDR in the Armenian population. In our literature search, this is the only smartphone-based, offline AI model validated in different populations.


Subject(s)
Algorithms, Artificial Intelligence, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/ethnology, Male, Female, Middle Aged, Mass Screening/methods, Ethnicity, Aged, Adult
4.
APL Bioeng ; 8(2): 026121, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38868458

ABSTRACT

Lung cancer, a malignancy of the respiratory system, has a devastating impact on the health and well-being of affected individuals. In the absence of automated, noninvasive diagnostic tools, healthcare professionals rely on biopsy as the gold standard for diagnosis; however, biopsy can be a traumatic and expensive procedure. The limited availability of datasets and inaccuracies in diagnosis are additional drawbacks faced by researchers. The objective of the proposed research is to develop an automated diagnostic tool for lung cancer screening with optimized hyperparameters, such that the convolutional neural network (CNN) model generalizes well to computed tomography (CT) slices of lung pathologies obtained from varied sources. This objective is achieved in two ways: (i) a preprocessing methodology specific to lung CT scans is formulated to avoid the loss of information caused by indiscriminate image smoothing, and (ii) the sine cosine algorithm (SCA) is integrated into the CNN model to optimally select its tuning parameters, with the classification error rate serving as the objective function that the SCA minimizes. The proposed method achieved an average classification accuracy of 99% in classifying lung scans into normal, benign, and malignant classes. The generalization ability of the proposed model was further tested on an unseen dataset, again achieving promising results. These quantitative results support the system's use by radiologists in a clinical scenario.
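The sine cosine algorithm that the abstract integrates into the CNN can be sketched as follows: a minimal, generic SCA minimizer with a placeholder objective standing in for the CNN's validation error rate. The hyperparameter space, bounds and objective shown here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the sine cosine algorithm (SCA) used to tune hyperparameters
# by minimizing an error-rate objective. The CNN itself is not reproduced here;
# `error_rate` is a hypothetical stand-in for training a CNN with the candidate
# hyperparameters and returning its validation error.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative search space: [learning-rate exponent, dropout rate].
lower = np.array([-5.0, 0.0])
upper = np.array([-1.0, 0.6])

def error_rate(x):
    # Placeholder objective; in the study this would be 1 - validation accuracy.
    target = np.array([-3.0, 0.3])
    return float(np.sum((x - target) ** 2))

def sca_minimize(obj, n_agents=20, n_iter=100, a=2.0):
    dim = lower.size
    pop = rng.uniform(lower, upper, size=(n_agents, dim))
    fitness = np.array([obj(p) for p in pop])
    best, best_fit = pop[fitness.argmin()].copy(), fitness.min()
    for t in range(n_iter):
        r1 = a - t * a / n_iter                      # shrinks: exploration -> exploitation
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            step = r3 * best - pop[i]
            if rng.uniform() < 0.5:
                pop[i] = pop[i] + r1 * np.sin(r2) * np.abs(step)
            else:
                pop[i] = pop[i] + r1 * np.cos(r2) * np.abs(step)
            pop[i] = np.clip(pop[i], lower, upper)
            f = obj(pop[i])
            if f < best_fit:
                best, best_fit = pop[i].copy(), f
    return best, best_fit

best_params, best_err = sca_minimize(error_rate)
print("best hyperparameters:", best_params, "objective:", best_err)
```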

5.
Indian J Otolaryngol Head Neck Surg ; 76(3): 2714-2721, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38883455

ABSTRACT

Diagnostic accuracy is vital in otorhinolaryngology for effective patient care, yet diagnostic mismatches between non-otorhinolaryngology clinicians and ENT specialists can occur. However, studies investigating such mismatches in low-resource healthcare environments are limited. This study aims to analyze diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment. A publicly available dataset assessing diagnostic outcomes from non-otorhinolaryngology clinicians and ENT specialists was analyzed. The dataset included demographic characteristics, referral diagnoses, and final ENT specialist diagnoses. Descriptive statistics and appropriate statistical tests were employed to assess the prevalence of diagnostic mismatches and associated factors. The analysis comprised 1544 cases. The prevalence of diagnostic mismatch between non-otorhinolaryngology clinicians and ENT specialists was 67.4%. Certain ENT diseases demonstrated higher frequencies of diagnostic mismatch. Factors such as the specific referral diagnosis and patient compliance were found to influence the occurrence of diagnostic mismatches. This study highlights the presence of diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment. The prevalence of these mismatches underscores the need for improved diagnostic practices in such settings. Factors contributing to diagnostic mismatches should be explored further to develop strategies for enhancing diagnostic accuracy and reducing diagnostic errors in otorhinolaryngology.

6.
J Biosci ; 49, 2024.
Article in English | MEDLINE | ID: mdl-38383972

ABSTRACT

Rare muscular disorders (RMDs) are disorders that affect a small percentage of the population. These disorders, which are attributed to genetic mutations, often manifest as progressive weakness and atrophy of skeletal and heart muscles. RMDs include disorders such as Duchenne muscular dystrophy (DMD), GNE myopathy, spinal muscular atrophy (SMA), limb girdle muscular dystrophy, and so on. Because these disorders occur infrequently, the development of therapeutic approaches receives less attention than that for more prevalent diseases. In recent times, however, improved understanding of pathogenesis has led to greater advances in developing therapeutic options to treat such diseases. Exon skipping, gene augmentation, and gene editing have taken the spotlight in drug development for rare neuromuscular disorders. The recent innovation in targeting and repairing mutations with the advent of CRISPR technology has opened new possibilities in the development of gene therapy approaches for these disorders. Although these treatments show satisfactory therapeutic effects, their susceptibility to degradation, instability, and toxicity limits their application, so an appropriate delivery vector is required for these cargoes. Viral vectors are considered potential delivery systems for gene therapy; however, the associated immunogenic response and other limitations have paved the way for the application of non-viral systems such as lipids, polymers, cell-penetrating peptides (CPPs), and other organic and inorganic materials. This review focuses on non-viral vectors for the delivery of therapeutic cargoes to treat muscular dystrophies.


Subject(s)
Spinal Muscular Atrophy, Duchenne Muscular Dystrophy, Nucleic Acids, Humans, Rare Diseases/drug therapy, Rare Diseases/genetics, Duchenne Muscular Dystrophy/drug therapy, Duchenne Muscular Dystrophy/genetics, Spinal Muscular Atrophy/genetics, Spinal Muscular Atrophy/therapy, Muscles
7.
Eye (Lond) ; 38(6): 1104-1111, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38092938

ABSTRACT

BACKGROUND/OBJECTIVES: An affordable and scalable screening model is critical for detecting undiagnosed glaucoma. The study evaluated the performance of an offline, smartphone-based AI system for the detection of referable glaucoma against two benchmarks: specialist diagnosis following a full glaucoma workup and consensus image grading. SUBJECTS/METHODS: This prospective study (tertiary glaucoma centre, India) included 243 subjects with varying severity of glaucoma and a control group without glaucoma. Disc-centred images were captured using a validated smartphone-based fundus camera, analysed by the AI system, and graded by specialists. The diagnostic ability of the AI in detecting referable glaucoma (confirmed glaucoma) versus non-referable glaucoma (suspects and no glaucoma) was evaluated against both the final diagnosis (comprehensive glaucoma workup) and majority image grading by glaucoma specialists (pre-defined criteria). RESULTS: The AI system demonstrated a sensitivity and specificity of 93.7% (95% CI: 87.6-96.9%) and 85.6% (95% CI: 78.6-90.6%), respectively, in the detection of referable glaucoma when compared against the final diagnosis following a full glaucoma workup. The true negative rate in definite non-glaucoma cases was 94.7% (95% CI: 87.2-97.9%). The false negatives included 4 cases of early and 3 of moderate glaucoma. When the same set of images provided to the AI was also provided to the specialists for image grading, the specialists detected 60% (67/111) of true glaucoma cases versus a detection rate of 94% (104/111) by the AI. CONCLUSION: The AI tool showed robust performance when compared against a stringent benchmark. It had modest over-referral of normal subjects despite being challenged with fundus images alone. The next step involves a population-level assessment.


Subject(s)
Diabetic Retinopathy, Glaucoma, Humans, Artificial Intelligence, Prospective Studies, Smartphone, Diabetic Retinopathy/diagnosis, Mass Screening/methods, Glaucoma/diagnosis
8.
Front Pediatr ; 11: 1197237, 2023.
Article in English | MEDLINE | ID: mdl-37794964

ABSTRACT

Purpose: The primary objective of this study was to develop and validate an AI algorithm as a screening tool for the detection of retinopathy of prematurity (ROP). Participants: Images were collected from infants enrolled in the KIDROP tele-ROP screening program. Methods: We developed a deep learning (DL) algorithm with 227,326 wide-field images from multiple camera systems obtained from the KIDROP tele-ROP screening program in India over an 11-year period. Of these, 37,477 temporal retinal images were utilized, with the dataset split into training (n = 25,982, 69.33%), validation (n = 4,006, 10.69%), and an independent test set (n = 7,489, 19.98%). The algorithm consists of a binary classifier that distinguishes between the presence of ROP (Stages 1-3) and the absence of ROP. The image labels were retrieved from the daily registers of the tele-ROP program and consist of per-eye diagnoses provided by trained ROP graders based on all images captured during the screening session. Infants requiring treatment and a proportion of those not requiring urgent referral had an additional confirmatory diagnosis from an ROP specialist. Results: Of the 7,489 temporal images analyzed in the test set, 2,249 (30.0%) showed the presence of ROP. The sensitivity and specificity for detecting ROP were 91.46% (95% CI: 90.23%-92.59%) and 91.22% (95% CI: 90.42%-91.97%), respectively, while the positive predictive value (PPV) was 81.72% (95% CI: 80.37%-83.00%), the negative predictive value (NPV) was 96.14% (95% CI: 95.60%-96.61%) and the AUROC was 0.970. Conclusion: The novel ROP screening algorithm demonstrated high sensitivity and specificity in detecting the presence of ROP. A prospective clinical validation on a real-world tele-ROP platform is under consideration; the algorithm has the potential to lower the number of screening sessions required to be conducted by a specialist for a high-risk preterm infant, thus significantly improving workflow efficiency.

9.
Int J Biol Macromol ; 253(Pt 6): 127262, 2023 Dec 31.
Article in English | MEDLINE | ID: mdl-37813216

ABSTRACT

In this study, we present nanocomposites of bioactive glass (BG) and hyaluronic acid (HA) (nano-BGHA) for effective delivery of HA to skin and bone. The nanocomposites were synthesized through a bio-inspired route, a modification of the traditional Stöber synthesis that avoids using ethanol, ammonia, synthetic surfactants, or high-temperature calcination. This environmentally friendly, bio-inspired route yielded mesoporous nanocomposites with an average hydrodynamic radius of ∼190 nm and an average net surface charge of ∼-21 mV. Most of the nanocomposites are amorphous and bioactive in nature, with over 70% cellular viability for skin and bone cell lines even at high concentrations, along with high cellular uptake (90-100%). Furthermore, the nanocomposites could penetrate skin cells in a transwell set-up and an artificial human skin membrane (Strat-M®), representing an attractive strategy for the delivery of HA to the skin. The purpose of the study is to develop nanocomposites of HA and BG with potential applications in non-invasive treatments that require the delivery of high-molecular-weight HA, such as osteoarthritis, sports injury treatments, eye drops, wound healing, and, if further investigated, some anticancer treatments. The presence of BG further extends their range to bone-related applications. Additionally, the nanocomposites may have cosmeceutical applications where HA is abundantly used, for instance in moisturizers, dermal fillers, shampoos, and anti-wrinkle creams.


Subject(s)
Hyaluronic Acid, Nanocomposites, Humans, Skin, Bones, Wound Healing, Artificial Membranes, Glass
10.
Ophthalmic Res ; 66(1): 1286-1292, 2023.
Article in English | MEDLINE | ID: mdl-37757777

ABSTRACT

INTRODUCTION: Numerous studies have demonstrated the use of artificial intelligence (AI) for early detection of referable diabetic retinopathy (RDR). A direct comparison of the multiple automated diabetic retinopathy (DR) image assessment software systems (ARIAs) available is, however, challenging. We retrospectively compared the performance of two modern ARIAs, IDx-DR and Medios AI. METHODS: In this retrospective comparative study, retinal images with sufficient image quality were run on both ARIAs. The images were captured from 811 consecutive patients with diabetes visiting diabetic clinics in Poland. For each patient, four non-mydriatic 45° field-of-view images were captured using a Topcon NW400 camera, i.e., two sets of one optic disc-centered and one macula-centered image. Images were manually graded for severity of DR as no DR, any DR (mild non-proliferative diabetic retinopathy [NPDR] or more severe disease), RDR (moderate NPDR or more severe disease and/or clinically significant diabetic macular edema [CSDME]), or sight-threatening DR (severe NPDR or more severe disease and/or CSDME) by certified graders. The ARIA output was compared with the manual consensus image grading (reference standard). RESULTS: Of the 807 patients analysed, consensus grading showed no evidence of DR in 543 (67%). Any DR was seen in 264 (33%) patients, of whom 174 (22%) had RDR and 41 (5%) had sight-threatening DR. The sensitivity of detecting RDR against reference standard grading was 95% (95% CI: 91%, 98%) and the specificity was 80% (95% CI: 77%, 83%) for Medios AI; the corresponding values for IDx-DR were 99% (95% CI: 96%, 100%) and 68% (95% CI: 64%, 72%), respectively. CONCLUSION: Both ARIAs achieved satisfactory accuracy, with few false negatives. Although false-positive results generate additional costs and workload, missed cases raise the most concern whenever automated screening is debated.


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Retrospective Studies, Mass Screening/methods, Macular Edema/diagnosis, Software
11.
ACS Biomater Sci Eng ; 9(8): 4567-4572, 2023 08 14.
Article in English | MEDLINE | ID: mdl-37523785

ABSTRACT

We here introduce a novel bioreducible polymer-based gene delivery platform enabling widespread transgene expression in multiple brain regions with therapeutic relevance following intracranial convection-enhanced delivery. Our bioreducible nanoparticles provide markedly enhanced gene delivery efficacy in vitro and in vivo compared to nonbiodegradable nanoparticles primarily due to the ability to release gene payloads preferentially inside cells. Remarkably, our platform exhibits competitive gene delivery efficacy in a neuron-rich brain region compared to a viral vector under previous and current clinical investigations with demonstrated positive outcomes. Thus, our platform may serve as an attractive alternative for the intracranial gene therapy of neurological disorders.


Subject(s)
Gene Transfer Techniques, Polymers, Polymers/metabolism, Gene Therapy, Brain/metabolism
12.
Indian J Otolaryngol Head Neck Surg ; 75(2): 433-439, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37275092

ABSTRACT

Accurate classification of laryngeal cancer is a critical step for diagnosis and appropriate treatment. Radiomics is a rapidly advancing field in medical image processing that uses various algorithms to extract many quantitative features from radiological images. The high-dimensional features extracted tend to cause overfitting and increase the complexity of the classification model; feature selection therefore plays an integral part in selecting relevant features for the classification problem. In this study, we explore the ability of radiomic features extracted from Computed Tomography (CT) images of laryngeal cancer to predict the histopathological grade and T stage of the tumour. Working with a pilot dataset of 20 images, an experienced radiologist carefully annotated the supraglottic lesions in the three-dimensional plane. Over 280 radiomic features quantifying shape, intensity and texture were extracted from each image. Machine learning classifiers were built and tested to predict the stage and grade of the malignant tumour based on the calculated radiomic features, in order to investigate whether radiomic features extracted from CT images can be used for the classification of laryngeal tumours. Out of the 280 features extracted from every image in the dataset, 24 were found to be potential classifiers of laryngeal tumour stage and 12 were good classifiers of the histopathological grade of the laryngeal tumour. The novelty of this paper lies in the ability to create these classifiers before the surgical biopsy procedure, giving the clinician valuable, timely information.
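A pipeline of the kind this abstract describes, selecting a small subset of discriminative radiomic features and training a classifier on a 20-image pilot set, might look like the sketch below. The synthetic feature matrix, the univariate selector, the linear SVM and the leave-one-out evaluation are illustrative assumptions; the study's actual feature extractor, selector and classifier are not specified in the abstract.

```python
# Minimal sketch of a radiomics classification pipeline: feature selection
# followed by a classifier, evaluated with leave-one-out cross-validation
# (appropriate for a 20-image pilot set). The feature matrix here is synthetic;
# in practice it would come from a radiomics extractor applied to the annotated
# CT lesions (~280 shape/intensity/texture features per image).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_images, n_features = 20, 280
X = rng.normal(size=(n_images, n_features))     # stand-in radiomic features
y = rng.integers(0, 2, size=n_images)           # stand-in binary T-stage labels

# Feature selection lives inside the pipeline so it is refit on every CV fold,
# avoiding information leakage from the held-out image.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=24),    # keep the 24 most discriminative features
    SVC(kernel="linear"),
)

scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```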

13.
Article in English | MEDLINE | ID: mdl-37362116

ABSTRACT

This letter is in response to the article "Enhancing India's Health Care during COVID Era: Role of Artificial Intelligence and Algorithms". While the integration of AI has the potential to improve patient outcomes and reduce the workload of healthcare professionals, there is a need for significant training and upskilling of healthcare providers. There are ethical and privacy concerns related to the use of AI in healthcare, which must be accompanied by rigorous guidelines. One solution to the overburdened healthcare systems in India is the use of new language generation models like ChatGPT to assist healthcare workers in writing discharge summaries. By using these technologies responsibly, we can improve healthcare outcomes and alleviate the burden on overworked healthcare professionals.

14.
Article in English | MEDLINE | ID: mdl-37362133

ABSTRACT

This study investigates public sentiment on laryngeal cancer expressed in tweets posted during 2022, analyzed using machine learning. A novel dataset was created for the purpose of this study by scraping all tweets from 1st Jan 2022 onwards that included the hashtags #throatcancer, #laryngealcancer, #supraglotticcancer, #glotticcancer, and #subglotticcancer in their text. After undergoing a fourfold data cleaning process, the tweets were analyzed using natural language processing and sentiment analysis techniques to classify them into positive, negative, or neutral categories and to identify common themes and topics related to laryngeal cancer. The study analyzed a corpus of 733 tweets related to laryngeal cancer. The sentiment analysis revealed that 53% of the tweets were neutral, 34% were positive, and 13% were negative. The most common themes identified in the tweets were treatment and therapy, risk factors, symptoms and diagnosis, prevention and awareness, and emotional impact. This study highlights the potential of social media platforms like Twitter as a valuable source of real-time, patient-generated data that can inform healthcare research and practice. Our findings suggest that, while Twitter is a popular platform, the limited number of tweets related to laryngeal cancer indicates that a better strategy could be developed for online communication among netizens regarding awareness of laryngeal cancer.
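The abstract does not name the sentiment analysis tool used; as one plausible illustration, the sketch below classifies tweets as positive, negative or neutral with NLTK's VADER analyser and the conventional ±0.05 compound-score thresholds. The sample tweets are made up.

```python
# Minimal sketch of classifying tweets into positive/negative/neutral categories.
# VADER is assumed here purely for illustration; the study's actual method is
# not specified in the abstract.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyser = SentimentIntensityAnalyzer()

def classify(tweet: str) -> str:
    # Conventional thresholds on VADER's compound score.
    score = analyser.polarity_scores(tweet)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

tweets = [  # hypothetical examples
    "Grateful my #laryngealcancer treatment is finally over!",
    "Raising awareness about #throatcancer risk factors this month.",
    "Devastated by another late #glotticcancer diagnosis.",
]
for t in tweets:
    print(classify(t), "-", t)
```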

15.
J Glaucoma ; 32(4): 280-286, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36730188

ABSTRACT

PRCIS: The offline artificial intelligence (AI) on a smartphone-based fundus camera shows good agreement and correlation with the vertical cup-to-disc ratio (vCDR) from spectral-domain optical coherence tomography (SD-OCT) and with manual grading by experts. PURPOSE: To assess the agreement of vCDR measured by a new AI software from optic disc images obtained using a validated smartphone-based imaging device with SD-OCT vCDR measurements and with manual grading by experts on a stereoscopic fundus camera. METHODS: In a prospective, cross-sectional study, participants above 18 years (glaucoma and normal) underwent a dilated fundus evaluation, followed by optic disc imaging including a 42-degree monoscopic disc-centered image (Remidio NM-FOP-10), a 30-degree stereoscopic disc-centered image (Kowa nonmyd WX-3D desktop fundus camera), and disc analysis (Cirrus SD-OCT). Remidio FOP images were analyzed for vCDR using the new AI software, and Kowa stereoscopic images were manually graded by 3 fellowship-trained glaucoma specialists. RESULTS: We included 473 eyes of 244 participants. The vCDR values from the new AI software showed strong agreement with SD-OCT measurements [95% limits of agreement (LoA) = -0.13 to 0.16]. The agreement with SD-OCT was marginally better in eyes with higher vCDR (95% LoA = -0.15 to 0.12 for vCDR > 0.8). The intraclass correlation coefficient (ICC) was 0.90 (95% CI, 0.88-0.91). The vCDR values from the AI software also showed a good correlation with manual segmentation by experts (ICC = 0.89; 95% CI, 0.87-0.91) on stereoscopic images (95% LoA = -0.18 to 0.11), with better agreement for eyes with vCDR > 0.8 (LoA = -0.12 to 0.08). CONCLUSIONS: The new AI software vCDR measurements had excellent agreement and correlation with SD-OCT and manual grading. The ability of the Medios AI to work offline, without requiring cloud-based inferencing, is an added advantage.
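The agreement statistics reported above (95% limits of agreement and an intraclass correlation coefficient) can be reproduced in outline on simulated AI-versus-SD-OCT vCDR pairs; the ICC(2,1) form shown is an assumption, since the abstract does not state which ICC variant was used.

```python
# Minimal sketch of Bland-Altman 95% limits of agreement and a two-way
# intraclass correlation coefficient, computed on made-up vCDR pairs.
import numpy as np

rng = np.random.default_rng(1)
oct_vcdr = rng.uniform(0.3, 0.9, size=50)
ai_vcdr = oct_vcdr + rng.normal(0.0, 0.05, size=50)    # simulated AI readings

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = ai_vcdr - oct_vcdr
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.3f}, 95% LoA = {loa[0]:.3f} to {loa[1]:.3f}")

# ICC(2,1), absolute agreement, from a two-way ANOVA decomposition.
ratings = np.column_stack([ai_vcdr, oct_vcdr])          # n subjects x k raters
n, k = ratings.shape
subject_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)
grand_mean = ratings.mean()
ms_subjects = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
ms_raters = n * np.sum((rater_means - grand_mean) ** 2) / (k - 1)
ss_error = np.sum((ratings - subject_means[:, None] - rater_means[None, :] + grand_mean) ** 2)
ms_error = ss_error / ((n - 1) * (k - 1))
icc = (ms_subjects - ms_error) / (
    ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
)
print(f"ICC(2,1) = {icc:.3f}")
```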


Subject(s)
Glaucoma, Optic Disc, Optic Nerve Diseases, Humans, Optical Coherence Tomography/methods, Artificial Intelligence, Prospective Studies, Cross-Sectional Studies, Optic Nerve Diseases/diagnosis, Intraocular Pressure, Glaucoma/diagnosis, Software, Photography/methods, Reproducibility of Results
16.
Clin Ophthalmol ; 16: 4281-4291, 2022.
Article in English | MEDLINE | ID: mdl-36578668

ABSTRACT

Purpose: The InstaRef R20 is a handheld, affordable autorefractometer based on Shack-Hartmann aberrometry. The study's objective was to compare the InstaRef R20's performance in identifying refractive error in a paediatric population with that of standard subjective and objective refraction under both pre- and post-cycloplegic conditions. Methods: Refraction was performed using 1) the standard clinical procedure consisting of retinoscopy followed by subjective refraction (SR) under pre- and post-cycloplegic conditions and 2) the InstaRef R20. Agreement between both methods was evaluated using Bland-Altman analysis. The repeatability of the device, based on three measurements in a subgroup of 20 children, was assessed. Results: Refractive error was measured in 132 children (mean age 12.31 ± 3 years). The spherical equivalent (M) and cylindrical components (J0 and J45) from the device showed clinically acceptable differences (within ±0.50 D) and acceptable agreement compared with standard pre- and post-cycloplegic manual retinoscopy and subjective refraction (SR). The device agreed within ±0.50 D of retinoscopy in 67% of eyes for M, 78% for J0 and 80% for J45, and within ±0.50 D of SR in 70% for M and 77% for the cylindrical components. Conclusion: The InstaRef R20 has acceptable agreement with standard retinoscopy in a paediatric population. The measurements from this device can be used as a starting point for subjective acceptance. Being simple to use, portable, reliable and affordable, the device has the potential for large-scale community-based refractive error detection.
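The spherical equivalent M and cylindrical components J0 and J45 that both refraction studies in this list compare are the standard power-vector transforms of a sphere/cylinder/axis refraction. The sketch below applies them to made-up retinoscopy and device readings purely for illustration.

```python
# Minimal sketch of power-vector notation: a sphere/cylinder x axis refraction
# converted to spherical equivalent M and cross-cylinder components J0 and J45.
# The sample refractions are hypothetical.
import math

def to_power_vector(sphere: float, cylinder: float, axis_deg: float):
    """Convert S/C x axis (minus-cylinder form) to (M, J0, J45) in dioptres."""
    m = sphere + cylinder / 2.0
    j0 = -(cylinder / 2.0) * math.cos(math.radians(2.0 * axis_deg))
    j45 = -(cylinder / 2.0) * math.sin(math.radians(2.0 * axis_deg))
    return m, j0, j45

retinoscopy = to_power_vector(-1.50, -0.75, 180)   # hypothetical retinoscopy result
device = to_power_vector(-1.25, -1.00, 175)        # hypothetical InstaRef R20 result

# Agreement "within +/-0.50 D" is then judged component-wise, as in the abstract.
for name, (a, b) in zip(("M", "J0", "J45"), zip(retinoscopy, device)):
    print(f"{name}: retinoscopy={a:+.2f} D, device={b:+.2f} D, diff={a-b:+.2f} D")
```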

17.
BMC Ophthalmol ; 22(1): 498, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536321

ABSTRACT

BACKGROUND: Refraction is one of the key components of a comprehensive eye examination. Autorefractometers that are reliable and affordable can be beneficial, especially in low-resource community settings. The study aimed to validate the accuracy of a novel wavefront aberrometry-based autorefractometer, the Instaref R20, against an open-field system and subjective refraction in an adult population. METHODS: All participants underwent a comprehensive eye examination including objective refraction, subjective acceptance, and anterior and posterior segment evaluation. Refraction was performed without cycloplegia using the WAM5500 open-field autorefractometer (OFAR) and the Instaref R20, the study device. Agreement between both methods was evaluated using Bland-Altman analysis. The repeatability of the device, based on three measurements in a subgroup of 40 adults, was assessed. RESULTS: Refractive error was measured in 132 participants (mean age 30.53 ± 9.36 years; 58.3% female). The paired mean difference of the refraction values of the study device against OFAR was -0.13 D for M, -0.0002 D for J0 and -0.13 D for J45, and against subjective refraction (SR) was -0.09 D (M), 0.06 D (J0) and 0.03 D (J45). The device agreed within ±0.50 D of OFAR in 78% of eyes for M, 79% for J0 and 78% for J45, and within ±0.50 D of SR values for M (84%), J0 (86%) and J45 (89%). CONCLUSION: This study found good agreement between the measurements obtained with the portable autorefractor and both the open-field refractometer and SR values. The device has potential application in population-based community vision screening programs for refractive error correction without the need for highly trained personnel.


Subject(s)
Refractive Errors, Vision Screening, Humans, Adult, Female, Young Adult, Male, Prospective Studies, Aberrometry, Reproducibility of Results, Ocular Refraction, Refractive Errors/diagnosis, Vision Tests, Vision Screening/methods
18.
Clin Ophthalmol ; 16: 2659-2667, 2022.
Article in English | MEDLINE | ID: mdl-36003071

ABSTRACT

Purpose: To evaluate the performance of a validated artificial intelligence (AI) algorithm, developed for a smartphone-based camera, on images captured using a standard desktop fundus camera to screen for diabetic retinopathy (DR). Participants: Subjects with established diabetes mellitus. Methods: Images captured on a desktop fundus camera (Topcon TRC-50DX, Japan) for a previous study of 135 consecutive patients (233 eyes) with established diabetes mellitus, with or without DR, were analysed by the AI algorithm. The performance of the AI algorithm in detecting any DR, referable DR (RDR, i.e., worse than mild non-proliferative diabetic retinopathy (NPDR) and/or diabetic macular edema (DME)) and sight-threatening DR (STDR, i.e., severe NPDR or worse and/or DME) was assessed based on comparisons against both image-based consensus grades by two fellowship-trained vitreoretinal specialists and clinical examination. Results: The sensitivity was 98.3% (95% CI 96%, 100%) and the specificity 83.7% (95% CI 73%, 94%) for RDR against image grading. The specificity for RDR decreased to 65.2% (95% CI 53.7%, 76.6%) and the sensitivity marginally increased to 100% (95% CI 100%, 100%) when compared against clinical examination. The sensitivity for detection of any DR was 97.6% (95% CI 95%, 100%) against both image-based consensus grading and clinical examination. The specificity for any DR detection was 90.9% (95% CI 82.3%, 99.4%) against image grading and 88.9% (95% CI 79.7%, 98.1%) against clinical examination. The sensitivity for STDR was 99.0% (95% CI 96%, 100%) against image grading and 100% (95% CI 100%, 100%) against clinical examination. Conclusion: The AI algorithm could screen for RDR and any DR with robust performance on images captured on a desktop fundus camera when compared with image grading, despite being previously optimized for a smartphone-based camera.

19.
Biomed Eng Lett ; 12(2): 175-183, 2022 May.
Article in English | MEDLINE | ID: mdl-35529346

ABSTRACT

The larynx, or voice box, is a common site of head and neck cancers, yet automated segmentation of the larynx has received very little attention. Segmentation of organs is an essential step in cancer treatment planning. Computed tomography (CT) scans are routinely used to assess the extent of tumor spread in the head and neck as they are fast to acquire and tolerant of some movement. This paper reviews various automated detection and segmentation methods used for the larynx on CT images. Image registration and deep learning approaches to segmenting the laryngeal anatomy are compared, highlighting their strengths and shortcomings. A list of available annotated laryngeal CT datasets is compiled to encourage further research, and commercial software currently available for larynx contouring is briefly described. We conclude that the lack of standardisation of larynx boundaries and the complexity of this relatively small structure make automated segmentation of the larynx on CT images a challenge. Reliable computer-aided intervention in the contouring and segmentation process will help clinicians easily verify their findings and look for oversights in diagnosis. This review is useful for research applying artificial intelligence to head and neck cancer, specifically work dealing with the segmentation of laryngeal anatomy. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-022-00221-3.

20.
Transl Vis Sci Technol ; 10(12): 21, 2021 10 04.
Article in English | MEDLINE | ID: mdl-34661624

ABSTRACT

Purpose: Widefield imaging can detect signs of retinal pathology extending beyond the posterior pole and is currently moving to the forefront of posterior segment imaging. We report a novel, smartphone-based, telemedicine-enabled, mydriatic, widefield retinal imaging device with autofocus and autocapture capabilities to be used by non-specialist operators. Methods: The Remidio Vistaro uses an annular illumination design without cross-polarizers to eliminate Purkinje reflexes. The measured resolution using the US Air Force target test was 64 line pairs (lp)/mm in the center, 57 lp/mm in the middle, and 45 lp/mm in the periphery of a single-shot retinal image. An autocapture algorithm was developed to capture images automatically upon reaching the correct working distance. The field of view (FOV) was validated using both model and real eyes. A pilot study was conducted to objectively assess image quality. The FOVs of montaged images from the Vistaro were compared with regulatory-approved widefield and ultra-widefield devices. Results: The FOV of the Vistaro was found to be approximately 65° in one shot. Automatic image capture was achieved in 80% of patient examinations within an average of 10 to 15 seconds. Consensus grading of image quality among three graders showed that 91.6% of the images were clinically useful. A two-field montage on the Vistaro was shown to exceed the cumulative FOV of a seven-field Early Treatment Diabetic Retinopathy Study image. Conclusions: A novel, smartphone-based, portable, mydriatic, widefield imaging device can view the retina beyond the posterior pole with a FOV of 65° in one shot. Translational Relevance: Smartphone-based widefield imaging can be widely used to screen for retinal pathologies beyond the posterior pole.


Subject(s)
Ophthalmology, Telemedicine, Algorithms, Humans, Photography, Pilot Projects, Smartphone