Results 1 - 20 of 5,271
1.
Vestn Oftalmol ; 140(4): 60-67, 2024.
Article in Russian | MEDLINE | ID: mdl-39254391

ABSTRACT

Early detection of diabetic retinopathy (DR) is an urgent ophthalmological problem in Russia and globally. PURPOSE: This study assesses the prevalence of asymptomatic retinopathy and attempts to identify risk groups for its development in patients with type 1 and type 2 diabetes mellitus (T1DM and T2DM). MATERIAL AND METHODS: The study involved clinics in 5 cities of the Russian Federation and included 367 patients with DM (34.88% men, 65.12% women) aged 50.88±20.55 years. 34.88% of patients had T1DM and 65.12% had T2DM; the average duration of the disease was 9.02±7.22 years. 58.31% of patients had a history of arterial hypertension, and 13.08% had a history of smoking. The primary endpoint was the frequency of detection of diabetic fundus changes in patients with T1DM and T2DM overall; the secondary endpoints were the same rates assessed separately for T1DM and T2DM, and for T2DM patients stratified by disease duration. The exploratory endpoint was the influence of various factors on the development of DR. The patients underwent visometry (modified ETDRS chart), biomicroscopy, and mydriatic fundus photography according to the "2 fields" protocol. RESULTS: The overall detection rate of DR was 12.26%; it was observed primarily in patients with T2DM (13.81%), in women (9.26%), and in both eyes (8.17%). Among patients with DR, 26 (19.55%) had a glycated hemoglobin (HbA1c) level exceeding 7.5% (p=0.002), indicating a direct relationship between this indicator and the incidence of DR. Logistic regression analysis showed that diabetes duration of more than 10 years has a statistically significant effect on the development of DR. In the modified odds-estimation model, the likelihood of developing DR was increased by DM duration of more than 10 years, elevated blood pressure, and HbA1c level >7.5%.
CONCLUSION: The obtained results, some of which will be presented in subsequent publications, highlight the effectiveness of using two-field mydriatic fundus photography as a screening for DR.
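The odds-model reasoning above can be sketched as follows. The intercept and coefficients are hypothetical placeholders — the abstract reports only which factors were significant, not the fitted values — so this illustrates the form of such a logistic model, not the study's result:

```python
import math

def dr_odds(duration_gt_10y: bool, hypertension: bool, hba1c_gt_7_5: bool) -> float:
    """Predicted odds of diabetic retinopathy from a logistic model.

    The coefficients below are HYPOTHETICAL log-odds values chosen only
    to illustrate the model form; the paper does not report fitted values.
    """
    b0, b_dur, b_htn, b_hba1c = -3.0, 1.1, 0.6, 0.9
    logit = b0 + b_dur * duration_gt_10y + b_htn * hypertension + b_hba1c * hba1c_gt_7_5
    return math.exp(logit)  # odds = e^logit; probability = odds / (1 + odds)

# A patient with all three risk factors has higher predicted odds than one with none.
high = dr_odds(True, True, True)
low = dr_odds(False, False, False)
```

Each reported risk factor simply adds its coefficient to the log-odds, so the effect of combinations is multiplicative on the odds scale.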


Subject(s)
Diabetes Mellitus, Type 2; Diabetic Retinopathy; Fundus Oculi; Photography; Humans; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/epidemiology; Female; Male; Middle Aged; Russia/epidemiology; Prevalence; Photography/methods; Adult; Diabetes Mellitus, Type 2/epidemiology; Diabetes Mellitus, Type 2/complications; Diabetes Mellitus, Type 2/diagnosis; Aged; Risk Factors; Diabetes Mellitus, Type 1/complications; Diabetes Mellitus, Type 1/epidemiology; Diabetes Mellitus, Type 1/diagnosis; Early Diagnosis
2.
Ger Med Sci ; 22: Doc07, 2024.
Article in English | MEDLINE | ID: mdl-39224664

ABSTRACT

Objective: The study aimed to compare the subjective method of estimating linear breast dimensions with the objective method. Methods: The reproducibility and accuracy of subjective estimates of linear breast dimensions during a simplified breast shape analysis were examined. Four linear breast dimensions were evaluated by subjective estimation: the distance from the sternal notch to the nipple, the distance from the nipple to the inframammary fold, the distance from the nipple to the midline, and the under-breast width. Images from 100 women with natural breasts and no history of breast surgery were reviewed by two examiners three times each. The cases were obtained from a large database of breast images captured using the Vectra Camera System (Canfield Scientific Inc., USA). The subjective data were then compared with the objective linear data produced by the Vectra Camera System's automated analysis. Statistical evaluation was conducted between the three repeated estimates of each examiner, between the two examiners, and between the objective and subjective data. Results: The intra-individual variation of the three subjective estimates was significantly greater for one examiner than for the other. This trend was consistent across all eight parameters (four dimensions per side) in the majority of the comparisons of standard deviations and coefficients of variation, and the differences were significant in 14 out of 16 comparisons (p<0.05). Conversely, in the comparison between the subjective and objective data, one examiner's estimates were closer to the measurements than the other's. In contrast to the reproducibility findings, the accuracy assessment revealed that the examiner with lower reproducibility overall showed better accuracy relative to the objective data. The overall differences were inconsistent, with some being positive and others negative.
Regarding the distances from the sternal notch to the nipple and breast width, both examiners underestimated the values. However, the deviations were at different levels, particularly when considering the objective data from the Vectra Camera System as the gold standard data for comparison. Regarding the distance from the nipple to the inframammary fold, one examiner underestimated the distance, while the other overestimated it. An opposite trend was noted for the distance from the nipple to the midline. There were no differences in the estimates between the right and left sides of the breasts. The correlations between the measured and estimated distances were positive: as the objective distances increased, the subjective distances also increased. In all cases, the correlations were significant. However, the correlation for the breast width was notably weaker than that for the other distances. Conclusions: The error assessment of the subjective method reveals that it varies significantly and unsystematically between examiners. This is true when assessing the reproducibility as well as the accuracy of the method in comparison to the objective data obtained with an automated system.


Subject(s)
Breast; Humans; Female; Breast/anatomy & histology; Breast/diagnostic imaging; Reproducibility of Results; Adult; Middle Aged; Observer Variation; Aged; Young Adult; Photography/methods
3.
BMC Ophthalmol ; 24(1): 387, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227901

ABSTRACT

BACKGROUND: To analyse and compare the grading of diabetic retinopathy (DR) severity level using standard 35° ETDRS 7-field photography and the CLARUS™ 500 ultra-widefield imaging system. METHODS: A cross-sectional analysis of retinal images of patients with type 2 diabetes (n = 160 eyes) was performed for this study. All patients underwent 7-field colour fundus photography (CFP) at 35° on a standard Topcon TRC-50DX® camera, and ultra-widefield (UWF) imaging at 200° on a CLARUS™ 500 (ZEISS, Dublin, CA, USA) by an automatic montage of two 133° images (nasal and temporal). The 35° 7-field photographs were graded by two graders according to the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol. For CLARUS UWF images, a prototype 7-field grid was applied using the CLARUS review software, and the same ETDRS grading procedures were performed inside that area only. DR severity level was compared between these two methods to evaluate the agreement between the imaging techniques. RESULTS: Images of 160 eyes from 83 diabetic patients were considered for analysis. According to the 35° ETDRS 7-field images, 22 eyes were graded DR severity level 10-20, 64 eyes level 35, 41 eyes level 43, 21 eyes level 47, 7 eyes level 53, and 5 eyes level 61. The same DR severity level was reached with CLARUS 500 UWF images in 92 eyes (57%), showing almost perfect agreement (κ > 0.80) with the 7-field 35° technique. Fifty-seven eyes (36%) showed a higher DR level with CLARUS UWF images, mostly due to better visualization of haemorrhages and a higher detection rate of intraretinal microvascular abnormalities (IRMA). Only 11 eyes (7%) showed a lower severity level with the CLARUS UWF system, due to artifacts or media opacities that precluded correct evaluation of DR lesions. CONCLUSIONS: The UWF CLARUS 500 device showed almost perfect agreement with standard 35° 7-field images across all ETDRS severity levels.
Moreover, CLARUS images showed an increased ability to detect haemorrhages and IRMA, allowing finer evaluation of lesions and demonstrating that a UWF photograph can be used to grade ETDRS severity level with better visualization than standard 7-field images. TRIAL REGISTRATION: Approved by the AIBILI - Association for Innovation and Biomedical Research on Light and Image Ethics Committee for Health under number CEC/009/17-EYEMARKER.
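Inter-method agreement of the kind reported above (κ > 0.80) can be quantified with Cohen's kappa; the abstract does not state which kappa statistic was used, so the sketch below — on toy per-eye ETDRS levels — is an illustrative assumption:

```python
def cohens_kappa(grades_a, grades_b):
    """Chance-corrected agreement between two methods on paired labels
    (e.g. per-eye ETDRS severity levels from 7-field vs. UWF images)."""
    n = len(grades_a)
    labels = sorted(set(grades_a) | set(grades_b))
    p_obs = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    p_exp = sum((grades_a.count(l) / n) * (grades_b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Identical gradings give kappa = 1; values above 0.80 are conventionally
# read as "almost perfect" agreement (Landis & Koch).
print(cohens_kappa([35, 43, 35, 47], [35, 43, 35, 47]))  # → 1.0
```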


Subject(s)
Diabetes Mellitus, Type 2; Diabetic Retinopathy; Photography; Severity of Illness Index; Humans; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/diagnostic imaging; Cross-Sectional Studies; Female; Male; Middle Aged; Photography/methods; Aged; Diabetes Mellitus, Type 2/complications; Fundus Oculi; Diagnostic Techniques, Ophthalmological; Adult; Reproducibility of Results
4.
J Biomed Inform ; 157: 104722, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39244181

ABSTRACT

OBJECTIVE: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, a widely available device, have recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs. METHODS: We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs that were obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets that were collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. The accuracy and macro-average area under the curve (macro-AUC) were utilized to evaluate the performance of models. RESULTS: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. 
The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models. CONCLUSIONS: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
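The core of a cosine nearest-centroid decision rule can be sketched in a few lines. Real CNCML operates on learned image embeddings with class centroids averaged from the few labelled smartphone photographs per class; the 2-D vectors and class names here are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid has the highest cosine similarity."""
    return max(centroids, key=lambda c: cosine(centroids[c], x))

# Toy 2-D "embeddings"; in CNCML these would be high-dimensional features
# from a network pre-trained on slit-lamp photographs.
centroids = {"keratitis": [1.0, 0.1], "normal": [0.1, 1.0]}
print(nearest_centroid_predict(centroids, [0.9, 0.2]))  # → keratitis
```

Because only the per-class centroids are estimated from the scarce smartphone data, this metric-based rule needs far fewer training samples than fitting a full classifier head.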


Subject(s)
Deep Learning; Keratitis; Smartphone; Humans; Keratitis/diagnosis; Algorithms; Photography/methods; Mass Screening/methods; Mass Screening/instrumentation
5.
Head Face Med ; 20(1): 45, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223562

ABSTRACT

BACKGROUND: To support dentists with limited experience, this study trained and compared six convolutional neural networks to detect crossbites and classify non-crossbite, frontal, and lateral crossbites using 2D intraoral photographs. METHODS: Based on 676 photographs from 311 orthodontic patients, six convolutional neural network models were trained and compared to classify (1) non-crossbite vs. crossbite and (2) non-crossbite vs. lateral crossbite vs. frontal crossbite. The trained models comprised DenseNet, EfficientNet, MobileNet, ResNet18, ResNet50, and Xception. FINDINGS: Among the models, Xception showed the highest accuracy (98.57%) in the test dataset for classifying non-crossbite vs. crossbite images. When additionally distinguishing between lateral and frontal crossbites, average accuracy decreased, with the DenseNet architecture achieving the highest accuracy among the models (91.43%) in the test dataset. CONCLUSIONS: Convolutional neural networks show high potential for processing clinical photographs and detecting crossbites. This study provides initial insights into how deep learning models can be used for orthodontic diagnosis of malocclusions based on intraoral 2D photographs.


Subject(s)
Deep Learning; Malocclusion; Neural Networks, Computer; Humans; Malocclusion/diagnostic imaging; Malocclusion/diagnosis; Female; Male; Photography, Dental/methods; Photography/methods; Adolescent
6.
JAMA Netw Open ; 7(8): e2425124, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39106068

ABSTRACT

IMPORTANCE: Identifying pediatric eye diseases at an early stage is a worldwide issue. Traditional screening procedures depend on hospitals and ophthalmologists, which are expensive and time-consuming. Using artificial intelligence (AI) to assess children's eye conditions from mobile photographs could facilitate convenient and early identification of eye disorders in a home setting. OBJECTIVE: To develop an AI model to identify myopia, strabismus, and ptosis using mobile photographs. DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study was conducted at the Department of Ophthalmology of Shanghai Ninth People's Hospital from October 1, 2022, to September 30, 2023, and included children who were diagnosed with myopia, strabismus, or ptosis. MAIN OUTCOMES AND MEASURES: A deep learning-based model was developed to identify myopia, strabismus, and ptosis. The performance of the model was assessed using sensitivity, specificity, accuracy, the area under the curve (AUC), positive predictive values (PPV), negative predictive values (NPV), positive likelihood ratios (P-LR), negative likelihood ratios (N-LR), and the F1-score. GradCAM++ was utilized to visually and analytically assess the impact of each region on the model. A sex subgroup analysis and an age subgroup analysis were performed to validate the model's generalizability. RESULTS: A total of 1419 images obtained from 476 patients (225 female [47.27%]; 299 [62.82%] aged between 6 and 12 years) were used to build the model. Among them, 946 monocular images were used to identify myopia and ptosis, and 473 binocular images were used to identify strabismus. The model demonstrated good sensitivity in detecting myopia (0.84 [95% CI, 0.82-0.87]), strabismus (0.73 [95% CI, 0.70-0.77]), and ptosis (0.85 [95% CI, 0.82-0.87]). The model showed comparable performance in identifying eye disorders in both female and male children during sex subgroup analysis. 
There were differences in identifying eye disorders among different age subgroups. CONCLUSIONS AND RELEVANCE: In this cross-sectional study, the AI model demonstrated strong performance in accurately identifying myopia, strabismus, and ptosis using only smartphone images. These results suggest that such a model could facilitate the early detection of pediatric eye diseases in a convenient manner at home.
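All of the reported screening measures derive from a single 2×2 confusion table; a minimal sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening metrics from a 2x2 confusion table: sensitivity, specificity,
    predictive values, likelihood ratios, and F1, as reported in the study."""
    sens = tp / (tp + fn)          # true-positive rate
    spec = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": ppv,
        "NPV": npv,
        "LR+": sens / (1 - spec),  # positive likelihood ratio
        "LR-": (1 - sens) / spec,  # negative likelihood ratio
        "F1": 2 * ppv * sens / (ppv + sens),
    }

# Hypothetical counts for one disease class, for illustration only.
metrics = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
```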


Subject(s)
Artificial Intelligence; Early Diagnosis; Photography; Humans; Female; Male; Cross-Sectional Studies; Child; Child, Preschool; Photography/methods; Myopia/diagnosis; Deep Learning; Strabismus/diagnosis; Blepharoptosis/diagnosis; Sensitivity and Specificity; China/epidemiology; Eye Diseases/diagnosis; Adolescent
7.
Nurs Health Sci ; 26(3): e13155, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39164006

ABSTRACT

Physical therapy students must learn about heart transplantation and how to care for these patients' emotions and needs. The study aimed to compare the effectiveness of a narrative photography (NP) program and a traditional learning (TL) program on physical therapy students' knowledge, satisfaction, empathy, and moral sensitivity. A two-armed, assessor-blinded randomized controlled trial was carried out. One hundred and seventeen physical therapy students participated and were divided into two groups: (i) NP group (n = 56) and (ii) TL group (n = 61). At the end of the program, the NP group's knowledge had increased compared with the TL group (p = 0.02). Overall, 90.57% of the sample was very satisfied or satisfied with the NP method, and 88.68% felt that NP helped them understand the importance of considering subjective realities. In conclusion, NP improved knowledge and satisfaction compared with TL. These results suggest that NP may be a useful method to improve the academic outcomes of physical therapy students in the heart transplantation field; thus, NP may be considered a teaching-learning methodology of choice for physical therapy students.


Subject(s)
Heart Transplantation; Photography; Humans; Female; Male; Photography/methods; Heart Transplantation/psychology; Heart Transplantation/methods; Adult; Narration; Students/psychology; Students/statistics & numerical data; Learning; Educational Measurement/methods
8.
Pediatr Surg Int ; 40(1): 233, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39158792

ABSTRACT

PURPOSE: This study evaluates the inter-rater agreement of both the Glans-Urethral Meatus-Shaft (GMS) hypospadias score and the Hypospadias Objective Penile Evaluation (HOPE) score, aiming to standardize disease classification for consistent agreement on clinically relevant characteristics of hypospadias. METHODS: Photos of hypospadias in children were collected from two separate institutions. Three raters scored the photos using GMS and HOPE, excluding the penile torsion and curvature assessments of HOPE due to photo limitations. RESULTS: A total of 528 photos were included. With GMS, Fleiss' multi-rater kappa showed agreement of 0.745 for glans-urethral plate, 0.869 for meatus, and 0.745 for shaft. For HOPE scores, the agreements were 0.888 for position of the meatus, 0.669 for shape of the meatus, 0.730 for shape of the glans, and 0.708 for shape of the skin. The lower agreement for the shape of the meatus may be attributed to the lack of a quantitative classification method in HOPE; experts rely on subjective judgment based on the provided example photos and their index patient. CONCLUSIONS: While agreement among experts evaluating hypospadias with the GMS and HOPE scoring criteria is high, only the position of the meatus achieved nearly perfect agreement, highlighting that the current scoring systems entail a subjective element in disease classification.
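Fleiss' multi-rater kappa, as used here for the three raters, is computed from a subjects-by-categories count table; a minimal sketch (toy counts, not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table of category counts per subject.

    ratings: one row per subject; each row gives how many of the m raters
    assigned the subject to each category (every row must sum to m).
    """
    n = len(ratings)                 # number of subjects
    m = sum(ratings[0])              # raters per subject
    k = len(ratings[0])              # number of categories
    p_j = [sum(row[j] for row in ratings) / (n * m) for j in range(k)]
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]
    p_bar = sum(p_i) / n             # observed agreement
    p_e = sum(p * p for p in p_j)    # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement among 3 raters on 2 subjects gives kappa = 1.
print(fleiss_kappa([[3, 0], [0, 3]]))  # → 1.0
```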


Subject(s)
Hypospadias; Penis; Urethra; Humans; Hypospadias/classification; Male; Infant; Photography/methods; Child, Preschool; Observer Variation; Reproducibility of Results; Child
9.
IEEE J Transl Eng Health Med ; 12: 580-588, 2024.
Article in English | MEDLINE | ID: mdl-39155921

ABSTRACT

OBJECTIVE: Low-cost, portable RGB-D cameras with integrated motion tracking functionality enable easy-to-use 3D motion analysis without requiring expensive facilities and specialized personnel. However, the accuracy of existing systems is insufficient for most clinical applications, particularly when applied to children. In previous work, we developed an RGB-D camera-based motion tracking method and showed that it accurately captures body joint positions of children and young adults in 3D. In this study, the validity and accuracy of clinically relevant motion parameters that were computed from kinematics of our motion tracking method are evaluated in children and young adults. METHODS: Twenty-three typically developing children and healthy young adults (5-29 years, 110-189 cm) performed five movement tasks while being recorded simultaneously with a marker-based Vicon system and an Azure Kinect RGB-D camera. Motion parameters were computed from the extracted kinematics of both methods: time series measurements, i.e., measurements over time, peak measurements, i.e., measurements at a single time instant, and movement smoothness. The agreement of these parameter values was evaluated using Pearson's correlation coefficients r for time series data, and mean absolute error (MAE) and Bland-Altman plots with limits of agreement for peak measurements and smoothness. RESULTS: Time series measurements showed strong to excellent correlations (r-values between 0.8 and 1.0), MAE for angles ranged from 1.5 to 5 degrees and for smoothness parameters (SPARC) from 0.02-0.09, while MAE for distance-related parameters ranged from 9 to 15 mm. CONCLUSION: Extracted motion parameters are valid and accurate for various movement tasks in children and young adults, demonstrating the suitability of our tracking method for clinical motion analysis. 
CLINICAL IMPACT: The low-cost portable hardware in combination with our tracking method enables motion analysis outside of specialized facilities while providing measurements that are close to those of the clinical gold-standard.
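The agreement measures described (MAE and Bland-Altman limits of agreement) reduce to a few lines of arithmetic; the paired values below are toy numbers, not the study's measurements:

```python
import math

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD)
    between two paired measurement methods (e.g. Vicon vs. Azure Kinect)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

def mae(a, b):
    """Mean absolute error between paired measurements."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy peak-angle values in degrees for four trials.
vicon = [10.0, 12.5, 11.0, 13.2]
kinect = [10.4, 12.1, 11.5, 13.0]
lo, bias, hi = bland_altman_limits(vicon, kinect)
```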


Subject(s)
Imaging, Three-Dimensional; Movement; Humans; Child; Adolescent; Young Adult; Adult; Male; Female; Movement/physiology; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Biomechanical Phenomena; Child, Preschool; Reproducibility of Results; Video Recording/instrumentation; Video Recording/methods; Photography/instrumentation; Photography/methods
10.
PeerJ ; 12: e17786, 2024.
Article in English | MEDLINE | ID: mdl-39104365

ABSTRACT

Background: Chronic kidney disease (CKD) is a significant global health concern, emphasizing the necessity of early detection to facilitate prompt clinical intervention. Leveraging the unique ability of the retina to offer insights into systemic vascular health, retinal imaging emerges as an interesting, non-invasive option for early CKD detection. Integrating this approach with existing invasive methods could provide a comprehensive understanding of patient health, enhancing diagnostic accuracy and treatment effectiveness. Objectives: The purpose of this review is to critically assess the potential of retinal imaging to serve as a diagnostic tool for CKD detection based on retinal vascular changes. The review tracks the evolution from conventional manual evaluations to the latest state of the art in deep learning. Survey Methodology: A comprehensive examination of the literature was carried out using targeted database searches and a three-step methodology for article evaluation — identification, screening, and inclusion — based on PRISMA guidelines. Priority was given to unique and new research concerning the detection of CKD with retinal imaging. Of the 457 publications initially discovered, 70 satisfied our inclusion criteria and were subjected to analysis. Of these 70 studies, 35 investigated the correlation between diabetic retinopathy and CKD, 23 centered on the detection of CKD via retinal imaging, and four attempted to automate detection through the combination of artificial intelligence and retinal imaging. Results: Significant retinal features such as arteriolar narrowing, venular widening, specific retinopathy markers (microaneurysms, hemorrhages, and exudates), and changes in the arteriovenous ratio (AVR) have shown strong correlations with CKD progression. We also found that the combination of deep learning with retinal imaging for CKD detection could provide a very promising pathway.
Accordingly, leveraging retinal imaging through this technique is expected to enhance the precision and prognostic capacity of the CKD detection system, offering a non-invasive diagnostic alternative that could transform patient care practices. Conclusion: In summary, retinal imaging holds high potential as a diagnostic tool for CKD because it is non-invasive, facilitates early detection through observable microvascular changes, offers predictive insights into renal health, and, when paired with deep learning algorithms, enhances the accuracy and effectiveness of CKD screening.


Subject(s)
Photography; Renal Insufficiency, Chronic; Humans; Renal Insufficiency, Chronic/diagnostic imaging; Renal Insufficiency, Chronic/diagnosis; Photography/methods; Deep Learning; Artificial Intelligence; Retina/diagnostic imaging; Retina/pathology; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/diagnosis; Retinal Vessels/diagnostic imaging; Retinal Vessels/pathology; Early Diagnosis
11.
Waste Manag ; 187: 101-108, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39002296

ABSTRACT

Checking each item placed in a separate collection bin of recyclables to examine contamination is often difficult for a researcher relying on such data, because of the time and inconvenience involved in manually identifying items. We conducted a proof-of-concept experiment on the ability of trail cameras to identify items placed within separate collection bins. After a pre-test of seven camera models, we selected the one with the best image quality and used it in lab and field trials, counting the number of items identifiable from photos against manual hand-counts. Three lab trials of this camera resulted in an average of 82% accuracy in item identification. We then conducted a field experiment, testing photo quality for identifying items in six separate collection bins across a university campus over a one-month period, yielding over 9,700 photos in total. Of the 1,343 items placed in the separate collection bins, the trail cameras provided photographs of high enough quality that 68.5% of the items were successfully identified, with poor identification for paper items and small items. We conclude that trail cameras can be useful for data collection on separate collection behavior, especially for items whose largest surface is greater than a credit card.


Subject(s)
Photography; Photography/methods; Photography/instrumentation; Data Collection
12.
Braz J Biol ; 84: e279855, 2024.
Article in English | MEDLINE | ID: mdl-38985068

ABSTRACT

Leaf Area Index (LAI) is the ratio of total leaf area to ground surface area. LAI plays a significant role in the structural characterization of forest ecosystems, so an accurate estimation process is needed. One method for estimating LAI uses Digital Cover Photography. However, most applications for processing LAI from digital photos do not account for the brown color of plant parts. Previous research that included brown color in the calculation potentially produced biased results through the increased pixel count relative to the original photo. This study aims to enhance the accuracy of LAI estimation with methods that consider the brown color while minimizing errors. Image processing is carried out in two stages to separate leaf and non-leaf pixels, using the RGB color model in the first stage and the CIELAB color model in the second stage. The proposed methods and existing applications are evaluated against the actual LAI value obtained using Terrestrial Laser Scanning (TLS) as the ground truth. The results demonstrate that the proposed methods effectively identify non-leaf parts and exhibit the lowest error rates compared with other methods. In conclusion, this study provides alternative techniques to enhance the accuracy of LAI estimation in forest ecosystems.
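A two-stage leaf/non-leaf decision of the kind described might look like the sketch below; the thresholds and the simple green-red opponent term standing in for the CIELAB a* channel are illustrative assumptions, not the paper's method:

```python
def classify_pixel(r, g, b):
    """Two-stage leaf vs. non-leaf decision for one RGB pixel (0-255).

    Stage 1 uses the RGB model directly: pixels with a dominant green
    channel count as leaf. Stage 2 approximates the CIELAB a* (green-red)
    axis with a crude opponent term so that brown plant parts are kept
    out of the leaf count. Thresholds are illustrative only.
    """
    if g > r and g > b:        # stage 1: clearly green -> leaf
        return "leaf"
    opponent = r - g           # crude stand-in for CIELAB a*
    return "non-leaf" if opponent > 15 else "leaf"

def leaf_cover_fraction(pixels):
    """Fraction of pixels classed as leaf -- the canopy-cover input to LAI."""
    hits = sum(classify_pixel(*p) == "leaf" for p in pixels)
    return hits / len(pixels)
```

In the actual pipeline the second stage would convert RGB to CIELAB properly; the point of the two stages is that brown bark and branches, which pass naive brightness tests, are rejected before the cover fraction is computed.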


Subject(s)
Forests; Image Processing, Computer-Assisted; Photography; Plant Leaves; Plant Leaves/anatomy & histology; Photography/methods; Image Processing, Computer-Assisted/methods; Trees; Color
13.
BMJ Open Ophthalmol ; 9(1)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969362

ABSTRACT

OBJECTIVES: This study aimed to quantitatively evaluate optic nerve head and retinal vascular parameters in children with hyperopia in relation to age and spherical equivalent refraction (SER) using artificial intelligence (AI)-based analysis of colour fundus photographs (CFP). METHODS AND ANALYSIS: This cross-sectional study included 324 children with hyperopia aged 3-12 years. Participants were divided into low hyperopia (SER +0.5 D to +2.0 D) and moderate-to-high hyperopia (SER ≥ +2.0 D) groups. Fundus parameters, such as optic disc area and mean vessel diameter, were automatically and quantitatively detected using AI. Significant variables (p<0.05) in the univariate analysis were included in a stepwise multiple linear regression. RESULTS: Overall, 324 children were included, 172 with low and 152 with moderate-to-high hyperopia. The median optic disc area and vessel diameter were 1.42 mm² and 65.09 µm, respectively. Children with high hyperopia had larger superior neuroretinal rim (NRR) width and larger vessel diameter than those with low and moderate hyperopia. In the univariate analysis, axial length was significantly associated with smaller superior NRR width (β=-3.030, p<0.001), smaller temporal NRR width (β=-1.469, p=0.020) and smaller vessel diameter (β=-0.076, p<0.001). A mild inverse correlation was observed between the optic disc area and vertical disc diameter with age. CONCLUSION: AI-based CFP analysis showed that children with high hyperopia had larger mean vessel diameter but smaller vertical cup-to-disc ratio than those with low hyperopia. This suggests that AI can provide quantitative data on fundus parameters in children with hyperopia.
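The univariate associations reported as β coefficients come from least-squares fits; a single-predictor ordinary least squares sketch with illustrative numbers (mirroring, not reproducing, the reported inverse association between axial length and NRR width):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for one predictor --
    the single-variable building block behind a stepwise multiple
    linear regression. Toy stand-in, not the study's fitted model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return beta, my - beta * mx  # slope, intercept

# Illustrative data: NRR width (µm) falling as axial length (mm) grows,
# so the fitted slope beta is negative.
beta, alpha = ols_fit([21.0, 22.0, 23.0, 24.0], [310, 307, 304, 301])
```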


Subject(s)
Artificial Intelligence; Hyperopia; Optic Disk; Photography; Retinal Vessels; Humans; Hyperopia/diagnosis; Hyperopia/physiopathology; Cross-Sectional Studies; Male; Child; Female; Child, Preschool; Optic Disk/diagnostic imaging; Optic Disk/pathology; Optic Disk/blood supply; Retinal Vessels/diagnostic imaging; Retinal Vessels/pathology; Photography/methods; Fundus Oculi; Visual Acuity/physiology; Refraction, Ocular/physiology
14.
BMC Oral Health ; 24(1): 828, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039499

ABSTRACT

BACKGROUND: Dental caries is a global public health concern, and early detection is essential. Traditional methods, particularly visual examination, face access and cost challenges. Teledentistry, as an emerging technology, offers the possibility to overcome such barriers, and it must be given high priority for assessment to optimize the performance of oral healthcare systems. The aim of this study was to systematically review the literature evaluating the diagnostic accuracy of teledentistry using photographs taken by Digital Single Lens Reflex (DSLR) and smartphone cameras against visual clinical examination in either primary or permanent dentition. METHODS: The review followed PRISMA-DTA guidelines, and the PubMed, Scopus, and Embase databases were searched through December 2022. Original in-vivo studies comparing dental caries diagnosis via images taken by DSLR or smartphone cameras with clinical examination were included. The QUADAS-2 was used to assess the risk of bias and concerns regarding applicability. Meta-analysis was not performed due to heterogeneity among the studies. Therefore, the data were analyzed narratively by the research team. RESULTS: In the 19 studies included, the sensitivity and specificity ranged from 48 to 98.3% and from 83 to 100%, respectively. The variability in performance was attributed to factors such as study design and diagnostic criteria. Specific tooth surfaces and lesion stages must be considered when interpreting outcomes. Using smartphones for dental photography was common due to the convenience and accessibility of these devices. The employment of mid-level dental providers for remote screening yielded comparable results to those of dentists. Potential bias in patient selection was indicated, suggesting a need for improvements in study design. CONCLUSION: The diagnostic accuracy of teledentistry for caries detection is comparable to that of traditional clinical examination. 
The findings support teledentistry's effectiveness, particularly in lower-income settings or areas with access problems. While the results of this review are promising, more rigorous studies with well-designed methodologies are needed to fully validate the diagnostic accuracy of teledentistry for dental caries and to make oral healthcare provision more efficient and equitable. REGISTRATION: This study was registered with PROSPERO (CRD42023417437).
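The sensitivity and specificity ranges reported above come from standard confusion-matrix arithmetic. A minimal sketch of that calculation (the function name and the screening counts are illustrative, not data from the review):

```python
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)  # proportion of carious surfaces correctly flagged
    specificity = tn / (tn + fp)  # proportion of sound surfaces correctly cleared
    return sensitivity, specificity

# Hypothetical photographic-screening counts: 90 carious surfaces detected,
# 10 missed, 95 sound surfaces correctly cleared, 5 false alarms
sens, spec = sensitivity_specificity(tp=90, fp=5, fn=10, tn=95)
print(sens, spec)  # 0.9 0.95
```

A study near the bottom of the reported range (sensitivity ~48%) would, by the same arithmetic, miss roughly half of the true lesions even while keeping specificity high.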


Subject(s)
Dental Caries, Dental Photography, Humans, Dental Caries/diagnosis, Dental Photography/methods, Dental Photography/instrumentation, Telemedicine, Photography/methods, Smartphone, Sensitivity and Specificity
15.
F1000Res ; 13: 360, 2024.
Article in English | MEDLINE | ID: mdl-39045173

ABSTRACT

Invasive plant species pose ecological threats to native ecosystems, particularly in areas adjacent to roadways, which act as lengthy corridors through which invasive species can propagate. Traditional manual survey methods for monitoring invasive plants are labor-intensive and limited in coverage. This paper introduces a high-speed camera system, named CamAlien, designed to be mounted on vehicles for efficient monitoring of invasive plant species along roadways. The camera system captures high-quality images at rapid intervals, covering the full roadside while moving at traffic speed. It uses a global shutter sensor to reduce distortion and geotagging for precise localisation. The system makes it possible to collect extensive datasets, which can be used both to build a digital library of invasive species and their locations and to train machine learning algorithms for automated species recognition.


Subject(s)
Introduced Species, Plants, Environmental Monitoring/methods, Environmental Monitoring/instrumentation, Photography/instrumentation, Photography/methods, Ecosystem
16.
Indian J Ophthalmol ; 72(8): 1199-1203, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39078965

ABSTRACT

PURPOSE: To compare the deviation in cases of horizontal strabismus as assessed from photographs with the measurements obtained in the strabismus clinic. METHODS: After obtaining informed consent, we recruited subjects with manifest horizontal strabismus. We took a frontal flash photograph from a distance of 50 cm using smartphone-based cameras with the flash vertically aligned with the lens. After projecting the photograph on a laptop and using a vernier caliper, we measured the horizontal corneal diameter of the non-strabismic eye and the decentration of the reflex in the strabismic eye, taking the limbus as the reference point. We converted these values to degrees using a conversion factor of 7.5°/mm, and then to prism diopters (PD) by the standard mathematical formula 100*tanθ. RESULTS: We included 74 subjects aged between 5 and 40 years with manifest horizontal deviation from 20 to 85 PD. We found a statistically significant correlation of 82.6% (P value < 0.001) between the clinic and photographic measurements. Agreement analysis suggested that the photographic measurements were on average 7 PD lower (95% confidence interval: 4.6 to 9.2) than clinical measurements across all values of misalignment, although the difference between the two methods decreased as the magnitude of deviation increased. Linear regression revealed an r2 of 68% and provided a predictive equation to derive clinic-equivalent measurements from photographic estimates. CONCLUSION: We believe our simple method provides robust evidence that photographic estimation can provide basic information on the size of the deviation to plan possible surgeries, especially in tele-consultation settings. This approach is easy to understand and master and should be part of the armamentarium of most orthopticians and strabismologists.
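The two-step conversion described in the abstract (millimetres of reflex decentration → degrees via the 7.5°/mm factor → prism diopters via 100*tanθ) can be sketched as follows; the function name and the sample decentration value are illustrative, not taken from the study:

```python
import math

MM_TO_DEGREES = 7.5  # conversion factor stated in the abstract: 7.5 degrees per mm

def decentration_to_prism_diopters(decentration_mm: float) -> float:
    """Convert light-reflex decentration (mm) to prism diopters (PD)."""
    theta_deg = decentration_mm * MM_TO_DEGREES        # mm -> degrees
    return 100 * math.tan(math.radians(theta_deg))     # degrees -> PD via 100*tan(theta)

# Example: a 2 mm decentration corresponds to 15 degrees, about 26.8 PD
print(round(decentration_to_prism_diopters(2.0), 1))  # 26.8
```

Note that the tangent makes the mapping slightly nonlinear, so large decentrations yield proportionally more PD than small ones.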


Subject(s)
Oculomotor Muscles, Photography, Strabismus, Humans, Strabismus/diagnosis, Strabismus/physiopathology, Male, Female, Adult, Photography/methods, Child, Adolescent, Young Adult, Child, Preschool, Oculomotor Muscles/physiopathology, Vision, Binocular/physiology, Eye Movements/physiology, Reproducibility of Results
17.
Sci Rep ; 14(1): 15517, 2024 07 05.
Article in English | MEDLINE | ID: mdl-38969757

ABSTRACT

CorneAI for iOS is an artificial intelligence (AI) application that classifies the condition of the cornea and cataract into nine categories: normal, infectious keratitis, non-infection keratitis, scar, tumor, deposit, acute primary angle closure, lens opacity, and bullous keratopathy. We evaluated its performance in classifying multiple conditions of the cornea and cataract across various races in images published in the journal Cornea. The positive predictive value (PPV) of the top classification with the highest predictive score was 0.75, and the PPV for the top three classifications exceeded 0.80. For individual diseases, the highest PPVs were 0.91, 0.73, 0.42, 0.72, 0.77, and 0.55 for infectious keratitis, normal, non-infection keratitis, scar, tumor, and deposit, respectively. CorneAI for iOS achieved an area under the receiver operating characteristic curve of 0.78 (95% confidence interval [CI] 0.5-1.0) for normal, 0.76 (95% CI 0.67-0.85) for infectious keratitis, 0.81 (95% CI 0.64-0.97) for non-infection keratitis, 0.55 (95% CI 0.41-0.69) for scar, 0.62 (95% CI 0.27-0.97) for tumor, and 0.71 (95% CI 0.53-0.89) for deposit. CorneAI performed well in classifying various conditions of the cornea and cataract when used to diagnose journal images, including those with variable imaging conditions, ethnicities, and rare cases.


Subject(s)
Cataract, Corneal Diseases, Humans, Cataract/classification, Cataract/diagnosis, Corneal Diseases/classification, Corneal Diseases/diagnosis, Photography/methods, Artificial Intelligence, Cornea/pathology, Cornea/diagnostic imaging, ROC Curve
18.
JAMA Netw Open ; 7(7): e2424299, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39058486

ABSTRACT

Importance: Meticulous postoperative flap monitoring is essential for preventing flap failure and achieving optimal results in free flap operations, for which physical examination has remained the criterion standard. Despite the high reliability of physical examination, the requirement of excessive use of clinician time has been considered a main drawback. Objective: To develop an automated free flap monitoring system using artificial intelligence (AI), minimizing human involvement while maintaining efficiency. Design, Setting, and Participants: In this prognostic study, the designed system involves a smartphone camera installed in a location with optimal flap visibility to capture photographs at regular intervals. The automated program identifies the flap area, checks for notable abnormalities in its appearance, and notifies medical staff if abnormalities are detected. Implementation requires 2 AI-based models: a segmentation model for automatic flap recognition in photographs and a grading model for evaluating the perfusion status of the identified flap. To develop this system, flap photographs captured for monitoring were collected from patients who underwent free flap-based reconstruction from March 1, 2020, to August 31, 2023. After the 2 models were developed, they were integrated to construct the system, which was applied in a clinical setting in November 2023. Exposure: Conducting the developed automated AI-based flap monitoring system. Main Outcomes and Measures: Accuracy of the developed models and feasibility of clinical application of the system. Results: Photographs were obtained from 305 patients (median age, 62 years [range, 8-86 years]; 178 [58.4%] were male). Based on 2068 photographs, the FS-net program (a customized model) was developed for flap segmentation, demonstrating a mean (SD) Dice similarity coefficient of 0.970 (0.001) with 5-fold cross-validation. 
For the flap grading system, 11 112 photographs from the 305 patients were used, encompassing 10 115 photographs with normal features and 997 with abnormal features. Tested on 5506 photographs, the DenseNet121 model demonstrated the highest performance with an area under the receiver operating characteristic curve of 0.960 (95% CI, 0.951-0.969). The sensitivity for detecting venous insufficiency was 97.5% and for arterial insufficiency was 92.8%. When applied to 10 patients, the system successfully conducted 143 automated monitoring sessions without significant issues. Conclusions and Relevance: The findings of this study suggest that a novel automated system may enable efficient flap monitoring with minimal use of clinician time. It may be anticipated to serve as an effective surveillance tool for postoperative free flap monitoring. Further studies are required to verify its reliability.
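The Dice similarity coefficient reported for the segmentation model above measures the overlap between a predicted flap mask and the ground-truth mask: 2·|A∩B| / (|A|+|B|). A minimal sketch on flattened binary masks (the masks and function name here are illustrative, not study data):

```python
def dice_coefficient(pred: list[int], truth: list[int]) -> float:
    """Dice similarity coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))  # pixels set in both masks
    total = sum(pred) + sum(truth)                            # pixels set in each mask
    return 2 * intersection / total if total else 1.0         # empty-vs-empty -> perfect

# Flattened 1-D binary masks (1 = flap pixel): 2 overlapping pixels,
# mask sizes 3 and 2, so Dice = 2*2 / (3+2) = 0.8
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # 0.8
```

A mean Dice of 0.970, as reported for FS-net, therefore indicates near-complete overlap between predicted and annotated flap regions.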


Subject(s)
Artificial Intelligence, Free Tissue Flaps, Humans, Male, Female, Middle Aged, Aged, Adult, Aged, 80 and over, Photography/methods, Monitoring, Physiologic/methods, Monitoring, Physiologic/instrumentation, Young Adult, Adolescent, Plastic Surgery Procedures/methods, Reproducibility of Results
19.
BMC Ophthalmol ; 24(1): 285, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39009964

ABSTRACT

AIM: This study aimed to differentiate moderate to high myopic astigmatism from forme fruste keratoconus using Pentacam parameters and develop a predictive model for early keratoconus detection. METHODS: We retrospectively analysed 196 eyes from 105 patients and compared Pentacam variables between myopic astigmatism (156 eyes) and forme fruste keratoconus (40 eyes) groups. Receiver operating characteristic curve analysis was used to determine the optimal cut-off values, and a logistic regression model was used to refine the diagnostic accuracy. RESULTS: Statistically significant differences were observed in most Pentacam variables between the groups (p < 0.05). Parameters such as the Index of Surface Variance (ISV), Keratoconus Index (KI), Belin/Ambrosio Deviation Display (BAD_D) and Back Elevation of the Thinnest Corneal Locale (B.Ele.Th) demonstrated promising discriminatory abilities, with BAD_D exhibiting the highest Area under the Curve. The logistic regression model achieved high sensitivity (92.5%), specificity (96.8%), accuracy (95.9%), and positive predictive value (88.1%). CONCLUSION: The simultaneous evaluation of BAD_D, ISV, B.Ele.Th, and KI aids in identifying forme fruste keratoconus cases. Optimal cut-off points demonstrate acceptable sensitivity and specificity, emphasizing their clinical utility pending further refinement and validation across diverse demographics.


Subject(s)
Corneal Topography, Keratoconus, Photography, ROC Curve, Humans, Keratoconus/diagnosis, Female, Male, Retrospective Studies, Adult, Ghana, Corneal Topography/methods, Photography/methods, Young Adult, Adolescent, Cornea/pathology, Cornea/diagnostic imaging, Middle Aged, Myopia/diagnosis, Astigmatism/diagnosis, Visual Acuity
20.
Child Abuse Negl ; 154: 106910, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38908230

ABSTRACT

BACKGROUND: The grooming process involves sexually explicit images or videos sent by the offender to the minor. Although offenders may try to conceal their identity, these sexts often include hand, knuckle, and nail bed imagery. OBJECTIVE: We present a novel biometric hand verification tool designed to identify online child sexual exploitation offenders from images or videos based on biometric/forensic features extracted from hand regions. The system can match and authenticate hand component imagery against a constrained custody suite reference of a known subject by employing advanced image processing and machine learning techniques. DATA: We conducted experiments on two hand datasets: Purdue University and Hong Kong. In particular, the Purdue dataset collected for this study allowed us to evaluate the system performance on various parameters, with specific emphasis on camera distance and orientation. METHODS: To explore the performance and reliability of the biometric verification models, we considered several parameters, including hand orientation, distance from the camera, single or multiple fingers, architecture of the models, and performance loss functions. RESULTS: Results showed the best performance for pictures sampled from the same database and with the same image capture conditions. CONCLUSION: The authors conclude the biometric hand verification tool offers a robust solution that will operationally impact law enforcement by allowing agencies to investigate and identify online child sexual exploitation offenders more effectively. We highlight the strength of the system and the current limitations.


Subject(s)
Child Abuse, Sexual, Humans, Child, Biometric Identification/methods, Hand, Image Processing, Computer-Assisted/methods, Machine Learning, Forensic Sciences/methods, Reproducibility of Results, Hong Kong, Photography/methods, Nails, Male, Female, Criminals/psychology