Regional choroidal thickness estimation from color fundus images based on convolutional neural networks.
Rong, Yibiao; Chen, Qifeng; Jiang, Zehua; Fan, Zhun; Chen, Haoyu.
Affiliation
  • Rong Y; College of Engineering, Shantou University, Shantou 515063, China.
  • Chen Q; Guangdong Provincial Key Laboratory of Digital Signal and Image Processing, Shantou University, Shantou 515063, China.
  • Jiang Z; College of Engineering, Shantou University, Shantou 515063, China.
  • Fan Z; Guangdong Provincial Key Laboratory of Digital Signal and Image Processing, Shantou University, Shantou 515063, China.
  • Chen H; Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou 515051, China.
Heliyon; 10(5): e26872, 2024 Mar 15.
Article in En | MEDLINE | ID: mdl-38468930
ABSTRACT

Purpose:

This study aims to estimate regional choroidal thickness from color fundus images using convolutional neural networks with different network structures and task-learning schemes.

Method:

A total of 1276 color fundus photographs and the corresponding choroidal thickness values from healthy subjects were obtained with a Topcon DRI Triton optical coherence tomography machine. Ten commonly used convolutional neural networks were first trained to identify the most accurate model, which was then selected for further experiments. The selected model was combined with single-task, multi-task, and auxiliary-task training schemes to predict the average and subregional choroidal thickness in both the ETDRS (Early Treatment Diabetic Retinopathy Study) grid and a 100-grid partition. Mean absolute error and the coefficient of determination (R²) were used to evaluate the models' performance.
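For context, the following is a minimal PyTorch sketch of what such a multi-task regression setup could look like: an EfficientNet-B0 backbone with separate heads for the average, ETDRS, and 100-grid thickness targets. The head names, output sizes, and input resolution are illustrative assumptions; only the backbone choice and the 1280-dimensional pooled feature size of torchvision's EfficientNet-B0 are known, and none of this is the authors' actual code.

```python
# Hypothetical multi-task choroidal-thickness regressor (sketch, not the paper's code).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class MultiTaskChoroidNet(nn.Module):
    def __init__(self, n_etdrs: int = 9, n_grid: int = 100):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        self.backbone.classifier = nn.Identity()      # expose the 1280-dim pooled features
        self.head_avg = nn.Linear(1280, 1)             # average choroidal thickness (µm)
        self.head_etdrs = nn.Linear(1280, n_etdrs)     # ETDRS subregion thicknesses (assumed 9 regions)
        self.head_grid = nn.Linear(1280, n_grid)       # 100-grid subregion thicknesses

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "avg": self.head_avg(feats),
            "etdrs": self.head_etdrs(feats),
            "grid": self.head_grid(feats),
        }

model = MultiTaskChoroidNet()
out = model(torch.randn(2, 3, 224, 224))  # fundus images resized to an assumed 224x224
print({k: v.shape for k, v in out.items()})
```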

Results:

The EfficientNet-B0 network outperformed the other networks, with the lowest mean absolute error (25.61 µm) and the highest R² (0.7817) for average choroidal thickness. Incorporating spherical diopter, anterior chamber depth, and lens thickness as auxiliary tasks improved prediction accuracy (p = 6.39×10⁻⁴⁴, 2.72×10⁻³⁸, and 1.15×10⁻³⁶, respectively). For ETDRS regional choroidal thickness estimation, the multi-task model achieved better results than the single-task model (lowest mean absolute error: 31.10 µm vs. 33.20 µm). Multi-task training could also simultaneously predict the choroidal thickness of the 100 grid subregions, with a minimum mean absolute error of 33.86 µm.
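The auxiliary-task idea reported above can be sketched as extra regression heads that share the same image features, with their losses added to the thickness loss; the reported metrics (mean absolute error and R²) are also straightforward to compute. The head names, the L1 loss choice, and the auxiliary weight below are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical auxiliary-task heads and evaluation metrics (sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryHeads(nn.Module):
    """Extra regression heads sharing the backbone features (names are assumptions)."""
    def __init__(self, feat_dim: int = 1280):
        super().__init__()
        self.sphere = nn.Linear(feat_dim, 1)  # spherical diopter
        self.acd = nn.Linear(feat_dim, 1)     # anterior chamber depth
        self.lens = nn.Linear(feat_dim, 1)    # lens thickness

    def forward(self, feats):
        return self.sphere(feats), self.acd(feats), self.lens(feats)

def combined_loss(pred_ct, true_ct, aux_preds, aux_trues, aux_weight=0.1):
    """Thickness L1 loss plus weighted L1 losses for each auxiliary task (assumed weighting)."""
    loss = F.l1_loss(pred_ct, true_ct)
    for p, t in zip(aux_preds, aux_trues):
        loss = loss + aux_weight * F.l1_loss(p, t)
    return loss

def mae_and_r2(pred, target):
    """Mean absolute error and coefficient of determination (R²) on 1-D tensors."""
    mae = (pred - target).abs().mean().item()
    ss_res = ((target - pred) ** 2).sum()
    ss_tot = ((target - target.mean()) ** 2).sum()
    return mae, (1.0 - ss_res / ss_tot).item()
```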

Conclusions:

EfficientNet-B0, combined with multi-task and auxiliary-task training, achieves high accuracy in estimating the average and regional macular choroidal thickness directly from color fundus photographs.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Heliyon Year: 2024 Document type: Article Country of affiliation: China Country of publication: United Kingdom