MAG-Net: Multi-fusion network with grouped attention for retinal vessel segmentation.
Jiang, Yun; Chen, Jie; Yan, Wei; Zhang, Zequn; Qiao, Hao; Wang, Meiqi.
Affiliation
  • Jiang Y; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
  • Chen J; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
  • Yan W; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
  • Zhang Z; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
  • Qiao H; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
  • Wang M; College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
Math Biosci Eng; 21(2): 1938-1958, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38454669
ABSTRACT
Retinal vessel segmentation plays a vital role in the clinical diagnosis of ophthalmic diseases. Although convolutional neural networks (CNNs) excel at this task, challenges persist, such as restricted receptive fields and information loss from downsampling. To address these issues, we propose a new multi-fusion network with grouped attention (MAG-Net). First, we introduce a hybrid convolutional fusion module in place of the original encoding block to learn more feature information by expanding the receptive field. Second, the grouped attention enhancement module uses high-level features to guide low-level features and facilitates the transmission of detailed information through skip connections. Finally, the multi-scale feature fusion module aggregates features at different scales, effectively reducing information loss during decoder upsampling. To evaluate MAG-Net, we conducted experiments on three widely used retinal datasets: DRIVE, CHASE and STARE. The MAG-Net achieved segmentation accuracy values of 0.9708, 0.9773 and 0.9743, specificity values of 0.9836, 0.9875 and 0.9906 and Dice coefficients of 0.8576, 0.8069 and 0.8228, respectively. These results show that our method outperforms existing segmentation methods.
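The abstract describes a grouped attention enhancement module in which high-level features gate low-level skip-connection features. Below is a minimal PyTorch sketch of one way such a gate could be built; the class name GroupedAttentionGate, the group count, and the additive-attention form are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedAttentionGate(nn.Module):
    """Hypothetical sketch: high-level decoder features guide low-level
    encoder (skip) features, with projections applied per channel group."""
    def __init__(self, channels, groups=4):
        super().__init__()
        # 1x1 grouped convolutions project both feature maps before gating
        self.proj_low = nn.Conv2d(channels, channels, kernel_size=1, groups=groups)
        self.proj_high = nn.Conv2d(channels, channels, kernel_size=1, groups=groups)
        self.att = nn.Conv2d(channels, channels, kernel_size=1, groups=groups)

    def forward(self, low, high):
        # Upsample high-level features to the spatial size of the skip features
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        # Additive attention map, computed group-wise and squashed to (0, 1)
        gate = torch.sigmoid(self.att(F.relu(self.proj_low(low) + self.proj_high(high))))
        # Reweight the low-level skip features before they enter the decoder
        return low * gate

# Example: a 64-channel encoder skip map guided by a coarser 64-channel decoder map
low = torch.randn(1, 64, 128, 128)
high = torch.randn(1, 64, 64, 64)
out = GroupedAttentionGate(64, groups=4)(low, high)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```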
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Retinal Vessels / Learning Language: English Journal: Math Biosci Eng Year: 2024 Document type: Article Affiliation country: China Country of publication: United States