A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images.
de Araújo, Adriel Silva; Pinho, Márcio Sarroglia; Marques da Silva, Ana Maria; Fiorentini, Luis Felipe; Becker, Jefferson.
Affiliations
  • de Araújo AS; School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil.
  • Pinho MS; School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil.
  • Marques da Silva AM; Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo, São Paulo 05403-010, Brazil.
  • Fiorentini LF; Centro de Diagnóstico por Imagem, Santa Casa de Misericórdia de Porto Alegre, Porto Alegre 90020-090, Brazil.
  • Becker J; Grupo Hospitalar Conceição, Porto Alegre 91350-200, Brazil.
J Imaging; 10(7), 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39057732
ABSTRACT
Precise annotations for large medical image datasets can be time-consuming to produce. Additionally, when dealing with volumetric regions of interest, segmentation techniques are typically applied to 2D slices, discarding information that is important for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of 0.68 ± 0.08 on the unseen dataset, alongside commendable qualitative results.
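To illustrate the 2.5D idea described above (feeding a 2D network three neighbouring axial slices as colour channels) and the IoU metric used for evaluation, here is a minimal sketch. It is not the authors' implementation: it assumes the volume is a NumPy array with axial slices along the first axis, and the helper names `make_pseudo_rgb` and `iou` are hypothetical.

```python
import numpy as np

def make_pseudo_rgb(volume: np.ndarray, slice_idx: int) -> np.ndarray:
    """Stack a slice with its two axial neighbours into a 3-channel
    pseudo-RGB image, giving a 2D network limited 3D context (2.5D).
    Assumes `volume` has shape (num_slices, H, W); edge slices reuse
    the nearest existing neighbour."""
    below = volume[max(slice_idx - 1, 0)]
    centre = volume[slice_idx]
    above = volume[min(slice_idx + 1, volume.shape[0] - 1)]
    return np.stack([below, centre, above], axis=-1)  # (H, W, 3)

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union else 1.0
```

In a self-training setup of the kind the abstract outlines, such pseudo-RGB slices would be fed to the segmentation network, and the resulting predictions would replace the coarse bounding-box masks in subsequent training rounds; the details of that loop are specific to the paper and are not reproduced here.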
Full text: 1 | Collection: 01-international | Database: MEDLINE | Language: English | Journal: J Imaging | Year: 2024 | Document type: Article | Country of affiliation: Brazil | Country of publication: Switzerland