Dual-stage semantic segmentation of endoscopic surgical instruments.
Chen, Wenxin; Wang, Kaifeng; Song, Xinya; Xie, Dongsheng; Li, Xue; Islam, Mobarakol; Li, Changsheng; Duan, Xingguang.
Affiliations
  • Chen W; Beijing Institute of Technology, Beijing, P. R. China.
  • Wang K; Peking University People's Hospital, Beijing, P. R. China.
  • Song X; Beijing Institute of Technology, Beijing, P. R. China.
  • Xie D; Beijing Institute of Technology, Beijing, P. R. China.
  • Li X; Beijing Institute of Technology, Beijing, P. R. China.
  • Islam M; University College London, London, UK.
  • Li C; Beijing Institute of Technology, Beijing, P. R. China.
  • Duan X; Beijing Institute of Technology, Beijing, P. R. China.
Med Phys ; 2024 Sep 10.
Article in En | MEDLINE | ID: mdl-39255375
ABSTRACT

BACKGROUND:

Endoscopic instrument segmentation is essential for ensuring the safety of robotic-assisted spinal endoscopic surgeries. However, due to the narrow operative region, intricate surrounding tissues, and limited visibility, achieving instrument segmentation within the endoscopic view remains challenging.

PURPOSE:

This work aims to devise a method for segmenting surgical instruments in endoscopic video. An endoscopic image classification model is designed, and features of preceding and subsequent frames in the video are exploited to achieve continuous and precise segmentation of instruments in endoscopic videos.

METHODS:

Deep learning techniques serve as the algorithmic core of the convolutional neural network proposed in this study. The method comprises two stages: image classification and instrument segmentation. MobileViT is employed for image classification, extracting the key features of different instruments and producing classification results. DeepLabv3+ is used for instrument segmentation; by training on each instrument separately, a corresponding set of model parameters is obtained. Finally, a flag caching mechanism and a blur detection module are designed to exploit image features across consecutive frames. By loading the instrument-specific parameters into the segmentation model, surgical instruments can be segmented more accurately in endoscopic videos.
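For concreteness, the following is a minimal Python sketch of how such a dual-stage pipeline might be wired together. The specific choices here are assumptions for illustration, not the authors' implementation: MobileViT via timm's mobilevit_s, DeepLabv3+ via segmentation_models_pytorch, a variance-of-Laplacian blur test, and the BLUR_THRESHOLD and INPUT_SIZE values.

    import cv2
    import numpy as np
    import torch
    import timm
    import segmentation_models_pytorch as smp

    NUM_INSTRUMENT_CLASSES = 2   # two common surgical instruments (per the abstract)
    BLUR_THRESHOLD = 100.0       # hypothetical variance-of-Laplacian cutoff
    INPUT_SIZE = 256             # assumed network input resolution

    # Stage 1: frame classifier (MobileViT).
    classifier = timm.create_model("mobilevit_s", num_classes=NUM_INSTRUMENT_CLASSES)
    classifier.eval()

    # Stage 2: one DeepLabv3+ parameter set per instrument, trained separately.
    segmenters = {
        c: smp.DeepLabV3Plus(encoder_name="resnet50", encoder_weights=None,
                             classes=2).eval()  # instrument vs. background
        for c in range(NUM_INSTRUMENT_CLASSES)
    }

    def is_blurred(frame_bgr: np.ndarray) -> bool:
        """Blur detection via variance of the Laplacian (an assumed criterion)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD

    def to_tensor(frame_bgr: np.ndarray) -> torch.Tensor:
        """Resize and convert a BGR frame to a normalized NCHW float tensor."""
        resized = cv2.resize(frame_bgr, (INPUT_SIZE, INPUT_SIZE))
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
        return torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    @torch.no_grad()
    def segment_video(frames):
        """Classify each frame, cache the last reliable label (the 'flag'), and
        reuse it on blurred frames so segmentation stays temporally continuous."""
        cached_flag = None  # last reliable classification result
        for frame in frames:
            x = to_tensor(frame)
            if is_blurred(frame) and cached_flag is not None:
                flag = cached_flag                         # reuse the cached flag
            else:
                flag = classifier(x).argmax(dim=1).item()
                cached_flag = flag                         # update the flag cache
            mask = segmenters[flag](x).argmax(dim=1)       # instrument-specific params
            yield flag, mask

In this sketch the flag cache is simply the last reliable classification result; blurred frames fall back on it rather than trusting a fresh prediction from a degraded image, which is what keeps the instrument-specific segmentation continuous across low-quality frames.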

RESULTS:

The classification and segmentation models are evaluated on an endoscopic image dataset. The segmentation dataset comprises 7456 training images, 829 validation images, and 921 test images; the classification dataset comprises 2400 training images and 600 validation images. The image classification model achieves 70% accuracy on the validation set. For the segmentation model, experiments on two common surgical instruments yield a mean Intersection over Union (mIoU) above 98%. Furthermore, the proposed video segmentation method is tested on videos collected during surgeries, validating the effectiveness of the flag caching mechanism and the blur detection module.
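The mIoU reported above is the standard per-class Intersection over Union averaged across classes. For reference, a minimal NumPy sketch of that computation (the standard definition, not the authors' evaluation code):

    import numpy as np

    def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
        """Mean IoU over the classes present in either the prediction or the label."""
        ious = []
        for c in range(num_classes):
            intersection = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:                  # skip classes absent from both masks
                ious.append(intersection / union)
        return float(np.mean(ious))

On a binary instrument-versus-background task, an mIoU above 98% implies that both per-class IoUs are close to 1.0.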

CONCLUSIONS:

Experimental results on the dataset demonstrate that the dual-stage video processing method performs instrument segmentation well under endoscopic conditions. This advancement is significant for increasing the intelligence of robotic-assisted spinal endoscopic surgery.
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Med Phys Publication year: 2024 Document type: Article Country of publication: United States
