Self-supervised monocular depth estimation for high field of view colonoscopy cameras.
Mathew, Alwyn; Magerand, Ludovic; Trucco, Emanuele; Manfredi, Luigi.
Affiliation
  • Mathew A; Division of Imaging Science and Technology, School of Medicine, University of Dundee, Dundee, United Kingdom.
  • Magerand L; Discipline of Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom.
  • Trucco E; Discipline of Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom.
  • Manfredi L; Division of Imaging Science and Technology, School of Medicine, University of Dundee, Dundee, United Kingdom.
Front Robot AI ; 10: 1212525, 2023.
Article in English | MEDLINE | ID: mdl-37559569
Optical colonoscopy is the gold-standard procedure for detecting colorectal cancer, the fourth most common cancer in the United Kingdom. Up to 22%-28% of polyps can be missed during the procedure, and missed polyps are associated with interval cancer. A vision-based autonomous soft endorobot for colonoscopy could drastically improve the accuracy of the procedure by inspecting the colon more systematically and with reduced patient discomfort. A three-dimensional understanding of the environment is essential for robot navigation and can also improve the adenoma detection rate. Monocular depth estimation with deep learning has progressed substantially, but collecting ground-truth depth maps remains a challenge because no 3D camera can be fitted to a standard colonoscope. This work addresses the issue with a self-supervised monocular depth estimation model that learns depth directly from video sequences via view synthesis. In addition, our model accommodates the wide field-of-view cameras typically used in colonoscopy and handles domain-specific challenges such as deformable surfaces, specular lighting, non-Lambertian surfaces, and heavy occlusion. We present a qualitative analysis on a synthetic data set, a quantitative evaluation of the model trained on colonoscopy data, and near real-time results on real colonoscopy videos.
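The self-supervision signal described in the abstract comes from view synthesis: pixels of one frame are back-projected with the predicted depth, reprojected into a neighbouring frame, and the photometric difference between the real and synthesised views is minimised. A minimal NumPy sketch of that reprojection loss is below; it assumes a pinhole camera model with intrinsics K and a known relative pose T (the paper's actual model targets wide field-of-view optics and learns the pose jointly, so this is an illustration of the principle, not the authors' implementation).

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift every pixel (u, v) to a 3D camera-frame point using per-pixel depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    return (K_inv @ pix) * depth.reshape(1, -1)                     # 3 x (h*w) points

def warp_source_to_target(src_img, depth, K, T):
    """Synthesise the target view by sampling the source image at the
    reprojected pixel locations (nearest-neighbour sampling for brevity;
    real models use differentiable bilinear sampling)."""
    h, w = depth.shape
    pts = backproject(depth, np.linalg.inv(K))   # target-frame 3D points
    pts = T[:3, :3] @ pts + T[:3, 3:4]           # rigid motion into the source frame
    proj = K @ pts                               # project back to pixels
    u = np.round(proj[0] / proj[2]).astype(int).clip(0, w - 1)
    v = np.round(proj[1] / proj[2]).astype(int).clip(0, h - 1)
    return src_img[v, u].reshape(h, w)

def photometric_loss(target, synthesised):
    """Mean L1 photometric error: the core self-supervised training signal."""
    return np.abs(target - synthesised).mean()
```

With an identity pose the warp reproduces the source frame exactly, so the loss is zero; during training, gradients of this loss with respect to the predicted depth (and pose) drive the depth network without any ground-truth depth maps.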
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Qualitative_research Language: En Journal: Front Robot AI Year: 2023 Document type: Article Country of affiliation: United Kingdom Country of publication: Switzerland