Artificial Intelligence Model for a Distinction between Early-Stage Gastric Cancer Invasive Depth T1a and T1b.
Chen, Tsung-Hsing; Kuo, Chang-Fu; Lee, Chieh; Yeh, Ta-Sen; Lan, Jui; Huang, Shih-Chiang.
Affiliation
  • Chen TH; Department of Gastroenterology and Hepatology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan.
  • Kuo CF; Division of Rheumatology, Allergy, and Immunology, Chang Gung Memorial Hospital- Linkou and Chang Gung University College of Medicine, Taoyuan, Taiwan.
  • Lee C; Department of Information and Management, College of Business, National Sun Yat-sen University, Kaohsiung city, Taiwan.
  • Yeh TS; Department of Surgery, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan.
  • Lan J; Department of Anatomic Pathology, Kaohsiung Chang Gung Memorial Hospital, College of Medicine, Chang Gung University, Taoyuan, Taiwan.
  • Huang SC; Department of Anatomical Pathology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Taoyuan, Taiwan.
J Cancer; 15(10): 3085-3094, 2024.
Article in English | MEDLINE | ID: mdl-38706899
ABSTRACT

Background:

Endoscopic submucosal dissection (ESD) is a widely accepted treatment for patients with mucosal (T1a) disease without lymph node metastasis. However, the inconsistent quality of tumor staging with the standard combination of endoscopic ultrasound (EUS) and computed tomography (CT) limits its use.

Methods:

We conducted a study using data augmentation and artificial intelligence (AI) to address the early gastric cancer (EGC) staging problem. The proposed AI model simplifies early-cancer treatment planning by eliminating the need for ultrasound or other staging methods. We developed the model using data augmentation and the You-Only-Look-Once (YOLO) approach, collecting a white-light image dataset of 351 stage T1a and 542 stage T1b images to build, test, and validate it. An external white-light image dataset of 47 T1a and 9 T1b images was then collected to validate the model; the external validation indicated that the model also generalizes to other peer health institutes.
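The paper's training pipeline is not public; as an illustration only, the stratified k-fold protocol implied by the Methods can be sketched in plain Python using the abstract's dataset sizes. The 5-fold choice and the fold-assignment scheme are assumptions, since the abstract does not state them:

```python
import random

def stratified_kfold(labels, k=5, seed=42):
    """Yield (train_idx, test_idx) pairs with per-class balance preserved in each fold."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)  # deal shuffled indices round-robin across folds
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

# Dataset sizes from the abstract: 351 T1a and 542 T1b white-light images.
labels = ["T1a"] * 351 + ["T1b"] * 542
for train, test in stratified_kfold(labels, k=5):
    # model training and evaluation would go here; sizes shown for illustration
    print(len(train), len(test))
```

Each fold holds out roughly one fifth of both classes, so sensitivity and specificity can be averaged over folds as reported in the Results.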

Results:

The results of k-fold cross-validation on the original dataset demonstrated that the proposed model had an average sensitivity of 85.08% and an average specificity of 87.17%, with an average accuracy of 86.18%. The external dataset yielded similar validation results: a sensitivity of 82.98%, a specificity of 77.78%, and an overall accuracy of 82.14%.
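The external-dataset figures are internally consistent. Treating T1a as the positive class, a short arithmetic check reproduces them; note that the per-class hit counts (39 of 47 T1a, 7 of 9 T1b) are inferred from the reported percentages, not stated in the abstract:

```python
# External dataset from the abstract: 47 T1a (positive) and 9 T1b (negative) images.
tp = 39          # inferred: 39/47 = 82.98% sensitivity
tn = 7           # inferred: 7/9  = 77.78% specificity
fn = 47 - tp     # T1a images misclassified as T1b
fp = 9 - tn      # T1b images misclassified as T1a

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, accuracy={accuracy:.2%}")
# → sensitivity=82.98%, specificity=77.78%, accuracy=82.14%
```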

Conclusions:

Our findings suggest that the AI model can effectively replace EUS and CT in early gastric cancer staging, with an average validation accuracy of 86.18% on the original dataset from Linkou Chang Gung Memorial Hospital and 82.14% on the external validation dataset from Kaohsiung Chang Gung Memorial Hospital. Moreover, our AI model's accuracy outperformed the average accuracy of EUS and CT reported in previous literature (around 70%).

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: J Cancer Year: 2024 Document type: Article Country of affiliation: Taiwan Country of publication: Australia
