Multimodal Deep Learning Network for Differentiating Between Benign and Malignant Pulmonary Ground Glass Nodules.
Liu, Gang; Liu, Fei; Mao, Xu; Xie, Xiaoting; Sang, Jingyao; Ma, Husai; Yang, Haiyun; He, Hui.
Affiliation
  • Liu G; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • Liu F; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • Mao X; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • Xie X; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • Sang J; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • Ma H; Department of Thoracic Surgery, Qinghai Red Cross Hospital, Xining, China.
  • Yang H; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
  • He H; Department of Radiological Interventional, Qinghai Red Cross Hospital, Xining, China.
Curr Med Imaging; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39257154
ABSTRACT

OBJECTIVE:

This study aimed to establish a multimodal deep-learning network model to enhance the diagnosis of benign and malignant pulmonary ground-glass nodules (GGNs).

METHODS:

Retrospective data on pulmonary GGNs were collected from multiple centers across China, including North, Northeast, Northwest, South, and Southwest China. These data were divided into a training set and a validation set in an 8:2 ratio. In addition, a GGN dataset obtained from our hospital database was used as the test set. All patients underwent chest computed tomography (CT), and the final diagnosis of each nodule was based on the postoperative pathology report. A Residual Network (ResNet) was used to extract imaging features, the Word2Vec method to extract semantic information from patient data, and a self-attention mechanism to fuse the imaging features with the patient data into a multimodal classification model. The diagnostic performance of the proposed multimodal model was then compared with that of existing ResNet and VGG models and with that of radiologists.
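
The fusion step described above can be illustrated with a minimal sketch, assuming PyTorch and torchvision; the class and parameter names (MultimodalGGNClassifier, embed_dim, clinical_vec) and the 300-dimensional Word2Vec input are illustrative assumptions, not the authors' published implementation.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class MultimodalGGNClassifier(nn.Module):
        """Sketch: ResNet image features + Word2Vec patient features fused by self-attention."""

        def __init__(self, embed_dim=256, num_heads=4, num_classes=2):
            super().__init__()
            # Imaging branch: ResNet backbone with its final FC layer replaced
            # by a projection into the shared embedding space. A CT slice is
            # assumed to be replicated to 3 channels to match ResNet's input.
            backbone = resnet18(weights=None)
            backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
            self.image_encoder = backbone
            # Semantic branch: a precomputed 300-d Word2Vec vector of patient
            # data (an assumed dimensionality), projected to the same space.
            self.text_proj = nn.Linear(300, embed_dim)
            # Self-attention fuses the two modality tokens.
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.classifier = nn.Linear(embed_dim, num_classes)

        def forward(self, ct_image, clinical_vec):
            img_tok = self.image_encoder(ct_image)        # (B, embed_dim)
            txt_tok = self.text_proj(clinical_vec)        # (B, embed_dim)
            tokens = torch.stack([img_tok, txt_tok], 1)   # (B, 2, embed_dim)
            fused, _ = self.attn(tokens, tokens, tokens)  # cross-modal attention
            return self.classifier(fused.mean(dim=1))     # benign vs. malignant logits

    # Example forward pass with dummy inputs:
    # model = MultimodalGGNClassifier()
    # logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 300))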

RESULTS:

The multicenter dataset comprised 1020 GGNs, including 265 benign and 755 malignant nodules, and the test dataset comprised 204 GGNs, with 67 benign and 137 malignant nodules. In the validation set, the proposed multimodal model achieved an accuracy of 90.2%, a sensitivity of 96.6%, and a specificity of 75.0%, surpassing the VGG (73.1%, 76.7%, and 66.5%) and ResNet (78.0%, 83.3%, and 65.8%) models in diagnosing benign and malignant nodules. In the test set, the multimodal model accurately diagnosed 125 (91.18%) malignant nodules, outperforming radiologists (80.37% accuracy). Moreover, the multimodal model correctly identified 54 benign nodules (accuracy, 80.70%), compared with the radiologists' accuracy of 85.47%. The consistency test comparing the radiologists' and the multimodal model's diagnoses against postoperative pathology showed strong agreement, with the multimodal model aligning more closely with the gold-standard pathological findings (Kappa=0.720, P<0.01).
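
The agreement statistic reported above is Cohen's kappa; a minimal sketch of its computation, assuming scikit-learn, is shown below. The label arrays are placeholders, not the study data.

    from sklearn.metrics import cohen_kappa_score

    # Placeholder labels: 1 = malignant, 0 = benign.
    pathology  = [1, 1, 0, 1, 0, 1, 0, 0]  # gold-standard postoperative pathology
    model_pred = [1, 1, 0, 1, 1, 1, 0, 0]  # multimodal model predictions

    # Kappa corrects raw agreement for chance; 1.0 is perfect agreement.
    print(cohen_kappa_score(pathology, model_pred))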

CONCLUSION:

The multimodal deep-learning network model showed promising diagnostic performance in distinguishing benign from malignant GGNs and therefore holds potential as a reference tool to help radiologists improve diagnostic accuracy and work efficiency in clinical settings.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Curr Med Imaging Year: 2024 Document type: Article Country of affiliation: China Country of publication: United Arab Emirates
