Systematic review finds "spin" practices and poor reporting standards in studies on machine learning-based prediction models.
J Clin Epidemiol; 158: 99-110, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37024020
OBJECTIVES: We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques.

STUDY DESIGN AND SETTING: We systematically searched PubMed from 01/2018 to 12/2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty.

RESULTS: We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6% [95% CI 63.4-83.3]) and 53/81 main texts (65.4% [95% CI 54.6-74.9]). Of the 21 abstracts that recommended the model for use in daily practice, 20 (95.2% [95% CI 77.3-99.8]) lacked any external validation of the developed models. Likewise, 74/133 (55.6% [95% CI 47.2-63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in 13/152 (8.6% [95% CI 5.1-14.1]) studies.

CONCLUSION: Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin will enhance the sound reporting of prediction model studies.
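The abstract quotes each proportion with a 95% confidence interval but does not name the interval method. As a minimal illustrative sketch (not the authors' code), the Wilson score interval is one common choice for binomial proportions and closely reproduces most of the intervals quoted above; the function and labels below are assumptions for illustration only.

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half_width, center + half_width

# Proportions reported in the abstract (successes, total); bracketed CIs are the published values.
reported = {
    "discrimination without precision, abstracts": (53, 71),            # 74.6% [63.4-83.3]
    "discrimination without precision, main texts": (53, 81),           # 65.4% [54.6-74.9]
    "clinical-use recommendation without external validation": (74, 133),  # 55.6% [47.2-63.8]
    "reporting guideline cited": (13, 152),                              # 8.6% [5.1-14.1]
}

for label, (x, n) in reported.items():
    lo, hi = wilson_ci(x, n)
    print(f"{label}: {x}/{n} = {x / n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Running this reproduces the bracketed intervals above to one decimal place; whether the authors in fact used the Wilson method is an assumption, since the abstract does not say.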
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Machine Learning
Study type: Guideline / Prognostic_studies / Risk_factors_studies / Systematic_reviews
Limits: Humans
Language: English
Journal: J Clin Epidemiol
Journal subject: Epidemiology
Year: 2023
Document type: Article
Country of publication: United States