Inter-reviewer reliability of human literature reviewing and implications for the introduction of machine-assisted systematic reviews: a mixed-methods review.
Hanegraaf, Piet; Wondimu, Abrham; Mosselman, Jacob Jan; de Jong, Rutger; Abogunrin, Seye; Queiros, Luisa; Lane, Marie; Postma, Maarten J; Boersma, Cornelis; van der Schans, Jurjen.
Affiliation
  • Hanegraaf P; Pitts, Zeist, The Netherlands.
  • Wondimu A; Health-Ecore, Zeist, The Netherlands.
  • Mosselman JJ; Pitts, Zeist, The Netherlands.
  • de Jong R; Pitts, Zeist, The Netherlands.
  • Abogunrin S; F. Hoffmann-La Roche, Basel, Switzerland.
  • Queiros L; F. Hoffmann-La Roche, Basel, Switzerland.
  • Lane M; F. Hoffmann-La Roche, Basel, Switzerland.
  • Postma MJ; Health-Ecore, Zeist, The Netherlands.
  • Boersma C; Unit of Global Health, Department of Health Sciences, University Medical Center Groningen, Groningen, The Netherlands.
  • van der Schans J; Department of Economics, Econometrics & Finance, University of Groningen, Groningen, The Netherlands.
BMJ Open; 14(3): e076912, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38508610
ABSTRACT

OBJECTIVES:

Our main objective is to assess the inter-reviewer reliability (IRR) reported in published systematic literature reviews (SLRs). Our secondary objective is to determine the expected IRR by authors of SLRs for both human and machine-assisted reviews.

METHODS:

We performed a review of SLRs of randomised controlled trials using the PubMed and Embase databases. Data were extracted on IRR, measured as Cohen's kappa, for abstract/title screening, full-text screening and data extraction, together with review team size and the number of items screened; the quality of each review was assessed with A MeaSurement Tool to Assess systematic Reviews 2 (AMSTAR 2). In addition, we surveyed authors of SLRs on their expected IRR for both human-performed and machine learning-assisted SLRs.
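
Cohen's kappa corrects the raw agreement between two reviewers for the agreement expected by chance given each reviewer's marginal inclusion rates. As a minimal sketch (not the authors' code; the decision lists are hypothetical), the Python snippet below computes kappa for ten abstract-screening decisions using scikit-learn:

    # Hypothetical include/exclude decisions (1 = include, 0 = exclude)
    # for ten abstracts screened independently by two reviewers.
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    reviewer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e the agreement expected by chance.
    print(f"Cohen's kappa: {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")

Here the reviewers agree on 9 of 10 items (p_o = 0.9) and chance agreement is p_e = 0.5, so the script prints 0.80, in line with the average abstract-screening kappa reported below.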

RESULTS:

After removal of duplicates, 836 articles were screened at the abstract stage and 413 at the full-text stage. In total, 45 eligible articles were included. The average Cohen's kappa score reported was 0.82 (SD=0.11, n=12) for abstract screening, 0.77 (SD=0.18, n=14) for full-text screening, 0.86 (SD=0.07, n=15) for the whole screening process and 0.88 (SD=0.08, n=16) for data extraction. No association was observed between the reported IRR and review team size, the number of items screened or the quality of the SLR. The survey (n=37) showed overlapping expected Cohen's kappa values, ranging from approximately 0.6 to 0.9, for both human and machine learning-assisted SLRs. No trend was observed between reviewer experience and expected IRR. Authors expect a higher-than-average IRR for machine learning-assisted SLRs compared with human-based SLRs in both screening and data extraction.

CONCLUSION:

Currently, it is not common to report IRR in the scientific literature for either human or machine learning-assisted SLRs. This mixed-methods review gives first guidance on a human IRR benchmark, which could be used as a minimal threshold for IRR in machine learning-assisted SLRs.
PROSPERO REGISTRATION NUMBER: CRD42023386706

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Machine Learning / Systematic Reviews as Topic Limit: Humans Language: English Journal: BMJ Open Year: 2024 Document type: Article Country of affiliation: Netherlands Country of publication: United Kingdom
