Evaluating Artificial Intelligence Competency in Education: Performance of ChatGPT-4 in the American Registry of Radiologic Technologists (ARRT) Radiography Certification Exam.
Al-Naser, Yousif; Halka, Felobater; Ng, Boris; Mountford, Dwight; Sharma, Sonali; Niure, Ken; Yong-Hing, Charlotte; Khosa, Faisal; Van der Pol, Christian.
Affiliation
  • Al-Naser Y; Medical Radiation Sciences, McMaster University, Hamilton, ON, Canada; Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada. Electronic address: alnasery@mcmaster.ca.
  • Halka F; Department of Pathology and Laboratory Medicine, Schulich School of Medicine & Dentistry, Western University, Canada.
  • Ng B; Department of Mechanical and Industrial Engineering, University of Toronto, ON, Canada.
  • Mountford D; Medical Radiation Sciences, McMaster University, Hamilton, ON, Canada.
  • Sharma S; Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.
  • Niure K; Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada.
  • Yong-Hing C; Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.
  • Khosa F; Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.
  • Van der Pol C; Department of Diagnostic Imaging, Juravinski Hospital and Cancer Centre, Hamilton Health Sciences, McMaster University, Hamilton, Ontario, Canada.
Acad Radiol ; 2024 Aug 16.
Article en En | MEDLINE | ID: mdl-39153961
ABSTRACT
RATIONALE AND OBJECTIVES:

The American Registry of Radiologic Technologists (ARRT) leads the certification process with an exam comprising 200 multiple-choice questions. This study aims to evaluate ChatGPT-4's performance on practice questions similar to those found on the ARRT board examination.

MATERIALS AND METHODS:

We used a dataset of 200 practice multiple-choice questions for the ARRT certification exam from BoardVitals. Each question was submitted to ChatGPT-4 fifteen times, yielding 3,000 observations to account for response variability.

RESULTS:

ChatGPT-4's overall accuracy was 80.56%, with higher accuracy on text-based questions (86.3%) than on image-based questions (45.6%). Response times were longer for image-based questions (18.01 s) than for text-based questions (13.27 s). Performance varied by domain: 72.6% for Safety, 70.6% for Image Production, 67.3% for Patient Care, and 53.4% for Procedures. As anticipated, performance was best on easy questions (78.5%).

CONCLUSION:

ChatGPT-4 demonstrated effective performance on the BoardVitals question bank for ARRT certification. Future studies could benefit from analyzing the correlation between BoardVitals scores and actual exam outcomes. Further development in AI, particularly in image processing and interpretation, is necessary to enhance its utility in educational settings.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Acad Radiol Journal subject: RADIOLOGY Year: 2024 Document type: Article Country of publication: United States