Harnessing LLMs for multi-dimensional writing assessment: Reliability and alignment with human judgments.
Tang, Xiaoyi; Chen, Hongwei; Lin, Daoyu; Li, Kexin.
Affiliations
  • Tang X; School of Foreign Studies, University of Science and Technology Beijing, Beijing 100083, China.
  • Chen H; School of Foreign Studies, University of Science and Technology Beijing, Beijing 100083, China.
  • Lin D; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China.
  • Li K; School of Foreign Studies, University of Science and Technology Beijing, Beijing 100083, China.
Heliyon ; 10(14): e34262, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39113951
ABSTRACT
Recent advancements in natural language processing, computational linguistics, and Artificial Intelligence (AI) have propelled the use of Large Language Models (LLMs) in Automated Essay Scoring (AES), offering efficient and unbiased writing assessment. This study assesses the reliability of LLMs in AES tasks, focusing on scoring consistency and alignment with human raters. We explore the impact of prompt engineering, temperature settings, and multi-level rating dimensions on the scoring performance of LLMs. Results indicate that prompt engineering significantly affects the reliability of LLMs, with GPT-4 showing marked improvement over GPT-3.5 and Claude 2, achieving 112% and 114% increases in scoring accuracy under the criteria- and sample-referenced justification prompt. Temperature settings also influence the output consistency of LLMs, with lower temperatures producing scores more in line with human evaluations, which is essential for maintaining fairness in large-scale assessment. Regarding multi-dimensional writing assessment, results indicate that GPT-4 performs well in the Ideas (QWK=0.551) and Organization (QWK=0.584) dimensions under well-crafted prompt engineering. These findings pave the way for a comprehensive exploration of LLMs' broader educational implications, offering insights into their capability to refine and potentially transform writing instruction, assessment, and the delivery of diagnostic and personalized feedback in the AI-powered educational age. While this study focused on the reliability and alignment of LLM-powered multi-dimensional AES, future research should broaden its scope to encompass diverse writing genres and a more extensive sample from varied backgrounds.
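For context, the QWK values reported above are quadratic weighted kappa scores, a standard measure of agreement between two raters on an ordinal scale. A minimal pure-Python sketch of the statistic (illustrative only; the ratings below are hypothetical, not data from the study):

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer ratings."""
    n = max_rating - min_rating + 1
    # Observed confusion matrix of rating pairs
    conf = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        conf[a - min_rating][b - min_rating] += 1
    num_items = len(rater_a)
    # Marginal histograms, used to build the expected (chance) matrix
    hist_a = [sum(row) for row in conf]
    hist_b = [sum(col) for col in zip(*conf)]
    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / num_items
            numerator += w * conf[i][j]
            denominator += w * expected
    return 1.0 - numerator / denominator

# Hypothetical human vs. LLM scores on a 1-5 scale
human = [3, 4, 2, 5, 3, 4, 1, 2]
llm   = [3, 4, 3, 5, 2, 4, 2, 2]
print(round(quadratic_weighted_kappa(human, llm, 1, 5), 3))  # → 0.857
```

A QWK of 1.0 indicates perfect agreement and 0 indicates chance-level agreement; the quadratic weights penalize large rating discrepancies more heavily than adjacent-category disagreements, which suits essay-scoring rubrics.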
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Heliyon Year: 2024 Document type: Article Country of affiliation: China Country of publication: United Kingdom