Digesting Digital Health: A Study of Appropriateness and Readability of ChatGPT-Generated Gastroenterological Information.
Toiv, Avi; Saleh, Zachary; Ishak, Angela; Alsheik, Eva; Venkat, Deepak; Nandi, Neilanjan; Zuchelli, Tobias E.
Affiliations
  • Toiv A; Department of Internal Medicine, Henry Ford Hospital, Detroit, MI, USA.
  • Saleh Z; Department of Gastroenterology, Henry Ford Hospital, Detroit, MI, USA.
  • Ishak A; Department of Internal Medicine, Henry Ford Hospital, Detroit, MI, USA.
  • Alsheik E; Department of Gastroenterology, Henry Ford Hospital, Detroit, MI, USA.
  • Venkat D; Department of Gastroenterology, Henry Ford Hospital, Detroit, MI, USA.
  • Nandi N; Department of Gastroenterology, University of Pennsylvania, Philadelphia, PA, USA.
  • Zuchelli TE; Department of Gastroenterology, Henry Ford Hospital, Detroit, MI, USA.
Article in English | MEDLINE | ID: mdl-39212302
ABSTRACT
BACKGROUND AND AIMS:

The advent of artificial intelligence-powered large language models capable of generating interactive responses to intricate queries marks a groundbreaking development in how patients access medical information. Our aim was to evaluate the appropriateness and readability of gastroenterological information generated by ChatGPT.

METHODS:

We analyzed responses generated by ChatGPT to 16 dialogue-based queries assessing symptoms and treatments for gastrointestinal conditions and 13 definition-based queries on prevalent topics in gastroenterology. Three board-certified gastroenterologists rated output appropriateness on a 5-point Likert scale across 6 domains: currency, relevance, accuracy, comprehensiveness, clarity, and urgency/next steps. Outputs scoring 4 or 5 in all 6 domains were designated "appropriate." Output readability was assessed with the Flesch Reading Ease score, the Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook (SMOG) score.
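The three readability metrics named above are standard published formulas over word, sentence, and syllable counts. A minimal sketch in Python, assuming the counts themselves have already been extracted from the text (the coefficients below are the published formula constants):

```python
import math

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher is easier.
    Roughly, 90-100 reads at a 5th-grade level; 0-30 at a college-graduate level."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables: int, sentences: int) -> float:
    """SMOG (Simple Measure of Gobbledygook): grade estimate from the count of
    words with 3+ syllables; the formula assumes a sample of ~30 sentences."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
```

For example, a 100-word, 5-sentence passage with 150 syllables scores a Flesch Reading Ease of about 59.6 and a Flesch-Kincaid grade of about 9.9; scores implying college-level proficiency (as reported in the Results) correspond to a Flesch Reading Ease below roughly 50 or grade-level estimates of 13 and above.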

RESULTS:

ChatGPT responses to 44% of the 16 dialogue-based and 69% of the 13 definition-based questions were deemed appropriate, and the proportion of appropriate responses within the 2 groups of questions was not significantly different (P = .17). Notably, none of ChatGPT's responses to questions related to gastrointestinal emergencies were designated appropriate. The mean readability scores showed that outputs were written at a college-level reading proficiency.

CONCLUSION:

ChatGPT can produce generally appropriate responses to gastroenterological medical queries, but its responses were limited in both appropriateness and readability, which constrains the current utility of this large language model. Substantial development is needed before these models can be unequivocally endorsed as reliable sources of medical information.

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Clin Transl Gastroenterol Year: 2024 Document type: Article Affiliation country: United States Publication country: United States
