Online Patient Education in Obstructive Sleep Apnea: ChatGPT versus Google Search.
Incerti Parenti, Serena; Bartolucci, Maria Lavinia; Biondi, Elena; Maglioni, Alessandro; Corazza, Giulia; Gracco, Antonio; Alessandri-Bonetti, Giulio.
Affiliation
  • Incerti Parenti S; Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Bartolucci ML; Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Biondi E; Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Maglioni A; Postgraduate School of Orthodontics, University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Corazza G; Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Gracco A; Postgraduate School of Orthodontics, University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
  • Alessandri-Bonetti G; Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy.
Healthcare (Basel); 12(17), 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39273804
ABSTRACT
The widespread implementation of artificial intelligence technologies provides an appealing alternative to traditional search engines for online patient healthcare education. This study assessed ChatGPT-3.5's capabilities as a source of obstructive sleep apnea (OSA) information, using Google Search as a comparison. Ten frequently searched questions related to OSA were entered into Google Search and ChatGPT-3.5. The responses were assessed by two independent researchers using the Global Quality Score (GQS), Patient Education Materials Assessment Tool (PEMAT), DISCERN instrument, CLEAR tool, and readability scores (Flesch Reading Ease [FRE] and Flesch-Kincaid Grade Level [FKGL]). ChatGPT-3.5 significantly outperformed Google Search in terms of GQS (5.00 vs. 2.50, p < 0.0001), DISCERN reliability (35.00 vs. 29.50, p = 0.001), and quality (11.50 vs. 7.00, p = 0.02). The CLEAR tool scores indicated that ChatGPT-3.5 provided excellent content (25.00 vs. 15.50, p < 0.001). PEMAT scores showed higher understandability (60-91% vs. 44-80%) and actionability (0-40% vs. 0%) for ChatGPT-3.5. Readability analysis revealed that Google Search responses were easier to read (FRE 56.05 vs. 22.00; FKGL 9.00 vs. 14.00, p < 0.0001). ChatGPT-3.5 delivers higher-quality and more comprehensive OSA information than Google Search, although its responses are less readable. This suggests that while ChatGPT-3.5 can be a valuable tool for patient education, efforts to improve readability are necessary to ensure accessibility and utility for all patients. Healthcare providers should be aware of the strengths and weaknesses of various healthcare information resources and emphasize the importance of critically evaluating online health information, advising patients on its reliability and relevance.
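As a minimal illustration of how the two readability metrics reported in the abstract (FRE and FKGL) are conventionally computed, the Python sketch below applies the standard Flesch formulas to raw text. This is not the study's analysis code; the vowel-group syllable counter is a simplifying assumption for demonstration, whereas published assessments typically rely on validated readability tools.

```python
# Illustrative sketch only: conventional Flesch Reading Ease (FRE) and
# Flesch-Kincaid Grade Level (FKGL) formulas, with a naive syllable counter.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (assumption, not a
    # validated syllabification method).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)

    # Flesch Reading Ease: higher scores indicate easier-to-read text.
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
    # Flesch-Kincaid Grade Level: approximate US school grade needed to read the text.
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    return fre, fkgl

if __name__ == "__main__":
    sample = "Obstructive sleep apnea causes repeated pauses in breathing during sleep."
    fre, fkgl = readability(sample)
    print(f"FRE: {fre:.1f}, FKGL: {fkgl:.1f}")
```

In a comparison like the one described, such scores would be computed over the full response text retrieved from each source rather than a single sample sentence.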
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Healthcare (Basel) Year: 2024 Document type: Article Country of affiliation: Italy Country of publication: Switzerland