The Emerging Role of AI in Patient Education: A Comparative Analysis of the Accuracy of Large Language Models for Pelvic Organ Prolapse


Rahimli Ocakoglu S., COŞKUN B.

Medical Principles and Practice, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Publication Date: 2024
  • DOI: 10.1159/000538538
  • Journal Name: Medical Principles and Practice
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, CINAHL, MEDLINE, Directory of Open Access Journals
  • Keywords: Artificial intelligence, Large language model, Patient information, Pelvic organ prolapse
  • Bursa Uludağ University Affiliated: Yes

Abstract

Introduction: This study aimed to evaluate the accuracy, completeness, precision, and readability of outputs generated by three large language models (LLMs): GPT by OpenAI, BARD by Google, and Bing by Microsoft, in comparison with patient education material on pelvic organ prolapse (POP) provided by the Royal College of Obstetricians and Gynaecologists (RCOG).

Methods: A total of 15 questions were retrieved from the RCOG website and input into the three LLMs. Two independent reviewers evaluated the outputs for accuracy, completeness, and precision. Readability was assessed using the Simplified Measure of Gobbledygook (SMOG) score and the Flesch-Kincaid Grade Level (FKGL) score.

Results: Significant differences were observed in completeness and precision. ChatGPT ranked highest in completeness (66.7%), while Bing led in precision (100%). No significant differences in accuracy were observed across the models. In terms of readability, ChatGPT's outputs were more difficult to read than those of BARD, Bing, and the original RCOG answers.

Conclusion: All models answered the RCOG questions on patient information for POP with varying degrees of correctness. ChatGPT produced the most complete answers, significantly surpassing BARD and Bing, but its answers were the hardest to read; Bing provided the most precise, relevant, and concise answers. These findings highlight the potential of LLMs in health information dissemination and the need for careful interpretation of their outputs.
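
The readability metrics named in the Methods, FKGL and SMOG, are standard formulas based on sentence, word, and syllable counts. The sketch below shows how such scores are typically computed; it is not the authors' scoring pipeline, and the vowel-group syllable counter is a simple heuristic introduced here purely for illustration (published tools use more elaborate rules or dictionaries).

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (approximate only).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]

    # Flesch-Kincaid Grade Level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    fkgl = (0.39 * (len(words) / len(sentences))
            + 11.8 * (sum(syllables) / len(words)) - 15.59)

    # SMOG: based on words with 3+ syllables, normalized to 30 sentences.
    polysyllables = sum(1 for s in syllables if s >= 3)
    smog = 1.0430 * sqrt(polysyllables * (30 / len(sentences))) + 3.1291

    return {"FKGL": round(fkgl, 2), "SMOG": round(smog, 2)}

# Example usage on a short illustrative passage (values are illustrative only).
sample = ("Pelvic organ prolapse happens when the pelvic organs drop from "
          "their normal position. Pelvic floor exercises can help with symptoms.")
print(readability_scores(sample))
```

Higher values on either scale correspond to text that requires more years of schooling to read comfortably, which is the sense in which ChatGPT's answers were reported as more difficult than the RCOG material.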