Assessing the accuracy and completeness of artificial intelligence language models in providing information on methotrexate use


Coşkun B. N., Yağız B., Ocakoğlu G., Dalkılıç H. E., Pehlivan Y.

Rheumatology International, vol. 44, no. 3, pp. 509-515, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 44 Issue: 3
  • Publication Date: 2024
  • DOI: 10.1007/s00296-023-05473-5
  • Journal Name: Rheumatology International
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, BIOSIS, CAB Abstracts, Veterinary Science Database
  • Page Numbers: pp. 509-515
  • Keywords: Accuracy, Artificial intelligence, Completeness, Large language models, Methotrexate
  • Affiliated with Bursa Uludağ University: Yes

Abstract

We aimed to assess the accuracy and completeness of Large Language Models (LLMs), namely ChatGPT-3.5, ChatGPT-4, BARD, and Bing, in answering methotrexate (MTX)-related questions on the treatment of rheumatoid arthritis. We used 23 questions about MTX concerns drawn from an earlier study. These questions were entered into the LLMs, and the responses generated by each model were rated by two reviewers on Likert scales for accuracy and completeness. The GPT models achieved a 100% correct-answer rate, while BARD and Bing each scored 73.91%. In terms of output accuracy (completely correct responses), GPT-4 achieved 100%, GPT-3.5 reached 86.96%, and BARD and Bing each scored 60.87%. BARD produced 17.39% incorrect responses and 8.7% non-responses, while Bing recorded 13.04% incorrect responses and 13.04% non-responses. The ChatGPT models produced significantly more accurate responses than Bing in the “mechanism of action” category, and the GPT-4 model showed significantly higher accuracy than BARD in the “side effects” category. There were no statistically significant differences among the models in the “lifestyle” category. For completeness, GPT-4 achieved fully comprehensive output on 100% of questions, followed by GPT-3.5 at 86.96%, BARD at 60.86%, and Bing at 0%. In the “mechanism of action” category, both ChatGPT models and BARD produced significantly more comprehensive outputs than Bing; in the “side effects” and “lifestyle” categories, the ChatGPT models showed significantly higher completeness than Bing. The GPT models, particularly GPT-4, demonstrated superior performance in providing accurate and comprehensive patient information on MTX use. However, the study also identified inaccuracies and shortcomings in the generated responses.
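For readers who want to see how such figures arise arithmetically, the sketch below shows one way the per-model percentages could be computed from paired Likert ratings over the 23 questions. The scale anchors, the `summarize` helper, and the sample rating profile are hypothetical illustrations, not the authors' actual scoring procedure.

```python
# Minimal sketch (not the authors' code) of how the abstract's percentages
# could be derived from reviewer Likert ratings over 23 questions. The scale
# anchors assumed here (6 = "completely correct" for accuracy, 3 =
# "comprehensive" for completeness) are illustrative assumptions; the
# abstract does not state the exact scales used.

N_QUESTIONS = 23

def summarize(ratings):
    """ratings: one (accuracy, completeness) pair per question;
    None marks a non-response from the model."""
    answered = [r for r in ratings if r is not None]
    completely_correct = sum(1 for acc, _ in answered if acc == 6)
    incorrect = sum(1 for acc, _ in answered if acc <= 2)
    comprehensive = sum(1 for _, comp in answered if comp == 3)
    non_responses = len(ratings) - len(answered)

    def pct(count):
        return round(100 * count / N_QUESTIONS, 2)

    return {
        "completely_correct_%": pct(completely_correct),
        "incorrect_%": pct(incorrect),
        "non_response_%": pct(non_responses),
        "comprehensive_%": pct(comprehensive),
    }

# Hypothetical BARD-like profile: 14 completely correct, 3 partially correct,
# 4 incorrect, 2 non-responses (14/23 = 60.87%, 4/23 = 17.39%, 2/23 = 8.7%),
# consistent with the figures reported in the abstract.
bard_like = [(6, 3)] * 14 + [(4, 2)] * 3 + [(2, 1)] * 4 + [None] * 2
print(summarize(bard_like))
```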