Can ChatGPT, an Artificial Intelligence Language Model, Provide Accurate and High-quality Patient Information on Prostate Cancer?


COŞKUN B., OCAKOĞLU G., YETEMEN M., KAYGISIZ O.

Urology, vol. 180, pp. 35-58, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 180
  • Publication Date: 2023
  • DOI: 10.1016/j.urology.2023.05.040
  • Journal Name: Urology
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, BIOSIS, CAB Abstracts, CINAHL, Gender Studies Database, Veterinary Science Database
  • Pages: pp. 35-58
  • Bursa Uludağ Üniversitesi Affiliation: Yes

Abstract

OBJECTIVE: To evaluate the performance of ChatGPT, an artificial intelligence (AI) language model, in providing patient information on prostate cancer, and to compare the accuracy, similarity, and quality of the information to a reference source.

METHODS: Patient information material on prostate cancer from the website of the European Association of Urology Patient Information was used as the reference source and served to generate 59 queries. The accuracy of the model's content was determined with F1, precision, and recall scores, similarity was assessed with cosine similarity, and quality was evaluated with a 5-point Likert scale termed the General Quality Score (GQS).

RESULTS: ChatGPT was able to respond to all prostate cancer-related queries. The average F1 score was 0.426 (range: 0-1), the average precision score was 0.349 (range: 0-1), the average recall score was 0.549 (range: 0-1), and the average cosine similarity was 0.609 (range: 0-1). The average GQS was 3.62 ± 0.49 (range: 1-5), and no answer achieved the maximum GQS of 5. Although ChatGPT produced a larger amount of information than the reference, the accuracy and quality of its content were not optimal, with all scores indicating a need for improvement in the model's performance.

CONCLUSION: Caution should be exercised when using ChatGPT as a patient information source for prostate cancer due to limitations in its performance, which may lead to inaccuracies and potential misunderstandings. Further studies, using different topics and language models, are needed to fully understand the capabilities and limitations of AI-generated patient information.
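The abstract reports token-overlap accuracy scores (precision, recall, F1) and cosine similarity between ChatGPT answers and the reference material, but does not specify the tokenization or vectorization used. The snippet below is a minimal sketch of how such metrics are commonly computed, assuming lowercase whitespace tokenization and bag-of-words term-frequency vectors; the example texts are hypothetical and do not come from the study.

```python
# Sketch only: the paper's exact preprocessing and scoring pipeline are not
# described in the abstract, so this assumes whitespace tokenization and
# term-frequency vectors for cosine similarity.
from collections import Counter
import math


def tokenize(text: str) -> list[str]:
    """Lowercase and split on whitespace (assumed preprocessing)."""
    return text.lower().split()


def precision_recall_f1(answer: str, reference: str) -> tuple[float, float, float]:
    """Token-level precision, recall, and F1 of a model answer vs. a reference."""
    ans = Counter(tokenize(answer))
    ref = Counter(tokenize(reference))
    overlap = sum((ans & ref).values())  # number of shared tokens (with multiplicity)
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / sum(ans.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1


def cosine_similarity(answer: str, reference: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    a, b = Counter(tokenize(answer)), Counter(tokenize(reference))
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


if __name__ == "__main__":
    # Hypothetical query/answer pair, for illustration only.
    reference = "Active surveillance is an option for low-risk prostate cancer."
    answer = "For low-risk prostate cancer, active surveillance may be an option."
    p, r, f1 = precision_recall_f1(answer, reference)
    print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
    print(f"cosine similarity={cosine_similarity(answer, reference):.3f}")
```

Under these assumptions, precision penalizes extra model-generated text while recall penalizes omissions, which is consistent with the abstract's observation that ChatGPT produced more text than the reference yet scored lower on precision (0.349) than on recall (0.549).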