BMC Pediatrics, vol. 26, no. 1, 2026 (SCI-Expanded, Scopus)
Aim: This study aimed to evaluate the quality, reliability, and readability of the answers given by large language model (LLM) chatbots to frequently asked questions about sudden infant death syndrome (SIDS).

Method: Three widely used LLM chatbots (ChatGPT-4, ChatGPT-4o, and Gemini 1.5 Pro) were asked 21 questions that parents frequently ask about SIDS. The questions were grouped into three categories: all questions, general information, and prevention. The content of the answers was evaluated by two experts using the Global Quality Scale (GQS), the modified DISCERN instrument, and five different readability indices.

Results: ChatGPT-4 achieved the highest GQS score for responses to all questions about SIDS (p < 0.05). No statistically significant difference was found between the GQS scores of the LLMs' responses in the general information and prevention categories (p > 0.05). In terms of reliability, Gemini 1.5 Pro had the highest modified DISCERN score among the language models, with a statistically significant difference in the all-questions and general information categories (p < 0.05). In terms of readability, Gemini 1.5 Pro's responses were easier to read than those of the other LLMs (p < 0.05).

Conclusion: Although large language models have the potential to provide parents with accurate, comprehensive, quality, reliable, and readable information about SIDS, they also have limitations.