TY  - JOUR
DO  - 10.22454/PRiMER.2025.412803
VL  - 9
DA  - 2025/10/20
N2  - Introduction: ChatGPT, a large language model created by OpenAI, has emerged as a new source of online medical information. This study aimed to evaluate the appropriateness, readability, and educational value of ChatGPT’s responses to frequent patient internet queries regarding 10 common primary care diagnoses. Methods: The responses generated by ChatGPT regarding the 10 most frequently encountered primary care diagnoses were assessed for appropriateness and readability by two primary care physicians. Responses were judged on educational value in four categories: basic knowledge, diagnosis, treatment, and prevention. We used a 5-point Likert scale based on accuracy, comprehensiveness, and clarity to determine appropriateness. ChatGPT responses that received ratings of 4-5 on all three criteria were considered appropriate. Conversely, if the outputs received ratings of 1-3 in any category, they were deemed inappropriate. We performed readability assessments using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) formulas to determine whether the responses were at the recommended average American seventh- to eighth-grade reading level. Results: Most (92.5%) responses were deemed appropriate unanimously by both reviewers. ChatGPT provided more appropriate responses regarding basic knowledge than regarding diagnosis, treatment, and prevention. The ChatGPT responses demonstrated a college-graduate reading level, as indicated by the mean FRE score of 25.64 and the median FKGL score of 12.61. Conclusion: Our comprehensive analysis found that ChatGPT's responses were appropriate most of the time. These findings suggest that ChatGPT has potential as a supplementary educational tool for patients seeking health information online.
PB  - Society of Teachers of Family Medicine
AU  - Khadka, Monica
AU  - Rupareliya, Riya
AU  - Khadka, Deepali
AU  - Bisht, Ajay
L2  - http://journals.stfm.org/primer/2025/rupareliya-2025-0022
L1  - http://journals.stfm.org/media/oiun2uqk/primer-9-54.pdf
TI  - Evaluating ChatGPT’s Educational Suitability for Patient Primary Care Queries
ER  - 