BOOK AND MEDIA REVIEWS

OpenEvidence

Velyn Wu, MD | Jed Casauay, BS, BA

Fam Med.

Published: 10/18/2024 | DOI: 10.22454/FamMed.2024.587513

Media: OpenEvidence (www.openevidence.com)

Developer: Daniel Nadler

Artificial intelligence (AI) may expand our options for resources to use as peripheral brains while we provide clinical care and teach. With the recent public explosion of large language models (LLMs), which we can ask conversationally for help with tasks such as creating schedules, meal plans, and images, AI also has the potential to help us in our daily clinical and teaching activities.1,2 Some AI tools have also shown high accuracy in choosing correct answers on medical licensing exams.3,4 OpenEvidence (OE) is an LLM specifically trained for medicine with the aim “to aggregate, synthesize, and visualize clinically relevant evidence in understandable, accessible formats that can be used to make more evidence-based decisions and improve patient outcomes.”5,6 It was created by Daniel Nadler and developed with support from the Mayo Clinic Platform Accelerate program.6,7 It has also demonstrated high accuracy in answering board questions, scoring above 90% on the United States Medical Licensing Examination.8 OE now partners with Elsevier’s subscription-based ClinicalKey AI, with the aim of delivering a “next-generation clinical decision support tool that combines the most recent and reputable evidence-based medical content with generative artificial intelligence (AI) to help physicians at the point of care.”9

Clinicians and learners can set up an account for unlimited, free access to OE, which is accessed via an internet browser. The clinical question is typed into the clearly labeled field in the middle of the screen. OE then provides a scholarly-style response with citations embedded in the text. The references are listed below the response, each with a “details” button that expands to show the text OE summarized in its response. Clicking on a reference links directly to its PubMed abstract. OE also suggests relevant follow-up questions to further explore the topic.

For this review, I researched the same clinical questions with OE, commonly used internet-based evidence-based clinical resources (DynaMed, UpToDate), and popular LLMs (GPT-4, Llama-3.1, Copilot) during 2 weeks of direct patient care. All clinical resources and LLMs provided similar information. For broad questions, OE provided responses in less time than it takes to read through a clinical resource’s text. For very targeted questions, such as medication doses, OE took longer to provide a response. As with other LLMs, OE can analyze a deidentified patient history and provide a possible diagnosis and management plan. It can also suggest a response to patient messages. It does not craft board-style questions or create images.

Regarding the types of references provided by LLMs, OE cites recent articles from reputable journals and society guidelines. Copilot frequently cites other websites, while GPT-4 and Llama-3.1 do not provide citations within their responses. Although a distinct advantage of OE over other LLMs is its summarization of references, it can access only freely available information, such as abstracts, and not always the entire article. Reading the linked PubMed abstract helps the user quickly validate OE’s response.

The clinician-educator and learner will find OE useful for quickly finding targeted answers to clinical questions while caring for patients. Learners can use it during clinical learning experiences to quickly formulate informed differential diagnoses and opinions for patient care. Whereas other point-of-care resources (eg, AMBOSS, UpToDate) are either subscription-based or too lengthy to sift through in the fast-paced clinic environment, one can quickly input patient-specific queries into OE and receive reliable responses in the time between seeing a patient and presenting to faculty.

While OE has strong utility as a targeted point-of-care clinical resource, it may not be as useful as a comprehensive information tool. Because its responses are short and narrowly focused, it does not readily provide expanded medical knowledge relevant to the topic, which may lead to premature closure for the novice learner or tired clinician. Therefore, the clinician-educator and learner should be aware of unconscious gaps in knowledge and work together to strengthen their curiosity and their ability to ask high-yield clinical questions.

Overall, OE can provide responses to questions ranging from basic science knowledge to suggestions for patient evaluation and management. OE is not peer-reviewed, and, as emphasized in its terms of use, it is not a substitute for clinical expertise and “does not provide medical advice, diagnosis or treatment.”10 The clinician who uses OE still bears the responsibility of assessing the applicability and validity of OE’s responses in the clinical context. Nonetheless, as a free and reliable resource, OE is a welcome addition to the clinical toolbox to augment patient care and medical education.

References

  1. Gencer G, Gencer K. A Comparative Analysis of ChatGPT and Medical Faculty Graduates in Medical Specialization Exams: Uncovering the Potential of Artificial Intelligence in Medical Education. Cureus. 2024;16(8):e66517. Published 2024 Aug 9. doi:10.7759/cureus.66517
  2. Young RA, Martin CM, Sturmberg JP, et al. What Complexity Science Predicts About the Potential of Artificial Intelligence/Machine Learning to Improve Primary Care. J Am Board Fam Med. 2024;37(2):332-345. doi:10.3122/jabfm.2023.230219R1
  3. Hanna RE, Smith LR, Mhaskar R, Hanna K. Performance of Language Models on the Family Medicine In-Training Exam. Fam Med. 2024;56(9):555-560. doi:10.22454/FamMed.2024.233738
  4. Wang T, Mainous AG III, Stelter K, O’Neill TR, Newton WP. Performance Evaluation of the Generative Pre-trained Transformer (GPT-4) on the Family Medicine In-Training Examination. J Am Board Fam Med. Published online 2024. doi:10.3122/jabfm.2023.230433R1
  5. OpenEvidence API. OpenEvidence. Accessed October 17, 2024. https://docs.openevidence.com/index.html#
  6. About. OpenEvidence. Accessed October 17, 2024. https://www.openevidence.com/about
  7. OpenEvidence to Become a Mayo Clinic Platform Accelerate Company. OpenEvidence. March 28, 2023. Accessed October 17, 2024. https://www.openevidence.com/announcements/openevidence-to-become-a-mayo-clinic-platform-accelerate-company
  8. OpenEvidence AI becomes the first AI in history to score above 90% on the United States Medical Licensing Examination (USMLE). OpenEvidence. July 14, 2023. Accessed October 17, 2024. https://www.openevidence.com/announcements/openevidence-ai-first-ai-score-above-90-percent-on-the-usmle
  9. Elsevier Health partners with OpenEvidence to launch next generation ClinicalKey AI. Elsevier [press release]. November 15, 2023. Accessed October 17, 2024. https://www.elsevier.com/about/press-releases/elsevier-health-partners-with-openevidence-to-deliver-trusted-evidence-based
  10. Xyla Inc. Network Terms of Use. Updated April 30, 2024. Accessed October 17, 2024. https://www.openevidence.com/policies/terms

Lead Author

Velyn Wu, MD

Affiliations: Family Medicine - Springhill, University of Florida College of Medicine, Gainesville, FL

Co-Authors

Jed Casauay, BS, BA - University of Florida College of Medicine, Gainesville, FL
