ORIGINAL ARTICLES

The Association Between Length of Training and Family Medicine Residents’ Clinical Knowledge: A Report From the Length of Training Pilot Study

Patricia A. Carney, PhD, MS | Steele Valenzuela, MS | Annie Ericson, MA | Lars Peterson, MD, PhD | Dang H. Dinh, MS | Colleen M. Conry, MD | James C. Martin, MD | Karen B. Mitchell, MD | Stephanie E. Rosener, MD | Miguel Marino, PhD | M. Patrice Eiff, MD

Fam Med. 2023;55(3):171-179.

DOI: 10.22454/FamMed.2023.427621


Abstract

Background and Objective: The associations between training length and clinical knowledge are unknown. We compared family medicine in-training examination (ITE) scores among residents who trained in 3- versus 4-year programs and to national averages over time.

Methods: In this prospective case-control study, we compared the ITE scores of 318 consenting residents in 3-year programs to 243 who completed 4 years of training between 2013 through 2019. We obtained scores from the American Board of Family Medicine. The primary analyses involved comparing scores within each academic year according to length of training. We used multivariable linear mixed effects regression models adjusted for covariates. We performed simulation models to predict ITE scores after 4 years of training among residents who underwent only 3 years of training.

Results: At baseline postgraduate year 1 (PGY1), the estimated mean ITE scores were 408.5 for 4-year programs and 386.5 for 3-year programs, a 21.9-point difference (95% CI=10.1–33.8). At PGY2 and PGY3, 4-year programs scored 15.0 points and 15.6 points higher, respectively. When extrapolating an estimated mean ITE score for 3-year programs, 4-year programs would still score 29.4 points higher (95% CI=15.0–43.8). Our trend analysis revealed that those in 4-year programs had a slightly smaller increase in slope than those in 3-year programs in the first 2 years, and their drop-off in ITE scores was less steep in later years, though these differences were not statistically significant.

Conclusions: While we found significantly higher absolute ITE scores in 4- versus 3-year programs, the increases in PGY2, PGY3, and PGY4 may be due to initial differences in PGY1 scores. Additional research is needed to support a decision to change the length of family medicine training.

Introduction

In-training examination (ITE) scores provide formative assessments of residents’ progression toward developing the clinical knowledge needed to practice independently, and provide residency programs with comparative data to help determine whether a program is meeting its educational objectives. 1 In family medicine, the ITE has been found to be predictive of performance on the American Board of Family Medicine (ABFM) Certification Examination, 2 a finding also reported in other disciplines. 3, 4 Studies examining factors that predict ITE scores have found that being married and having higher prior examination scores (eg, United States Medical Licensing Examination Step 1 and Step 2) were predictive of higher ITE scores, 5-8 while having a high debt load, being an international medical school graduate, having trained in an osteopathic versus allopathic program, and underrepresented race/ethnicity were predictive of lower ITE scores. 9, 10

Studies of the ITE in family medicine have examined how predictive scores are when taken in the first year of residency compared to the second year 11 or beyond, 2 the impact of educational interventions for at-risk residents, 12 and how educational innovations in residency training affected ITE scores. 13 O’Neill et al 2 specifically examined how resident performance on the ITE differed over time and found that exam scores tend to increase annually, though the average increase lessens in each successive year. Shokar examined an educational intervention for at-risk residents that increased ITE scores, though not statistically significantly, 12 and Waller et al examined the impact of educational innovations as part of the Preparing the Personal Physician for Practice project, finding that residency education redesign did not negatively affect ITE scores. 13

The Length of Training Pilot Study (LoTP) in family medicine has as one of its research questions, “What associations exist between length of training and residents’ clinical knowledge?” 14 To address this question, we partnered with the ABFM to examine ITE scores of residents undertaking 3 versus 4 years of training at an LoTP site to explore the hypothesis that no significant differences in clinical knowledge scores would be found among residents who underwent 3 versus 4 years of training.

Methods

The Length of Training Pilot

The LoTP is a mixed-methods prospective case-control pilot study running from 2013-2023 designed to assess several associations between the length of residency training in family medicine and learner outcomes, such as scope of practice, preparedness for independent practice and clinical knowledge. 14 Residency programs that had already transitioned to 4 years of training or that were planning to do so applied for the pilot in 2012. Those selected included six civilian programs and four Navy programs. The 4-year (4YR) civilian programs were matched to 3-year programs (3YR) based on region, size, and continuity clinic setting. Because of the large size of one 4YR program, two 3YR programs were matched to it to ensure equivalent numbers of residents in 3YR and 4YR groups.

A total of 17 residency programs, all in good standing with the Accreditation Council for Graduate Medical Education and all agreeing to participate in required evaluation activities, were selected to participate (seven 3YR civilian programs, six 4YR civilian programs, and four Navy programs). We excluded the Navy programs from these analyses because their training setting and content differ from civilian programs. The 4YR programs included two university-based programs, located at and administered by universities that include a medical school as well as residency programs, and four community-based programs, sponsored by their local hospitals but affiliated with medical schools in their region. They ranged in size from six to 22 residents per year. Four of the six 4YR programs required 4 years of training for all graduates, while two offered an optional fourth year; residents in those programs knew at the time of entry that completing a fourth year was possible. Curricular alterations varied among the programs undertaking 4 years of training. The 3YR programs included two that were university based, four that were community based and medical school affiliated, and one that was community based and nonaffiliated; they ranged in size from six to 11 residents per year.

All LoTP evaluation activities are overseen by researchers in the Department of Family Medicine at Oregon Health & Science University (OHSU). All LoTP programs obtained local Institutional Review Board (IRB) approval, and OHSU’s IRB granted an educational exemption to obtain data from the study sites (IRB #9770). All participating residents were invited to consent to allow their deidentified ITE scores to be shared with OHSU under a data use agreement between OHSU and the ABFM.

The Family Medicine In-Training Examination and Data Ascertainment

The ITE consists of 200 multiple-choice questions written by ABFM board-certified family physicians who are in private practice or work in an academic setting. 1 Before administration, all questions are reviewed by a committee consisting of current or former residency program directors. The ITE is administered using an online format to approximately 10,000 residents from just over 700 residency programs each year in late October, and the number of residents in each year of residency is fairly evenly distributed. 1 The possible range of scores for the ITE is 200-800.

We obtained ITE scores for all consenting residents in the LoTP programs for 2013-2019 from the ABFM via a secure, password-protected file. This included 278 consenting residents in 3YR programs and 322 in 4YR programs. Thirteen residents (4.5%) from 3YR programs and 16 (4.7%) from 4YR programs did not consent and were excluded, leaving data on 600 residents (90.8% of the full sample). We included resident cohorts for those in an LoTP residency program between 2013 and 2019 and categorized them as PGY1, PGY2, PGY3, and PGY4 for each examination year. Residents’ demographic information included age, gender identity, race, ethnicity, marital and parental status, US medical school attendance, and debt load.

Statistical Analyses

We used descriptive statistics to characterize residents’ demographic information by length-of-training group, including means, standard deviations, frequencies, and percentages. We compared the two groups using independent-samples t tests for continuous variables and χ2 tests for categorical variables. To test the association between ITE scores and length of training, we used two analytic approaches. The first was conducted at the program level and utilized an intent-to-treat analysis 15 in which residents in 3YR control programs were compared to residents in 4YR programs at baseline (PGY1), year 2 (PGY2), and year 3 (PGY3). The second was conducted at the resident level and utilized an as-treated analysis 16 in which only residents who enrolled in and completed 4 years of training were included. We removed 44 residents (13.7%) in 4YR programs who graduated after 3 years of training (though for comprehensiveness they appear in Appendix Figures 1 and 2, which show mean ITE scores over time).

Data visualizations of ITE scores were composed of: (1) residents who completed 3 years of training in 3YR control programs, (2) residents who completed 3 years of training even though they trained in 4YR programs; and (3) residents who completed 4 years of training in 4YR programs, according to training year. Additional visualizations included ITE score by training year among individual programs to assess variance.

We compared ITE scores between study groups during their last year of training (the third year is the last year of a 3YR program; in the intent-to-treat scenario, the last year of a 4YR program includes those who graduated in either their third or fourth year) and during their third year of training (third year of a 4YR program vs third year of a 3YR program). We report unadjusted means and standard deviations of ITE scores, along with the mean difference between 4YR and 3YR programs and its 95% confidence interval. Lastly, we report whether differences in mean ITE scores between the two groups were meaningful using the approach identified by Norman et al, 16 which defines a difference as meaningful when it is greater than one-half of the pooled standard deviation.
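The half-pooled-SD criterion described above can be sketched in a few lines. This is a minimal illustration only; the helper names and the score values in the example are hypothetical, not the study's data.

```python
from math import sqrt

def pooled_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    """Pooled standard deviation of two independent groups."""
    return sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def is_meaningful(mean_diff: float, sd1: float, n1: int, sd2: float, n2: int) -> bool:
    """Norman et al criterion: a difference is meaningful if it
    exceeds one-half of the pooled standard deviation."""
    return abs(mean_diff) > 0.5 * pooled_sd(sd1, n1, sd2, n2)

# Hypothetical example: a 40-point gap between groups whose SDs are near 60
# (pooled SD ~62.5, so the threshold is ~31.3 and the gap is meaningful)
print(is_meaningful(40.0, 60.0, 200, 65.0, 200))
```

With group SDs in this range, gaps above roughly 31 points clear the half-SD threshold, which is why the 30- to 41-point differences reported later can register as clinically meaningful.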

Next, we utilized a mixed-effects linear regression model with ITE score as the dependent variable. In particular, we assumed that ITE scores follow a quadratic trend as training year progresses. We included an interaction term between program type (4YR vs 3YR) and training time (in years) in the model to assess the difference in slopes between study groups after adjusting for age, race, ethnicity, marital and parental status, status as a US medical school graduate, debt load, and examination year. We accounted for repeated measures with random intercepts at the individual participant level. We report only the linear and quadratic slope terms; the remaining covariates and their estimates appear in Appendices A and B. To assess differences in ITE scores at PGY1 through PGY3, we derived estimated marginal means and 95% confidence intervals (CIs) from the aforementioned models for each training year and program type. For PGY4, ITE scores for 3YR program participants were extrapolated and compared against the fourth year of 4YR program participants.
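The PGY4 extrapolation for 3YR programs can be illustrated with a stripped-down quadratic fit. The study's actual models were covariate-adjusted mixed-effects regressions; this sketch uses hypothetical PGY1-PGY3 mean scores (not the study's data) and fits the exact quadratic through them to project a fourth year.

```python
# Hypothetical mean ITE scores for a 3YR cohort at PGY1-PGY3 (illustrative only)
scores = {1: 388.0, 2: 460.0, 3: 500.0}

# Fit the exact quadratic score = b0 + b1*year + b2*year**2 through the three
# points, mirroring the quadratic-trend assumption of the mixed-effects models.
d1 = scores[2] - scores[1]           # gain from PGY1 to PGY2
d2 = (scores[3] - scores[2]) - d1    # change in gain (curvature); 2*b2
b2 = d2 / 2.0
b1 = d1 - 3.0 * b2                   # from f(2) - f(1) = b1 + 3*b2
b0 = scores[1] - b1 - b2             # from f(1) = b0 + b1 + b2

def predict(year: float) -> float:
    """Predicted mean ITE score at a given training year."""
    return b0 + b1 * year + b2 * year**2

# Extrapolate to a hypothetical fourth year of training
print(predict(4.0))
```

Because the quadratic term is negative, the projected PGY3-to-PGY4 gain is much smaller than the PGY1-to-PGY2 gain, the leveling-off pattern the trend analysis describes.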

National ITE data were analyzed by the ABFM (years 2013 through 2019). Residents from the LoTP programs are included in the national data, including nonconsenting residents and those in Navy programs. The increases noted between 2013 and 2015 in national data reflect the implementation period of the fourth year of training. The P values reflect a trend analysis illustrating how exam scores changed from year to year.

Results

Residents in 3YR programs were older than those in 4YR programs on average (29.6 years vs 28.9 years), and residents in all study groups were predominantly female, non-Hispanic White, single, not parents, and US medical school graduates, with a debt load greater than $150,000; other than age and debt load, none of these characteristics differed statistically in either the program-level analysis or the resident-level analysis (Table 1).

Residents training in a 4YR program scored higher than those in 3YR programs in the first year, PGY1 (431.6 for 4YR vs 406.4 for 3YR; Appendix Figure 1). Residents in 4YR programs continued scoring higher than their counterparts in 3YR programs in PGY2 and PGY3, and their scores increased in their last year of training to an average of 525.4, compared with 484.0 among residents in 3YR programs in their last year of training. Although residents in both program types consistently increased their ITE scores as time progressed, visually, the slope for 3YR programs begins to flatten after PGY2, whereas the slope for 4YR programs flattens after PGY3.

Appendix Figure 2 presents the mean unadjusted ITE scores in the program-level analysis (intent-to-treat) according to training year, and Appendix Figure 3 presents the mean unadjusted ITE scores in the resident-level analysis (as-treated). The large dots in both figures show individual resident mean scores, and the smaller dots represent individual program mean scores. Both analytic approaches produced similar findings: in all training years, residents in 4YR programs scored higher than residents in 3YR programs on average, and the increase in scores between training years flattened in the final year of training for both groups.

Unadjusted mean differences in ITE scores in the last training year were 37.5 points to 41.4 points higher for 4YR programs compared to 3YR programs in both the intent-to-treat and as-treated scenarios, differences that were clinically meaningful (Table 2). In the third year comparison, the 4YR programs’ scores were 30.8 points higher than the 3YR programs in the intent-to-treat analysis and 31.5 points higher in the as-treated analysis.

The covariate-adjusted linear mixed-effects models for the program-level analysis (intent-to-treat) show the estimated model mean ITE scores and their 95% CIs (Table 3). Model output prior to postestimation procedures is shown in Appendix Tables A and B. At baseline (PGY1), the estimated mean ITE scores were 407.7 for 4YR programs and 387.7 for 3YR programs, a 20.0-point difference (95% CI=9.0–31.0). At PGY2 and PGY3, 4YR programs scored 14.5 points and 15.4 points higher, respectively. Lastly, the extrapolated mean fourth-year ITE score for 3YR programs was 28.7 points lower (95% CI=14.9–42.6) than the fourth-year score for 4YR programs.

The covariate-adjusted regression models for the resident-level analysis (as-treated) show the estimated marginal mean ITE scores and their 95% CI in Table 3. We observed similar differences between groups at each time point as in the intent-to-treat analyses.

Table 4 shows the linear and quadratic slope terms from the full models. Full-model outputs with all covariates are shown in Appendix Tables A and B. In the intent-to-treat scenario, the main effect of the quadratic term (estimate=-13.6, P<.001) suggests that 3YR programs had a curvilinear increase in ITE scores over time (as demonstrated in Appendix Figure 2), in which the increase in ITE scores was rapid in the first 2 years and then leveled off in years 3 and 4. The coefficient and P value of the interaction between the quadratic term and the 4YR indicator (estimate=3.2, P=.397) suggest that participants in 4YR programs saw a similar curvilinear trend as participants in 3YR programs. In other words, the change in ITE scores was not significantly different between the 4YR and 3YR programs. We observed similar findings in the as-treated sample.

Table 5 compares mean ITE scores among all LoTP participants to those nationally from 2013 to 2019, with P values for trend indicating differences in exam scores from year to year. National ITE scores include all residents, including those who did not consent to be in the LoTP and those in the Navy programs, which explains the dissimilarity among the numbers. Residents in any LoTP study group scored higher than residents nationally during PGY1, PGY2, and PGY3 for all study years where relevant test scores are available. Significant variability in exam scores over time is evident among residents in 3YR programs and nationally, while this finding is not evident among residents in 4YR programs until they are combined with LoTP residents in 3YR programs (Table 5).

Discussion

This study explored the hypothesis that no significant differences in clinical knowledge scores would be found among residents who underwent 3 versus 4 years of training. Findings indicate that ITE scores sharply increased between PGY1 and PGY2 for both groups, with residents in 4YR programs starting with higher scores at baseline than residents in 3YR programs and maintaining this difference in each subsequent year.

The slope analysis found that scores in 3YR programs started to flatten sooner than scores in 4YR programs in both the intent-to-treat and as-treated analyses, indicating that knowledge growth was slightly higher in 4YR programs, and this increase continued into the fourth year, though scores were flatter in the final year of both 4YR and 3YR programs. It may be that a focus on finding a future job is distracting in that final year, or that residents are reinforcing clinical knowledge learned in prior years and therefore do not continue on their previous trajectory of learning as reflected in ITE scores. It may also be that the last year of residency training is focused on areas not covered by the ITE, such as practice management and leadership development. If this is the case, then it is important to ensure the last year of residency training has a significant impact.

We found no statistically significant differences in knowledge scores after baseline (PGY1) between those who underwent 3 compared to 4 years of training. This suggests that the differences noted between the two study groups in PGY2, PGY3, and PGY4 may be due to the initial difference in PGY1 scores. The higher scores among residents in 4YR programs may be related to how those programs recruit or rank residents, though in a prior analysis of the match in LoTP programs, applicant type, applicant number, match positions filled, matched applicant type, and ranks needed to fill did not differ between 3YR and 4YR residencies. 17 However, those motivated to apply for 4 years of training did report a desire for more flexibility in training and for learning additional skills beyond clinical skills. It may also be that those more skilled at test taking chose to apply to 4YR programs. 17

The relationship between clinical knowledge attainment and length of training is complex. The knowledge family physicians need for effective clinical practice is continually expanding. Educational innovations often influence training approaches, 11, 18 and several factors including gender and marital status affect examination scores in residency, 19, 20-22 all of which were accounted for in our analyses. It is likely that PGY1 scores reflect factors that predate residency, such as medical school curricular content, teaching methods, and emphasis on test preparation.

Analyses conducted by the ABFM indicate that when all LoTP trainees are included and compared to national data, mean ITE scores of both 3YR and 4YR residents are higher than mean ITE scores nationally for all years included in the study. We did observe that significant variation affected ITE scores in certain years, with residents performing higher compared to other years; however, this finding is not related to the psychometric properties of the ITE, 23 indicating that some other factor led residents to score higher in those years. We found it interesting that trends assessed by the ABFM for residents in 4YR training programs produced more stable scores than occurred nationally or among residents in 3YR programs, though this could be related to the cell sizes in those groups or the test-taking abilities of those who chose to apply to and were selected by 4YR programs. A weakness of the national data is the lack of the covariates that were available as part of the LoTP study; thus, it was not possible to determine how adjustment for key characteristics may have affected national data.

Though this study found significant differences in knowledge scores according to length of training, the increases in PGY2, PGY3, and PGY4 may be due to the initial difference in PGY1 scores. In addition, this is a single pilot study and should not be used alone to make a decision regarding the length of training in family medicine, a topic that has been intensely debated for more than a decade. 24-27 Several questions remain unanswered. For example, we do not know what effects an additional year of independent clinical practice may have had on clinical knowledge. Though several papers indicate that ITE scores are predictive of board certification scores, 2-4 the exams are not equivalent for direct comparison. We also are unable to determine which specific curricular elements of 4YR programs may be most impactful in terms of knowledge gains. Those who chose to undertake 4 years of training may plan to practice full-scope family medicine, which is not always available to family physicians due to health system policies or geographic location. Those wanting a broader scope may have performed better on the ITE because of this focus. Peterson et al found that a broader scope of practice was associated with higher board scores among practicing family physicians. 28

The strengths of this study include data capture of more than 90% of residents participating in the LoTP, as well as our ability to conduct several analytic approaches to explore the study hypothesis. We included analyses at the program level (intent-to-treat) and at the resident level (as-treated) to parse out the effects of actually receiving 4 years of training from those of training in a program where the fourth year was optional. We also collected key variables known or hypothesized to affect examination scores so that they could be adjusted for in our regression models.

Limitations of this study include that some 4YR programs had an optional fourth year, which resulted in residents undertaking 3 versus 4 years of training by choice. This introduced selection bias, an issue throughout this study because it is not possible to randomly assign residents to their training programs, and we could not assign which residencies would transition to 4 years of training. We addressed this by adjusting analyses for several covariates that could have affected our outcome. Another weakness is the small number of training programs that enrolled in the LoTP study. Converting from 3 to 4 years of training is a considerable endeavor, likely requiring substantial time for planning and implementation. The programs that chose to undertake such an effort may have greater resources or resilience compared with other training programs across the nation, though we matched 3YR programs to 4YR programs based on geographic location, size, and continuity clinic setting, and the nonsignificant differences between the study groups suggest our matching strategy was successful.

In conclusion, we found significantly higher absolute ITE scores in 4- versus 3-year programs, but the increases in PGY2, PGY3, and PGY4 may be due to the initial difference in PGY1 scores. Additional research on associations between length of family medicine training and other aspects of clinical practice, including practice setting, continuity of care, clinical preparedness, and scope of practice, will be forthcoming and is needed to inform future decisions about the optimal training model.

Acknowledgments

The authors gratefully acknowledge the contributions made by Samuel Jones, MD, who was a member of the Length of Training Pilot Executive Committee.

Financial Support: The Length of Training Pilot is sponsored by the Accreditation Council for Graduate Medical Education and is funded by the American Board of Family Medicine Foundation. None of the authors have a conflict of interest to declare regarding this article.


References

  1. ABFM.  In-training Examination. American Board of Family Medicine. 2021. Accessed January 2, 2021. https://www.theabfm.org/become-certified/acgme-program/in-training-examination 
  2. O’Neill TR, Li Z, Peabody MR, Lybarger M, Royal K, Puffer JC. The predictive validity of the ABFM’s In-training Examination. Fam Med. 2015;47(5):349-356.
  3. Indik JH, Duhigg LM, McDonald FS, et al. Performance on the cardiovascular in-training examination in relation to the ABIM Cardiovascular Disease Certification Examination. J Am Coll Cardiol. 2017;69(23):2862-2868. doi:10.1016/j.jacc.2017.04.020
  4. Yen D, Athwal GS, Cole G. The historic predictive value of Canadian orthopedic surgery residents’ orthopedic in-training examination scores on their success on the RCPSC certification examination. Can J Surg. 2014;57(4):260-262. doi:10.1503/cjs.014913
  5. Kreitz T, Verma S, Adan A, Verma K. Factors predictive of Orthopaedic In-training Examination performance and research productivity among orthopaedic residents. J Am Acad Orthop Surg. 2019;27(6):e286-e292. doi:10.5435/JAAOS-D-17-00257
  6. Carmichael KD, Westmoreland JB, Thomas JA, Patterson RM. Relation of residency selection factors to subsequent orthopaedic in-training examination performance. South Med J. 2005;98(5):528-532. doi:10.1097/01.SMJ.0000157560.75496.CB
  7. Miller BJ, Sexson S, Shevitz S, Peeples D, Van Sant S, McCall WV. US Medical Licensing Exam scores and performance on the Psychiatry Resident In-Training Examination. Acad Psychiatry. 2014;38(5):627-631. doi:10.1007/s40596-014-0130-y
  8. Peterson LE, Boulet JR, Clauser B. Associations between medical education assessments and American Board of Family Medicine Certification Examination score and failure to obtain certification. Acad Med. 2020;95(9):1396-1403. doi:10.1097/ACM.0000000000003344
  9. Phillips JP, Peterson LE, Kovar-Gough I, O’Neill TR, Peabody MR, Phillips RL Jr. Family medicine residents’ debt and certification examination performance. PRiMER Peer-Rev Rep Med Educ Res. 2019;3:7. doi:10.22454/PRiMER.2019.568241
  10. Wang T, O’Neill TR, Eden AR, et al. Racial/ethnic group trajectory differences in exam performance among US family medicine residents. Fam Med. 2022;54(3):184-192. doi:10.22454/FamMed.2022.873033
  11. Sloychuk J, Szafran O, Duerksen K, Babenko O. Association between family medicine residents’ mindsets and in-training exam scores. PRiMER Peer-Rev Rep Med Educ Res. 2020;4:33. doi:10.22454/PRiMER.2020.796230
  12. Shokar GS. The effects of an educational intervention for “at-risk” residents to improve their scores on the In-training Exam. Fam Med. 2003;35(6):414-417.
  13. Waller E, Eiff MP, Dexter E, et al. Impact of residency training redesign on residents’ clinical knowledge. Fam Med. 2017;49(9):693-698.
  14.  Length of Training Pilot Project (LoTP). 2021. Accessed January 2021. https://fmresearch.ohsu.edu/lotpilot.org/
  15. Gupta SK. Intention-to-treat concept: A review. Perspect Clin Res. 2011;2(3):109-112. doi:10.4103/2229-3485.83221
  16. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care. 2003;41(5):582-592. doi:10.1097/01.MLR.0000062554.74615.4C
  17. Eiff MP, Ericson A, Waller E. A comparison of residency applications and match performance according to 3 years versus 4 years of training in family medicine. Fam Med. 2019;51(8):641-648. doi:10.22454/FamMed.2019.558529
  18. O’Neill TR, Peabody MR. ITE Score Results Handbook. American Board of Family Medicine; 2013.
  19. Mainous AG 3rd, Fang B, Peterson LE. Competency assessment in family medicine residency: observations, knowledge-based examinations, and advancement. J Grad Med Educ. 2017;9(6):730-734. doi:10.4300/JGME-D-17-00212.1
  20. Klein R, Ufere NN, Rao SR, et al; Gender Equity in Medicine workgroup. Association of gender with learner assessment in graduate medical education. JAMA Netw Open. 2020;3(7):e2010888. doi:10.1001/jamanetworkopen.2020.10888
  21. Dayal A, O’Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med. 2017;177(5):651-657. doi:10.1001/jamainternmed.2016.9616
  22. Error in figure labels. [Correction]. JAMA Intern Med. 2017;177(5):747. doi:10.1001/jamainternmed.2017.0967
  23. Personal communication with Lars Peterson, MD, PhD, Vice President for Research, American Board of Family Medicine; 2021.
  24. Fields KB. More on the 4-year FM residency program. Fam Med. 2005;37(1):8.
  25. Scherger JE. Residencies: heal thyself before extending. Fam Med. 2006;38(3):158-159.
  26. Carek PJ. The length of training pilot: does anyone really know what time it takes? Fam Med. 2013;45(3):171-172.
  27. Sairenji T, Dai M, Eden AR, Peterson LE, Mainous AG III. Fellowship or further training for family medicine residents. Fam Med. 2017;49(8):618-621.
  28. Peterson LE, Blackburn B, Peabody M, O’Neill TR. Family physicians’ scope of practice and American Board of Family Medicine recertification examination performance. J Am Board Fam Med. 2015;28(2):265-270. doi:10.3122/jabfm.2015.02.140202

Lead Author

Patricia A. Carney, PhD, MS

Affiliations: Oregon Health & Science University, Portland, OR

Co-Authors

Steele Valenzuela, MS - Oregon Health & Science University, Portland, OR

Annie Ericson, MA - Oregon Health & Science University, Portland, OR

Lars Peterson, MD, PhD - American Board of Family Medicine, Lexington, KY | Family and Community Medicine, College of Medicine, University of Kentucky, Lexington, KY

Dang H. Dinh, MS - Oregon Health & Science University, Portland, OR

Colleen M. Conry, MD - University of Colorado, Denver, CO

James C. Martin, MD - Long School of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX

Karen B. Mitchell, MD - American Academy of Family Physicians, Leawood, KS

Stephanie E. Rosener, MD - United Family Medicine Residency, St Paul, MN

Miguel Marino, PhD - Oregon Health & Science University, Portland, OR

M. Patrice Eiff, MD - Oregon Health & Science University, Portland, OR

Corresponding Author

Patricia A. Carney, PhD, MS

Correspondence: Oregon Health & Science University, Portland, OR

Email: carneyp@ohsu.edu

