ORIGINAL ARTICLES

Optimizing Survey Response Rates in Graduate Medical Education Research Studies

Annie Ericson, MA | Kathryn Bonuck, MSEd | Larry A. Green, MD | Colleen Conry, MD | James C. Martin, MD | Patricia A. Carney, PhD, MS

Fam Med. 2023;55(5):304-310.

DOI: 10.22454/FamMed.2023.750371

Abstract

Background and Objective: Survey response rates of 70% or higher are needed if findings are to be considered generalizable. Unfortunately, response rates in survey studies of health professionals are declining. We have conducted survey research with residents and residency directors for over 13 years. Here we describe the strategies we used to obtain optimal response rates in residency training research collaboratives.

Methods: We administered over 6,000 surveys between 2007 and 2019 to evaluate the Preparing the Personal Physician for Practice and Length of Training Pilot studies, both of which involved redesigning residency training. Survey recipients included program directors, clinic managers, residents, and graduates, as well as supervising physicians and clinic staff members. We logged and analyzed survey administration efforts and approaches to optimize strategies.

Results: Overall, we obtained response rates of 100% for program director surveys, 98% for resident surveys, 97% for continuity clinic surveys, 81% for graduate surveys, 48% for supervising physician surveys, and 43% for clinic staff surveys. Response rates were highest when the relationships between the evaluation team and survey recipients were closest. Strategies for optimizing response rates included (1) building relationships with all participants whenever possible, (2) sensitivity to survey timing and fatigue, and (3) using creative and persistent follow-up measures to encourage survey completion.

Conclusion: High response rates are achievable, though they require an investment of time, resources, and ingenuity in connecting with study populations. Investigators conducting survey research must consider the administrative effort needed to achieve target response rates and plan funding accordingly.

Introduction

Survey response rates of 70% or higher are important for findings to be representative of a study population and thus generalizable. Many journals now require more stringent response rates (eg, 70%-80%) before papers reporting on survey research will be reviewed. 1, 2 This requirement is intended to reduce nonresponse bias. 1 Unfortunately, studies of medical students and physicians typically attain lower response rates than survey studies of the general population. 3-5 One systematic review of survey response rates from 1,607 studies conducted between 2000 and 2005 found an average response rate of 52.7% (SD=20.4) for nonhealth professionals, 3 while typical response rates for health professionals, including students, ranged from 3% to 50%. 4 Notable declines in response rates, especially among physicians, have occurred over the last few decades. 5 Several studies have examined strategies to improve response rates; approaches such as multiple recruitment methods, small financial incentives, multiple administration strategies, and endorsement by professional associations have resulted in higher response rates. 6-8

Studies of graduate medical education (GME) are vital for ensuring that residency training is producing desired outcomes. Survey research is common in GME studies, and determining what is needed to optimize response rates would help investigators ensure their study samples are representative of the target population.

Centralizing GME surveys at the national level may improve survey response rates. One such centralized process is provided by the Council of Academic Family Medicine (CAFM), composed of the family medicine academic organizations, which oversees the CAFM Educational Research Alliance (CERA). 9 CERA provides an infrastructure that allows investigators to submit questions for review and potential inclusion in CERA surveys, which are sent to various audiences at routine intervals. Since 2019, response rates in the 12 studies using program director survey data have ranged from 39% to 57%. 9

Residency training networks or collaboratives engaged in educational research can provide additional motivation and structure for participation in survey studies, as participants tend to be invested in the outcomes. 10 Response rates of survey studies in residency networks have ranged from 68% to 87%. 11-14 Very few studies have examined how response rates may be influenced by relationship development between study investigators and participants, or what is required, in both time and strategy, to achieve target response rates.

We have undertaken two national studies focused on family medicine residency training. The first was Preparing the Personal Physician for Practice (P4); a summary of its findings has been published elsewhere. 10 The second is the Length of Training Pilot (LoTP), 15 which is still underway. Here we describe the specific approaches and survey collection strategies we used to achieve meaningful response rates.

Methods

Overview of Studies and Data Collection

P4 was a comparative case study (2007-2012) in which 14 programs, selected by a formal review committee of key stakeholders, undertook a variety of programmatic innovations that included changes to residency length, location, structure, and content. Summary findings from 39 published papers on P4 are reported elsewhere. 10 The LoTP is a longitudinal prospective case-control study currently underway (2013-2023) designed to examine the effect of lengthening family medicine residency training from 3 to 4 years. A total of 13 family medicine programs, selected by a formal review committee of key stakeholders, are enrolled in the LoTP; each was matched with a 3-year comparison program based on size, region, and clinical training setting.

Participants in both P4 and LoTP represent a diverse mix of community- and university-based programs across the United States. None of the participating programs received funding for data collection in either P4 or the LoTP. Oregon Health & Science University’s (OHSU) Institutional Review Board granted exemptions to both P4 (IRB # 3788) and the LoTP (IRB# 9770). Collectively, we at OHSU have surveyed participants for over 13 continuous years in our work with these studies.

Survey Instruments and Administration

Annual core surveys were administered to residents, program directors, continuity clinic managers, and recent graduates, and all participants were informed that our target response rate for study surveys was 70%. All surveys in both studies included a preaffixed unique study identifier so responses could be linked to the appropriate participant (resident or program). In Year 3 of the LoTP, an additional annual Clinical Preparedness Survey was added. Survey features, including recipient, timing, method, and length, are shown in Table 1. Details of survey administration and collection are described below.

Resident Survey. Annually, surveys and administration instructions were distributed to residency program coordinators who then distributed surveys while residents were taking their in-training examinations (ITE). Residents were informed that they could opt out of taking the survey.

Program Director and Continuity Clinic Surveys. These surveys were designed to be completed by the program director and continuity clinic manager, respectively. The Clinic Survey asked for specific clinic data and often required electronic record data extraction. We identified a key point person at each site to shepherd the clinic survey to completion. Programs were given an 8-week window, with routine follow-up, to complete these two surveys.

Graduate Survey. In P4 and LoTP, these internet-based surveys were administered 16 months postgraduation from residency. Each graduate was emailed an invitation to complete the survey. Survey administration typically spanned a 3-month period with reminder emails sent approximately every 2 weeks.

Clinical Preparedness Survey. This survey was designed to assess preparedness for independent practice from two perspectives: a supervising physician and a clinic staff member. It was administered approximately 3 months into the graduate’s first posttraining job. Once clinic contact was established, the study team asked the office manager to identify the supervising physician and a clinic team member (eg, nurse or medical assistant) who worked closely with the recent graduate, and the survey was then emailed to both. Survey recipients were asked to complete the survey within 2 weeks, with follow-up as needed.

Data Analyses

Response rates and efforts to achieve the 70% target were consistently monitored over time. We calculated response rates by dividing the number of surveys returned by the number of surveys administered, for each survey and each project year. We also calculated overall means and ranges for each survey. We then classified the surveys according to the level of interaction between OHSU evaluation team members and the survey respondents. Category 1 represented the closest relationship, characterized by the survey administrator and recipient being on a first-name basis (eg, the evaluation team member was well known to the program director, or the residency coordinator was well known to the resident) and by survey recipients identifying as members of the study, often with a high interest in study outcomes. The Program Director and Resident Surveys had category 1 relationships. Category 2 represented a shared connection rather than a direct one (eg, the evaluation team and the continuity clinic manager both knew the program director but not each other); clinic managers knew about the study, though their interest in outcomes was lower than in category 1. The Continuity Clinic and Graduate Surveys had category 2 relationships. Category 3 represented a shared identity in family medicine but no personal relationship and little to no knowledge about the LoTP study. The Clinical Preparedness Surveys were category 3. Survey response rates were reported according to these categories for each academic or program year.
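
To make this bookkeeping concrete, the sketch below shows the response rate calculation and the grouping by relationship category in Python. It is a minimal illustration only; the survey names, category assignments, and counts are hypothetical placeholders, not study data.

```python
# Minimal sketch of the response-rate bookkeeping described above.
# All survey names, relationship categories, and counts are hypothetical.

from collections import defaultdict

# Each record: (survey, relationship category, project year, administered, returned)
survey_log = [
    ("Program Director", 1, "2018-2019", 30, 30),
    ("Resident", 1, "2018-2019", 450, 438),
    ("Continuity Clinic", 2, "2018-2019", 30, 29),
    ("Graduate", 2, "2018-2019", 110, 91),
    ("Clinical Preparedness (physician)", 3, "2018-2019", 100, 48),
]

def response_rate(returned: int, administered: int) -> float:
    """Response rate = surveys returned / surveys administered."""
    return returned / administered if administered else 0.0

# Per-survey, per-year rates
for survey, category, year, administered, returned in survey_log:
    print(f"{survey} ({year}, category {category}): "
          f"{response_rate(returned, administered):.1%}")

# Aggregate by relationship category to compare categories 1-3
totals = defaultdict(lambda: [0, 0])
for _, category, _, administered, returned in survey_log:
    totals[category][0] += administered
    totals[category][1] += returned

for category, (administered, returned) in sorted(totals.items()):
    print(f"Category {category}: {response_rate(returned, administered):.1%}")
```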

For all surveys, records were kept on launch and completion dates as well as follow-up communications and methods. For 1 study year (2018-2019), we collected specifics on the time spent on all activities needed to obtain a completed survey. Time estimates for these activities were established by consensus among the research associates on the P4 and LoTP projects after logging time for current efforts and reviewing survey response tracking sheets from previous years. Total time was divided by the number of surveys received. For the Clinical Preparedness Survey, only time estimates that resulted in a completed survey were used; however, significant time was spent on outreach efforts that did not result in receipt of surveys.
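
The per-survey time estimate works the same way: total administrative time divided by the number of completed surveys, with only effort that yielded a completed survey counted for the Clinical Preparedness Survey. A small sketch, assuming a hypothetical follow-up log (dates and minutes are illustrative, not study data):

```python
# Illustrative per-survey time estimate; all record values are hypothetical.

from datetime import date

# Follow-up log for one survey type in one study year:
# (launch date, completion date or None, minutes of administrative effort)
records = [
    (date(2018, 10, 1), date(2018, 10, 15), 55),
    (date(2018, 10, 1), date(2018, 12, 20), 140),
    (date(2018, 10, 1), None, 90),  # outreach that never yielded a survey
]

# Count only effort that resulted in a completed survey, per the text above.
completed = [(launch, done, mins) for launch, done, mins in records if done]

minutes_per_survey = sum(mins for _, _, mins in completed) / len(completed)
days_to_completion = [(done - launch).days for launch, done, _ in completed]

print(f"{minutes_per_survey:.0f} minutes per completed survey")
print(f"{sum(days_to_completion) / len(days_to_completion):.0f} days to completion, on average")
```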

Results

To date, the OHSU evaluation team has administered over 6,000 surveys as part of both P4 and LoTP. Average response rates for category 1 surveys were 100% for the Program Director Survey and 97.3% (range 95.8%-100%) for the Resident Survey (Table 2). For category 2 surveys, average response rates were 96.8% (range 84.6%-100%) for the Continuity Clinic Survey and 82.8% (range 72%-87.3%) for the Graduate Survey. For the category 3 survey, the Clinical Preparedness Survey, the average response rate was 48.3% (range 39.7%-61.2%) for supervising physicians and 43.0% (range 25.9%-64.7%) for clinic staff members.

Survey features and administration efforts for LoTP surveys in 2018-2019, including average minutes per completed survey, administrative activities, response rates, and total time spent, are shown by relationship category in Table 1. The number of hours devoted to survey administration ranged from 2 to 289. The Program Director and Resident Surveys (category 1) required about 5 minutes of administrative time per completed survey. Category 2 surveys required about 15 minutes per completion. The Clinical Preparedness Surveys (category 3) required the greatest effort, at about 60 minutes per completed survey. Follow-up for this survey was the most time-consuming, with 1-14 contacts needed before receiving a completed survey and an average of 40 days to survey completion (range 1-87). In only 20% of cases did the information provided by the residencies result in an established contact at the graduate’s new practice; in the remaining cases, an internet search was used to locate the clinic and the appropriate contact. Additional strategies for achieving high response rates are included in Table 3.

Discussion

Obtaining survey response rates high enough to produce generalizable findings is crucial in any type of survey research. Missing survey data can introduce nonresponse bias, which could be considerable and affect study findings. 16-18 Alternatively, it is possible to have a fairly low response rate and no nonresponse bias. 19 The only way to know is to compare the characteristics of nonrespondents to respondents, 20 which was beyond the scope of this work. While many studies have focused on improving response rates, mainly through incentives and multiple recruitment and administration strategies, 6-8 this paper is, to our knowledge, the first to classify response rates according to the interactions or relationships between the participants and the investigative team.

We found response rates to be highest (>90%) when the relationships between the survey administrator and the survey recipients were closest and sustained over time. We attained very high response rates compared to other studies that used financial incentives, which we did not use. Other studies have also found that multiple administration approaches were needed, but we have gone further by estimating the cost of these efforts. Steps we took to build relationships between the evaluation team and the study participants in a residency collaborative included interactions at annual collaborative meetings, site visits, and conference calls. Though these efforts also represent an investment of time and resources, they paid dividends in data capture and can serve as a guide for other residency collaboratives conducting survey research. Funds invested in establishing these working relationships could prevent wasted funds later if evaluation efforts fail to yield adequate responses.

We sought to promote a culture in which participating residencies were study partners. The in-person events fostered the development of personal connections, greater understanding of the context of each program, and shared expectations for data collection and target response rates. Importantly, annual collaborative meetings involved data presentations by the OHSU evaluation team, which also motivated participants and served to build trust in the evaluation team. We also provided reports and data sets and assisted with site-specific data analyses upon request.

Another driver of the response rates we achieved was a common interest in the research being done. All 14 programs in P4 were undertaking various program innovations and had a vested interest in study outcomes. In LoTP, where 4-year programs applied for participation in the pilot and comparable 3-year programs were asked to serve as comparators, we achieved similar response rates from both 3- and 4-year programs despite the likely discrepancy in investment. A shared interest among all LoTP programs was the opportunity to highlight the strengths of their programs, and providing survey data back to the programs is a significant way to accomplish that. Participation in the annual collaborative meeting further strengthened partners’ identity as participants in this project.

The discrepancy in survey response rates between category 1 and category 3 relationships highlights the potential that may be realized by engaging residencies as partners. However, it may not always be possible or appropriate to establish a relationship with survey participants, as such relationships could introduce social response bias; thus, relationship building must be undertaken carefully. In addition to developing these relationships, we conducted site visits and spoke with residents and clinic members. Thus, we were known to study participants, but our connections with them were ultimately limited (eg, one visit over 7 years). Because cohorts of residents responded consistently even when they were not involved in the site visits, and because turnover among program directors still resulted in consistent data, we believe we did not bias study findings.

In addition, proximity of relationships does not necessarily equate to a willingness to complete surveys. When there is less connection between the administrator and the recipient, researchers should consider employing the other strategies we used to optimize response rates, including a high degree of contextual awareness to facilitate timing and avoid survey fatigue, and creative and persistent follow-up measures to encourage survey completion.

Our evaluation team worked to establish realistic timelines and windows for survey completion and factored in annual residency activities, such as avoiding data collection during interview season, graduation, or orientation. Timelines were flexed to accommodate individual programs’ time-consuming events, such as transitioning to a new electronic health record. The Resident Survey required a high level of preparation to ensure that over 4,000 uniquely identified surveys got into the right residents’ hands and that the residency coordinators at each site were fully prepared to administer the survey. Because the Resident Survey was timed with the scheduled ITE, we were able to achieve a nearly 100% response rate.

Persistence is key to achieving a high response rate. Follow-up messages were personalized to encourage nonresponders and included the current response rate percentage with encouragement to help meet the desired target of >70%. Our efforts were consistent with other studies that outline the work required for high response rates. One such study, conducted with members of the National Dental Practice-Based Research Network, achieved a response rate of 87%, which was possible when up to six recruitment steps were used. 6

Our category 3 surveys required a significant amount of effort, both in time per survey administered and in the length of time to survey receipt, yet they had much lower response rates than the category 1 and 2 surveys. Even with these efforts, we did not achieve our target response rate. This may reflect the fact that recipients received no reward for their efforts, had little interest in the study outcomes, and had no established relationship with the evaluation team. Doing a better job of contextualizing the survey as an important part of a shared purpose (eg, improving residency training for the public good) may have served as a motivating factor to complete the Clinical Preparedness Survey. Identifying the appropriate individuals in new practice sites for approximately 100 graduates per year also proved difficult.

In addition to the category 3 relationship, the evaluation team had little context for factors at these clinic sites that might have helped optimize timing and communication strategies. However, the American Board of Medical Specialties is developing standards for continuous physician certification that include an expectation that residencies and practicing physicians work together to improve the preparation of physicians for practice. These new standards, once adopted, could encourage completion of surveys such as our Clinical Preparedness Survey.

Realistic expectations of the time and effort involved are essential for success in survey research. Engaging evaluators early in the project and having appropriately trained staff will help research teams achieve higher response rates. Further, our findings underscore that residencies have an appetite for collaborative work that includes measuring outcomes. Future accreditation requirements may call for residencies to work together to learn how to produce the physicians and health care teams that patients and communities need to enhance health metrics.

Strengths of this study include the large number of participants, their geographic and role diversity, and careful tracking of response rates. Limitations include that only 1 year of data was used to convey the detailed efforts required to attain high response rates, though we believe these efforts would not vary greatly from year to year. Another limitation is that we did not capture the characteristics of nonresponders, which would have been important for understanding nonresponse bias in the Clinical Preparedness Survey data. Lastly, our findings are limited in their generalizability to research conducted with residency programs similar to those that participated or are participating in P4 and the LoTP. Findings like those achieved here require the expertise and resources needed to conduct successful educational research, as opposed to program evaluation or small studies, which typically involve anonymous survey responses with very low response rates, often due to limited funding.

In conclusion, high response rates allow for representation of a study population and generalizability of findings. Forming residency research collaboratives like P4 and LoTP, engaging with programs around research questions and appropriate measures, prospectively studying how key features change over time, and giving programs ongoing access to their own data can help further research in graduate medical education. Identifying the administrative efforts required to achieve high response rates, based on study population size and existing relationships, can allow investigators to plan for realistic staffing needs and enhance their study’s analytic capabilities. Achieving high rates requires an investment of time, resources, and ingenuity to connect with study populations.

Financial Support

This work was funded by the American Board of Family Medicine Foundation via grants for the Preparing the Personal Physician for Practice and the Length of Training Pilot projects.

Acknowledgments

The authors thank Patrice Eiff, MD, professor emeritus and Elaine Uchison, research associate, both in the Department of Family Medicine at Oregon Health & Science University, for their contributions to P4 and LoTP data collection efforts and Sam M. Jones, MD, distinguished professor and Epperson Zorn Chair for Innovation in Family Medicine and Primary Care at the University of Colorado School of Medicine, Denver for guidance in the development of this manuscript.

References

  1. Fincham JE. Response rates and responsiveness for surveys, standards, and the Journal. Am J Pharm Educ. 2008;72(2):43. doi:10.5688/aj720243
  2. Carley-Baxter LR, Hill CA, Roe DJ, Twiddy SE, Baxter RK, Ruppenkamp J. Does response rate matter? Journal editors’ use of survey quality measures in manuscript publication decisions. Surv Pract. 2009;2(7):1-7. doi:10.29115/SP-2009-0033
  3. Baruch Y, Holtom BC. Survey response rate levels and trends in organizational research. Hum Relat. 2008;61(8):1139-1160. doi:10.1177/0018726708094863
  4. Cho YI, Johnson TP, VanGeest JB. Enhancing surveys of health care professionals: a meta-analysis. Eval Health Prof. 2013;36(3):382-407. doi:10.1177/0163278713496425
  5. McLeod CC, Klabunde CN, Willis GB, Stark D. Health care provider surveys in the United States, 2000-2010: a review. Eval Health Prof. 2013;36(1):106-126. doi:10.1177/0163278712474001
  6. Funkhouser E, Vellala K, Baltuck C, et al; National Dental PBRN Collaborative Group. Survey methods to optimize response rate in the National Dental Practice-based Research network. Eval Health Prof. 2017;40(3):332-358. doi:10.1177/0163278715625738
  7. VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof. 2007;30(4):303-321. doi:10.1177/0163278707307899
  8. Phillips AW, Reddy S, Durning SJ. Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102. Med Teach. 2016;38(3):217-228. doi:10.3109/0142159X.2015.1105945
  9. Society of Teachers of Family Medicine. CAFM Educational Research Alliance. Accessed April 23, 2021. https://www.stfm.org/Research/CERA 
  10. Carney PA, Eiff MP, Waller E, Jones SM, Green LA. Redesigning residency training: summary findings from the Preparing the Personal Physician for Practice (P4) Project. Fam Med. 2018;50(7):503-517. doi:10.22454/FamMed.2018.829131
  11. Kim S, Phillips WR, Stevens NG. Family practice training over the first 26 years: a cross-sectional survey of graduates of the University of Washington Family Practice Residency Network. Acad Med. 2003;78(9):918-925. doi:10.1097/00001888-200309000-00017
  12. Raetz J, Osborn J. Nursing home practice among recent family medicine residency graduates. Fam Med. 2013;45(8):576-579.
  13. Gwynne M, Page C, Reid A, Donahue K, Newton W. What’s the right referral rate? specialty referral patterns and curricula across I3 Collaborative primary care residencies. Fam Med. 2017;49(2):91-96.
  14. Robertson SL, Robinson MD, Reid A. Electronic health record effects on work-life balance and burnout within the I3 Population Collaborative. J Grad Med Educ. 2017;9(4):479-484. doi:10.4300/JGME-D-16-00123.1
  15. Oregon Health & Science University. Length of Training Pilot Project. Accessed April 26, 2021. https://fmresearch.ohsu.edu/lotpilot.org/ 
  16. Cull WL, O’Connor KG, Sharp S, Tang SF. Response rates and response bias for 50 surveys of pediatricians. Health Serv Res. 2005;40(1):213-226. doi:10.1111/j.1475-6773.2005.00350.x
  17. Sedgwick P. Non-response bias versus response bias. BMJ. 2014;348(apr09 1):g2573. doi:10.1136/bmj.g2573
  18. Nulty DD. The adequacy of response rates to online and paper surveys: what can be done? Assess Eval High Educ. 2008;33(3):301-314. doi:10.1080/02602930701293231
  19. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Q. 2008;72(2):167-189. doi:10.1093/poq/nfn011
  20. Porter SR, Whitcomb ME. Non-response in student surveys: the role of demographics, engagement and personality. Res High Educ. 2005;46(2):127-152. doi:10.1007/s11162-004-1597-2

Lead Author

Annie Ericson, MA

Affiliations: Oregon Health & Science University, Portland, OR

Co-Authors

Kathryn Bonuck, MSEd - Oregon Health & Science University, Portland, OR

Larry A. Green, MD - University of Colorado School of Medicine, Denver, CO

Colleen Conry, MD - University of Colorado, Denver, CO

James C. Martin, MD - Long School of Medicine, University of Texas Health Science Center at San Antonio

Patricia A. Carney, PhD, MS - Oregon Health & Science University, Portland, OR

Corresponding Author

Annie Ericson, MA

Correspondence: Oregon Health & Science University, Portland, OR

Email: ericsona@ohsu.edu
