Background and Objectives: Medical educators often need to give high-stake, bad news, and complex feedback to learners without any formal training in how to do so. This study assessed whether a 60- to 90-minute workshop on the novel combination of ARCH (ask, reinforce, correct, help) and SPIKES (set up, assess perception, invitation for information, give knowledge, address emotion, strategy and summary) would improve educators' participant-reported competence and comfort with, knowledge of, and use of ARCH-SPIKES in giving high-stake, bad news, and complex feedback.
Methods: This IRB-approved, prospective survey design study evaluated participant-reported competence and comfort, knowledge, and use of ARCH-SPIKES after a 60- to 90-minute workshop in which participants practiced giving high-stake, bad news, and complex feedback in small groups from 2019 to 2022. We collected data pre-workshop, immediately post-workshop, and 6 to 9 months post-workshop (post-post).
Results: In unmatched data analyses, participant-reported competence (from 33% to 69% and then 73%; P<0.01) and participant-reported comfort (from 16% to 54% and then 59%; P<0.01) rose from the pre- to post- and post-post surveys. Knowledge of the ARCH and SPIKES acronyms rose sharply from the pre- to the post-surveys (18% to 98% and 9% to 94%, respectively; P<0.01), and remained significantly improved from baseline (46% for both ARCH and SPIKES; P<0.05). Thirty-six percent of post-post participants used ARCH-SPIKES in the 6 months after the workshop.
Conclusions: This highly interactive workshop increased the participant-reported competence and comfort with, knowledge of, and use of ARCH-SPIKES for giving high-stake, bad news, and complex feedback.
Medical learners require feedback from medical educators to improve their medical knowledge, skills, and attitudes. However, many educators feel unprepared to deliver feedback because they have not had formal curricular instruction.1 This feeling of unpreparedness is intensified when medical educators must deliver negative or uncomfortable feedback to learners.1
The literature about feedback gives medical educators multiple models to choose from2-9 and many lists of qualities of effective feedback.4,7,9-15 Our preferred feedback model is ARCH (ask, reinforce, correct, and help; Table 1), a simple, useful, and easy-to-remember model that helps move learners toward their goals.16 Strengths of this model include learner self-assessment, relationship building, and ending with a tangible plan. In the ARCH model, educators help learners identify their own personal educational goals, similar to motivational interviewing in the clinical space.17 However, by itself, ARCH does not provide a framework for giving high-stake, bad news, and complex feedback to learners. High-stake feedback, like a high-stake exam, weighs significantly on learner outcomes such as promotion, graduation, or employment.18 Bad news and complex feedback concern behaviors that are not standards-based and are more complex than simple feedback.
| Step | ARCH | SPIKES |
|---|:---:|:---:|
| Set up: Before meeting to give high-stake, bad news, and complex feedback, the educator should prepare materials—for example, looking at evaluations, talking to other educators, and collecting the data needed to have a robust and complete conversation about the issue at hand. | | X |
| Ask/perception: Asking learners to self-reflect and evaluate allows the educator to assess insight into their strengths and areas of growth to guide the conversation. | X | X |
| Invitation: Asking whether learners would like to have (1) a summary of the events that led to this meeting or (2) all of the details regarding the events. This allows the learner to have some control over the conversation. | | X |
| Reinforce: Reinforcing what the learner has done correctly or well shows the learner that their educator sees them and all their hard work to this point. This allows the development of the relationship between educator and learner. | X | |
| Correct/knowledge: Correcting knowledge, skills, or behaviors is vital to help learners see what must change for them to grow. Only one to two pieces of corrective information should be given to prevent the learner from being overwhelmed and defeated. | X | X |
| Emotion: Addressing emotion is a crucial step in giving high-stake, bad news, and complex feedback. This gives the learner space to express and be validated in the emotions they're feeling in hearing the bad news. It provides a way to connect with and see the learner as a complete person with complex feelings. | | X |
| Help/strategy: Helping the learner to develop a practical, concrete plan to move forward gives the learner tangible goals to accomplish. | X | X |
With high-stake, bad news, and complex feedback, educators can transplant the clinical skills of giving bad news to patients into the educational skills of giving bad news to their learners. One of the most well-known models is Baile et al's SPIKES19 (set up, assess perception, invitation for information, give knowledge, address emotion, strategy and summary), which is routinely used in palliative care (Table 1) and has more recently been used in the educational space.15,20
Given the ease of using the ARCH model of feedback and the familiarity the SPIKES model has to giving bad news, our team combined them for educators to use when delivering high-stake, bad news, and complex feedback: (1) set up, (2) ask/perception, (3) invitation, (4) reinforce, (5) correct/knowledge, (6) emotion, and (7) help/strategy.
The aim of the study was to evaluate how a 60- to 90-minute, highly interactive workshop about ARCH-SPIKES affected the participant-reported competence and comfort with, knowledge of, and use of ARCH-SPIKES by educators in giving high-stake, bad news, and complex feedback to learners, both directly after the workshop and 6 months later. The hypothesis was that those who participated in the workshop would improve on all measures.
This was an IRB-approved prospective survey design study (University of Pittsburgh IRB approval for exempt status, STUDY19090151, 9/26/2019). We used surveys to evaluate participants' self-reported competence and comfort, knowledge, and use of ARCH-SPIKES immediately before (pre-), immediately after (post-), and 6 to 9 months after (post-post) ARCH-SPIKES training.
Participant Recruitment
After piloting this workshop at an internal institutional medical-educator conference, we recruited participants from 2019 to 2022. First, we included local participants from associated family medicine residency programs (UPMC St. Margaret and Washington Health System, both in-person, 90 minutes). Participants at these programs received the session as part of their faculty development series, which included all faculty (physician or nonphysician), associate program directors, and program directors from those programs. Participants also were recruited at regional and national conferences, including those of the Family Medicine Education Consortium (in-person) and the Society of Teachers of Family Medicine medical student education (virtual) and annual (in-person) meetings (all 60 minutes). There, participants self-selected attendance, and their titles ranged from medical student to chair.
Participants were read a script about the study, including that filling out any of the surveys constituted consent to use their data. Participants who completed all three surveys were entered into a drawing for a $25 gift card, awarded to one participant from each session. To link data from one collection time to the next and to enter the drawing, we asked attendees to volunteer their email addresses. Participants who wanted to remain anonymous could instead use a unique series of numbers familiar to them, for the immediate pre- and post-surveys only. This accommodation allowed us to perform both a matched (by individual) and an unmatched analysis of the responses over time. Permission for collecting data with the intent to publish the results was obtained and kept on file.
Session Description
The educational session lasted 60 or 90 minutes, based on the time allotted by the venue. The lead presenter was the same for every session. Each session had five parts: (1) needs assessment and pre-surveys (5 minutes in both formats), (2) large-group, interactive discussion about ARCH and SPIKES (20 minutes in the 90-minute format, 12 minutes in the 60-minute format), (3) role-play-based practice (30 minutes in both formats), (4) large-group debrief (25 minutes in the 90-minute format, 10 minutes in the 60-minute format), and (5) post-surveys and (if applicable) conference evaluations (10 minutes in the 90-minute format, 3 minutes in the 60-minute format).
Part 1: A brief needs assessment, both written and verbal, guided the focus of the sessions.
Part 2: A large-group, interactive discussion reviewed the ARCH model for delivering feedback and the SPIKES model for delivering bad news. Three example videos were shown: ARCH alone, SPIKES alone, and the combined ARCH-SPIKES model.
Part 3: The practice of new skills, immediately on learning them, is integral to long-lasting learning. In groups of three (teacher, learner, observer), participants practiced giving high-stake, bad news, complex feedback using the ARCH-SPIKES framework. Simulated cases (medical error, failing a rotation, and dismissing a resident from a program) based on real-life circumstances challenged learners from all experience levels. Small groups were facilitated by the session leaders who rotated between groups. The teacher gave the learner feedback based on the case using ARCH-SPIKES, and the observer gave the teacher feedback based on the interaction using ARCH—thus providing two opportunities per case to practice giving feedback. Three cases were used to provide everyone the opportunity to practice in each of the three roles.
Part 4: After the first case and again after the third case, the large group debriefed to celebrate successes and brainstorm about stumbling blocks. Debriefing after the first case gave participants the chance to practice some of the tips that other participants had developed on their own.
Part 5: Finally, post-surveys and, if applicable, conference evaluations were completed.
Data Collection
All three surveys had closed-ended questions about participant-reported comfort and competence in giving high-stake, bad news, complex feedback as well as two open-ended questions asking what the ARCH and SPIKES acronyms stood for. These questions assessed Kirkpatrick's evaluation level 1 (learner's reactions to the learning) and level 2 (learner's knowledge acquisition).21 To characterize participants in attendance, the pre-survey also asked (1) current position (medical student, faculty, program director, etc.); (2) frequency of giving high-stake, bad news, complex feedback; and (3) current method for giving feedback. The post-survey also asked (1) what feedback model they planned to use in the future and (2) how likely they were to use ARCH-SPIKES in the future. The post-post survey also asked how many times they had used ARCH-SPIKES in the previous 6 to 9 months, to assess Kirkpatrick's evaluation level 3 (change of behavior).21 Given the relatively uncommon occurrence of high-stake, bad news, and complex feedback, the authors allowed a follow-up period of 6 to 9 months for the post-post survey.
Statistical Analysis
Initially, we used basic descriptive statistical measures to examine the responses from each time point (pre-, post-, and post-post). We used means, medians, percentiles, and frequency distributions to study the three periods, examining the responses of those individuals answering each questionnaire and matching their survey responses by a predetermined identification code.
Six individuals answered only the post-survey, and two others answered only the post-post survey; because they had no pre-workshop information, they were excluded from analysis. A total of 142 respondents answered the pre-survey, 78 the post-survey, and 22 the post-post survey. These groups were analyzed three ways: (1) treating all three time points as independent (unmatched), (2) examining the 22 subjects who answered all three survey questionnaires (matched), and (3) analyzing only the 78 individuals who answered both the pre- and post-survey questionnaires (matched). Descriptive analyses showed that the frequency distributions for all the major outcome variables (participant-reported competence and comfort, correct ARCH and correct SPIKES definitions) were bimodal. Thus, each of these variables was dichotomized into two categories (yes or no) and statistically analyzed separately. Participant-reported competence and comfort were considered yes (positive) if the response was either "moderately" or "very" competent or comfortable. The ARCH acronym was considered correct (yes) if the subject identified three or four of its four letters, and SPIKES was considered correct (yes) if the subject identified five or six of its six letters.
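As an illustration, the dichotomization rules described above can be expressed as a short script. This is a hypothetical sketch for clarity, not the authors' actual analysis code; the function names are invented, but the thresholds come directly from the rules stated in the text:

```python
def dichotomize_rating(response: str) -> bool:
    # Competence/comfort counts as "yes" (positive) only for "moderately" or "very"
    return response in {"moderately", "very"}

def dichotomize_acronym(correct_letters: int, total_letters: int) -> bool:
    # ARCH (4 letters) counts as "yes" with 3-4 correct;
    # SPIKES (6 letters) counts as "yes" with 5-6 correct
    return correct_letters >= total_letters - 1

print(dichotomize_rating("somewhat"))   # not counted as positive
print(dichotomize_acronym(3, 4))        # ARCH with 3 of 4 letters counts
print(dichotomize_acronym(4, 6))        # SPIKES with 4 of 6 letters does not
```

Encoding both rules as "all letters or all but one" keeps the ARCH and SPIKES cutoffs in a single function.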
We used multiple logistic regression analysis for the unmatched data sets to compute the rates at each time point and their confidence intervals. We used separate generalized linear mixed models (GLMMs) to analyze the matched data sets: one for the 22 individuals who responded to the pre-, post-, and post-post surveys, and one for the 78 individuals who responded to the pre- and post-surveys. Each model used the logit link function for binomial data and included an individual subject-specific effect plus a time factor. No P value adjustment was performed across the overall models (participant-reported competence and comfort, correct ARCH, or correct SPIKES); however, within each model, the 95% confidence intervals were adjusted for multiple comparisons at each time point.
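For intuition about the quantities reported in Table 3, a simple normal-approximation interval for one unmatched rate can be computed as follows. This is an illustrative sketch only: the study derived its rates and adjusted confidence intervals from the regression models, and the count of 47 positive responses is hypothetical (chosen to land near the reported pre-survey competence rate of 0.33 with 95% CI 0.26 to 0.41):

```python
import math

def rate_with_ci(successes: int, n: int, z: float = 1.96):
    """Proportion with a normal-approximation 95% confidence interval."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical example: 47 of 142 pre-survey respondents rated positive
p, low, high = rate_with_ci(47, 142)
print(round(p, 2), round(low, 2), round(high, 2))
```

The approximation reproduces the reported rate and comes close to the published interval; small differences are expected because the model-based intervals were also adjusted for multiple comparisons.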
Given the mix of participants (ranging from premedical student to chair of a family medicine department), results in the post- and post-post surveys could be biased by some participants not being medical educators. To determine whether the results were biased, a dummy variable, "medical educator vs noneducator," was created. The variable medical educator (yes/no) was defined as a participant who routinely would (medical educator) or would not (noneducator) give high-stake, bad news, and complex feedback. For this sensitivity analysis, this dummy variable was entered into the multiple logistic and GLMM models. We compared results between models with the variable entered versus not, and with and without the interaction term (medical educator × time, both nominal). All statistical analysis was performed using JMP version 17 (SAS Institute).
Our participants were primarily physician faculty (60.5%), followed by associate program director (7.7%), and behavioral health faculty (7.0%). At baseline, participants most commonly gave feedback every quarter (45.1%), followed by once a month (25.4%). At baseline, most participants did not have a system for giving difficult feedback (72.5%). At baseline, most participants felt somewhat (59.2%) or moderately (31.7%) competent giving difficult feedback, and not at all (27.5%) or somewhat (57.0%) comfortable giving difficult feedback. At baseline, most participants did not correctly identify the ARCH (75.3%) or SPIKES (81.7%) acronyms (Table 2).
| Characteristic | Variable | n (%) |
|---|---|---|
| Current position | Educators: | |
| | Physician faculty | 86 (60.5) |
| | Associate/assistant program director | 11 (7.7) |
| | Behavioral health faculty | 10 (7.0) |
| | Program director | 9 (6.3) |
| | Fellow | 6 (4.2) |
| | Medical resident | 5 (3.5) |
| | Pharmacy faculty | 1 (0.7) |
| | Other (chair, clerkship director, nurse practitioner) | 3 (2.1) |
| | Noneducators: | |
| | Medical student | 7 (4.9) |
| | Other (3 coordinators, 1 premedical student) | 4 (2.8) |
| Frequency of giving difficult feedback | Never | 8 (5.6) |
| | Once in my lifetime | 3 (2.1) |
| | Once a year | 17 (12.0) |
| | Once a quarter | 64 (45.1) |
| | Once a month | 36 (25.4) |
| | Once a week | 14 (9.9) |
| Do you use a system for giving difficult feedback? | No | 103 (72.5) |
| | Yes (ARCH, sandwich, ask-tell-ask) | 39 (27.5) |
| How competent do you feel when giving difficult feedback to learners? | Not at all competent | 11 (7.7) |
| | Somewhat competent | 84 (59.2) |
| | Moderately competent | 45 (31.7) |
| | Very competent | 2 (1.4) |
| How comfortable do you feel when giving difficult feedback to learners? | Not at all comfortable | 39 (27.5) |
| | Somewhat comfortable | 81 (57.0) |
| | Moderately comfortable | 18 (12.7) |
| | Very comfortable | 4 (2.8) |
| Number of ARCH acronym letters correctly identified | 0 | 107 (75.3) |
| | 1 | 5 (3.5) |
| | 2 | 5 (3.5) |
| | 3 | 9 (6.3) |
| | 4 | 16 (11.3) |
| Number of SPIKES acronym letters correctly identified | 0 | 116 (81.7) |
| | 1 | 2 (1.4) |
| | 2 | 1 (0.7) |
| | 3 | 4 (2.8) |
| | 4 | 6 (4.2) |
| | 5 | 5 (3.5) |
| | 6 | 8 (5.6) |
| Session location and pre-survey participation rates | 11/1/19 Annual FMEC Conference | 4/4 (100) |
| | 10/2/20 Annual FMEC Conference | 7/7 (100) |
| | 2/1/21 STFM Medical Student Conference | 53/75 (~70) |
| | 9/7/21 UPMC St. Margaret Family Medicine Residency Faculty Meeting | 8/8 (100) |
| | 9/29/21 Washington Hospital (PA) Family Medicine Residency Faculty Meeting | 10/10 (100) |
| | 5/1/22 STFM Annual Conference | 61/148 (41) |
Most sessions were 60 minutes long, with only 18 participants receiving a 90-minute session. Because the active learning time and the lead presenter were the same, we combined the data from the 60-minute and 90-minute sessions rather than analyzing them separately.
The medical-educator sensitivity analyses showed that 11 of the 142 total participants who completed the pre-survey were noneducators. Six of the 11 completed the post-survey, and three completed all three surveys. In all but one statistical model, we found no difference in rates between medical educators and noneducators from the pre- to post- and post-post surveys. The only exception was in the matched (n = 22) analysis, in which all three noneducators scored zero on participant-reported competence and comfort and knowledge in the pre-survey but had rates similar to the medical educators on the post- and post-post surveys. Given the small sample sizes, lack of statistical significance, and similar rates, all results in Table 3 include both medical educators and noneducators.
| | Pre-survey rate (95% CI) | Post-survey rate (95% CI) | Post-post survey rate (95% CI) |
|---|---|---|---|
| A. Unmatched | N = 142 | N = 78 | N = 22 |
| Competence^a | 0.33 (0.26, 0.41) | 0.69 (0.58, 0.78) | 0.73 (0.52, 0.87) |
| Comfort^b | 0.16 (0.10, 0.22) | 0.54 (0.43, 0.64) | 0.59 (0.39, 0.77) |
| Correct ARCH^c | 0.18 (0.12, 0.25) | 0.98 (0.93, 1.00) | 0.46 (0.27, 0.65) |
| Correct SPIKES^d | 0.09 (0.05, 0.15) | 0.94 (0.87, 0.98) | 0.46 (0.27, 0.65) |
| B. Matched^e | N = 22 | N = 22 | N = 22 |
| Competence^a | 0.36 (0.20, 0.57) | 0.78 (0.57, 0.90) | 0.73 (0.52, 0.87) |
| Comfort^b | 0.18 (0.07, 0.39) | 0.50 (0.31, 0.69) | 0.59 (0.39, 0.77) |
| Correct ARCH^c | 0.09 (0.03, 0.28) | 1.00 (0.85, 1.00) | 0.46 (0.27, 0.65) |
| Correct SPIKES^d | 0.05 (0.01, 0.22) | 0.94 (0.78, 0.99) | 0.46 (0.27, 0.65) |
| C. Matched^f | N = 78 | N = 78 | – |
| Competence^a | 0.26 (0.17, 0.36) | 0.69 (0.58, 0.78) | – |
| Comfort^b | 0.13 (0.07, 0.22) | 0.54 (0.43, 0.64) | – |
| Correct ARCH^c | 0.21 (0.13, 0.31) | 0.98 (0.93, 1.00) | – |
| Correct SPIKES^d | 0.09 (0.04, 0.17) | 0.94 (0.87, 0.98) | – |
Participant-reported competence and comfort significantly increased and stayed elevated in both the post- and post-post surveys compared to the pre-surveys (Table 3). Correct identification of the ARCH acronym rose from 18% on the pre-survey to 98% on the post-survey (P<0.01) and remained significantly increased at 46% (P<0.05) on the post-post survey. Correct identification of the SPIKES acronym followed the same pattern: 9% on the pre-survey, 94% on the post-survey (P<0.01), and 46% on the post-post survey (P<0.05). These data were similar for the unmatched and matched analyses (Table 3).
Of the 78 participants who completed the post-survey (55% of those who completed the pre-survey), most said they were moderately likely (37%) or very likely (46%) to use ARCH-SPIKES in the next 6 months. Of the 22 participants who completed the post-post survey (15% of those who completed the pre-survey and 28% of those who completed the post-survey), 36% had used the ARCH-SPIKES model two or more times in the previous 6 months (data not displayed in the tables).
This highly interactive 60- to 90-minute workshop improved participant-reported competence and comfort with, and participant knowledge of, the ARCH-SPIKES model directly after the session. This improvement was long-lasting: at 6 to 9 months, participant-reported competence and comfort remained at immediate post-survey levels, and knowledge of the acronyms persisted in about half of the participants. Of those who completed the post-post survey, 36% had used the ARCH-SPIKES model since the workshop. While individual participants cannot be identified, the percentage of people who recalled the acronyms was similar to the percentage who used the model in the 6 months before the post-post survey, reinforcing the adage that skills must be used or they will be lost. This study's workshop provided the "bolus" learning that is often gained by attending conferences or local faculty development sessions. It also reinforced the need for repeated faculty development on important topics to keep skills sharp, especially skills that are not used day-to-day, like giving high-stake, bad news, and complex feedback. This reinforcement can be done with small-increment "drip" faculty development through internal programming, external programming, or self-directed review of the concepts. Using more interactive strategies, higher on Dale's cone of learning, leads to higher retention of skills and knowledge.22
The discomfort with giving high-stake, bad news, and complex feedback prior to the workshop aligns with existing literature where many educators try to balance the needs of their learners (e.g., the need to maintain self-esteem while getting the information needed to improve) with their own needs (e.g., the need to help learners to improve while not being perceived as mean).1 As participants practiced the ARCH-SPIKES skills, their participant-reported competence and comfort increased immediately and at 6 months. This sustained increase can help educators perform high-stake, bad news, and complex feedback more readily with their learners. Sharing feedback often and with candor can help learners develop a feedback mindset and help a culture of feedback develop at institutions.23
While one prior study examined SPIKES for feedback, it focused on peer feedback.20 Our study is the first to combine the ARCH and SPIKES models for medical educators giving high-stake, bad news, and complex feedback. Given the familiarity and simplicity of these models, we saw an increase in participant-reported comfort with, competence with, and knowledge of ARCH-SPIKES after just one exposure. This increase is likely largely due to the workshop's many interactive teaching techniques: think-pair-share, large-group discussion, videos, cold calling, small-group work, role-play, and large-group debrief. These techniques move learners into more active areas of Dale's cone of learning and improve knowledge acquisition and retention.22,24
Strengths
This study evaluated a wide range of Kirkpatrick's levels of evaluation in a wide range of participants over a long period of time.21 The response rate of the post-survey was high, likely because (1) most of these sessions were in-person, (2) time was given to complete the survey at the end of the session, and (3) an incentive was offered for completing all three surveys. The statistical results of the unmatched and matched data analyses were almost identical, and the sensitivity analysis that separated medical educators from noneducators did not alter the results.
Limitations and Future Directions
The post-post survey participants constituted a small group, likely because that survey was sent electronically. While participants were not asked about their number of years working in medical education, the survey did ask how often they gave high-stake, bad news, complex feedback to assess experience with the workshop skills. Even so, years of working within medical education could confound the effects of this study. Six months might have been too long an interval to assess skill retention. The competence with, comfort with, and use of ARCH-SPIKES data were self-reported and subjective. This limitation was exacerbated by some participants never or rarely giving high-stake, bad news, or complex feedback before the workshop. This study did not measure the emotional or behavioral impact this kind of feedback had on the learner (Kirkpatrick's evaluation level 4, impact on results).21 Feedback is a complicated and iterative process highly influenced by the relationship between the recipient and the giver as well as the psychological safety of the learning environment.25,26 This study focused on only one part of that complicated process: the skill development of the giver.
This highly interactive 60- to 90-minute session based on the combined ARCH-SPIKES model increased the participant-reported competence and comfort with, knowledge of, and use of ARCH-SPIKES for giving high-stake, bad news, and complex feedback. Because giving this type of feedback is infrequent, this kind of training gives medical educators an opportunity to practice and develop their skills outside of these stressful situations. Over time, lack of repeated exposure decreased knowledge retention, but not participant-reported competence or comfort. While the study objectively measured knowledge, it did not assess skills during role-play; doing so could be one way to further investigate this novel model. Because this type of feedback is infrequent and emotionally charged, data from feedback receivers would be best collected in a simulated environment rather than after real-life experiences. This study offers a novel framework for, and an effective method of learning how to shape, educators' high-stake, bad news, and complex feedback.
References
- Kogan JR, Conforti LN, Bernabeo EC, Durning SJ, Hauer KE, Holmboe ES. Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Med Educ. 2012;46(2):201–215. doi:10.1111/j.1365-2923.2011.04137.x
- Amonoo HL, Longley RM, Robinson DM. Giving feedback. Psychiatr Clin North Am. 2021;44(2):237–247. doi:10.1016/j.psc.2020.12.006
- Hewson MG, Little ML. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13(2):111–116. doi:10.1046/j.1525-1497.1998.00027.x
- Jug R, Jiang XS, Bean SM. Giving and receiving effective feedback: a review article and how-to guide. Arch Pathol Lab Med. 2019;143(2):244–250. doi:10.5858/arpa.2018-0058-RA
-
- Milan FB, Parish SJ, Reichgott MJ. A model for educational feedback based on clinical communication skills strategies: beyond the "feedback sandwich". Teach Learn Med. 2006;18(1):42–47. doi:10.1207/s15328015tlm1801_9
- Qureshi NS. Giving effective feedback in medical education. The Obstetrician & Gynaecologist. 2017;19(3):243–248. doi:10.1111/tog.12391
- Shrivastava SR, Shrivastava PS, Ramasamy J. Effective feedback: an indispensable tool for improvement in quality of medical education. J Pedagog Dev. 2014;4(1):12–20.
- Weallans J, Roberts C, Hamilton S, Parker S. Guidance for providing effective feedback in clinical supervision in postgraduate medical education: a systematic review. Postgrad Med J. 2022;98(1156):138–149. doi:10.1136/postgradmedj-2020-139566
- Anderson PAM. Giving feedback on clinical skills: are we starving our young? J Grad Med Educ. 2012;4(2):154–158. doi:10.4300/JGME-D-11-000295.1
- Brukner H. Giving effective feedback to medical students: a workshop for faculty and house staff. Med Teach. 1999;21(2):161–165. doi:10.1080/01421599979798
- Kelly E, Richards JB. Medical education: giving feedback to doctors in training. BMJ. 2019;366:l4523. doi:10.1136/bmj.l4523
- Natesan S, Jordan J, Sheng A, et al. Feedback in medical education: an evidence-based guide to best practices from the Council of Residency Directors in Emergency Medicine. West J Emerg Med. 2023;24(3):479–494. doi:10.5811/westjem.56544
- Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34(10):787–791. doi:10.3109/0142159X.2012.684916
-
- Baker SD. The ARCH feedback and guidance model: practical strategies for implementation. The Florida Pediatrician. 2026.
- Resnicow K, McMaster F. Motivational interviewing: moving from why to how with autonomy support. Int J Behav Nutr Phys Act. 2012;9(1):19–27. doi:10.1186/1479-5868-9-19
-
- Baile WF, Buckman R, Lenzi R, Glober G, Beale EA, Kudelka AP. SPIKES-A six-step protocol for delivering bad news: application to the patient with cancer. Oncologist. 2000;5(4):302–311. doi:10.1634/theoncologist.5-4-302
- Kistler EA, Chiappa V, Chang Y, Baggett M. Evaluating the SPIKES model for improving peer-to-peer feedback among internal medicine residents: a randomized controlled trial. J Gen Intern Med. 2021;36(11):3410–3416. doi:10.1007/s11606-020-06459-w
- Kirkpatrick DL. Techniques for evaluating training programs. J Am Soc Train Dir. 1959;13:21–26.
- Dale E. The cone of experience. Dryden Press; 1946:37–51.
- Bakke BM, Sheu L, Hauer KE. Fostering a feedback mindset: a qualitative exploration of medical students' feedback experiences with longitudinal coaches. Acad Med. 2020;95(7):1057–1065. doi:10.1097/ACM.0000000000003012
- Rudolph AL, Lamine B, Joyce M, Vignolles H, Consiglio D. Introduction of interactive learning into French university physics classrooms. Phys Rev ST Phys Educ Res. 2014;10(1). doi:10.1103/PhysRevSTPER.10.010103
- Ajjawi R, Bearman M, Molloy E, Noble C. The role of feedback in supporting trainees who underperform in clinical environments. Front Med (Lausanne). 2023;10. doi:10.3389/fmed.2023.1121602
- Molloy E, Ajjawi R, Bearman M, Noble C, Rudland J, Ryan A. Challenging feedback myths: values, learner involvement and promoting effects beyond the immediate task. Med Educ. 2020;54(1):33–39. doi:10.1111/medu.13802