Our approach to medical education has changed dramatically over the past decade. In the twentieth century, experience-based learning (EBL) was the guiding paradigm.1 Students and residents worked long hours in real-life clinical situations with the goal of attaining mastery through repetition. The roles of the faculty in EBL were to assure patient safety, model excellence, and foster reflection by the learners. The new twenty-first century model, competency-based medical education (CBME), breaks the practice of medicine down into a series of competencies that can be systematically assessed and documented.2 The competencies are organized into six core domains: professionalism, patient care and procedural skills, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, and systems-based practice. Related competencies are grouped into clinical milestones and entrustable professional activities (EPAs), and the role of the faculty is expanded to include closely evaluating whether each competency has been attained. CBME has been embraced by the accrediting bodies for medical schools and by the Accreditation Council for Graduate Medical Education (ACGME) for residency programs. The model is now so widespread that the readers of this journal are probably familiar with it.
While CBME is generally accepted, few of us are well versed in the evidence, or lack thereof, for its effectiveness. So it is a welcome addition to our literature when we encounter a scholarly assessment of that evidence. In the past year, our journal has published two papers from researchers at the University of Ottawa that evaluate how CBME is being implemented in family medicine residency programs in Canada and the United States. The first, published in our April 2020 issue, was a scoping review of CBME implementation.3 The second appears in this issue and reviews the literature on how resident and practicing physician competencies are being assessed.4 Only 37 papers published between 2000 and 2020 met the inclusion criteria for this review, so the body of literature is sparse. Not surprisingly, most of the studies focused on formative evaluation rather than outcomes. Only 14 addressed the reliability of assessment measures or their impact on faculty and residents. None addressed clinical outcomes. Evidence from other medical specialties was not included in either review.
Although CBME is often described as new, the concept dates at least as far back as 1978, when McGaghie and colleagues published a book on the subject for the World Health Organization.5 The model gained traction in the United States after the implementation of resident work hours restrictions early in this century. Two arguments tend to underpin the rationale for CBME. The first is that EBL required learners to keep repeating tasks until, eventually, they figured them out. Faculty supervision was often lax, raising legitimate concerns about patient safety, not to mention the potential detrimental impact on the learners themselves. The second is that EBL seemed to treat competency as a threshold for learners to meet rather than a continuum through which they pass. Repetition of experience was felt to be central to learning. In his 2008 book, Malcolm Gladwell illustrated this concept when he posited that 10,000 hours of practice are required to master a technical skill.6 But experience in medical education was always a hit-or-miss proposition, and the effort required to attain the requisite volume of experience was substantial. So the medical education world was ripe for a new model, and CBME filled the gap nicely.
But there are serious problems with CBME too. First, the model has been implemented during a time when the volume of clinical experience for students and residents is decreasing. To illustrate, imagine that a skilled medical educator is asked to assess the competency of two family medicine residents to deliver a baby, one after the 10th delivery and the other after the 100th. The pre-assessment likelihood of competency is much higher for the second resident, and Bayes' Theorem teaches us that this dramatically affects the predictive value of the assessment. Thus, CBME can only work if faculty members attain high sensitivity and specificity in evaluating competency. But the amount of time faculty can devote to the education process is also decreasing in the face of patient care productivity demands. The 1997 program requirements for family medicine defined a full-time faculty member as one who devotes at least 1,400 hours annually to the residency, while the 2020 requirements have no such standard.7 Furthermore, there is little evidence that every faculty member has the requisite skill to assess competency to the rigorous standard CBME demands, even given the time. Asking faculty with unproven assessment skills and insufficient time to evaluate learners with decreasing volumes of clinical experience is a recipe for failure. This is not to say that CBME should be abandoned, but it would be helpful if policy leaders stopped framing healthy skepticism as resistance to change.8
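To make the Bayesian point concrete, consider a rough calculation; the sensitivity, specificity, and prior probabilities below are illustrative assumptions, not figures drawn from either review. If a faculty assessment of delivery competence has 90% sensitivity and 90% specificity, its positive predictive value is

\[
\mathrm{PPV} = \frac{\mathrm{sensitivity} \times p}{\mathrm{sensitivity} \times p + (1 - \mathrm{specificity})(1 - p)},
\]

where \(p\) is the pre-assessment probability that the resident is competent. If \(p = 0.30\) after 10 deliveries, then \(\mathrm{PPV} = 0.27/0.34 \approx 79\%\), meaning roughly one in five attestations of competence would be mistaken. If \(p = 0.90\) after 100 deliveries, then \(\mathrm{PPV} = 0.81/0.82 \approx 99\%\). The same assessor, applying the same standard, produces judgments of very different trustworthiness depending on the learner's accumulated experience.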
To some extent, the coronavirus pandemic has brought these problems into sharper focus. On most medical school campuses, students were sent home when the pandemic began over concerns ranging from student safety and legal liability to the shortage of personal protective equipment. But consider this: a student who spent 4 months at home rather than on clinical rotations during the all-important clinical years of medical school has missed up to one-sixth of the clinical experience required of previous classes. Will these students graduate on time? Of course, most of them will. Will the faculty feel pressure to attest that the students have attained the necessary competencies to graduate even with less experience on which to base these judgments? Of course, they will. Writing from the perspective of general surgery training in a 2015 commentary, Williams and colleagues prophetically stated, “Medical educators are at a dangerous junction in the milestones movement in graduate medical education. The pressure to efficiently use program directors’ and faculty members’ time, particularly in the increasing clinical-revenue-dependent model of the academic medical center, must be balanced with maintaining the integrity of the evaluation process.”9
CBME fits nicely with a reductionist view of the world and naturally appeals to physicians and policy leaders who share that philosophy. But clinical competency comprises more than a laundry list of specific skills. The whole cannot be reduced to the sum of its parts, particularly in a discipline as broad as family medicine. Nowhere is this more evident than when trying to assess competencies in the professionalism domain or when evaluating family physician-patient relationships. Competency cannot always be atomized into measurable pieces, and when it can be, the accumulated experience of both learners and faculty members matters a great deal. In fact, EBL and CBME complement one another. CBME will only work if it goes hand in hand with robust learner experience; competency really is more accurately assessed at the 100th obstetric delivery than at the 10th. And the model depends entirely on faculty having the time and the skill to assess learners with high predictive value. Adopting CBME is not an excuse for diluting standards for learner experience and cutting corners on faculty time, but that is how it is playing out in today’s world. This should concern all of us.