Overall ABIM pass rates fell from 90% in 2009 to just 78% in 2013 for first-time takers of the Maintenance of Certification (MOC) internal medicine board exam, according to statistics released annually by the American Board of Internal Medicine (ABIM).

The hard numbers from ABIM show that of 5,772 internists attempting the recertification exam in 2013, some 1,270 did not pass on their first attempt, nearly triple the number who failed on first attempt in 2009.
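The reported figures are internally consistent; a quick check using only the numbers quoted above (no other data is assumed):

```python
# Sanity-check the 2013 ABIM MOC first-attempt figures quoted in the article.
takers_2013 = 5772   # first-time MOC exam takers in 2013
failed_2013 = 1270   # did not pass on first attempt

passed_2013 = takers_2013 - failed_2013
pass_rate = passed_2013 / takers_2013
print(f"2013 first-attempt pass rate: {pass_rate:.0%}")  # prints "78%"
```

Rounded to the nearest whole percent, 4,502 of 5,772 works out to the 78% pass rate ABIM reported.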

[Table: ABIM MOC exam results by year — columns: Year, First-Time Takers, Pass Rate, Passed, Failed]


In its July 2014 response letter to the anti-MOC petitioners, ABIM attempted to clear up the confusion between first-attempt pass rates and ultimate pass rates (which include retakes), saying that first-attempt pass rates have been declining over the past 5 years while ultimate pass rates have remained fairly constant at 95% to 98%.

Many Theories for Why ABIM Pass Rates Are Falling

As might be expected, the overall decline in ABIM pass rates has engendered much debate, both within medical teaching and learning communities and across the medical blogosphere. What is surprising, perhaps, is the diversity of opinions about what might be driving the decline in ABIM pass rates.

Opponents of the new ABIM MOC requirements claim that the lower pass rates are proof that ABIM is surreptitiously making the exam more difficult in order to bring in additional revenue from repeat attempts to pass, despite explicit ABIM claims to the contrary.

Beyond the usual anti-MOC rhetoric, however, a number of additional theories (some might say straw men) have emerged to explain the alarming downtrend in ABIM pass rates. To summarize briefly:

  • Technology — namely, the ready ability to search for and find just about any piece of information within a few seconds — has diminished the willingness of physicians (especially younger generations) to study, memorize, and carry around certain bases of medical knowledge inside their brains.
  • Work compression — the trend of forcing physicians to accomplish more in less time — is undermining both initial and lifelong learning by providing attending physicians too little time to teach properly and leaving many physicians too fatigued and stressed to focus on consistently reinforcing and updating their medical knowledge.
  • Big data — the rate of increase in the sheer quantity of medical information and evidence is outpacing both human capacities for acquiring and retaining knowledge and medical boards’ abilities to evolve appropriate testing methods (while the boards still test knowledge of facts, they should be testing abilities to rapidly find, synthesize and apply facts for successful outcomes).
  • Electronic record keeping — proliferating use of and reliance on health information technology (HIT) record-keeping systems in clinical settings is weakening physicians’ development and ongoing refinement of their cognitive skills.
  • Expanding pool of MOC diplomates — greater numbers of physicians are now feeling compelled to participate in MOC (even if they had been grandfathered with board certification originally). Reluctance to participate, long time spans between formal medical training and re-examination, outmoded studying and test-taking skills — or some combination of all three — may also be contributing to declining first-time pass rates.

A 2013 post on The Health Care Blog, by John Schumann, MD — Why Are So Many Younger Doctors Failing Their Boards? — has become ground zero for a heated intergenerational debate, with Schumann suggesting that the study habits of millennials are inferior to those of their boomer and gen X colleagues. Schumann wrote:

One concern that has a ring of truth to it is that young doctors have become great “looker uppers,” and have lost the sense of what it’s like to actually read and study medicine. While doctors enter the profession with a commitment to lifelong learning, some of us fear that the young folk only go far enough to commit to lifelong googling.

Millennials bit right back, however, with Teresa Chan, MD, Assistant Professor, Division of Emergency Medicine, McMaster University writing a post for Boringem.org titled Not Dumber, but Different? Counterpoint from a Millennial. Dr. Chan wrote:

I am a Millennial who will be starting as an attending this year. I still passed my exams and still had to deal with learning the “old fashioned way.” In the end, memorizing factoids and endless lists was the way to beat the exam, but these skills do not always translate well into my daily practice as a physician. I am certain that studying to add to my existing knowledge about cytochrome P450 or the oxidative phosphorylation chain will help someday, but rarely does that problem present itself in my emergency medicine practice. I think the big problem underlying the current examinations systems in most specialties and jurisdictions is that they ask questions that often have not changed with the times. Most important, they value the lower levels of learning (e.g. Bloom’s Taxonomy level = ‘Remember’ and perhaps ‘Apply’) rather than critical reasoning and problem solving.

Meanwhile, over on The Health Care Blog and in direct response to Schumann, David Shaywitz, MD, wrote Are Young Doctors Failing Their Boards — Or Are We Failing Them?:

Let me suggest a third possibility – perhaps today’s doctors are providing better care to patients than their predecessors were a generation ago. Maybe today’s doctors have figured out that, in our information age, your ability to regurgitate information is less important than your ability to access data and intelligently process it. Maybe what makes you a truly effective doctor isn’t your ability to assert dominance by the sheer number of facts you’ve amassed, but rather on how well you are able to lead a care team, and ensure each patient receives the best care possible. In other words, what if the problem isn’t the doctors, who are appropriately adapting, but rather the tests (and the medical establishment), which may not be?

Carving out some middle ground in the debate, a commenter on the Shaywitz rebuttal, identifying herself as Dr. Leora Horwitz, wrote:

Having just (phew!) passed my first recert exam, I read this post and the related one with interest. I think both viewpoints are important. I agree with Dr. Shaywitz that, for complicated hospital inpatients, where you have a lot of time to spend per patient, it is more important that physicians have the skills to look up the latest treatments, differential diagnoses, etc than necessarily trying to memorize a whole ton of facts that change on a regular basis. On the other hand, though, I do most of my clinical work in an outpatient primary care setting. At 20 minutes per patient (generously), I really need a pretty comprehensive and accurate fund of knowledge that I can access without doing a lot of real-time looking up…[At] a minimum I need to know enough to know what to look up. So I think it’s reasonable to test me on the fund of knowledge I should be expected to use on a daily basis and to test my general pattern recognition for common or deadly complaints.

While agreeing with Dr. Shaywitz, another commenter identifying himself as William Hersh, MD, said:

This development of falling board scores is a concern, but before we ascribe blame to health IT, Gen Y/millennial laziness or anything else, can we see some data supporting the assignment of blame? Let’s not let this finding be a Rorschach test for everything each of us does not like about medicine, health care, or society.

ABIM Pass Rates for Initial Certifiers versus Recertifiers

An important point to note is that among physicians seeking initial certification, first-time pass rates have actually remained fairly stable (and have even improved slightly over the past couple of years).

[Table: ABIM initial certification exam results by year — columns: Year, First-Time Takers, Pass Rate, Passed, Failed]


Millennials may be tempted to take this as a triumph over older generations, whose ABIM recertification pass rates have continued to fall. But higher first-attempt pass rates for initial certification may instead reflect residency programs taking more direct action to improve their graduates’ first-time board performance: programs know those exam results affect their status with the Accreditation Council for Graduate Medical Education (ACGME) and are reported publicly by ABIM, where they may be read as general indicators of residency-program quality.

In 2012, researchers at the Cleveland Clinic actually developed a nomogram to predict a resident’s probability of passing the American Board of Internal Medicine’s examination. Interestingly, the strongest predictors for passing the ABIM exam turn out to be scores on other standard exams for residents. This would seem to lend strength to notions that students with the best study habits (and, perhaps, the strongest test-taking skills) are more likely to pass on their first attempts. However, the Cleveland researchers do find that making more time available for dedicated study is also a predictor of success:

An interesting finding is the influence that the number of calls during the [past] 6 months of residency has upon the ABIM probability of success. Although small in comparison to the ITE score relevance, it suggests that easier rotations in months at the end of the residency give residents an additional advantage and increases their chance of passing the board. This could be effectively used by the program directors when scheduling rotations for the residents.

Absent predictive nomograms and residency program directors motivated to ensure first-attempt success, physicians practicing outside of active medical teaching environments are on their own when it comes to figuring out precisely what they need to learn and which study methods will best ensure they pass MOC exams on the first try.

Ultimate Goal: Better Outcomes for Patients

A close reading of the various points and counterpoints in the numerous debates surrounding the downtrend in first-time ABIM pass rates actually reveals more underlying agreement than disagreement. Most commentators would appear to agree that physicians do need a certain base of specific and up-to-date medical knowledge — readily recalled under high cognitive stress — so they can at least recognize when their knowledge is insufficient in a given situation and know when to reach out for advice from colleagues or look up information before making critical patient care decisions.

Some doctors, regardless of age or generation, need to see greater value in learning, retaining, and being able to readily recall current medical knowledge. Others need to become better at knowing and admitting when they don’t know something as well as, perhaps, they should. All of this brings to mind a conversation we reported last spring with Dr. Ulrik Christensen, whose lifetime of work and research into preventing medical errors has evolved into the adaptive learning technology that underpins and powers the new NEJM Knowledge+ medical learning platform. Dr. Christensen said:

We observed through … [medical] simulators that a physician’s inability to recall trivial information would place them under excessive cognitive workload, leading to errors where, for example, they might fail to ask a colleague for assistance. While we could try to train physicians to become better at asking for help, we also became fascinated with the question of “Why are they under high cognitive workload at all?” We started to look at how we might solve the problem from the other end — by improving learning and making it easier for physicians to recall basic medical knowledge so they can keep more cognitive capacity available for addressing the really difficult problems. From there, we started doing research into various tutoring systems. By taking study tools rooted exclusively in repetition and adding even simple intelligence, we discovered that we could make them much more effective. Essentially, we found a way to solve a very fundamental problem that affects learning in every field, including complex ones such as medicine.

While Dr. William Hersh makes a strong point about not allowing the alarming downtrend in ABIM pass rates to become “a Rorschach test for everything each of us does not like about medicine, health care, or society,” the debate inspired by the trend at least seems to be provoking some deep thinking about lifelong medical learning in general and how tools for learning and knowledge assessment must continue to evolve.

The secure exam will remain in place, ABIM confirmed in their July 2014 letter, and will continue to “evolve with time as indeed it has” since 1936 “when ABIM was created by the medical community … as a standard setting organization.”

Whatever changes we make to the exam in the future related to content, format, delivery vehicle, feedback, etc. will need to support the use of the exam as a summative assessment tool that signifies competence in the disciplines of internal medicine. The community has requested a more modular, practice-relevant approach to summative assessment and we are convening a committee to explore how to move those ideas forward.

ABIM suggests that this committee will discuss the “development of the examination, including generation of the exam blueprint and its level of granularity.”

In addition, ABIM has a new initiative called “Assessment 2020,” which aims…

To help us improve and move forward…[W]e seek to engage physicians, the public and other important stakeholders in helping us think through the future of assessment for ABIM Certification and Maintenance of Certification.

We are very interested to hear your thoughts and ideas related to ABIM pass rates. Please share your comments and experience with us.