In Part 3 of our Q&A with Area9 Learning founder and CEO Ulrik Christensen, MD, we focused on how and why Area9 has modified its technology to fit lifelong medical education, Maintenance of Certification (MOC), and board review. Here in Part 4, we learn more about specific Area9 innovations aimed at more effective learning and helping physicians reinforce long-term memory.

Why is there so much emphasis in NEJM Knowledge+ on reinforcing physicians’ long-term memory?

Let’s say that at point A, a physician doesn’t know something. He or she learns it and moves up to point B. Over a certain period of time, the physician will start forgetting and will drop back down toward point A. He or she may relearn the information, moving back up to point B, and then the forgetting process repeats.

If the physician never relearns the information, he or she will eventually drop below the point-A threshold where memory retrieval becomes impossible. But if the physician continues to relearn, each successive decline from point B (knowing) toward point A (forgetting) becomes more gradual, meaning it takes longer and longer to reach the point of forgetting completely. Although the slope never fully disappears, periodic relearning helps to flatten it, which is why we have built an automated recharge function into NEJM Knowledge+ — to reinforce physicians’ long-term memory and retrieval capabilities.
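The decay-and-relearn cycle described here resembles the classic Ebbinghaus forgetting curve. The following is a minimal sketch assuming a simple exponential-decay model; the function names, the 50% "point A" threshold, and the stability-doubling rule are illustrative assumptions, not Area9's actual algorithm:

```python
import math

def retention(days_elapsed, stability):
    # Ebbinghaus-style exponential forgetting: fraction of material
    # still retrievable after a given number of days.
    return math.exp(-days_elapsed / stability)

def simulate_relearning(num_reviews, initial_stability=2.0, growth=2.0):
    # Each relearning event multiplies memory "stability," so the next
    # forgetting curve declines more gradually (the flattening slope
    # described in the interview).
    stability = initial_stability
    intervals = []
    for _ in range(num_reviews):
        # Days until retention falls to the 50% "point A" threshold.
        intervals.append(stability * math.log(2))
        stability *= growth
    return intervals
```

Under these assumptions, each successive interval is longer than the last: every relearning event buys more time before the next refresh is needed, which is the intuition behind an automated recharge schedule.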

And while the pattern of memory decay I am describing is well proven by science, we also have a theory that the closer one can get to point A (forgetting completely) before relearning or boosting knowledge, the more efficient the relearning becomes. We can’t prove it because it is impossible to predict or know when a person will forget something completely. It is possible, however, for our adaptive learning algorithms to sense, based on various indicators, when a physician might be starting to forget particular things and prompt them to refresh.

Can you give us an example of such an indicator and how you might use it to sense physicians’ long-term memory decay?

One of the ways we do this is to monitor a physician’s confidence in their knowledge. With each question, we ask them to rate their confidence in the answer they are choosing, and we time how long it takes them to rate themselves. This is something we invented, and it turns out to be an incredibly powerful device.

Let’s say, for example, that I ask you to name the capital of Sweden. You think the answer is Stockholm, but since you learned the information 30-plus years ago in grade school, you are not 100% certain. If you retrieve the information and then immediately receive confirmation that you are correct, it is a very strong learning event and a highly predictive indicator that you will be able to retrieve the information again later.

Now, imagine quizzing a medical student on the anatomy of the cardiovascular system. When she gets an answer correct, it may be because she truly knew the correct answer — or it may be that she was unsure between two possibilities and simply guessed right. In both scenarios, you and the student could come away believing she knew her anatomy — when in fact neither of you registered that she had some doubt about the correct answer, or that extra effort is needed to ensure the correct anatomy becomes embedded in memory. If, with each response, you were to ask the student for a quick confidence assessment, you would gain insight into which specific areas require additional review, making your help with her studying far more efficient. You would also prompt the student to reflect, in the moment, on the need to pay closer attention to the correct answer.
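One way to see why the confidence rating matters is to cross correctness with confidence, which yields four distinct learner states. This is a hypothetical sketch; the labels and priorities are illustrative, not NEJM Knowledge+ terminology:

```python
def classify_response(correct: bool, confident: bool) -> str:
    # Four-quadrant view of a single quiz response. Only the
    # correct-and-confident case signals reliable retrieval.
    if correct and confident:
        return "mastered"             # strong retrieval; light periodic refresh
    if correct and not confident:
        return "needs reinforcement"  # right answer, but doubt remains
    if not correct and confident:
        return "misconception"        # highest priority: learner is unaware of the gap
    return "needs study"              # learner is aware of the gap and ready to learn
```

A lucky guess lands in "needs reinforcement" rather than "mastered," which is exactly the distinction the student example turns on.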

We do this in our adaptive learning solutions and, taking it a step further, track the time it takes for a person to assess their confidence level. I do not think anyone else has looked at even a fraction of the number of people we have studied this way. What we have found is that, while a person’s ability to judge when they know an answer for sure changes very little over time, they get much faster at judging when they are unsure of answers. In fact, response times for deciding they are unsure drop linearly over the first half year or so of using our adaptive learning solutions. This is a very powerful skill for test preparation and something the best students are already very good at. Ultimately, we are looking for patterns that characterize the most successful learners and helping others replicate those patterns to improve their performance. We do it by helping them improve their metacognitive skills and use their time better.
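The latency finding could be checked with something as simple as a least-squares slope over a learner's confidence-rating times. A minimal sketch; the function name and the sample data are hypothetical, not drawn from Area9's telemetry:

```python
from statistics import mean

def latency_trend(latencies_ms):
    # Ordinary least-squares slope of confidence-rating latency across
    # successive sessions. A negative slope means the learner is getting
    # faster at recognizing when they are unsure.
    n = len(latencies_ms)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(latencies_ms)
    numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, latencies_ms))
    denominator = sum((x - x_bar) ** 2 for x in xs)
    return numerator / denominator
```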

So your adaptive learning technology is good enough to sense when a physician is guessing at the correct answers and only declares them proficient when they really know a subject?

Yes, our technology is premised on the idea of helping people to see clearly where their own weaknesses reside. When you think you know a subject better than you do, we say that you are “unconsciously incompetent.” That means you are essentially unable to learn because you will not be willing to commit time and attention to it. Whenever we can move a person from being “unconsciously incompetent” on a subject to “consciously incompetent,” they become like a sponge upon which we can simply pour knowledge.

We do have safeguards, though, to ensure that people do not become discouraged from continuing to work in our systems. Let’s say, for example, that a physician is struggling to master a certain subject. The system will offer some questions on which he or she stands a better chance of answering correctly, even by guessing. The system will not be fooled into believing a physician knows something they don’t, and it will continue to challenge them in areas of weakness. But we do understand that a certain level of success is needed to keep people engaged. That is a significant difference between our adaptive learning system and others in the marketplace.
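The engagement safeguard can be pictured as a simple difficulty-interleaving rule: when the learner's recent success rate is low, an occasional easier item is served, while challenging items still dominate. This sketch is purely illustrative; the threshold, cadence, and function name are assumptions, not Area9's implementation:

```python
def pick_next_difficulty(recent_results, question_index,
                         target_success=0.6, relief_every=4):
    # recent_results: booleans for the learner's last few answers.
    # If the learner is struggling, every fourth question is an easier,
    # confidence-building item; mastery tracking is unaffected, so the
    # system is never fooled about what the learner actually knows.
    if recent_results:
        success_rate = sum(recent_results) / len(recent_results)
        if success_rate < target_success and question_index % relief_every == 0:
            return "easier"
    return "challenging"
```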

Can you elaborate on the importance and process of creating content for adaptive learning environments such as NEJM Knowledge+?

For an adaptive learning environment to be effective, it is critical that you take cutting-edge technology and combine it with content that has been made specifically for an adaptive environment. NEJM Knowledge+ is a true marriage of technology and content: NEJM Group did not attempt to simply pour off-the-shelf content into an off-the-shelf adaptive learning engine and then hope for a miracle. They have developed board review cases and questions that work perfectly with the adaptive learning environment.

In fact, because NEJM Knowledge+ incorporates higher-level learning objectives — analysis, strategy, and so forth — it has been necessary to modify our base technology. We have custom-built an adaptive engine for this particular kind of medical board review learning and populated it with content (clinical cases in the format of board review questions) that has been custom-built by NEJM Group to be delivered in this very specific way. This is very unusual and, because it’s expensive to do, can only be done in areas with high potential impact and value.

NEJM Knowledge+ has enormous potential to make the Maintenance of Certification (MOC) process more effective and efficient. And the quality of physician education really matters here. NEJM Knowledge+ is easily the most ambitious MOC product available. I have never seen anything that comes even close in terms of scope. Even when you compare it to adaptive learning deployments in colleges and universities, it is a really big initiative and could only be accomplished by an organization such as NEJM Group.

Stay tuned for Part 5 in our Q&A series, which covers how medical professionals can expect to benefit from adaptive learning technology both in lifelong medical learning and in medical board review.