How nudging often goes wrong, and key observations for effective strategies
Behavioral science research has repeatedly demonstrated the effectiveness of cheap, easy strategies for improving health care by subtly influencing physician decisions. For example, unneeded antibiotic prescriptions plummeted when family practice physicians were routinely told whether they were a “top performer” or “not a top performer” at preventing overprescribing, according to studies published in JAMA since 2014. In other research, prescription costs dropped sharply when generic equivalents of name-brand drugs were made the default options in the electronic record menus physicians used to choose drugs.
These and other “nudge” strategies were breakthroughs in medicine, offering relatively simple solutions to problems that traditional tools had failed to fix. There’s hope that similar nudges aimed at changing clinician behavior can address other public health issues, such as dependence on prescription opioids and adverse medication interactions in the elderly.
Yet nudging continues to face significant criticism among health care professionals — not because of objections to the tactics per se, but because of skepticism that it actually works. Efforts to replicate successful experiments have had mixed results.
Now a group of physicians and behavioral scientists responsible for some of the most impressive nudge results — including Craig R. Fox and Noah J. Goldstein from UCLA Anderson; Jason N. Doctor and Daniella Meeker from the University of Southern California; and Stephen D. Persell and Jeffrey A. Linder from Northwestern University — is offering advice, published in the journal BMJ, for effectively applying their tactics in other settings.
The team speaks from experience. The authors spent years trying various strategies to eradicate antibiotic overuse — suggested drug alternatives and requirements for doctors to justify prescriptions among them — with modest or short-lived success. The routine performance emails to doctors, on the other hand, led to a long-lasting decline in inappropriate prescribing, from 20% to less than 4%.
The BMJ authors use experiments with disappointing results to illustrate how seemingly innocuous tweaks to study design can cause proven nudge strategies to fail. When this happens, they suggest, look for differences in implementation and context between the failed attempt and the version that worked. Small changes in those details, they say, can lead to outsized differences in results.
A Misfire in Switzerland
Skeptics of physician nudges often point to a failed national experiment in Switzerland. On the surface, the Swiss interventions aimed at reducing antibiotic prescriptions look much like those implemented by the BMJ team in dozens of U.S. practices. The Swiss, however, found no significant changes in prescribing rates afterward.
Fox and his co-authors point to several design differences that may have undermined the Swiss interventions. In the U.S. experiments, all prescribing clinicians received monthly emails with an attention-grabbing “you are (or are not) a top performer” in the subject line. Succinct text in the body showed each clinician’s rate of inappropriate prescribing next to the rates of “top performers,” the office peers with the lowest rates of inappropriate prescribing. (Antibiotics were considered “inappropriate” when prescribed for respiratory illnesses that did not meet the criteria for bacterial infections likely to respond to them. Overprescribing has spawned deadly antibiotic-resistant infections worldwide that no drugs can effectively treat.)
In the Swiss follow-on version, only the physicians with the highest rates of antibiotic prescribing, appropriate or not, received regular reports. These doctors got quarterly letters that included charts showing their individual rates of antibiotic prescribing per 100 consultations compared with the average rate for all primary care physicians in the country. Each letter also contained a code for accessing a dashboard with other details, such as the names of the antibiotics they most often prescribed and the ages of the patients who received them.
Those design changes, the BMJ authors suggest, distracted from the peer comparisons that researchers had demonstrated would motivate change. The U.S. study inspired more responsible prescribing by showing doctors when their prescribing rates were out of sync with those of better-performing peers. But the Swiss version eliminated comparisons to close peers, as well as clear indications that lower rates represented more responsible prescribing.
Linking individual rates to Swiss national averages and including additional data made room for physicians to justify their high rates of prescribing as necessary for their particular patient populations, even if those rates would be inappropriate at other practices. Left to judge their own prescribing practices, the Swiss recipients may not have seen any reason their antibiotic rates should be closer to the averages.
The U.S. emails, on the other hand, frequently and boldly hammered home the relevant message: “You are (or are not) doing better than most of your colleagues at solving this problem.”
Sabotaging Solid Nudges
A key feature of nudge strategies is that they guide people toward change without forcing it. In the antibiotic experiments, for example, physicians retained the right to order antibiotics, even when prescribing guidelines suggested they weren’t needed. Default options, which are very popular nudges, also leave open the possibility of a different choice.
Nudges work because they harness natural motivators that humans share, according to an abundance of research. Peer comparisons are effective because most people want to be seen by colleagues as good at their jobs and as useful team members. Pre-commitments have been shown to drive higher rates of voting, recycling and saving, for example, because following through reinforces our self-image as honorable, trustworthy people. And we’re drawn to default options in part because we’re busy people who appreciate recommended choices for complex decisions.
Sometimes new implementation strategies in nudge experiments inadvertently dampen or even eliminate the motivator necessary to make the nudge effective, Fox and his colleagues explain. Consider two separate studies that asked doctors to sign commitments to responsible antibiotic prescribing.
The effective commitment study — it reported a 20% reduction in inappropriate antibiotic prescribing — had physicians sign exam room posters, featuring their photos, that promised responsible prescribing.
A few years later, another experiment looking to change prescribing habits asked physicians to sign commitments that were then handed over to a study coordinator. Medical assistants pulled them out for physicians to review before they treated patients with relevant symptoms. That study reported only small, unsustained changes in prescribing habits.
Taking the signed commitments out of the public eye likely defanged a solid nudge strategy, according to the BMJ authors. Pre-commitments work, in part, because they improve the way others see us. With few witnesses to the promises, the physicians got little of this positive feedback, and patients had no reason to change their expectations about getting prescriptions.
Customized Persuasion
In an earlier paper, UCLA Anderson’s Job Krijnen and Fox, along with the University of Utah’s David Tannenbaum, cite numerous failed nudge experiments beyond health care. They describe defaults that choosers rejected en masse, pre-commitments to retirement accounts that actually reduced savings rates, and other nudge projects that went off the rails. The paper offers advice on designing persuasion projects that avoid making subjects overly self-conscious about their behavior or suspicious of the nudgers’ intentions — unintended triggers that can lead to counterproductive results.
The BMJ advice focuses on how to imitate successful experiments without inadvertently losing key elements, such as a message that holds attention and instills a clear understanding of the desired change. It describes how such factors require customization for the targeted individuals, even when the populations seem very similar. Orthopedic surgeons, for example, reacted very differently than other specialists to nudges aimed at cutting opioid prescriptions.
When a proven nudge does fail to replicate, it’s probably not productive to ask whether nudging itself works, they write. The better question, they conclude, is “What were the critical differences in implementation and context?” Studying those implementation details across various contexts, they argue, can advance both health science and the behavioral science that inspired the interventions in the first place.
Featured Faculty
Craig Fox
Harold Williams Chair and Professor of Management

Noah J. Goldstein
Bing (’86) and Alice Liu Yang Endowed Term Chair in Teaching Excellence; Professor of Management and Organizations; Faculty Advisor, Equity, Diversity and Inclusion
About the Research
Meeker D., Linder J.A., Fox C.R., et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562–570. doi:10.1001/jama.2016.0275
Linder J.A., Meeker D., Fox C.R., Friedberg M.W., Persell S.D., Goldstein N.J., Doctor J.N. Effects of Behavioral Interventions on Inappropriate Antibiotic Prescribing in Primary Care 12 Months After Stopping Interventions. JAMA. 2017;318(14):1391–1392. doi:10.1001/jama.2017.11152
Hemkens L.G., Saccilotto R., Reyes S.L., et al. Personalized Prescription Feedback Using Routinely Collected Data to Reduce Antibiotic Use in Primary Care: A Randomized Clinical Trial. JAMA Internal Medicine. 2017;177(2):176–183. doi:10.1001/jamainternmed.2016.8040
Meeker D., Knight T.K., Friedberg M.W., et al. Nudging Guideline-Concordant Antibiotic Prescribing: A Randomized Clinical Trial. JAMA Internal Medicine. 2014;174(3):425–431. doi:10.1001/jamainternmed.2013.14191
Kullgren J.T., Krupka E., Schachter A., et al. Precommitting to Choose Wisely About Low-Value Services: A Stepped Wedge Cluster Randomized Trial. BMJ Quality & Safety. 2018;27:355–364. doi:10.1136/bmjqs-2017-006699
Krijnen J.M.T., Tannenbaum D., Fox C.R. Choice Architecture 2.0: Behavioral Policy as an Implicit Social Interaction. Behavioral Science & Policy. 2017;3(2):1–18.