When an unloved cause or political adversary is attached to a nudge, the method itself becomes suspect
There’s a good possibility that your government has, with or without your knowledge, nudged you. Perhaps you thought you were making a choice based entirely on unfiltered facts, but really, Big Brother was subtly pushing you toward the decision he deemed best. The nudge probably involved cunningly crafted wording in the instructions, or a calculated layout of the choices, or some other seemingly minor detail that made his choice a bit more alluring.
How do you feel about that?
Now suppose that same nudge achieved something you wanted. Perhaps it led you to choose a savings plan that landed you loads of money at retirement with minimal sacrifice. Or maybe the nudge kept a lid on your property taxes by convincing all those deadbeat homeowners to pony up what they owed.
Did that improve your opinion of nudging?
Public perception of nudging is a serious concern for institutions that engage in this type of intervention. Governments and employers typically use nudges to help people make better decisions for themselves or their communities by, for example, taking full advantage of education or energy saving programs. Governments may or may not be up front about the intentions of these tactics.
But a study in Nature Human Behaviour suggests that people have difficulty separating their feelings about nudges as policy tools from their feelings about the policy objectives those nudges serve. They become much more accepting of nudging in general when a nudge is illustrated with a policy objective they support. In the study, by the University of Utah's David Tannenbaum, UCLA Anderson's Craig Fox and Harvard University's Todd Rogers, objections to the practice of nudging fell off dramatically when the nudge was illustrated with a public policy goal, or linked to a political sponsor, that the individual supported.
Professional policymakers exhibited biases similar to the groups of average citizens in the studies. All of them tended to view a nudge, such as the strategic use of defaults from which citizens must opt out, as more ethical when it forwarded their own political agendas but less ethical when the same tactic helped the opposition.
The findings add evidence to a popular hypothesis in the field: People are generally fine with being nudged. It’s the goals of the nudge that can be problematic.
Nudges, by definition, are subtle suggestions that don't take away an individual's free will to choose. Unlike mandates and fines, nudges carry no serious penalties for ignoring the messages they send. Prescription refill reminders, GPS instructions and menu calorie counts are examples of nudges. Offering default options, planning prompts, highlighting pros or cons and providing public disclosures also become nudges when designed with the intent to influence choices.
Generally, there has been widespread support for these and other types of nudges when used by governments in developed countries, several recent studies suggest. But individuals sometimes balk at being nudged, and the resulting backlash can make a persuasion effort counterproductive. Perhaps the nudge seems condescending, a suggestion that the decision maker isn't smart enough to come to the best decision alone. Or maybe it's viewed as an unwelcome intrusion into a personal decision. Human nature suggests people don't like feeling manipulated, but research hasn't pegged exactly what conditions make this soft paternalism acceptable.
To better understand attitudes toward nudging generally, Tannenbaum, Fox and Rogers tapped into individual political biases. Participants in four different experiments were asked to read a short description of one or more types of nudge. These were common, proven persuasion tactics, such as making the desired choice a default option, or highlighting the negative effects of less desired choices.
In three of the experiments, each tactic was randomly paired with an example of how the nudge could be applied to advance a particularly liberal, or particularly conservative, policy. Some participants, for example, read about how default nudges could increase enrollment in supplemental nutrition programs. Others read how the same nudge could be used to increase uptake of tax breaks on capital gains. The control groups read politically neutral examples.
Another experiment described the Pension Protection Act, a law that promoted an auto-enrollment nudge for 401(k) plans. The description of the act was randomly paired with information attributing it to either the Obama or the Bush administration. (In reality, both supported it.)
The participants were then asked to put aside the policy examples and endorsements. Rate the nudge on a 1–5 scale, they were told, "as a general approach to public policy." (The phrase was also emphasized in the experiments.) Afterward, each was asked to rate their own political leanings, ranging from very liberal to very conservative, as well as their opinion of the specific policy goal they received as an example.
In the control groups with neutral examples, liberals and conservatives were similarly accepting of the policy nudges, according to the results. But when politically charged support was added (either via example or endorsement), acceptance of the nudge predictably skewed. Participants found the nudges more ethical when illustrated by examples or endorsers in accord with their politics, and less ethical when the illustrations were at odds with their politics. Participants' political leanings were roughly three times as influential as their libertarian sensibilities (that is, their preferences concerning how much government interference is permissible) in predicting their ethical responses to nudges.
Avoid Bad Outcomes, Whatever Those Are
Experts in law, economics and psychology have devised informal guidelines for governments and employers to ensure that their nudges are proper from a professional ethics standpoint. These works typically suggest attention to preserving freedom of choice and preventing coercion, often by explaining exactly how the designers intend their default option or other nudge to work.
An understanding of how lay people judge nudges isn’t so well developed. Seemingly harmless acts of nudging still occasionally surprise their architects with bad outcomes. Perhaps explaining the design and purpose of a nudge makes it seem offensively paternalistic or intrusive.
Harvard Law School’s Cass Sunstein, who along with 2017 Nobel Laureate and Chicago Booth Professor Richard Thaler popularized modern-day nudge techniques, has devoted numerous recent research studies to questions of nudge acceptability. His work investigates conditions that might make a nudge attempt more or less offensive to the targets, with possibilities ranging from the presentation of information as facts or warnings, to the size of the group helped or harmed by the tactic.
Generally, he finds widespread support for nudges that promote the common good, such as calorie labels on foods to promote healthier diets, or warning labels on cigarettes to reduce disease. In many cases, he concludes, people are quite accepting of covert nudges that they may not consciously register.
But support evaporates when people suspect that citizens could unwittingly end up aiding in goals that go against their own values, Sunstein finds. No one wants to be tricked into helping a cause they oppose.
Tannenbaum, Fox and Rogers note that criticism of recent nudge attempts by the U.S. and U.K. governments came from opposite political persuasions. In the U.K., where a Conservative Party prime minister adopted nudging to improve tax collection, liberals likened the practice to mind control. In the U.S., where the Obama administration tried nudges to reduce energy consumption and raise retirement savings, conservatives also tagged the practice as mind control, as well as propaganda.
People appear to conflate their feelings about the method of persuasion with their feelings about the goals these nudges can push forward. They’re not so much opposed to nudging, the research suggests, as they are against outcomes they dislike. Unfortunately for nudge designers, a bad outcome for one individual may be a good outcome for another.
About the Research
Tannenbaum, D., Fox, C. R., & Rogers, T. (2017). On the misplaced politics of behavioural policy interventions. Nature Human Behaviour, 1, 0130.
Sunstein, C. R. (2016). Do people like nudges?