Research Brief

How-To Guide for Happiness Surveys, Which Increasingly Drive Public Policy

Advice for researchers aims to improve both data collection and its interpretation

Economists are asking if we, the people, are happy with our lives. Unfortunately, they don’t understand all of us when we answer.

Researchers correctly gauge the sentiment of many people by looking at their answers on happiness surveys. But they unwittingly misinterpret the answers of quite a few other respondents to the same questions. That means the conclusions they draw don't necessarily reflect reality.


While this might seem like a niche issue for research communities, the possible consequences of these errors range from mildly entertaining to alarming. Late-night TV hosts, take note: Finland may not be the happiest country on earth, despite the prestigious World Happiness Report declaring it so year after year. More seriously, New Zealand may have steered too much money toward mental health and not enough toward education when it incorporated findings from happiness surveys into government spending priorities. Other nations on the path toward similar happiness-based policies may get it wrong, too.

Overestimating Happiness?

UCLA Anderson’s Daniel Benjamin and his co-authors have published multiple papers describing credibility-killing issues that sometimes arise when researchers use self-reported well-being (happiness surveys) to measure collective desires. And they have analyzed and tweaked hundreds of survey questions in attempts to fix the problems.

Their cautionary message — they heartily encourage survey usage but point out a whole lot of red flags — is a bit of a wet blanket on a powerful international movement. Around the world, governments want to incorporate more happiness data as criteria for policy decisions, such as whether to aggressively lower unemployment or invest more money in health efforts. Benjamin’s team supports these efforts by working with policymakers to measure national happiness but warns that the field still needs much more research to make the data consistently reliable.

A new paper by Benjamin, Gordon College’s Kristen Cooper, Cornell’s Ori Heffetz and the University of Colorado’s Miles Kimball pulls together practical advice, uncovered in their past studies, for improving happiness data. Aimed at the designers who write survey questions, as well as at the researchers and policymakers who analyze the answers, the study gives concrete suggestions for avoiding those red-flag issues.

The core problem with happiness data, Benjamin recounts in a phone interview, is widespread inconsistency in how people interpret the survey questions. 

Survey Answers and Assumptions

Consider this popular survey question: “Taking all things together, how happy would you say you are (on a scale of 1-10)?” Does “all things together” mean my entire lifetime or all the things affecting me now or what I’m worried about for the future? What if I’m normally very happy, but my kid’s temporary problem has me seriously stressed out? And is my 7 the same level of happiness as everyone else’s? Researchers and their subjects offer different answers to those questions with worrisome frequency, according to studies by Benjamin and co-authors. (Benjamin’s previous work offers a detailed explanation of those studies and more examples of question confusion.)

The new study walks through assumptions researchers make about answers captured in the surveys, as well as evidence of why they are problematic — including showing, in some cases, how a researcher’s conclusions might be reversed by making the wrong assumption.

The big-picture advice for surveyors and researchers, Benjamin says, is to think about the assumptions that underlie how the answers to any given survey question are interpreted. Then consider what it means if survey takers don’t share those assumptions.

The more specific suggestions range from the straightforward — request and incorporate paradata, data about the process by which the survey responses were collected (which survey centers often withhold), or add calibration questions that measure how people use response scales — to the highly technical. They are neatly divided and summarized, with different options for those generating the data (the surveyors writing the questions and collecting answers) and for researchers and policymakers working with datasets they did not create.

The paper was created at the invitation of the Annual Review of Economics, which, unlike most peer-reviewed journals, publishes summaries of research in a field rather than original research.


About the Research

Benjamin, D.J., Cooper, K., Heffetz, O., & Kimball, M. (2023). From Happiness Data to Economic Conclusions. Annual Review of Economics.
