How many cans of beer — or six packs or cases — will your party guests consume?
Uncertainty is a kind of risk, and logic says our assessment of risk shouldn’t change based on the yardstick we use. A 3-foot-deep hole is exactly as risky as a 36-inch one, right?
But our minds struggle to adapt when the metric changes. Research published in Cognition advances our understanding of such peculiarities by studying how the metric we reason in influences how uncertain we feel when predicting future events.
The U.S. Securities and Exchange Commission’s David Zimmerman, UCLA Anderson’s Stephen A. Spiller, University of Colorado’s Nicholas Reinholtz and University of Toronto’s Sam J. Maglio investigate how the way we measure something changes how uncertain we feel about it.
These aren’t just academic curiosities. A landmark study found that focusing on miles per gallon (rather than gallons per distance) causes people to systematically misunderstand fuel efficiency improvements. People tend to assume that bigger MPG increases mean bigger fuel savings, when the opposite is often true: improving from a very low MPG saves far more gas than the same numerical increase at higher MPG levels. (An improvement from 5 MPG to 10 MPG saves 10 gallons on a 100-mile trip; an improvement from 20 MPG to 25 MPG saves only a single gallon on the same trip.) Similarly, in health contexts, “1 in 100” can sound much scarier than “1%” even though the two are identical.
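The MPG arithmetic is easy to verify for yourself; here is a minimal sketch in Python (the trip length and efficiency figures are just the ones from the example):

```python
# Fuel consumed over a fixed-length trip at a given efficiency.
def gallons_used(mpg, trip_miles=100):
    """Gallons of fuel burned over trip_miles at mpg miles per gallon."""
    return trip_miles / mpg

# The same +5 MPG improvement saves very different amounts of fuel
# depending on where you start:
savings_low  = gallons_used(5)  - gallons_used(10)   # 20 - 10 = 10 gallons
savings_high = gallons_used(20) - gallons_used(25)   # 5 - 4   = 1 gallon
print(savings_low, savings_high)  # 10.0 1.0
```

The intuition gap comes from MPG being an inverse measure of consumption: equal steps in MPG are not equal steps in gallons burned.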
Zimmerman, Spiller, Reinholtz and Maglio, across six online experiments with over 1,600 participants, observe that our brains don’t have a stable, internal risk measurement. Instead, as the authors note, people “seem to reason about uncertainty in the metric that is provided to them.” The researchers show three ways that measurement units distort our perception of uncertainty:
- Bigger numbers feel riskier.
- We don’t adjust enough for unit size.
- We start with the assumption that things are symmetric.
In one experiment, 167 participants recruited from Amazon’s Mechanical Turk platform estimated uncertainty about a fortune teller’s weekly earnings. Half estimated total revenue (money coming in), while the other half estimated profit (revenue minus the $1,500 booth rental cost), giving their best guess and creating a range within which they thought 80% of real outcomes would fall.
This can be described as an “additive constant” relationship — profit always equals revenue minus $1,500, no matter what happens. If revenue might range anywhere from $2,000 to $3,000, then profit must range from $500 to $1,500. The width of uncertainty should be identical: $1,000 in both cases.
But our brains don’t see it that way. Participants consistently gave much wider prediction ranges when thinking about revenue than about profit — the bigger numbers simply felt more uncertain. It’s as if our minds conflate “bigger numbers” with “more risk,” even when the underlying variability is mathematically identical.
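The additive-constant logic can be checked in a few lines; the $2,000–$3,000 revenue range below is the hypothetical interval from the example, not a figure from the study:

```python
# Subtracting a constant shifts an interval but never changes its width.
revenue_low, revenue_high = 2000, 3000   # hypothetical 80% interval for revenue
booth_rent = 1500                        # fixed cost from the scenario

profit_low  = revenue_low  - booth_rent  # 500
profit_high = revenue_high - booth_rent  # 1500

# Both intervals are exactly $1,000 wide.
assert revenue_high - revenue_low == profit_high - profit_low
```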
In another experiment, 587 participants recruited from CloudResearch’s Connect panel estimated seltzer water sales for two hypothetical stores. They were randomly assigned to estimate sales in different units (single cans, 8-packs or 24-packs). (Each, of course, could be derived as a multiple of the others.)
Here the researchers tested for “unit insensitivity” by comparing the width of the sales prediction ranges. The hypothesis was that estimates made in larger units (packs) would result in wider actual prediction ranges, once all estimates were converted to a common unit such as individual cans, because people do not sufficiently adjust their nominal interval estimates for the unit size.
Zimmerman, Spiller, Reinholtz and Maglio observed that people failed to properly scale their uncertainty when using larger units, creating meaningfully wider ranges. The larger the unit (24-packs versus 8-packs), the bigger the mental error. Our brains simply don’t adjust properly when units change.
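To see why insufficient adjustment produces wider ranges, consider what happens when the same nominal interval width is expressed in different units and then converted back to single cans. The 20-unit width below is purely illustrative, not a number from the study:

```python
# Unit insensitivity: the SAME nominal interval width implies very different
# amounts of uncertainty once everything is converted to single cans.
nominal_width = 20  # "my guess is somewhere in a 20-unit-wide range"

for cans_per_unit in (1, 8, 24):   # single cans, 8-packs, 24-packs
    width_in_cans = nominal_width * cans_per_unit
    print(f"{cans_per_unit:2d}-can units -> interval of {width_in_cans} cans")
# 1-can units  -> 20 cans; 8-packs -> 160 cans; 24-packs -> 480 cans
```

A perfectly calibrated estimator would shrink the nominal width as the unit grows (20 cans ≈ 2.5 eight-packs ≈ 0.8 twenty-four-packs); the study’s participants shrank it too little.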
Dollars per Gallon vs. Gallons per $40
Another experiment tested inverse metrics: 294 participants recruited from Amazon’s Mechanical Turk were asked to estimate the distribution of gas prices across the United States. They were given the average price and asked to provide the 10th and 90th percentiles, and participants were randomly assigned to make those estimates in one of two inversely related metrics: dollars/gallon or gallons/$40.
Here, the researchers were testing the hypothesis that people assume symmetry in the given metric, leading to ranges that are more cockeyed when transformed to the inverse metric.
When asked about gas prices in dollars per gallon, people created balanced prediction ranges around their best guess: If the average price is $1 a gallon, perhaps the 10th percentile would be 50 cents and the 90th would be $1.50.
When asked about gallons per $40, they also created symmetric ranges. But mathematically, that’s impossible. At $1 a gallon, the average is 40 gallons per $40. A price of 50 cents per gallon translates into 80 gallons per $40, while $1.50 per gallon translates into only 26.7 gallons per $40: the range extends 40 gallons above the midpoint but just 13.3 gallons below it.
When you flip numbers to their reciprocals (like going from 20 to 1/20), bigger numbers get pushed down toward zero more than smaller numbers, which warps the shape of the distribution.
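The reciprocal warp is easy to see numerically. A short sketch using the gas-price figures from the example:

```python
def gallons_for_40_dollars(price_per_gallon):
    """Convert a $/gallon price into the inverse metric: gallons per $40."""
    return 40 / price_per_gallon

center = gallons_for_40_dollars(1.00)  # 40.0 gallons
upper  = gallons_for_40_dollars(0.50)  # 80.0 gallons (cheap gas, many gallons)
lower  = gallons_for_40_dollars(1.50)  # ~26.7 gallons (pricey gas, few gallons)

# A symmetric +/- $0.50 range in $/gallon turns lopsided in gallons/$40:
print(upper - center)   # 40.0 above the center
print(center - lower)   # ~13.3 below it
```

This is why a range that is symmetric in one metric cannot, in general, be symmetric in its reciprocal.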
Escaping the Measurement Trap
Yet our brains insist on seeing a balanced outcome regardless of the metric, revealing that we don’t have a stable sense of uncertainty: we build one on the spot based on the frame we’re given.
“Humans do not represent uncertainty in an absolute manner and then translate this absolute uncertainty onto any given metric,” the authors note. We build our sense of risk as we go, and this leads to inconsistent judgments.
A first step in escaping the measurement trap is knowing it exists. When facing an important decision, it’s worth asking yourself: How is a number being framed? Is the unit of measurement making you feel more uncertainty than necessary?
Featured Faculty
- Stephen Spiller, Professor of Marketing and Behavioral Decision Making
About the Research
Zimmerman, D., Spiller, S. A., Reinholtz, N., & Maglio, S. J. (2025). When metrics matter: How reasoning in different metrics impacts judgments of uncertainty. Cognition, 265, 106277.