Not the people who see coronavirus as an unpredictable threat
When history books recount the worldwide coronavirus crisis, they will praise some leaders for taking actions that limited health and economic damage. Others will be derided for their ineffective, insufficient or counterproductive responses.
Right now, however, we don’t know who these heroes and villains will be. Opinions and today’s statistics aside, it’s too early to identify for certain the most damage-limiting responses. We don’t know yet, for example, whether populations ultimately suffer less if their governors and presidents enforce strict lockdowns now, in the next flu season or the following year. We cannot yet quantify the overall effects of any virus management approach on our health, our jobs and our wealth.
So, what do we think of our leaders as coronavirus, and official responses, continue to play out? Do we blame them for failing to end the worst health and economic crisis in modern times, or see them as doing the best anyone could in the face of a random, capricious threat?
The answer may depend on how we subconsciously categorize the uncertainty that leaders face as the crisis unfolds, according to early results of coronavirus experiments out of the Fox Uncertainty Lab at UCLA Anderson. Are the most effective steps for mitigating virus-related suffering unknowable because the spread of the virus follows inherently random patterns, or are they discoverable because the situation is fundamentally predictable?
“The more knowable the uncertainty, the more credit leaders deserve for decisions that lead to good outcomes and the more blame leaders deserve for decisions that lead to bad outcomes,” lead researcher Craig R. Fox explained in an email. “The more random the uncertainty, the more observers chalk up outcomes to good or bad luck.”
If you believe, for example, that lockdown policies based on data from extensive virus testing and contact tracing could reveal the best responses, you’ll likely appreciate leaders who take those approaches and blame those who don’t, the findings suggest. If you think that no amount of available data or expertise could moderate the actual levels of sickness, death, job losses and economic depression, you might be less judgmental of leaders whose decisions ultimately prove wrong.
The coronavirus experiment is part of a broader research project underway by Fox, University of Southern California’s Gülden Ülkümen, University of Utah’s David Tannenbaum and INSEAD’s Dan Walters. The researchers hypothesize that people give credit or blame to decision makers faced with epistemic uncertainty — uncertainty that can be reduced with information — but generally avoid strong judgments when the situation involves aleatory uncertainty. Aleatory uncertainty implies an outcome similar to a roll of dice; you can figure the probability of rolling two sixes, but you can’t improve the odds of getting that result.
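The dice example is easy to check with a little arithmetic. This short Python sketch (illustrative only, not part of the study) computes the exact probability of rolling two sixes and confirms it with a quick simulation — knowing the probability precisely still does nothing to improve the odds, which is what makes the uncertainty aleatory:

```python
import random

# Exact probability of rolling two sixes with fair dice: 1/36
p_exact = (1 / 6) * (1 / 6)

# Monte Carlo check: simulate many rolls of a pair of dice.
# No amount of extra data changes the odds; it only estimates them.
random.seed(0)
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if random.randint(1, 6) == 6 and random.randint(1, 6) == 6
)
p_estimated = hits / trials  # converges toward 1/36 ≈ 0.0278
```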
In one experiment, the researchers asked study participants to read and summarize news stories designed to skew their perceptions of uncertainty. Half of the subjects read about an epidemiologist focused on contact tracing for reducing virus spread, a story meant to suggest that the coronavirus uncertainty was epistemic and could be reduced with research. The other half read about an epidemiologist working with statistical models, which are designed to measure the probability of certain randomly occurring, or aleatory, events.
All participants were then shown coronavirus death tolls in Turkey. Some participants saw Turkey’s statistics alongside the death toll in Portugal, a country that at the time of the experiment had more deaths than Turkey; others saw the death toll in Chile, which had fewer. Participants then rated the performance of Turkey’s Minister of Health, the person in charge of coronavirus mitigation efforts.
Participants who read about contact tracing generally praised Turkey’s minister when the comparison was with Portugal and blamed the Turkish official for the worse-than-Chile results. Participants who read about statistical modeling, by contrast, were likelier to attribute the same good or bad outcomes under the Turkish minister to random luck.
The results are in line with findings from another experiment in the study, which was unrelated to coronavirus. Participants were screened for their knowledge of five games — chess, poker, darts, bowling and Scrabble — and selected to include a wide spectrum of opinions on how much knowable uncertainty (in effect, skill) versus randomness (luck) is necessary for winning each.
Participants were then cast as hypothetical managers and asked about their preferred compensation schemes for analysts, each specializing in one of the games, who forecast winners.
When managers saw the uncertainty of a game’s outcome as more knowable, they preferred a greater portion of compensation to come from performance-based bonuses. When they viewed the uncertainty as more random, they preferred longer intervals between performance evaluations. (A longer interval aggregates more results, giving good and bad luck more opportunities to cancel each other out and making differences in skill among the analysts easier to see.)
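The intuition that luck cancels out over longer evaluation windows can be checked with a small simulation. This sketch (hypothetical, not from the study) compares two imaginary analysts whose true hit rates are 60% and 50%, and asks how often the genuinely better one also looks better after a few forecasts versus after many:

```python
import random

random.seed(1)

def hit_rate(p, n):
    """Observed success rate of an analyst with true skill p over n forecasts."""
    return sum(random.random() < p for _ in range(n)) / n

def better_analyst_wins(n, trials=2000):
    """Fraction of trials in which the truly better analyst (60% skill)
    also shows the higher observed hit rate than a 50%-skill analyst
    after n forecasts."""
    wins = sum(hit_rate(0.6, n) > hit_rate(0.5, n) for _ in range(trials))
    return wins / trials

few = better_analyst_wins(5)     # short window: luck dominates
many = better_analyst_wins(100)  # long window: skill shows through
```

Over a 5-forecast window the better analyst often looks no better than a coin flip; over 100 forecasts the skill gap becomes visible most of the time — the same logic behind the managers’ preference for longer intervals when outcomes seem random.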
The researchers expect to produce a working paper with their results in the second half of 2020.
Craig R. Fox, Harold Williams Chair and Professor of Management
About the Research
Fox, C. (in progress) Uncertainty.