Research Brief

Experts Struggle to Accurately Forecast Societal Change

On COVID-19’s impacts, social scientists’ predictions weren’t much better than those of laypeople

When faced with questions about the future path of consequential societal issues such as racial bias and ideological preferences, our seemingly logical habit is to seek out experts for insights on how things might evolve. 

Research published in American Psychologist confirms that at the onset of the COVID-19 pandemic, regular people intuitively expected public health and psychological scientists to deliver the most accurate predictions of how the pandemic might impact societal norms and mental well-being. These two classes of experts were also seen as the most desirable drivers of public policy to address mental and social well-being. 

Alas, the experts did not nail the forecasts. 


A series of studies in that research sought predictions from social and behavioral scientists at the onset of the pandemic on a variety of social and psychological issues. As a control group of sorts, laypeople (nonexperts) were also surveyed for their predictions on the same topics. The researchers then analyzed how all the predictions panned out six months and one year later. 

The experts were no more adept at forecasting than regular people with no expertise, and they often did worse.

Granted, such one-shot “give me your prediction” exercises tend to induce even experts to rely on intuition more than hard data and don’t necessarily prompt thoughtful (and time-consuming) analysis to arrive at the prediction.

Yet, when a bunch of social and behavioral scientists, including UCLA Anderson’s Eugene Caruso, agreed to be lab rats for a more formal study of their forecasting chops, the results were not any better.

As reported in Nature Human Behaviour, in two forecasting tournaments designed to encourage more data-driven analysis and a commitment to make monthly forecasts for a full year, the predictions of expert academics turned out no more accurate than forecasts gleaned from the general public. Nor did the experts, on average, deliver more accurate forecasts than simple statistical approaches such as predictions derived from randomly organizing data or relying on historical averages of pertinent data.

“A sobering picture” was how ringleaders of the Forecasting Collaborative, a consortium of social scientists that launched this reflective analysis, summed up the results. 

“Despite the causal nature of most theories in the social sciences, and the fields’ emphasis on prediction in controlled settings, social scientists’ forecasts were generally not very good,” they wrote in a post for The Conversation.

Expert Knowledge Is Not Easily Transferable to Future Casting

Between May 2020 and April 2021, the Forecasting Collaborative ran two tournaments open to academics in behavioral, social and data science who agreed to make monthly forecasts for a full year. The focus was on predicting the impact the COVID-19 pandemic might have on 12 broad social issues, including implicit and explicit bias toward Asian Americans and African Americans, political polarization, political party affiliation, life satisfaction and personal well-being. 

The academics could work solo or collaborate with other academics, and each team chose how many topics it wanted to forecast. 

A key element of this research was that participants were provided three years of recent data relating to the topics they chose to tackle. All teams were given the opportunity to revise their forecast six months in, based on new data that was provided by the tournament organizers.

A year later those predictions were compared with how things actually shook out. The expert forecasts were stacked against an average of three “simple statistical models” used as stand-ins for nonexperts: a forecast based on a random analysis of the data, a forecast based on historical averaging and a forecast derived from a basic linear regression.
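The brief doesn’t spell out exactly how those three benchmarks were specified, but the general idea can be sketched in a few lines of Python. Everything below — the function names, the synthetic data and the use of mean absolute error as the score — is an illustrative assumption, not the study’s actual code or methodology.

```python
import numpy as np

def historical_mean_forecast(history, horizon):
    """Naive benchmark: project the historical average forward."""
    return np.full(horizon, np.mean(history))

def random_resample_forecast(history, horizon, seed=0):
    """Naive benchmark: each future month is a random draw from past values."""
    rng = np.random.default_rng(seed)
    return rng.choice(history, size=horizon, replace=True)

def linear_trend_forecast(history, horizon):
    """Naive benchmark: fit a straight line to the past and extrapolate it."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    future_t = np.arange(len(history), len(history) + horizon)
    return intercept + slope * future_t

def mean_absolute_error(forecast, observed):
    """Score a forecast against what actually happened."""
    return np.mean(np.abs(np.asarray(forecast) - np.asarray(observed)))

# Hypothetical example: 36 months of past data, forecast the next 12 months.
history = np.random.default_rng(1).normal(50, 5, size=36)
observed = np.random.default_rng(2).normal(50, 5, size=12)

for name, fn in [("historical mean", historical_mean_forecast),
                 ("random resample", random_resample_forecast),
                 ("linear trend", linear_trend_forecast)]:
    print(name, round(mean_absolute_error(fn(history, 12), observed), 2))
```

An expert team’s forecast “beats” such a benchmark only if its error score comes in lower; the tournaments’ sobering finding was how rarely that happened.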

In the first tournament, the 359 forecasts made by 86 teams were on average worse than the average of the “simple” forecasts in nine of the 12 topics. 

For the initial tournament the researchers also collected forecasts from more than 800 laypeople who were provided the same data as the experts. Expert forecasts were no better than the predictions of the nonexpert general public, and often worse.

Six months later, teams were given the opportunity to revise their forecasts, or tackle new ones, using updated data. In this second tournament, nearly 550 forecasts from 120 teams lagged the average of the nonexpert predictions in eight of the 12 topics.

Outside of gender bias and positive affect — the technical term for whether you navigate the world with an upbeat mood — the general takeaway is that the academics struggled to outperform inexpert benchmarks. 

The graphic below shows the ranking of topics by how hard it was for the experts to forecast. The higher the score, the worse the experts did. The left side is the first tournament, and the right side is the second tournament. Though the addition of new data to analyze (Tournament 2) tended to improve expert forecasts, the general trends persisted: Gender bias and positive affect were areas where experts did better. Implicit racism toward African Americans and Republican support proved consistently harder for experts to predict. 

Moreover, the teams that did best in Tournament 1 didn’t typically outperform in Tournament 2. 

One potential reason for the generally underwhelming expert forecasting among the participating teams is that statistical forecasting and modeling are typically not central parts of a social scientist’s training. Nor is forecasting a central practice in the field; around 80% of the participants in this exercise reported no prior forecasting experience. And, of course, forecasting is hard, even in fields in which it is more commonplace. (See: economists predicting recessions.) 

That said, there were interesting nuances in the forecasting outcomes that point to what works better. 

  • Stay in Your Lane. Teams that had at least one member who was a subject matter expert in a given topic did better. Sounds obvious, but other Forecasting Collaborative research found that social scientists were often all too eager to offer prognostications in the media outside their area of expertise. Moreover, expertise is not in the eye of the beholder. “Notably, we found that publication track record on a topic, rather than subjective confidence in domain expertise or confidence in the forecast, contributed to greater accuracy,” the collaborative’s lead researchers dryly noted.

    At the same time, teams with members from multiple disciplines did better, suggesting that collaboration centered on one or more subject matter experts can benefit from the intellectual rigor and fresh perspective of academics specializing in other social science fields.
  • Data Rules. Teams explained how they made their forecasts: based on theory/intuition, based on data analysis or some hybrid approach. Teams that relied on data were more successful. Among the top five teams in each of the 12 topics studied, 62% were data-driven, 30% relied on theory/intuition and 8% took the hybrid approach. 
  • Keep It Simple. Teams that relied on simple algorithms and data analysis did better than those that piled on the complexity. 

That so many social scientists were willing to engage in this exercise is encouraging; self-reflection in the service of learning is often an underutilized tool. 

At a base level the findings illustrate that expertise in explaining the past — the heart of social science — does not easily translate into accurate forecasting. 

Moreover, the researchers point out that the work of social scientists is focused on individuals and groups, and “findings from such work may not scale up when predicting phenomena on the scale of entire societies.” That suggests social scientists might want to value humility when compelled to make a societal-level prediction. And those of us who turn to them for insight should recognize that forecasting is a vexing art, not a science, especially among experts who lack formal forecasting skills.

Featured Faculty

  • Eugene Caruso

    Associate Professor of Management and Organizations and Behavioral Decision Making

About the Research

The Forecasting Collaborative. Insights into the accuracy of social scientists’ forecasts of societal change. Nat Hum Behav 7, 484–501 (2023). https://doi.org/10.1038/s41562-022-01517-1
