Donald Trump’s victory took the world by surprise this week. As with Brexit, most commentators, polls and election modelers got it wrong.
Understanding what went wrong will be the subject of much debate over the next few weeks. It also holds valuable lessons for those who use and study global health metrics.
There are many possible reasons why most polls predicted the election results incorrectly. Did many people change their minds? Did people lie about their intention to vote for Trump, as the “shy Trump” theory Marketplace reported this week suggests? Did Trump voters disproportionately decline to speak to pollsters? Another idea, discussed on the FiveThirtyEight.com Elections Podcast, was that pollsters’ assumptions about nonrespondents were incorrect.
To better understand what global health professionals should take away from this election surprise, I sat down with Theo Vos, a professor at the Institute for Health Metrics and Evaluation (IHME) and a key member of the Global Burden of Disease research team.
“For the Global Burden of Disease study, we spend lots of time understanding what systematic bias is in different data sources,” said Vos.
For example, in countries where vital registration (death certificate) data aren’t available, it is common to carry out household surveys to measure adult deaths. Surveyors ask people about siblings who have died. Scientists have shown that these surveys undercount adult deaths for two main reasons. First, people do not always remember deaths (recall bias). Second, households with many deaths tend to break apart, so the surveys are likely to miss them (selection bias).
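To make these two biases concrete, here is a toy simulation (not from the article; all parameters are illustrative, not real survey figures). Hypothetical households of four adult siblings each experience deaths at a true rate; households with deaths are more likely to dissolve and drop out of the survey frame (selection bias), and respondents forget some deaths (recall bias). The survey’s estimated death rate ends up below the true rate.

```python
import random

random.seed(0)

def simulate(n_households=100_000, death_prob=0.10,
             dissolve_per_death=0.4, recall_miss=0.15):
    """Toy sibling-survival survey.

    Each household has 4 adult siblings, each of whom dies with
    probability death_prob. Each death raises the chance the household
    dissolves and is missed entirely (selection bias); surviving
    respondents independently forget each death with probability
    recall_miss (recall bias). Returns (true_rate, estimated_rate).
    """
    true_deaths = total_siblings = 0
    surveyed_deaths = surveyed_siblings = 0

    for _ in range(n_households):
        deaths = sum(random.random() < death_prob for _ in range(4))
        true_deaths += deaths
        total_siblings += 4

        # Selection bias: more deaths -> more likely the household
        # dissolves and never appears in the survey frame.
        if random.random() < min(1.0, dissolve_per_death * deaths):
            continue

        # Recall bias: some deaths are simply not reported.
        remembered = sum(random.random() >= recall_miss
                         for _ in range(deaths))
        surveyed_deaths += remembered
        surveyed_siblings += 4

    return (true_deaths / total_siblings,
            surveyed_deaths / surveyed_siblings)

true_rate, estimated_rate = simulate()
print(f"true adult death rate:      {true_rate:.3f}")
print(f"survey-estimated death rate: {estimated_rate:.3f}")
```

With these illustrative parameters the survey estimate falls noticeably below the true rate, which is exactly the undercount that Global Burden of Disease researchers must model and correct for before using such data.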
Biased data seem to have been a major challenge for pollsters and election modelers, too. There may have been a large difference between the people who responded to the polls and the people who showed up to vote on election day.
According to Vos, another lesson learned from the election was that “you need to make the proper measurement. Polling data are far from the gold standard for what you actually want to measure.”
As election day wore on and polls began to close, Vos pointed out, “[The election modelers] got much better information, and their predictions improved.”
While the improved predictions came late, there’s an important takeaway here for global health metrics: The collection of and access to high-quality data are essential for improving estimates.
“We regularly call on researchers to use the best quality methods to collect data,” said Vos. He cited multiple examples where researchers have gotten together to establish a consensus about the best ways to measure certain health outcomes. “Multiple-Indicator Cluster Surveys (MICS) and Demographic and Health Surveys (DHS) are fantastic examples.”
MICS and DHS provide internationally comparable data on population and health across many less-developed countries.
Like predicting election outcomes, measuring the world’s health is a complex task and a great responsibility. To improve health, researchers have to get the estimates right. The international community must continue to raise the bar for scientific excellence in health metrics and advocate for better data collection.
David Phillips from the Institute for Health Metrics and Evaluation contributed to this article.