Guest post by Ted Caplow, director of CappSci.org (formerly Whole New World Foundation)
My son spent the first weeks of his life in intensive care, attended by a team of neonatologists. My wife and I could do little more than watch, until one day the doctors asked our permission to proceed with an elective procedure. My immediate impulse was to track down more information about the risks involved so I could maximize my son’s chance of survival.
In developed countries, immersed in the information age, we are accustomed to supporting our decisions with data. Unfortunately, in the rest of the world, the struggle to save children’s lives takes place without the benefit of adequate information. Despite great reductions in child mortality over the past 30 years, more than 18,000 children under the age of five still die every day.
International authorities classify these deaths as “preventable,” meaning that modern medical knowledge and technology could save these children, but it fails to reach them. More support for studying the true impact of life-saving interventions could help bridge this gap.
A 2014 Gates Foundation report estimates that over the past three decades the international community has donated $5,000 in health care interventions for each child's life saved in the developing world. At that rate, it will take $33 billion in additional global spending to save the 6.6 million children under five who still die each year. This large sum will be difficult to raise, so directing funding to the most cost-effective interventions is a priority.
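The arithmetic behind that $33 billion figure is a simple back-of-envelope extrapolation, which can be checked directly (the figures come from the text above; holding the historical cost per life constant is the simplifying assumption, not an established fact):

```python
# Back-of-envelope check of the figures cited above. Assumes the
# historical $5,000-per-life cost holds for future spending, which is
# a simplification, not a claim from the report itself.

cost_per_life = 5_000          # USD donated per child's life saved (historical estimate)
annual_deaths = 6_600_000      # children under five still lost each year

# Spending implied if every remaining death could be prevented
# at the same historical cost per life:
needed = cost_per_life * annual_deaths

print(f"~${needed / 1e9:.0f} billion")  # → ~$33 billion
```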
Frustratingly, despite the enormous resources dedicated to combating child mortality, there remains a widespread lack of data on the efficacy of individual intervention programs. There are several reasons for this situation.
First, funding is scarce for scientific field studies that support the comparison of child mortality prevention efforts. Donors expect to know the impacts of their gifts, but they also want to save as many lives as possible. These expectations create a Catch-22: most donors prefer to directly fund life-saving interventions, and are reluctant to fund research on their efficacy.
Second, determining how many children’s lives can be saved with a given amount of money is difficult. Most child deaths occur in the developing world, a challenging place to conduct population surveys. Official records are of uncertain quality; patients are often treated without registration and without follow-up; people are culturally unaccustomed to divulging their medical history to strangers; trained epidemiologists are rare; and communications and transportation networks are undeveloped.
Third, the accuracy of impact estimates is further muddled by an incentive structure in the aid sector that biases almost every actor toward the reporting of higher numbers. Donors want to hear that they have saved a large number of lives; fundraisers wish to report as much success as possible; and healthcare practitioners must compete for funding according to the size of the impact that they report. Everyone acts with good intentions, yet the relationship between investment and impact can become clouded.
Strong evidence of this inflationary effect appeared in the applicant pool for the $1 million Caplow Children’s Prize, developed last year in partnership with global health experts from Harvard University, Emory University, the University of Miami, and various NGOs. Applicants were asked to propose a life-saving intervention for children, calculate the cost per life saved, and justify their estimates with data.
Over 550 applications from 70 countries were received, including some from the world’s largest NGOs. The approaches to saving children were diverse, fascinating, and exciting, but in most cases evidence of impact was limited to anecdotal accounts, and scientific proof of program effectiveness was as rare as hen’s teeth. The strongest 50 applicants moved forward to the second round, but only 40% of this group offered data to support their estimates of how many lives they could save. Careful analysis revealed that the majority of these estimates contained significant errors, and that these errors always inflated program impact, sometimes by a factor of ten.
The prize was ultimately awarded to Dr. Anita Zaidi of Pakistan, who approaches her work in the slums of Karachi with a scientist's worldview. Zaidi measures everything before, during, and after her treatment of expectant mothers and newborns with medicine, nutrition, and modern childbirth facilities. The number of lives that she plans to save is modest compared to the typical claims in our applicant pool, but more credible, and her insistence that data on the impact of her work be collected by an independent third party was unique.
This experience inspired the recent launch of the Data for Life Prize, which will award $50,000 each to several organizations to scientifically demonstrate how many children under age five they can save in a year. We hope that the results of these studies will draw more donors into the struggle against child mortality. Accurate and impartial information improves the efficiency of medical care, whether it is delivered halfway around the world or at the hospital down the street.
In the case of my own life-and-death decision, my sister quickly connected me via email with a doctor friend, unaffiliated with our hospital, who told us the proposed procedure was statistically very safe. That intervention worked, and three weeks later my son came home to join his triplet brother and sister. Today, they are all healthy two-year-olds. As a global family, we owe the same swift application of scientific knowledge to the 17,999 children who were not so lucky that day.
An engineer living in Miami, Caplow is a director of CappSci (for Caplow Applied Science) and the creator of The Children’s Prize and the Data for Life Prize.