Child mortality is widely recognized as an indicator of a community’s overall health, with reductions in child deaths often cited as evidence of the impact of a particular intervention.
Two high-profile events in Washington, DC, and Johannesburg, South Africa, recently celebrated the progress made worldwide in reducing maternal and child deaths over the past twenty years and called for greater international investment to sustain and build on that success. Those calls rest on the assumption that we already know which interventions are succeeding under real-life conditions and which ones are most effective.
Yet linking cause and effect, even with a global health gold standard like child mortality, is not always a simple matter.
At the event Acting on the call: Ending preventable child and maternal deaths in Washington, DC, the U.S. Agency for International Development (USAID) announced it would focus its efforts on 24 countries in which 70% of maternal and child deaths and half of the unfulfilled need for family planning occur.
A report released at the event tracks global disease and financing trends for maternal and child health, identifies bottlenecks to achieving these goals, and recommends evidence-based interventions. The screen grab below, taken from the report, shows the growth in health funding for maternal, newborn, and child health in these 24 priority countries.
Note – DAH: development assistance for health, MNCH: maternal, newborn, and child health
Careful monitoring of the scale-up of these interventions in the 24 countries selected by USAID will be important for ensuring that they have the intended impact. It is equally important to examine whether they benefit all members of the population in need of them, and whether their impact diminishes over time.
A recent evaluation of Zambia’s malaria interventions by the Institute for Health Metrics and Evaluation (IHME) and the University of Zambia, Assessing Impact, Improving Health: Progress in Child Health Across Districts in Zambia, sought to measure the impact of malaria interventions, such as insecticide-treated nets (ITNs) and indoor residual spraying (IRS), on child mortality, while accounting for other key interventions that could affect child mortality.
The evaluation was commissioned years after Zambia’s initial scale-up of malaria control interventions took place. When the researchers went back to trace the impact of this scale-up, they discovered that the use and availability of many other life-saving interventions also rapidly increased at the same time. This made it challenging to isolate the impact of the malaria control interventions alone.
The following graph from the report illustrates changes in Zambia between 1990 and 2010 in coverage of malaria control, exclusive breastfeeding, and the pentavalent vaccine (which protects against diphtheria, pertussis, tetanus, hepatitis B, and Haemophilus influenzae type b), as well as the availability of prevention of mother-to-child transmission (PMTCT) of HIV at health facilities.
The IHME and University of Zambia researchers concluded that all of these interventions together accelerated declines in under-5 mortality by an extra 1% per year (see graph below).
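To get a feel for what an extra 1% annual decline means over time, the quick calculation below compares two compounding trajectories. The starting mortality rate and the baseline rate of decline are hypothetical, chosen only to illustrate the arithmetic; neither figure comes from the report.

```python
# Illustrative only: compare a hypothetical baseline annual decline in
# under-5 mortality with one accelerated by an extra 1% per year, the
# acceleration the Zambia evaluation attributed to the interventions
# combined. Starting rate and baseline decline are made-up numbers.

start_rate = 150.0        # hypothetical deaths per 1,000 live births
baseline_decline = 0.02   # hypothetical 2% decline per year
extra = 0.01              # the evaluation's extra 1% per year
years = 20

baseline = start_rate * (1 - baseline_decline) ** years
accelerated = start_rate * (1 - baseline_decline - extra) ** years

print(round(baseline, 1))     # rate after 20 years without acceleration
print(round(accelerated, 1))  # rate after 20 years with the extra 1%
```

Because the declines compound, a seemingly small extra percentage point each year adds up to a substantially lower mortality rate after two decades.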
They also concluded that it was “statistically impossible” to accomplish their original goal of measuring the impact of malaria interventions alone.
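A minimal sketch of why attribution can become impossible: when two interventions scale up in lockstep, a regression has no way to assign the observed mortality decline to one rather than the other. All numbers below are fabricated for illustration and do not come from the evaluation.

```python
# Sketch of perfect collinearity between two intervention scale-ups.
# Both hypothetical coverage series rise identically over ten years,
# so the design matrix is rank-deficient and no unique coefficient
# can be estimated for either intervention on its own.
import numpy as np

years = np.arange(10)
itn_coverage = 0.08 * years       # hypothetical bed-net scale-up
vaccine_coverage = 0.08 * years   # hypothetical vaccine scale-up, identical

# Two predictor columns, but only one independent direction of variation
X = np.column_stack([itn_coverage, vaccine_coverage])

print(np.linalg.matrix_rank(X))   # 1, not 2: the effects are inseparable
```

In practice the scale-ups are rarely perfectly identical, but the closer they track each other, the less the data can say about each intervention's individual contribution.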
“The project demonstrates how retrospective evaluations using secondary data can be really challenging and sometimes inconclusive,” said K. Ellicott Colson, a co-author of the evaluation. “The more that program directors, policymakers, and funders can design and implement monitoring and evaluation plans before programs begin, the better equipped we will be to understand them, and the more we will know about where to put our money.”
Putting monitoring and evaluation mechanisms in place before interventions are rolled out is also crucial for testing the boundaries of our existing knowledge about their effectiveness. That means measuring health outcomes and other relevant indicators before the intervention is implemented (also known as "assessing the baseline"), then continuing to monitor whether interventions are reaching people and how well they are working.
Nancy Fullman, the lead author of the Zambia report and an IHME Policy Translation Specialist focused on malaria interventions and evaluations, points to the latest developments in the field of malaria research as an example:
“Unfortunately, we can’t assume that an intervention that worked well many years ago still performs in the same way today – you have to keep assessing effectiveness in real time and under routine conditions,” said Fullman.
“In the malaria field, we see more reports of insecticide resistance and changing mosquito behaviors, where mosquitos that carry malaria are biting people earlier in the day rather than at night, when they’re sleeping under bed-nets,” she said. “A lot of malaria interventions were designed based on an understanding of Anopheles mosquito behavioral patterns – biting at night, resting on walls after a blood meal – that may not be the same today.”
While it’s encouraging to see a continued push to achieve the unfinished agenda of Millennium Development Goals 4 and 5 to reduce child and maternal mortality, it’s important to take the time to make sure governments and donors actually invest in programs that are helping achieve these goals. To make sure we’re getting it right, carefully planned evaluations are essential.
Katie Leach-Kemon, a weekly contributor of global health visual information posts for Humanosphere, is a policy translation specialist at the University of Washington's Institute for Health Metrics and Evaluation.