Data love: The risk of humanitarians acting like scientists

Mad Scientist (Wikimedia)
Warning: Reductionism can result in distorted vision, poor judgment and difficulty in operating a humanitarian project.

There’s a popular trend today among many humanitarians, aka the aid and development sector, to try to show the benefit of their projects – be it digging a well, feeding kids or improving access to basic health care – with scientific data.

That’s good in principle, if you have a well-designed study that produces meaningful data. But that can be a big if when what you are trying to test is a reduction in poverty, social and economic improvements, healthy behavior change or many of the other aims of aid and development.

It’s much easier for scientists to test a more isolated intervention, like, say, taking a pill, than it is to even figure out how best to track and attribute the potential impact of many humanitarian efforts. And it’s worth noting that the scientific community is finally acknowledging that even their most refined efforts in reductionist deduction, peer review and attribution often fail.

NY Times: Scientific Pride and Prejudice

Economist: Trouble at the Lab

Forbes: NIH Promises to Make Science Less Wrong

The mainstream scientific community likes to call this a ‘reproducibility’ problem, saying the overall reliability and self-correcting nature of the scientific method(s) remain intact. But when it is noted, as in the NYTimes op-ed, that a team of scientists could confirm the findings of only six of more than 50 ‘landmark’ cancer studies, there is cause for concern.

Meanwhile, the humanitarian sector has a different problem. It tends to suffer from a lack of data or consensus on how best to measure the impact of various initiatives aimed at fighting poverty, diseases of poverty or other kinds of human inequity. The field did not arise, like science, from a desire to know so much as from a desire to help.

So will it help if humanitarians become more like scientists? Maybe. Maybe not. Humanosphere participated in a brief debate that flared up on Twitter over the weekend, in between the still-dominant Super Bowl Twits (I say Go Hawks! Others say Go Away Already! …). The non-football Twitter debate was prompted by a study done in Ghana that purported to show that eliminating user fees, or required out-of-pocket spending, in this lower-income country did not result in any ‘overall’ health improvements.

(Here’s a non-paywalled link to download an earlier iteration of this study done mostly by researchers at the London School of Hygiene and Tropical Medicine. Here’s another link that describes their basic methods and conclusions).

Newton, statue outside the British Library (Flickr, chrisjohnbeckett)

One of the more popular methods promoted by the metrics/evaluation crowd in the aid and development sphere is called the Randomized Controlled Trial (or RCT). It is essentially an attempt to apply the randomized, controlled design used in drug clinical trials to evaluate the effectiveness of various humanitarian efforts.
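
To make the logic concrete, here’s a minimal sketch in Python, with entirely made-up numbers (simulated villages and a hypothetical 0.3-point benefit – not any real trial’s data), of why random assignment lets you attribute a difference in outcomes to the intervention rather than to hidden confounders like household wealth:

    # Toy illustration of the logic of an RCT; every number here is made up.
    import random

    random.seed(1)

    # Each village carries a hidden confounder (say, wealth) that we
    # cannot observe directly but that affects health outcomes.
    villages = [random.gauss(0, 1) for _ in range(1000)]

    # Random assignment: on average, wealth balances across the two arms.
    random.shuffle(villages)
    treated, control = villages[:500], villages[500:]

    def outcome(wealth, is_treated):
        # Health depends on wealth plus noise; the intervention adds a
        # hypothetical benefit of 0.3 in the treated arm.
        return wealth + (0.3 if is_treated else 0.0) + random.gauss(0, 1)

    t_mean = sum(outcome(w, True) for w in treated) / len(treated)
    c_mean = sum(outcome(w, False) for w in control) / len(control)
    print(f"estimated effect: {t_mean - c_mean:.2f}")  # lands near 0.3

Because assignment is random, the hidden wealth differences average out across the arms, so the gap between the group means estimates the treatment effect. Without randomization, wealthier villages might cluster in one arm and bias the comparison.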

The British team characterized their study as an RCT aimed at measuring whether removing the standard user fees or out-of-pocket payment requirements would produce measurable improvements in health. They found some reduction in anemia for high-risk kids, but said they found no evidence of ‘overall health improvements.’

A number of global health or aid/dev experts responded affirmatively to a Tweet from one leading expert at a DC think tank saying the study showed free health care doesn’t improve health.

This response seemed odd since many other studies have shown that user fees in poor countries do undermine health goals – for obvious reasons. Poor people won’t seek health care at an early stage if it costs too much (and ‘too much’ ain’t much if you live on a few dollars a day), so they skip preventive care and early-stage treatment, which are usually both cheaper and more effective. They wait until it’s a crisis.

Here are a few links to other reports, studies or advocacy briefs that say free access to basic health care services does improve health in poor countries and that financial barriers to accessing health services cause harm:

NYTimes: In Sierra Leone, new hope for children and pregnant women

Partners in Health: Taking a stand against user fees for health

Social Science and Medicine: The Hidden Costs of User Fees

As Rob Yates, a senior economist with Britain’s Department for International Development (DFID), noted in the Twitter debate, the Ghana study did not actually define clearly what it meant by ‘overall health improvements’ and really only measured the rates of childhood anemia (which were reduced when user fees were removed). It was a small study, Yates noted, lacking in clear endpoints.

In short, Yates says it is fair to conclude that the Ghana study found little evidence of a big positive ‘overall’ impact from removal of user fees. But this was arguably due to the study’s limitations; it is not accurate to say the study showed removing the fees did not benefit the poor. The study simply wasn’t statistically powerful enough to detect anything more than it did.

And absence of evidence, as they say, is not evidence of absence.
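
To see why an underpowered study can ‘find nothing’ even when a real effect exists, here’s a minimal power-calculation sketch using Python’s statsmodels library; the 20 and 15 percent outcome rates are hypothetical, chosen for illustration and not taken from the Ghana study:

    # Why a small trial can miss a real effect; the rates are hypothetical.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    # Suppose removing user fees truly cuts a bad health outcome
    # from 20% of children to 15% – a real, worthwhile improvement.
    effect = proportion_effectsize(0.20, 0.15)

    solver = NormalIndPower()
    for n_per_arm in (100, 500, 2000):
        power = solver.solve_power(effect_size=effect,
                                   nobs1=n_per_arm, alpha=0.05)
        print(f"n = {n_per_arm} per arm -> power = {power:.2f}")

With 100 participants per arm, the power is only about 15 percent: the trial will usually report ‘no evidence’ of a benefit that is actually there. Detecting an effect of this size reliably takes on the order of a thousand or more participants per arm.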

So do user fees – making poor people pay even a little for visiting a clinic or a nurse – undermine ‘overall health’ or not?

One could argue that we’ve been testing the value of user fees and out-of-pocket payments in poor countries for decades, since this was a scheme promoted many years ago by the World Bank and International Monetary Fund as part of a neoliberal anti-poverty strategy known as ‘structural adjustment.’ Given this, you could argue – in a very loose, broad and non-reductionist way – that the approach has obviously failed, since there’s no clear evidence it improved health or (as intended) improved financing for health care in poor countries.

But that would probably be going beyond the evidence as well, from a scientific standpoint anyway. Here are two good analyses, one by WHO and the other by the World Bank, looking at the equivocal evidence for either side of the argument.

Perhaps the most evidence-based position here is that the studies can’t yet prove that removing financial barriers to health care services improves the health of the poor. More studies needed. But scientists always say that, don’t they? That’s because they get paid to study things and, well, there is almost always room for debate, for challenging this or that fact or data set.

Meanwhile, a poor mother in Ghana with a very sick child is trying to figure out if she should spend money to feed the family or go to the clinic.

Kenyan mother and child (Wikimedia)

So let’s be careful when we’re pretending to be scientists running a simple plus-minus controlled trial. There’s much more at stake here if we draw the wrong, or premature, conclusions.

  • Joanne Silberner

    Really brilliant piece. Thanks Tom!

    • http://humanosphere.kplu.org Tom Paulson

      Thanks Joanne!

  • Vestias

    Thanks Portugal

  • bee

    dear Mr Paulson, as you know just a few days ago Unesco reported that, despite years of ‘humanitarian’ work in building schools and school feeding, millions of children were not learning to read. this was discovered because – finally – measurements were taken. now we may ask if the agencies, who claimed that they have been improving reading skills etc. through their ‘humanitarian’ work of building and feeding, will be held accountable?? this is unlikely of course but let us hope that, at a minimum, more and more measurements of the impact of the activities of UN agencies and ngos will be taken. ‘Humanitarian’ works are not inherently Good Things. Much harm has been done to people and their environments through the implementation of bad projects and programmes, under the cloak of ‘humanitarianism’.

    • http://humanosphere.kplu.org Tom Paulson

      Thanks Bee,

      Yes, it is very important that aid/dev projects be evaluated for impact. I’m certainly not arguing against that. I’m just cautioning against over-interpreting such studies, since they are often of short duration and limited in scope, and some of the most critical changes sought (empowering women, better governance, etc.) are also the most challenging to measure.

      best
      Tom

      • bee

        dear Tom
        yes, it is also very important that badly designed studies are not accepted as good, useful studies and/or (obviously) that the conclusions of such poor studies are not taken at all seriously. but the debate is very important, and more useful evaluations and calls for accountability would be such a treat.
        thank you and all the best,
        thank you and all the best,

        bee

  • Kristof BOSTOEN

    Dear Tom,

    Thanks for your article “Data love: The risk of humanitarians acting like scientists”. It is some months old, but as the issue will not go away anytime soon, I permitted myself to comment. I work for IRC (ircwash.org), a think-do tank that aims at improving water and sanitation services that last. We find that improving services needs to be done by local authorities and public or private service providers, which we and others support, and we aim to look at various aspects in more depth than governments can afford. I was hired by IRC from academia because the organisation was asked by its funding agency to take a more epidemiological approach to proving (health) impacts. In our case we are indirectly supporting authorities to achieve their (and our) goal, so attribution is not even worth considering. For our field, articles by Prof Cairncross (http://www.lboro.ac.uk/well/resources/fact-sheets/fact-sheets-htm/mthiws.htm) were helpful in making our argument to our funding agencies, but while it is easy to say what should NOT be done, it is more difficult to set out what needs doing. On the other hand, organisations often rely on anecdotal evidence and struggle to do measurements that prove their programmes make a difference, because they are (relatively) small or the change takes so long that attribution becomes an issue, as so many other things happen in the meantime.
    There are various things we need to achieve with monitoring:

    1) steer our projects in the right direction in an objective way;
    2) contribute to global sector learning by documenting our experiences well;
    3) be accountable to donors;
    4) be accountable to national and local authorities;
    5) be accountable to users (indirectly).

    We found that most of our work focused on donor accountability (3) at the cost of all the other aims, in particular (1), and it is taking us a lot of effort to turn this around internally. But externally it is hard as well. Proving progress to the donor is different from critically looking at one’s own project to improve it and being open about your successes and failures. A lot of these successes take longer than your average project, and because the scale of the work means it is done in partnership, attribution is impossible and makes little sense given the partnership philosophy.

    The same problem in our organisation takes place at the level of national and local authorities, who are collecting information but often use it for internal and external accountability more than for guiding day-to-day work. That is logical, as using information in that way is often a shift from how people worked before and requires learning. Monitoring also requires resources, which are not always available. Monitoring in our case helps to improve services but rarely saves money, and in resource-stressed areas an informed subjective decision is just cheaper (and not necessarily wrong) than collecting information for objective decision-making.

    All this is to say that while I fully agree with the points you raised, my bigger question is what to do instead to:
    1) improve the work we do (day to day monitoring)
    2) provide adequate information on accountability

    One of the things we considered is enlightening our funding agencies on these issues, but the enlightenment often happens at a personal rather than an organisational level. The other is that we build brand recognition, in the sense that people come to trust the process we work in, as well as the results we achieve, by becoming familiar with how we work. I would love to see a discussion around what to do after funding agencies agree that double-blinded, cross-over randomised controlled trials are not useful for project monitoring – what to do, for example, in projects that support improved sanitation service delivery through national governments.
    It is a question I still struggle with, five years after successfully pushing back against RCTs where they are not appropriate.

    Would love to hear your view on this.
    Kind regards
    Kristof

  • David

    Reminds me of the problems with Jeff Sachs’s Millennium Villages Project. While he has no lack of compassion, his ambition has gotten in the way of making realistic planning decisions.