Water advocates sour on Philanthropedia’s crowd-sourced rankings

Everyone these days wants to find a better way to evaluate the performance of humanitarian and non-profit organizations, in part because many of these groups have for too long operated on the assumption that having good intentions, a noble mission, was good enough.

It’s no longer good enough for many donors, philanthropists and aid agencies. Funders want proof of impact.

But it’s not as easy as it might sound to evaluate do-gooders. A commercial enterprise can report on profit margins, production pace and inventory flow to give shareholders and investors a look at its performance. But groups working to reduce poverty, improve equity and empower the poor often have a harder time pointing to a simple impact.

So what could be cooler, more tech-savvy and more innovative than ‘crowd-sourcing’ these impact evaluations?

That’s Philanthropedia.

“What we do is expert crowd-sourcing,” said Jasmine Morrow, manager of research for Philanthropedia, which is a subsidiary of GuideStar, one of several organizations that provide information, and quantitative rankings, on non-profit organizations in the U.S. “We want to help donors and individuals evaluate impact and decide which organization to give to based on surveys of select experts in particular fields. We’ve been doing this for about five years now.”

Recently, Morrow and her colleagues at Philanthropedia asked experts in the field of water, sanitation and hygiene (aka, WASH) to update a 2011 ranking of the sector in which 115 experts were asked to list the organizations that do the best work in this crucial area of aid and development.

One set of water experts, based in Seattle and widely recognized as leading the charge for better impact evaluation in the WASH arena, refused to participate.

“Why? Because it’s worse than useless,” said Kirk Anderson, director of programs for Water 1st International. “We do need to improve accountability and our performance measures, but this is not much better than a popularity contest.”

“This is not better than nothing; we see it as a step backward,” said Marla Smith-Nilson, founder of Water 1st, one of the original founders of Water.org and a 20-year veteran of the humanitarian water wars who, with Anderson and others, has been pushing for a much more rigorous method of evaluating water projects. Here is a link describing their Water for Life rating system.

“Billions of people lack access to clean water today and we’re still doing the same things that don’t really help,” Smith-Nilson said. Groups are still running around the developing world, installing wells that break down after a few years, she said, noting that some estimate anywhere from 35-50 percent of all water projects fail within a few years.

How can you tell if this is a good or bad water project? (Americans help install water pump in Ethiopia.)

The ‘water sector’ within the aid and development community is huge, because access to clean water and sanitation is widely recognized as fundamental to fighting poverty and inequity. But the community can’t even agree on a definition of what constitutes ‘access,’ Smith-Nilson said, let alone find consensus on how to best measure impact and effectiveness of any given strategy.

Is it a success to build a well that still requires poor women and children to walk miles every day carrying 30-40 lb. containers on their heads? Is that improved access? Or should we abandon the whole well-digging frenzy and push for piped systems? What are the long-term goals here?

The cynical might conclude that the lack of evaluation in this field persists because many of these organizations don’t really want to be evaluated too closely, since revealing a high failure rate tends to hurt fund-raising. Smith-Nilson and Anderson have been among those pushing for a more independent and rigorous system of performance measures, one that doesn’t depend upon the community’s opinion of itself or a select group of experts.

“Consumer Reports goes out and buys a product off the shelf, tests it and writes a review,” Anderson said. “That’s what a responsible organization trying to do evaluations should do. These guys are trying to do something on the cheap by just collecting people’s opinions. So it’s meaningless … or worse than meaningless because funders think their rankings actually measure something.”

Lindsay Nichols, spokeswoman for GuideStar, said it’s inaccurate to describe the Philanthropedia approach as simply a popularity contest or a collection of poorly informed opinions.

“The organizations can’t recommend or rate themselves,” Nichols explained. For this WASH survey, Philanthropedia asked academics, activists, other NGOs and experts in water and sanitation to recommend four established organizations doing well in the field as well as four ‘start-up’ organizations, she said, to keep the bigger, more widely known organizations from dominating the results. “The methodology and findings are totally transparent.”

Nichols said the rankings that result from the survey are not based solely on the expert community’s opinions. GuideStar’s vetting program also factors in whether a non-profit has well-defined goals and is meeting its mission, whether its finances are in order, how it is governed and how transparent its operations are.

Morrow added that she agrees the Philanthropedia approach is imperfect, but said it was launched in 2008 out of frustration with the dominance of even more inadequate evaluation methods such as the dreaded ‘overhead ratio’ – a simple metric that asks organizations to describe how much of their money goes to actually helping people versus how much goes to ‘overhead.’

This method, almost everyone agrees, causes more harm than good since some interventions are easily and cheaply delivered while others may require much more administration to achieve success.

Philanthropedia was originally the brainchild of Howard Bornstein, who is now a private equity manager for Bain Capital. In 2008, Bornstein was working at the Bill & Melinda Gates Foundation, which has long been an advocate for better impact evaluation and metrics in the fight against poverty and inequity.

Bornstein and others sought help from some business whizzes at Stanford University and, voilà, Philanthropedia was born. Whether it constitutes progress or regress is, it appears, debatable. But it is definitely evidence of a growing effort within the aid and development community to bring better and harder measures of effectiveness to this field so lacking in good, reliable metrics.

About Author

Tom Paulson

Tom Paulson is founder and lead journalist at Humanosphere. Prior to operating this online news site, he reported on science, medicine, health policy, aid and development for the Seattle Post-Intelligencer. Contact him at tom[at]humanosphere.org or follow him on Twitter @tompaulson.

  • Lindsay Jo Keller Nichols

    Tom, thank you for giving us the opportunity to weigh in. I want to underscore the point of Philanthropedia’s methodology – that it is the experts who decide what impact means and what should be measured, not Philanthropedia (or GuideStar). It’s for that very reason that we wish thought leaders would join our expert panel and help inform our process, rather than just suggesting we nix it altogether. We welcome constructive feedback at any point. Again – we appreciate your balanced reporting, Tom. ~Lindsay J.K. Nichols, GuideStar’s senior director, marketing & communications, lnichols (at) guidestar (dot) org