It is gratifying to see that we are not alone. As the Poverty
Action Lab[1] at the Massachusetts Institute of Technology concluded,
“the World Bank spent more than a billion dollars in one country without knowing
why they were doing what they were doing”. 200 studies of World Bank projects in
India were not rigorous enough to measure whether the projects made a
difference. A senior economist at the Bank attributes this to the fact that the
Bank’s highly trained, well-meaning professionals think they already know the solutions[2].
Our own meta-evaluation found one-third of all our evaluations to be of good
quality, and another third to be unacceptable. But that doesn’t account for the
evaluations that should have been done, but never were. And while our
professionals are at least as well-meaning as those from the Bank, it doesn’t
take an impossible leap of insight to see that we have a larger proportion of
staff who rely on faith rather than scientific rigour.
We are much better at producing theories and ideas than evidence. The typical
Lessons Learned report rests on extremely small quantities of fact. This
hurts, not only because of the newly discovered tenets of results-based
management, but because we have to provide policy advice in an ever-widening
arena of possibly conflicting arguments.
But we can avert the final dive into obscurity. Yesterday, the GMT eyeballed
measures to increase management attention to the quality of evaluations and to
strengthen the technical capacity of our staff. We are also bolstering our
programme guidance, so that we can better assist not only those millions who
leave no document that they ever lived, but also those of our programme staff
who otherwise leave no evidence that they ever worked.
[1] See World Bank Challenged: Are the Poor Really Helped?, or go directly to the Poverty Action Lab.
[2] Besides randomized evaluations, the Poverty Action Lab also suggests piloting interventions.
(10 September 2004)