Over 200 million people are in urgent need of humanitarian assistance across the world today. In 2017, the UN-coordinated appeals reported a shortfall of 41 per cent, despite receiving a record amount of funding. As demands on these limited funds increase, so does the need for high-quality evidence on the most effective ways to improve humanitarian programming.
Impact evaluations contribute to high-quality evidence on what works, how and why by examining causal links between interventions and outcomes. Answering these questions of attribution can be extremely challenging in humanitarian settings, and a 3ie scoping paper highlights some areas where evidence is lacking. An impact evaluation is often dismissed as an impractical luxury: too difficult to conduct, too disruptive to ongoing interventions and teams on the ground, and too lengthy to yield practical insights. A frequently raised question is: how can you set aside funds for evaluations when people's lives are at stake? But when the stakes are so high, it is all the more important to have evidence on the effectiveness of what we are doing.
3ie’s Humanitarian Assistance Evidence Programme, with support from Danida, DFID, UNOCHA, USAID and WFP, set out to address evidence gaps in the humanitarian sector by supporting the production of rigorous, high-quality impact evaluations related to food security, multi-sectoral humanitarian programming and interventions targeting malnutrition. The seven impact evaluations under this programme have been conducted in Chad, Democratic Republic of Congo (DRC), Mali, Niger, Pakistan, Sudan and Uganda. They were undertaken in close consultation with implementing agency partners and employ a range of innovative research methods.
One of the main lessons from the 3ie-funded studies is that there may be solutions at hand even for the most unanticipated challenges that may arise while conducting impact evaluations in humanitarian contexts.
Why are impact evaluations considered too difficult in humanitarian settings?
Getting buy-in from programme staff for an impact evaluation can be an uphill battle. A 3ie-supported study in DRC faced a long and at times difficult process of securing approval from the implementing agency. Significant staff turnover, agency scepticism towards the research approach and a tense socio-political climate led to delays in starting the evaluation and posed significant risks to its completion. In such difficult circumstances, involving programme staff in the design of the study and having informative conversations about the advantages of a rigorous, independent evaluation can go a long way. In this particular case, the implementing agency not only eventually agreed to the suggested design, but also added its own resources to cover the costs of randomising additional households into the programme.
Read also about grants now available under 3ie’s Humanitarian Assistance Thematic Window.