Five years on: Is monitoring, evaluation and learning catching up with pre-arranged financing?

Author: Chandler Klein

Photo: Getty Images

Investment in pre-arranged finance for disasters is growing, and today’s toolkit is far broader than it was just a few years ago. With lives and livelihoods on the line, we need to know which approaches work, which don’t—and why. Governments and donors must ask: is our learning keeping pace?

Mancala—the centuries-old game found from Dakar to Dar es Salaam and now on phone screens everywhere—starts with a scoop of stones dropped one by one around the board, a steady rhythm of gather, share, learn, repeat.

Learning around pre-arranged financing (PAF) should move the same way. In 2020, we argued that the effectiveness of PAF could be compromised without serious monitoring, evaluation and learning (MEL). At the time, PAF was a promising but largely untested field. Since then, investment in PAF has increased, the instruments have diversified, and the stakes—for governments, donors and the people whose lives and livelihoods depend on protection from disasters—have shot up. Now is the moment to ask: is MEL keeping pace with PAF’s growth?

Anticipatory Action: a tiny beacon showing the way

The clearest progress has unfolded in the anticipatory action (AA) corner of the PAF continuum. Because AA deliberately releases funds before a disaster, agencies accept the risk of acting ‘in vain’. This gamble invites tough questions. Is early cash better than traditional post-shock relief? The scrutiny has encouraged evidence-gathering and collaboration. More guidance emerged, and results were put on display. Examples include randomised controlled trials in Nigeria and Mongolia, process reviews in South Sudan and a recent evidence synthesis from the World Food Programme. AA’s budget is small, but it hints at what’s possible when learning and transparency are prioritised.

Beyond AA

There are bright spots elsewhere, though they are harder to find. The African Risk Capacity (ARC) evaluation, published in February 2025, is the third in a planned series of four spanning a decade of sovereign insurance at scale, asking not just whether premiums were paid on time but also whether households fared better. In Mexico, a 2024 study took a fresh look at the now-defunct Fonden disaster fund—this time asking the big question: did it save lives? Early signs say yes. The InsuResilience Global Partnership’s Vision 2025 indicators—though heavy on counting outputs and lighter on quality—nudged the sector toward a shared language of results. However, the future of the initiative is unclear.

The Global Shield is a young platform that, given its role in funding flows, has the opportunity to influence good practice. It has inherited Vision 2025, and the Global Shield Ambition highlights MEL as a key lever for stimulating innovation in PAF. It is too early to say how it will change MEL in PAF. The demand-led model and wariness of top-down bureaucracy need not get in the way of shared learning; instead, the platform can nurture a flexible MEL culture as the first batch of pathfinder countries starts to implement PAF activities.

Why MEL struggles to keep up with PAF

So why hasn’t learning surged in PAF?

  • The payout fallacy. There’s a tendency to think evaluation and learning only matter once a payout lands—and yes, post-shock results are the ultimate test. But ex-ante phases can reveal gains, like stronger contingency planning, and also surface design flaws, coordination gaps and unrealistic triggers—long before disaster strikes.

  • Culture and politics. Shared learning is baked into humanitarian practice—driven by high-stakes work with crisis-affected communities, reinforced through habitual coordination, and codified in the Core Humanitarian Standard. By contrast, multilateral development banks and risk pools tend to be more risk-averse: their interventions are often larger, and any misstep carries a higher political cost. It is also easy to defer learning by pointing to client governments’ reluctance to publish. Private insurers, meanwhile, may see transparency as a liability and hold back commercially sensitive data. Where disclosure is risky, learning stalls.

  • The technical headache. Hazards strike unpredictably, so evaluators should ideally prepare in potential strike zones—creating tailored evaluation plans and placing teams on standby, ready to mobilise if a shock occurs. This is expensive and logistically messy. Even after a shock, the usual before–after lens can mislead: no change may be the desired result if early funding prevented losses (see the sketch after this list). Agile evaluation designs become essential.

  • The causal maze. PAF’s theory of change stretches from global capital markets to household recovery, and a glitch anywhere along that chain—premium payment, trigger accuracy, treasury release, activation of response plans—can sever the link between a payout and relief. In addition, many PAF instruments disburse funds as general budget support. This makes it hard to trace how resources are used and who benefits.
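To make the before–after trap concrete, here is a minimal, hypothetical sketch. Every number below is invented for illustration—the welfare index, the loss figures and the arm sizes are assumptions, not data from any PAF evaluation—but it shows how a protected population can look ‘unchanged’ even though large losses were averted.

```python
# A minimal, hypothetical sketch: why a naive before-after comparison can
# hide the impact of pre-arranged financing (PAF). All numbers are invented.
import random
import statistics

random.seed(0)

N = 1_000              # households per arm (hypothetical)
BASELINE = 100.0       # pre-shock welfare index (hypothetical)
SHOCK_LOSS = 30.0      # average loss with no protection (hypothetical)
AVERTED_BY_PAF = 28.0  # loss averted when early funds arrive (hypothetical)

def post_shock_welfare(protected: bool) -> float:
    """Welfare after the shock, with household-level noise."""
    loss = SHOCK_LOSS - (AVERTED_BY_PAF if protected else 0.0)
    return BASELINE - loss + random.gauss(0, 5)

paf_arm = [post_shock_welfare(protected=True) for _ in range(N)]
no_paf_arm = [post_shock_welfare(protected=False) for _ in range(N)]

# Before-after lens: the PAF arm looks almost unchanged, which a naive
# reading scores as 'no effect'...
print(f"Before-after change, PAF arm: "
      f"{statistics.mean(paf_arm) - BASELINE:+.1f}")

# ...but against a no-PAF counterfactual, the averted loss is visible.
print(f"PAF arm vs counterfactual:    "
      f"{statistics.mean(paf_arm) - statistics.mean(no_paf_arm):+.1f}")
```

Run as written, the before–after change is close to zero while the counterfactual comparison recovers nearly the full averted loss—which is why agile, counterfactual-aware evaluation designs matter.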

Heading into 2030

Illustration: Camille Aubry

When catastrophe bonds mature without a public evaluation, policymakers cannot tell if the hefty costs were worth it. When payout timeliness is reported only in aggregate, implementers cannot pinpoint which mechanism needs fixing. And when evaluations stay locked inside archives, the same design flaws resurface in the next project, costing both money and trust.

The story so far is a handful of encouraging examples separated by vast quiet zones. AA proves that collaborative learning can flourish when culture and skills support it; ARC shows that multi-year, multi-country evaluations examining the downstream impacts of PAF on communities are possible; and shared indicators promise a standardised approach to measuring quality. The task for the next five years is to build on these efforts and help the rest of the sector catch up. The Global Shield has a valuable opportunity to raise the bar on what good MEL looks like in PAF.

How we can help

At the Centre, we offer free MEL support to governments, risk pools, UN agencies, NGOs and MDBs with an appetite for learning—especially those experimenting with under-evaluated instruments and programmes. We run lessons-learned studies and process and outcome evaluations, and offer hands-on support to help practitioners turn vague ambitions into practical MEL systems. If you are wrestling with PAF questions and want to share your learning with the wider community, drop us a line at cklein@disasterprotection.org, or subscribe to our newsletter, The Dispatch, to be the first to hear about our new MEL offer.
