There is wide variation in how competently UN agencies use Results-Based Management. As bilateral aid agencies consider how they can offload responsibility for managing aid programmes to multilateral agencies, it is worth examining how the UN agencies stack up in terms of accountability for results. This is the first of four posts comparing how bilateral and UN aid agencies define results. It reviews publicly available guidelines and policy documents to try to understand the underlying causes of poor results reporting at the UN agencies.
Level of Difficulty of the reviewed documents: Moderate to complex
Primarily useful for: Those trying to understand the UN’s inconsistent RBM system
Coverage: 8 documents reviewed, 546 p.
Most useful: The Draft ILO RBM Guide
Limitations: Dense language in most of the documents, and unresolved ambiguities about what results mean in the UN context.
Who this post is for
This post is intended for bilateral aid agency representatives, host country government agencies, project managers, evaluators and monitors who want to know why there is such a wide variation in the standards applied by different UN agencies to managing for and reporting on results. While most readers will not end up any more satisfied with the UN approaches to RBM after reading these posts, they may understand why there is such variation in performance reporting.
One of the problems in reviewing donor agency policies on results reporting, and sometimes the guides on results-based management, is the huge amount of essentially tedious material that the reader must wade through before getting to the heart of what each agency requires, and in particular what it means by "results". The sites and documents referred to in the first two posts of this four-part series are intended to be of use to people who need, for one reason or another, to make sense of, take guidance from, or work within UN agency results frameworks. These posts are likely to be of limited interest to readers who don't need to worry about UN agency RBM, other than perhaps to gain some insight into why it can sometimes be difficult to pin down "what difference" development assistance is making.
Inconsistent UN application of Results-Based Management
National governments and bilateral donors are often confounded by how some UN agencies appear, on the surface, to resist using results-based management in multi-donor projects, or in projects where bilateral agencies provide grants to a UN agency to act as the implementing agency. Most bilateral donors insist that bilateral projects and programmes track progress on results, reporting not just at the level of completed activities, but for mid-term and even long-term results.
Bilateral donor agencies are held accountable for how they spend taxpayers’ money on bilateral projects. While their results reporting is not always done with complete effectiveness, even in some of the agencies which pay the most attention to results, as recent criticism of DfID reporting shows, the organizational culture of many bilateral agencies at least validates the pressure put on project managers to test assumptions, to collect baseline data, and to report not just on completed project activities, but on progress towards results.
The basic definitions of results – often labelled "Outputs" and "Outcomes" – are roughly the same for both bilateral and UN agencies. The frustration for many of those who are trying to account for how public money is spent on aid projects, however, is that there appear to be two different standards: on the one hand, the detailed results accounting demanded of NGOs and private sector agencies implementing bilateral projects, and on the other, the often slack results accountability requirements applied to bilateral aid channelled through UN organizations.
While even relatively small bilateral NGO projects are often expected to demonstrate to bilateral donors some progress in realizing results, these same bilateral agencies often apply different standards to grants made to UN agencies, often requiring and receiving -- even for large projects -- reports dealing in detail only with the completion of project activities.
In one case I can think of, in return for a grant of several million dollars by a bilateral agency to one of the development banks, the progress report -- two pages in length -- described not even the activities undertaken, but simply listed sub-project names.
The report had no details on context, on rationale, or even on activities, and certainly nothing on results.
Meanwhile, several NGOs getting $20,000 from the same bilateral agency were wading laboriously through indicators, collecting baseline data and providing detailed reporting not just on activities, but on results.
Not all UN agencies are alike, of course. FAO and the ILO make a real effort to apply results-based approaches creatively and effectively. The draft ILO RBM Guide [34 p], for example, is easy to understand, and does put the emphasis on reporting results, not just completed activities.
Other UN agencies such as UNICEF and UNIFEM also sometimes do (in UNIFEM's case "did") report effectively, trying to distinguish between activities and results. But there are still other agencies, such as UNDP, where the approach is being applied -- to put it as gently as possible -- inconsistently. This despite the attempt in 2009, with a new Handbook on Planning, Monitoring and Evaluating for Development Results [221 p], to reinforce a results culture. [Note: by October 2011, this handbook was, at least temporarily, no longer easily available.]
A 52-page 2008 study by Alexander MacKenzie on systemic challenges to the effective use of Results-Based Management at the country level in the UN system reviewed a number of previous studies on RBM-implementation problems and makes the point that while there has been progress in reporting on results at the project level, this is not the case at the country level under the UNDAF.
Some UNDP projects, in fact, do report on results and this may be what the “progress” noted in this report refers to. But it is commonly acknowledged among bilateral agencies collaborating at the project level that -- despite their repeated pleas for reports on results -- many of the UNDP-managed projects to which they contribute millions of dollars are continuing to provide reports that year after year deal only with logistical issues related to completed activities.
The question that needs to be asked is:
Why do some UN aid agencies appear able to get away with inadequate -- and in some cases clearly sloppy -- reporting, while in other situations, even in the same types of agencies, managers strive valiantly to deliver thorough and frequently highly insightful reports on results?
Assessments of RBM in the UN
There are a number of UN sites with a wide range of guides available. However, the country programming reference guides on results based management at the United Nations Development Group website, and the UNDG RBM working group list of documents and study materials are most frequently referenced in other UN papers as relevant to country-level programming. When I reviewed it for this post, the country programming reference site had 11 substantive papers focusing on how Results-Based Management is being, or should be, implemented within the UN Development Assistance Framework at the country level. Some of those are referenced here, and some in the next post. Documents on the UNDG site relevant to this discussion include:
A UN RBM Action Plan [4 p.] produced in January 2009. This short paper called for a common approach to results based management by UN agencies, common UN RBM training programmes and materials, but not much more.
A number of earlier studies cited in the MacKenzie review [see above] noted many RBM problems at the UNDAF level that will be familiar to bilateral donors trying, even at the project level, to encourage their UN counterparts to comply with some minimal standards for results-based planning, management and reporting:
- “Outcomes that are too broad, not strategic, and whose contribution to national priorities is unclear;
- Outputs that are not linked to those accountable for them;
- Results chains with poor internal logic;
- Indicators that don’t help to measure whether results – particularly outcomes – are being achieved and a lack of baselines and targets; and
- Poor use and monitoring of risks and assumptions.” [p. 1]
The study’s suggestion that results reporting may be functioning at the project level, but not at the UNDAF level, is a depressing thought given the low level of systematic and coherent reporting on results actually being done at the project level.
Weak UN Results Culture
The 2007 Evaluation of Results-Based Management at UNDP [131 p.] made a crucial point: “In practice, lack of good data in the reporting system is because those responsible for inputting the data don’t see it as something important they are accountable for. It’s just one more imposition from headquarters.”
“In some ways,” as the 2009 UNDP Handbook on Planning, Monitoring and Evaluating for Development Results put it, “it is similar to the difference between having RBM systems and having a culture of results orientation—while it is important to have the systems, it is more important that people understand and appreciate why they are doing the things they are doing and adopt a results-oriented approach in their general behaviour and work.” [p. 12]
The MacKenzie study supports this perspective in noting that, while there had been efforts to coordinate the use of RBM in planning formats, there was little progress on using results information for decision making.
“RBM is understood mainly as planning system – less as a reporting system – and almost not at all as a system to help with day-to-day decisions about programme management. So high quality results information isn’t really needed, or only needed periodically to meet outside reporting requirements.” [p. 7]
RBM systems were viewed by many staff, he noted, as “externally imposed things that need feeding….” [p. 14]
Inconsistent UN Agency Leadership in RBM
The 2007 evaluation of RBM at UNDP noted what is, I think, the real cause of inconsistent implementation of Results-Based Management in the UN agencies, in suggesting that the removal in 2002 of mandatory project monitoring tools may have led to a decline in project-level monitoring and evaluation capacity in some country offices [p. 46]. While “… it has stimulated the creation of diverse M&E approaches in others - especially where there is a staff member dedicated to M&E” and “some progress has been made in country offices towards monitoring outcomes”, overall, approaches “fail to explain how projects are contributing to programme outcomes.” [p. ix]
This 2007 evaluation also found that project-level management in UN agencies was overly geared to Outputs (rather than Outcomes), and that although managers have the mandate to adjust operations on the basis of results, “the evaluation found no evidence of results being a significant consideration in that process.” [p. 44].
My point here is that, while variations in approach to RBM are now permitted in the UN system, variations in commitment to results-based management by agency leaders at the country level may in large part explain why, within and among individual agencies such as UNDP or UNICEF, results-based planning and reporting are handled so differently: reasonably well in some countries, but essentially ignored in others.
UNDP can be observed taking project results reporting seriously in one country, but not in another. And even within the same country, UNDAF notwithstanding, it is possible to see UNICEF, for example, treating results seriously, while the UNDP office nearby treats them as what can only seem, to outsiders, a bothersome afterthought.
Bilateral assessments of UN agency RBM
The 2009 partner and bilateral agency assessments of multilateral aid agencies reflect this difference. On the whole, government partners rate UN agencies more highly on Results-Based Management than do donor agencies, but donors too see a difference between how UNICEF, for example, manages for development results (reasonably well) and how they rate UNDP’s Results-Based Management (barely adequate).
The 2009 donor (MOPAN) assessment of UNDP [46 p.], published in 2010, stated that “Donors rate the UNDP as inadequate in ensuring the application of results management across the organisation” [p. 15]. The same document noted that donors “have some reservations about the UNDP institutional culture for supporting a focus on results.”
This is putting it mildly.
In a comparison of the details of answers to specific questions in the donor assessment of UNICEF [47 p.] and UNDP, donors ranked UNICEF higher than UNDP on all five questions related to country focus on results, higher on senior management’s leadership on results management, higher on issues such as whether results frameworks have measurable indicators at Outcome and Output levels, and whether organization-wide strategies have causal links from Outputs through Outcomes to Impacts.
While UNDP rated highly for delegating authority to country offices, the assessments suggest that local leadership on results-based management is sometimes lacking. They also suggest that some earlier hopes for improvement in UNDP reporting had not materialized by the time these assessments were produced.
As a colleague and evaluator who has worked as a consultant with many UN agencies explains it, however, the criticisms of some UN agencies’ performance on project-level results reporting may need to be moderated by the realities of context:
- UN agency Field staff often face considerable pressure from within the organization to report on activities and expenditures.
- Field staff have few professional incentives to take the time necessary for detailed work on indicator-based data collection and reporting at the project level.
- Field staff often have to cope with reporting to multiple donors, ranging from major bilateral agencies to small community charities, each with different formats and information expectations.
All of this said, however, where a results framework for a project has been agreed upon between UN agencies, national government agencies and bilateral donors, there can be little to justify not using it.
The bottom line:
United Nations agencies have an inconsistent record in applying results-based management at the project level. Agency leadership at the country level and a generally weak results culture throughout the system contribute to this, but I think the way results themselves are defined in the UN system also has an effect.
The issue of the UN results chains and results definitions is the focus of my next post.
Further reading on UN Results-Based Management issues
Multilateral Organizations Performance Assessment Network reports on UN agencies
United Nations Development Group country programming guides and documents
The UNDG RBM Group page has a large number of country reports and RBM guides from a wide variety of UN agencies.
A short YouTube interview with Derek Poate, the team leader of the Evaluation of Results-Based Management at UNDP, summarises the problems in the results culture.
This post edited to update links on August 5, 2010