
Tuesday, August 12, 2014

Building a Monitoring and Evaluation System: IFAD’s useful M&E RBM Guide

by Greg Armstrong

[Updated July 2018]

A “Guide for Project M&E”, produced in 2002 by the International Fund for Agricultural Development and written in clear language, still provides solid, practical, and actionable advice on the design and implementation of a Monitoring and Evaluation system. The RBM tools provided here, although replaced in newer guides, remain useful for project design, planning, monitoring, reporting or evaluation, in 2018 and beyond.

[Image: IFAD Guide for Project M&E cover]

Level of Difficulty: Easy to Moderate
Length: 378 pages
Languages:  English
Primarily Useful For: Planners, evaluators, monitors, managers
Most Useful:  “Navigating the Guide” identifies sections of use to different readers
Limitations:  Originally produced in English, French, Spanish and Arabic, this guide has been replaced by new and updated M&E guides, and only the English language version of this 2002 edition is easily available. 

Background


In November 2013 I worked briefly with IFAD Vietnam, helping them develop a simple reporting format for their projects.  I was struck then by the practical nature of the discussions, working with participants who were literally grounded in the realities of rural development, and by the interest IFAD staff take in developing workable tools for project management.

It was then that I first came across the IFAD Guide for Project M&E, written by Irene Guijt and Jim Woodhill. It is something I regret not seeing when it was first produced in 2002. The fact that it was produced 12 years ago does not, however, detract from its ongoing utility.

Who This is for


While those getting the most obvious benefit from this RBM guide will be IFAD staff, partners and consultants working with them, the basic ideas and the clear organization of the document will make it useful for programme and project planners, monitors, evaluators, and field staff working with any organization, who are trying to figure out how to make something practical out of the usual donor results-based management technical terms.  The primary target audience for the document is obviously people who focus on agriculture, food production and associated activities, and while they will know a lot about those topics, they, like most people, will not want to wade through, nor are they likely to remember, pages of donor-specific RBM jargon.

This 2002 Guide is written in relatively clear language [and in 2018 I find it remains more user-friendly than the more recent guides], and while it does refer to some basic RBM terms, it is easy to get past this.  The terminology different donors use in Results-Based Management changes with the winds of management fashion, but underlying the jargon, if you can find them, are some basically simple ideas.  This document presents these basic ideas well.

Format


The IFAD M&E Guide, in English, French, Spanish and Arabic, was produced in 14 separate files (an introduction, 7 substantive sections, and a number of useful annexes) [Edited, July 2016: which, in the past, could be read online at the website].  They could, until 2014, be downloaded individually, for those people who have a very specific interest in a limited sector of results-based management – such as, for example, project design, monitoring and evaluation, capacity building in M&E, or learning lessons from M&E.

Dividing the guide into discrete components is a fairly intelligent approach, given that many of the people who may want to use the ideas could be working in remote areas, may not have access to high-speed internet connections, and may not be able to download the entire guide. If you are going to read only individual chapters, however, [I suggested, when I originally wrote this review] try downloading the individual chapter PDF files, because the illustrations are better in those, and navigation on the online version is sometimes erratic. [Edit:  The online version no longer appears available, but the new 2015 IFAD Evaluation Manual is available as a PDF file from IFAD, and this link will take you to an archived PDF version of the still useful, and much more easily usable, 2002 IFAD Guide for Project M&E].  The PDF file for the entire manual is a little over 2 megabytes in size, and if that is too big for your internet connection, you can download the individual chapter files [most of which still appear to be available at the addresses below, in 2018].

Navigating the Guide

To make the choice of which chapters to download – or read – easier,  the section in the preface titled “Navigating the Guide” provides a good visual guide, with clickable links, to which sections of the report might be of most use to individuals – project managers, M&E staff,  consultants, and IFAD staff and partners.   This is divided into project phases – design, start-up and implementation.
[Image: Navigating the IFAD M&E Guide – a box showing which sections of the Guide will be of interest to different readers]

Below is a brief description of some (but not all) of the topics covered in the main sections of the Guide, with links to the appropriate sections where they can be read in detail online, or downloaded. In all of these sections, the Guide provides easy-to-understand examples, numerous illustrations and charts, and suggestions for further reading.

Monitoring and Evaluation in IFAD


Introducing the M&E Guide - Section 1 (16 pages) defines the purpose and target audience, describes the IFAD project cycle, and outlines common problems with monitoring and evaluation in IFAD projects.  These are topics that will be primarily of interest to those working, or about to work, with IFAD, but I suspect people working with other organizations will recognize many of these same problems in monitoring and evaluation.


M&E and Impact


Using M&E to Manage for Impact - Section 2 (32 pages) quite sensibly makes the case for adaptive management, giving a project room to change as it evolves, and shows how intelligent monitoring and evaluation can help managers and implementing agencies cope with changes that occur during implementation.  Among other things, section 2 deals with topics such as
  • Developing the logic of the project, to ensure there is a clear link between activities and results (a minimal sketch of such a results chain follows this list),
  • Linking the provision of information through M&E to key decision schedules,
  • Creating a learning environment within the project, and using M&E within this learning environment to increase the chances that long-term results (impacts) will be achieved,
  • Setting up and using an M&E system that is simple, and which provides data for decision making, and
  • Understanding participatory M&E, and how to make it work.
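
To make the first of these bullets concrete: the “logic of the project” is, at its simplest, a chain linking activities to the result levels discussed just below (Outputs, Outcomes, Impact). Here is a minimal sketch of such a chain; the class and field names are my own illustration, not anything prescribed by the Guide.

```python
# A minimal sketch of a project results chain, using the result levels
# discussed in this review (Outputs -> Outcomes -> Impact). Class and
# field names are illustrative, not taken from the IFAD Guide.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Result:
    level: str                                 # "Output", "Outcome" or "Impact"
    statement: str                             # the change the project expects
    contributes_to: Optional["Result"] = None  # the next link in the chain


impact = Result("Impact", "Rural household incomes rise sustainably")
outcome = Result("Outcome", "Farmers adopt improved irrigation practices",
                 contributes_to=impact)
output = Result("Output", "150 farmers trained in irrigation management",
                contributes_to=outcome)

# Walking the chain makes the activity-to-result logic explicit:
# exactly the link the Guide asks planners to test.
link = output
while link:
    print(f"{link.level}: {link.statement}")
    link = link.contributes_to
```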

It is worth noting that when this was written in 2002, the terms used for results were “Outputs, Outcomes and Impact”.  By 2004, in the new IFAD Results and Impact Monitoring System (RIMS), these had become First Level Results (Outputs), Second Level Results (Outcomes) and Third Level Results (Impact) – something much easier to convey in a second-language situation and, for that matter, I have found, even for those whose native language is English.  [2017 IFAD project evaluations including project theories of change suggest this may now be "Outputs", "Intermediate Outcomes" and "Impact"]

M&E, project design and annual work planning

[Image: Linking project design and annual work planning to monitoring and evaluation]

Linking Project Design, Annual Planning and M&E – Section 3 (32 pages) approaches project design as something which continues throughout the life of the project, and includes a description of the types of design issues which need to be addressed at 7 stages in the life of a project, and how an M&E system can fit into this design process –

  • At the initial project design phase,
  • During project start-up,
  • During the annual work planning phase,
  • During ongoing project operations,
  • At the end of the early implementation stage,
  • During mid-term reviews, and
  • At the beginning of the project phase-out.
[Image: IFAD – testing assumptions in project design]

This discussion includes issues such as planning for M&E capacity development, planning to encourage learning and adaptation during project implementation, dealing with the simplistic format of the logical framework when working with a complex project, testing assumptions in project design and how to conduct a situation analysis.  

Assumptions


There is a good, although short, section on the need to critically review assumptions – not just to treat them as an afterthought – and, tied to this, to test the logic of the project design to really establish if activities are likely to contribute to results – what today might be called the theory of change.

Moving from the Logical Framework to an annual work plan


Finally, this section shows how to translate the logical framework into an annual work plan, with a list of specific topics to be covered, and 12 specific issues to be addressed in the M&E component of the plan.
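
As a rough illustration of that translation step (in an invented format, not the Guide’s own 12-point checklist): each logical framework output is broken down into the activities planned for the coming year, each with a schedule and a responsible party.

```python
# A rough sketch of translating one logical-framework output into annual
# work plan entries. The structure and entries are invented for
# illustration; the Guide's own checklist is more detailed.
annual_work_plan = {
    "output": "150 farmers trained in irrigation management",
    "activities": [
        {"task": "Develop training curriculum", "quarter": "Q1",
         "responsible": "Training consultant"},
        {"task": "Run district training sessions", "quarter": "Q2-Q3",
         "responsible": "Field staff"},
        {"task": "Follow-up visits to trainees", "quarter": "Q4",
         "responsible": "M&E officer"},
    ],
}

for activity in annual_work_plan["activities"]:
    print(f'{activity["quarter"]}: {activity["task"]} '
          f'({activity["responsible"]})')
```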

Building a Monitoring & Evaluation System


Setting up a Monitoring and Evaluation System - Section 4 (21 pages) reviews 6 stages in the initial establishment of a monitoring and evaluation system, including

  • Clarifying the purpose and scope of monitoring and evaluation with key stakeholders,
  • Specifying the information and indicators required, and the relevance of such information for specific groups,
  • Planning and testing the feasibility of the information-gathering methods required,
  • Specifying the target audiences for M&E data, how these audiences will use the data, the schedule on which they need the data, and how it will be transmitted,
  • Specifying how such data will be used in critical reflection by stakeholders, with what methods, for what purpose, and on what schedule, and
  • Planning for the precise number of M&E staff, their roles and responsibilities, the budget, and the information management system that will be used for M&E.

Critical reflection on M&E data


In the overview of critical reflection on data – something that, in my experience, is usually glossed over both in planning and in the course of project implementation – the guide discusses

  • Examples of 7 stages where specific attention could be given to the specific participants in reflection activities,
  • The schedule and format for discussions,
  • How to critically examine project strategy,
  • Whether M&E is meeting information needs,
  • How data can be used during quarterly project reviews, during field visits, and during annual project reviews, and
  • How M&E data can be used during periodic review workshops for specific project components, and in preparation for supervision or monitoring missions.
The Guide comes back to the discussion of critical reflection in more detail in section 8 (below).


The M&E operational plan


This section provides an example of a specific format that could be used to clarify the M&E timeline, and a suggestion on 8 sections for the M&E operational plan itself.

Information needs and Indicators


Deciding what to monitor and evaluate - Section 5 (36 pages) deals with some of the practical issues of selecting indicators and collecting data. The clear focus here is on attention to results – not just to Outputs, or completed activities – but on obtaining data which will show whether desired changes are actually occurring.  Sections 5 and 6 (below) could be read as one coherent chapter.

Varying information needs of stakeholders


The Guide provides an example of how consulting stakeholders can reveal multiple different information needs in the same project, with donors identifying 9 broad issues on which they wanted data, field workers focusing on information needs in 12 largely different areas, and farmers interested in data on 14 more issues.

Using the M&E Matrix


This section of the M&E Guide also provides a description of how to use a Monitoring and Evaluation Matrix (what some other agencies call a Performance Measurement or Monitoring Framework). It includes a review of the potential problems with complex quantitative indicators, with compound indicators, indices, proxy indicators, and different types of qualitative indicators.
[Image: Table comparing different types of project indicators]
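
For readers who have not worked with one: a row in an M&E matrix typically records an indicator together with its baseline, target, data source, collection frequency, and the person responsible for collecting it. A minimal sketch, using conventional field names rather than a format taken from the Guide:

```python
# A minimal sketch of one row in an M&E matrix (what some agencies call
# a Performance Measurement Framework). Field names are the conventional
# ones, not a format prescribed by the IFAD Guide.
from dataclasses import dataclass


@dataclass
class MatrixRow:
    indicator: str      # what will be measured
    baseline: float     # value at project start
    target: float       # value expected by project end
    data_source: str    # where the data come from
    frequency: str      # how often data are collected
    responsible: str    # who collects and reports


row = MatrixRow(
    indicator="% of households reporting year-round water access",
    baseline=35.0,
    target=60.0,
    data_source="Annual household survey",
    frequency="Yearly",
    responsible="Project M&E officer",
)
print(f"{row.indicator}: baseline {row.baseline}%, target {row.target}%")
```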

It also reviews different means of determining if the project has in fact contributed to meaningful changes.

[Image: Table comparing methods of determining if a project led to changes in indicators]


Detailed discussion of these issues is, in my experience, always necessary.  I have found in RBM workshops that going through the practical stakeholder discussions about the details of such a results and indicator framework can take days of group discussion to find practical, useful indicators. These indicators often need to be reviewed again on an annual basis – no simple task, and one which can be time-consuming and, for some managers, an annoying distraction.

Failing to conduct such indicator reviews on a regular basis, however, almost inevitably results in later problems for project management, so spending a bit of time at the beginning of a project is better than spending a lot of time near the end, trying to invent new indicators and collect retrospective baseline data.


Collecting and Using Meaningful Indicator Data



Gathering, Managing and Communicating Information - Section 6 (32 pages) could logically have been included with section 5.  It is really a discussion of the very practical issues of data collection arising from the selection of indicators.

This section begins with a quote that very nicely distinguishes between data, information and knowledge, and why, therefore, we must be careful not just in our selection of indicators, but in how we analyse indicator data, who analyses it, and when we analyse it:

"Data travel. On this journey they are gradually collated and analysed as the data move from field sites or different project staff and partner organisations to be centrally available for management decisions and reports. The journey involves a transformation from data to information and knowledge that is the basis of decisions. Data are the raw material that has no meaning yet. Information involves adding meaning by synthesising and analysing it. Knowledge emerges when the information is related back to a concrete situation in order to establish explanations and lessons for decisions. Many rural development projects have much data lying around, less information, little knowledge and hence very little use of the original data for decision making." [Knowing the Journey Data Will Take, section 6, p.1]
[Image: Cartoons illustrating how indicator data are used differently at different stages of a project]

Data collection methods


Most of the section goes through a series of very detailed questions to be considered for specific data collection methods, and how the utility of these methods may be perceived differently by different stakeholders (farmers, researchers, policy makers and funding agencies).  

The list of questions for assessing the utility of indicators is exactly the kind of thing that should be discussed, but often is neglected, when indicators are selected.  Among the other very useful discussions:

  • Important steps in the preparation for data collection,
  • Methods of ensuring that the M&E data collected is reliable,
  • Options for recording data, and
  • What we do with the M&E data after we collect it – collating data, analyzing data, documenting information derived from the data, and communicating it effectively to important decision makers – in other words, transforming M&E data into information, and then into actionable knowledge (a transformation sketched below).
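
The last bullet, transforming data into information, is easier said than done. As a toy illustration of the collation and analysis steps, the sketch below turns invented field records into district-level adoption rates a manager could act on; the records, names, and the 50% management trigger are all hypothetical.

```python
# A toy illustration of collating raw M&E records into information a
# manager can act on. The records and the 50% trigger are invented;
# a real project would pull these from field report forms.
from collections import defaultdict

raw_records = [  # one record per trained farmer
    {"district": "North", "adopted_practice": True},
    {"district": "North", "adopted_practice": False},
    {"district": "South", "adopted_practice": True},
    {"district": "South", "adopted_practice": True},
]

by_district = defaultdict(lambda: {"trained": 0, "adopted": 0})
for record in raw_records:
    tally = by_district[record["district"]]
    tally["trained"] += 1
    tally["adopted"] += int(record["adopted_practice"])

# The "information" step: adoption rates by district, flagged against a
# hypothetical 50% trigger for management follow-up.
for district, tally in by_district.items():
    rate = tally["adopted"] / tally["trained"]
    status = "review" if rate < 0.5 else "on track"
    print(f"{district}: {rate:.0%} adoption ({status})")
```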

This section is supplemented by a 52-page annex (see below) which summarizes 34 different M&E data collection methods, in seven categories:

  • Sampling-related methods
  • Core M&E methods
  • Discussion methods (for groups)
  • Methods for spatially-distributed information
  • Methods for time-based patterns of change
  • Methods for analyzing linkages and relationships
  • Methods for ranking and prioritizing

M&E Capacity Development


Putting in Place the Necessary Capacities and Conditions - Section 7 (40 pages) focuses on how to build M&E capacity within an organization.

It outlines the different skills and responsibilities for monitoring and evaluation needed not just by specialized M&E staff, but by project managers, sector specialists, data collectors, and stakeholders, emphasizing that skills essential for one level of staff might not be necessary for others.  It also faces the implications of what participatory monitoring and evaluation means in reality, if we go beyond simply labeling everything as “participatory” and actually want to make the process participatory.

M&E Training


Section 7 of the M&E Guide also discusses how we can assess training needs for M&E, and the different options for building monitoring and evaluation capacity for

  • National M&E systems,
  • Institutional M&E systems, and
  • Project-level M&E systems.

Incentives to conduct and use M&E

The guide also deals with something which is often neglected, in my experience – incentives for M&E: Why should staff and stakeholders actually want to improve monitoring and evaluation, given all of the other pressures they face?

We know from studies of how innovations and new policies are implemented that adoption motivation is a critical element in whether innovations such as new approaches to M&E or RBM are sustainably implemented over time. Making such systems easy to understand and easy to use increases that motivation.

Locating an M&E unit in the project structure


The component on organizing M&E structures and responsibilities is interesting, discussing whether M&E skills (and responsibility) should be put into a specialized M&E unit, and if so, where in the organization it should be located, to ensure that monitoring and evaluation actually occurs, and has an impact on management.  What is refreshing about this is that the guide provides several examples from IFAD projects where the location of the M&E unit did not work effectively.

Many agencies really avoid talking about problems they have, but we learn the most from honestly analysed mistakes, and these are issues and problems that other people, in other agencies, can recognize from their own experience.

Balancing M&E technical assistance with M&E capacity development


This section also deals with how to make the best use of technical assistance and consultants for monitoring and evaluation – the need for clarity on the role of external M&E advisors, the advantages and disadvantages of using consultants, what to do when different consultants provide conflicting advice, and the need to balance the requirement for short-term, high-quality technical advice with the need to build sustainable indigenous capacity for M&E within an organization.

Computerizing M&E data – and establishing the M&E Budget


A brief discussion of 8 steps to establish a computerized information system for tracking M&E data, and the advantages and disadvantages of doing this, is followed by a discussion of efficient budgeting for monitoring and evaluation, and a list of 41 items, in 5 categories, that should be included in, or at least reviewed for, the M&E budget.
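
To give a sense of how small such a system can start: at its simplest, a computerized M&E information system is a single table of indicator observations. The sketch below uses SQLite; the schema and example values are mine, not one of the Guide’s 8 steps.

```python
# A minimal sketch of a computerized M&E data store: one SQLite table of
# indicator observations. The schema is illustrative only; the Guide's
# 8 steps cover much more (users, reporting, maintenance, and so on).
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in a real project
conn.execute("""
    CREATE TABLE observations (
        indicator TEXT NOT NULL,
        period    TEXT NOT NULL,  -- e.g. '2014'
        value     REAL NOT NULL,
        source    TEXT            -- survey, field visit, records
    )
""")
conn.executemany(
    "INSERT INTO observations VALUES (?, ?, ?, ?)",
    [("households_with_water_access_pct", "2013", 35.0, "baseline survey"),
     ("households_with_water_access_pct", "2014", 42.0, "annual survey")],
)

# Even this tiny store answers the trend questions managers actually ask.
for period, value in conn.execute(
        "SELECT period, value FROM observations ORDER BY period"):
    print(period, value)
```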


Learning from the Monitoring and Evaluation Process



Reflecting Critically to Improve Action - Section 8 (28 pages) confronts the biggest question in Monitoring and Evaluation: What good are an elegant M&E system, coherent indicators, and a logical sequence of links between a problem and results – if nobody actually uses the M&E data?

This goes back, in more detail, to the issue first raised in the earlier section on how to set up a new M&E system: how and when, in the project lifecycle, we can use monitoring and evaluation data to influence decisions.  The basic point made here is that critical reflection will only produce actionable management results if the reflection on M&E data is purposive and focused.

Making Lessons Learned more than a cliché


The subtitle “lessons learned” is included in almost every evaluation or monitoring report, but the supposed lessons are often not new, and sometimes not actionable.  This section of the Guide deals briefly with how to structure an intelligent, and relevant, statement of what has been learned.

Reflection on M&E data is a Learning Process


The authors make the point here that getting anything useful out of critical reflection on M&E data is only likely to be possible if we see this as part of a learning process – something that takes time, and needs to be approached as other learning events are, in a careful, thoughtful manner.

“Learning,” as the Guide notes, and as anyone who has facilitated complex learning events knows, “does not happen in one sitting…. Knowing how to construct a useful learning sequence is a skill that must be learned.”

This is something that many project managers or senior policy makers I have met may find frustrating as they rush to meet deadlines, but the time spent giving people the margin to think carefully about data, and its implications, is always necessary if we expect M&E to really be useful for project management.  Giving them that time, of course, means starting early in a project and providing ongoing support, not waiting until the final year of a project and then wondering why there is no resident M&E capacity.

Effective reflection on the data produced by monitoring and evaluation requires different processes for different levels of decision-making.  This means tailoring the learning process to the specific needs, and skills of different participants, at different times in the project or programme cycle, including

  • Making project team meetings reflective,
  • Reflecting on issues with different stakeholders (such as, in the IFAD case, water users’ associations, micro-credit groups, and village associations),
  • Using steering groups for critical reflection (something that I have seen often studiously avoided in rushed, rubber-stamp meetings of donors, for example),
  • Using annual project review and annual work planning processes as a focus for critical reflection and learning,
  • Learning useful and productive lessons from external supervision or monitoring missions, and
  • Assessing impact at project completion.

Evaluation Methods, Writing Terms of Reference, and Logical Frameworks


Finally, in addition to the substantive sections, summarized above, the IFAD M&E Guide also has several annexes with more information for specific management interests:

  • Examples of Logical Frameworks (20 pages)
  • An Annotated Example of an M&E Matrix (20 pages)
  • A review of 35 Monitoring and Evaluation Methods (52 pages)
  • Writing Job Descriptions and Terms of Reference for Monitoring and Evaluation tasks (24 pages) – for project directors, M&E coordinators, M&E staff, general staff (including job descriptions for gender officers, financial officers, technical staff, and communication officers), for stakeholders, for consultants hired to set up the M&E system, for consultants hired to develop participatory M&E, for information management consultants, and terms of reference for mid-term reviews.
  

Limitations


There is not much negative that I can say about the content of this M&E Guide.  I found it all useful, either as a coherent summary of issues with which I was familiar, or as an introduction to new ideas.  It is because of the utility of the Guide that I think it would have been useful to have it produced in more languages – beyond English, French, Spanish, and Arabic.  It is possible that this M&E Guide has been produced in other languages – but if so, I could not find evidence of it.

There is, for example, a useful, and shorter, 2012 M&E Manual Guide for IFAD Funded Projects in Vietnam (195 pages), produced by the independent Vietnamese Development and Policies Research Centre (DEPOCEN), which combines some of the material from the longer IFAD Guide for Project M&E with very detailed guidance specifically on how staff can collect indicator data for IFAD’s Results and Impact Monitoring System (RIMS) – but it appears to exist only in English – not in Vietnamese.

Extrapolating from my own experience, producing guides like this in indigenous languages can greatly increase learning throughout an organization and among stakeholders, and improve the chances that sustainable M&E capacity can be created.  Sure, it takes a bit of time, and money, to do such translations, but given that IFAD regularly spends, according to the 2016 IFAD Programme of Work, $20-$30 million a year, per country, in countries such as Indonesia and Vietnam, it would appear to be a relatively good investment, for the potential of long-term results in M&E capacity.

The Bottom Line


The 2002 IFAD Guide for Project M&E provides a detailed, intelligent and practical discussion of what is required both to establish a monitoring and evaluation system, and to implement it effectively.  Even if you don’t need all of it, some of these chapters are bound to be of use to almost any development professional tasked with implementing a monitoring and evaluation system [and I think it is likely to be easier to understand than later versions].

_____________________________________________________________





GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

