

Tuesday, February 09, 2010

Indicators: a simple analysis


-- Greg Armstrong -- 

The Ants and the Cockroach: A challenge to the use of indicators, and A Pot of Chicken Soup: A response

by Chris Whitehouse and Thomas Winderl; with a commentary by Rick Davies


Aesop’s Fables explains indicators: a deceptively simple and engaging introduction to some complex arguments on indicator development and use. Simple metaphors outline the case both against and for the use of indicators in results-based management.


Level of Difficulty:  Simple to moderate
Primarily useful for:  Anyone who wants to review the basic arguments on indicators
Length: 12 pages, combined
Most useful sections: Engaging metaphors for RBM
Limitations:  Too short to guide practice


Who this is for


Anyone who wants an introduction to what the past ten years of debate on indicators and RBM have been about, or anyone who is fatigued with more sophisticated analyses and just wants a laugh.  Trainers might also find this useful when the usual RBM exercises begin to wear thin.

Simplifying the indicator debate


Development professionals know how complex the process of developing good indicators can be.  Doing it effectively - or at least in a manner that will give you a fighting chance of producing information relevant to how a project is performing - requires the major participants in project planning and implementation to work together.  Exploring the validity of an indicator, its political implications, and the practicality of data collection can take a long time and can be, as evaluators say, challenging.


It is not that the process of defining usable indicators is necessarily intellectually difficult, but it does require sustained attention.  There are so many questions to be answered as you weed out the useless indicators that the process invariably takes time and patience.


While answering them may be time-consuming, the basic questions that need to be asked in assessing indicators are simple -- something that often gets lost in more complex analyses.


In 2004, Chris Whitehouse, then a United Nations Volunteer Programme Officer in Bhutan, took on his colleague Thomas Winderl, then head of UNDP’s Poverty Unit in Bhutan, in what looks like a good-natured debate on the utility of indicators. The result is this short combination of two points of view, apparently simplified to the very basics. I say “apparently” because a careful reading of the articles reveals the complexities the authors were obviously aware of when they produced these papers. The article first appeared on the Monitoring and Evaluation News website in 2004, and the exchange was later included in a 2006 anthology of articles on monitoring and evaluation titled Why Did the Chicken Cross the Road?


Other critiques of the LFA and RBM

The core of the arguments against indicators and the LFA was not particularly new at the time.  The article appeared several years after Rick Davies and Jess Dart had begun working on an alternative to the use of the LFA, at about the time they were writing The Most Significant Change Guide. The Outcome Mapping approach, developed largely by IDRC roughly three years earlier, was also starting to gain momentum. This was also a time when interest was just beginning to appear in what is now referred to as “impact evaluation” or, more confusingly, as “counterfactual” analysis.


The debate on the use or misuse of indicators, the LFA and RBM has continued since 2004, but never in such a user-friendly format as the exchange between Chris Whitehouse and Thomas Winderl.

Challenging the validity of indicators


Chris Whitehouse introduces what he sees as some basic issues in indicator development --
  • whether the indicators are measurable,
  • who will collect the information,
  • cost and baseline data.
But the potentially dry discussion is made more engaging by the metaphor he uses: the stakeholders are millions of ants, and the project involves, first, moving a dead cockroach to their nest and, second, after learning lessons from that experience, moving a dead beetle there. The project manager is the Queen ant, persuaded, perhaps against her better judgment, to measure the effectiveness of individual and group progress in getting the cockroach to its destination.


With 10% of the workforce assigned to monitoring, a host of different indicators is tested, data collection becomes chaotic, and the workers, distracted from their task, focus obsessively on collecting indicator data and make decisions that will achieve the indicator but ignore the underlying long-term result - more food.  At the end of the whole sorry process, nobody knows anything useful about the result, but they have learned a lesson about the perils of using indicators.


Interspersed with the saga of the cockroach is a less engaging but more realistic discussion of the same issues as they might appear in a UNDP project focused on providing computer training for civil servants.  Together, the two stories make points that are now familiar to anyone who has been following attempts to make the logical framework approach more usable, or to develop alternatives to it:
  • Technically valid indicators are often impossible to measure.
  • Measurable indicators are often trivial or misleading.
  • Good indicators can take so much time and money to develop that they interfere with more important work.
  • Focusing on indicators can bias programming decisions to the exclusion of more useful activities that might not produce measurable indicator data.
  • The use of indicators implies cause and effect - a scientific validation for activities - and without control groups this is not valid.
Chris Whitehouse’s conclusion is that the logical framework approach is primarily useful as a tool to test the logic and assumptions implicit in project design, but that the use of indicators misrepresents and ultimately undermines that simple utility.

Defending the use of reasonable indicators

Thomas Winderl replied with the analogies of cooking soup and judging the weather.  What indicators do we have that will tell us when the soup is ready to eat? What indicators tell us whether we should wear an overcoat or take an umbrella on a walk?


The soup is relatively easy: watch it.  Does it boil?  Is it too hot to eat?  Too cold?  There is no need for complex equipment or outside expert monitors here.  Similarly, if we look out the window we will find some fairly useful indicators about the weather.  Is it snowing or raining?  What are people wearing?  Shorts?  Gloves?


The points he makes are, again, familiar, but well summarised here:
  • Money spent on indicator development and monitoring should be proportional to the overall project -- but in fact most projects vastly under-allocate resources to monitoring and evaluation.
  • Monitoring bad or poorly developed indicators is a waste of time, but spending reasonable time and money on indicator development and monitoring is a good investment.
  • Indicators will skew management decisions in an unhelpful manner only if the indicators are irrelevant to the longer-term goals of the project (Outcomes or Impacts, depending on which agency’s jargon is involved).
  • The process of developing indicators is itself a valuable part of the design process, helping - or forcing - participants to be clear with each other about what they mean when they talk about results.
  • The arguments against bad indicators skewing behaviour are, in fact, arguments for spending the time to develop good indicators.


Measurement and verification

Finally, there is a third short and simple paper, produced separately at roughly the same time, that addresses these issues.  Rick Davies responded to Chris Whitehouse’s Ants and the Cockroach with a one-and-a-half-page commentary in September 2009.  His primary points, as I see them, are:
  • No single indicator is ever likely to capture the process of change, but multiple indicators may each contribute differently to our understanding.
  • Obsession with measurement is a problem - but while objectively verifiable indicators do not necessarily have to be measurable, they must be verifiable.
  • Control groups are not the only means of attributing results to projects.  Most large projects have enough internal variation that, with a little thought, differences in results can be compared to differences in the way assistance was rendered.

Conclusions about indicators, results logic and assumptions

I agree with Thomas Winderl and Rick Davies that indicator problems can usually be solved - if enough time and attention are paid to them.


But for me the most interesting part of Rick Davies’ response is how he deals with the horizontal and vertical logic of a project.  He notes that, within the context of the Logical Framework, indicators are usually critiqued in terms of their external validity: do they measure what they are supposed to measure?


But, he adds, the more important question is whether the whole change process is actually clear: do inputs, assumptions, completed activities, results and indicators hang together in a reasonable way?  Questioning this internal validity of project design is more important, and deserves more time than it currently gets.  “We need more attention to theory, and perhaps a little less obsessing about measurement…. And a verified theory holds the potential of replicating successful processes of change elsewhere.”


In my experience, the serious examination of assumptions -- taking the time to clarify, and then to monitor, assumptions about the development problem, about our theory of development, and about the working situation necessary for a reasonable chance of success -- remains in practice one of the great unexplored areas of project implementation in a results-based management context.  If few resources are spent on developing and monitoring indicators, even fewer are allocated to the most fundamental of all issues in project and programme design: the examination of our assumptions.


Without serious attention to, and examination of, the multiple dimensions of the assumptions we are working with over the lifetime of a project, we are unlikely to learn lessons we can use in future practice, unlikely to identify underlying misunderstandings that can undermine our current work, and unlikely to spend public money in a responsible and effective manner.


The bottom line: None of these three very brief articles will walk you through the sometimes complex process of finding a good indicator, but they do point out the main arguments that have arisen over indicators -- arguments expressed in often more complex articles and books over the past 15 years.


Further reading on indicators and results


If this was too simple for you, there are more complex yet interesting assessments of indicators and of alternative approaches to reporting on and evaluating results:


_____________________________________________________________


GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and on the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website.




