Sunday, December 27, 2009

The LFA Debate: A summary of the theory behind the Logical Framework Approach

-- Greg Armstrong--


[Edited to update links July 2019]

While SIDA has recently updated its publications on the Logical Framework, some of the earlier publications are still useful. The first of three early SIDA papers describes the strengths of the LFA, and the sequence of steps in using it for results-based management.

SIDA's LFA Papers: 1-The Logical Framework Approach: A Summary of the theory behind the LFA method


Level of difficulty: Moderate
Primarily useful for: Project planners
Limitations: Some ambiguity in the definition of results
Length: 35 pages
Most useful sections: Relating problem analysis to activity design (p. 9-12)


Background: Evolution of a debate on the LFA


[Cover image: The Logical Framework Approach - SIDA]

The Swedish International Development Agency published three papers online between 2004 and 2006 that illustrate the evolution of a debate over the utility of the Logical Framework Approach for results-based project planning, management and evaluation of international development projects. While Results-Based Management is not invariably tied to the LFA, the general approach - a sequence of activities leading to short-term and long-term results, with or without a formal framework - is common to most RBM approaches. In this and two subsequent posts, I will review these three documents.
  • The first was The Logical Framework Approach, A summary of the theory behind the LFA method, which emphasises the strengths of the logical framework approach for RBM,
  • The second probes in some detail the weaknesses of the LFA, and
  • The third proposes linking more qualitative methods with the LFA.

[Update: More recent publications have been added since this original review, and they are listed at the end of this post]


A Logical approach to development planning


Kari Ortengren's 2004 paper, "The Logical Framework Approach: A Summary of the Theory behind the LFA Method", is a straightforward explanation of the nine steps in using the logical framework approach to design a project or programme.

The general presentation is unexciting in comparison to, say, the 2005 Philip Dearden guide on using the LFA for multi-agency planning which I reviewed earlier, but that, I assume, is a function of the donor's format. While this paper is not quite as easy to read, it does make some good points about the sequence of project and programme design. And although it is not expressly noted here, in a subsequent 2006 paper on applying the general approach to environmental project design, the author made a clear distinction between the Logical Framework Approach and the Logical Framework Matrix.



The matrix, of course, is just the grid; the approach is the process by which the information in the grid is developed - and it is much more important than the matrix.


[Image: example of the Logical Framework Matrix for a drinking water project, showing the relationship between resources, activities, results and indicators]
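
To make the distinction concrete, here is a simplified, hypothetical sketch of what the rows of a matrix for a drinking water project might contain. This is my own illustration of the format, not the example reproduced from the SIDA paper:

  Goal: Reduced incidence of water-borne disease (indicator: reported cases of diarrhoeal illness)
  Purpose: Reliable community access to safe drinking water (indicator: share of households using protected sources)
  Outputs: Wells constructed; water committee trained (indicators: number of functioning wells; frequency of water quality testing)
  Activities: Site surveys, well construction, committee training
  Resources: Funds, engineers, trainers

Everything that happens before and around this grid - the stakeholder discussions, the problem analysis, the testing of assumptions - is the approach, and it is what makes these rows plausible.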

Problem identification - by stakeholders


The most useful part of this paper is its focus on problem identification. The paper notes - and many readers will recognise this from their own experience - that many projects appear to be started with a solution in search of a clear problem. By "clear" the writer appears to mean a problem defined in a way that goes beyond a bare statement of the problem to a clear and detailed discussion of both the multiple causes and the multiple downstream effects of the problem.

In particular, the paper is useful when it urges planners to avoid stating problems in terms of pre-ordained solutions. The writer urges us to avoid defining problems in terms of "absence" - "absence of funding" or "absence of pesticides", for example - formulations which more or less ignore what might be revealing causes of a problem, and limit discussion of alternative solutions.


"Lack of funding" as a problem statement, she notes, does not explore the possible details of the problem, which may be inadequate organizational capacity for financial control, planning, or other issues which could be directly susceptible to project intervention; and "Lack of pesticides" pre-empts discussion of alternative solutions by assuming pesticides are the only solution to crop failures.


The paper makes it clear that donors are not the best placed to conduct a problem analysis: "The problem analysis has to be made by the relevant stakeholders, including the owners of the problem, the people who know the situation, not by consultants or financing agencies." The paper does suggest, however, that donors and consultants can facilitate the discussion and exploration of problems.


Activity design



It is true that some projects become obsessed with activities, rather than with how activities relate to a problem, and the prescription here - a reasonable one - is that...
"...the causes of the problem shall be treated by the activities, which are implemented within the framework of the project. The effects are handled automatically by treating the causes of the focal problem. Hence, no separate activities are needed for handling the effects."



This paper includes a short introduction to planning a project design workshop. It lists seven major topics for discussion, the physical layout required, and the time that should be allocated.



Limitations: Assumptions


There are two limitations here, although neither is serious, and both may be a product of the donor agency's prescriptions:

  1. Results are defined rather amorphously here, using the usual donor jargon - objectives, purposes, outputs - and this never makes life any simpler.
  2. Risks are usually discussed in guides or papers about results-based management or the Logical Framework Approach, but there is rarely any detailed discussion of assumptions.


Comment: This paper covers both the identification of risks and the discussion of assumptions in just two pages. Reading between the lines of the more detailed discussion of problem identification, one can see openings for a discussion of assumptions, but it would be useful if this were explored in more detail.

If assumptions are so central to project success - to sustainability, but also to whether interventions make sense in development terms - then much more detailed discussion of assumptions should be a priority for everybody: donors, implementing agencies and other participants. [Update: Later SIDA publications, including a 2016 update by the same author, do explore this in more detail]


In the RBM training workshops I have conducted, I have found several times that clarifying assumptions about a problem, and about implicit theories of what works (and what does not work) in project interventions, often reveals fundamental differences of opinion among stakeholders about what a project or programme should be doing, why it should be doing it, and how we will know if it makes progress. Yet this paper, like most, concentrates on assumptions about a situation - what else is going on in the environment that might impede project implementation.


Without a discussion of assumptions about the development theory in project interventions, projects are unlikely to be learning events, unlikely to learn from failure or success, and unlikely to advance our understanding of development and what works.

[Update 2019:] Fortunately, SIDA has updated the resources it makes available on results-based management and these now include the 2016 update on the use of the LFA, referenced above, a 2014 handbook on the use of RBM in SIDA research cooperation projects, and a long, but interesting 2018 YouTube video discussion on how SIDA uses Results-Based Management.


The bottom line: This paper is moderately useful for its stated purpose - providing an outline of the Logical Framework Approach. It is most useful in its discussion of problem analysis, but there are more accessible guides to the process available.


_____________________________________________________________






GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website


Thursday, December 24, 2009

A Comprehensive Monitoring and Evaluation Website


--Greg Armstrong--


The MandE website provides a host of resources on training, evaluation methods, and results-based management.

[Edited to update links July 2018]

The Monitoring and Evaluation News


Level of difficulty:  Moderate to complex 
Primarily useful for: Field managers, planners, monitoring & evaluation specialists 
Limitations: A large, complex site, which will take some time to explore 
Length: Roughly 2,000 web pages and documents
Most useful sections: The "search" function, and the archives


The Monitoring and Evaluation News website (MandE), established by Rick Davies in 1997, is one of the most useful resources available to development professionals interested in results-based management or other approaches to planning, monitoring and evaluation. The site can be a useful professional resource regardless of readers' level of expertise - for complete beginners, experienced managers, or those interested in the theoretical underpinnings of monitoring and evaluation - whether readers work in the field, in an academic institution, or at donor agency headquarters.


This website offers a wide range of materials of potential interest, including practical guides to results-based management and other forms of evaluation and monitoring, articles on the theory of performance measurement, reports from practitioners working in the field, notices of monitoring and evaluation training events, commentaries on donor policies and guidelines, and a wide range of discussions between development professionals on planning, monitoring and evaluation issues.


The Monitoring and Evaluation News website is, itself, a blog - something like Slate for evaluators, given the wide variety of news and opinion it includes. Readers can also get to Rick Davies' own blog and website from there.


Organization of the site



The home page for the site may mislead a casual visitor.  The most prominent feature is a list of the most recent additions to the site, but on the right, there is a search box, and a long list of other available topics, both of which deserve a careful reader's attention.


The material on the Monitoring and Evaluation News is organized in archives, with 73 categories when I last visited.


Most of the web pages the reader will initially find in these categories have been developed since 2008, and the reader can get a complete list of their contents using the "search" function. The search box will also lead to a large number of useful tools produced before 2008.


Narrowing the Search for results-based management material



There are over 1,800 pages on the MandE site, and it can take some time to go through the material. Not all of it may suit your own needs, and some may be slightly outdated, but the site's focus on performance assessment, monitoring, and evaluation can narrow considerably the time we spend searching for results-based management tools on the internet.


In the first post on this blog, in early December 2009, on how to search for RBM tools, I noted how many web pages you might get if you searched for "RBM" on Google, Yahoo or Bing. At the time, the number of hits on Google was 296,000 pages - including a lot not even remotely related to results-based management. And if you were not careful, you could get over 100 million pages returned on a general search for results-based management. Using the MandE search, by contrast, you will get about 190 pages returned, all of them more or less relevant to what I, at least, am looking for.

The Logical Framework Approach: Documents on MandE

I will take one example, from one category here, to illustrate the range of materials available on the Monitoring and Evaluation news website.  If you search for "LFA" on Google, you will get close to 4 million pages listed - and not one in the first 100, when I looked today, was related to the Logical Framework Approach. Doing the same search at MandE, you will get 8 pages produced since 2008, and another 25 produced prior to 2008, all directly relevant to the Logical Framework.  


One of those is particularly relevant - The Logical Framework: A list of Useful Documents, produced in January of 2008. This page in turn links to 40 sites, in five categories:
  • Explanations of the Logical Framework - containing a number of donor references and guides
  • A Wider Discussion of Logic Models - some fairly complex pages discussing alternative approaches to the Logical Framework, and some online courses
  • Critiques of the Logical Framework
  • Alternative Versions of the Logical Framework - including discussions of Outcome Mapping, Appreciative Inquiry and other topics
  • The Editor's Concerns (about uses of the Logical Framework) - Rick Davies' thoughts on the problems of using the LFA - and links to some alternatives.
The bottom line: You can spend a long time exploring the Monitoring and Evaluation News, but it will be time well spent.


_____________________________________________________________


GREG ARMSTRONG



Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

Tuesday, December 15, 2009

Project Cycle Management, the LFA and RBM

An Introduction to Multi-Agency Planning Using the Logical Framework Approach

A practical guide to using the LFA in project planning and management.
-Reviewed by Greg Armstrong-


[Review Updated March 2016, July 2018]

Level of Difficulty: Easy-to-moderate
Primarily useful for: Project field managers
Limitations: More detail on baseline data would have been useful
Length: 58 pages
Most useful sections: p. 20-24 (risks and assumptions); p. 49-58 (LFA examples)
[Image: diagram of a problem tree and objective tree]


This document, prepared in 2005 by Philip Dearden, Head of the University of Wolverhampton's Centre for International Development and Training, is a practical, hands-on guide to the use of the Logical Framework Approach in project planning. It is placed in the context of Project Cycle Management, but it is clearly relevant to results-based planning and management, which also uses the Logical Framework -- even if in reality many RBM practitioners stop with the "Framework" and forget the "Approach". In fact, there is not much difference between Project Cycle Management and the many forms of results-based management that incorporate the Logical Framework -- except that Project Cycle Management emphasises stakeholder involvement in a way that RBM does in theory, but often ignores in practice.


Who the Guide is for

While I think anybody generally interested in results-based management could get some benefit from this guide, it will be most useful for project field managers and others trying to design a practical consultation process aimed at clarifying results, results chains and indicators.

Clear Language RBM

Most of the guide is easy to understand, using clear language to set out the basic steps in using the logical framework approach - something that should be central to the use of RBM. Where the UNIFEM guide, reviewed earlier, was essentially an introduction to RBM terms, this guide goes further, walking the reader through seven "core questions" - listed below - beginning with a stakeholder analysis and ending with a discussion of indicators.
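
For reference, the seven core questions, as the guide phrases them in the section titles discussed below, are:

  1. Who are we?
  2. Where are we now?
  3. Where do we want to be?
  4. How do we get there?
  5. What may stop us getting there?
  6. How will we know if we've got there?
  7. How do we prove it?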

Basic RBM Concepts


Philip Dearden and his colleagues have a long history of working in the field with development practitioners, and this is reflected in the way this document is written. It uses a simple, largely jargon-free approach to explain seven necessary stages in the Logical Framework Approach (LFA).

Participatory RBM Processes


The first section ("Who are we?") establishes the participatory nature of the Logical Framework Approach, something that really should apply to results-based management overall. Many of the criticisms I have seen and heard of RBM are that it is too top-down and technocratic in nature, but in fact it need not be this way. 


[Image: stakeholder analysis chart with examples of primary, key and secondary stakeholders, and their interests]


Bringing stakeholders into the process - from problem identification, through testing of assumptions, to identifying potential results and focusing on the most useful activities to reach them - should be an essential component of RBM. In practice, however, because it takes time, effort, and money, participatory processes are often viewed as annoying distractions by many implementing agencies and donors. Talk about participation is cheap, but practice takes commitment.

Problem Identification


In a section called "Where are we now?", the guide discusses how to identify both the problem that the project or programme will address, putting it into context through a problem tree, and the potential strengths in the situation - those on which the project can build.

Maintaining a focus on the underlying problem is important. Many projects I have seen, bilateral and multilateral, essentially abandon the problem as the underlying foundation of the intervention once the activities are designed, and become essentially activity-focused enterprises.


Clarifying Results-Chains


The section of the document called "Where do we want to be?" focuses on choosing among broad results, then finding the short-term and mid-term results likely to contribute to what we want in the long run.

The term "Outputs" has been used differently among donor agencies - for some meaning essentially "completed activities", and for others "short-term results". In this section, Outputs are essentially defined as short-term results - for example, improvements in community capacity to manage activities and resources - although later, in the discussion of indicators, the guide labels them as "deliverables".

Defining Activities to reach results

The section called "How do we Get There?" ties the development of activities to the earlier analysis of problems.

And it really is important to go through the first three stages before identifying activities, because keeping in mind a clear idea of what the problem is, and what the logical sequence of possible results may be, tests the logic and soundness of any proposed activity. Unfortunately, in practice, many organizations start with activities and then try to find a problem that might be used as a justification for funding what they have already decided to do.


Risk Assessment and Assumptions


The section called "What may stop us getting there?" explains risk analysis -- discussing what potential problems could derail the project -- and the need to redesign activities to minimise those risks. This section also deals with the testing of assumptions -- how these assumptions need to be clarified and explicitly addressed in design of activities, and identifying the assumptions underlying the logic between short, mid-term and long-term results.

I found this section of the document very helpful. In my experience, the great under-valued component of results-based planning has been the casual and obscure manner in which assumptions are often handled in logical frameworks and, more importantly, in the discussion process which should precede the completion of the Framework. Seriously focusing on what stakeholders and implementing partners assume the relationships between activities and results to be, and on the underlying conditions necessary for solving problems, can reveal profound differences of opinion among these groups: not just about the political, social or economic conditions necessary to make a project work, but also about what types of interventions are likely to be most effective, about what our underlying, and often unspoken, theories of learning and development are, and even about what the basic problems are that the development activity will purportedly address.

Spending more time on clarifying these assumptions at the project design stage can prevent a lot of problems, and save considerable time and money during implementation, as, invariably, the different perspectives slowly start to surface. Of course, it is also important to reassess these initial assumptions at regular intervals as a programme or project evolves. When this is encouraged by donor agencies it makes it easier to make constructive adjustments to project management, perhaps even to the project's design, while always keeping an eye on the problem to be addressed. But this is something on which many donors, and most implementing agencies really don't want to spend time.

Indicators and Data Collection


Sections six and seven of the Guide ("How Will we Know if We've Got There?" and "How do we Prove it?") deal with indicators and data collection. The indicator development discussion makes a useful distinction between indicators for completion of activities and indicators for results.

The section on data collection deals with sources of data, and means of data collection. I think more space could usefully have been given to this, because the big problem in most indicator development is the impracticality of collecting data for many proposed indicators.

LFA Checklist

The concluding section provides a checklist of 29 issues against which to assess the utility of the Logical Framework for the project. Using the checklist without having reviewed the earlier text is probably feasible, but is likely to be considerably less useful than applying it after taking the time to review the rest of the document.

The appendices (p. 37-58) provide a glossary of terms, a list of advantages and disadvantages of using the Logical Framework for project management, a description of the Project Cycle Management approach, and a brief discussion of the purpose of monitoring and evaluation. But the most useful of the appendices may be the nine pages of examples of Logical Frameworks for three children's projects in Sheffield, showing in detail how the indicators for these projects related to assumptions, results, and activities.

Limitations of the Guide


While the data collection discussion in the Guide does mention baseline data, it does not discuss it in any detail. This is the one area of the guide in which I think more detail would have been useful, even for beginners. It is something that all projects, and all donor agencies, need to spend more time on at the beginning of the indicator development process.

Collecting baseline data is necessary not just for telling us if we have results - whether anything has changed - but also for testing whether we can actually collect the data for the indicators we have agreed on. Yet I have rarely seen international development projects, funded by any donor, where baseline information is actually collected even within the first year. Often - and this is no exaggeration - the baseline information is never collected, or it is collected retrospectively, or simply faked, three, four or five years after the project begins. This makes a mockery of the whole concept of "results-based" management, and it also means that in a very substantial number of cases, only after the project has been implemented for years does the realization hit home that many of the indicators are completely useless. If donors really took RBM seriously, and not just as window-dressing for their management committees or Auditors-General, they would insist that genuine baseline data collection - and the consequent redesign of indicators - be completed before project activities are funded.

The bottom line: Overall, this is a practical and reasonably straight-forward Guide to the development of Logical Frameworks as part of the planning, monitoring and evaluation process. It is likely to be useful for many project field managers, in government, private sector or civil society implementing organizations.

More resources:

The Centre for International Development and Training now also offers an online course on RBM. 


_____________________________________________________________


GREG ARMSTRONG



Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website




Wednesday, December 09, 2009

The UNIFEM Results-Based Management Guide


Results-Based Management in UNIFEM: Essential Guide



--Greg Armstrong--

[ This post edited to update links, July 2018 ]
[Image: cover of the UNIFEM 2005 RBM Guide]

Level of Difficulty: Easy
Primarily Useful for: Introducing partners to elementary RBM concepts
Limitations: No access to the associated training modules, limited detail on procedures
Length: 34 pages
Most Useful sections: p. 16-22


UNIFEM’s RBM guide was produced in 2005 as a draft, intended for use by UNIFEM staff and partners. When I did a week of results-based management training with UNIFEM in Bangkok in 2006, I found that none of the participants were aware of the document. This was unfortunate because, while it is very basic, it is also quite accessible, and could have been useful not just for UNIFEM staff themselves, but for field staff and any partners who want an introduction to the basics of results-based management.

The draft document is designed to be used with a series of modules for further training, but those are not, as far as I can see, available to the general public. Without those modules, the utility of the Guide is limited, but it still provides a non-threatening introduction to results-based management terms.




Basic RBM concepts


As a results-based management guide, the UNIFEM material is, as I noted earlier, very basic, and appears to be aimed at people with little or no background in RBM. There is some UNIFEM- and UN-specific terminology scattered throughout the document (discussion of Multi-Year Funding Frameworks, for example), but this should not put off readers who are not familiar with the terms. Much of the document is jargon-free.

Not a lot of detail on individual components of results-based management is covered here, but of all of the RBM guides produced by donors, this is among the easiest to understand, perhaps because it is so limited in scope. There are many other places readers can look for detail and complexity.

No doubt some donors with more sophisticated RBM or Managing for Development Results frameworks will not be impressed. But the problem with those other frameworks is that they are often context-specific and use technically obscure RBM terms. Many cannot be used by people outside the organization, or even by people inside it who have limited tolerance for bureaucratic jargon. The UNIFEM guide is simple, but not simplistic, and it does at least introduce some elementary and important RBM concepts in language that people not obsessed by RBM can understand.

The most important of the concepts the Guide introduces concerns how results-based management should be used. UNIFEM acknowledges in the guide what other agencies, in their more sophisticated approaches to RBM, often forget: that results-based management can be most powerful where it is used not primarily as a technical planning tool, but as an opportunity for partners to come together for participatory planning. This is explained in more detail on other websites, but often ignored in practice.


Results Context



The Guide gets into the basics of results-based management beginning on page 8, with a discussion of “context”. This is important: understanding the particular problem that a programme or project is addressing is the foundation of realistic and usable results-based management. In this guide, the context is framed specifically and primarily in terms of the Human Rights-Based Approach to programming, but the general idea behind the discussion is sound and applicable elsewhere: understand the context, identify the problem, and then focus on what results you want.

What follows in the next 25 pages are short discussions of the development of a results chain, the phrasing of results statements, indicator development, and planning for monitoring and evaluation.



Results Statements


Reading the section titled “The nuts and bolts of developing results statements” (pages 16-19) could be useful not just to people new to RBM, but as a refresher on clear writing for a lot of development practitioners who get tied up in ponderous results statements. This section emphasises the use of plain, understandable language, and the need for realistic results that might plausibly be achieved given the time and money available. This is something that, in my experience of project design, monitoring, and results-based management training, a lot of donors and implementing agencies stumble on: complex and implausibly grandiose results statements.

The section of the Guide on results reporting (p. 26-30) is largely oriented to UNIFEM requirements, but the very brief section on “The difference between reporting on results, reporting on process, and reporting on activities” on pages 28-29 reiterates a point that any implementing agency could usefully review. Many UN agencies have a real credibility problem on results at the project level because they hold themselves accountable only for completed activities (Outputs). But UNIFEM does move beyond activity completion: both in the Guide and, from what I have seen, in its implementation in the field, it genuinely tries to understand whether its activities have led to results (Outcomes).



Indicator Development


The section on “development of results indicators” (p. 19-22) is useful as an introduction. Some of the indicator examples, however, use the kind of vague language that can come back to haunt an agency (as has happened with UNIFEM on occasion) when it comes time to collect the data. For example, the following is cited as an output indicator:

“Number and quality of analyses of gender discriminatory provisions of national legislation undertaken by women’s organizations“.
UN agencies often use the “number and quality” indicator, but while the “number” can work, “quality” always requires more explanation - which means yet another level of indicators, or targets, if we are going to be able to specify what the criteria for “quality” will be. This doesn’t mean we shouldn’t look at quality, just that we need to be specific about what we mean when we use the word.
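
A hypothetical way of tightening such an indicator - my illustration, not UNIFEM’s - is to attach explicit, verifiable criteria:

  Indicator: Number of analyses of gender-discriminatory provisions of national legislation undertaken by women’s organizations
  Quality criteria (for each analysis): identifies the specific discriminatory provisions; cites the relevant articles of law; proposes concrete amendments; reviewed by at least one legal specialist
  Target: five analyses meeting all four criteria by the end of year two

Each criterion is something a monitor can actually verify, which is what rescues “quality” from vagueness.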


The Bottom Line:


Despite its limitations, for agencies or individuals coming to RBM for the first time, or simply rebounding from the unintelligible material often served to them by donors, this UNIFEM Guide is a useful place to start. Where they go from here is another issue, but beginning with this might leave readers interested, rather than intimidated, and increase the likelihood that they will pursue more detailed and substantive investigations of RBM.

 [ Update Note: June 2011 - Since the creation of UN Women in 2010, the links to the original UNIFEM site have disappeared or been redirected.  The UNIFEM RBM Guide is no longer listed on the UN Women site, but a copy of the UNIFEM RBM Guide is available on SCRIBD. ]

_____________________________________________________________




GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website


Thursday, December 03, 2009

113 Million RBM Websites

--Greg Armstrong--


How many results-based management websites are there? How many web pages do field managers for a health project in India, a parliamentary development project in Cambodia, or a gender project in Aceh have to review before they can find something that will actually help them in their work?

There are two related problems here: quantity - too many hits; and quality - too few that are relevant. The point of this blog is to sift through the vast array of potential sites, and to point out what I think might be helpful.


RBM Jargon


Like it or not, results-based management, managing for development results, and similar approaches are frameworks that require increasing time and attention from development field workers. But few of these approaches are, in the way they are presented, user-friendly.

For the past 15 years or so, I have been working, primarily in Asia, to try to simplify and demystify the bureaucratic jargon behind results-based management - jargon that intimidates some people, annoys others, and discourages almost everyone at the field level from using it. It is possible to cut through the jargon and make these processes usable if we focus on simple language, if we make the frameworks user-specific, and if organizations are willing to spend a week or so clarifying what they mean and what they expect to do. My colleagues and I have had some success in this: helping people to use results-based management effectively and regularly in their governance, environment, education and rural development projects throughout Asia.

But many organizations can’t wait for onsite training; others can’t afford the cost of bringing someone in to work with them for a week; some simply need a quick introduction or refresher on how to get the RBM job done efficiently and effectively. In these cases, they turn to the web.

I am asked sometimes by people I have worked with, which web pages I have found useful for different RBM-related purposes. This is why I have started this blog: to review web pages I come across in the normal course of my work, related to outcomes or results-based planning, monitoring and evaluation. My hope is that these reviews may prove useful to individuals or organizations seeking help from the web to manage their results-based management and similar - or alternative - processes.


How many RBM Web pages are there?


So, how many results-based management web pages are there? I say “web pages” rather than “websites”, because any website might have numerous individual pages with an RBM entry. One ridiculous estimate is that there are roughly 112 million. That, at least, is what we get today if we look for results-based management in a general search on the Google Canada site. On the worldwide Google site [which you can usually reach from your country-specific site by clicking the "go to Google.com" link at the bottom right], we get 113 million, including all of the pages that have the three words somewhere on the page - but not necessarily in a coherent phrase. Given that, according to Google's keyword tool, there are only approximately 3,000 searches a month conducted worldwide using these words, there would appear to be a mind-boggling imbalance between supply and demand.




Narrowing the Search to "Results-Based Management"


Of course, we can narrow things down by specifying an exact phrase in the search terms: searching for “results-based management” as an exact phrase will eliminate roughly 111.9 million extraneous entries. But we still have 99,000 to 110,000 pages returned.

On Microsoft’s search engine, Bing, a general search for results based management (the hyphen is often ignored by the search engines) will return, at different times, anywhere from 171 million to 502 million pages, and an exact phrase search about 209,000. The numbers can change from minute to minute. On Yahoo search, which purportedly uses the Bing search engine but sometimes produces substantially different results, the returns were 322 million and 436,000 respectively. It is still worth looking at Yahoo search, because it remains #1 in some parts of the world, such as Hong Kong, and a significant search tool in others.

The search engines themselves may automatically narrow this down to more locally relevant results depending on a user’s IP country address. A search on Google Thailand, in English, will sometimes return 305,000 entries in an exact phrase search for results-based management; in Vietnam, 110,000.
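
For readers unfamiliar with the mechanics, the difference lies simply in the quotation marks, and it works the same way on Google, Bing and Yahoo:

  results based management        returns any page containing all three words, anywhere on the page
  "results-based management"      returns only pages containing the exact phrase

The same principle applies to the progressively narrower phrases discussed below.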


Searching for “RBM”


Searching for “RBM” as an abbreviation produces “only” 240,000 entries on Bing; 296,000 on Google; and almost 3.5 million entries on Yahoo Search. In all three cases, many of these are for Roll Back Malaria, various biomedical topics and bicycle manufacturers, to say nothing of the Reserve Bank of Malawi and at least one rock group. The highest ranked pages include many related to Mercedes Benz -- and one Saab parts site named RBMPerformance.com, a real disappointment to results-based management fans! All of this perhaps explains why there are so many more searches for “RBM” (roughly 60,000 a month) than for “results-based management”.

Even if we narrow the search further to something like “RBM training”, we still get roughly (at the time of this writing) 2,500 entries on Google, 1,279 on Yahoo (636 on Yahoo Canada) and 709 on Bing worldwide. An exact phrase search for “simple RBM” produces 167 entries on Bing and 1,190 on Google, and in both cases most of the entries have nothing to do with results-based management. On the other hand, the same search on Yahoo worldwide gives 4. The more specific we can be with the search terms, however, the more useful the sites we find. Searching for the exact phrase "simple results based management", as opposed to "simple RBM", produces between 4 and 9 results on all of the search engines, all of them relevant.

If you are interested in comparing search terms, try the "Search" box on the upper right corner of this blog's home page, where you can search both the links on this page, plus the web in general.

RBM Page rank and utility


From a different perspective, it is a mistake to think that the top-rated sites for results-based management are necessarily the most useful. A web page may appear high in the search engine rankings because of the credibility of the sites it is linked with. For example, many UN sites rank high in almost any search because of their multiple links to and from other UN agencies, even though in some cases they may have only a page or two of potentially useful information.

Many people, naturally, never go past the first page or two of search results, which means between 20 and 50 entries on average. These users are missing some of the potentially most useful RBM sites. I know that aside from my own results-based management training site, which does from time to time turn up on the first page of results, there are other sites that can help in the search for a simpler approach to RBM or outcome assessments - but many of those potentially useful sites are not listed anywhere in the top five or ten pages of search results. In many cases these pages don't actually mention RBM at all, but use terms such as the LFA (Logical Framework Approach) or Project Cycle Management, which are not exactly the same, but are certainly relevant, and potentially very useful.


What the reviews of RBM pages will focus on


Development professionals obviously have different needs when searching the web for assistance on results-based management. For some, the need is for simple guidelines or concrete, implementable tools. For others, the goal is to find a more theoretical or systems approach. The primary focus in my own RBM training is to make results-based management simple enough that it can be used quickly and effectively by people on the implementation end of development programming, specifically in the context of their own work. However, where I see something of potential interest on the more analytical or macro-side of RBM, I will also deal with that.

This site, then, intends its reviews, in rough order of precedence, for:



  • Field workers and field managers of implementing agencies, private, public or nongovernment;
  • Managers in government agencies implementing development projects and dealing with their own internal reporting requirements, the design of internal monitoring and evaluation systems, or requirements from donors;
  • Evaluators and monitors;
  • Planning staff in donor agencies involved in project or programme design;
  • Other specialists in Results-Based Management who are designing results based planning, monitoring or evaluation systems.
The bottom line: I know that there are some interesting RBM websites that don’t make the top 20, and I know that there are some very highly ranked websites that really have little of value on them for practitioners. I will be dealing with both in the weeks ahead.

Any readers who have an opinion on a website are also welcome to submit the web address and their review of its strengths and limitations.



[Edited to update some links, January 2012]


_____________________________________________________________

GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

