Results-Based Management in CIDA: 
An Introductory Guide to the Concepts and Principles 
Results-Based Management Division
Performance Review Branch 

January 1999


1.0 Introduction
1.1 Public Sector Management Context
1.2 CIDA Context, RBM Policy and Principles
2.0 Stakeholder Participation
3.0 Building a Performance Framework
3.1 Internal Logic
3.2 Activities versus Results
3.3 Developmental Results: Outputs, Outcomes and Impact
3.4 Asking Some Fundamental Questions
3.5 The Performance Framework
4.0 Identifying Assumptions and Risk
4.1 Identifying Assumptions
4.2 Risk Assessment
4.3 Monitoring Risk
5.0 The Performance Measurement Framework
5.1 Performance Measurement and Evaluation
5.2 The Performance Measurement Framework
5.3 Selecting Performance Indicators
5.4 Collecting Performance Information
6.0 Performance Monitoring
6.1 Selecting an Approach
6.2 Internal Monitoring Option
6.3 External Monitoring Option
6.4 External Support Option
7.0 Using Performance Information for Management Decision-Making
8.0 Tools for Performance Monitoring & Reporting
8.1 The Annual Project Progress Report (APPR)
8.2 The Bilateral Project Closing Report (BPCR)
9.0 Summary
Abbreviations
Annex I : Results-Based Management in CIDA Policy Statement
Annex A : Key RBM Definitions
Annex II : Framework of Results and Key Success Factors
1.0 Introduction

In April 1996, as part of its commitment to becoming more results-oriented, CIDA's President issued the Results-Based Management in CIDA Policy Statement (Annex I). This Policy Statement consolidated the Agency's experience in implementing Results-Based Management (RBM) and established some of the key terms, basic concepts and implementation principles. It has since served as the basis for the development of a variety of management tools, frameworks and training programs. The Agency Accountability Framework, approved in July 1998, is another key component of the results-based management approach practised in CIDA. The framework articulates CIDA's accountabilities in terms of developmental results and operational results at the overall agency level, as well as for its various development initiatives. This distinction is crucial to this guide, since the former is defined in terms of actual changes achieved in human development through CIDA's development initiatives, while the latter represents the administration and management of allocated resources (organisational, human, intellectual, physical/material, etc.) aimed at achieving developmental results. Although the achievement of operational results will ultimately determine the efficiency and effectiveness of CIDA as an organisation, the focus of this document is on the RBM concepts and principles related to the achievement of developmental results.

This introductory guide has been produced to support the consistent interpretation of the RBM Policy and its implementation across the Agency's geographic and partnership programs. It is one among many management tools that have been developed to support CIDA and its partners in using RBM throughout the program/project1 life-cycle. This document has benefited from the comments of many individuals. We would like to express our appreciation to them for assisting in the drafting and reviewing of this document, particularly the Bilateral Roadmap Committee, the RBM Practitioners' Network, and Mr. Werner Meier of the Results-Based Management Group.

1 The use of "program" refers to funding mechanisms that support several projects. 

1.1 Public Sector Management Context 

Over the past ten years, there has been increasing pressure on governments around the world to demonstrate the efficient and effective use of public resources. Public concern for national debt reduction, a declining confidence in political leadership, the globalization of the economy, free trade and consequently, increased competitiveness in the open market have all been important factors. These global pressures have contributed to the emergence of performance and results-based management approaches in the public sector. "Reinventing government", "doing more with less", "demonstrating results that citizens value" have all become popular catch phrases to describe a move toward the new public sector management prevalent in OECD countries such as Great Britain, Australia, New Zealand, as well as the United States.2

2 See Peter Aucoin The New Public Sector Management: A Comparative Perspective, Dalhousie University Press, 1996, for a comparison of the Canadian experience with these OECD countries.

In Canada, public concern over a burgeoning national debt increased the demand for more accountability for the use of tax dollars. To respond to this increasing pressure, previous and current governments proceeded with a variety of public sector reforms. An examination of the need, affordability, and efficiency of all programs during a series of program reviews led to departmental budget reductions and significant down-sizing. Coinciding with these public sector reforms was a renewed emphasis on results-oriented business planning, accountability and performance reporting. The new approach is changing the way departments do business and involves ongoing systemic change in public sector management. The Office of the Auditor General (OAG) and the Treasury Board Secretariat (TBS) have been the primary drivers of these reforms at the federal government level.

CIDA has enjoyed a close working relationship with the OAG since the 1993 Audit, which coincided with the Agency's own Strategic Management Review initiated the year before. The findings and subsequent recommendations of this Audit addressed the need to: 

    • clarify the strategic policy framework; 
    • establish a results-oriented and accountable style of operation; 
    • improve internal management procedures and practices; and 
    • improve the transparency of results reporting. 
Poised for change, CIDA launched its Corporate Renewal initiative in 1994 and adopted a more results-oriented and accountable style of management. In collaboration with the OAG, an innovative three-phased follow-up approach was designed to: 1) monitor the Agency's progress in dealing with the above concerns; 2) assess whether CIDA's Renewal plan was being implemented as planned; and 3) formulate an opinion on the extent to which CIDA's actions have satisfactorily resolved the concerns raised in the 1993 Audit. The Phase II Self-Assessment Report concluded that significant first steps had been taken toward promoting more effective results-based management at CIDA that would enhance its ability to improve development effectiveness and promote sustainable development in developing countries.3

3 See Report of the Auditor General of Canada to the House of Commons, Chapter 29 Canadian International Development Agency, Phased Follow-up of the Auditor General's 1993 Report - Phase II, November 1996.

In 1996, the Treasury Board Secretariat, through its "Reform of the Estimates Project", introduced three main results-oriented documents that have become a requirement across the federal government system. 

  • Business Plan: A three-year rolling horizon departmental plan with key results commitments, performance indicators and estimated costs. 
  • Planning, Reporting & Accountability Structure (PRAS): An annex to the Business Plan that replaces the Operational Planning Framework and provides a clearer description of departmental program, management and delivery structure. 
  • Performance Report: An annual report on progress achieved toward the department's results commitments which replaces Part III of the Estimates. The OAG may audit the use of performance indicators. 
CIDA, like many other federal government departments, has produced a first iteration of these reports and has submitted them to the TBS, the House of Commons Standing Committee on Public Accounts and other Parliamentary Committees, thus demonstrating its commitment to and support for the current public sector reform. It has done so in an attempt to demonstrate the optimal allocation of the resources granted to Canada's Official Development Assistance (ODA) program, as well as to communicate to Parliament, the Canadian public and the international community, the effective use of those resources in achieving developmental results.

1.2 CIDA Context, RBM Policy and Principles 

CIDA's early experimentation phase (1993-1996) in applying RBM led to a proliferation in the use of terms, frameworks, indicators and results-based contracts. Although initially popular, RBM became increasingly misinterpreted, misunderstood and confusing for CIDA staff and its partners. The Agency quickly moved into a consolidation phase (1995-1997) as it attempted to mainstream some of the more successful RBM initiatives. It adopted a more focused corporate approach by creating a dedicated RBM Unit located within the Performance Review Branch. Soon thereafter, the RBM Policy Statement (April 1996) was issued, which further reinforced the decision to adopt Results-Based Management as its main management tool. Significant resource allocations were made to the RBM Unit and to a network of performance review staff in the Branches to support the implementation of the RBM Policy throughout the Agency.

The RBM Policy established the key terms, basic concepts and implementation principles from which we have drawn to produce this guide. A brief explanatory overview of these terms, concepts and principles is perhaps warranted at this juncture, beginning with the most common question: "What is results-based management?" A more descriptive than definitive answer to the question is that RBM is a means to improve management effectiveness and accountability by involving key stakeholders in defining realistic expected results, assessing risk, monitoring progress toward the achievement of expected results, integrating lessons learned into management decisions and reporting on performance. 

Resources, reach and results are concepts that have been a part of management for many years, but their usefulness for performance planning, measurement and management has only recently been fully exploited. Resources are well known as the human, organisational, intellectual and physical/material inputs that are directly or indirectly invested by an organisation. "Reach refers to the breadth and depth of influence over which the organisation wishes to spread its resources."4

4 For a more detailed explanation of these concepts, see Steve Montague's The Three Rs of Performance: Core concepts for planning, measurement and management, 1997. 

As defined in the RBM Policy (1996), "a result is a describable or measurable change in state that is derived from a cause and effect relationship." There are two key elements of this definition: 1) the importance of measuring change; and 2) the importance of causality as the logical basis for managing change. Consequently, results are those changes that are attributable to the breadth and depth of influence an organisation has had through the use of its resources. Managers must have performance information about resources, reach and results in order to plan, manage and evaluate their programs and projects.

RBM is comprised of six distinct components: 

    • stakeholder participation; 
    • defining expected results; 
    • identifying assumptions and risks; 
    • selecting performance indicators; 
    • collecting performance information; and 
    • performance reporting. 
To manage for results means to fully integrate these components into the program/project life-cycle from planning through to evaluation. Each of these components is discussed in detail in the remainder of this document.

2.0 Stakeholder Participation

RBM and participatory development approaches are not only complementary, but essential to one another. In RBM, programs/projects must be designed, planned and implemented using a participatory approach where all stakeholders are involved throughout the program/project life-cycle, as illustrated in Figure 1. below. Expected results must be mutually defined and agreed upon through a consensus-building process involving all major stakeholders. This enhances stakeholders' sense of ownership and subsequent commitment to continuous performance assessment, annual performance appraisal, program/project adjustments and annual workplanning. 

As the primary stakeholders, CIDA, the Executing Agency, the developing country partner organisation and beneficiaries (men and women) should participate, to the extent possible, in planning, measurement and management decisions. The extent of their participation will be dictated by such factors as the size of the program/project, in terms of the resources available versus the potential cost of participation. While a participatory approach usually requires a good deal of time and resources in the planning phases of a program/project, this approach can yield enormous and sustainable benefits in the long term. Several studies carried out by the World Bank (see Participation in Practice, World Bank, 1996) and other donors support this claim. They have found higher developmental rates of return in projects that seek the involvement of beneficiaries at all stages of the project cycle, and conversely, markedly higher socio-economic costs when there is little attention paid to their participation, particularly in program/project design and planning.

Fostering stakeholder participation throughout the program/project life-cycle has many more immediate benefits. There are three principal reasons why fostering stakeholder participation is crucial to RBM:

1. Stakeholder participation expands the information base needed for program/project design and planning. Identifying, defining and measuring results/risks hinges on comprehensive information collection. Bringing together project/program stakeholders will help ensure that the information and knowledge held by stakeholders is identified and coordinated. This is especially important in terms of obtaining information about local, cultural, and socio-political practices, institutions, and capabilities. This also gives stakeholders the opportunity to discuss their needs and interests. 

2. A participatory approach will help establish clear roles and responsibilities. This will lead to commitment to achieving the stated results. Close collaboration and participation of all partners, including the women and men who will be the end-users, is crucial to create a sense of ownership of the program/project. Partners will be more committed to achieving results for which they are clearly responsible and which they have helped define. Clearly defined roles and responsibilities also lead to greater efficiency as duplication is eliminated. When responsibility for data collection, analysis and reporting is clarified, the performance measurement system is more likely to work as it was intended. 

3. The participation of all stakeholders helps create a work environment where individuals accept that their accountability includes delivering on results. Having all partners, including beneficiaries, participate throughout the program/project life-cycle from design through implementation eases the transition from being accountable for activities to being accountable for results. With a participatory approach, all stakeholders provide input and consensus on results targets is achieved. In this sense, because stakeholders participated in defining results targets, they are confident that the targets can be met and are therefore willing to accept the responsibility of achieving them. Stakeholders will also accept accountability for results when they feel empowered. 

There are several participatory mechanisms or techniques available to help foster participation. Some of these are: Rapid Rural Appraisal (RRA), Participatory Rural Appraisal (PRA)6 , Participatory Learning and Action (PLA) and others. There are also a number of discussion papers7 and source books8 that describe the experience of donor agencies and other stakeholders with these participatory development approaches. These are available from CIDA as well as the World Bank.

6 See Robert Chambers, Rural Appraisal: Rapid, Relaxed and Participatory. IDS Discussion Paper 311. Sussex: Institute of Development Studies. This paper contains a comprehensive list of RRA and PRA methods, followed by the six key aspects which explain the growth of these participatory approaches.
7 See Jennifer Rietbergen-McCracken, Participation in Practice: The Experience of the World Bank and Other Stakeholders, World Bank Discussion Paper No. 333, 1996.
8 See World Bank, The World Bank Participation Source Book, Environmentally Sustainable 

3.0 Building a Performance Framework

A Performance Framework is an RBM tool that is used to conceptualise projects by asking some fundamental questions of the key stakeholders i.e. the funders, program/project delivery partners and beneficiaries: 

    • Why are we doing this program/project? 
    • What results do we expect to achieve for the resources being invested? 
    • Who will the program/project reach in terms of beneficiaries? 
    • How would the program/project best be implemented? 
The effectiveness of this RBM tool depends on the extent to which it incorporates the full range of stakeholder views. As mentioned earlier, stakeholder participation is an essential ingredient because it helps generate the necessary level of consensus with regard to these questions. However, before we go any further in building a Performance Framework, we have to understand the internal logic of the developmental results chain.

3.1 Internal Logic

The internal logic of RBM is based on the cause and effect relationships between inputs, activities and results. As illustrated in Figure 2., inputs are invested by CIDA in a program or project. 
 

These inputs are brought together in time and space and transformed by some management or implementation activity. These activities, in turn, generate developmental results. For example, the immediate result of a two-day RBM Workshop could be described in terms of the participants' raised level of awareness, knowledge and skills in applying RBM concepts and principles. A clear distinction is made here between the activity and the developmental result, a distinction that may take some getting used to for those who understand 'outputs' as products and services.

3.2 Activities versus Results

For the purposes of discussing development work, inputs are the organisational, intellectual, human and physical resources contributed directly or indirectly by the stakeholders of a program/project. Activities are the co-ordination, technical assistance, training, and other program/project related tasks organised and executed by program/project personnel. Therefore, managing a program/project is in essence managing this process of transforming organisational, intellectual, human and/or physical/material resources through activities that will generate developmental results. In an RBM context, as illustrated in Figure 2., carrying out or completing a program/project activity does not in itself constitute a developmental result.

3.3 Developmental Results: Outputs, Outcomes and Impact

Three terms are generally used throughout the Canadian federal government and international development community to describe the different levels of results. At CIDA, the development results chain is composed of outputs, outcomes and impact level results that are linked by virtue of a chain of cause and effect relationships as illustrated in Figure 3. Development results should always reflect the actual changes in the state of human development that are attributable to a CIDA investment. Impact level results correspond with goal level objectives, while outcomes correspond with purpose level statements. The meaning of outputs, however, has gone beyond what is commonly considered the goods and services produced by an organisation, and has acquired a more developmental meaning. 

 
Expected results at the output, outcome and impact level are linked in a sequence of three cause-effect relationships, in which the achievement of each level of results leads to the next higher one. This results chain is a continuation of the cause-effect relationship between inputs and activities explained earlier. The cause-effect linkages can be expressed with "if..then" phrases, representing the internal logic of the program/project. For example: "if" the developmental outputs are achieved as expected, "then" we should achieve the outcomes; and "if" the outcomes are achieved as expected, "then" we should achieve the impact.

There are three dimensions to the results chain which can be helpful in articulating output, outcome and impact statements. The first is the timeframe where outputs are considered to be short-term results, while outcomes and impact correspond to medium and longer-term results respectively. Outcomes can be achieved throughout the program/project lifetime, while impacts manifest themselves well after termination. In principle, outcomes should be articulated so that they are realistically achievable within the timeframe, budget allocation and extent of the intended reach of the program/project. In this way, if the outcomes are achieved, then the program/project will have achieved its stated purpose.

When articulating an outcome statement one must also take into consideration the program/project reach. The outcomes of a public sector reform project with a $500,000 budget and a one-year term would certainly not be the same as those of a $5.0M five-year project in the same sector and country. The stakeholders of the former might only aspire to raising the level of awareness of public officials of the need for employment equity legislation. Given the additional resources at their disposal, the stakeholders in the latter case might expect the approval and implementation of such legislation. The intended beneficiary group reached in the first instance would be the public officials, while that of the latter would presumably be women and other disadvantaged groups. 

This example also illustrates the third important dimension of the results chain which is the depth of change. This refers to the depth of change in human development at either the individual, institutional, sectoral or societal levels expected by program/project stakeholders. In some cases, it might only be realistic to expect certain institutions in a country to implement an employment equity policy, whereas in other circumstances it might be feasible to expect larger political jurisdictions i.e. municipal, provincial or federal/state to adopt such legislation. The expected depth of change must be in balance with the resources available and the extent of the intended reach.

Defining expected results is no easy task for any stakeholder group. Developmental results defy simple standardisation and don't lend themselves easily to a cookie-cutter approach. Because circumstances differ from one country to the next, it is impossible to establish standard outcome statements; a more flexible approach is required. In some sectors, e.g. economics, education, environment, the cause and effect relationships are well researched and documented, thus facilitating the building of a performance framework. However, in other sectors such as human rights and governance, where our experience is more limited, these relationships are less well known and projects must be more experimental in nature. In any case, we should rely on the consensus of informed key stakeholder groups, working in partnership, to develop a realistic performance framework under any given set of circumstances. 

3.4 Asking Some Fundamental Questions

In an LFA9 , the goal and purpose are the long and medium-term objectives, respectively, for which the resources have been allocated. In a bilateral context, a goal statement should, in effect, be the same as one of the program objectives from the Country/Regional Planning Framework to which the project and other related projects would contribute. Similarly, in other contexts, the goal statement would link up to a higher order of strategic objective. The purpose statement should define the contribution the program/project will make in the context of this broader strategic objective. To develop a Performance Framework, one should ask a series of questions, as illustrated in Figure 4., in order to articulate the cause-effect results chain that would correspond to the LFA's goal and purpose statements. 

9 The Results-Oriented LFA is available in AmiPro format and the accompanying guide (The Logical Framework: Making it Results-Oriented, November 1997) is available to CIDA staff as a desk top utility in Smart Text. 

When reading the graphic in Figure 4. from right to left, the first question to ask is Why should we do this program/project? The next questions are What results do we want? and Who are the direct beneficiaries of these results? Only once these questions have been answered in terms of developmental results that respond to the identified needs of the direct beneficiaries, and the society they live in, should we move on. The last question can then be posed: How should the initiative be undertaken? In this way, the program/project design and planning process becomes demand-driven by asking the Why? and What? questions before deciding on the How? All too often the reverse is true, which is typical of a "supply-driven" development process. 

3.5 The Performance Framework

The Performance Framework is a complementary management tool to the results-oriented LFA. Its strength lies in its ability to graphically represent the cause and effect relationships between activities, reach and developmental results. Once completed, the results statements can be used to fill in the "Expected Results" column of the LFA. It is also an excellent means of communicating the "vision" of the program/project to participants and external audiences. A Performance Framework, illustrated in Figure 5., should: 

    • identify strategic objectives; 
    • define a chain of expected results; 
    • identify key stakeholders; and 
    • outline the major activity components. 
Although producing a Performance Framework that all stakeholders understand and agree with is the initial objective, the framework should not remain static throughout the life of the program/project. In a context of results-oriented management, the framework should be modified, as required, in order to reflect changes in the program/project as it progresses toward the achievement of expected results. Changes to the original Performance Framework are inevitable and should reflect an improved understanding of the causal relationships between different levels of expected results and underlying assumptions made about them. 

4.0 Identifying Assumptions and Risk

Completing a risk analysis is part of the appraisal process when designing and planning a program/project. All stakeholders should participate in identifying assumptions, assessing risk and establishing risk indicators. Once completed, the "Assumptions - Risk Indicators" column of the results-oriented LFA can be filled in. The first step in conducting a risk analysis is the identification of assumptions.

4.1 Identifying Assumptions

When planning a program/project and defining expected results, it is important to remember that the environment within which these programs/projects are managed is ever-changing and may be volatile. Since development programs/projects are not implemented in a controlled environment, external factors can often be the cause of their failure. During the planning and design stage, the necessary conditions for success must be identified. Accordingly, care should be taken to make explicit the important assumptions upon which the program/project's internal logic is based. These are the assumptions made about how the cause and effect relationships are supposed to behave in any given implementation environment. Assumptions describe the necessary conditions that must exist if the cause-effect relationships between levels of results are to behave as expected. In general terms, if the assumptions hold true, the necessary conditions for success exist. Because assumptions are conditions over which we have no or very little control, identifying them during planning is critical. Figure 6. illustrates how the integrity of the results chain is dependent on the underlying assumptions about conditions external to the program/project.

The conditional logic of program/project design begins with the initial assumptions about the necessary preconditions for program/project start-up. "If" these assumptions hold true, "then" the inputs can be mobilised and the activities undertaken. "If" the activities are delivered, "and" provided that the assumptions about the factors affecting the activity-output relationships hold true, "then" the outputs should be achievable. "If" the outputs are achieved, "and" provided that the assumptions about the external factors affecting the outputs-outcomes relationships hold true, "then" the outcomes should be achievable. "If" the outcomes are achieved, "and" provided that the assumptions about the factors affecting the outcomes-impact relationships hold true, "then" the impact should eventually manifest itself. 
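For readers who find it helpful, this conditional logic can also be expressed as a short illustrative sketch in Python. The sketch is not part of the RBM Policy or of the Agency's tools; the level names and the simple pass/fail treatment of assumptions are hypothetical, and serve only to show how a failed assumption interrupts the chain at that link.

    # Illustrative sketch only: walking the "if...then" results chain described above.
    # The pass/fail treatment of assumptions is a simplification for illustration.
    CHAIN = ["activities", "outputs", "outcomes", "impact"]

    def highest_level_reached(assumptions_hold):
        """Return the highest level of the results chain that remains achievable,
        given whether the assumptions underlying each link hold true."""
        reached = CHAIN[0]
        for link in CHAIN[1:]:
            if not assumptions_hold.get(link, False):
                break  # a necessary external condition fails; the chain stops here
            reached = link
        return reached

    # Example: assumptions for outputs and outcomes hold, but not for impact.
    print(highest_level_reached({"outputs": True, "outcomes": True, "impact": False}))
    # prints: outcomes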

4.2 Risk Assessment

Simply identifying assumptions does not guarantee that the situation or external implementation environment will be stable. For each assumption made, there is the probability that something may jeopardise the integrity of the performance framework; in other words, there is a risk that some assumptions will not hold true. We also make assumptions about internal factors that could jeopardise the success of an initiative, e.g., program/project complexity, ineffective technical advisors, poor communication among delivery partners, etc. Although they should also be monitored, internal factors pose less of a threat than external factors, since they are more easily brought within the manageable control of the key stakeholders.

In RBM, the approach is to accept the presence of risk and plan accordingly by attempting to bring the internal and external factors under management control. However, the further one progresses along the performance framework the less control one has over these factors and the ability to bring risk within manageable control becomes increasingly difficult. Consequently, as manageable control decreases and the level of risk increases, the probability of achieving expected results declines as we move from outputs, to outcomes and impact. For example, as illustrated in Figure 7., it is usually safe to assume that the selective use of inputs and careful organisation of activities will generate the expected outputs. The risk of failing to generate outputs is relatively low because of the high level of control over any internal or external factors that might be determinant in the activity - output relationship. At the outcome level however there is a moderate risk that the assumptions linking outputs to outcomes might not hold true. At this point in the performance framework stakeholders have limited management control over external factors. And finally, at the impact level, the risk that assumptions might not hold true is even higher. Usually, long after the original program/project activities are completed, there is no control over the external factors that might be determinant in the outcome - impact relationship. 

A risk analysis should be conducted during program/project design to determine the probability that the underlying assumptions will not hold true and the potential effect this would have on program/project sustainability. When this risk assessment is completed, each assumption should be rated in terms of its potential risk10 i.e. low, medium or high. Management strategies can then be considered and resources allocated, if it is feasible and cost-effective, to bring the necessary external factors under the manageable control of the program/project delivery partners (in which case they are no longer external, but are within the manageable interests of the program/project). However, this is not generally possible when financial resources are limited. In these cases, the best alternative is to monitor the status of those assumptions, giving greatest attention to those with the highest risk rating and taking corrective action when required using any available Risk Allowance.11

10 A more sophisticated approach would be to multiply a numeric probability (%) that the assumption will not hold true by a weighted scale (1-10) of importance in order to estimate a risk factor for each assumption. 
11 For more details, please see Section 8.4 of Geographic Programs Roadmap '97 which is available to CIDA staff as a desk top utility in Smart Text.
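As a purely illustrative sketch of the approach suggested in footnote 10, the short Python example below multiplies the probability that each assumption will not hold true by a weighted importance scale to produce a risk factor, and then assigns a low/medium/high rating. The assumptions listed and the rating cut-offs are hypothetical; in practice they would be set by the stakeholders during the risk assessment.

    # Sketch of the risk-rating arithmetic described in footnote 10.
    # Each entry: (assumption, probability of NOT holding true in %, importance weight 1-10).
    assumptions = [
        ("Counterpart ministry retains its trained staff", 30, 8),
        ("National elections do not delay project approvals", 50, 5),
        ("Local inflation stays within budget forecasts", 20, 4),
    ]

    for description, probability, weight in assumptions:
        risk_factor = probability / 100 * weight          # ranges from 0 (negligible) to 10 (severe)
        rating = "high" if risk_factor >= 4 else "medium" if risk_factor >= 2 else "low"
        print(f"{description}: risk factor {risk_factor:.1f} ({rating})")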

Risk need not necessarily be negative and can in fact open up new opportunities. An unexpected event could have a positive reinforcing effect and lead to positive unintended consequences or results. But program/project managers should be less concerned with this type of risk since it is unlikely to jeopardise the achievement of planned results. Risk only causes concern when it leads to negative unintended consequences or undesirable results.

4.3 Monitoring Risk

A risk analysis should be conducted during program/project design to determine the probability that the underlying assumptions are wrong or will not hold true. Assumptions should therefore be carefully monitored during program/project implementation. As time passes, the necessary conditions underlying the causal relationships may change and immediate corrective action will have to be taken to assure the success of the program/project. Delivery partners should therefore maintain implementation flexibility to address those risks which are inevitably unpredictable and potentially damaging. For some programs/projects, the use of risk indicators to monitor the status of the assumptions would be recommended. As illustrated in Figure 7., risk indicators should be identified for each high risk assumption to monitor the status of internal and external factors that could affect implementation.

Very simply, such a technique would involve a regular scanning of the environment in which the program/project is operating to determine whether the necessary conditions for success remain present. Attention should be paid to assessing the cost-effectiveness of this approach, particularly for very large, complex, innovative, or risky initiatives where the potential benefits could outweigh the additional cost of data collection and analysis. When development partners jointly manage the implementation process to mitigate high risk conditions, the achievement of developmental results at the output and outcome levels is no longer left to chance. 

5.0 The Performance Measurement Framework

5.1 Performance Measurement and Evaluation 

Performance measurement differs from the traditional evaluation practice in that it is a continuous process of performance self-assessment undertaken by the program/project delivery partners. The traditional approach has been to schedule mid-term and end-of-term evaluations that are, generally, formative and summative in nature. These types of evaluations are typically conducted by external evaluators who are mandated to execute terms of reference set out by the funder, which not only guide but control the evaluation process.12 The evaluation exercise is often imposed on the other stakeholder groups as an administrative requirement. Because of the short timeframe within which to conduct these evaluations and the evaluators' lack of familiarity with the program/project's implementation challenges, evaluations have tended to focus on management processes and not the achievement of developmental outcomes. Furthermore, evaluation recommendations are all too often written in an opaque manner so as not to offend the stakeholder groups. Evaluation research has shown that the utility value of traditional evaluations has been very low for program/project delivery partners and other stakeholder groups.

12 Although this description may not fully apply to participatory evaluations, they are still relatively rare in comparison to the large number of traditional evaluations referred to here.

Within an RBM context, performance measurement is customised to respond to the performance information needs of program/project managers and stakeholders. Since the stakeholders are involved in one aspect or another of measuring performance, the information that is generated is more accessible and transparent to the users. Performance measurement is also more results-oriented, because the focus is on measuring progress made toward the achievement of developmental results. Consequently, the performance information generated from performance measurement activities enhances learning and improves management decision-making. However, there are also certain implications of moving towards a greater emphasis on performance self-assessment. For performance measurement to work as it was intended, CIDA managers need to exhibit an increased confidence and trust in program/project delivery partners/executing agencies. This will require a flexible "hands-off" approach to management, as stakeholders are empowered with more of the responsibility for performance measurement than they would have with a traditional evaluation approach. Partners may also need to develop new skills and tools in order to take responsibility for performance measurement. 

At the heart of the RBM approach is performance measurement. When performance measurement is undertaken on a continuous basis during implementation, it empowers managers and stakeholders with "real time" information about the use of resources, the extent of reach and the achievement of developmental results. This performance information will inform management as to progress made along the results chain, as well as help identify programming strengths and weaknesses in order to take corrective action. The enlightened manager should be able to answer the following question with evidence-based performance information: "Are we achieving the developmental results expected by the targeted beneficiaries at a reasonable cost?" Building a performance measurement framework during the design and planning phase is an important first step in answering this question.

5.2 The Performance Measurement Framework 

Because measuring performance is a vital component of the RBM approach, it is important to establish a structured plan for data collection, analysis, use and dissemination of performance information. This plan must describe who will do what, when and how. A Performance Measurement Framework will help structure the answers to these questions. It will document the major elements of the monitoring system and ensure that comparable performance information is collected on a regular and timely basis. Its main components are organised in a matrix format as illustrated in Figure 8.

Figure 8. The Performance Measurement Framework
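As an illustration, assuming the framework's columns correspond to the elements discussed in section 5.4 (performance indicators, data sources, collection methods, frequency of collection and responsibility), one row of the matrix could be recorded as structured data along the lines of the following Python sketch. The sample entry is hypothetical and drawn loosely from the education example used in section 5.4.

    # Illustrative sketch only: one row of a Performance Measurement Framework,
    # using column names assumed from the elements discussed in section 5.4.
    from dataclasses import dataclass

    @dataclass
    class MeasurementPlanRow:
        expected_result: str      # the output, outcome or impact statement being measured
        indicator: str            # a qualitative or quantitative performance indicator
        data_source: str          # the individual or organisation providing the data
        collection_method: str    # e.g. survey, focus group, participant observation
        frequency: str            # how often the data are collected
        responsibility: str       # who collects, analyses and reports

    row = MeasurementPlanRow(
        expected_result="Improved quality of primary education for girls in rural areas",
        indicator="Share of teachers delivering the revised curriculum",
        data_source="District education office records",
        collection_method="Annual school survey",
        frequency="Once per school year",
        responsibility="Executing agency with the local partner organisation",
    )
    print(row)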

5.3 Selecting Performance Indicators 

Building a Performance Measurement Framework begins with the identification of performance indicators. It is important that the stakeholders agree a priori on the indicators that will be used to measure program/project performance. Performance indicators are qualitative or quantitative measures of resource use, extent of reach and developmental results achieved, and are used to monitor program/project performance. Quantitative indicators are statistical measures such as number, frequency, percentile, ratios, variance, etc. Qualitative indicators are judgement and perception measures of congruence with established standards, the presence or absence of specific conditions, the extent and quality of participation, or the level of beneficiary satisfaction, etc. It is a popular myth that information collected on quantitative indicators is inherently more objective than that collected on qualitative indicators. Both can be either more or less objective or subjective depending on whether or not the principles of social science research have been rigorously applied in the data collection and analysis process.13

13 See E.G. Guba and Y.S. Lincoln, Fourth generation evaluation, Sage Publications, 1989. 

There are six criteria that should be used when selecting performance indicators. Each one is presented below along with an illustrative question by way of explanation.

1. Validity - Does it measure the result?
2. Reliability - Is it a consistent measure over time?
3. Sensitivity - When the result changes will it be sensitive to those changes?
4. Simplicity - Will it be easy to collect and analyse the information?
5. Utility - Will the information be useful for decision-making and learning?
6. Affordability - Can the program/project afford to collect the information? 
Although performance indicators should be identified across the entire spectrum of the performance framework, from resources through to impact level results, it should be noted that RBM emphasises measuring the achievement of developmental results more so than the use of resources. A minimalist approach to measuring resources is advised, such as tracking financial expenditures by program/project component. Gender, age, level of education, profession, income, geographic location (rural/urban) and other indicators are generally useful when measuring the extent of reach. The choice of performance indicators to measure the achievement of results, especially at the output and outcome levels, will depend wholly on the nature of the result, how it is articulated and the implementation context including cost, level of effort, and the size and complexity of the program/project. 

At the outcomes level, the information collected on performance indicators would be analysed and used in management decision-making to keep a program/project on track toward the achievement of its purpose. Information collected on the same indicators would also constitute evidence regarding program/project success, or failure at termination. It is suggested that at least three indicators per expected result at the outcomes level should be used: at least one quantitative, one qualitative and one of your choice. In many cases, a total of two indicators at the output level would be sufficient. For each quantitative indicator, it is important to specify the unit of analysis or calculation, existing baseline information and useful benchmarks for comparison. Benchmarks should also be specified for each qualitative indicator as well as expected perceptions or judgement of progress by stakeholders and a detailed description of expected conditions or situation to be observed.

Program/project stakeholders should begin the process of identifying and selecting performance indicators by preparing a comprehensive list. The next step is to decide how many are needed and apply the selection criteria above to the list. Those that don't meet these criteria should be discarded. The best performance indicators from those remaining should be used and the rest kept in a reserve pool. Developing a performance measurement system is a trial and error experience that can only be improved after several cycles of data collection, analysis and appraisal. Some performance indicators may, after some use, prove not to meet the above criteria and must then be replaced from the reserve pool. The RBM principle of learning by doing applies best to performance measurement.
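The selection process described above lends itself to a simple worked sketch. In the Python example below, each candidate indicator is scored against the six criteria, indicators that fail any criterion outright are discarded, and the survivors are ranked so that the best are used and the remainder form a reserve pool. The candidate indicators, scoring scale and cut-off are hypothetical; in practice they would be set by stakeholder consensus.

    # Sketch of applying the six selection criteria to a list of candidate indicators.
    CRITERIA = ["validity", "reliability", "sensitivity", "simplicity", "utility", "affordability"]

    candidates = {
        # indicator: stakeholder ratings (0-2) against each criterion, in the order above
        "Share of teachers delivering the revised curriculum": [2, 2, 1, 2, 2, 2],
        "National literacy rate":                              [1, 2, 0, 1, 1, 2],
        "Beneficiary satisfaction with course materials":      [2, 1, 2, 1, 2, 1],
    }

    def meets_criteria(scores, minimum=1):
        # Discard an indicator that fails any single criterion outright.
        return all(score >= minimum for score in scores)

    ranked = sorted(
        ((name, sum(scores)) for name, scores in candidates.items() if meets_criteria(scores)),
        key=lambda item: item[1],
        reverse=True,
    )
    for name, total in ranked:
        print(f"{name}: total score {total}")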

5.4 Collecting Performance Information

Once the performance indicators have been selected, the next step in completing the Performance Measurement Framework is to resolve issues surrounding the collection of the performance information. More specifically, data sources, methods and techniques of collection and analysis, as well as frequency of collection, will have to be determined. Roles and responsibilities for each of these tasks should be clarified and confirmed.

It is necessary to identify the data source for each indicator that has been selected. Data sources are the individuals or organisations from which the data will be obtained. This exercise should be completed during the project/program planning in order to assess the availability of the data and identify any potential problems. It is important to choose data sources wisely to avoid having to switch data sources mid-way through a program/project as this may jeopardise data reliability. We should first focus on existing sources to maximise value from existing data. In some cases, organisations that have been identified as a data source may require some capacity building in data collection. This should not be viewed negatively. It is an opportunity to obtain information that is tailored to program/project information needs. 

Now that data sources have been identified, the next step is to decide how the information should be obtained. There are several methods and techniques of data collection from which to choose. For example, information can be captured through Participatory Rural Appraisal (PRA), Participatory Learning and Action (PLA), self-assessment, testimonials, focus groups, surveys, case studies, participant observation, and so on; the list is endless. There are also other things to consider such as sampling techniques and data collection instruments. Some of the data collection methods may require developing data collection instruments. For example, questionnaires would need to be developed if surveys were chosen as one of the collection methods. The same would be true of participant observation. This would require developing a participant observation reporting format to ensure consistency and validity of the information collected from one observer to another.

Once the data source has been identified and a method or technique decided upon to collect the information, the question of the frequency of data collection must be addressed in the framework. Financial information on the use of resources should be captured by a good accounting system on a continuous basis. Information on targeted beneficiary groups may have to be captured initially to create a baseline and subsequently on a periodic basis in conjunction with the measurement of output and outcome achievement. Since outputs are the short-term logical consequences of activities, data collection on output achievement should begin shortly after the completion of an activity. Information collected on output achievement will support management learning, decision-making and continuous performance improvement. 

Since outcomes are the logical consequences of a combination of outputs, they will not begin to materialise or manifest themselves until a sufficient number and type of outputs have been achieved. For example, the following outputs: a revised gender sensitive curriculum, new instructional materials available, x# of teachers capable of delivering the revised curriculum, and increased community support for girl-child education, may all be needed to achieve this outcome: improved quality of primary education for girls in rural areas. Consequently, the collection of data on performance indicators for this outcome may have to be captured initially to create a baseline, but then only once at the end of every school year beginning at the mid-point of the program/project cycle. Information collected on outcome achievement will support stakeholder learning and decision-making on an annual basis. Performance information should be appraised by stakeholder groups at least annually so as to adjust the implementation plan in a manner that will enhance the probability of successfully achieving the outcome level results and thus the purpose of the endeavour. Responsibility for continuous performance self-assessment by stakeholders should be limited to measuring the achievement of outcomes.

Since impact level results are a logical consequence of a combination of outcomes, and outcomes are generally achieved around program/project termination, it is unlikely that measuring impacts during the implementation phase would reveal much in the way of attributable performance information. Consequently, the responsibility for impact evaluation should rest with the Country Program Directors (CPD) or responsible Program Director. Because impact level results are long-term socio-economic changes expected at a country level, it would be virtually impossible to attribute them to discrete program/project initiatives. Ideally, a CPD would commission an impact evaluation of a portfolio of terminated projects that addressed the same program priority in the same country or region. This could be undertaken independently in cases where CIDA is the primary donor agency, or within the context of multi-donor impact evaluations when several agencies are involved in the same sector. The DAC Evaluation Working Group members14 have advocated increased use of this approach as a strategy to better co-ordinate donor agency evaluation activities at the country level and address the need to understand issues around the attribution of results to donor interventions.

14 Based on personal discussions with members of the DAC Evaluation Working Group.

6.0 Performance Monitoring

6.1 Selecting an Approach

It is the responsibility of the CIDA Program/Project Manager and his/her team to define the most appropriate approach to measure and monitor program/project performance. However, it is also important and necessary to discuss these options with Canadian and developing country partners. Involving the major stakeholders early in the design of the Performance Measurement Framework approach enhances commitment to the performance monitoring function. A participatory approach also ensures that the full range of stakeholder information needs (and potential sources of information) are identified, as well as any obstacles or constraints to the collection of performance information.

There are three basic approaches to performance monitoring from which to choose. In all three cases, the Program/Project Manager has overall accountability for performance monitoring. The first option is internal monitoring. In this case, performance measurement and monitoring is the responsibility of those who are most closely involved in program/project implementation, including the Executing Agency or Canadian Partner. Internal monitoring is essentially a form of continuous performance self-assessment where the project delivery participants have the capacity, and are given the responsibility, to undertake performance measurement and reporting. The second option is external monitoring, where a consultant is contracted as the Project Monitor to independently track and report on performance to the CIDA Program/Project Manager and his/her team, as well as the Project Steering Committee where necessary. This option is normally used only for large, complex projects. The third option, external support, combines the above approaches such that the program/project delivery partners are responsible for the performance measurement function, but they are assisted by a Performance Advisor who is contracted to help build their capacity and advise CIDA on the validity and reliability of the performance information being reported.

It is important to select an approach which is cost-effective, appropriate and reflects all stakeholders' needs for timely performance information. Factors to consider are: 

    • the magnitude of the investment; 
    • the program's/project's technical complexity; 
    • the experience and capacity of the delivery partners; 
    • the commitment of partners to self-assess; 
    • the availability of in-house CIDA resources; 
    • the degree of "innovation"; 
    • the level of external risk; 
    • the potential for lessons-learned that may not otherwise be available; and, 
    • the availability of performance information from other donors. 
The three monitoring options are discussed in more detail below.

6.2 Internal Monitoring Option 

Internal monitoring requires that each project delivery participant (i.e. Executing Agency, Canadian Partner, developing country partner(s) and CIDA) takes responsibility for particular aspects of the program's/project's continuous performance assessment (measurement). Agreement is reached on the baseline data, and information is then provided through the progress and financial reporting submitted by the Executing Agency or Canadian Partner, as well as reporting and feedback from the field, recipient country partners, other donors, etc. Other sources of information may also be identified, but collection of the information remains the responsibility of one of the project parties. Progress reports on the achievement of results are reviewed by CIDA and other appropriate committees.

All of these methods provide opportunities for all major stakeholders to assess the program's/project's performance based on the performance measurement information collected. Program/project activities can then be adjusted accordingly. Even though the internal monitoring option may have been selected, a formal review of progress, independent or internal, could be requested by CIDA or one of the other parties.

6.3 External Monitoring Option

The external monitoring option involves engaging the services of a Canadian Program/Project Monitor, a Program Support Unit (PSU), or a Local Monitor to independently review and report on performance to the CIDA Program/Project Manager and his/her team, as well as any management committees. 

The Program/Project Monitor would normally review the baseline data, project narrative and financial reports, and performance information; undertake field visits; and participate in management committee meetings. He/she would also provide advice on the technical implementation of the program/project and provide technical liaison and facilitation services, if deemed necessary by the CIDA Program/Project Manager.

This option is normally used on large, complex projects; projects dealing with highly technical subject matter; or projects in fields where in-house expertise cannot provide adequate technical review. This option, especially when it involves a Canadian Program/Project Monitor, should only be chosen when it is cost-effective and appropriate.

6.4 External Support Option

The external support option for performance monitoring can involve engaging the services of a Program Support Unit (PSU), a Canadian Performance Advisor, a local Performance Advisor, or a combination thereof. The PSU and/or Performance Advisor may assist program/project delivery partners in developing the Performance Measurement Framework and strengthening their capacity to implement it by providing the following services: 

    • training in results-based management and performance measurement; 
    • reviewing the selection of performance indicators; 
    • reviewing information collection strategies, systems and instruments; 
    • reporting on the validity and reliability of the information produced; 
    • recommending needed improvements; 
    • coaching and advising program/project delivery partners. 
Like the Program/Project Monitor, a Performance Advisor could also provide advice on the technical implementation of the program/project and provide technical liaison and facilitation services, if deemed necessary by the CIDA Program/Project Manager.

7.0 Using Performance Information for Management Decision-Making

Although the RBM approach may initially appear linear, it is in fact an iterative management model, as illustrated in Figure 9. There is constant feedback to the planning and management process as results are assessed. Based on this constant feedback of performance information, inputs and activities can be modified and other implementation adjustments made. This corresponds to the two management functions of continuous performance measurement and iterative implementation, represented in Figure 9. by the semi-circular arrows showing the collection of performance information and the management decisions based on the analysis of this information. 

In continuous performance measurement, two types of information are collected. Performance information is collected using the indicators which were developed during the planning and design of the program/project. Risk information is collected by using the risk indicators. The performance information is analysed in the context of the risk information. All stakeholders, including the beneficiaries where feasible, should review the project/program at least once a year and should draw conclusions about its performance. It is crucial that the information collected is accurate and reliable. Without good performance information, the organisation will not learn effectively. 

In iterative management, implementation decisions are based on lessons learned and then reassessed. Corrective action should be taken to adjust the program/project in light of the performance information collected, and stakeholders must make management decisions to allocate or reallocate resources where necessary to ensure program/project sustainability and achieve better results. Stakeholders' authority to implement decisions and make the necessary changes is crucial to this iterative process. The effectiveness of the RBM approach depends on the extent to which good performance and risk information is collected, used by managers to manage, and then monitored again through a series of performance information feedback loops, as illustrated above.

Performance information should be used to adjust programs/projects in three key ways: where results are being achieved, actions can be taken to strengthen them; where progress is difficult, different approaches can be tried or activities added; and where activities or outputs are considered obsolete, they should be abandoned. Performance information can also be used to examine strategic trade-offs between resource use, extent of reach and the achievement of developmental results. Managers and stakeholders should ask themselves the following questions: 

    • Can we improve our results given the resources available to reach the targeted beneficiaries? 
    • Can we decrease coverage of beneficiary groups, or increase critical mass for better results? 
    • Can resources be increased, decreased or re-allocated to improve cost-effectiveness?
The answers to these questions will depend on each program/project's unique circumstances, but in every case they require a close examination of, and decisions about, the strategic trade-offs between resources, reach and results. This process of data analysis and examination of trade-offs will enhance organisational learning. 

8.0 Tools for Performance Monitoring & Reporting

The Annual Project Progress Report (APPR) and the Bilateral Project Closing Report (BPCR) are the two main performance monitoring and reporting tools used by the Geographic Programs and the Countries in Transition Programs to collect performance information on their projects. Both reflect the results-based management approach, but in different ways: the APPR is based on the Performance Framework, while the BPCR is based on the Framework of Results and Key Success Factors (Annex II). The information collected should be useful to managers for monitoring and decision-making, as well as for reporting internally and to partners, Parliament and the public on resources invested, reach and results achieved. While the APPR focuses on implementation and ongoing management for results, the BPCR provides a unique opportunity for CIDA and its partners to reflect on program/project performance, from design to completion.

8.1 The Annual Project Progress Report (APPR)

The Annual Project Progress Report is the primary mechanism for program/project self-assessment. The information collected in this report focuses on comparing expected results (as set out in the latest Performance Framework and/or LFA) with the results actually achieved to date. If the results achieved fall short of what was expected, this signals possible problems that need to be discussed and resolved.

As its title suggests, the APPR should be completed every year, except for the project's final year. The report should be completed by the Program/Project Manager in consultation with colleagues in the field, the Executing Agency/Canadian Partner and local partners. It is important that all major stakeholders participate in this process: involving those who know the program/project best helps ensure that the information is accurate and gives stakeholders the opportunity to discuss identified problems and their possible solutions.

During the reporting exercise, each Program/Project Manager should meet with the respective Director to discuss the content of each APPR. Once there is agreement on the report's content, the report should be signed off by the Program Director, who assumes overall responsibility for the quality and accuracy of reporting. 

8.2 The Bilateral Project Closing Report (BPCR)

The Bilateral Project Closing Report (BPCR) is the primary mechanism for results reporting at the Branch and corporate levels (see SmarText for complete BPCR guidelines). The BPCR gives CIDA and its partners the opportunity to reflect on a completed (or inter-phase) project, from design to completion. In addition to a section devoted to measuring results, the sections of the BPCR related to the development and management factors of the framework allow users to convey more completely the full nature of a project, including its relevance, appropriateness, sustainability and cost-effectiveness. The intent of the BPCR is not only to assess whether a project has achieved its results but also to explain why it has or has not. The BPCR is not only a self-assessment tool; it also serves to record results for reference and reporting purposes. 

The BPCR should be completed once the project has terminated. It is used to report on the entire project from beginning to end, including the results achieved in the project's last year. An APPR is therefore completed every year except the last, at which time a BPCR is required instead. The BPCR is essentially a cumulative summary of all the project's APPRs.15

15 It should be noted that the Development and Management Factors are not fully integrated into the APPR format, which makes it difficult to fully document and understand the reasons for success or failure in the achievement of results. 

As with the APPR, the BPCR should be completed using a participatory approach. Those responsible for completing the report should be identified in the Performance Measurement Framework. The report should be reviewed by all stakeholders in order to ensure that it accurately depicts the project's performance. And once again, the report should be signed off by the Program Director, who assumes overall responsibility for the quality and accuracy of reporting. 

9.0 Summary

RBM involves the application of some evaluation research techniques and a healthy measure of common sense. It builds on previous efforts to work towards the achievement of developmental results. CIDA's Program/Project Managers cannot afford to rely on external evaluators to provide, on a continuous basis, the performance information they need to manage for results. Evaluation methods and techniques have become the latest tools in a manager's toolkit and, like any other tools, mastering them and realizing their full potential will take time, experimentation and effort. CIDA's Program/Project Managers and partners must master the process of planning, doing, evaluating and reflecting on their programs and projects.

Abbreviations

APPR Annual Project Progress Report
BPCR Bilateral Project Closing Report
CEA Canadian Executing Agency
CIDA Canadian International Development Agency
CPD Country Program Director
LFA Logical Framework Approach
OAG Office of the Auditor General
ODA Official Development Assistance
OECD Organisation for Economic Co-operation and Development
PF Performance Framework
PLA Participatory Learning and Action
PMF Performance Measurement Framework
PRA Participatory Rural Appraisal
PRAS Planning, Reporting and Accountability Structure
PSU Program Support Unit
RBM Results-Based Management
TBS Treasury Board Secretariat
Annex I : Results-Based Management in CIDA Policy Statement

Canada in the World establishes four clear commitments for Canada's ODA program:1

    • a clear mandate and set of six ODA priorities; 
    • strengthened development partnerships; 
    • improved effectiveness; and 
    • better reporting of results to Canadians. 
1 In addition, CIDA recently welcomed the international assistance program for the Former Soviet Union/Central and Eastern Europe (FSU/CEE).

CIDA is committed to improving the impact of its work and to achieving increased efficiency and effectiveness in achieving that impact. CIDA launched its Corporate Renewal initiative in 1994 with these aims in mind. CIDA's adoption of results-based management (RBM) as its main management tool will allow it to systematically address these commitments. 

CIDA has always pursued development results. The RBM approach will assist CIDA in its efforts towards continuous improvement in results-orientation, focus, efficiency and accountability. RBM will also be an important element in CIDA's continuing development as a learning organization. The process of developing RBM will be iterative and will build on pilot programs now in progress across the Agency.

The purpose of this Policy Statement is to outline: 

    • the basic RBM policy and principles for CIDA; and 
    • a common vocabulary on RBM (Annex A). 
This policy should be viewed in conjunction with CIDA's Accountability Framework. 

What is results-based management?

A result is a describable or measurable change resulting from a cause-and-effect relationship. By results-based management, we mean: 

    • defining realistic expected results, based on appropriate analyses; 
    • clearly identifying program beneficiaries and designing programs to meet their needs; 
    • monitoring progress towards results and resources consumed, with the use of appropriate indicators; 
    • identifying and managing risks, while bearing in mind expected results and the necessary resources; 
    • increasing knowledge by learning lessons and integrating them into decisions; and 
    • reporting on results achieved and the resources involved. 
Policy Statement

Results-based management is integral to the Agency's management philosophy and practice. CIDA will systematically focus on results to ensure that it employs management practices which optimize value for money and the prudent use of its human and financial resources. CIDA will report on its results in order to inform Parliament and Canadians of its development achievements.

Scope

Best efforts will be made to ensure that this results-based management policy and its principles will be applied to all Agency programs and operations. RBM will guide all managers and staff, bearing in mind the changing circumstances facing CIDA in the developing world and the role played by CIDA's partners in achieving results. 

Principles

    • Simplicity 

    • The RBM approach implemented by CIDA will be easy to understand and simple to apply. 
    • Learning by Doing

    • CIDA will implement RBM on an iterative basis, refining approaches as we learn from experience. CIDA will prepare all CIDA managers and staff to implement RBM by providing appropriate, timely and cost-effective training. 
    • Broad Application

    • CIDA will identify expected results and performance indicators for its programs and projects, where feasible, while striving to find a pragmatic balance between the use of qualitative and quantitative indicators. It will develop cost-effective means to monitor and measure results and learn from the best practices of the international community. 
    • Partnership

    • CIDA will identify, in collaboration with our partners, our respective roles and responsibilities. CIDA will share the responsibility for achieving results at the program and project levels with our partners in Canada and in developing countries. CIDA will work with its partners to ensure a common understanding of the principles of RBM.
    • Accountability

    • CIDA will provide a work environment where individuals accept that their accountability includes delivering on results. An essential feature will be that managers will promote a focus on results in a manner that is resource efficient. 
    • Transparency

    • CIDA's implementation of RBM will lead to better reporting on more clearly identified development results. 
Annex A : Key RBM Definitions

Results

    • Result. A result is a describable or measurable change in state that is derived from a cause and effect relationship. 
    • Developmental result2. The output, outcome and impact of a CIDA investment in a developing country. 
    • Operational result. The administrative and management product achieved within the Agency. 
    • Input. The resources required, including money, time or effort, to produce a result. 
    • Results chain. Generally seen to correspond to the output, purpose and goal levels of a logical framework analysis (LFA).3 
    • Output. The immediate, visible, concrete and tangible consequences of program/project inputs. 
    • Outcome. Result at the LFA purpose level, constituting the short-term effect of the program/project. This is generally the level where the beneficiaries or end-users take ownership of the program/project and CIDA funding comes to an end. 
    • Impact. Broader, higher level, long-term effect or consequence linked to the goal or vision. 
2 Given its international assistance mandate, the FSU/CEE Branch will adopt modified definitions of terms such as developmental results suited to its purpose.
3 Purpose. A level of objective within the control of program/project activities, which explains what service is being provided, who the direct beneficiary of the service is, and why or to what higher goal the project is contributing.
Goal. A level of objective immediately above that of program/project purpose which links the program/project to a wider set of strategies being undertaken to address a specific problem. 

Performance measurement

  • Baseline data. The set of conditions existing at the outset of a program/project. Results will be measured or assessed against such baseline data. 
  • Performance indicators. Specific performance measures chosen because they provide valid, useful, practical and comparable measures of progress towards achieving expected results. 
  • Quantitative indicators. Measures of quantity, including statistical statements. 
  • Qualitative indicators. Judgments and perceptions derived from subjective analysis. 
  • Performance assessment. Self-assessment by program branches/units, comprising program, project or institutional monitoring, operational reviews, end-of-year reporting, end-of-project reporting, institutional assessments and special studies. 
  • Performance review. A comprehensive corporate review of a given program theme and ODA priority across all Agency program branches. 
Annex II : Framework of Results and Key Success Factors
