This handbook deals with the basics of setting up and using a monitoring and evaluation system for a project or an organization. It clarifies what monitoring and evaluation are, how you plan for them, and how you design a monitoring system and an evaluation process that brings it all together usefully. It looks at how you collect the information you need, and then at how you save yourself from drowning in data by analyzing the information in a relatively straightforward way. Finally, it raises, and attempts to address, some of the issues involved in taking action on the basis of what you have learned.

The Need for a Handbook on Monitoring and Evaluation

If you don't care about how well you are doing or about what impact you are having, why bother to do the work at all? Monitoring and evaluation enable you to assess the quality and impact of your work, against your action plans and your strategic plan.
In order for monitoring and evaluation to be really valuable, you do need to have planned well. Planning is dealt with in detail in other toolkits on this website.

Application of the Handbook

The handbook can be helpful in the following ways:
• To set up systems for data collection during the planning phases of a project or organization.
• To analyze data collected through the monitoring process.
• To know how efficiently and how effectively you are working.
• To evaluate what impact the project is having at any stage.
In fact, monitoring and evaluation are invaluable internal management tools. If you don't assess how well you are doing against targets and indicators, you may go on using resources to no useful end, without changing the situation you have identified as a problem at all.
Monitoring and evaluation enable you to make that assessment.

CHAPTER-I BASIC CONCEPTS OF MONITORING & EVALUATION

Monitoring is the systematic collection and analysis of information as a project progresses. It is aimed at improving the efficiency and effectiveness of a project or organization. It is based on targets set and activities planned during the planning phases of work. It helps to keep the work on track, and can let management know when things are going wrong. If done properly, it is an invaluable tool for good management, and it provides a useful base for evaluation. It enables you to determine whether the resources you have available are sufficient and are being well used, whether the capacity you have is sufficient and appropriate, and whether you are doing what you planned to do.

Evaluation is the comparison of actual project impacts against the agreed strategic plans.
It looks at what you set out to do, at what you have accomplished, and at how you accomplished it. It can be formative (taking place during the life of a project or organization, with the intention of improving the strategy or way of functioning of the project or organization). It can also be summative (drawing lessons from a completed project or an organization that is no longer functioning).
What monitoring and evaluation have in common is that they are geared towards learning from what you are doing and how you are doing it, by focusing on:
• Efficiency
• Effectiveness
• Impact
Efficiency tells you whether the input into the work is appropriate in terms of the output. This could be input in terms of money, time, staff, equipment and so on. When you run a project and are concerned about its replicability or about going to scale, then it is very important to get the efficiency element right.
Effectiveness is a measure of the extent to which a development project achieves the specific objectives it set.
If, for example, we set out to improve the qualifications of all the high school teachers in a particular area, did we succeed?
Impact tells you whether or not what you did made a difference to the problem situation you were trying to address. In other words, was your strategy useful? Did ensuring that teachers were better qualified improve the pass rate in the final year of school? Before you decide to get bigger, or to replicate the project elsewhere, you need to be sure that what you are doing makes sense in terms of the impact you want to achieve.

The Need for Monitoring & Evaluation

Monitoring and evaluation enable you to check the "bottom line" of development work: not "are we making a profit?" but "are we making a difference?" Through monitoring and evaluation, you can:
• Review progress;
• Identify problems in planning and/or implementation;
• Make adjustments so that you are more likely to "make a difference".
In many organizations, "monitoring and evaluation" is something that is seen as a donor requirement rather than a management tool. Donors are certainly entitled to know whether their money is being properly spent, and whether it is being well spent.
But the primary (most important) use of monitoring and evaluation should be for the organization or project itself to see how it is doing against objectives, whether it is having an impact, whether it is working efficiently, and to learn how to do it better. Plans are essential but they are not set in concrete (totally fixed). If they are not working, or if the circumstances change, then plans need to change too. Monitoring and evaluation are both tools which help a project or organization know when plans are not working, and when circumstances have changed.
They give management the information it needs to make decisions about the project or organization, and about changes that are necessary in strategy or plans. Through all of this, the constants remain the pillars of the strategic framework: the problem analysis, the vision, and the values of the project or organization. Everything else is negotiable. Getting something wrong is not a crime; but failing to learn from past mistakes, because you are not monitoring and evaluating, is.
It is important to recognize that monitoring and evaluation are not magic wands that can be waved to make problems disappear, or to cure them, or to miraculously make changes without a lot of hard work being put in by the project or organization. In themselves, they are not a solution, but they are valuable tools. Monitoring and evaluation can:
• Help you identify problems and their causes;
• Suggest possible solutions to problems;
• Raise questions about assumptions and strategy;
• Push you to reflect on where you are going and how you are getting there;
• Provide you with information and insight;
• Encourage you to act on the information and insight;
• Increase the likelihood that you will make a positive development difference.
The effect of monitoring and evaluation can be seen in the following cycle. Note that you will monitor and adjust several times before you are ready to evaluate and replan.

[Diagram: the cycle of planning, implementing, monitoring and adjusting, repeated several times, leading to evaluation and replanning]

Monitoring involves:
• Establishing indicators of efficiency, effectiveness and impact;
• Setting up systems to collect information relating to these indicators;
• Collecting and recording the information;
• Analyzing the information;
• Using the information to inform day-to-day management.
Monitoring is an internal function in any project or organization.
Evaluation involves:
• Looking at what the project or organization intended to achieve – what difference did it want to make? What impact did it want to make?
• Assessing its progress towards what it wanted to achieve, its impact targets.
• Looking at the strategy of the project or organization. Did it have a strategy? Was it effective in following its strategy? Did the strategy work? If not, why not?
• Looking at how it worked. Was there an efficient use of resources? What were the opportunity costs of the way it chose to work? How sustainable is the way in which the project or organization works? What are the implications for the various stakeholders in the way the organization works?
In an evaluation, we look at efficiency, effectiveness and impact. There are many different ways of doing an evaluation. Some of the more common terms you may have come across are:
• Self-evaluation: This involves an organization or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organization to do this effectively, but it can be an important learning experience.
• Participatory evaluation: This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator.
• Rapid Participatory Appraisal: Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative way of doing evaluations.
It is semi-structured and carried out by an interdisciplinary team over a short time. It is used as a starting point for understanding a local situation and is a quick, cheap, useful way to gather information. It involves the use of secondary data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars.
In an evaluation context, it allows one to get valuable input from those who are supposed to be benefiting from the development work. It is flexible and interactive.
• External evaluation: This is an evaluation done by a carefully chosen outsider or outsider team.
• Interactive evaluation: This involves a very active interaction between an outside evaluator or evaluation team and the organization or project being evaluated. Sometimes an insider may be included in the evaluation team.

INTERNAL VS EXTERNAL EVALUATIONS

Internal evaluation:

| Advantages | Disadvantages |
|---|---|
| The evaluators are very familiar with the work, the organizational culture and the aims and objectives. | The evaluation team may have a vested interest in reaching positive conclusions about the work or organization. For this reason, other stakeholders, such as donors, may prefer an external evaluation. |
| Sometimes people are more willing to speak to insiders than to outsiders. | The team may not be specifically skilled or trained in evaluation. |
| An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms. | The evaluation will take up a considerable amount of organizational time. |
| An internal evaluation will cost less than an external evaluation. | While it may cost less than an external evaluation, the opportunity costs may be high. |

External evaluation (done by a team or person with no vested interest in the project):

| Advantages | Disadvantages |
|---|---|
| The evaluation is likely to be more objective as the evaluators will have some distance from the work. | Someone from outside the organization or project may not understand the culture or even what the work is trying to achieve. |
| The evaluators should have a range of evaluation skills and experience. | Those directly involved may feel threatened by outsiders and be less likely to talk openly and co-operate in the process. |
| Sometimes people are more willing to speak to outsiders than to insiders. | External evaluation can be very costly. |
| Using an outside evaluator gives greater credibility to findings, particularly positive findings. | An external evaluator may misunderstand what you want from the evaluation and not give you what you need. |

Selecting an External Evaluator or Evaluation Team

Qualities to look for in an external evaluator or evaluation team:
• An understanding of development issues.
• An understanding of organizational issues.
• Experience in evaluating development projects, programs or organizations.
• A good track record with previous clients.
• Research skills.
• A commitment to quality.
• A commitment to deadlines.
• Objectivity, honesty and fairness.
• Logic and the ability to operate systematically.
• Ability to communicate verbally and in writing.
• A style and approach that fits with your organization.
• Values that are compatible with those of the organization.
• Reasonable rates (fees), measured against the going rates.
When you decide to use an external evaluator:
• Check his/her/their references.
• Meet with the evaluators before making a final decision.
• Communicate what you want clearly – good Terms of Reference (see Glossary of Terms) are the foundation of a good contractual relationship.
• Negotiate a contract which makes provision for what will happen if output expectations are not met.
• Ask for a work plan with outputs and timelines.
• Maintain contact – ask for interim reports as part of the contract.
• Build in formal feedback times.
Do not expect any evaluator to be completely objective. S/he will have opinions and ideas – you are not looking for someone who is a blank page! However, his/her opinions must be clearly stated as such, and must not be disguised as "facts". It is also useful to have some idea of his/her (or their) approach to evaluation.
DIFFERENT APPROACHES TO EVALUATION

| Approach | Major purpose | Typical focus questions | Likely methodology |
|---|---|---|---|
| Goal-based | Assessing achievement of goals and objectives. | Were the goals achieved? Efficiently? Were they the right goals? | Comparing baseline (see Glossary of Terms) and progress data (see Glossary of Terms); finding ways to measure indicators. |
| Decision-making | Providing information. | Is the project effective? Should it continue? How might it be modified? | Assessing the range of options related to the project context, inputs, process and product. Establishing some kind of decision-making consensus. |
| Goal-free | Assessing the full range of project effects, intended and unintended. | What are all the outcomes? What value do they have? | Independent determination of needs and standards to judge project worth. Qualitative and quantitative techniques to uncover any possible results. |
| Expert judgement | Use of expertise. | How does an outside professional rate this project? | Critical review based on experience, informal surveying, and subjective insights. |

A combination of all these approaches is recommended as the best option. However, an organization can ask for a particular emphasis, but should not exclude findings that make use of a different approach.

CHAPTER-II PLANNING FOR MONITORING AND EVALUATION

Monitoring and evaluation should be part of your planning process. It is very difficult to go back and set up monitoring and evaluation systems once things have begun to happen. You need to begin gathering information about performance and in relation to targets from the word go.
The first information gathering should, in fact, take place when you do your needs assessment (see the toolkit on overview of planning, the section on doing the ground work). This will give you the information you need against which to assess improvements over time. When you do your planning process, you will set indicators (see Glossary of Terms). These indicators provide the framework for your monitoring and evaluation system. They tell you what you want to know and the kinds of information it will be useful to collect. In this section we look at:
• What do we want to know? This includes looking at indicators for both internal issues and external issues.
• Different kinds of information.
• How will we get information?
• Who should be involved?
There is not one set way of planning for monitoring and evaluation. The ideas included in the toolkits on overview of planning, strategic planning and action planning will help you to develop a useful framework for your monitoring and evaluation system.
If you are familiar with logical framework analysis and already use it in your planning, this approach lends itself well to planning a monitoring and evaluation system.

WHAT DO WE WANT TO KNOW?

What we want to know is linked to what we think is important. In development work, what we think is important is linked to our values. Most work in civil society organizations is underpinned by a value framework.
It is this framework that determines the standards of acceptability in the work we do. The central values on which most development work is built are:
• Serving the disadvantaged;
• Empowering the disadvantaged;
• Changing society, not just helping individuals;
• Sustainability;
• Efficient use of resources.
So, the first thing we need to know is: is what we are doing, and how we are doing it, meeting the requirements of these values? In order to answer this question, our monitoring and evaluation system must give us information about:
• Who is benefiting from what we do? How much are they benefiting?
• Are beneficiaries passive recipients or does the process enable them to have some control over their lives?
• Are there lessons in what we are doing that have a broader impact than just what is happening on our project?
• Can what we are doing be sustained in some way for the long-term, or will the impact of our work cease when we leave?
• Are we getting optimum outputs for the least possible amount of inputs?
Do we want to know about the process or the product? Should development work be evaluated in terms of the process (the way in which the work is done) or the product (what the work produces)? Often, this debate is more about excusing inadequate performance than it is about a real issue. Process and product are not separate in development work.
What we achieve and how we achieve it are often the very same thing. If the goal is development, based on development values, then sinking a well without the transfer of skills for maintaining and managing the well is not enough. Saying: “It was taking too long that way. We couldn’t wait for them to sort themselves out. We said we’d sink a well and we did” is not enough. But neither is: “It doesn’t matter that the well hasn’t happened yet. What’s important is that the people have been empowered.
”
Both process and product should be part of your monitoring and evaluation system. But how do we make process and product and values measurable? The answer lies in the setting of indicators, and this is dealt with in the sub-section that follows.

What Do You Want To Know? Indicators

Indicators are also dealt with in overview of planning, in the section on monitoring and evaluation. Indicators are measurable or tangible signs that something has been done or that something has been achieved. In some studies, for example, an increased number of television aerials in a community has been used as an indicator that the standard of living in that community has improved. An indicator of community empowerment might be an increased frequency of community members speaking at community meetings. If one were interested in the gender impact of, for example, drilling a well in a village, then you could use "increased time for involvement in development projects available to women" as an indicator.
Common indicators for something like overall health in a community are the infant/child/maternal mortality rate, the birth rate, and nutritional status and birth weights. You could also look at less direct indicators such as the extent of immunization, the extent of potable (drinkable) water available and so on. Indicators are an essential part of a monitoring and evaluation system because they are what you measure and/or monitor.
Through the indicators you can ask and answer questions such as:
• Who?
• How many?
• How often?
• How much?
But you need to decide early on what your indicators are going to be so that you can begin collecting the information immediately. You cannot use the number of television aerials in a community as a sign of improved standard of living if you don't know how many there were at the beginning of the process. Some people argue that the problem with measuring indicators is that other variables (or factors) may have impacted on them as well. Community members may be participating more in meetings because a number of new people with activist backgrounds have come to live in the area. Women may have more time for development projects because the men of the village have been attending a gender workshop and have made a decision to share the traditionally female tasks. And so on. While this may be true, within a project it is possible to identify other variables and take them into account. It is also important to note that, if nothing is changing, if there is no improvement in the measurement of the key indicators identified, then your strategy is not working and needs to be rethought.
DEVELOPING INDICATORS

Step 1: Identify the problem situation you are trying to address. The following might be problems:
• Economic situation (unemployment, low incomes etc)
• Social situation (housing, health, education etc)
• Cultural or religious situation (not using traditional languages, low attendance at religious services etc)
• Political or organizational situation (ineffective local government, faction fighting etc)
Step 2: Develop a vision for how you would like the problem areas to be/look. This will give you impact indicators.
What will tell you that the vision has been achieved? What signs will you see that you can measure that will "prove" that the vision has been achieved? For example, if your vision was that the people in your community would be healthy, then you can use health indicators to measure how well you are doing. Has the infant mortality rate gone down? Do fewer women die during childbirth? Has the HIV/AIDS infection rate been reduced? If you can answer "yes" to these questions then progress is being made.
Step 3: Develop a process vision for how you want things to be achieved. This will give you process indicators.
If, for example, you want success to be achieved through community efforts and participation, then your process vision might include things like community health workers from the community trained and offering a competent service used by all; the community organizing clean-up events on a regular basis, and so on.
Step 4: Develop indicators for effectiveness. For example, if you believe that you can increase the secondary school pass rate by upgrading teachers, then you need indicators that show you have been effective in upgrading the teachers, e.g. evidence from a survey in the schools, compared with a baseline survey.
Step 5: Develop indicators for your efficiency targets.
Here you can set indicators such as: planned workshops are run within the stated timeframe; costs for workshops are kept to a maximum of US$2.50 per participant; no more than 160 hours in total of staff time to be spent on organizing a conference; no complaints about conference organization; and so on. With this framework in place, you are in a position to monitor and evaluate efficiency, effectiveness and impact.

DIFFERENT KINDS OF INFORMATION (QUANTITATIVE AND QUALITATIVE)

Information used in monitoring and evaluation can be classified as:
• Quantitative
• Qualitative
Quantitative measurement tells you "how much or how many". How many people attended a workshop, how many people passed their final examinations, how much a publication cost, how many people were infected with HIV, how far people have to walk to get water or firewood, and so on. Quantitative measurement can be expressed in absolute numbers (3 241 women in the sample are infected) or as a percentage (50% of households in the area have television aerials). It can also be expressed as a ratio (one doctor for every 30 000 people).
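As a minimal illustration of these three forms of expression, a few lines of Python are enough (all the figures below are hypothetical, purely for illustration):

```python
# Expressing the same monitoring counts three ways: absolute number,
# percentage and ratio. All figures here are hypothetical.
households_total = 400
households_with_aerials = 200
doctors = 4
population = 120_000

print(f"{households_with_aerials} households have television aerials")           # absolute number
print(f"{100 * households_with_aerials / households_total:.0f}% of households")  # percentage
print(f"one doctor for every {population / doctors:,.0f} people")                # ratio
```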
One way or another, you get quantitative (number) information by counting or measuring. Qualitative measurement tells you how people feel about a situation, or about how things are done, or how people behave. So, for example, although you might discover that 50% of the teachers in a school are unhappy about the assessment criteria used, this is still qualitative information, not quantitative information. You get qualitative information by asking, observing, interpreting. Some people find quantitative information comforting – it seems solid and reliable and "objective". They find qualitative information unconvincing and "subjective". It is a mistake to say that "quantitative information speaks for itself".
It requires just as much interpretation in order to make it meaningful as does qualitative information. It may be a "fact" that enrolment of girls at schools in some developing countries is dropping – counting can tell us that, but it tells us nothing about why this drop is taking place. In order to know that, you would need to go out and ask questions – to get qualitative information. Choice of indicators is also subjective, whether you use quantitative or qualitative methods to do the actual measuring. Researchers choose to measure school enrolment figures for girls because they believe that this tells them something about how women in a society are treated or viewed.
The monitoring and evaluation process requires a combination of quantitative and qualitative information in order to be comprehensive. For example, we need to know what the school enrolment figures for girls are, as well as why parents do or do not send their children to school. Perhaps enrolment figures are higher for girls than for boys because a particular community sees schooling as a luxury and prefers to train boys to do traditional and practical tasks such as taking care of animals. In this case, the higher enrolment of girls does not necessarily indicate higher regard for girls.

HOW WILL WE GET INFORMATION?

This is dealt with in some detail in the toolkit on action planning, in the section on monitoring, collecting information as you go along.
Your methods for information collecting need to be built into your action planning. You should be aiming to have a steady stream of information flowing into the project or organisation about the work and how it is done, without overloading anyone. The information you collect must mean something: don't collect information to keep busy, only do it to find out what you want to know, and then make sure that you store the information in such a way that it is easy to access. Usually you can use the reports, minutes, attendance registers and financial statements that are part of your work anyway as a source of monitoring and evaluation information. However, sometimes you need to use special tools that are simple but useful to add to the basic information collected in the natural course of your work.
Some of the more common ones are:
• Case studies
• Recorded observation
• Diaries
• Recording and analysis of important incidents (called "critical incident analysis")
• Structured questionnaires
• One-on-one interviews
• Focus groups
• Sample surveys
• Systematic review of relevant official statistics.

WHO SHOULD BE INVOLVED?

Almost everyone in the organization or project will be involved in some way in collecting information that can be used in monitoring and evaluation. This includes:
• The administrator who takes minutes at a meeting or prepares and circulates the attendance register;
• The fieldworker who writes reports on visits to the field;
• The bookkeeper who records income and expenditure.
In order to maximize their efforts, the project or organization needs to:
• Prepare reporting formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know about community participation in activities, or women's participation specifically, structure the fieldworker's reporting format so that s/he has to comment on this, backing up observations with facts. (Look at the fieldworker report format given later in this toolkit.)
• Prepare recording formats that include measurement, either quantitative or qualitative, of important indicators.
For example, if you want to know how many men and how many women attended a meeting, include a gender column on your attendance list.
• Record information in such a way that it is possible to work out what you need to know. For example, if you need to know whether a project is sustainable financially, and which elements of it cost the most, then make sure that your bookkeeping records reflect the relevant information.
It is a useful principle to look at every activity and say: what do we need to know about this activity, both process (how it is being done) and product (what it is meant to achieve), and what is the easiest way to find it out and record it as we go along?

CHAPTER-III DESIGNING A MONITORING AND/OR EVALUATION PROCESS

As there are differences between the design of a monitoring system and that of an evaluation process, we deal with them separately here. Under monitoring we look at the process an organization could go through to design a monitoring system. Under evaluation we look at:
• Purpose
• Key evaluation questions
• Methodology.
MONITORING

When you design a monitoring system, you are taking a formative viewpoint and establishing a system that will provide useful information on an ongoing basis so that you can improve what you do and how you do it.

DESIGNING A MONITORING SYSTEM

Below is a step-by-step process you could use in order to design a monitoring system for your organization or project. For a case study of how an organization went about designing a monitoring system, go to the section with examples.
Step 1: At a workshop with appropriate staff and/or volunteers, run by you or a consultant:
• Introduce the concepts of efficiency, effectiveness and impact. Explain that a monitoring system needs to cover all three.
• Generate a list of indicators for each of the three aspects.
• Clarify what variables need to be linked. So, for example, do you want to be able to link the age of a teacher with his/her qualifications in order to answer the question: are older teachers more or less likely to have higher qualifications?
• Clarify what information the project or organization is already collecting.
Step 2: Turn the input from the workshop into a brief for the questions your monitoring system must be able to answer. Depending on how complex your requirements are, and what your capacity is, you may decide to go for a computerized database or a manual one. If you want to be able to link many variables across many cases
(e.g. participants, schools, parent involvement, resources, urban/rural etc), you may need to go the computer route. If you have a few variables, you can probably do it manually. The important thing is to begin by knowing what variables you are interested in and to keep data on these variables. Linking and analysis can take place later. From the workshop you will know what you want to monitor. You will have the indicators of efficiency, effectiveness and impact that have been prioritized.
You will then choose the variables that will help you answer the questions you think are important. So, for example, you might have an indicator of impact which is that "safer sex options are chosen", as an indicator that "young people are now making informed and mature lifestyle choices". The variables that might affect the indicator include:
• Age
• Gender
• Religion
• Urban/rural
• Economic category
• Family environment
• Length of exposure to your project's initiative
• Number of workshops attended.
By keeping the right information you will be able to answer questions such as:
• Does age make a difference to the way our message is received?
• Does economic category make a difference, i.e. do young people in richer areas respond better or worse to the message, or does it make no difference?
• Does the number of workshops attended make a difference to the impact?
Answers to these kinds of questions enable a project or organization to make decisions about what they do and how they do it, to make informed changes to programs, and to measure their impact and effectiveness. Answers to questions such as the following help you to work more efficiently:
• Do more people attend sessions that are organized well in advance?
• Do more schools participate when there is no charge?
• Do more young people attend when sessions are over weekends or in the evenings?
• Does it cost less to run a workshop in the community, or to bring people to our training centre to run the workshop?
Step 3: Decide how you will collect the information you need (see collecting information) and where it will be kept (on computer, in manual files).
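To make Steps 2 and 3 concrete, here is a minimal sketch of what a computerized version of such records could look like, using the widely available pandas library. The field names and data are hypothetical, purely for illustration:

```python
# A minimal sketch of computerized monitoring records: one row per
# participant, with the variables chosen in Step 2. Hypothetical data.
import pandas as pd

records = pd.DataFrame([
    {"age_group": "15-19", "gender": "F", "workshops_attended": 3, "safer_sex_choice": True},
    {"age_group": "15-19", "gender": "M", "workshops_attended": 1, "safer_sex_choice": False},
    {"age_group": "20-24", "gender": "F", "workshops_attended": 4, "safer_sex_choice": True},
    {"age_group": "20-24", "gender": "M", "workshops_attended": 2, "safer_sex_choice": True},
])

# Does age make a difference to the way our message is received?
print(records.groupby("age_group")["safer_sex_choice"].mean())

# Does the number of workshops attended make a difference to the impact?
print(records.groupby("workshops_attended")["safer_sex_choice"].mean())
```

Because each variable is kept from the start, the linking and analysis can indeed "take place later", as Step 2 suggests.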
Step 4: Decide how often you will analyze the information – this means putting it together and trying to answer the questions you think are important.
Step 5: Collect, analyze, report.

EVALUATION

Designing an evaluation process means being able to develop Terms of Reference for such a process (if you are the project or organization) or being able to draw up a sensible proposal to meet the needs of the project or organization (if you are a consultant).
The main sections in Terms of Reference for an evaluation process usually include:
• Background: This is background to the project or organization, something about the problem identified, what you do, how long you have existed, why you have decided to do an evaluation.
• Purpose: Here you would say what it is the organization or project wants the evaluation to achieve.
• Key evaluation questions: What the central questions are that the evaluation must address.
• Specific objectives: What specific areas, internal and/or external, you want the evaluation to address.
So, for example, you might want the evaluation to include a review of finances, or to include certain specific program sites.
• Methodology: Here you might give broad parameters of the kind of approach you favor in evaluation (see the section on more about monitoring and evaluation). You might also suggest the kinds of techniques you would like the evaluation team to use.
• Logistical issues: These would include timing, costing, requirements of team composition and so on.

Purpose

The purpose of an evaluation is the reason why you are doing it.
It goes beyond what you want to know to why you want to know it. It is usually a sentence or, at most, a paragraph. It has two parts:
• What you want evaluated;
• To what end you want it done.
Examples of an evaluation purpose could be:
• To provide the organization with information needed to make decisions about the future of the project.
• To assess whether the organization/project is having the planned impact, in order to decide whether or not to replicate the model elsewhere.
• To assess the program in terms of effectiveness, impact on the target group, efficiency and sustainability, in order to improve its functioning.
The purpose gives some focus to the broad evaluation process.

Key Evaluation Questions

The key evaluation questions are the central questions you want the evaluation process to answer. They are not simple questions. You can seldom answer "yes" or "no" to them. A useful evaluation question:
• Is thought-provoking.
• Challenges assumptions.
• Focuses inquiry and reflection.
• Raises many additional questions.
Some examples of key evaluation questions related to a project purpose. The purpose of the evaluation is to assess how efficient the project is in delivering benefits to the identified community, in order to inform Board decisions about continuity and replicability. Key evaluation questions:
• Who is currently benefiting from the project and in what ways?
• Do the inputs (in money and time) justify the outputs and, if so/if not, on what basis is this claim justified?
• What would improve the efficiency, effectiveness and impact of the current project?
• What are the lessons that can be learned from this project in terms of replicability?
Note that none of these questions deals with a specific element or area of the internal or external functioning of the project or organization. Most would require the evaluation team to deal with a range of project or organizational elements in order to answer them.
Other examples of evaluation questions might be:
• What are the most effective ways in which a project of this kind can address the problem identified?
• To what extent does the internal functioning and structure of the organization impact positively on the program work?
• What learning from this project would have applicability across the full development spectrum?
Clearly, there could be many, many examples. Our experience has shown us that, when an evaluation process is designed with such questions in mind, it produces far more interesting insights than simply asking: what impact are we having?

Methodology of Evaluation

"Methodology", as opposed to "methods", deals more with the kind of approach you use in your evaluation process (see also more about monitoring and evaluation earlier in the toolkit). You could, for example, commission or do an evaluation process that looked almost entirely at written sources, primary or secondary: reports, data sheets, minutes and so on. Or you could ask for an evaluation process that involved getting input from all the key stakeholder groups.
Most terms of reference will ask for some combination of these, but they may also specify how they want the evaluation team to get input from stakeholder groups, for example:
• Through a survey;
• Through key informants;
• Through focus groups.
Here too one would expect to find some indication of reporting formats: Will all reporting be written? Will the team report to management, or to all staff, or to staff and Board and beneficiaries? Will there be interim reports or only a final report? What sort of evidence does the organization or project require to back up evaluator opinions? Who will be involved in analysis? The methodology section of Terms of Reference should provide a broad framework for how the project or organization wants the work of the evaluation done.

CHAPTER-IV COLLECTING INFORMATION

(This is also dealt with in the toolkit on action planning, in the section on monitoring, collecting information as you go along.)
Here we look in detail at:
• Baselines and damage control;
• Methods.
By damage control we mean what you need to do if you failed to get baseline information when you started out.

BASELINES AND DAMAGE CONTROL

Ideally, if you have done your planning well and collected information about the situation at the beginning of your intervention, you will have baseline data. Baseline data is the information you have about the situation before you do anything. It is the information on which your problem analysis is based. It is very difficult to measure the impact of your initiative if you do not know what the situation was when you began it. (See also the toolkit on overview of planning, the section on doing the ground work.) You need baseline data that is relevant to the indicators you have decided will help you measure the impact of your work.
Different levels of baseline data:
• General information about the situation, often available in official statistics, e.g. infant mortality rates, school enrolment by gender, unemployment rates, literacy rates and so on. If you are working in a particular geographical area, then you need information for that area. If it is not available in official statistics, you may need to do some information gathering yourselves.
This might involve house-to-house surveying, either comprehensively or using sampling (see the section after this on methods), or visiting schools, hospitals etc. Focus on your indicators of impact when you collect this information.
• If you have decided to measure impact through a sample of people or families with whom you are working, you will need specific information about those people or families. So, for example, for families (or business enterprises or schools or whatever units you are working with) you may want specific information about income, history, number of people employed, number of children per classroom and so on. You will probably get this information from a combination of interviewing and filling in of basic questionnaires. Again, remember to focus on the indicators which you have decided are important for your work.
• If you are working with individuals, then you need "intake" information – documented information about their situation at the time you began working with them. For example, you might want to know, in addition to age, gender, name and so on, current income, employment status, current levels of education, amount of money spent on leisure activities, amount of time spent on leisure activities, ambitions and so on, for each individual participant.
Again, you will probably get the information from a combination of interviewing and filling in of basic questionnaires, and you should focus on the indicators which you think are important. It is very difficult to go back and get this kind of baseline information after you have begun work and the situation has changed. But what if you didn't collect this information at the beginning of the process? There are ways of doing damage control. You can get anecdotal information (see Glossary of Terms) from those who were involved at the beginning, and you can ask participants if they remember what the situation was when the project began. You may not even have decided what your important indicators are when you began your work. You will have to work it out "backwards", and then try to get information about the situation related to those indicators when you started out. You can speak to people, look at records and other written sources such as minutes, reports and so on.
One useful way of making meaningful comparisons where you do not have baseline information is through using control groups. Control groups are groups of people, businesses, families or whatever unit you are focusing on, that have not had input from your project or organization but are, in most other ways, very similar to those you are working with. For example: You have been working with groups of school children around the country in order to build their self-esteem and knowledge as a way of combating the spread of HIV/AIDS and preventing teenage pregnancies. After a few years, you want to measure what impact you have had on these children.
You are going to run a series of focus groups (see methods) with the children at the schools where you have worked. But you did not do any baseline study with them. How will you know what difference you have made? You could set up control groups at schools in the same areas, with the same kinds of profiles, where you have not worked. By asking both the children at the schools where you have worked, and the children at the schools where you have not worked, the same sorts of questions about self-esteem, sexual behavior and so on, you should be able to tell whether or not your work has made any difference.
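A minimal sketch of the comparison this makes possible, assuming you score the focus group answers in some way (all numbers here are hypothetical):

```python
# Comparing hypothetical self-esteem scores from project schools against
# control schools where the project has not worked.
from statistics import mean

project_scores = [7, 8, 6, 9, 7, 8]   # children at schools where you worked
control_scores = [5, 6, 5, 7, 6, 5]   # children at comparable control schools

print(f"Project schools mean: {mean(project_scores):.1f}")
print(f"Control schools mean: {mean(control_scores):.1f}")
print(f"Difference: {mean(project_scores) - mean(control_scores):.1f}")
```

A real comparison would need a larger sample and, ideally, a test of statistical significance, but even this simple difference gives you a substitute for the missing baseline.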
When you set up control groups, you should try to ensure that:
• The profiles of the control groups are very similar to those of the groups you have worked with. For example, it might be schools that serve the same economic group, in the same geographical area, with the same gender ratio, age groups, ethnic or racial mix.
• There are no other very clear variables that could affect the findings or comparisons. For example, if another project, doing similar work, has been involved with the school, this school would not be a good place to establish a control group.
You want a situation as close as possible to the situation of the beneficiaries of your project when you started out.

METHODS

In this section we are going to give you a "shopping list" of the different kinds of methods that can be used to collect information for monitoring and evaluation purposes. You need to select methods that suit your purposes and your resources. Do not plan to do a comprehensive survey of 100 000 households if you have two weeks and very little money! Use sampling in this case. Sampling is another important concept when using various tools for a monitoring or evaluation process. Sampling is not really a tool in itself, but used with other tools it is very useful. Sampling answers the question: Who do we survey, interview, include in a focus group etc? It is a way of narrowing down the number of possible respondents to make it manageable and affordable.
Sometimes it is necessary to be comprehensive. This means getting to every possible household, or school or teacher or clinic etc. In an evaluation, you might well use all the information collected in every case during the monitoring process in an overall analysis.
Usually, however, unless numbers are very small, for in-depth exploration you will use a sample. Sampling techniques include:
• Random sampling (In theory, random sampling means doing the sampling on a sort of lottery basis where, for example, all the names go into a container, are tumbled around and then the required number are drawn out. This sort of random sampling is very difficult to use in the kind of work we are talking about. For practical purposes you are more likely to, for example, select every seventh household or every third person on the list. The idea is that there is no bias in the selection.)
• Stratified sampling (e.g. every seventh household in the upper income bracket, every third household in the lower income bracket)
• Cluster sampling (e.g. only those people who have been on the project for at least two years).
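A minimal sketch of how the first two techniques might look in practice, given a simple list of household records (the data and field names are hypothetical, and an every-nth selection stands in for true random sampling):

```python
# Practical sampling over a hypothetical list of household records.
import random

households = [{"id": i, "income_bracket": "upper" if i % 4 == 0 else "lower"}
              for i in range(1, 101)]

# Systematic sampling: every seventh household on the list.
systematic_sample = households[::7]

# Stratified sampling: roughly one in seven upper-income households and
# one in three lower-income households, chosen at random within each stratum.
upper = [h for h in households if h["income_bracket"] == "upper"]
lower = [h for h in households if h["income_bracket"] == "lower"]
stratified_sample = (random.sample(upper, len(upper) // 7)
                     + random.sample(lower, len(lower) // 3))

print(len(systematic_sample), "households in the systematic sample")
print(len(stratified_sample), "households in the stratified sample")
```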
It is also usually best to use triangulation (see Glossary of Terms). This is a fancy word that means that one set of data or information is confirmed by another. You usually look for confirmation from a number of sources saying the same thing.

| Tool | Description | Usefulness | Disadvantages |
|---|---|---|---|
| Interviews | These can be structured, semi-structured or unstructured. They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Can be a source of qualitative and quantitative information. | Can be used with almost anyone who has some involvement with the project. Can be done in person or on the telephone or even by e-mail. Very flexible. | Requires some skill in the interviewer. |
| Key informant interviews | These are interviews that are carried out with specialists in a topic, or someone who may be able to shed a particular light on the process. | As these key informants often have little to do with the project or organization, they can be quite objective and offer useful insights. They can provide something of the "big picture" where people more involved may focus at the micro (small) level. | Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (cannot be challenged) because it has been said by a key informant. |
| Questionnaires | These are written questions that are used to get written responses which, when analysed, will enable indicators to be measured. | This tool can save lots of time if it is self-completing, enabling you to get to many people. Done in this way it gives people a feeling of anonymity and they may say things they would not say to an interviewer. | With people who do not read and write, someone has to go through the questionnaire with them, which means no time is saved and the numbers one can reach are limited. With questionnaires, it is not possible to explore what people are saying any further. Questionnaires are also over-used and people get tired of completing them. Questionnaires must be piloted to ensure that questions can be understood and cannot be misunderstood. If the questionnaire is complex and will need computerised analysis, you need expert help in designing it. |
| Focus groups | In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue. | This can be a useful way of getting opinions from quite a large sample of people. | It is quite difficult to do random sampling for focus groups, and this means findings may not be generalized. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed. Difficult to facilitate – requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again. |
| Community meetings | This involves a gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input, to help in measuring indicators. | Community meetings are useful for getting a broad response from many people on specific issues. It is also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful to have at critical points in community projects. | |
| Fieldworker reports | Structured report forms that ensure that indicator-related questions are asked, answers recorded, and observations recorded on every visit. | Flexible, an extension of normal work, so cheap and not time-consuming. | Relies on fieldworkers being disciplined and insightful. |
| Ranking | This involves getting people to say what they think is most useful, most important, least useful etc. | It can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used. | Ranking is quite a difficult concept to get across and requires very careful explanation, as well as testing to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted. |
| Visual/audio stimuli | These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems or issues or past events or even future events. | Very useful together with other tools, particularly with people who cannot read or write. | You have to have appropriate stimuli, and the facilitator needs to be skilled in using such stimuli. |
| Rating scales | This technique makes use of a continuum, along which people are expected to place their own feelings, observations etc. People are usually asked to say whether they agree strongly, agree, don't know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write. (See the sketch after this table.) | It is useful to measure attitudes, opinions and perceptions. | You need to test the statements very carefully to make sure that there is no possibility of misunderstanding. A common problem is when two concepts are included in the statement and you cannot be sure whether an opinion is being given on one or the other or both. |
| Critical event/incident analysis | Focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened. | Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened, and to be able to diagnose what went wrong. | The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective. |
| Participant observation | This involves direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance. | Can be a useful way of confirming, or otherwise, information provided in other ways. | It is difficult to observe and participate at the same time. The process is very time-consuming. |
| Self-drawings | This involves getting participants to draw pictures, usually of how they feel or think about something. | Can be very useful, particularly with younger children. | Can be difficult to explain and interpret. |
INTERVIEWING SKILLS

Some do's and don'ts for interviewing:
• DO test the interview schedule beforehand for clarity, and to make sure questions cannot be misunderstood.
• DO state clearly what the purpose of the interview is.
• DO assure the interviewee that what is said will be treated in confidence.
• DO ask if the interviewee minds if you take notes or tape record the interview.
• DO record the exact words of the interviewee as far as possible.
• DO keep talking as you write.
• DO keep the interview to the point.
• DO cover the full schedule of questions.
• DO watch for answers that are vague and probe for more information.
• DO be flexible and note down everything interesting that is said, even if it isn't on the schedule.
• DON'T offend the interviewee in any way.
• DON'T say things that are judgmental.
• DON'T interrupt in mid-sentence.
• DON'T put words into the interviewee's mouth.
• DON'T show what you are thinking through a changed tone of voice.

CHAPTER-V ANALYSING INFORMATION

Whether you are looking at monitoring or evaluation, at some point you are going to find yourself with a large amount of information, and you will have to decide how to make sense of it or to analyze it.
If you are using an external evaluation team, it will be up to this team to do the analysis but, sometimes in evaluation, and certainly in monitoring, you, the organization or project, have to do the analysis. Analysis is the process of turning the detailed information into an understanding of patterns, trends and interpretations. The starting point for analysis in a project or organizational context is quite often very unscientific. It is your intuitive understanding of the key themes that come out of the information gathering process. Once you have the key themes, it becomes possible to work through the information, structuring and organizing it. The next step is to write up your analysis of the findings as a basis for reaching conclusions, and making recommendations. So, your process looks something like this:

[Diagram: information → key themes → structured findings → conclusions → recommendations]

TAKING ACTION

Monitoring and evaluation have little value if the organisation or project does not act on the information that comes out of the analysis of data collected.
Once you have the findings, conclusions and recommendations from your monitoring and evaluation process, you need to:
• Report to your stakeholders;
• Learn from the overall process;
• Make effective decisions about how to move forward; and, if necessary,
• Deal with resistance to the necessary changes within the organization or project, or even among other stakeholders.

REPORTING

Whether you are monitoring or evaluating, at some point, or points, there will be a reporting process. This reporting process follows the stage of analysing information. You will report to different stakeholders in different ways, sometimes in written form, sometimes verbally and, increasingly, making use of tools such as PowerPoint presentations, slides and videos.
Below is a table suggesting different reporting mechanisms that might be appropriate for different stakeholders and at different times in the project cycle. For writing tips, go to the toolkit on effective writing for organizations.

Board
• Interim (based on monitoring analysis): Written report.
• Evaluation: Written report, with an executive summary, and a verbal presentation from the evaluation team.

Management Team
• Interim (based on monitoring analysis): Written report, discussed at a management team meeting.
• Evaluation: Written report, presented verbally by the evaluation team.

Staff
• Interim (based on monitoring): Written and verbal presentation at departmental and team levels.
• Evaluation: Written report, presented verbally by the evaluation team and followed by in-depth discussion of relevant recommendations at departmental and team levels.

Beneficiaries
• Interim (but only at significant points) and evaluation: Verbal presentation, backed up by a summarized document, using appropriate tables, charts, visuals and audio-visuals. This is particularly important if the organization or project is contemplating a major change that will impact on beneficiaries.

Donors
• Interim (based on monitoring): Summarized in a written report.
• Evaluation: Full written report with an executive summary, or a special version focused on donor concerns and interests.

Wider development community
• Evaluation: Journal articles, seminars, conferences, websites.

OUTLINE OF AN EVALUATION REPORT

EXECUTIVE SUMMARY: Usually not more than five pages – the shorter the better – intended to provide enough information for busy people, but also to whet people’s appetite so that they want to read the full report.
PREFACE: Not essential, but a good place to thank people and make a broad comment about the process, findings and so on.
CONTENTS PAGE: With page numbers, to help people find their way around the report.
SECTION 1: INTRODUCTION: Usually deals with the background to the project/organization, the background to the evaluation, the brief to the evaluation team, the methodology, the actual process and any problems that occurred.
SECTION 2: FINDINGS: Here you would have sections dealing with the important areas of findings, e.g. efficiency, effectiveness and impact, or the themes that have emerged.
SECTION 3: CONCLUSIONS: Here you would draw conclusions from the findings – the interpretation, what they mean. It is quite useful to use a SWOT Analysis (explained in the Glossary of Terms) as a summary here.
SECTION 4: RECOMMENDATIONS: This would give specific ideas for a way forward in terms of addressing weaknesses and building on strengths.
APPENDICES: Here you would include the Terms of Reference, the list of people interviewed, the questionnaires used, possibly a map of the area, and so on.
LEARNING

Learning is, or should be, the main reason why a project or organization monitors its work or does an evaluation. By learning what works and what does not, what you are doing right and what you are doing wrong, you, as project or organization management, are empowered to act in an informed and constructive way. This is part of a cycle of action and reflection (see the diagram in the section on why to do monitoring and evaluation). The purpose of learning is to make changes where necessary, and to identify and build on strengths where they exist.
Learning also helps you to understand, and to make conscious, the assumptions you hold. So, for example, perhaps you assumed that children at more affluent schools would benefit less from your intervention than those from less affluent schools. Your monitoring data might show you that this assumption was wrong.
Once you realize this, you will probably view your interactions with these schools differently. Being in a constant mode of action-reflection-action also helps to make you less complacent. Sometimes, when projects or organizations feel they “have got it right”, they settle back and do things the same way, without questioning whether they are still getting it right. They forget that situations change, that the needs of project beneficiaries may change, and that strategies need to be reconsidered and revised. So, for example, an organization provided training and programs for community radio stations.
Because it had excellent equipment and an excellent production studio, it invited stations to send presenters to its training centre for training in how to present the programs it (the organization) was producing. It developed an excellent reputation for high-quality training and production. Over time, however, the community radio stations began to produce their own programs, and what they really wanted was for the organization to send someone to their stations to help them workshop ideas and to give them feedback on the work they were doing. This came out in an evaluation process, and the organization realized that it had become a bit smug in the comfort zone of what it was good at, but that, if it really wanted to help community radio stations, it needed to change its strategy. Organizations and projects that don’t learn stagnate. The process of rigorous monitoring and evaluation forces organizations and projects to keep learning – and growing.

EFFECTIVE DECISION-MAKING

As project or organization management, you need the conclusions and recommendations that come out of monitoring and evaluation to help you make decisions about your work and the way you do it.
The success of the process is dependent on the ability of those with management responsibilities to make decisions and take action. The steps involved in the whole process are:
• Plan properly – know what you are trying to achieve and how you intend to achieve it.
• Implement.
• Monitor and evaluate.
• Analyze the information you get from monitoring and evaluation, and work out what it is telling you.
• Look at the potential consequences for your plans of what you have learned from the analysis of your monitoring and evaluation data.
• Draw up a list of options for action.
• Get consensus on what you should do and a mandate to take action.
• Share adjustments and plans with the rest of the organization and, if necessary, with your donors and beneficiaries.
• Implement.
• Monitor and evaluate.
The key steps for effective decision-making are:
• As a management team, understand the implications of what you have learned.
• Work out what needs to be done, and have clear motivations for why it needs to be done.
• Generate options for how to do it.
• Look at the options critically in terms of which are likely to be the most effective.
• Agree as a management team.
• Get organizational/project consensus on what needs to be done and how it needs to be done.
• Get a mandate (usually from a Board, but possibly also from donors and beneficiaries) to do it.
• Do it.

DEALING WITH RESISTANCE

Not everyone will be pleased about any changes in plans you decide need to be made. People often resist change.
Some of the reasons for this include:
• People are comfortable with things the way they are – they don’t want to be pushed out of their comfort zones.
• People worry that any changes will lessen their levels of productivity – they feel judged by what they do and how much they do, and don’t want to take the time out necessary to change plans or ways of doing things.
• People don’t like to rush into change – how do we know that something different will be better? They spend so long thinking about it that it is too late for useful changes to be made.
• People don’t have the “big picture”. They know what they are doing and they can see it is working, so they can’t see any reason to change anything at all.
• People don’t have a long-term commitment to the project or the organization – they see it as a stepping stone on their career path. They don’t want change because it will delay the items they want to be able to tick off on their CVs.
• People feel they can’t cope – they have to keep doing what they are doing but also work at bringing about change. It’s all too much.
How can you help people accept changes?
• Make the reasons why change is needed very clear – take people through the findings and conclusions of the monitoring and evaluation processes, and involve them in decision-making.
• Help people see the whole picture – beyond their little bit to the overall impact on the problem analyzed.
• Focus on the key issues – we have to do something about this!
• Recognize anger, fear and resistance. Listen to people; give them the opportunity to express frustration and other emotions.
• Find common ground – things that they also want to see changed.
• Encourage a feeling that change is exciting, that it frees people from doing things that are not working so they can try new things that are likely to work, that it releases productive energy.
• Emphasize the importance of everyone being committed to making it work.
• Create conditions for regular interaction – anything from a seminar to graffiti on a notice board – to discuss what is happening and how it is going.
• Pace change so that people can deal with it.
CHAPTER-VI BEST PRACTICE EXAMPLES OF INDICATORS

Please note that these are just examples – they may or may not suit your needs, but they should give you some idea of the kinds of indicators you can use, especially for measuring impact.

Economic Development Indicators
• Average annual household income
• Average weekly/monthly wages
• Employment, by age group
• Unemployment, by age group, by gender
• Employment, by occupation, by gender
• Government employment
• Earned income levels
• Average length of unemployment period
• Default rates on loans
• Ratio of home owners to renters
• Per capita income
• Average annual family income
• % of people below the poverty line
• Ratio of seasonal to permanent employment
• Growth rate of small businesses
• Value of residential construction and/or renovation

Social Development Indicators
• Death rate
• Life expectancy at birth
• Infant mortality rate
• Causes of death
• Number of doctors per capita
• Number of hospital beds per capita
• Number of nurses per capita
• Literacy rates, by age and gender
• Student:teacher ratios
• Retention rate by school level
• School completion rates by exit points
• Public spending per student
• Number of suicides
• Causes of accidents
• Dwellings with running water
• Dwellings with electricity
• Number of homeless
• Number of violent crimes
• Birth rate
• Fertility rate
• Gini distribution of income (see Glossary of Terms)
• Rates of hospitalization
• Rates of HIV infection
• Rates of AIDS deaths
• Number of movie theatres/swimming pools per 1,000 residents
• Number of radios/televisions per capita
• Availability of books in traditional languages
• Traditional languages taught in schools
• Time spent listening to radio/watching television, by gender
• Number of programs on television and radio in traditional languages and/or dealing with traditional customs
• Church participation, by age and gender

Political/Organizational Development Indicators
• Number of community organizations
• Types of organized sport
• Number of tournaments and games
• Participation levels in organized sport
• Number of youth groups
• Participation in youth groups
• Participation in women’s groups
• Participation in groups for the elderly
• Number of groups for the elderly
• Structure of political leadership, by age and gender
• Participation rate in elections, by age and gender
• Number of public meetings held
• Participation in public meetings, by age and gender

DESIGNING A MONITORING SYSTEM – CASE STUDY

What follows is a description of a process that a South African organization called Puppets against AIDS went through in order to develop a monitoring system which would feed into monitoring and evaluation processes. The main work of the organization is presenting workshopped plays and/or puppet shows related to lifeskill issues, especially those lifeskills to do with sexuality, at schools across the country. The organization works with a range of age groups, with different “products” (scripts) being appropriate at different levels. Puppets against AIDS wanted to develop a monitoring and evaluation system that provided useful information on the efficiency, effectiveness and impact of its operations. To this end, it wanted to develop a database that:
• Provided all the basic information the organization needed about clients and services given.
• Produced reports that enabled the organization to inform itself and other stakeholders, including donors, partners and even schools, about the impact of the work, and about what affected that impact.
The organization made a decision to go for a computerized monitoring system.
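Purely by way of illustration – this is not the actual Puppets against AIDS system, and every table and column name below is a hypothetical stand-in – the core of such a computerized monitoring database might be sketched like this:

```python
import sqlite3

# A minimal, hypothetical schema for a school-visit monitoring database.
conn = sqlite3.connect("monitoring.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS schools (
    school_id        INTEGER PRIMARY KEY,
    name             TEXT NOT NULL,
    region           TEXT,
    economic_profile TEXT   -- one of the variables thought to affect impact
);

CREATE TABLE IF NOT EXISTS services (
    service_id  INTEGER PRIMARY KEY,
    school_id   INTEGER REFERENCES schools(school_id),
    script_used TEXT,        -- which "product" was presented
    team        TEXT,        -- which acting team presented it
    visit_date  TEXT
);

CREATE TABLE IF NOT EXISTS questionnaire_scores (
    score_id  INTEGER PRIMARY KEY,
    school_id INTEGER REFERENCES schools(school_id),
    stage     TEXT,           -- 'baseline', 'midpoint' or 'end'
    avg_score REAL            -- aggregated student questionnaire score
);
""")
conn.commit()
```

Linking services and questionnaire scores back to each school record is what makes it possible to ask how variables such as a school’s economic profile, or the script and team used, relate to changes in impact scores over time.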
Much of the day-to-day information needed by the organization was already on a computerized database (e.g. schools, regions, services provided and so on), but the monitoring system would require a substantial upgrade and the development of database software specific to the organization’s needs. The organization also made the decision to develop the system initially for a pilot project, but with the intention of extending it to all the work over time. This pilot project would work with about 60 schools, using different scripts each year, over a period of three years. In order to raise the money needed for this process, Puppets against AIDS needed some kind of brief for what was required so that it could be costed. At an initial workshop with staff, facilitated by consultants, the staff generated a list of indicators for efficiency, effectiveness and impact in relation to their work. These were the things staff wanted to know from the system about what they did, how they did it, and what difference it made. The terms were defined as follows:

Efficiency: Here what needed to be assessed was how quickly, how correctly, how cost-effectively and with what use of resources the services of the organization were offered. Much of this information was already collected and was contained in reports which reflected planning against achievement. It needed to be made “computer friendly”.

Effectiveness: Here what needed to be assessed was whether the organization was getting results in terms of its strategy and shorter-term impact. For example, were the puppet shows an effective means of communicating messages about sexuality? Again, this information was already being collected and just needed to be adapted to fit the computerized system.

Impact: Here what needed to be assessed was whether the strategy worked, in that it had an impact on changing the behavior of individuals (in this case the students), and whether that change in behavior impacted positively on the society of which the individuals are a part. The organization had a strong intuitive feeling that it was working, but wanted to be able to measure this more scientifically, and to be able to look at what variables made impact more or less likely, or affected the degree of impact.

Staff generated a list of the different variables that they thought might be important in assessing and accounting for differences of impact. The monitoring system would need to link information on impact to these variables. The intention was to provide both qualitative and quantitative information. The consultants and a senior staff member then developed measurable indicators of impact and a tabulation of important variables, which included:
• Gender and age profile of the proposed age cohort
• Economic profile of the school
• Religious profile of the school
• Teacher profile at the school
• Approach to discipline at the school
• Which scripts were used
• Which acting teams presented the scripts
• And so on.

Forms/questionnaires were developed to measure impact indicators before the first intervention (to provide baseline information) and then at various points in the process, as well as to categorize such concepts as “teacher profile”. The student questionnaire was designed in such a way as to make it possible to aggregate a score which could be compared when the questionnaire was administered at different stages in the process. The questionnaire took the form of a series of statements with which students were asked to agree/disagree/strongly agree/strongly disagree. So, for example, statements to do with an increase in student self-esteem included “When I look in a mirror, I like what I see” and “Most of the people I know like the real me”.
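As a minimal sketch of how such an aggregated score might work (the 1–4 scale values and the scoring logic are assumptions for illustration, not the organization’s actual scheme):

```python
# Map agreement levels to numeric values (an assumed 1-4 scale).
SCALE = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def aggregate_score(responses):
    """Average a student's answers across all statements into one score."""
    values = [SCALE[answer] for answer in responses.values()]
    return sum(values) / len(values)

# Hypothetical baseline and follow-up responses for one student.
baseline = {
    "When I look in a mirror, I like what I see": "disagree",
    "Most of the people I know like the real me": "agree",
}
followup = {
    "When I look in a mirror, I like what I see": "agree",
    "Most of the people I know like the real me": "strongly agree",
}

before, after = aggregate_score(baseline), aggregate_score(followup)
print(f"Baseline: {before:.2f}, follow-up: {after:.2f}, change: {after - before:+.2f}")
```

Because every administration of the questionnaire reduces to one comparable number, changes between the baseline and later stages can be tracked per student, per school, or against any of the variables listed above.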
The organization indicated that it wanted the system to generate reports that would enable it to know:
• What difference is there between the indicator ratings on the impact objective at the beginning and end of the process?
• What difference is there between teacher attitudes at the beginning and end of the process?
• What variables to do with the school and school environment impact on the degree of difference between indicators at the beginning and end of the process?
• What variables to do with the way in which the shows are presented impact on the degree of difference at the beginning and end of the process?
All this was written up as a brief which was given to software experts, who then came up with a system that would meet the necessary requirements. The process was slow and demanding, but eventually the system was in place, and it is currently being tested.

FIELDWORKER REPORTING FORMAT

This format was used by an early childhood development learning centre to measure the following indicators in the informal schools with which it worked:
• Increasingly skilled educare teachers.
• An increased amount of self-made equipment.
• Records up to date.
• Payments up to date.
• Attendance at committee meetings.

FIELD VISIT REPORT

Date:
Name of school:
Information obtained from:
Report completed by:
Field visit number:
———————————————————————————————
1. List the skills used by the teachers in the time period of your visit to the school:
2. List the self-made equipment visible in the school:
3. List the fundraising activities the school committee is currently involved in:
4. Record-keeping assessment (tick one column for each kind of record):
Kind of record | Up to date and accurate | Up to date but not very accurate | Not up to date | Not attempted
Bookkeeping | | | |
Petty cash | | | |
Filing | | | |
Correspondence | | | |
Stock control | | | |
Registers | | | |
5. Number of children registered:
Average attendance over the past two months:
6. Number of payments outstanding for longer than two months:
7. Average attendance at committee meetings over the past two months:
8. Comments on this visit:
9. Comparison with the previous field visit:

PART-II

CHAPTER-VII METHODOLOGY OF PROJECT APPRAISAL

Appraisal involves a careful checking of the basic data, assumptions and methodology used in project preparation, an in-depth review of the work plan, cost estimates and proposed financing, an assessment of the project’s organizational and management aspects, and, finally, the viability of the project. It is mandatory for the Project Authorities to undertake project appraisal, or at least to give details of the financial, economic and social benefits and suitably incorporate them in the PC-I. These projects are examined in the Planning and Development Division from the technical, institutional/organizational/managerial, financial and economic points of view, depending on the nature of the project. On the basis of such an assessment, a judgment is reached as to whether the project is technically sound, financially justified and viable from the point of view of the economy as a whole. In the Planning and Development Division, there is a division of labor in the appraisal of projects prepared by the concerned Executing Agencies. The concerned Technical Section, in consultation with other technical sections, i.e. the Physical Planning & Housing, Manpower, Governance and Environment sections, undertakes the technical appraisal wherever necessary. This covers engineering, commercial, organizational and managerial aspects, while the Economic Appraisal Section carries out the pre-sanction appraisal of development projects from the financial and economic points of view. Economic appraisal of a project is concerned with the desirability of carrying out the project from the standpoint of its contribution to the development of the national economy. Whereas financial analysis deals only with costs and returns to project participants, economic analysis deals with costs and returns to society as a whole. The rationale behind project appraisal is to provide decision-makers with financial and economic yardsticks for the selection or rejection of projects from among competing alternative proposals for investment. The techniques of project appraisal can be divided under two heads:
• Undiscounted
o Payback period
o Profit & Loss account
• Discounted
o Net Present Value (NPV)
o Benefit-Cost Ratio (BCR)
o Internal Rate of Return (IRR)
o Sensitivity Analysis (treatment of uncertainty)
o Domestic Resource Cost (Modified Bruno Ratio)
The different investment appraisal criteria are given at Appendix-I.
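To make the discounted criteria concrete, here is a minimal sketch of NPV, BCR and IRR calculations (the cash flows are invented, and the bisection search for IRR is just one simple way to find the rate at which NPV equals zero):

```python
def npv(rate, cashflows):
    """Net Present Value: discount each year's net cash flow back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bcr(rate, benefits, costs):
    """Benefit-Cost Ratio: present value of benefits over present value of costs."""
    pv_b = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))
    pv_c = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    return pv_b / pv_c

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal Rate of Return: the discount rate at which NPV is zero,
    found by bisection (assumes NPV changes sign between lo and hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 1,000 outlay in year 0, then 300 net benefit yearly for 5 years.
flows = [-1000, 300, 300, 300, 300, 300]
print(f"NPV at 12%: {npv(0.12, flows):.1f}")
print(f"IRR: {irr(flows):.1%}")
print(f"BCR at 12%: {bcr(0.12, [0, 300, 300, 300, 300, 300], [1000, 0, 0, 0, 0, 0]):.2f}")
```

At the 12 percent test rate used for economic viability (see below), this hypothetical project shows a positive NPV and a BCR above one, and its IRR exceeds the cutoff, so it would pass the discounted criteria.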
Economic viability of the project is invariably judged at a 12 percent discount rate, i.e. the opportunity cost of capital. However, in the case of financial analysis, the actual rate of interest, i.e. the rate at which capital is obtained, is used. For government-funded projects, the discount rate is fixed by the Budget Wing of the Finance Division for development loans and advances on a yearly basis. In case the project is funded by more than one source, the financial analysis is carried out at the weighted average cost of capital (WACC) for each project. If the project is financed through foreign grants, the financial analysis is undertaken at a zero discount rate. However, the