Post-needs assessment activities
August 18, 2009

    Assessment – the process of estimating the value or quality of something before or during a process/event

    Evaluation – the process of measuring what has been achieved during and/or after a process/event

    Monitoring – the maintenance of regular surveillance of a process/event

Effective evaluation depends on knowing what you are aiming for, what you want to evaluate and why, and what your time and cost constraints are. There are different kinds of evaluation for different purposes. Evaluation should assist in learning what works well, and thereby help to build the capacity and capability of what is being assessed. It should be a process of collecting information to assess the value of any activity and the quality of the program.

There are a number of stakeholders who will be expecting feedback on the program – for different reasons, at different times and in different formats:

  • A needs assessment will identify certain expected outcomes. Ultimately, once a BRIDGE activity has been designed and implemented, one would return to the needs assessment to measure how far those expected outcomes have been achieved.
  • Funders/donors of the program (and of BRIDGE as a whole) will want to know if the aims and objectives listed in the project proposal underlying their funding have been met (and proof thereof).
  • Funders of the program and project managers will expect financial accounting.
  • The program developers will want feedback on the choices made in compiling the agenda and program curriculum – did it work as envisioned?
  • Participants will want to be reminded, on the final workshop day, of the program objectives and of their own expectations, in order to feel satisfied with the investment of time and energy they have made.
  • Facilitators will want feedback on their performance.
  • The curriculum designers of the V2 modules will want feedback on the activities, Key Understandings and resources used in order to improve the modules.
  • The BRIDGE Office and BRIDGE partners will want concrete facts and documentation about the program for statistical and archival purposes, as well as feedback on the successes and difficulties of the program, in order to improve the BRIDGE Package of services and tools.

In your role as program developer or facilitator, meeting these assessment, evaluation and reporting needs is something to take into consideration in the planning stages. A number of tools are available to help:

  • Needs assessment report and recommendations that will inform the design and implementation phase
  • Program Objectives, as developed in consultation with program stakeholders
  • Key Understandings, Learning Outcomes and Assessment Criteria specified in the modules that you have chosen, which you will have adapted to your specific program
  • Participant expectations, as described on the first day
  • Games and techniques for evaluation and assessment included in this manual, and in the TtF workshop

Assigning a workshop administration role, either to one of the facilitators or to an external person, is an invaluable asset: it ensures that reporting obligations are not forgotten.

In addition to the formal and informal assessment and evaluation requirements, there are some guiding questions that should inform all of the decisions made, both during program development and during facilitation of the workshop. Are we improving electoral processes? Are we strengthening the confidence and competence of key stakeholders? You may not be able to measure these more fundamental questions in numbers or on forms, but you will most probably have anecdotal evidence, which can serve a valuable purpose in explaining why BRIDGE programs are effective.

The most important concept to understand about evaluation and assessment in BRIDGE is that we are assessing the success or otherwise of the program, rather than the success of individual participants in learning or understanding. The assessment tools used in BRIDGE are not designed to pass or fail participants.

Planning for evaluation

Any BRIDGE program will cost money, and implementers will therefore inevitably have to account to those who are funding them for the way in which the money has been spent, and for the benefits which flow from the expenditure. Processes for measuring the impact of a BRIDGE program will therefore invariably need to be developed. This issue can be approached at a number of different levels. An assessment may be:

  • Made of the validity of the needs assessment
  • Made of the short-term or slightly longer-term impact which a program has had on individual participants. This may be based on the participants' self-assessments, and/or on the judgement of the facilitators, and/or on the judgements made by their colleagues of the apparent impact which the program has had.
  • Made of the overall success of a program – this can be done by examining evaluations prepared during and immediately after the workshops by participants, the facilitators and, where relevant, the recipient organisation.
  • Made of the impact which the use of BRIDGE has had on the way in which the beneficiary organisation does its work. Such an assessment may be done internally by the organisation, but may also take into account judgements made by stakeholders, such as donors, who work with the organisation.
  • Attempted of the impact BRIDGE has had on the state of democratic development in a country. This will normally be exceptionally difficult to judge, since overall democratic development is influenced by myriad factors, of which interventions in the area of electoral capacity-building are only one.

The BRIDGE founding partners are clearly committed to the process of continuously improving the product, and feedback from evaluations is a critical resource for them in achieving this objective. However, because the time of the client is precious, the extent and degree of evaluation need to be agreed at the planning stage. The evaluation process involves comparing performance against expectations, and therefore needs to be structured taking agreed outputs or outcomes into account.

A clear focus on defining expectations when planning evaluations also helps to ensure that expectations are realistic, and shared by all involved. This is discussed further in the Facilitators' Notes.

Evaluation needs to match the program objectives (see section 3.1 Setting Program Objectives, below).

Evaluation stages

Evaluation of a BRIDGE program should take place:

  • at the beginning of the program
  • during and at the end of the workshops – this can include spot checks, informal chats with individuals, observation, and open-ended feedback or evaluation sheets
  • at the end of the program (after the workshops)
  • at a suitable time after the program, to assess long-term impact

The first two stages of the evaluation process have already been covered in previous chapters – pre-program assessment (which determines what the evaluation is measuring against) and monitoring during the workshop (which contributes to the evaluation process).

Assuming pre-program assessment and workshop monitoring have been conducted adequately, post-program evaluation should be more of a compilation exercise, intended to give an overall view of the implementation of the project and to measure its achievements. The impact of the project can be assessed by measuring the competence and skills of both the national authorities and the individual participants in dealing with matters covered during the program.

Evaluations, if done effectively, comprise several levels and strategies – each targeting different stakeholders with different approaches. Before turning to specific examples of evaluation strategies and questions, we need to clarify exactly what we mean by evaluation.

Refer to: 8.5 Annex 5: BRIDGE Evaluation Cycle for a summary of the main elements of evaluation, and things to consider when designing an evaluation process for BRIDGE.
