
Although evaluation takes place at the end of an event or program, it should be considered from the very first stages of planning. 2.5 Planning for Evaluation gives an introduction to evaluation and outlines the steps taken in planning for evaluation at the beginning of a program. A good evaluation process is built on strong foundations laid at the start.

Refer to 8.6 Annex 6: Post-workshop Evaluation Sheets for questions put to both the client organisation and participants after BRIDGE workshops, and to 8.5 Annex 5: BRIDGE Evaluation Cycle for a summary of the main elements of evaluation and of the things to consider when designing an evaluation process for BRIDGE.

Evaluation by the client organisation

This would normally be achieved by collating the workshop evaluation sheets (daily, landmark, or end-of-workshop or end-of-program) and producing a written report that summarises the strengths and weaknesses of the program and makes recommendations based on these findings. The report would normally be prepared by the program organisers.

End-of-workshop evaluation sheets, in which participants rate facilitators and content, indicate how participants felt at the end of the workshop. But at that point participants cannot tell the full story of whether they have benefited from the training, because they have not yet had time to put what they have learned into practice. It is therefore also useful to distribute evaluation sheets several weeks later, asking participants how they are using the skills and information gained from the workshop in their work environment, how easy or difficult it is for them to apply the new knowledge and skills, and what would make the program more effective. The reason for training is not to improve how participants perform in the training room, but how they perform outside it.

Care should be taken when designing surveys: both open-ended and closed questions should be asked. Open-ended questions have no single definite answer; they can be useful, but the drawback is that responses can be hard to interpret. Closed questions offer a restricted set of answers from which the respondent chooses (one choice may be ‘other’), and data from them is easy to collate. The program organisers would need to prepare a report of the collated sheets.
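
If evaluation sheets are collected electronically, the collation step can be partly automated. The following is a minimal sketch only, assuming the responses have been exported to a CSV file with one column per question; the file name and column names are hypothetical and not part of any BRIDGE template. Closed-question answers are counted so they can be reported as frequencies, while open-ended answers are simply gathered for reading.

    import csv
    from collections import Counter, defaultdict

    # Hypothetical column names for an exported sheet; a real form would differ.
    CLOSED_QUESTIONS = ["facilitator_rating", "content_rating"]
    OPEN_QUESTIONS = ["most_useful_session", "suggested_improvements"]

    def collate(path):
        closed = defaultdict(Counter)   # question -> answer -> count
        open_ended = defaultdict(list)  # question -> list of free-text answers
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                for q in CLOSED_QUESTIONS:
                    if row.get(q):
                        closed[q][row[q]] += 1
                for q in OPEN_QUESTIONS:
                    if row.get(q):
                        open_ended[q].append(row[q])
        return closed, open_ended

    # Example use (the file name is a placeholder):
    # closed, open_ended = collate("evaluation_sheets.csv")
    # for question, counts in closed.items():
    #     print(question, counts.most_common())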

Evaluation reports should not be so lengthy that decision-makers do not bother to read them. To make an impact, and to increase the likelihood that decision-makers actually read them, reports should be broken up into easy-to-consume ‘chunks’ of information, for example ‘Issues’, ‘Evidence’ and ‘Recommendations’.

Client organisations that wish to evaluate the participants of a BRIDGE workshop using tests (separately from the workshop organisers) may do so. Formal tests of participant learning could be administered some time after the workshop has been completed, ensuring that the test content is matched to the Learning Outcomes.

Clients may also wish to assess the level of satisfaction of program stakeholders (e.g. donors, sponsors) after a program.

Evaluation by and of the facilitators (and the program team)

If a client wishes to evaluate the facilitators of a BRIDGE workshop, they may do so. The BRIDGE partners have a ‘quality control’ process for all accredited facilitators, which can draw on information from workshop evaluation reports. Facilitators themselves are encouraged to engage in self-appraisal and peer appraisal during in-workshop monitoring (a self-evaluation form is included as a Facilitators Resource in every module), and to conduct post-workshop facilitator evaluations as part of their end-of-workshop debrief. They may also be responsible for preparing post-workshop evaluations on behalf of the program organisers or partner organisations. The results of these debriefs could also be included in the final reports of the program.

For evaluations to reflect BRIDGE’s capacity development philosophy and values, beneficiaries should not simply provide input or offer opinions about activities or interventions; they should be participants involved in the evaluation process right from the start. The BRIDGE partners recommend that an ‘empowerment’ or ‘participatory’ evaluation approach be adopted where possible. In this approach, which is fundamentally democratic, the entire group, not just an evaluator, is responsible for conducting the evaluation of a program and assessing their own achievements. Evaluators are co-equal with the client, beneficiaries and stakeholders, so that the whole process is a shared and collaborative one. This derives from the partners’ acknowledgement of, and respect for, people’s capacity to create knowledge about, and solutions to, their own experiences.

Post-program evaluation tasks

Post-program evaluations can usefully be spread over three stages: the first assesses the immediate impacts, the second focuses on mid-term organisational impacts, and the third looks at longer-term organisational impacts. Tasks to be performed at each stage are summarised in the tables below.
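
As a purely illustrative aid before the tables, the short sketch below applies the timing rules just described to produce indicative due dates for the three stages. The function name, the day counts used to approximate six months and one year, and the example dates are all assumptions made for the sketch, not part of the BRIDGE methodology.

    from datetime import date, timedelta

    def evaluation_schedule(program_end, next_electoral_event=None):
        """Return indicative due dates for the three post-program evaluation stages."""
        medium_term = program_end + timedelta(days=182)        # roughly six months
        if next_electoral_event and next_electoral_event < medium_term:
            medium_term = next_electoral_event                 # whichever occurs first
        return {
            "short_term": program_end,                         # as soon as possible after the program
            "medium_term": medium_term,
            "long_term": program_end + timedelta(days=365),    # after at least a year
        }

    # Example with placeholder dates:
    print(evaluation_schedule(date(2010, 3, 1), next_electoral_event=date(2010, 6, 15)))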

Table 5: Short-term evaluation

Who is being evaluated? | Immediate post-workshop evaluation (to be conducted as soon as possible after the end of the program) | Product of evaluation
BRIDGE partners and country client
  • Project history and outcomes can be collated
  • Donor reports
  • Other reports (including archived information)
Project team and counterpart training unit
  • Debriefing of facilitator
  • Post-program assessment
  • Constructive forward planning
  • Standard evaluation process
  • Standard report format
  • Briefing of country client
  • Collated project information/history
  • Recommendations on future BRIDGE opportunities (standard format)
Facilitators
  • Workshop evaluation
  • End of training evaluation
Participants
  • Application of learning (if operational-related)
  • Improved work plans
  • Expanded view of job
  • Personal enrichment (measurement)

 

Table 6: Medium-term evaluation

Who is being evaluated? | Organisational impact (to be assessed on the occasion of the next electoral event or before the end of a six-month period, whichever occurs first) | Product of evaluation
BRIDGE partners and country client
  • Stakeholder surveys
  • Collation of information
  • Report to donors
  • Report to country client
  • Proposal for future work/continuity
  • Agreement on further country client strategy
  • Strategy for future training/capacity development
Project team and counterpart training unit
  • Input into impact assessment
  • Report to BRIDGE partners on process
Facilitators
  • Input into impact assessment
  • Increased skill levels
  • Bigger pool of experience
Participants
  • Interviews
  • Improved work plans
  • Changed operations
  • More positive work environment

 

Table 7: Long-term evaluation

Who is being evaluated? | Organisational impact (to be assessed after at least a year) | Product of evaluation
BRIDGE partners and country client
  • Stakeholder surveys
  • Collation of information
  • Report to donors
  • Report to country client
  • Proposal for future work/continuity
  • Agreement on further country client strategy
  • Strategy for future training/capacity development
Project team and counterpart training unit
  • Input into impact assessment
  • Report to BRIDGE partners on process
Facilitators
  • Input into impact assessment
  • Increased skill levels
  • Bigger pool of experience
Participants
  • Interviews
  • Improved work plans
  • Changed operations
  • More positive work environment

Evaluation reports

The program organisers would be responsible for preparing the reports associated with workshops and the program. These reports may be tailored according to the audience, which may include a client such as an EMB, donors, or other stakeholders.

The program report should:

  • Be clearly dated
  • Include the clearly stated purpose of the report
  • Specify the training events being evaluated and the time period during which they took place
  • Include an appropriate amount of detail for the needs of the intended audience
  • Include information that is presented in an interesting and understandable way, with graphics that help to make the findings clear
  • Not contain unnecessary information

It should also be clear who the audience for the report is, and the evaluators should have clear expectations of how that audience will use it.

An evaluation report should include the following components:

  • Executive Summary
  • Details of the training event(s) being evaluated:
      • time span
      • number of times conducted
      • number of participants
      • number and names of facilitators (and accreditation status)
      • purpose and objectives of the training event(s)
      • key content areas
  • Methodology:
      • composition of the evaluation team
      • objectives of the evaluation
      • selection of sample (size, characteristics)
      • number and location of sites visited
  • Analysis of findings
  • Interpretation
  • Recommendations (for changes in or maintenance of training, organisational systems and procedures, and environmental factors)
  • Annexes/Appendices, which could include data analyses
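
Where organisers want a consistent starting point for each report, the checklist above can be turned into an empty skeleton for evaluators to fill in. The sketch below is illustrative only, assuming a plain-text output; the function name, output file name and placeholder values are invented for the example.

    # Headings mirror the report components listed above.
    SECTIONS = [
        "Executive Summary",
        "Details of the training event(s) being evaluated",
        "Methodology",
        "Analysis of findings",
        "Interpretation",
        "Recommendations",
        "Annexes/Appendices",
    ]

    def write_skeleton(path, report_date, purpose):
        """Write an empty, clearly dated report outline for evaluators to complete."""
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"Evaluation report\nDate: {report_date}\nPurpose: {purpose}\n\n")
            for section in SECTIONS:
                f.write(f"{section}\n\n[To be completed]\n\n")

    # Placeholder values for illustration:
    write_skeleton("evaluation_report.txt", "2010-03-01", "Evaluation of BRIDGE workshops")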