The last two decades have seen growing interest from donors and implementing organizations in strengthening project Monitoring and Evaluation (M&E). More government agencies and organizations, including those in the social justice space, want to strengthen their M&E capacities with robust M&E plans, which range from general to program-specific. This exposition aims to provide insight into setting up an M&E plan for social justice programs. It outlines the sections and flow to follow: program overview, purpose(s) of the M&E plan, evaluation framework, indicator system, information system (data sources), and impact evaluation design, dissemination, and utilization plan, all in the context of social justice M&E plan development. Lastly, it addresses guidance on adjustments to the M&E plan.
Program Overview
The first item in every M&E plan is the Program Overview. This section summarizes the entire program, focusing on why the program is or will be implemented, the stakeholders involved, the proposed delivery model, geographical focus, resources, and timeframe. For example, it briefly mentions whether the program will use direct service delivery, technical assistance, or both; its demand generation strategies and targeting; and how the program would contribute to national or global goals such as the SDGs. (United Nations Development Programme, 2002)
Purpose(s) of the M&E Plan
The purpose of an M&E plan in a social justice program is to guide the collection, analysis, and use of information at all levels so that progress toward program objectives can be tracked. When designing the plan, considerations target the information needs of the donors, the implementing partner(s), and, above all, the implementers. The plan may outline specific procedures and the persons responsible for monitoring and evaluation activities, the data flow processes, and how data are collected, stored, and used. (International Fund for Agricultural Development (IFAD), 2002)
Evaluation Framework
The evaluation framework section outlines the specific strategic planning methodology used to prepare a program. This methodology entails a participatory process to clarify outcomes, outputs, activities, and inputs; their causal relationships; the indicators with which to measure progress towards results; and the assumptions and risks that may influence the success or failure of the intervention. It offers a structured, logical approach to setting priorities and building consensus around the intended results and activities of a program. For example, if the goal identified is to increase gender equality through women's economic empowerment, the number of women trained in economically empowering activities such as entrepreneurship by the end of the set period would act as an indicator, training registers would be the means of verification, and community acceptance of women-led businesses would be the key assumption and risk for the program. Additionally, specific objectives such as increasing the availability of training centers would be measured by the number of centers ready to offer training. Finally, training materials such as books, data, and computers would be grouped under "inputs"; delivering these materials would be classified as activities; facilities providing training would be the output; the outcome would be the increased prevalence of women actively conducting entrepreneurial activities; and the impact would be reduced dependence on men and increased decision-making power for women. (Gage, et al., 2005)
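The results chain in the example above can be sketched as a simple data structure. This is a minimal illustration only, using hypothetical values drawn from the women's economic empowerment example; real logical frameworks are usually built in a participatory workshop and maintained in a planning tool or spreadsheet.

```python
# A minimal sketch of a logical framework (logframe) for the hypothetical
# women's economic empowerment example. Field names and values are
# illustrative, not a prescribed format.
logframe = {
    "goal": "Increased gender equality through women's economic empowerment",
    "levels": [
        {
            "level": "input",
            "description": "Training materials: books, data, computers",
        },
        {
            "level": "activity",
            "description": "Deliver training materials to centers",
        },
        {
            "level": "output",
            "description": "Facilities providing entrepreneurship training",
            "indicator": "Number of centers ready to offer training",
            "means_of_verification": "Training registers",
        },
        {
            "level": "outcome",
            "description": "Increased prevalence of women actively running businesses",
            "indicator": "Number of women trained in entrepreneurship by end of period",
            "assumption": "Communities embrace women-led businesses",
        },
        {
            "level": "impact",
            "description": "Reduced dependence on men and increased decision-making power for women",
        },
    ],
}

# Walk the results chain from inputs to impact.
for item in logframe["levels"]:
    print(f"{item['level'].upper():8} {item['description']}")
```

Laying the chain out this way makes the causal logic explicit: each level should plausibly lead to the one after it, and each indicator and assumption attaches to a specific level.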
Indicator System
Activity- and outcome-level indicators can be defined in tabular form, with each indicator listed under the specific programme objective it measures and accompanied by a detailed definition, unit of measurement, data source, method of data collection, frequency of data collection, and the entity responsible for collecting the data. (Usanase, 2019)
Indicator baselines would be collected from various secondary sources, including studies conducted by other organizations or reports such as the World Bank HDI report, while targets could also be generated based on unmet needs. The frequency of data collection would be at multiple points depending on the project and the level of the indicator. However, the standard cycle of data collection is quarterly, semi-annually, and annually, synchronized with funders' reporting to ensure efficiency. (World Health Organisation, 2019)
In cases where secondary data is unavailable, which is likely in most instances, primary data could be collected via a baseline study. For a gender equality program, various thoroughly thought-through indices have been developed, such as IndiKit by People in Need, which already provides predesigned questionnaires that can be used for both the baseline and follow-on evaluations. Of course, other custom-designed tools and information management systems, with some support from M&E teams, could be used based on the needs of the program. The collected data would be stored in one of the various databases on the market, such as the free DHIS2 platform. In addition to collecting data for required reporting, program-level data can be reviewed by the project managers and M&E team every month, quarter, or year to ensure that the projects are on track with the objectives outlined in the work plan. (Gage, et al., 2005)
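The routine data review described above can be sketched as a small script that compares collected indicator values against their period targets. The indicator names, values, and 90% tolerance threshold below are all hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical quarterly review: flag indicators that are falling behind
# their period targets. Names and numbers are illustrative only.

def review_indicators(records, tolerance=0.9):
    """Return (indicator, status) pairs.

    An indicator is 'on track' if the achieved value is at least
    `tolerance` (90% by default) of the period target.
    """
    results = []
    for rec in records:
        achieved, target = rec["achieved"], rec["target"]
        on_track = target == 0 or achieved / target >= tolerance
        results.append((rec["indicator"], "on track" if on_track else "behind"))
    return results

quarterly_data = [
    {"indicator": "Women trained in entrepreneurship", "achieved": 240, "target": 250},
    {"indicator": "Training centers operational", "achieved": 3, "target": 5},
]

for name, status in review_indicators(quarterly_data):
    print(f"{name}: {status}")
# prints:
# Women trained in entrepreneurship: on track
# Training centers operational: behind
```

In practice such checks would run against data exported from the programme's information system (for example DHIS2), feeding the monthly or quarterly review meetings mentioned above.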
Impact Evaluation Design, Dissemination, and Utilisation Plan
A mixed-methods approach would be ideal for evaluation. Indicators would be compared against targets using secondary and primary data, while insights to synthesize the findings would be collected from in-depth interviews with a few former clients and key informants. Furthermore, questions such as how tailored demand generation plans affect the number of new entrepreneurship clients could be included. Specifically, evaluations would measure the following: the effectiveness of program activities; attribution of measurable outcomes to interventions; reasons behind the success or failure to achieve goals, objectives, and targets; unintended results of the program (positive and negative); long-term sustainability of results; and lessons learned applicable to similar projects. Evaluation findings could be disseminated during gatherings such as International Women's Day commemorations and/or conferences, and presented in written narratives tailored to a variety of reporting purposes and audiences to aid the use of findings. (Measure Evaluation, 2017)
Adjustments to M&E Plan
Lastly, an M&E plan is a living document; adjustments are required whenever the programme is modified. (World Health Organisation, 2019)