
Co-creation of the National Evaluation Plan and Evaluation Guidelines in Tanzania: Lessons learnt

- Dr Takunda Chirau, Deputy Director; Dr Steven Masvaure, Senior M&E Technical Specialist; Ms Sinenhlanhla Tsekiso, Programme Manager; and Ms Andiswa Neku, Intern, CLEAR-AA

African governments increasingly demand results at outcome and impact levels, particularly for interventions (i.e. programmes, projects, strategies, plans and policies) of national commitment or interest captured in their national development plans and national visions. In that regard, developing robust monitoring and evaluation systems has become a ‘must’ rather than an option. These systems therefore need to be strengthened to ensure greater efficiency and sustainable progress. Through our experience in developing evaluation systems for governments across English-speaking Africa, we have found that evaluation guidelines and evaluation plans/agendas are pieces of a puzzle aimed at strengthening the practice, discipline and profession of evaluation within the public sector.

Several countries, namely Uganda, South Africa, Kenya and Zambia, have developed national monitoring and evaluation policies and are implementing them. Others, such as Lesotho and Namibia, have developed their policies and are awaiting approval before implementation. In developing evaluation guidelines and evaluation plans, however, we are confronted with a ‘chicken or egg’ question: which comes first? We have not yet found an answer, as we sometimes develop evaluation guidelines and evaluation plans before the national monitoring and evaluation policy is approved. Our general experience, though, is that the presence or absence of a national monitoring and evaluation policy need not hinder the development of other infrastructure that enables the undertaking of evaluations and embeds evaluative thinking and culture among government departments. Such infrastructure brings about epistemological consensus among government institutions on what evaluations are, when to conduct them, and who conducts them. Once this is in place, developing a policy somewhat compels government institutions to comply and brings coherence among them. Our conclusion, perhaps, is that there is no methodological rule as to ‘what comes first’.

Nonetheless, we share our experience of co-creating an evaluation plan – a mandatory and strategic document that outlines the evaluations planned for a country programme and is used to monitor progress (UNDP, 2021) – as well as evaluation guidelines, with the United Republic of Tanzania through the Performance Monitoring and Evaluation Division (PMED) in the Prime Minister’s Office.

  1. Relying on existing infrastructure is critical to ensure that you do not start from ground zero. In 2014, Tanzania developed a monitoring and evaluation systems framework for the public service. We therefore realised that the evaluation plan and evaluation guidelines being developed should build on these existing frameworks, as they form a foundational starting point, even though they are more aligned to monitoring than to evaluation.
  2. Political leadership and administrative proficiency are critical to ensuring the growth of evaluative thinking and the use of evaluation plans and evaluation guidelines. The location of the Performance Monitoring and Evaluation Division (PMED) in the Prime Minister’s Office is strategic and indicative of the presence of evaluation in political and social discourse. Both the PMED (as an institution) and the personnel tasked with monitoring and evaluation functions are advocates and champions of evaluation.
  3. Evaluations are currently undertaken mostly by development partners working with line ministries, and the government wants to develop the internal capacity to conduct them. With the evaluation guidelines in place, there will be uniformity across government institutions in how evaluations are conducted, which will improve both the quality of evaluations and their use.
  4. Sharing the evaluation plan and evaluation guidelines with other partners in the country is critical to ensuring their use. Tanzania is hosting its second monitoring and evaluation week in September 2023. The evaluation plan and evaluation guidelines must be strategically disseminated, combined with advocacy and broad participation, to secure widespread buy-in and ownership.
  5. Building a national evaluation system is an incremental process, and it is therefore crucial to get it right at the national level while involving local government authorities. The assumptions underpinning this include: 1) political and senior administrative buy-in and will are present; 2) both human and financial resources are allocated; 3) public sector staff have developed a robust evaluation culture of curiosity, reflection and evaluative thinking; 4) capacity strengthening and/or building is conducted to improve skills in planning, undertaking and using evaluations; and 5) government institutions, including parliamentary portfolio committees and Cabinet, ask for evaluation evidence.

In summary, we note, firstly, that there is consensus within PMED that demand for evidence is increasing, and that laying the foundation for meeting that demand is therefore critical. Secondly, there is firm consensus that evaluation plans and evaluation guidelines are required to provide evaluative consistency across government ministries. Finally, there is consensus that the development of a national monitoring and evaluation policy will be a ‘turning point’ for evaluation practice in Tanzania.
