Spotlight on the big M
A reflection on the recent Office of Development Effectiveness (ODE) ‘Evaluation of DFAT Investment Level Monitoring Systems’ (Dec 2018)
Disclaimer: the views expressed below are solely those of Damien Sweeney and do not represent Clear Horizon’s.
The ODE recently released a report on DFAT Investment Level Monitoring Systems, with the purpose of improving how Australian Aid investments are monitored. The report focused on the design and use of monitoring systems by DFAT investment managers and managing contractors.
My first observation was the title of the report, specifically the term ‘monitoring systems’. Why? Because monitoring is so often joined to evaluation (M&E), which in my experience can (and often does) lead to confusion between what is monitoring and what is evaluation, sometimes with the focus shifting to evaluation at the expense of monitoring. This confusion between the M and the E is most often seen among field/implementation staff, who are often responsible for the actual data collection on a day-to-day basis.
I’ve been reflecting on this issue a fair bit over the past decade, having provided M&E backstopping to programs facing a distinct lack of monitoring and adaptive management, as well as from developing monitoring, evaluation and learning (MEL) frameworks and plans (the jargon and acronyms in this field!).
Differentiating between little ‘e’ and big ‘E’
Monitoring is commonly defined as the systematic collection of data to track progress, whereas evaluation is a more periodic, ‘evaluative’ judgement that makes use of monitoring and other information.
However, as the evaluation points out, good monitoring is critical for continual improvement by managing contractors (and other implementers) and DFAT investment managers. Continual improvement through monitoring requires an evaluative aspect too, as managing contractors (field/implementation teams, M&E advisors, leadership) and DFAT investment managers reflect on progress and make decisions to keep going or adjust course. I refer to this regular reflection process as little ‘e’, as distinct from the more episodic assessment of progress against key evaluation questions, or independent evaluations, which is the big ‘E’ (in M&E).
Keeping monitoring systems simple
Einstein is credited with the quote “Everything should be made as simple as possible, but not simpler”. This should be a principle of all monitoring systems, as it will promote ownership across all responsible parties, from the M&E advisors who develop the systems to those who will collect the data and use it for continual improvement.
I have often seen cases where field/implementation teams don’t understand, and therefore don’t feel ownership of, complex M&E systems. A literature review supporting the report (Attachment A) notes that better-practice monitoring systems are kept as simple as possible, to avoid the lack of implementation that generally accompanies complex monitoring systems (too many indicators, too much information, and the resultant paralysis).
The need for a performance (and learning) culture
Interestingly, but not surprisingly, a survey of managing contractors noted that ‘good news’ often took precedence. This goes back to the importance of a performance culture across DFAT and managing contractors (and subcontractors) that embraces the opportunity to learn and improve (safe-fail vs fail-safe). There needs to be more incentive for managing contractors and investment managers to reflect, learn and adapt, and not just focus on the positives.
The importance of fostering a strong performance (and learning) culture is expressed in the recommendations. Learning should not come only from periodic evaluations; it should be a regular and continuous process, with the frequency of reflection driven by the operational context (more complex contexts require more regular reflection on what the monitoring information is indicating). I know of investments where implementation staff meet informally on a weekly or fortnightly basis to track progress and make decisions on how to improve delivery.
Building capacity
The literature review notes the importance of staff capacity for effective monitoring. I like to use the term capability (knowledge and skills) alongside capacity (time and resources), as both are required, yet they are distinct from each other. The literature review focused on the importance of managing contractors recruiting and retaining staff who could design and manage monitoring systems. However, my experience indicates that it is not the M&E advisors who are the constraint on (or enabler of) good monitoring systems, but the ownership of the system by those who implement the programs. Therein, for me, lies a key to good monitoring systems: getting field/implementation staff on board in the design and review of monitoring systems, so that they understand what is to be collected and why, including how it helps their work by improving performance.
What we’re doing at Clear Horizon to focus on monitoring emergent outcomes and facilitate adaptive management
Clear Horizon has been developing fit-for-purpose plans and tools for our partners and clients, linking theory and practice, and continually reflecting on and learning how to improve this.
I’m currently workshopping with my Clear Horizon Aid Effectiveness colleagues how we can make M&E tables more clearly accentuate the M, and how this informs the E. More to come on that front! Byron Pakula will be presenting at next week’s Australasian Aid Conference a presentation we developed titled ‘No plan survives contact with the enemy – monitoring, learning and evaluation in complex and adaptive programming’, which takes in key issues raised in ODE’s evaluation. So check that one out if you’re in Canberra.
What are your thoughts on ODE’s evaluation of monitoring systems?