Last year, I had the pleasure of facilitating a panel at @ChangeFest to consider the challenges of evaluating place-based approaches.
The panel consisted of Kylie Burgess and Tanya Brooks-Cooper, who are monitoring, evaluation and learning (MEL) leads in place-based approaches in Tasmania; Prerna Mehrotra, a MEL practitioner within a Victorian government team charged with evaluating a multi-site place-based initiative; and Danielle Campbell, who sits within a MEL team at La Trobe University.
In this work, I aimed to position myself as a listener rather than a solution provider. As a panel, we deliberately did not shoot for solutions; instead, we tried to sit with the challenges and concluded our discussion by offering some generative questions. I promised to write it up, so here it is! I am really hoping that this is just a first step in re-imagining evaluation for community-led and place-based approaches. I am also delighted to be convening another panel on this at #ChangeFest24.
Is the evaluation of place-based approaches stuck?
Evaluation holds the promise of helping us learn about what is and is not working so we can adapt and achieve more impact. It also holds the promise of helping us tell the bigger story of place-based work, both for accountability back to the community and for advocacy. And yet, this promise is often not realised. This panel discussed the challenges and bright spots in evaluation for place-based change and asked some hard questions, such as ‘is evaluation of place-based approaches stuck?’ and ‘is government-driven evaluation holding problems in place?’
When evaluation is strongly influenced by government needs and priorities, it can be experienced negatively by communities. It can reinforce the very inequalities in power we are trying to shift and undermine the model of community-led change. There can also be a heavy evidence burden on place-based initiatives to prove that what they are doing is working. While our panel explored ways to minimise the burden on communities while “feeding the beast” (meeting reporting requirements), we were more excited by the prospect and examples of community-led evaluation work, where evaluation is done for, by, and as a community. Danielle Campbell shared an inspiring example of a First Nations-funded and First Nations-led evaluation by Warlpiri people in Central Australia, grounded in a very different approach built on Warlpiri worldviews, language, and culture. But we all noted that this approach is rare.
How might we embrace community-led evaluation that is nurturing and useful for communities, while also negotiating and delivering the minimum level of evidence required by government and funders? What does it take to do a two-world evaluation?
Population-level indicators can be a distraction because they take years to shift. Final welfare outcomes for people are without doubt an important north star for place-based work. However, an over-emphasis on these slow-moving indicators may result in a loss of momentum. Linked to this is a need to capture and communicate evidence-based stories that show the impact and progress of place-based work in the shorter term.
How might we hold population indicators lightly while also collecting and using data that helps us navigate and show progress in the short term?
The panel discussed how complex MEL frameworks can be overwhelming and too challenging to implement. Kylie Burgess gave an example: a consultant had developed a MEL plan, but people didn’t fully understand it and were not using it to guide the MEL work. For a time, she put the plan to one side. Instead, she started where the community was, helping them get clearer on their local-level theories of change and dropping the jargon. Later, she was able to link back to the MEL plan and use it to support the MEL needs of the various initiatives.
How might we hold our evaluation frameworks more lightly or grow them more slowly so that we can start where the community is and support rather than impede community efforts?
A fourth challenge concerned widening the focus of evaluation from outcomes alone to also cover principles and processes. Prerna Mehrotra shared the challenge of working in partnership across diverse sites, where each community had its own unique goals. She offered a possible solution: supplementing outcomes evaluation with process evaluation, which assessed the work itself and how all partners (including the government) showed up against a set of practice principles. The panel went on to propose that the work of place-based approaches is really about developing the conditions for change, and felt it is more congruent to evaluate them on this basis.
How might we re-think accountability with a wider focus beyond outcomes, to be more principles- and process-focused?
A fifth challenge, raised by an audience member, concerned communities’ ability to access the data they need to diagnose and track services. When data cannot be shared, this can exacerbate the issue of over-consultation: multiple services asking the same questions over and over again. It also raised questions about overburdening community groups that may not be equipped to work with messy data sets.
How might we help more communities gain access to service data at the local level, while also building the capability of community-based groups to analyse it?
A final question for you: which of these challenges resonates with you the most, and what have we missed? We particularly want to hear from communities, practitioners and funders working in/with place-based approaches.