The Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.
August 12, 2020 / Edited by Kaisha Crupi
A conversation with Dr Jess Dart, Anna Powell and Dr Ellise Barkley (Part 1)
While systems-change and place-based approaches to social issues have been growing in number and popularity over the past decade, our sector remains very much in the ‘exploration’ stage when it comes to tried and tested methodologies for answering the key questions of: Do they work? And, are we creating (our desired) impact?
In fact, it was less than two years ago that the Place-Based Evaluation Framework was developed in Australia, a collaborative undertaking that offers a set of minimum standards for evaluators and change-makers. As proof of concept, the framework was tested with Logan Together, a collective impact initiative working towards population-level change for kids and families in the Logan region.
It was an illuminating process. And there’s still much to be learnt. While the exercise identified many methodologies and resources from the evaluator’s toolkit that could, with some tweaks, prove useful for evaluating change, there were often more questions raised than answered. These systems-change and place-based approaches surfaced issues around power, trust, and leadership that many evaluators hadn’t previously had to grapple with to the same extent. Instead, evaluators and change makers found themselves needing to draw from a broad range of disciplines, fusing them together and constantly improvising as they went.
In last month’s webinar, we talked with two of the key authors of the national framework, our own Jess Dart and Ellise Barkley, along with systems change thought-leader Anna Powell about the challenges and opportunities presented by these approaches, and why evaluators in the space are truly the jazz players of the evaluation world.
Before we dive into the challenges, what are some of the defining aspects of systems-change and place-based approaches?
“Let’s start with the fact that they are emergent and have complex cause and effect.”
We’re not talking about linear programs where we apply x to problem y and see change z occur. The discipline of evaluation grew up in the world of programs and even when dealing with complicated programs, evaluators tend to rely on the fact that programs can be planned out in a stepped and reasonably predictable manner, and that evaluators can measure progress against pre-set performance targets, using waterfall planning approaches such as results-based accountability. That just doesn’t work for systems change and place-based approaches.
Why? Because if you try something here, it’s likely to affect a different part of the system in unforeseen ways. You need to be constantly alert to how the system is responding, to constantly change your game plan as you learn how change unfolds. This turns planning and measurement on its head. We cannot develop a plan ahead of time and work to that plan through the years – systems change is just not that predictable. You are going to constantly need to replan.
“The theme of complexity carries through to the range of actors and participants typically involved in systems-change and place-based initiatives. We’re dealing with very diverse stakeholders who play very different roles and contribute to change in unique ways.”
Stakeholders can include community members, First Nations and culturally diverse leaders and groups, three different tiers of government, service providers, professionals from the sector of interest, local organisations and philanthropy, as well as whatever collective governance systems might be in place. This has held true for all of the collaborations we’ve worked with, including Burnie Works, the Hive, Logan Together, Hands Up Mallee and Maranguka.
Designing and delivering inclusive Measurement, Evaluation and Learning systems with diverse stakeholders presents technical, cultural and resource challenges. It takes time and process to support inclusive and culturally safe methods, and we may need to work with different types of knowledge: scientific, local, cultural and traditional. It raises questions such as: whose reality and knowledge counts? What counts as evidence in different community and place contexts? What methods are appropriate for data collection and measurement? To what extent does community have power and agency to define the agenda?
“Power dynamics are a recurring theme in this space. To work effectively, we need to change how we traditionally think about accountability and how we work with power.”
In a systems-change environment, accountability is not just top down. And as evaluators, we need to do more than report up. For us to be able to nudge the system, we need to recognise and work with upwards, downwards and sideways accountability.
All actors with a stake in the shared vision have a role to act and learn in this piece. That means even evaluators need to be ready to show up as learners – and that those funding the initiative and its evaluation need to be supportive of this. This challenges our traditional power dynamics, relationships and ways of working.
At a very practical level, systems-change and place-based approaches require different ways of working, different funding cycles, and different expectations. We need new systems, tools and contracts to support the complex, interconnected nature of the work and the myriad of people and roles involved. This is never more evident than when we consider the timeframes involved with systems-change and place-based approaches.
A report from the Spark Policy Institute looked at 24 collective impact initiatives and found that achieving population-level change typically took 9 years or more. Most funding cycles simply aren’t set up to accommodate results over that horizon, so managing expectations around results across such a long timeframe is a big challenge. It can lead to what Mark Cabaj refers to as expectations failure: no matter how much you keep telling funders what’s feasible, they still expect to see population-level results earlier.
The final challenge I want to share is personal – we as evaluators need to accept that sometimes evaluation is part of what may be holding problems in place. We need to think about how “we show up”. We need to re-think what our roles are, and how we support others within the initiative. We need to show a willingness to adapt and improvise as the system shifts and we need to respond to unexpected knock-on effects. It’s a big, noisy, interconnected band we’re all part of, and we’ve all got our own notes to play. Sometimes we’re in harmony, sometimes it’s discordant, and we have to be ok with that.
In Part 2 of this series, we’ll delve into the learnings and practical approaches used by our panellists when working in this emergent space.
And for those looking to learn more, check out our Evaluating Systems Change and Place-Based Approaches.