
Biscuits, scorecards, and brave conversations: centring collective sense-making in complex MEL

CASE STUDY

If you’re working on a multi-program portfolio and want to shift MEL from compliance to collective sense-making and learning, this case study is for you.

In 2025, we set out to develop tools to support a nested monitoring, evaluation, and learning (MEL) system for the Fiji Program Support Platform (the Platform), working as a collaboration between Abt Global (Abt) as the Platform implementer and Clear Horizon as MEL partner. What emerged was a set of interlinked scorecards that help us collectively make sense of what we’ve done, where we’re headed, and what “good” looks like across the eight-program portfolio. This blog shares how we developed the scorecards with Platform staff, what we’re learning about using them as a catalyst for shared reflection, judgement, and learning in early implementation, and some of the practical challenges we continue to grapple with. We hope this will spark ideas about how scorecards and rubrics can support collective sense-making, learning, and reflection in your own complex MEL work – without glossing over the complexity of implementation.

What is the Fiji Program Support Platform?

The Australian Government’s Fiji Program Support Platform accounts for around half of Australia’s bilateral aid to Fiji and brings together a management unit and multiple sector programs in areas like education, health, and governance. The Platform is implemented by Abt, with Clear Horizon and Talanoa Consulting as consortium partners. Working alongside Abt, Clear Horizon has contributed to a collaborative process to develop the Platform’s MEL Plan and several sector MEL plans, with more to come in 2026. Across the portfolio, three themes cut across everything we do: gender equality, disability, and social inclusion (GEDSI); locally led development (LLD); and climate and disaster resilience (CDR).

One of the Platform’s principles is solesolevaki – the Fijian concept of working together, mutuality, and give-and-take. For MEL, this means grounding the system and methods in Fiji’s diverse cultural context and in ways that respect Fijian and wider Pacific ways of knowing and sense-making. Talanoa Consulting’s role as the Learning partner has been central in deepening our understanding of what solesolevaki looks like in practice, through their design of the Pause and Reflect (P&R) architecture – including the veivosaki-yaga approach – as the core methodology for structured, relational reflection. Together with Abt, Talanoa, the Platform Performance, Learning, and Communications team led by Jo Bawden (PLC Team Leader), and the MEL team led by Dr Kishan Kumar (Performance and MEL Manager) and Sakeo Moce (MEL Specialist), we have tried to ensure that tools like the scorecards support, rather than crowd out, these locally led, relational ways of working.

What were we trying to achieve?

We needed a way to bring together diverse people, evidence, and perspectives to ask a simple but demanding question: are we making progress? This meant embedding equity, solesolevaki, and collective sense-making, and creating a foundation for honest, evidence-based conversations about where the Platform has been, where it is going, and what “good” and “better” mean for us.

We wanted to democratise sense-making without losing rigour. We wanted to take MEL out of a purely technical space and into a space where program staff, partners, and funders could see and feel it. We were interested in conversations about effectiveness informed, of course, by data – but also by heart, first-hand experiences, and individual reflection. We also wanted to expand our view of efficiency beyond costs and outputs to include the value of relationships, reputation, engagement with Ministries, and our work with communities.

At the same time, we were conscious that any new tool risks being experienced as ‘just another requirement.’ For the scorecards to be useful, they needed to feel fit for purpose to busy teams, add value to existing processes rather than duplicate them, and ultimately support – not crowd out – locally led, relational ways of working.

Ultimately, we needed a tool that drew on the diversity of experiences, knowledge, and evidence across the Platform. A tool to help us put solesolevaki at the centre of MEL while acknowledging that building shared understanding about how to use the tool would take time and deliberate effort.

What did we use?

Clear Horizon uses scorecards (or rubrics) across several of our projects; we know they can be useful tools for synthesis and analysis at a big-picture level. A scorecard is a matrix with the domains of change, or areas, along the left-hand column and a maturity scale along the top, often ranging from 1–6, with each cell describing what performance at that level looks like.
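
To make that structure concrete, here is a minimal sketch of a scorecard as a data structure, written in Python purely for illustration. The domain names and descriptors are hypothetical placeholders (borrowing from the biscuit scorecard described later), not the Platform’s actual scorecards.

```python
from dataclasses import dataclass, field

@dataclass
class Domain:
    """One row of the scorecard: a domain of change plus a
    descriptor of what performance looks like at each level."""
    name: str
    descriptors: dict[int, str]  # maturity level (1-6) -> description

@dataclass
class Scorecard:
    """A matrix of domains (rows) by maturity levels (columns)."""
    title: str
    domains: list[Domain] = field(default_factory=list)

    def describe(self, domain_name: str, level: int) -> str:
        """Look up what performance at a given level looks like for a domain."""
        for d in self.domains:
            if d.name == domain_name:
                return d.descriptors[level]
        raise KeyError(domain_name)

# Hypothetical example only
biscuits = Scorecard(
    title="Biscuit deliciousness",
    domains=[
        Domain(
            name="Chocolate quality",
            descriptors={
                1: "No discernible chocolate",
                6: "Rich, evenly distributed chocolate throughout",
            },
        ),
    ],
)
print(biscuits.describe("Chocolate quality", 6))
```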

For the Platform, we collaboratively designed a series of scorecards that would enable us to draw on existing monitoring data as part of broader conversations about progress. The scorecards were not intended for the MEL team to use in isolation, and they are not a tool to be used quietly at your desk: they were designed for in-person use, with visual aids, qualitative and quantitative evidence, and robust conversation – for rooms full of diverse people, disagreement, and evidence-rich dialogue in which sense-making is negotiated and shared understanding reached. But! This can only happen if people understand the scorecards’ purpose and feel confident using them. In short, the scorecards are intended to democratise evaluative judgements.

We needed to measure effectiveness, efficiency, and the implementation and impact of our cross-cutting themes, while capturing comparable implementation data across all eight programs, each with its own logic, end-of-program outcomes (EOPOs), and workplans. To do this, we created six scorecards aligned with the Platform’s program logic, used at both program and Platform levels, covering effectiveness; locally led development; GEDSI; climate and disaster resilience; and multi-organisational performance assessment (efficiency and Value for Money, VfM).

We drafted example descriptors drawing on our experiences in other programs and on frameworks such as Julian King’s Value for Investment. A six-point rating system aligned with DFAT’s Investment Monitoring and Reporting (IMR) ratings helped reduce the tendency to score in the middle – an even number of points has no neutral midpoint, so raters have to commit to one side of the scale – although this has raised questions about how the scorecards would ultimately feed into DFAT’s expectations around IMRs.

From scepticism to biscuits

At first, it was a tough sell. The scorecards looked big and complicated, and it seemed like a huge data collection burden. Some of our Fijian colleagues felt they were just another reporting mechanism that wouldn’t enable reflection or learning and would ultimately be completed by the MEL team in isolation. As our colleague Sakeo said, “at first, the scorecards felt like a lot. But we embraced that complexity because it reflects the reality of the Platform.”

We listened. Together, we worked through the domains of change and the ratings to understand what ‘good’ looked like across each part of every scorecard. Sakeo and Dr Kishan championed the scorecards, creating opportunities to gather feedback from a wide range of staff and to test the scorecards. Clear Horizon and Platform staff co-developed practical tools and processes to support this, including a guidance note explaining the steps for using the scorecards, analysing results, and reporting in ways that would work within tight timeframes. In practice, this meant rapidly prototyping and adapting tools ahead of key Platform milestones, with Platform staff playing an active, lead role in shaping how the scorecards would be used rather than simply receiving a finished product.

In August, we returned to Suva, armed with biscuits and, of course, a new scorecard. This time, Dr Kishan had organised a free-flowing session to test the new scorecard with the Australia-Fiji Health Program. Knowing that everyone appreciates a sweet treat, we created a ‘biscuit deliciousness scorecard.’ It included playful but serious domains like ‘chocolate quality’ and ‘mouth feel,’ scored on the same 1–6 scale as our real scorecards.

We sat together, shared a biscuit, and reflected on whether we liked it and why. Then we introduced the scorecard and asked people to rate the biscuits and explain their reasoning. There were many laughs, but the exercise shifted how people understood scorecards: not as judgements handed down, but as conversations held together to build common understanding and shared ownership. We had all eaten the same biscuit but brought different evidence and preferences to the table, and the scorecard helped us be specific about why that evidence mattered. While MEL data may not be as delicious as biscuits, there is no reason talking about our collective progress cannot be fun, too – even if getting to that point takes more groundwork than a single workshop.

Putting the scorecards to work

After the biscuit exercise, the scorecards started to take root – and Dr Kishan and Sakeo continued to champion them. We shared revised versions and, drawing on our recent experience, explored how they might fit into the Platform’s MEL system and practices. Importantly, the biscuit scorecard exercise did not sit in isolation. By this stage, the Pause and Reflect workshop had already been framed as a veivosaki-yaga space: one grounded in trust, shared responsibility, and collective interpretation rather than compliance. The exercise worked because it was nested within this broader facilitation design.

In November, the veivosaki-yaga/Pause and Reflect workshops, facilitated by Talanoa Consulting and convened by Abt, brought together program teams, Ministries, DFAT, and Platform staff. To prepare for this exercise, we dug into the data required to use the scorecards. We pulled together a series of evidence matrices, drawing on monitoring data from the past year to provide examples for each domain of each program’s scorecard. This helped us see where we had strong evidence and where more data or different methods were needed, but it also highlighted the time and effort required from Platform staff to assemble and interpret this information within already busy workplans.
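
To illustrate what we mean by an evidence matrix: at its simplest, it maps each domain of each program’s scorecard to the monitoring evidence that speaks to it, which also makes gaps visible. Below is a minimal sketch of that idea in Python; the programs, domains, evidence items, and the two-item threshold are entirely hypothetical, not the Platform’s actual data or rules.

```python
from collections import defaultdict

# Hypothetical monitoring records: (program, domain, evidence item)
monitoring_data = [
    ("Health", "Effectiveness", "Clinic attendance figures, Q3"),
    ("Health", "Effectiveness", "Ministry partner interview notes"),
    ("Health", "GEDSI", "Disability access audit summary"),
    ("Education", "Effectiveness", "School completion data"),
]

# Build one evidence matrix per program: domain -> list of evidence
evidence_matrices: dict[str, dict[str, list[str]]] = defaultdict(lambda: defaultdict(list))
for program, domain, evidence in monitoring_data:
    evidence_matrices[program][domain].append(evidence)

# Flag domains with thin evidence, where more data or different methods may be needed
for program, matrix in evidence_matrices.items():
    for domain, items in matrix.items():
        if len(items) < 2:
            print(f"{program} / {domain}: only {len(items)} evidence item(s)")
```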

The Pause and Reflect process undertaken with the Platform programs culminated in a Whole of Platform reflection exercise, informed by a Gallery Walk workshop and all program-level scorecards. Program teams pulled together key achievements, expenditure information, implementation evidence, and challenges, and presented them in a public space with implementation partners and DFAT. Program and Platform staff, along with Ministry and DFAT counterparts, moved around the room reviewing the evidence each team had compiled.

The scorecards were put to the test as groups applied them to the evidence in front of them and added their own knowledge. Working with the Talanoa learning team, we were able to ensure that collective judgement-making was structured, evidence-informed, and culturally grounded. Within this architecture, the scorecards were applied as one lens through which participants interpreted evidence.

The Whole of Platform workshop marked a significant moment for the Platform: the room was abuzz with animated discussion and shared reflection. Colleagues in Suva told us this was a meaningful moment for the Platform and for MEL. The scorecards shifted from being viewed with scepticism to sitting at the centre of discussions about progress and what ‘good’ looks like, and they created space for different experiences, types of evidence, and perspectives. Judgement-making about progress moved from a small group of MEL specialists to a wider mix of program staff, Ministry counterparts, and DFAT. At the same time, not all programs were able to use the scorecards in a fully solesolevaki way; competing delivery pressures, perceived complexity, and unfamiliarity with the tool meant that, in places, the process felt more like an additional performance exercise than an integrated management and learning tool.

One practical lesson from this first round was the value of including an explicit ‘evidence’ column alongside each domain of the scorecard. This encouraged teams to document the monitoring data, stories, and observations that underpinned each rating, making it easier to check the robustness of our judgements and to see where additional evidence was needed.
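
In data terms, adding an evidence column simply means recording the underpinning evidence next to each rating, so the robustness of a judgement can be checked later. A hypothetical sketch of one completed row (the domain name, evidence items, and two-item threshold are illustrative, not Platform rules):

```python
from dataclasses import dataclass, field

@dataclass
class DomainRating:
    """One completed scorecard row: the agreed rating plus the
    evidence that underpinned it."""
    domain: str
    rating: int                # 1-6, aligned with DFAT IMR ratings
    evidence: list[str] = field(default_factory=list)  # data, stories, observations

    def is_thin(self) -> bool:
        """Flag ratings that rest on little documented evidence."""
        return len(self.evidence) < 2

row = DomainRating(
    domain="Locally led development",  # hypothetical domain name
    rating=4,
    evidence=["Partner-led workplan adopted", "Ministry co-facilitation of P&R"],
)
print(row.is_thin())  # False: two evidence items recorded
```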

What was the result?

As Sakeo reflected, “using the scorecards in our [Whole of Platform] workshop was a milestone. Feedback from everyone, including DFAT, was positive, even though it was the first time many had seen or used it… It’s now driving discussions at the Senior Leadership level, giving us a way to compare progress across sector programs.”

Some friendly competition has emerged between programs, with teams wanting to deliver the best Gallery Walk materials for the next Pause and Reflect cycle. Program teams who had previously seen MEL as something done to them are beginning to see themselves as collaborators and owners of progress.

This shift was possible not only because of the scorecards as a tool, but also because of leadership from Abt’s Platform MEL team and because of the participatory way the scorecards were developed and implemented. It took months of work to finalise the scorecards and to communicate that, yes, they might be useful for reporting, but that they can also be central to learning and reflection. The ongoing championing by Jo, Dr Kishan, and Sakeo was critical throughout this process. Using the scorecards has enabled the Platform to see where it is at and to identify where it might not have enough, or the right types of, data to make some judgements. Reflections on this first round of using the scorecards have also surfaced questions about how consistently the tool is being used across programs, and how much support different teams need to interpret and act on the results.

We wanted the scorecards to help us focus on the ‘So what?’ (the impact and outcomes of our work) rather than the ‘What happened?’ (the activities that had been completed). The scorecards are a lens through which the Platform can look at a wealth of data. In doing so, we are gradually strengthening a learning culture: we are learning about our data availability, how we connect evidence to outcomes and impact, and what improvement means for individual teams and for the Platform as a whole.

Jo noted that the “scorecards [are] demonstrating the maturity scale of all our programs – and could be used to facilitate targeted improvements”, while Dr Kishan said, “we were able to talk about areas that were challenging and needed improvement, and at the same time, we discussed solutions and adaptation measures that would help us progress with our projects.”

The process has raised important questions: what the scoring process looks like over time, how we use scores in reporting to DFAT, and how we hold ourselves accountable for planning and using the scorecards in future, including whether and how the approach aligns with DFAT’s IMR expectations.

Developing and implementing the scorecards required a deep understanding of the context, people, and programs we work with, the resources available, and a shared commitment to learning. Using the scorecards across the Whole of Platform workshop took considerable preparation and a leap of faith; assessing our progress collectively with Ministry and DFAT counterparts was an exercise in vulnerability. It also revealed the very real workload implications of this approach for Platform staff, particularly during key reporting periods, and the need to keep iterating the tools so that the benefits outweigh the burdens.

The scorecards are useful tools. But it is the development process that has allowed them to become more than just another MEL instrument. We worked together to define what ‘good’ looks like in ways that reflect our values and context. We invested in in-person workshops and low-stakes practice sessions (like the biscuit scorecard) to help people get comfortable with structured, collective sense-making, and to normalise tricky conversations and shared judgements with funders and partners in the room. We are also learning that clearer framing, stronger guidance, and more deliberate support for interpretation and use are essential if tools like this are to be experienced as genuinely helpful rather than simply complex.

Embedding the scorecards within a veivosaki-yaga framework has been intentionally scaffolded through participatory design, cultural grounding, and deliberative facilitation, all aimed at ensuring collective sense-making remains at the heart of the Platform’s MEL practice. We think Sakeo summed it up best: the scorecards “are more than a tool, they’re a catalyst for sense-making and keeping the rigour from planning through MEL.” We invite you to consider how interlinking scorecards can centre collective sense-making and solesolevaki in your MEL work, moving beyond scorecards for reporting and using them to democratise ownership of knowledge and evaluative judgements. If you’re interested in learning more about developing scorecards or rubrics for your own complex or portfolio-style project, reach out to elena@clearhorizon.com.au and keep an eye out for upcoming resources through the Clear Horizon Academy.