Monitoring, Evaluation & Learning (MEL) reflection

Our Monitoring, Evaluation and Learning (MEL) course ran at the end of March 2019. This is our most comprehensive course, running over five days.

We had an enthusiastic group of participants from a variety of organisations, including some of our newer staff.

Carina Calzoni was the lead trainer for the course, with some of our staff coming in to offer additional insights in their areas of expertise.

Carina reflects on the March MEL program:

“As a presenter, the main highlight was discussions and interaction with the course participants. They were all fully engaged throughout the course and were keen to learn. We had many very interesting and in-depth discussions about how and where to apply MEL in different settings and organisational contexts. It was also great having several Clear Horizon staff (Kaisha, Samiha, Caitlin and Ed) presenting different parts of the course. This really helped to maintain the momentum over the five days.”

If you want to gain extensive training in Monitoring, Evaluation and Learning, check out our course.

GovComms podcast – Social problems & digital solutions

Jen Riley, our Digital Transformation Lead, recently spoke with David Pembroke on the GovComms podcast. The topic of discussion was ‘Social problems & digital solutions’.

Areas discussed in this episode:

  • The importance of simplicity in communication
  • Preparing an evaluation checklist
  • The qualities of a good evaluator
  • The shifting focus from output to outcomes
  • The impact of technology on change measurement
  • Making data work for us (and not vice versa)
  • Developing a toolkit for social change
  • Avoiding data overwhelm
  • Resources for those who want to learn more

Click here to listen to Jen’s podcast Social problems & digital solutions.

Series on Indigenous Evaluation

As part of Clear Horizon’s commitment to supporting Indigenous self-determination, three consultants travelled to Rotorua, New Zealand, to participate in the first Indigenous Peoples’ Conference on Evaluation. To say that we were privileged, humbled, moved and challenged would be an understatement.

We would like to acknowledge with sincere gratitude the hospitality, generosity, wisdom and insight extended to us by the conference organisers from Mā Te Rae, our hosts from the Ohomairangi Marae, the speakers, panelists and presenters as well as the broader community of Indigenous evaluators from around the world with whom we shared the space.  

The three days traversed high-level ontological reflections regarding traditional knowledge from diverse world views and value systems, down to community-defined indicators for wellbeing and co-authored stories of change. Indigenous evaluators, social change advocates and Māori elders provided insights and raised important questions, prompting both personal and professional reflection and holding significant implications for the field of evaluation and its role in social change. For example: challenging the dominance of western paradigms and the structures perpetuating the exclusion of Indigenous voices; decolonising access to knowledge and ensuring data sovereignty; acknowledging the inter-generational experience of trauma for Indigenous peoples; upholding self-determination for communities; and the critical centrality of people and place, relationships and connection, in supporting wellbeing and creating intergenerational change.

This blog marks the beginning of a series, delving into what we took away from the conference:  

  • Part 2: Connection and community
  • Part 3: We are a tree without roots
  • Part 4: Self-determination as the defining principle

Ultimately, altruistic intentions are insufficient. In the words of activist Lilla Watson:

“If you have come here to help me, you are wasting your time. But if you have come because your liberation is bound up with mine, then let us work together.”

With this intention in mind, we move forward with humility, with curiosity, prepared to listen more and prepared to expose ourselves to situations in which we feel uncomfortable, but that allow us to expand our understanding of the communities, partners and clients we work with, in supporting and striving for meaningful social change. 

Spotlight on the big M

A reflection on the recent Office of Development Effectiveness (ODE) ‘Evaluation of DFAT Investment Level Monitoring Systems’ (Dec 2018)

Disclaimer: the views expressed below are solely those of Damien Sweeney and do not represent the views of Clear Horizon.

The ODE recently released a report on DFAT Investment Level Monitoring Systems, with the purpose of improving how Australian Aid investments are monitored. The report focused on the design and use of monitoring systems by DFAT investment managers and managing contractors.

My first observation was the title of the report, specifically the term ‘monitoring systems’. Why? Because so often monitoring is joined to evaluation (M&E), which in my experience can (and often does) lead to confusion between what is monitoring and what is evaluation, sometimes with the focus shifting to evaluation at the expense of monitoring. This confusion between the M and the E is most often seen in field/implementation staff, who are often responsible for the actual data collection on a day-to-day basis.

I’ve been reflecting on this issue a fair bit over the past decade, having provided M&E backstopping to programs facing a distinct lack of monitoring and adaptive management, as well as from developing monitoring, evaluation and learning (MEL) frameworks and plans (the jargon and acronyms in this field!).

Differentiating between little ‘e’, and big ‘E’

Monitoring is commonly defined as the systematic collection of data to inform progress, whereas evaluation is a more periodic ‘evaluative’ judgement, making use of monitoring and other information.

However, as the evaluation points out, good monitoring is critical for continual improvement, by managing contractors (and other implementers) and DFAT investment managers. Continual improvement through monitoring requires an evaluative aspect too, as managing contractors (field/implementation teams, M&E advisors, leadership) and DFAT investment managers reflect on progress, and make decisions to keep going, or adjust course. I refer to this regular reflection process as little ‘e’, as differentiated from more episodic assessment of progress against key evaluation questions, or independent evaluations, which is the big ‘E’ (in M&E).

Keeping monitoring systems simple

Einstein is credited with the quote “Everything should be made as simple as possible, but not simpler”. This should be a principle of all monitoring systems, as it promotes ownership across all responsible parties, from the M&E advisors who develop systems to those who will collect data and use it for continual improvement.

I have often seen cases where field/implementation teams don’t understand, and therefore don’t feel ownership of, complex M&E systems. A literature review supporting the report (Attachment A) notes that better-practice monitoring systems are kept as simple as possible, to avoid the lack of implementation that generally accompanies complex monitoring systems (too many indicators, too much information, and resultant paralysis).

The need for a performance (and learning) culture

Interestingly, but not surprisingly, a survey of managing contractors noted that ‘good news’ often took precedence. This goes back to the importance of a performance culture across DFAT and managing contractors (and subcontractors) that embraces the opportunity to learn and improve (safe-fail vs fail-safe). There needs to be more incentive for managing contractors and investment managers to reflect, learn and adapt, and not just focus on the positives.

The importance of fostering a strong performance (and learning) culture is expressed in the recommendations. Learning should not come only from periodic evaluations; it should be a regular and continuous process, with the regularity of reflection driven by the operational context (more complex contexts requiring more regular reflection on what the monitoring information is indicating). I know of investments where implementation staff informally meet on a weekly or fortnightly basis to track progress and make decisions on how to improve delivery.

Building capacity

The literature review notes the importance of staff capacity for effective monitoring. I like to use the term capability (knowledge and skills) alongside capacity (time and resources), as both are required, yet they are distinct from each other. The literature review focused on the importance of managing contractors recruiting and holding on to staff who could design and manage monitoring systems. However, my experience indicates that it is not the M&E advisors who are the constraint on, or enabler of, good monitoring systems, but the ownership of the system by those who implement the programs. Therein, for me, lies a key to good monitoring systems: getting field/implementation staff on board in the design and review of monitoring systems, so that they understand what is to be collected and why, including how it helps their work through improving performance.

What we’re doing at Clear Horizon to focus on monitoring emergent outcomes and facilitate adaptive management

Clear Horizon has been developing fit-for-purpose plans and tools for our partners and clients, linking theory and practice and continually reflecting on and learning how to improve this.

I’m currently workshopping with my Clear Horizon Aid Effectiveness colleagues how we can make M&E tables more clearly accentuate the M, and how this informs the E. More to come on that front! Byron Pakula will be presenting ‘No plan survives contact with the enemy – monitoring, learning and evaluation in complex and adaptive programming’, which we developed and which takes up key issues raised in ODE’s evaluation, at next week’s Australasian Aid Conference. So check that one out if you’re in Canberra.

What are your thoughts on ODE’s evaluation of monitoring systems?

BreathIn Blog 1 – What are we learning about ToC and reporting?

At a recent Breath In session …

(Wait, a what? A Breath In is like a community of practice: it’s where we get to stop and reflect collectively across the work we and others are doing to test, stretch and create ideas and practice.)

… Jess Dart, Ellise Barkley, Mila Waise, Anna Powell, Liz Bloom and I gathered to reflect on what we have learnt recently about working on place-based initiatives, the generic theory of change model we have all had a hand in developing, and our learnings around evaluation in this space.

And so what have we learnt? What did we come up with? Here are our significant take-aways from the day:

1. It is difficult to develop a generic Theory of Change model for place-based work, because:

The transformative intent and complexity of the work does not lend itself to a single two-dimensional diagram. There are many layers to the work; a common refrain during our discussion about the Theory of Change model was ‘it happens across the model’, for example ‘leadership, that needs to be in every box of the Theory of Change’. Ellise shared a model that was developed by one of the groups she worked with. It was three-dimensional, made of boxes, passageways and levels, and there were choices to be made as you navigated your way through it. I think where we got to is that place-based work is ultimately about transformation, and that transformation needs to happen within each individual, at all levels (like a contagion), before we get instances and then widespread transformation at the system level and see the benefits at the population level.

This transformation often happens at the interstices, or gaps, between the nodes in a system, as well as within the nodes. Interstices can be physical or intangible; they can be literal or figurative gaps. This is why you often hear people discuss a) the importance of relationships and intangibles in this work and b) the importance of experiencing the work to really understand it.

This work is intrinsically linked with movement building. This means that the work becomes inherently political and relational. It forces us to engage with the deep assumptions that underpin our own worldviews, those of others, and those underpinning the systems we are trying to transform. We often have to deconstruct and destroy what is, to rebuild ourselves, our system and our place towards what is desirable. It helps to take a learning stance when doing this work.

2. Evaluation certainly needs repurposing and rethinking in this context

A common starting point with any kind of evaluation is to think through who it is for (the audience). But when you are working on an initiative that aims to transform through collaboration, the work belongs to everyone. There is not one primary audience but many audiences; everything is owned by everyone. Furthermore, a key purpose of the evaluation work seems to be to articulate, explain and demonstrate the work and its impacts. This requires looking at the whole, as well as the sum of the parts (see the contribution point below). Writing to cater for these multiple narratives and audiences is a balancing act. In this context the relationship between communication and evaluation is much closer than evaluators usually like it to be.

A common tool used to guide an evaluation is a set of key evaluation questions. Following on from our discussion about the theory of change, the key evaluation question that came to mind was: how (well) are we transforming? The standard questions of efficiency and effectiveness are not really appropriate. For systems change initiatives, we know it takes a lot more time and resources to do this work; ten years seems to be a good start. We also know that over-investing in clarifying outcomes can divert people from really working out what needs to be done. As the outcomes are still emerging, this is exactly what they are working out. In these adaptive initiatives the theory of change is never finished and never completely right – they need to keep evolving as we learn. Jess likes to say we need to keep them “loose and hold them lightly”.

A common issue in evaluation is how to address contribution; that is, the need to show the distinct lines of contribution of different partners, in terms of how they are contributing to the observed changes and outcomes. This we had greater clarity on. Firstly, Clear Horizon’s “What Else Tool” is a useful starting point for thinking through your contribution story. Secondly, it is important to clearly distinguish between the impacts of the ‘backbone’ and the impacts of the ‘collective’. For example, you may need to have two separate reports or nested reports, but you must acknowledge the different contributions.

Finally, I think we were all reminded of the evaluator’s opportunity (and maybe responsibility) to be an integral part of the transformation effort. This only underscores the importance of investing time to Breath In!

From Mila: Thanks Zazie for the opportunity to reflect on the Breath In session. I would only add:

Whether we are exploring a common theory of change for place-based initiatives, or reporting and evaluation for Collective Impact initiatives, my biggest take-away from the Breath In session is the need to use an equity or social justice lens in our work, as much as scientific, partnership or public policy paradigms. Due to the complex nature of disadvantage and vulnerability experienced by children, young people and families, we are constantly required to adapt, think outside the box and test different interventions.

One thing we know for sure is that different ways of thinking and working are required in response to the variations in context, circumstances and drivers within place. Families, communities and places are dynamic, and our collective understanding of what is desirable, positive, acceptable or challenging for individuals and communities keeps on changing. Hence, developing a generic/common theory of change for initiatives tackling social issues in place is complicated.

The guiding light in these circumstances, and hopefully a common worldview that can help bridge the different disciplines and competing needs, is the set of concepts of human rights and equity that have long supported individuals and communities to reach their full potential against the odds: access, equity, rights and participation.

BreathIn Blog 2 – Did we get to generative listening?

In parallel to our reflections on our work through the Breath In sessions, we have been working out how we can do Breath Ins in a way that is worthwhile for all involved, that respects our associated ‘responsibilities’ and that manages some of the inherent conflicts present in the group.

Participating in the Breath In sessions are a CEO, three consultants, a government employee working in a central backbone, and a backbone leader based out of an NGO. We come together well because we are all practitioners and all have a connection to Clear Horizon.

This is our third Breath In, and it feels as if after a rocky start, we have come to a much better place where some of the obvious conflicts have settled down. There is a much greater level of trust and understanding of each other and our different contexts. This is allowing us to have more open discussions … maybe even generative discussions.

Through place-based work I have been introduced to different theories, one of them being Otto Scharmer’s change management theory, Theory U, which has a strong focus on listening as a means for transformation. He describes four levels of listening:

1. Downloading – “yeah, I know that already…”, re-confirming what I already know (I-in-ego/Politeness). Listening from the assumption that you already know what is being said; therefore you listen only to confirm habitual judgements.

2. Factual – picking up new information: facts, debates, speaking our mind (I-in-it/Debate). Factual listening is when you pay attention to what is different, novel or disquieting compared with what you already know.

3. Empathic – seeing something through another person’s eyes: “I know exactly how you feel”, forgetting my own agenda (I-in-thou/Inquiry). Empathic listening is when the listener pays attention to the feelings of the speaker. It opens the listener and allows an experience of “standing in the other’s shoes” to take place. Attention shifts from the listener to the speaker, allowing for deep connection on multiple levels.

4. Generative – “I can’t explain what I just experienced” (I-in-now/Flow). This deeper level of listening is difficult to express in linear language. It is a state of being in which everything slows down and inner wisdom is accessed. In group dynamics, it is called synergy. In interpersonal communication, it is described as oneness and flow.

I found it useful to reflect on our Breath In journey through these four levels of listening. I can’t speak for everyone else, so from my perspective: I have observed myself do the first level of listening really well! I think I had moments of factual listening (comparing the work across our experiences) and instances of empathic listening with one person at a time, but I’m not sure I was able to reach much beyond that.

I’m curious as to how everyone else felt. I’m also aware that the transformation needs to happen in each of us first before it can happen in the group. So I suppose I have some homework to do!

The above is a reflection on the deeper experience of the Breath In. At a different level, that of developing understanding and theory I think we achieved more than we have in the past. See previous blog!

DFAT Evaluation of Investment Monitoring Systems

At Clear Horizon, we have been grappling with how to effectively – and efficiently – improve the monitoring, evaluation and learning of programmes. Over many years of experience, and across the range of programmes and partners we work with, one thing remains abundantly clear: the quality of monitoring is the cornerstone of effective evaluation, learning and programme effectiveness. In the international development sector, which has some quite large investments operating in extremely complex environments, monitoring is even more important.

At the end of 2017, Byron’s new year’s resolution for 2018 was to “dial M for monitoring” and to put even more emphasis on improved monitoring systems. Having conducted stocktakes of MEL systems across a range of aid portfolios, and been involved in implementing or quality assuring over 60 Department of Foreign Affairs and Trade aid investments, he has seen really clear messages emerge about what works and what doesn’t. This culminated in presentations at the 2018 Australasian Aid Conference and the 2018 Australian Evaluation Conference, where Byron and Damien presented on how to improve learning and adaptation in complex programmes by using rigorous evidence generated from monitoring and evaluation systems.

So we at Clear Horizon welcome the findings and recommendations in DFAT’s Office of Development Effectiveness Evaluation of DFAT Investment Monitoring Systems 2018. Firstly, we welcome the emphasis on improved monitoring systems for investments – it is essential to improving aid effectiveness. Secondly, we strongly agree that higher quality MEL systems are outcome focused, have strong quality assurance of data and evidence, and use data that serves multiple purposes (i.e. accountability, improvement, knowledge generation). Thirdly, we agree that partners and stakeholders with a culture of performance oversight and improvement are essential – this needs to continue to be fostered both internally and externally.

To achieve this, as recommended, it is essential that technical advice and support is provided to programme teams, investment managers and decision makers. This need not be resource intensive, and it must be able to demonstrate its own value for money. However, what is extremely important in this recommendation is that the advice is coherent, consistent and context specific. Too often we see a dependency on a single generalist M&E person within the programme team who is required to provide the whole gamut of advice – covering a range of monitoring approaches, evaluation approaches, different sectors, and sometimes even different countries. Good independent advice often requires a range of people providing input on different aspects of monitoring, evaluation and learning – one reason why at Clear Horizon we have a panel of MEL specialists, with some focusing on evaluation capacity building, others on conducting independent evaluations, and others on building MEL systems.

Standardising expectations and advice across aid portfolios about what constitutes good, fit-for-purpose monitoring, evaluation and learning is essential for all of us. We have been fortunate enough to be involved with developing different models of providing third-party embedded design, monitoring and evaluation advice. The ‘Quality and Improvement System Support’ approach provides consistent technical advice across entire aid portfolios, such as the one developed for Indonesia; the ‘Monitoring and Evaluation House’ in Timor Leste, in partnership with GHD, is based on a neutral broker approach to improving the use of evidence in programme performance; and the ‘Monitoring and Evaluation Technical Advisory Role’ in Myanmar places a stronger emphasis on supporting programme teams through technical and management support.

This report echoes our belief that more monitoring and evaluation is not necessarily the answer, but rather collaborating to do it better and breeding a culture of performance is ultimately what we are striving for.

2019 New Year’s resolutions blog

The New Year has once again reared its head, leaving the dusty resolutions of 2018 on the cupboard shelf next to the re-gifted ‘bad santa’ present from last December’s Christmas party (unless you got home-made sweets or condiments, that is!). Whether our Clear Horizonites had relaxing tropical holidays or productive working staycations here in Melbourne, all team members are ready and eager for an exciting 2019.

Last year saw Clear Horizon’s first steps (of many) into digital evaluation techniques, huge steps towards creating frameworks for evaluating place-based initiatives, and the fine-tuning of Clear Horizon’s approach to evaluating co-design processes. Needless to say, it was a big year! In 2019 we are looking ahead to hone our participatory skills, move further into the digital space and build on the co-design work from 2018.

2019, we’re ready for you!

Some of our staff have shared their goals for this year.

Jen Riley, Digital Transformation Lead

“Digital Transformation super highway for Evaluation”

In 2019, I am looking forward to leading Clear Horizon in digitally transforming from the inside out. I want to learn more about artificial intelligence, machine learning and blockchain, and what these new developments mean for the social sector. I am especially interested in how we harness the digital transformation super highway for evaluation and make data collection, reporting and evaluation more automated, agile and innovative to meet the demands of evaluating complex social issues. I am excited about getting the Clear Horizon Academy, an online digital learning space for co-evaluators, up and going, and seeing Track2Change, our data visualisation and reporting platform, become part of everything we do at Clear Horizon.

Kaisha Crupi, Research Analyst

“Breathing life into quantitative data”

In 2019, I would like to further develop my quantitative skills in evaluation. As I enjoy bringing qualitative voices to life in an evaluation, I would like to work on my skills with quantitative data to ensure that this can also be done. It’s not just about making pretty graphs and charts – it’s about making meaning of these numbers and polishing them so they are as robust and effective as can be.

Georgia Vague, Research Analyst

“Using the context that matters”

Having joined Clear Horizon in late 2018, my resolution for 2019 is two-fold. Firstly, I would like to strengthen my data analysis skills, particularly how to analyse large amounts of data using the most appropriate, context-specific techniques. Secondly, I want to gain confidence in my facilitation skills, particularly in participatory workshops. This means being aware of any unconscious bias that I might hold and really placing the client and participant voice at the centre of evaluations.

Eunice Sotelo, Research Analyst

“Capacity development for all”

If 2018 was a big year of learning and discovery, 2019 is no different. In fact, I want to extend myself further – honing skills in facilitation and stakeholder engagement – while continuing to expand my evaluation toolkit. I’m also keen to dig deeper into capacity building, internally at Clear Horizon and with our clients. I think we can do better at making our practice more inclusive and accessible, and what better way than to ‘teach’ by example.

Ellise Barkley, Senior Principal

“Applying, trialling and improving our approaches to co-design”

In 2019 I am looking forward to continuing my learning with the inspired communities and partners around Australia working to create positive change for families, children and young people. My resolution is to deepen my understanding and practice of designing relevant and flexible approaches and tools that cater for the diverse learning and evaluation needs of these fabulous collectives driving place-based approaches and systems level change. Clear Horizon’s work last year developing the Place-based Evaluation Framework for the Commonwealth and Queensland Governments made good ground towards a relevant framework, and was a fascinating exercise as it was co-designed with many stakeholders. This year, I look forward to applying, trialling and improving on these approaches with partners and clients, and embracing a learning stance through the challenges and successes.

Jess Dart, CEO

“Building co-evaluation – getting everyone involved!”

In 2019 I want to think deeply about how we strengthen practice and tools around collaborative and participatory evaluation – the time has come to re-invigorate this practice! The world of co-design has really begun to make inroads, so the time is ripe to build the practice of co-evaluation. I am going to dedicate my year to it!  I would love to see more diverse stakeholders really engaging in planning and analysis and co-designing recommendations.

Victoria Pilbeam, Consultant

“Learn about and from Indigenous evaluation approaches”

In 2019, I want to learn about and from Indigenous approaches to evaluation. Our team is increasingly getting invited to work with Traditional Owners in natural resource management spaces. We need to understand Indigenous evaluation methodologies to engage respectfully and effectively with rights holders. More broadly in the Sustainable Futures team we are always evaluating at the interface between people and environment.  Evaluation methodologies based on a holistic understanding of people and nature could play an important role in informing our practice.

Qualitative Comparative Analysis – a method for the evaluator’s tool-kit

I recently attended a five-day course on Qualitative Comparative Analysis run by the Australian Consortium for Social and Political Research at the Australian National University. Apart from wanting to be a university student again, if only for a week, I wanted to better understand QCA and its use as an evaluation method.

QCA is a case-based method that attempts to bridge qualitative and quantitative analysis by capturing the richness and complexity of individual cases while, at the same time, attempting to identify cross-case patterns. QCA does this by comparing factors across a number of cases in order to identify which combination/s of factors are most important for a particular outcome.

The strength of QCA is that it enables evaluators not only to identify how factors combine to generate a particular outcome (as outcomes are rarely due to one factor), but also whether there is a single combination of factors, or several different combinations, that can lead to the outcome of interest, and in what contexts these combinations occur. QCA is also ideal for evaluations with medium-sized Ns (e.g. 5 to 50 cases), as in that range there are often too many cases for evaluators to identify patterns without a systematic approach, but too few cases for most statistical techniques.
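To make the cross-case comparison concrete, here is a minimal sketch (in Python) of the first analytic step of crisp-set QCA: building a ‘truth table’ that groups cases by their configuration of binary-coded conditions and checks how consistently each configuration is associated with the outcome. The case data, condition names and consistency cut-off are illustrative assumptions only, not drawn from the course or from any real evaluation, and the subsequent Boolean minimisation step (usually done with dedicated QCA software, such as the QCA package in R) is not shown.

```python
from collections import defaultdict

# Hypothetical, binary-coded cases: each entry is (conditions, outcome).
# Condition names and values are invented for illustration only.
cases = [
    ({"training": 1, "funding": 1, "local_ownership": 1}, 1),
    ({"training": 1, "funding": 0, "local_ownership": 1}, 1),
    ({"training": 0, "funding": 1, "local_ownership": 0}, 0),
    ({"training": 1, "funding": 1, "local_ownership": 0}, 0),
    ({"training": 0, "funding": 0, "local_ownership": 1}, 1),
    ({"training": 1, "funding": 0, "local_ownership": 1}, 0),
]

# Group cases that share the same configuration of conditions (the truth-table rows).
rows = defaultdict(list)
for conditions, outcome in cases:
    configuration = tuple(sorted(conditions.items()))
    rows[configuration].append(outcome)

# For each configuration, report how consistently it is associated with the outcome.
CONSISTENCY_CUTOFF = 0.8  # illustrative threshold, not a fixed rule
for configuration, outcomes in rows.items():
    consistency = sum(outcomes) / len(outcomes)
    verdict = "candidate for sufficiency" if consistency >= CONSISTENCY_CUTOFF else "inconsistent"
    label = ", ".join(f"{name}={value}" for name, value in configuration)
    print(f"{label}  ->  n={len(outcomes)}, consistency={consistency:.2f} ({verdict})")
```

In a full QCA, the consistent configurations would then be logically minimised into the simplest combination(s) of conditions linked to the outcome, and contradictory rows would prompt a return to the cases, as noted in the limitations below.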

I left the course with an understanding of QCA as a useful addition to our evaluation tool-kit. Apart from enabling evaluators to identify patterns across cases, it allows us to test theories of change and in particular, whether the relationship between intermediate outcomes and end of program outcomes holds true or if there are other factors required to achieve higher order outcomes. It can also be used to triangulate the findings of other methods, such as key success factors identified through a contribution analysis.

There are, of course, a number of limitations, such as QCA requiring both expertise in applying the method and in-depth case knowledge, as well as the time needed to collect comparable data across cases and then to return to the data to further define factors and outcomes as contradictions arise when trying to identify cross-case patterns.

If you want a good overview of QCA, including the key steps for undertaking a QCA, check out:

And useful references for applying QCA in evaluation include:

Areas to Consider When Delivering Training

As part of our internal staff capacity building at Clear Horizon, we organise fortnightly learning and development sessions. Last week we discussed adult learning principles and styles, and how these guide the facilitation process of training activities and workshops that we deliver.

In the 1970s, Malcolm Knowles coined the term “andragogy”, which refers to the methods and principles used in adult education. Later, in 1984, he identified six adult learning principles:

  • The need to know: Adults need to know why they need to learn something before they learn it.
  • Self-concept: Adults like self-direction. They grow to be independent learners, responsible for their own decisions.
  • Experience: Adults come to training with a great deal of ‘life’ experience which should be drawn upon and used as a learning resource.
  • Readiness to learn: Adults are more ready and willing to learn things that are relevant to them and that may help them to cope with real life situations.
  • Orientation to learning: Adults learn best when they can immediately apply what they have learnt to real life situations.
  • Motivation: Adults learn best when they are motivated to do so, with intrinsic motivators being more effective than extrinsic ones.

Additionally, adult learning styles are another important area to consider when delivering training. Adult learning styles refer to the learning approaches that individuals naturally prefer in order to maximise their personal learning experience. Peter Honey and Alan Mumford, building on the work of Kolb, identified four adult learning styles:

  • Activists are those people who learn by doing.
  • Reflectors are people who learn by observing and thinking about what happened.
  • Theorists are people who like to understand the theory behind the actions.
  • Pragmatists are people who need to be able to see how to put the learning into practice in the real world.

A question was raised about whether adult learning principles for non-western learners may differ from the principles identified by Knowles. Please share your thoughts on the differences between western and non-western adult learning principles via our Twitter @ClearHorizonAU.

Resources:

Mobbs, Richard. Honey and Mumford. Retrieved from: https://www2.le.ac.uk/departments/doctoralcollege/training/eresources/teaching/theories/honey-mumford

Adult Learning Australia. The Principles of Adult Learning. Retrieved from: https://ala.asn.au/adult-learning/the-principles-of-adult-learning/