APP Evaluation or: How I learned to stop worrying and love failure

I love evaluation because I’m always interested in how we might improve things for people, whether that’s our students, our staff, our sector colleagues, or of course society at large. I also love widening participation (WP) because it’s emancipatory and life-changing, on small and large scales, every single day. That said, life’s never static in HE, and it feels like the sands have really been shifting over the past few months, let alone years. In my role I’m trying to develop an evaluation framework for the full lifecycle of WP. Seems simple enough, in theory. But with work-streams sitting under different directorates with different priorities, practitioners and staff going through upheavals and restructures internally, regulators with changing requirements, and, most importantly, learners and students with evolving needs, there never seems to be the time or the budget to get everything we want, and need, done.
So how can we start to seriously develop our sector-wide evaluation capacities and use evaluation as a tool to really help us achieve our strategic goals? We’re always going to have our own local contexts; what works in place A will not automatically work in place B. But I’m keen not to reinvent the wheel if there is great work out there to learn from. Sometimes exploring and really understanding why something won’t work in place B can be incredibly useful, if we’re open to sharing our knowledge with the sector.

Collaboration and Knowledge Exchange

I’ve met some fantastic evaluator colleagues at organisations all around the country, and there is some important (and, for this evaluation nerd, exciting) work being undertaken. But it’s hard to find without those chance meetings and without the willingness to share. I know calls for publishing evaluations have been alarming and concerning for some, and I certainly can’t speak for the OfS, but having mechanisms and platforms for us to share our work and our findings with each other could save us all time, and has the potential to help us all enhance and refine our interventions and keep iterating together. Sharing expertise and resources, building strong networks and co-ordinating approaches where possible can help us to develop truly robust and fit-for-purpose approaches to evaluating WP activities across the lifecycle.

For the past few years there has been an over-reliance on quantitative data. Yes, we will usually need to know the ‘what’ and the ‘how many’, but without the ‘why’ and the ‘how’ those initial questions are reductive and won’t help us to develop our practice or realise our ambitions. TASO’s enthusiasm for further quantitative methods has been challenging for many of us working in complex and creative settings. I’m a qualitative evaluator at heart, and I’m really encouraged by the murmurings I’m hearing about more qualitative and creative methods coming into play. There are so many fantastic methods that we can use, alongside our quantitative data, to give a really rich picture of what is happening in our interventions. We aren’t bound to use focus groups for Every. Single. Intervention. We can explore interviews, process tracing, most significant change stories, photo journaling, reflexive diaries… the list could go on, so I’ll get off my soapbox for now!

The value of failure

For us to actually reap the benefits as a sector, we need to learn the value of failure. We need to trust our colleagues and feel safe to fail at our institutions and openly within our sector, as long as we can learn something from it. Learning from failure should underpin our work, but it doesn’t always feel safe to do so. Personally, we may feel embarrassed or ashamed; we may even worry for our jobs if we are on precarious contracts. Institutionally, we might be facing a challenging landscape for getting senior buy-in (and budgets), and it might feel like a failure would be used to cut budgets or deprioritise WP work. At the sector level, maybe we feel that we’re ‘too big to fail’; we might worry that our work would damage the reputation of our institutions, or, often worse, our own personal reputation with colleagues we may have known for years. Instead of the necessary and challenging work of engaging with failure, what we often see is a focus on celebratory facts or headline stats about how something works, brushing off the negative outcomes or issues.

Failure isn’t one single or clear thing. Success and failure exist somewhere on a spectrum, or a compass. Different stakeholders will have different value systems and different understandings of failure. I’ve been exploring the work of FailSpace recently to help me start untangling these knotty problems. This AHRC-funded project is helping the cultural sector start to seriously grapple with failure, and I think there are a lot of lessons we could use in WP. The project has developed a framework of degrees of failure and success:
  • Outright failure – even if there have been some elements of success, the prevalence of failures resulted in goals/intentions fundamentally not being achieved. Opposition and criticism are widespread and/or approval and support is virtually non-existent.
  • Precarious failure – failures may slightly outweigh successes and few, if any, of the secondary goals/intentions are achieved. A number of the primary goals/intentions are only partially achieved. Opposition and criticism outweigh approval and support.
  • Tolerable failure – failures may slightly outweigh successes and few, if any, of the secondary goals/intentions are achieved. A number of the primary goals/intentions are only partially achieved. Opposition is small and/or criticism is virtually non-existent, but any support/approval may be limited to specific groups of stakeholders.
  • Conflicted success – failures are fairly evenly matched with successes and the achievement of goals/intentions is varied. Criticism and approval exist in relatively equal measure but vary between different groups of stakeholders. It proves difficult to avoid repeated controversy and debate.
  • Resilient success – successes may slightly outweigh failures and a number of the secondary goals/intentions are not achieved. However, none of the failures significantly impede the fulfilment of the primary goals/intentions. Opposition is small and/or criticism is virtually non-existent, but any support/approval may be limited to specific groups of stakeholders.
  • Outright success – even if there have been some elements of failure, the prevalence of successes resulted in all of the goals/intentions being fully achieved. Criticism and opposition are virtually non-existent, and approval and support is almost universal and comes from a diverse group of stakeholders.
These feel like useful ways for us to start thinking about what we mean when we ask: ‘What works? For whom? Where? When? Why? And how?’ I’m about to embark on a knowledge mobilisation quest in my institution for us to start thinking more productively and honestly about failure. I would like to build this into our framework, to really pin down our understandings of success across our purpose, our processes and our participation work. If we are going to get serious about understanding the impact of our interventions, and ‘what works’, we need everyone, from regulators and sector networks to senior leaders and practitioners on the ground, to really appreciate and acknowledge the value of failure. So that’s another thing for the to-do list then…!

As ever, I’m always keen to nerd out about evaluation, so if you want to get in touch to explore failure in evaluation, drop me a line: R.Long@sussex.ac.uk

And I would definitely recommend a look at the FailSpace website: https://failspaceproject.co.uk

Robyn Long is Research & Evaluation Manager at the University of Sussex.