Evaluation and policymaking: Creative and interdependent

I have seen a few Twitter threads recently highlighting the coincidence (or not) that many evaluators have a background in music. When I first drafted this blog post, I immediately drew the same parallel from my own experience, largely because I believe there is a case to be made for the creative nature of evaluation and policymaking. Throughout school I leaned towards both music and the humanities. I enjoyed learning about history and geography, and equally loved to create and play music with friends. I thrived in a school community that gave me and my friends the space to make music together in our break times (many of us still play together now). Given this history, I don't think it is any surprise that I have ended up in a role related to policymaking in schools and have dedicated most of my adult life to the study and practice of evaluation in education. I believe policymaking and evaluation are inherently creative endeavours, and I feel privileged to work in this field.

This blog post begins by viewing policymaking and evaluation as a creative process, then considers the interdependent nature of the evaluation and policymaking process, and ends with a short analysis of the potential barriers we face in living out creative and interdependent policymaking and evaluation in widening participation (WP).

Creative policymaking and evaluation

Since 2015 I have worked professionally for higher education institutions across England in the evaluation of WP programmes, and I am currently in the middle of a PhD at the University of Bristol researching the institutional practice of WP evaluation and its influence on practice and policy decision-making. Since April I have been developing a policy and evaluation framework for a multi-academy trust. Throughout my short career working closely with school and WP practitioners, I have come to appreciate how creative and complex the process of developing and implementing policies, and then evaluating their implementation and effectiveness, can be. It is well known that policies are far more than compliance exercises or a series of documents left in a computer file. Practitioners care deeply and often go above and beyond the statutory purpose of policies to deliver programmes and interventions that seek to benefit our students and communities. They see how policies are interconnected with one another and how, if one falls, others fall with it. I wish to emphasise, therefore, that in order to achieve the additional intent of policies, above and beyond their legal and regulatory requirements, we need to think of policymaking and evaluation as a creative process.

There are many definitions of creativity, and it is a difficult concept to pin down. On the whole, most definitions assert that creativity requires originality, novelty, utility, effectiveness, authenticity, and aesthetics (Corazza, 2016; Runco & Jaeger, 2012). As a process, creativity can be a pragmatic endeavour used to enhance our ability to solve problems and achieve effective, useful outcomes. Such circumstances arise in developing and implementing policies and their associated programmes, and in evaluating their effectiveness. We are often making decisions with incomplete information, attempting to solve problems of social justice and inequality with limited resources and budgets, and we want to make these programmes aesthetically pleasing for participants – we put our own stamp on our practice. Thus, every programme and policy designed to enhance the social and economic wellbeing of our students is a creative endeavour. Viewed from this position, policymaking and evaluation are in essence the reason we do what we do, and this opens up opportunities for innovation and development.

Interdependent Policymaking and Evaluation

When I was a graduate student, my advisor critiqued one of my essays saying, "Catherine, you can't see the wood for the trees". That comment is forever tattooed on my mind. He was right: my essay was a mess, and I couldn't quite make clear all of the different points I was trying to make. Probably much like this blog post. In the world of WP, seeing the bigger picture is difficult because the picture is complex; it requires active participation and communication between a wide variety of actors who affect, and are affected by, the policy. These may include students, parents, schools, and a large swathe of institutional staff within admissions and faculties/departments. Each facet of WP is unique, and each person who is either affected by or affecting an institution's WP policy views it and lives it out through a different lens and from a different perspective. Each area of WP within higher education is also intertwined with other institutional strategies, so to understand one facet of WP we need to understand how it interacts with other areas of the institution (e.g. WP access and recruitment, student success, pedagogy and the curriculum). We are reliant on each other to make it work.

WP is well positioned in this sense: the development of national subscription services and networks such as the Higher Education Access Tracker (HEAT), the East Midlands Widening Participation Research and Evaluation Partnership (EMWPREP), the Network for Evaluating and Researching University Participation Interventions (NERUPI), the National Education Opportunities Network (NEON), and the Forum for Access and Continuing Education (FACE) actively connects evaluators and practitioners from across the country to discuss practices and consider new approaches to their work.

Indeed, when we are designing our policies and programmes and planning to evaluate them, we should actively involve people who are directly and, when possible, indirectly affected by them. Of course, the value of practitioner-led inquiry in WP is well known (Gazeley, Lofty, Longman & Squire, 2019), and many institutions and networks advocate for participatory evaluation approaches. These practices highlight the inherently collaborative nature of the policymaking and evaluation setting. Reflexivity is important here, as we consider why we have chosen certain people or groups to be involved over others, and the overarching values driving our decisions.

Potential barriers

In practice, many WP practitioners are making creative evaluative decisions about their programmes and their effects, developing collaborations across departments and institutions, and sharing knowledge and experiences (particularly during the COVID-19 pandemic). Still, it is useful to critically reflect on our practice and consider potential barriers that may affect the cultures of evaluation and policymaking in which we work.

First, with the increased emphasis on best practice in WP evaluation, there is a risk of over-emphasising the use of particular methodologies over others. For example, the OfS recently issued standards of evidence for evaluation: Type 1 is a narrative supported by a theory of change, Type 2 is empirical research, and Type 3 is a design that can support claims of causality (recommended as a quasi-experiment or randomised controlled trial). We have to be mindful that whatever methodology or approach to evaluation we choose, our decisions will have consequences. These consequences may relate to the way an evaluation is used or misused, how credible an evaluation is perceived to be by certain stakeholders, and how evaluation affects participants and communities. One type of evaluation consequence, known as the 'constitutive effects' of evaluation (Dahler-Larsen, 2014), can occur when evaluation is driven by a specific methodology and programmes become a reproduction of the evaluation methods used to assess their effectiveness (Anderson, 2020). For example, if an institution starts to focus on the need to use a particular methodology, rather than building an evaluation design around the needs of stakeholders, programmes may be altered to fit the logistical requirements of that methodology. When this occurs, we risk performing tokenistic evaluations that are not comprehensive and do not meet the needs of most stakeholder groups.

Indeed, many institutions and evaluators in WP are actively opposing this type of practice (e.g. Clements & Short, 2020). There are many more examples of creative evaluation practice on the FACE blog, such as a post by Jo Astley and Luke Gordon-Calvert at the University of Derby on the use of reflective diaries in evaluation, and a post by Chris Bayes and Martin Walker at Lancaster University on applying research-informed practice to WP. Further, the presentations from the latest NEON symposium (2020) reflect the diverse and creative evaluation practices occurring within institutions.

Nonetheless, we need to keep checking in and reflecting on our wider practices and the ways in which evaluation is systematised within higher education. Left unchecked, higher education institutions and their WP departments can become 'evaluation machines', which tend to standardise evaluation inputs and outputs so that evaluations cover a large number of programmes, strengthening risk management but reducing the complexity of the information generated (Dahler-Larsen, 2012). In these cases, problematic aspects of programme practice may go unnoticed, such as the use of performance indicators that enable institutions to target participants who would likely succeed without an intervention (Harrison & Waller, 2017). Evaluation machines can reinforce constitutive effects. It is important, therefore, that we continue to reflect on the evaluation culture being developed within our institutions.

The strength of the WP community and the active collaboration across institutions, especially amongst WP evaluators and practitioners, mean we are able to build evaluation capacity within higher education institutions in a way that supports evaluation for learning and improvement. We are already doing this, and can continue to do so, by combining our knowledge and skills in methodology, evaluation theory and approaches, and by working collaboratively alongside professionals within our institutions. Above all, we should continue to revisit the basics: deeply consider the needs of our communities and students, do our best to involve them in the process, and identify the structural barriers that may be preventing us from addressing those needs creatively.

Blog by Catherine Kelly - PhD Researcher at the University of Bristol and an Evaluation Practitioner
