Alongside the concerns raised by the contemporary understanding of policymaking, important questions emerge for evaluation practice: do we need to think more deeply about what we risk overlooking while we are busy strengthening our technical skills to understand ‘what works’? Are we trading ‘judgement’ in for something else? As evaluators, (how) do we understand the nature and practice of policymaking, and is that understanding reflected in how we approach our work?
Within the policy sciences, it has long been recognised that policymaking involves trade-offs between competing values and debates about how to act, debates that are not resolved by technical evidence alone. This has led to increased questioning of the validity of the evidence-based movement and of the related ‘technocratic’ understanding of how research-based knowledge is used.
Different ways of conceptualising policy processes have emerged as a ‘corrective’ to this way of thinking. One example is the ‘argumentative turn’, which emphasises the importance of language, values, argumentation, deliberation and collective judgement in these processes. It follows that producing policy-relevant knowledge requires putting these elements at the forefront of the analytical task, with evaluators explicitly becoming “deliberative practitioners”.
When it comes to evaluation, these concepts are far from new. Making value-judgements is central to evaluating, and evaluators have approaches and tools designed to elicit the often multiple, competing views about what a policy or programme problem is and where the solution lies. However, evaluation has also seen an increased focus on methods that answer impact questions, and standard evaluation questions often relate to how programmes and policies are implemented. Much harder to come across are questions about whether any given objective is socially desirable in the first place.
An exploratory review of the evaluation literature points to something worth thinking about: the argumentative and deliberative ‘turns’ present in the wider policy sciences are reflected in discussions about the role of values in evaluation, in the paradox evaluation finds itself in (the demand to be objective versus evaluation’s central task of making value judgements), in attempts to renew evaluation’s “moral function”, and in a tradition rooted in deliberative democratic theories. However, these strands may also be struggling to progress, both because of the challenges of practical implementation and because of an evidence-based discourse that, if not adhered to, may put the profession itself in question.
This lunchtime session provided an opportunity to explore these and other questions about the relationship between policy and evaluation, reflecting TIHR’s commitment to ensuring that evaluation provides useful and usable learning for all those concerned.