By Kat Smith and Paul Cairney
How can we improve the way we think about the relationship between evidence and policy? What are the key insights that existing research provides?
1. Evidence does not tell us what to do
It helps reduce uncertainty, but does not tell us how we should interpret problems or what to do about them.
2. There is no such thing as ‘the evidence’
Instead, there is a large number of researchers with different backgrounds, making different assumptions, asking different questions, using different methods, and addressing different problems. Synthesising their research can be useful for policymakers, but risks providing advice from an overly narrow perspective. Similarly, focusing on a particular set of experts may exclude important insights from other disciplines.
3. Policy-relevant evidence is contestable and open to interpretation
Researchers interpret data differently and debate the implications, which can lead to scientific controversy. Making a policy recommendation involves a further step of contestation and interpretation, which can pose dilemmas for researchers.
4. Many people and organisations are involved in translating evidence for policy
They each provide a selective focus on some evidence at the expense of other findings, and add their own take on the implications. As interpretations of evidence are shared, ideas may change.
5. Networks and institutions act as evidence filters
Networks are the relationships between policymakers and influencers. Institutions represent the formal and informal rules guiding practices. Both help explain how evidence is used in policy: relationships help facilitate trust in the messenger, while the rules help policymakers decide whose evidence is policy-relevant. This may involve ‘filtering out’ ideas that don’t fit with existing policy thinking.
6. It can be strategically useful to present decisions as ‘evidence-based’, but policy is necessarily political
Many scientists promote the idea that policy should be ‘evidence-based’, and politicians often agree. Yet, it is also easy to show that governments rarely conduct evidence-based policymaking. Rather, policymaking is necessarily guided by values, beliefs and previous experience, alongside potentially relevant evidence. Markers of scientific credibility have symbolic value in policy. Downplaying the political use of evidence is a political act.
7. Policymaker attention to research evidence can be maximal or minimal
Researchers can experience long periods of feeling ignored, followed by a sudden lurch of attention to everything they do. Or, their ability to influence policy with evidence resembles a ‘window of opportunity’, in which attention to a problem rises, there is only time to consider already-feasible solutions, and policymakers have a fleeting opportunity to act. Such ‘windows’ often prompt ‘scientific showdowns’, in which many actors promote their preferred solution, drawing selectively on evidence to try to enhance the credibility of their solution.
8. Numbers and projections about the future are seductive but potentially flawed
The visual representation of numbers can be an efficient and reassuring way to present evidence (so ‘killer charts’ can be particularly attractive). Models and projections are certainly helpful during crises, with projections providing the data ‘gold dust’ that policymakers seek. Yet the oft-quoted claim that ‘all models are wrong’ has merit, and an over-reliance on modelling can be risky. This is especially so if a model’s data or assumptions are flawed, or if policymakers’ attention is pulled away from other kinds of useful evidence.
9. Refining policy-relevant evidence is frustrating but necessary
Scientists seek to improve their predictions and revise their conclusions as they incorporate new information. This process can be frustrating for the policymakers trying to project certainty, and confusing to media and public audiences. Yet, making choices based on outdated information or advice is worse.
10. The ‘good governance of evidence’ requires transparency
It is tempting for policymakers to retreat behind closed doors. Secrecy can help foster the kinds of frank debate that are necessary in science and politics, but it can also fuel media criticism and public concern. The ‘good governance of evidence’ requires a willingness to be open about uncertainty and to learn from mistakes.
In conclusion, are there other key tips that you would provide? What other important insights do you feel the existing literature on evidence and policy provides?
This blog post is a modified version of the inaugural post for the Evidence and Policy blog, which also contains numerous links to further reading, and can be found at: https://evidenceandpolicyblog.co.uk/2020/05/11/welcome-to-the-evidence-policy-blog-our-reflections-on-the-field/
Biography: Kat Smith PhD is a professor of public health policy at the University of Strathclyde in the UK. Her key research interests are analysing who influences policies impacting on public health and how, tackling health inequalities, and studying innovation in health taxes.
Biography: Paul Cairney PhD is a professor of politics and public policy at the University of Stirling in the UK. His key research interests are in policy processes, including the idea of ‘evidence-based policymaking’: https://paulcairney.wordpress.com/ebpm/.
13 thoughts on “Ten insights on the interplay between evidence and policy”
Thanks to the authors for an interesting post. Let me philosophize a little on the topic of “facts and evidence” points in politics.
Some philosophers argue that politicians (in general, all people) who are concerned with facts and evidence are divided into two groups.
For the first group, facts do not need proof. The facts provided, and their predicted consequences, fit conveniently with their own experience and their ideas about social life.
For the second group, not only the facts themselves need proof, but also the ways in which they were obtained and interpreted.
Probably, the overwhelming majority of politicians come from the first group. You must agree, it is easier to say that politics and sociology are not exact sciences. Therefore, in modern politics, it is enough to rely on an arbitrary basis (for example, the historical experience of the ruling Conservative, Labour, Republican or other party) and to solve political and social problems in the order in which they appear.
Probably, the politicians from the second group will not agree with such a linear perception and interpretation of political and social processes. If we instead assume that these processes are cyclical in nature, then there are immediately grounds for using concepts such as the goal of the current cycle, the stages in the direction of the goal, and the periods in achieving the goal. These concepts make it possible to model a cyclic process and to describe the content and the quantitative and qualitative parameters of its stages and periods. In this case, the facts are important. They are evidence of the normal or abnormal state of the cyclic process, and they can tell us whether the goal of the process will be achieved. As a result, a sufficient (basic) scientific rigour for politics and sociology is formed.
Probably, if we do not strive to build such scientific rigour into politics and sociology, we will bury humanity’s faith in sustainable development, as well as the hope that sustainable development can be effectively managed.
The use of deliberative democracy processes to introduce an additional step between evidence and political decision-making may be of interest in this context. Citizen deliberation may create opportunities to put different sources of evidence ‘on the table’ and to bring diverse public perspectives and values to bear on how evidence should inform decisions and actions. Readers may be interested in a process being run in Canberra in coming months called Connecting to Parliament.
Wendy – looks like an interesting project. Getting all the perspectives on the table is a good first step.
Lovely and compelling summary. Thank you.
As a policy practitioner, I have a simple equation for policy. Policy = evidence + principles + politics. Evidence never gives you the complete story for all of the reasons you have captured. But I would suggest a nuance to your point six. My experience in government was that people need to separate “pure” politics – positioning to get an advantage over an opponent or compromises made to see an idea become reality – from “policy principles” which are often used to guide decision making where the evidence base is incomplete, contested or uncertain. As a practical example, a policy principle we used when I chaired a prime ministerial welfare task force was “if you are receiving welfare and you have the capacity to work (taking into account your caring responsibilities as well as your work ability), we will ask you to seek work to your level of capacity”. In a sense this was capturing the political philosophy of the government of the day, but it also provided a deeper grounding for the detailed design of policy measures. Good principles also provide an anchor for the way we look at evidence – if society’s intent is for people to work if they can, the evidence questions become “how to achieve this in a way that maximises well being” and “in what circumstances are we harming well being by asking people to seek work”. The evidence from these questions in turn flows into the detail of policy design. There is always, of course, room for evidence to turn a fundamental construct on its head – but this is rarer than we tend to think.
My experience for what it is worth is that well-articulated principles help manage politics and tend to give policies greater longevity. This particular policy construct has endured through many changes of government.
Thanks again for a great article.
Strongly agree that principles are foundational to the architecture of theoretical and practical knowledge systems.
Sean – good points! I would suggest that principles can (and should) be treated as part of the policy model (or, as some say, policy theory). Both model and principle combine to provide a set of assumptions about how the world works (or does not work!). And, while our policy models are typically linear (so non-systemic and so prone to failure), our principles (mental models) are typically less systemic (e.g. our race is better than your race) leading to decisions that are still more prone to failure. So, the process of surfacing and clarifying those principles becomes critically important.
Excellent summary! I would add that policy is made according to the political needs/desires of the makers; so (as you suggested), they will filter the data and accept only the data which agrees with their pre-set perspectives. Another important part of the overarching problem is that the world of research is terribly limited in its ability to provide highly useful research. We provide, instead, a fragmented mess of incoherent notions. While, in contrast, political operatives provide narratives that are clear and simple. While wrong and misleading, those narratives are easier for the public to understand.
Good analysis. Mixed methods approaches assist with critiquing the performativity of numbers, and participatory approaches test propositional knowledge against practical knowledge. Without budgets to invest in primary research, and with increasing reliance on AI to trawl through vast scales of data as a proxy test of truth, rather than testing the specific subjective assumptions in the architecture of inquiry, we are unlikely to be able to do much more than go with the so-called flow. Lived experience will remain dominant, and an inability to critique the assumed meaning of lived experience (culture) will remain our existential threat.
Thank you very much for this concise and highly useful summary. Given all that has been said, if the goal is to inform rather than influence policy, how can we build more holistic and transparent communication forums? What if evidence providers could succinctly disclose their assumptions, values, results, and the manner in which they were vetted . . . in an open digital venue? It would be far from perfect, but the public could at least compare the stated assumptions, values, results, and vetting strategies of the participating evidence providers. Are there any simple ways to make our evidence communication more holistic and transparent? Thank you again for this thought-provoking post.
Thank you, Tim. I don’t know of such a forum or initiative, and I’m not sure of its value in practice. In my view there are already many obstacles for providers of evidence, and this action may have the unintended consequence of discouraging even more.
Tim – I think you have hit the nail on the head. I very much agree that such a forum would be a great improvement in our process, perhaps using a platform such as kumu.io. Following a few simple rules, stakeholders could present their causal diagrams for comparison and discussion. I imagine that participants in the discussion could click on various parts of a diagram to join focused discussions and evaluate the data relevant to that part. Also, diverse participants could add to a common diagram, and even synthesize the presented diagrams. One example is here (and, as a systems thinker can see, the policy is unlikely to succeed because the diagram is not systemic): https://kumu.io/Guswn/drakenstein-housing-policy-2010
Thank you Steven and Paul. I think your thoughts on my comment raise some key issues. To Paul’s point, a certain level of transparency seems very helpful for evidence vetting and communication, but adding excessive communication hurdles in the form of strong “transparency requirements” could actually have a paradoxical effect. For example, it seems useful to clarify who your employer is when you present evidence, but insisting that all data be publicly available could eliminate the consideration of medical and epidemiologic studies in policy deliberations (the data is sensitive and can’t ethically be released). To Steven’s point: Thank you so much for directing me to this platform, and it looks like I have some reading to do. At first glance, this type of approach raises the profile of systems thinking (and causal diagrams) and this seems quite helpful to me. If we can find interactive tools for visualizing the stakeholders and the problems more holistically, perhaps we can illuminate unexpected common ground between stakeholders. At the very least we may learn what we need to study next! Thanks again for this conversation.
Yes – transparency might also backfire because it results in the generation (or availability) of so much data that it may become impossible to sort through it all. https://www.researchgate.net/profile/Steven_Wallis2/publication/338860652_Exceeding_the_limits_Commentary_on_The_limits_of_transparency/links/5e9b0a3ba6fdcca7892279d3/Exceeding-the-limits-Commentary-on-The-limits-of-transparency.pdf Happy to provide reading material 😉 and, if you like, we can talk about it to focus the reading a bit: email@example.com. You mention finding “common ground” between stakeholders (which some may interpret as finding “common goals”). I prefer to think of it as the recognition of a shared situation or problem. What we have found from mapping is that it is not necessary to find common goals. Instead, we can use causal mapping to help stakeholders find “interdependent” goals (much more systemic that way).