Argument-based tools to account for uncertainty in policy analysis and decision support

Community member post by Sven Ove Hansson and Gertrude Hirsch Hadorn

Sven Ove Hansson (biography)

Scientific uncertainty creates problems in many fields of public policy. Often it is not possible to satisfy the demanding information requirements of standard methods of policy analysis, such as risk analysis or cost-benefit analysis. This seems to be the case, for instance, for long-term projections of regional trends in extreme weather and their impacts.

Gertrude Hirsch Hadorn (biography)

However, we cannot wait until science knows the probabilities and expected values for each of the policy options. Decision-makers often have good reason to act although such information is missing. Uncertainty does not diminish the need for policy advice to help them determine which option it would be best to go for.

When traditional methods are insufficient or inapplicable, argument-based tools for decision analysis can be applied. Such tools have been developed in philosophy and argumentation theory. They provide decision support on a systematic methodological basis even when the demanding information requirements of standard methods such as risk analysis or cost-benefit analysis cannot be met.

Reasoning with argument-based tools

An argument consists of an inference from one or several premises to a conclusion. Argument analysis scrutinizes whether or to what extent the conclusion is supported by the premises. To this end, argument analysis identifies the positions in a debate as well as the range of possible reasons that may speak for or against these positions. It furthermore reconstructs reasons and positions as inferences from premises to conclusions and assesses whether inferences are correct.
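To make this concrete, here is a minimal sketch (purely illustrative, not part of any particular argument-analysis toolkit): an argument is represented as premises plus a conclusion, and its deductive correctness is checked in a simple propositional setting by confirming that no assignment of truth values makes every premise true while the conclusion is false. The example statements are hypothetical.

```python
from itertools import product

# Minimal illustrative sketch: an argument as premises plus a conclusion.
# Each statement is a function of a truth-value assignment to named atoms.
# Deductive validity is checked by brute force over all assignments: the
# argument fails if some assignment makes every premise true but the
# conclusion false. (Argument analysis in practice also weighs weaker,
# non-deductive forms of support; this covers only the simplest case.)

def is_valid(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # counterexample: premises true, conclusion false
    return True

# Hypothetical example: "If emissions rise, flood risk rises" and
# "Emissions rise", therefore "Flood risk rises" (modus ponens).
premises = [
    lambda a: (not a["emissions_rise"]) or a["flood_risk_rises"],  # if-then premise
    lambda a: a["emissions_rise"],
]
conclusion = lambda a: a["flood_risk_rises"]

print(is_valid(premises, conclusion, ["emissions_rise", "flood_risk_rises"]))  # True
```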

Argument-based tools for decision analysis

The concept of argument analysis is wide and covers a large and open-ended range of methods and tools, including tools for conceptual analysis, structuring decisions, assessing arguments, and evaluating decision options.

Argument-based tools can be used to systematize deliberations if, for instance, probabilities or values are undetermined, or further information is lacking, uncertain or contested. Such gaps can concern information about which options are available, how to frame them, and what their potential consequences may be. See the figure below for an overview of argument-based tools to assess uncertainty of components in a decision.

Argument-based tool to assess uncertainty of components in a decision

(Image: Gertrude Hirsch Hadorn)
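As a loose illustration of the bookkeeping such structuring involves (a hypothetical sketch, not the tool shown in the figure), each component of a decision can be recorded together with its uncertainty status and the arguments that speak for or against relying on it. All component names and arguments below are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: record each decision component (options, consequences,
# probabilities, values) with its uncertainty status and the arguments
# bearing on it, so that deliberation can target the weakest components.

@dataclass
class Component:
    name: str
    status: str  # e.g. "well-established", "uncertain", "contested"
    pro: list[str] = field(default_factory=list)
    con: list[str] = field(default_factory=list)

decision = [
    Component("options: reinforce dikes vs. managed retreat", "well-established"),
    Component("consequences: regional extreme-weather impacts", "uncertain",
              pro=["trend appears across projections"],
              con=["regional models disagree on magnitude"]),
    Component("probabilities: flood frequencies beyond 2050", "contested"),
    Component("values: weighing property loss against relocation burdens", "contested"),
]

for c in decision:
    print(f"{c.status:>16}: {c.name}")
```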

The argumentative turn in policy analysis is a widened rationality approach that scrutinizes inferences from what is known and what is unknown in order to support deliberation on decisions. It includes and recognizes the normative components of decisions and makes them explicit in order to help find reasonable decisions with democratic legitimacy. Such legitimacy has to be grounded in a social framework that assigns a large role to rational argumentation.

To find out more:
Hansson, Sven Ove and Hirsch Hadorn, Gertrude (eds). (2016). The Argumentative Turn in Policy Analysis – Reasoning about Uncertainty. Springer: Cham, Switzerland. Online (DOI): 10.1007/978-3-319-30549-3

This book, written by an international team of philosophers, is the first comprehensive presentation of argument-based methods for decision analysis. Its case studies include decisions on flood risk governance, decisions taken in the course of the 2008 financial crisis, uncertainty assessment for deciding on the clean-up of a nuclear waste site, and research on climate geoengineering and synthetic biology that will raise policy problems in the future.

Biography: Sven Ove Hansson is professor in philosophy at the Department of Philosophy and History, Royal Institute of Technology, Stockholm. He is editor-in-chief of Theoria and member of the Royal Swedish Academy of Engineering Sciences. His research includes contributions to decision theory, the philosophy of risk, moral and political philosophy, logic, and the philosophy of science and technology.

Biography: Gertrude Hirsch Hadorn is an adjunct professor at the Department of Environmental Systems Science, ETH Zurich. She has worked in environmental ethics and in the philosophy of environmental and sustainability research with case studies in the fields of climate change and ecology. She has contributed to the methodology of transdisciplinary research, the analysis of values in science, the epistemology of computer simulations, and the analysis of uncertainty and decision-making.

Overturning the design of outcome measures

Community member post by Diana Rose

Diana Rose (biography)

Outcome measures in research about treatment and service provision may not seem a particularly controversial or even exciting domain for citizen involvement. Although the research landscape is changing – partly as a result of engaging stakeholders in knowledge production and its effects – the design of outcome measures has been largely immune to these developments.

The standard way of constructing such measures – for evaluating treatment outcomes and services – has serious flaws and requires an alternative that grounds them firmly in the experiences and situations of the people whose views are being solicited.

Standard measure generation is top-down. It starts from the literature, written by clinicians and academics, and collects together all existing scales on a given topic, also produced by clinicians and academics. This generates a large number of ‘items’, which are then whittled down through complex statistical processes to ensure the measure is psychometrically robust and thus to develop a new and hopefully ‘better’ measure of what is being assessed. Occasionally the new scale will be ‘concept checked’ by potential treatment and service users, but that is both uncommon and a very limited role.
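For readers unfamiliar with those statistical processes, here is a simplified, purely illustrative sketch of one common step, using simulated data and an arbitrary threshold (it is not a description of any particular team's analysis): items whose scores correlate poorly with the rest of the scale are dropped, and internal consistency is then summarised with Cronbach's alpha.

```python
import numpy as np

# Purely illustrative sketch of one common item-reduction step, with simulated
# data: drop items whose corrected item-total correlation falls below an
# arbitrary threshold, then report Cronbach's alpha for the reduced scale.

rng = np.random.default_rng(0)
n = 200                                    # simulated respondents
trait = rng.normal(size=n)                 # latent construct being measured
good = np.column_stack([trait + rng.normal(scale=0.8, size=n) for _ in range(6)])
noise = rng.normal(size=(n, 2))            # two items unrelated to the construct
scores = np.column_stack([good, noise])

def cronbach_alpha(x):
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def corrected_item_total(x, i):
    rest = np.delete(x, i, axis=1).sum(axis=1)   # scale total excluding item i
    return np.corrcoef(x[:, i], rest)[0, 1]

keep = [i for i in range(scores.shape[1]) if corrected_item_total(scores, i) >= 0.3]
print("items kept:", keep)                       # the unrelated items should drop out
print(f"alpha of reduced scale: {cronbach_alpha(scores[:, keep]):.2f}")
```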

The first problem with this standard approach is that not much can ever change, because new scales are so contingent on what has gone before. But more significantly, this method ensures that the measures assess what clinicians and academics think is important in what is being evaluated. This may be light years away from what is important to those on the receiving end of services. Their insights are rendered entirely invisible by the usual method just described.

My work is mainly in mental health, although the insights described here are transferable. My team and I are ourselves mental health service users as well as researchers (we refer to ourselves as user-researchers). We have devised a completely different way of generating outcome measures that has now been successfully used nearly a dozen times.

It starts with qualitative research with people who have used or experienced the treatment or service we want to evaluate, in order to gain their perspective. We run focus groups and expert panels (here the users are the experts), facilitated by researchers who themselves have experience of what is being assessed. This ‘participatory method’ is an effort to break down the power structures that usually pervade the research endeavour – even though this is never spoken of. We are all part of the same community because we all have experience of mental health challenges and treatment – not exactly the same experiences, but with much in common.

From intensive qualitative work like this we gradually build up a mixed-methods measure, constantly checking with participants that it reflects their concerns. When it is finally ready we do indeed undertake psychometric assessment. We are not anti-science!

In my area of mental health, we have found that participants with diagnoses of psychosis deliver more robust psychometric results, for example very high test-retest reliability, than are usual for such scales. Interestingly, this flies in the face of dictums in psychometric textbooks that ‘subjects’ must be ‘cognitively intact’ for psychometric assessment. It suggests that scales developed by peers are more likely to make sense to respondents and are therefore more robust.
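For readers unfamiliar with the term, test-retest reliability is usually reported as the correlation between scores from two administrations of the same measure to the same people. The sketch below is a minimal illustration with simulated data, not our actual results.

```python
import numpy as np

# Minimal illustration of test-retest reliability with simulated data:
# the same respondents complete the measure twice, and the two sets of
# total scores are correlated. Values near 1 indicate a stable measure.

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, size=100)              # stable underlying score
time1 = true_score + rng.normal(scale=3, size=100)     # first administration
time2 = true_score + rng.normal(scale=3, size=100)     # retest some weeks later

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability (Pearson r): {r:.2f}")  # roughly 0.9 with these settings
```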

Nevertheless, the method is not perfect, with three major problems. First, breaking down power relations in this context is difficult. It can fail completely or give rise to complex issues about reciprocity and degrees of disclosure by user-researchers.

Second, like any participatory method, it relies on ‘gatekeepers’ and crucial groups may be missed – for example those thought to ‘lack capacity to consent’ may be excluded, as may others whom gatekeepers regard as too ‘vulnerable’. The views of such people are crucial yet including them is incredibly hard.

Third, the measures do not please everyone. Out in the wider community most seem to like them, but because we believed the measures needed to be ‘easy to complete’, we have been accused of envisioning mental health service users as ‘simple-minded’. This was a wake-up call.

Paradoxes abound, but this method does turn the standard approach on its head and, in our view, is grounded in and stays closer to the people whom all this research is supposed to benefit.

What do you think? Do you have relevant experience to share?

For more information see:
Rose, D., Evans, J., Sweeney, A., and Wykes, T. (2011). A model for developing outcome measures from the perspectives of mental health service users. International Review of Psychiatry, 23, 1: 41-46. Online (DOI): 10.3109/09540261.2010.545990

Biography: Diana Rose PhD is Co-director of the Service User Research Enterprise and Professor of User-Led Research at the Institute of Psychiatry, Psychology and Neuroscience, King’s College London. She has also been treated by mental health services all her adult life and uses that experience to spear-head consumer-led research.

Designing for impact in transdisciplinary research

Community member post by Cynthia Mitchell, Dena Fam and Dana Cordell

Cynthia Mitchell (biography)

Starting with richly articulated pictures of where we would like to be at some defined point in the future has powerful consequences for any human endeavour. How can we use such “Outcome Spaces” to guide the conception, design, implementation, and evaluation of transdisciplinary research?

Our Outcome Spaces Framework (Mitchell et al., 2017) considers three essential impacts:

(1) improving the situation,
(2) generating relevant stocks and flows of knowledge, and
(3) mutual and transformational learning by the researcher/s and involved participants. Continue reading

Whose side are we on and for whom do we write?

Community member post by Jon Warren and Kayleigh Garthwaite

Jon Warren (biography)

In 1967 Howard Becker posed the question to academics: “Whose side are we on?”

Becker was discussing the question during the time of civil rights, the Vietnam war and widespread social change in the US. He sparked a debate about objectivity and value neutrality which had long featured as part of the social sciences’ methodological foundations and which has implications beyond the social sciences for all academics.

Kayleigh Garthwaite (biography)

What relevance do these ideas have now, in an era when academics and their research are becoming increasingly commodified? Academics are under growing pressure from their own institutions and fellow professionals to gain more funding, publish more papers and make more impact. Questions of social justice and professional integrity are at risk of being swamped by these forces, allied to unscrupulous careerism.

We argue that the question now is not only who academics serve but also who we write for. Continue reading

Critical Back-Casting

Community member post by Gerald Midgley

Gerald Midgley (biography)

How can we design new services or strategies when the participation of marginalized stakeholders is vital to ethicality? How can we liberate people’s creativity so we can move from incremental improvements to more fundamental change?

To answer these questions, I have brought together insights from Russ Ackoff and Werner Ulrich to develop a new method that I call Critical Back-Casting.

Russ Ackoff, writing in the 1980s, is critical of organizations that focus on incremental improvements without ever asking whether they are doing the right thing in the first place. Thus, they are at risk of continually ‘improving’ the wrong thing, when they would be better off going for a more radical redesign. Ackoff makes two far-reaching prescriptions to tackle this problem. Continue reading

Building a better bridge: The role of research mediators

Community member post by Jessica Shaw

Jessica Shaw (biography) (Photograph by Chris Soldt)

What, and who, are research mediators? And are they the key to linking research with policy and practice?

There has long existed a gap, perhaps a chasm, between the worlds of research and of policy and practice. All too often, policymakers and practitioners do not use research evidence when making key decisions, while researchers design entire programs of research without a complete understanding of the needs of those on the ground doing the work. Because of this divide, we're left wondering: how do we get individuals to use the most relevant research findings when making personal healthcare decisions? How do we get school officials to choose evidence-based curricula? How do we get legislators to develop scientifically sound policies? Continue reading