Making predictions under uncertainty

Community member post by Joseph Guillaume


Prediction under uncertainty is typically seen as a daunting task. It conjures up images of clouded crystal balls and mysterious oracles in shadowy temples. In a modelling context, it might raise concerns about conclusions built on doubtful assumptions about the future, or about the difficulty in making sense of the many sources of uncertainty affecting highly complex models.

However, prediction under uncertainty can be made tractable depending on the type of prediction. Here I describe ways of making predictions under uncertainty by testing which of several competing conclusions is correct. Suppose, for example, that you want to predict whether objectives will be met. There are two possible conclusions – Yes and No – so prediction in this case involves testing which of these competing conclusions is plausible.

How can you test whether a conclusion is correct? A good way is to look for plausible scenarios that support the conclusion. If you can find a plausible scenario, then the conclusion is plausible. If, despite your best efforts, you cannot find a plausible scenario, then that conclusion is not plausible.

Modelling provides a powerful tool to put this approach into practice – to help find scenarios and to determine whether they are plausible. Creating a model scenario involves specifying 1) the model equations; 2) parameter values that stipulate how the equations apply to a particular case; and 3) inputs to the model that describe external factors. Taken together, all the possible combinations of model equations, parameter values and inputs yield many possible scenarios. We can think of all these possible scenarios as spanning a “model scenario space”, within which there are four options, as demonstrated in the figure below (objectives are met or not; models are plausible or not).

[Figure: The model scenario space, divided into regions according to whether objectives are met and whether the model scenario is plausible]

Every point in the diagram represents a model scenario, and we can divide up the model scenario space according to whether or not objectives are met in that model scenario, and whether or not that model scenario is plausible. From a modelling point of view, prediction under uncertainty is then a matter of searching through and classifying points within the model scenario space. In the figure above there are two plausible models: one representing a scenario where the objectives are met (MY) and one representing a scenario where the objectives are not met (MN).
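
As a concrete illustration, here is a deliberately simple, hypothetical sketch in Python. The model, observations, plausibility threshold and objective are all invented for illustration; a grid search classifies each point of a two-parameter scenario space into the regions of the figure:

```python
import numpy as np

# Hypothetical linear model standing in for the model equations
def model(slope, intercept, x):
    return slope * x + intercept

# Invented observations used to judge plausibility
x_obs = np.array([1.0, 2.0, 3.0])
y_obs = np.array([2.1, 3.9, 6.2])

ERROR_THRESHOLD = 0.5  # assumed: a scenario is plausible if RMSE stays below this
OBJECTIVE = 15.0       # assumed: objective is met if the prediction at x = 8 exceeds this

scenarios = {"MY": [], "MN": []}  # plausible scenarios where the objective is / is not met
for slope in np.linspace(0.5, 3.0, 51):
    for intercept in np.linspace(-1.0, 1.0, 21):
        rmse = np.sqrt(np.mean((model(slope, intercept, x_obs) - y_obs) ** 2))
        if rmse > ERROR_THRESHOLD:
            continue  # implausible region of the scenario space
        prediction = model(slope, intercept, 8.0)
        scenarios["MY" if prediction > OBJECTIVE else "MN"].append((slope, intercept))

print(f"Plausible scenarios where objectives are met (MY): {len(scenarios['MY'])}")
print(f"Plausible scenarios where objectives are not met (MN): {len(scenarios['MN'])}")
```

With these invented numbers both lists come back non-empty, which is exactly the situation discussed below: plausible models exist for both outcomes.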

There are several ways of doing this analysis, many of which are well-established in various scientific disciplines. The method to use depends on the level of uncertainty, and whether it is more appropriate to define what is plausible based on exploring possibilities, expressing existing views or making use of existing data.

Exploring possibilities

Exploring possibilities is suitable when little is known. Very few assumptions are made, and the scenarios generated are intended to prompt a response rather than provide a clear answer. Examples of these techniques include vulnerability analysis, scenario discovery and breakpoint analysis, each with a slightly different focus.

[Figure: Chart of techniques for exploring possibilities]
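
As one example of how such techniques work, breakpoint analysis asks how far an input can change before the conclusion flips. The sketch below is hypothetical (the water-balance stand-in and the target are invented) and simply locates that flipping point by root-finding:

```python
from scipy.optimize import brentq

# Hypothetical water-balance stand-in: storage remaining after a season
def storage(rainfall_mm):
    return 0.4 * rainfall_mm - 120.0

TARGET = 50.0  # assumed objective: at least 50 units of storage remain

# Breakpoint: the rainfall at which the conclusion flips. storage() increases
# with rainfall, so below the root the objective fails and above it is met.
breakpoint_mm = brentq(lambda r: storage(r) - TARGET, 0.0, 1000.0)
print(f"Objective fails once rainfall drops below {breakpoint_mm:.0f} mm")
```

The breakpoint itself is the output: rather than answering Yes or No, it prompts a discussion of whether rainfall below that level is considered plausible.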

Expressing existing views

Expressing existing views relies on different views of the world, or different expert opinions, and tests whether these views result in different outcomes. This can take the form of defining entire scenarios, or just defining parameter bounds.

[Table: Techniques for expressing existing views]
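
A hypothetical sketch of the parameter-bounds variant (the yield model, the experts’ bounds and the target are all invented): evaluate the outcome over each expert’s bounds and see whether their views lead to different conclusions.

```python
import itertools

# Hypothetical crop-yield model; both experts accept the equations
# but hold different views on plausible parameter values
def yield_t_ha(growth_rate, stress_factor):
    return growth_rate * 10.0 * (1.0 - stress_factor)

# Invented parameter bounds elicited from two experts
bounds = {
    "Expert A": {"growth_rate": (0.8, 1.2), "stress_factor": (0.1, 0.3)},
    "Expert B": {"growth_rate": (0.5, 0.9), "stress_factor": (0.3, 0.6)},
}
TARGET = 7.0  # assumed objective: a yield of at least 7 t/ha

for expert, b in bounds.items():
    # Checking the corners suffices because the model is monotonic in each parameter
    corners = [yield_t_ha(g, s)
               for g, s in itertools.product(b["growth_rate"], b["stress_factor"])]
    lo, hi = min(corners), max(corners)
    verdict = "met" if lo >= TARGET else ("not met" if hi < TARGET else "uncertain")
    print(f"{expert}: yield in [{lo:.1f}, {hi:.1f}] t/ha -> objective {verdict}")
```

With these invented bounds, Expert B’s view rules the objective out while Expert A’s leaves it open, showing how differing views can translate into differing conclusions.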

Making use of existing data

Making use of existing data is typically seen as the core business of quantitative modelling. What is plausible is determined by observations. Various techniques exist for capturing uncertainty implied by data. Set membership and statistical techniques make different assumptions about errors in data. Optimisation-based approaches can be used with either of these, and aim to efficiently find model scenarios that are consistent with the data, and support each of the possible conclusions. Optimisation-based approaches are not yet widely used, but are perfectly suited to making predictions under uncertainty by testing whether competing conclusions are plausible.

[Table: Techniques for making use of existing data]
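
A minimal sketch of the optimisation-based idea, with invented data, model and thresholds: search for the lowest and highest predictions among models that remain consistent with the data. If the resulting range straddles the objective, both conclusions are plausible.

```python
import numpy as np
from scipy.optimize import differential_evolution

x_obs = np.array([1.0, 2.0, 3.0, 4.0])
y_obs = np.array([2.2, 3.8, 6.1, 8.0])  # invented calibration data
ERROR_THRESHOLD = 0.4                   # assumed data-consistency limit (RMSE)

def rmse(params):
    slope, intercept = params
    return np.sqrt(np.mean((slope * x_obs + intercept - y_obs) ** 2))

def prediction(params):
    slope, intercept = params
    return slope * 10.0 + intercept  # quantity of interest: the model's value at x = 10

def extreme_prediction(sign):
    # Penalised search: push the prediction as low (sign = +1) or as high
    # (sign = -1) as possible while staying consistent with the data.
    objective = lambda p: sign * prediction(p) + 1e3 * max(0.0, rmse(p) - ERROR_THRESHOLD)
    result = differential_evolution(objective, bounds=[(0.0, 5.0), (-2.0, 2.0)], seed=0)
    return prediction(result.x)

lo, hi = extreme_prediction(+1), extreme_prediction(-1)
print(f"Data-consistent predictions span [{lo:.1f}, {hi:.1f}]")
print(f"'Objective met' (>= 20) plausible: {hi >= 20.0}; 'not met' plausible: {lo < 20.0}")
```

Here data-consistency is enforced with a simple penalty term for illustration; a set-membership error bound or a statistical likelihood threshold could take its place.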

What if plausible models exist for both outcomes?

As in the diagram above, uncertainty means that you might find that both outcomes are plausible, so you cannot give a definite answer. To refine your answer so that it is more useful, there are three choices: find ways to reduce uncertainty, revise the research question, or accept the uncertainty and devise appropriate actions to accommodate it.

I’m keen to hear examples of other ways people have dealt with uncertainty through modelling. I also welcome reactions to the ideas proposed and examples of how the various techniques mentioned have been employed.

Further information:

Guillaume, J. H. A., Kummu, M., Räsänen, T. A. and Jakeman, A. J. (2015). Prediction under uncertainty as a boundary problem: A general formulation using Iterative Closed Question Modelling. Environmental Modelling & Software, 70: 97–112.
Online: https://doi.org/10.1016/j.envsoft.2015.04.004

Biography: Joseph Guillaume is a Postdoctoral Researcher with the Water and Development Research Group at Aalto University, Finland. He is a transdisciplinary modeller with a particular interest in uncertainty and decision support. Application areas have focussed primarily on water resources, including rainfall-runoff modelling, hydro-economic modelling, ecosystem health, global water scarcity and global food security. Ongoing work involves providing a synthesis of the many ways we communicate about uncertainty, and their implications for modelling and decision support. He is a member of the Core Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

This blog post is one of a series resulting from the first meeting in March 2016 of the Core Modelling Pursuit. This pursuit is part of the theme Building Resources for Complex Action-Oriented Team Science funded by the US National Socio-Environmental Synthesis Center (SESYNC).

Why are interdisciplinary research proposals less likely to be funded? (Reblog)

The first empirical support for a long-standing complaint by interdisciplinary researchers was recently published in the leading journal Nature. The Australian National University’s Lindell Bromham, Russell Dinnage and Xia Hua showed that interdisciplinary research is less likely to be funded than discipline-based research proposals (Nature, 534, 684–687 (30 June), DOI: 10.1038/nature18315).

They cleverly applied a technique from evolutionary biology that examines relatedness between biological lineages, using a hierarchical classification of research fields rather than an evolutionary tree. The relative representation of different field-of-research codes and their degree of difference were used as a proxy measure for interdisciplinarity.

The results, based on 5 years of data from the Australian Research Council’s Discovery program, are robust and are unaffected when number of collaborators, primary research field and type of institution are taken into account.

What does it mean? Both main potential interpretations raise concerns and require action. The results may support the grumbling of interdisciplinary researchers about unfair review processes, where reviewers with no interdisciplinary research experience hold sway over which interdisciplinary research is funded. It is also possible that interdisciplinary research proposals are simply not as good as discipline-based ones.

What is required for adequate peer-review of interdisciplinary grant applications? I recently laid out the key issues in a paper for Palgrave Communications (Bammer, G. 2016. ‘What constitutes appropriate peer review for interdisciplinary research?’ Palgrave Communications, 2: 16017). This paper argued that involving only reviewers from the disciplines was likely to significantly short-change an interdisciplinary proposal.

First, the unknowns (expressed as research questions) that are addressed in interdisciplinary proposals may include issues of concern to disciplines, but are unlikely to be confined to those. Interdisciplinary research is also likely to address unknowns that worry stakeholders and unknowns that are central to the problem, both of which may well be outside the purview of disciplines. For example, in research I conducted on the feasibility of prescribing pharmaceutical heroin to dependent heroin users as a new treatment option, we addressed concerns raised by police, one of the major stakeholders, about the possibility that heroin prescription would cause a “honeypot effect”, an issue that was not on the radar of any of the disciplines involved. Similarly we addressed concerns about fostering more permissive attitudes to illicit drug use, which are integral to changing prohibitionist policies. Such issues may seem irrelevant to discipline-based reviewers and certainly cannot be adequately assessed by them.

Further, the discipline-based unknowns that are relevant to the interdisciplinary research, such as estimating the number of heroin users (demography), assessing the ethics of heroin prescription (philosophy), and estimating the likely impact on the illicit drug market (economics), may seem pedestrian to discipline-based researchers, even though they are critical to assessing an interdisciplinary problem such as the feasibility of heroin prescription. Indeed, in this project only the economists undertook research that was ground-breaking from a disciplinary perspective.

Discipline-based researchers are also ill-equipped to evaluate the integrative processes that an interdisciplinary proposal plans to use, such as assessing the systems view taken, the rationale by which disciplines and stakeholders were chosen for consideration in the study and how all the different insights will be combined. Along with the different unknowns addressed, integrative methods are what set interdisciplinary research apart.

These considerations are relevant to one kind of interdisciplinary research, where several diverse disciplines and stakeholders tackle a complex real-world problem. But there are also other kinds of interdisciplinary research, ranging from one person borrowing from other disciplines to tackle a problem in a new way (as Bromham and colleagues did in their paper), to teams working at the interface of two or more disciplines (which may lead to the development of a new discipline, as when quantum physics and biology combine to form quantum biology), to closely related disciplines converging to develop a new technology. Different peer review processes may well be required to do justice to these different kinds of interdisciplinarity.

At the moment these issues are in the funders’ too-hard basket. And who can blame them? Apart from complaints about the inadequacy of current processes, the interdisciplinary community is too unorganised to suggest adequate ways forward. Key tasks include providing a useable definition of interdisciplinarity (that encompasses the main kinds), highlighting central issues for peer-review of each kind, and providing an adequate college of peers to undertake effective peer-review.

And what if the results by Bromham and colleagues stem from the quality of the interdisciplinary research proposals submitted for funding not being of a high enough standard? That’s also the purview of the interdisciplinary community. Just as the disciplines police their own standards, interdisciplinarians need to make clear what distinguishes outstanding interdisciplinary research from poor interdisciplinary research.

If we don’t organise, our research will always be judged by outsiders, who are less well equipped to do so than we ourselves are.

This blog post was first published on 11 August 2016 on the LSE Impact blog as Why are interdisciplinary research proposals less likely to be funded? Lack of adequate peer-review may be a factor and is reposted under a Creative Commons license (CC BY 3.0).

Advice to graduate students on becoming “translational”

Community member post by Alexis Erwin


In an earlier post on this blog, Mark Brunson posed the questions: How does an ecologist become “translational”? What training is needed to venture beyond the lab or university and to engage with the potential beneficiaries or users of research? Here I offer my own thoughts as someone who started working to “become translational” halfway through a traditional ecology Ph.D. program.

Although the focus of this blog post is on translational ecology and on specific resources for graduate students in the U.S., I suggest the ideas are more widely applicable.

Two frameworks for scoping

How can all the possibilities for understanding and acting on a complex social or environmental problem be elucidated? How can a fuller appreciation of both the problem and the options for tackling it be developed, so that the best approach to dealing with it can be identified? In other words, how can a problem be scoped?

The point of scoping is to illuminate a range of options. It moves those dealing with the complex problem beyond their assumptions and existing knowledge to considering the problem and the possibilities for action more broadly.

Practicalities, however, dictate that not everything can be included, so scoping is inevitably followed by boundary setting.

Social science identities in interdisciplinary research and education

Community member post by Eric Toman


What does it mean to include ‘a social scientist’ in a team tackling complex problems? Here I focus on complex environmental problems and how biophysical and social scientists work together. I’m curious if social scientists face the same issues in other problem areas, such as health.

Things have improved since my early academic career, when I was often asked to justify why a social scientist deserved a seat at the table when discussing environmental questions. It seemed that even supportive natural scientists were motivated to engage their social science colleagues only to ‘fix’ some type of problem caused by people (e.g., politicians, decision-makers, managers, the “general public”).

While it’s now normal for social scientists to be included, they tend to be lumped together, unlike the biophysical scientists who are differentiated into a range of disciplines with relevant specialization areas.

A process model for teaching interdisciplinary research

Community member post by Machiel Keestra


How can we effectively teach interdisciplinary research to undergraduate and masters students? What is needed to encompass research ranging from cultural analysis of an Etruscan religious symbol to the search for a sustainable solution for tomato farming in drying areas? Given that there is no predetermined set of theories, methods and insights, as is the case with disciplinary research, what would an interdisciplinary textbook cover? How can such a textbook accommodate the fact that interdisciplinary research usually requires students to collaborate with each other, for which they also need to be able to articulate their own cognitive processes?