Making predictions under uncertainty

By Joseph Guillaume


Prediction under uncertainty is typically seen as a daunting task. It conjures up images of clouded crystal balls and mysterious oracles in shadowy temples. In a modelling context, it might raise concerns about conclusions built on doubtful assumptions about the future, or about the difficulty in making sense of the many sources of uncertainty affecting highly complex models.

However, prediction under uncertainty can be made tractable depending on the type of prediction. Here I describe ways of making predictions under uncertainty by testing which of several competing conclusions is correct. Suppose, for example, that you want to predict whether objectives will be met. There are two possible conclusions, Yes and No, so prediction in this case involves testing which of these competing conclusions is plausible.

How can you test whether a conclusion is correct? A good way is to look for plausible scenarios that support it. If you can find a plausible scenario, then the conclusion is plausible. If, despite your best efforts, you cannot find one, then that conclusion is not plausible.

Modelling provides a powerful tool to put this approach into practice – to help find scenarios and to determine whether they are plausible. Creating a model scenario involves specifying 1) the model equations; 2) parameter values that stipulate how the equations apply to a particular case; and 3) inputs to the model that describe external factors. Taken together, all the different model equations, parameter values and inputs yield many possible scenarios. We can think of all these possible scenarios as spanning a “model scenario space”, within which there are four options, as demonstrated in the figure below (objectives are met or not; model scenarios are plausible or not).

[Figure: The model scenario space. Each point is a model scenario; the space is divided according to whether objectives are met and whether the scenario is plausible, with plausible models MY (objectives met) and MN (objectives not met) marked.]

Every point in the diagram represents a model scenario, and we can divide up the model scenario space according to whether or not objectives are met in that model scenario, and whether or not that model scenario is plausible. From a modelling point of view, prediction under uncertainty is then a matter of searching through and classifying points within the model scenario space. In the figure above, there are two plausible models: MY, which represents a scenario where the objectives are met, and MN, which represents a scenario where they are not.
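To make this concrete, here is a minimal sketch in Python of searching and classifying a model scenario space. Everything in it – the candidate equations, parameter values, inputs, supply threshold and plausibility rule – is invented for illustration, not taken from the post.

```python
import itertools

# A model scenario combines (1) model equations, (2) parameter values
# and (3) inputs describing external factors.
# Toy example (assumed): two candidate equations for future water demand.
equations = {
    "linear": lambda pop, k: k * pop,
    "saturating": lambda pop, k: k * pop / (1 + pop / 200_000),
}
parameter_values = [0.8, 1.0, 1.2]   # candidate values for k (assumed)
inputs = [60_000, 90_000, 120_000]   # projected population (assumed)

SUPPLY = 100_000  # objective: demand stays below supply (assumed threshold)

# Search the scenario space, classifying every point on two axes:
# are the objectives met, and is the scenario plausible?
for name, k, pop in itertools.product(equations, parameter_values, inputs):
    demand = equations[name](pop, k)
    met = demand <= SUPPLY
    # Plausibility would come from experts or data; this rule is a placeholder.
    plausible = 0.9 <= k <= 1.1
    print(f"{name:>10}, k={k}, pop={pop:>7}: "
          f"objectives {'met' if met else 'not met'}, "
          f"{'plausible' if plausible else 'implausible'}")
```

If the plausible points include scenarios with both outcomes, we are in exactly the MY/MN situation shown in the figure.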

There are several ways of doing this analysis, many of which are well-established in various scientific disciplines. The method to use depends on the level of uncertainty, and whether it is more appropriate to define what is plausible based on exploring possibilities, expressing existing views or making use of existing data.

Exploring possibilities

Exploring possibilities is suitable when little is known. Very few assumptions are made, and the scenarios generated are intended to prompt a response rather than provide a clear answer. Examples of these techniques include vulnerability analysis, scenario discovery and breakpoint analysis, each with a slightly different focus.

[Table: Techniques for exploring possibilities, including vulnerability analysis, scenario discovery and breakpoint analysis]
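To give a flavour of one of these techniques, the sketch below is a toy breakpoint analysis: it bisects an uncertain input to find where the conclusion flips from “objectives met” to “objectives not met”. The rainfall–yield model, threshold and bounds are all assumptions made up for this example.

```python
def objectives_met(rainfall_mm):
    """Toy model (assumed): crop yield depends linearly on rainfall."""
    crop_yield = 2.5 * rainfall_mm  # yield per mm of rainfall (assumed)
    return crop_yield >= 1000       # objective threshold (assumed)

# Bisect between bounds known to bracket the breakpoint.
lo, hi = 0.0, 1000.0
assert not objectives_met(lo) and objectives_met(hi)
while hi - lo > 0.1:
    mid = (lo + hi) / 2
    if objectives_met(mid):
        hi = mid
    else:
        lo = mid
print(f"Objectives fail when rainfall drops below about {hi:.0f} mm")
```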

Expressing existing views

Expressing existing views relies on different views of the world, or different expert opinions, and tests whether these views result in different outcomes. This can take the form of defining entire scenarios, or just defining parameter bounds.

[Table: Techniques for expressing existing views]
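As a minimal sketch of the parameter-bounds version, suppose two experts give different bounds on a demand growth rate, and we test whether the two views lead to different conclusions. The toy model and every number below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def demand_after(growth_rate, years=20, base=70_000):
    """Toy model (assumed): demand grows exponentially from a base level."""
    return base * (1 + growth_rate) ** years

SUPPLY = 120_000  # objective: demand stays below supply (assumed)

# Each view is expressed as bounds on the growth-rate parameter.
views = {"expert A": (0.00, 0.02), "expert B": (0.02, 0.05)}

for name, (lo, hi) in views.items():
    rates = rng.uniform(lo, hi, size=10_000)
    met = demand_after(rates) <= SUPPLY
    print(f"{name}: objectives met in {met.mean():.0%} of sampled scenarios")
```

Here expert A's view only supports the conclusion that objectives are met, while expert B's view supports both conclusions, so the difference in views matters.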

Making use of existing data

Making use of existing data is typically seen as the core business of quantitative modelling. What is plausible is determined by observations. Various techniques exist for capturing the uncertainty implied by data; set membership and statistical techniques, for example, make different assumptions about errors in the data. Optimisation-based approaches can be used with either of these, and aim to efficiently find model scenarios that are consistent with the data and that support each of the possible conclusions. Optimisation-based approaches are not yet widely used, but are perfectly suited to making predictions under uncertainty by testing whether competing conclusions are plausible.

[Table: Techniques for making use of existing data]
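A minimal sketch of the optimisation idea, using SciPy with a toy linear model, invented observations and an arbitrary misfit cut-off: for each competing conclusion, find the best-fitting scenario that supports it, then judge whether that scenario is still acceptably consistent with the data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented observations from a roughly linear process (assumed data).
t = np.arange(10.0)
observed = 2.1 * t + np.array([0.3, -0.2, 0.1, -0.4, 0.2,
                               0.0, -0.1, 0.3, -0.3, 0.1])

def misfit(slope):
    """Sum of squared errors between the toy model and the observations."""
    return float(np.sum((slope * t - observed) ** 2))

LIMIT = 40.0      # objective: value at t = 20 stays below this (assumed)
MAX_MISFIT = 5.0  # plausibility cut-off on the misfit (assumed)

# "Objectives met" requires slope <= LIMIT / 20; "not met" is the reverse.
met = minimize_scalar(misfit, bounds=(0.0, LIMIT / 20), method="bounded")
not_met = minimize_scalar(misfit, bounds=(LIMIT / 20, 5.0), method="bounded")

for label, result in [("objectives met", met), ("objectives not met", not_met)]:
    verdict = "plausible" if result.fun <= MAX_MISFIT else "implausible"
    print(f"{label}: best slope {result.x:.2f}, "
          f"misfit {result.fun:.2f} -> {verdict}")
```

In this invented example both conclusions turn out to be plausible, which is exactly the situation discussed next.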

What if plausible models exist for both outcomes?

As in the diagram above, uncertainty means that you might find that both outcomes are plausible, so you cannot give a definite answer. To refine your answer so that it is more useful, there are three choices: find ways to reduce uncertainty, revise the research question, or accept the uncertainty and devise appropriate actions to accommodate it.

I’m keen to hear examples of other ways people have dealt with uncertainty through modelling. I also welcome reactions to the ideas proposed and examples of how the various techniques mentioned have been employed.

Further information:

Guillaume, J. H. A., Kummu, M., Räsänen, T. A. and Jakeman, A. J. (2015). Prediction under uncertainty as a boundary problem: A general formulation using Iterative Closed Question Modelling. Environmental Modelling & Software, 70: 97–112.
Online (DOI): 10.1016/j.envsoft.2015.04.004

Biography: Joseph Guillaume is a Postdoctoral Researcher with the Water and Development Research Group at Aalto University, Finland. He is a transdisciplinary modeller with a particular interest in uncertainty and decision support. Application areas have focussed primarily on water resources, including rainfall-runoff modelling, hydro-economic modelling, ecosystem health, global water scarcity and global food security. Ongoing work involves providing a synthesis of the many ways we communicate about uncertainty, and their implications for modelling and decision support. He is a member of the Core Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

This blog post is one of a series resulting from the first meeting in March 2016 of the Core Modelling Pursuit. This pursuit is part of the theme Building Resources for Complex Action-Oriented Team Science funded by the US National Socio-Environmental Synthesis Center (SESYNC).

10 thoughts on “Making predictions under uncertainty”

    • Thanks for your interest. The software used depends on the type of model and how the scenario is created. I tend to work with quantitative models in data-centred programming environments like R and Matlab – steep learning curve, but well suited to working with very complex integrated scenarios.

      There are a number of dedicated tools available for other specific modelling paradigms like system dynamics, Bayesian networks and agent-based models. Kelly (Letcher) et al. 2013 (http://dx.doi.org/10.1016/j.envsoft.2013.05.005) discuss the advantages of each of these approaches.

      From the point of view of this article, stakeholder-friendly tools are generally best suited for approaches involving expressing existing views. Exploring possibilities benefits from tools like the EMA Workbench (http://simulation.tbm.tudelft.nl/ema-workbench/contents.html), and making use of existing data from optimisation and identifiability tools like those in PEST (http://www.pesthomepage.org/), UCODE [Moderator update – In September 2023, this link was no longer available and so the link structure has been left in place but the active link deleted: igwmc.mines.edu / freeware / ucode] and hydromad ([Moderator update – In March 2024, this link was no longer available and so the link structure has been left in place but the active link deleted: hydromad.catchment.org]).

  1. Nice article, thank you! I’d add that there are a couple of rather psychological aspects to consider (see, for example, the book “Decisive” by Chip Heath and Dan Heath), as well as the risk of confusing correlation with causation, which calls for an explorative approach that includes more factors; this is sometimes easier with qualitative modelling (see know-why.net).

    • Hi Kai, Thanks for your input. The various ways of testing if a conclusion is correct definitely do have a range of different psychological implications. I find it interesting that “testing” sounds like a potentially hostile devil’s advocate approach, but there are in fact multiple means of achieving the same ends with different psychological (and social) impacts.

      I agree it’s important to distinguish between correlation and causation when creating a model. However, when exploring possibilities or expressing existing views, the model scenario is simply a means of capturing a way of thinking about the world, and the resulting conclusions. The collection of models on know-why seems to be a nice example of this.

