By Antonie Jetter
I frequently struggle to explain how participatory modeling deals with uncertainty. I found useful guidance in the management literature.
After all, participatory modeling projects and strategic business planning have one commonality – a group of stakeholders and decision-makers aims to understand and ultimately influence a complex system. They do so in the face of great uncertainty that frequently cannot be resolved – at least not within the required time frame. Businesses, for example, have precise data on customer behavior when their accountants report on annual sales. However, by this time, the very precise data is irrelevant because the opportunity to influence the system has passed.
Two key lessons from the management literature concern the nature of uncertainty and how to respond to four major types of uncertainty.
Nature of uncertainty: objective or subjective?
Does the term uncertainty pertain to phenomena that are objectively unknowable? Or is uncertainty subjective because people “feel” uncertain about something? The management literature acknowledges both forms of uncertainty.
Objective uncertainty is not preventable and persists even after all possible efforts have been made to collect data and gain insights. This ‘residual’ uncertainty occurs in many planning efforts because much of the future is unknowable. It can result in the subjective experience of “feeling uncertain”; however, subjective uncertainty may also have other root causes, such as a lack of understanding of the system under study or mistrust in the decision process.
Participatory modeling needs to address both types of uncertainty appropriately: a community’s discomfort with what is unknowable, for example, may lead to requests for more and more data. If modelers give in to this desire, two things can happen. Subjective uncertainty may remain unchanged, which may make it impossible for the community to agree on a course of action. Or subjective uncertainty decreases as more data is presented, even though the collected data does not address all of the objective uncertainty. As a result, the community may feel increasingly at ease with the now seemingly lower levels of uncertainty, while remaining unaware of residual objective uncertainty. This is the case of the so-called “unknown unknowns” or “unk unks”. They are frequently discussed in the management literature because one cannot successfully prepare for or manage what one is not aware of.
State, effect, and response uncertainty and unknown unknowns
So how can modelers conceptualize the different types of uncertainty in their models? Building on earlier literature on uncertainty in business environments, Frances Milliken (1987) offers an interesting framework that differentiates state, effect, and response uncertainty.
State uncertainty refers to a situation where the variables that contribute to a problem (i.e., the system elements) are well understood, but their values are unknown. For example, a modeling team may know how the dollar exchange rate affects the business or how the growth of the population relates to water use. However, they do not know what the exchange rate or the future population count will be.
Computational system models are usually well set up to address this type of uncertainty: the models are run for a range of possible values that the uncertain variables can take. The results of these different input scenarios are interpreted as the range of possible system states.
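This scenario-sweep approach can be sketched in a few lines of Python. The water-use model and the population figures below are invented for illustration and are not from the post; the point is only that a known relationship is evaluated over a range of plausible values for the uncertain input.

```python
# Toy sketch of handling state uncertainty: the relationship between
# population and water use is assumed known, but the future population
# is not, so the model is run over a range of plausible values.

def water_use(population, liters_per_person_per_day=150):
    """Hypothetical model: daily water demand in liters."""
    return population * liters_per_person_per_day

# State uncertainty: we know population drives water use,
# but not what the future population will be.
population_scenarios = [90_000, 100_000, 110_000, 125_000]

outcomes = [water_use(p) for p in population_scenarios]
print(f"possible demand: {min(outcomes):,} to {max(outcomes):,} L/day")
```

The spread of the outputs, rather than any single number, is then interpreted as the range of possible system states.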
Effect uncertainty refers to a situation where a variable is understood to be relevant to a particular problem, but the nature of its impact is not known. For example, a community may observe that its population is aging or that new modes of transportation, such as ride-sharing services, are emerging. However, it does not know how these trends will impact it specifically: How many housing units with parking spaces will be needed in a neighborhood? How many family homes versus homes for singles and couples? Will young people move to the suburbs or live downtown? In system modeling, uncertainty about the model structure – how variables affect each other – is typically considered to be something that needs to be resolved before the model can be useful. Stakeholders can aid this process by offering their perspectives, such as explaining what members of their communities would do.
Increasingly, modelers also use system models in an exploratory fashion to create and test the model outcomes for a range of alternative system structures. Rather than synthesizing knowledge into one model, they thus construct an ensemble of plausible models and explore their impacts.
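The ensemble idea can be illustrated with a minimal sketch. The two candidate effect structures below (linear versus saturating) are invented for illustration; exploratory modeling runs all plausible structures rather than committing to one of them.

```python
# Toy sketch of exploratory modeling under effect uncertainty:
# the same input is run through an ensemble of plausible model
# structures, and the differing outcomes are compared.

def linear_effect(x):
    return 0.2 * x            # plausible structure 1: constant returns

def saturating_effect(x):
    return 2.0 * x / (x + 5)  # plausible structure 2: diminishing returns

ensemble = {"linear": linear_effect, "saturating": saturating_effect}

for name, model in ensemble.items():
    print(name, [round(model(x), 2) for x in (1, 5, 10)])
```

Where the ensemble members disagree is exactly where the effect uncertainty matters for the decision at hand.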
The third category of uncertainty, response uncertainty, is experienced by planners and community decision makers: it is a lack of response options and an inability to predict the likely consequences of a response choice. Coming up with response options is a creative act: city planners faced with the possibility that ride-sharing services may reduce the need for downtown parking, for example, cannot rely on tried-and-true ways to respond to this trend – there are none. Instead, they need to innovate and think about the robustness of their solutions in various possible scenarios. In some cities, a chosen response is to modify building codes to require that newly built parking garages have street-level entrances (rather than ramps) so that they can be converted into apartments or office buildings if they become obsolete.
These types of responses are not generated by or within the system model. But a good system model that creates a deep understanding of the problem at hand can help the process. It can also help evaluate the impacts, including unintended consequences, of the response options under discussion.
But what about those unk unks? Because the modelers and decision makers are not aware of them, they are not explicitly reflected in the model and not considered in decision making. But that does not mean that participatory system modeling does not address them at all.
Unk unks can be caused by the complexity of the system which results in a lack of understanding of how the system behaves. In response, system modeling provides a toolset for exploring the dynamic behavior of complex systems in a systematic fashion. Unk unks can also be caused by so-called “blind spots” – a lack of awareness for important system elements that are consequently not included in the model. Participatory modeling aims to minimize these blind spots by including diverse stakeholder groups and by systematically pooling their insights and perspectives.
Participatory system modeling thus provides approaches for addressing all of the above types of uncertainty. However, to fully leverage them, modelers and communities need clarity about what uncertainties they are struggling with.
Do you find these frameworks helpful? How else do you conceptualize uncertainty in your projects?
Milliken, F. J. (1987). Three types of perceived uncertainty about the environment: State, effect, and response uncertainty. Academy of Management Review, 12(1): 133–143.
Biography: Antonie Jetter is an Associate Professor of Engineering and Technology Management and Director of the Innovation Program in the Maseeh College of Engineering and Computer Science at Portland State University. While still in college, she was on the founding team of a venture-backed start-up company in equipment manufacturing. Her research is focused on improving the management of early stage new product development, which results in better product planning methods, simple and efficient decision heuristics and successful project organization. She applies participatory Fuzzy Cognitive Map modeling in scenario planning, product planning and stakeholder engagement. She is member of the Participatory Modeling pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
This blog post kicks off a series resulting from the second meeting in October 2016 of the Participatory Modeling pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the National Socio-Environmental Synthesis Center (SESYNC).
13 thoughts on “Uncertainty in participatory modeling – What can we learn from management research?”
Hi again Antonie!
Have you published anything on this (except for this blog)? I’d like to refer to this in a paper.
The article was brilliant, thank you. When I think about complexity, I think of a couple of characteristics based on Kellert, Bird and Gleick, who speak not just about complexity but about the development of the system into complexity, such that a trajectory from simple to complex to chaotic can be observed, at least mathematically.
In my opinion, as it relates to design, ideation significantly impacts the rest of the project and, in line with chaos theory, small changes over time have an impact, if not unpredictable outcomes. To summarise, I agree that some framework is necessary, otherwise we will not learn more about the system. I am cautious because, as was mentioned, systems we define as complex are complex precisely because they change over time and the facets of the system are not aware of each other; changes in one facet may therefore not impact the others (or at least not in the way that was intended).
The whole of the phase space is not defined, but defining it may also not be helpful. This piece is a good segue to surfacing and testing approaches, which is brilliant. I believe the framework you have provided is a great step forward to test your ideas.
Regarding Beatrice’s comment, there has been quite a lot of work on uncertainty guidelines in modelling in various areas (e.g. in the Netherlands and for groundwater), though I suspect that the PM angle introduces new issues. This work has included uncertainty matrices, and using diagnostic questions for classifying uncertainties, e.g. Warmink et al. 2010 (http://dx.doi.org/10.1016/j.envsoft.2010.04.011).
I think those approaches provide useful building blocks, but to be of practical use for identifying a course of action, the context needs to be recognised at a higher level. I like Lewis’ suggestion of tipping point uncertainty as a higher level concept, which might be affected by both state and effect uncertainty. I don’t know how I would approach it from a PM perspective, but it does have some interesting distinctive characteristics; e.g., I don’t actually care about eliminating state or effect uncertainty – all I care about is where my (uncertain) state will sit relative to the tipping point (defined by an uncertain effect).
I like the suggested solution to Val’s question too – if stakeholders are paralysed by uncertainty avoidance, don’t expect them to change, but you can try to focus on things they are certain about, and use a variety of tools for doing that. Change the question if you can’t change the information or the attitude. That is probably also applicable to “non-genuine” uncertainty avoiders…
Antonie, I read your article with great interest. I am sharing real applications of your classification in the case of Cyprus, where we have been conducting Structured Democratic Dialogues to help resolve the Cyprus problem.
About the Nature of uncertainty: objective or subjective?
In the case of the many (maybe more than 100) Structured Democratic Dialogues that have been conducted in Cyprus, we have experienced both types of uncertainty. On virtually all occasions, the participants were people from all walks of life, including occasionally some politicians, but the groups have never included enough people who had the means and the power to take decisions that mattered. Not to mention that the key decision maker, i.e., Turkey, was never directly or indirectly included in our deliberations. Therefore, a great deal of the uncertainty was always objective, and thus not preventable. Having said that, we should also point out that while we immersed our participants in visions of ideal futures, they continued to live their daily lives in the realities of the island. They continue to witness daily the lack of progress, the concerns of their fellow citizens and the overall negative milieu. It is this subjective uncertainty that we managed to decrease significantly through the application of a large number of Structured Democratic Dialogues, not only about the Cyprus problem but also in other domains of life; many were organized by Marios Michaelides through the Cyprus Academy of Public Administration and others by my team through numerous European projects implemented by Future Worlds Center (http://www.futureworlds.eu/wiki/Chronological_List_of_SDDPs_by_Future_Worlds_Center_and_Associates). The participants repeatedly experienced the possibility of bridging gaps, developing shared action plans and acting successfully upon these plans. It is this practical experience (i.e., having done it once) that allowed us to decrease subjective uncertainty.
This is the exact reason why today Cyprus is hopeful that people might vote “yes” in a new referendum for unification, despite the fact that the proposed solution will most probably be unsatisfactory: they trust their experience that people ARE able to co-design ideal futures, and they know how this is done.
About Response uncertainty
Again, in the case of Cyprus, this type of uncertainty is very real, especially among the intelligentsia. Many wondered why “clever” and “educated” people were in favour of a “no” in the 2004 referendum on unification based on a UN-proposed plan. Indeed, the “educated” in the “no” pool outnumbered those in the “yes” pool. The explanation is easy when viewed through this prism: they have the mental capacity to extrapolate and “see” problems that will appear in the future, predict possible responses, and thus sense the uncertainty of their effectiveness. In such a case, the status quo is the most “certain” thing. It is a bit analogous to the observation that very intelligent people don’t become very rich, simply because they can “see” the consequences and uncertainties of their potential actions. To make money, one needs either to take risks or to trust one’s instincts.
Thank you for pointing me towards interesting work on Structured Democratic Dialogues. This seems similar in spirit to many participatory modeling projects. I do not know the method in any detail, but I would certainly expect it to help with uncertainty!
Dear Antonie: This is a very interesting approach to uncertainty in complex systems and in participatory modelling, especially regarding what happens after directives derived from already obtained models are implemented, that is, what the attitude of stakeholders becomes when they confront residual uncertainties.
I would suggest that some possible solutions to all these kinds of uncertainty would have to do with Open Online Observatories and also with observatorium facilities managed in local spaces. The worldwide Science of Dialogic Design community has come to this conclusion in Heraklion, Crete, Greece, where an already working facility has begun to experiment with the same and different stakeholder groups, who become observers of the effects of models piloted in local complex situations. This is an incipient program, but it brings hope that humanity can cope with complex socioeconomic problems and design successful public policies that might be reproduced elsewhere, thus reducing uncertainties originating from unknown factors. Please take a look at the Demoscopio description [Moderator note; in October 2021 the following link was not accessible and has been deleted: dialogicdesignscience[dot]wikispaces[dot]com… Demoscopio] and let me know if you find it worthwhile.
Reynaldo, I am new to open online observatories. Very interesting! They remind me of the public engagement platforms that some US cities are using, such as Mindmixer, mySidewalk, and MetroQuest. Are you aware of them? How similar are they to your work?
Your categorizations are clear and useful, thanks.
But when you invited suggestions of alternatives to your framing, I thought of a “tipping point”, particularly in the context of public policy models. I would characterize that as the expectation that certain variables could reach values that, according to the evidence, must bring about discontinuous changes in the model’s behaviour and/or structure. There is uncertainty in predicting the triggering value, the likelihood of reaching it, and what the effects will be. Young adult anti-smoking policies and programs were good examples – the evidence showed that once a sufficient number of teens in a community stopped smoking, the remainder quit more quickly.
That seems to me to combine your notions of state and effect uncertainty into a single but useful modeling pattern. Perhaps there are other patterns?
As an aside, your mention of people’s differences reminded me of an old saying: “Managers are paid the big bucks to tolerate uncertainty!”
Lewis – I have never thought about tipping points in the context of uncertainty. Thank you!
If I understand you correctly, you are referring to a situation where a change in some variables fundamentally changes the “rules of the game”. For example, there is a relationship between anti-smoking programs and young people quitting. Initially, the impact is small (e.g. “one unit of government anti-smoking program leads to 0.2 units of smoking reduction”). At the tipping point, the same “unit of smoking programs” suddenly translates into a much stronger effect (e.g., ”one unit of programs leads to 1.5 units of smoking reduction”). Did I get this right?
If yes, I am wondering if this isn’t effect uncertainty after all: the model structure is uncertain and, as a result, we don’t know the effect of anti-smoking programs. This problem prevails, even if there is no state uncertainty about the input variable. A policy maker may say “We have budgeted 2 units of anti-smoking programs and will implement these programs”. However, if they do not know if they are pre- or post-tipping point, they won’t know if this will translate into 0.4 or 3 units of smoking reduction.
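The arithmetic in this example can be sketched as a toy piecewise model (the 0.2 and 1.5 rates come from the hypothetical units above; the code itself is only illustrative):

```python
# Toy sketch of tipping-point effect uncertainty: the same budgeted
# input yields very different effects depending on which side of the
# (unknown) tipping point the system is on.

def smoking_reduction(program_units, past_tipping_point):
    # units of reduction per unit of program, before vs. after the tipping point
    rate = 1.5 if past_tipping_point else 0.2
    return program_units * rate

budget = 2  # "We have budgeted 2 units of anti-smoking programs"
print(smoking_reduction(budget, past_tipping_point=False))  # 0.4
print(smoking_reduction(budget, past_tipping_point=True))   # 3.0
```

Not knowing `past_tipping_point` is exactly the structural (effect) uncertainty under discussion: the input is fixed, yet the outcome ranges from 0.4 to 3 units.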
I agree with you that there are problems where state and effect uncertainty occur at the same time. In your example, this could be something like “We have budgeted 2 units of anti-smoking programs but we may only get 1 or 1.5 approved” or “We have budgeted these programs, but we may not have enough people to implement them”. In this case, you know neither the state nor the effect.
I still would prefer to keep them separate because, as a modeler, you’d probably address them differently. The effect uncertainty (tipping point problem) could be addressed by getting deeper insights into the model structure (in your example, maybe we need to explicitly model the word-of-mouth effect) or doing exploratory modeling. The state uncertainty could be, among others, addressed through probabilistic approaches.
Thanks for a good conceptual structure for uncertainties in relation to complex systems decision-making in general and for PM specifically. You also started to describe ways by which the different kinds of uncertainties are/can be handled during a PM process. I think this part is very interesting and useful, and that it could be developed further, perhaps into a kind of guide for handling uncertainties in PM. Or, have you already done that?
Thank you, Beatrice!
I like the idea of creating a guideline for managing uncertainty in participatory modelling – I have not done anything like it and am not aware of any, but they may exist. I see one big challenge for any such guideline: how do you correctly diagnose the nature of the uncertainty, so that you can find appropriate approaches to managing it? In practice, these types of uncertainty often exist in parallel and are difficult to discern. Are there diagnosis tools we could adapt?
Antonie, I very much enjoyed reading this blog. Your comment about requests for more and more data or analyses with the expectation that more information will resolve the issues resonated strongly. Some of these requests can be from stakeholders wishing to delay decision-making or who hope to uncover some facts that might sway other stakeholders towards a particular point of view.
I am interested in those genuine stakeholders who believe that more information will reduce their feelings of uncertainty. Are there ways or methods of displaying information, or of allowing stakeholders to interact with the information, that might assist stakeholder groups in getting to that balance where they are able to operate with, rather than being stymied by or blind to, the uncertainty?
Val, thank you for your insightful question – great food for thought!
I think that people and communities simply have a different tolerance for residual uncertainty, regardless of how it is presented to them. In my personal experience, some people are simply more at ease with uncertainty than others: my students have a background in engineering and many of them have a strong mindset of precision, double-checking, and getting it 100% right. I don’t know if they ended up in engineering because of this mindset or if they were taught to think like this in engineering – in either case, it is what it is and there is little I can do about it.
(Side note: “Uncertainty avoidance” is actually considered such an important and long-term stable trait that it is even used to describe and differentiate national cultures).
I therefore work under the assumption that I cannot fundamentally change people’s attitudes toward uncertainty. However, I can change their decision-making approaches and emphasize robust planning, built-in flexibility, and real options. But I do not know if any of these concepts would apply to your projects. What do you think?