Moving from models that synthesize to models that innovate

By Pete Loucks

Pete Loucks (biography)

When computer technology became available for developing graphical interfaces for interactive decision support systems, some of us got excited about the potential of directly involving stakeholders in the modeling and analysis of water resource systems. Many of us believed that generating pictures showing the impacts of whatever design and management decisions or assumptions a user might want to make would give that user a better understanding of the system being modeled and of how its performance might be improved.

We even got fancy with respect to performing sensitivity analyses and displaying uncertainty. Our displays were clear, understandable, and colorful. Sometimes we witnessed users actually believing what they were seeing, and we occasionally had to remind them that our models were, and would continue to be, at best approximations of reality. It was fun developing and using such tools, and indeed most models used today to analyze river basins, groundwater, and coastal zones incorporate interactive, graphics-based decision support frameworks.

But what we modelers haven’t done yet is to figure out how to make our models suggest planning and management options that we haven’t thought of before. This would be an especially important feature for integrated water resources planning and management. Integrated implies that our models include the links to all the other major components of our social, economic and, where applicable, ecological environments.

Right now our models can only inform us about a system we defined when we developed them. They analyze and synthesize, but they don’t innovate. They cannot identify better assumptions regarding our model parameters and their values. They cannot suggest different system boundaries or components. They cannot suggest different designs or policies based on components we hadn’t already included in our models.

For a simple example, suppose we are modeling the design of a water storage tank and using models to identify its least-cost length, width, and height. Wouldn’t it be nice if the output of such a model could also suggest the possibility of a cylindrical, or spherical, tank and display its appropriate dimensions and cost? Similarly, if we are modeling a proposed reservoir, say on the Mekong River, then in addition to learning how fast it may fill with sediment under different hydropower production and sediment management policies, wouldn’t it be nice if our model could suggest other sites, other designs, and other operating policies that might be preferable to what has been modeled, and show the associated tradeoffs among hydropower production, sediment, and fish passage?
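To make the tank example concrete, here is a minimal sketch of the cross-shape comparison such a model might one day volunteer on its own. It assumes (since the post leaves the cost function unspecified) that cost is simply proportional to surface area and that all tanks are closed; the function name and this cost assumption are illustrative, not part of the original example:

```python
import math

def cheapest_shapes(volume):
    """Closed-form least-surface-area designs for a fixed volume.

    Assumes cost is proportional to surface area (a stand-in for an
    unspecified cost function) and that every tank is fully closed.
    Returns the minimal surface area for each candidate shape.
    """
    # Rectangular box: for a closed box the optimum is a cube of side V^(1/3)
    side = volume ** (1 / 3)
    box_area = 6 * side ** 2

    # Closed cylinder: the optimum has height = 2 * radius,
    # so area = 2*pi*r^2 + 2*pi*r*h = 6*pi*r^2 with r = (V / (2*pi))^(1/3)
    r_cyl = (volume / (2 * math.pi)) ** (1 / 3)
    cyl_area = 6 * math.pi * r_cyl ** 2

    # Sphere: radius from V = (4/3)*pi*r^3, area = 4*pi*r^2
    r_sph = (3 * volume / (4 * math.pi)) ** (1 / 3)
    sph_area = 4 * math.pi * r_sph ** 2

    return {"box": box_area, "cylinder": cyl_area, "sphere": sph_area}
```

For a unit volume this gives roughly 4.84 area units for the sphere, 5.54 for the best cylinder, and 6.0 for the best box (a cube) — exactly the kind of unrequested alternative a model that innovates, rather than merely synthesizes, would surface alongside the tradeoffs.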

This may be asking too much, but I don’t think so. Waiting for us to use in more creative ways are massive data sets, search engines such as Google’s with their ability to access the information available on the Internet, Google Earth, voice recognition (which kids take for granted when asking their cell phones questions), parallel cloud computing, and even three-dimensional virtual reality environments that you can step into (available today in various museums).

I am not at all optimistic about our ability to model and predict human behavior, but with such enhanced decision support systems, stakeholders—including decision makers—can become part of the overall model. We model builders and analysts can then observe what decisions they make before and during simulations of these systems and, more importantly, what questions they ask of the model and what aspects of water resource systems they may be most concerned about. This in turn can be used to improve future models.

Hopefully this will become more than just a dream.

Biography: Professor Daniel P. Loucks serves on the faculties of the School of Civil and Environmental Engineering and the Institute of Public Affairs at Cornell University. His teaching and research interests include the development and application of systems analysis methods integrating economics, ecology, environmental engineering and public policy. He is the principal author of a widely used text in water resources systems engineering. He is a member of the US National Academy of Engineering and a recipient of a Senior U.S. Scientist Research Award from the German Alexander von Humboldt Foundation. He is a member of the Core Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

This blog post kicks off a series resulting from the first meeting in March 2016 of the Core Modelling Pursuit. This pursuit is part of the theme Building Resources for Action-Oriented Team Science through Syntheses of Practices and Theories funded by the National Socio-Environmental Synthesis Center (SESYNC).

8 thoughts on “Moving from models that synthesize to models that innovate”

  1. Hi Peter,

    “But what we modelers haven’t done yet is to figure out how to make our models suggest planning and management options that we haven’t thought of before.”

    Until, as you suggest, you integrate humans in your models. This is precisely what we do with Companion Modelling. We turn our models into games and invite players to solve the puzzles and challenge the rules, propose alternative solutions, explore new behaviours.

    Through this simple hack, with “real brains inside”, our models “achieve innovation and the exploration of possible futures beyond what we think may happen”.

    To allow for emergence and surprises, however, you need to empower the participants. They have to be given the freedom to explore the full range of possible outcomes, and they must be able to reframe the problem and the game and to create new options not initially contemplated by the research team.

    “We model builders and analysts can then observe what decisions they make before and during simulations of these systems, and more importantly we can observe what questions they ask of the model, and what aspects of water resource systems they may be most concerned about. This in turn can be used to improve future models.”

    If you follow the path you outline, the line between modelers and participants becomes blurred – and this is a good thing. I argue that the models that need improvement are the ones used in decision making. And until the moment we outsource responsibility to algorithms, this remains the mental models of the decision-makers themselves.

    • Thanks for your comment Claude. I fully agree with you and compliment you for your innovative approach to what has been called shared vision or collaborative modeling by others. I am convinced we are never going to be able to model human behavior successfully, even though some very respected modelers are trying. Thus, in my opinion, the alternative is to, as you implied, put the human brain(s) of stakeholders into our model development and implementation processes. From their reactions, questions, and suggestions we might get a better idea of just what their objectives and concerns are, and even what they consider an optimal mix of multiple objectives, provided they can reach a consensus. I think your comment says the same thing. But what I was dreaming of is having models that can suggest options that the model developers and users haven’t thought of before—in other words, invent new ideas and then, of course, predict the impacts if such ideas were implemented. I’m not asking such models to tell us what we should do, but rather what we could consider doing. Then we could decide whether or not any of the ideas are any good. I wish I knew how to construct models that could do this. I can imagine someday someone in the artificial intelligence field will show us how.

      • I agree, and I seek the same outcome: solutions nobody had thought of before. In the approach we use, I have the hunch that what is required is a wildcard in the set of players—a “joker” whose role is to disrupt expectations and explore crazy strategies. I have no proof yet, but I am currently testing this.

        The other alternative I see, from artificial intelligence, seems to be brute force: genetic algorithms that generate thousands of choices, plus a selection process that lets us filter, as you say, what we could consider doing.

        For funkier outcomes, we could always try giving human players LSD before a session…

  2. Well argued! It fits our mission, though we are not close to J.A.R.V.I.S. yet. There are some first collaborative models where stakeholders and experts can work directly on the same models, using natural language to connect arguments in explorative, qualitative cause-and-effect models. The analysis of insight matrices then shows where we have to come up with ideas, whether to foster a measure or to address an obstacle. When defining a new influence, the iMODELER offers a magic button that intelligently searches for ideas from other models. That is very useful, though this platform has just begun to learn. Finally, the four so-called know-why questions (also implemented in the iMODELER) should help to develop effective ideas.

