By Pete Loucks

How does a modeler know the ‘optimal’ level of complexity needed in a model when those desiring to gain insights from its use aren’t sure what information they will eventually need? In other words, what level of model complexity is needed to do a job when the information needs of that job are uncertain and changing?
Simplification is why we model. We wish to abstract the essence of a system we are studying, and estimate its likely performance, without having to deal with all its detail. We know that our simplified models will be wrong, but we develop them because they can be useful. The simpler, and hence the more understandable, our models are, the more likely they will be useful, and used, ‘as long as they do the job.’
Modelers of real systems addressing real problems have the job of providing the information needed by those making recommendations or decisions. But those individuals often don’t know what they will need or want until they see what our models give them. And models are not their only source of information: they also receive advice from their staffs of planners and lawyers. All this advice and information is worthless, however, unless there is a level of trust between those providing it and those receiving it, and this includes trust in the models used to generate the information.
One way to build that trust, and at the same time help guide modelers in determining just what level of complexity may be most appropriate, is to start simple and add complexity only when it is called for. I illustrate this with examples from two planning projects I’ve been involved in.
Example #1: Regulating flows and water levels in the lower Great Lakes
The issue in this project was how to regulate the flows and water levels in Lake Ontario and the St. Lawrence River so as to keep all of their users happy or, at least, minimally unhappy. This included maintaining the health of wetland ecosystems, which had not been a goal when the existing operating policy was originally established.
Those of us involved in leading this 5-year, 20-million-dollar study felt it was very important to have stakeholder buy-in if any new policy was to be successfully implemented. Stakeholders included:
- shoreline owners concerned about their land being eroded
- the navigation (shipping) industry, which made it clear to us that lowering river levels by even a foot would cost them millions of dollars in reduced cargo-carrying capacity
- hydropower interests desiring high water levels and flows
- recreational boaters who wanted access to their marinas without incurring boat damage
- environmental interests that wanted to protect and restore wetland species and fish
- commercial fishing interests
- water supply utilities that wanted water levels kept above their supply intakes.
We went to great efforts to involve stakeholders in building what we called a ‘shared vision model’: an interactive, graphics-based spreadsheet model that could be modified on the spot to address any stakeholder concern or question and display the results in a number of attractive, and hopefully meaningful, ways. When needed, we added complexity to address evolving issues. Some of the relationships in this spreadsheet model were based on much more detailed models developed to address, for example, shoreline erosion as influenced by water levels, waves caused by shipping or wind, and ice.
Building trust in our analyses, and in the models we were using, was a big effort. To fast-forward: I think over time we did earn that trust, but we never reached a ‘shared vision’ of what to do, because stakeholders were unwilling to make the concessions needed to arrive at that ‘minimally unhappy’ outcome.
Example #2: Hydropower dams in the Mekong River basin
In the Mekong River basin, the development of hydropower dams has become attractive to investors, as well as to potential consumers of electrical energy. Since the end of the wars that plagued the region, that basin, containing one of the most biodiverse rivers in the world, has become the site of about 150 potential reservoirs, some of which are under construction today.
What has maintained the river’s natural biodiversity has been its hydrologic and sediment regimes. Sediment not only transports nutrients that support over 1200 species of fish, but also helps maintain the delta in Vietnam that is the food basket for much of the region. Dams can alter those flow and sediment regimes. Hence the challenge in this project is to identify alternative ways of siting, designing and operating hydropower reservoirs that allow sediment to pass through the dams, which could extend the life of the reservoirs and also provide downstream benefits.
To estimate the effectiveness of alternative sites, designs and operating policies, we developed models. Initially they were built as spreadsheets to aid in technology transfer. Later it became obvious that we needed to address a number of uncertainties, which caused us to increase the model complexity and develop software that replaced the spreadsheets.
More recently, fish and larval passage has become an issue, and addressing it has further increased the model’s complexity. As in the Great Lakes example, the outputs of other models that address some of the issues in more detail, the hydraulics for example, were inputs to the overall planning model designed to communicate information to those making decisions in the region. This simulation model helped answer various what-if questions, identify tradeoffs among various objectives, and, in general, focus the debate among the basin’s ministries on just what decisions to make.
Conclusion
In both cases, model complexity was determined in an adaptive manner. As different concerns were expressed, the model or models were modified to address them, and those modifications increased model complexity. But starting with relatively simple models, and adding complexity only when needed, built trust and promoted understanding.
How do you deal with complexity in building models?
Biography: Professor Daniel P. Loucks serves on the faculties of the School of Civil and Environmental Engineering and the Institute of Public Affairs at Cornell University. His teaching and research interests include the development and application of systems analysis methods integrating economics, ecology, environmental engineering and public policy. He is the principal author of a widely used text in water resources systems engineering. He is a member of the US National Academy of Engineering, and a recipient of a Senior U.S. Scientist Research Award from the German Alexander von Humboldt Foundation. He is a member of the Core Modelling Practices pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
This blog post kicks off a series resulting from the second meeting, in October 2016, of the Core Modelling Practices pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the National Socio-Environmental Synthesis Center (SESYNC).
Oliver Wendell Holmes Jr., Justice of the Supreme Court of the United States:
“The only simplicity for which I would give a straw
is that which is on the other side of the complex
— not that which never has divined it.”
“Holmes-Pollock Letters: The Correspondence of Mr. Justice Holmes and Sir Frederick Pollock, 1874–1932” (2nd ed., 1961), p. 109.
Usually quoted, at least by consultants like me, as:
“I wouldn’t give a fig for the simplicity
on this side of complexity;
I would give my right arm for the simplicity
on the far side of complexity”
and often attributed to Oliver Wendell Holmes, Sr.
Thanks for the nice description of the practice of starting simple and adding complexity if needed. I particularly appreciate that you single out the conditions in which this is a useful practice, namely building trust and helping stakeholders to understand the model.
I think this partially explains why the “start simple” approach is not used in other domains. For example, stakeholders may not be able, or may not want, to understand even the simplest model that can do the job if it is already difficult to understand, or if it represents the system at a level of detail they cannot relate to, e.g. heterogeneity in a groundwater model. In those same cases, trust in a particular parameterised model is also more difficult to acquire, which might explain why more attention is given instead to uncertainty quantification and robustness of results.
It seems this is consistent with the Great Lakes case: the more detailed models were not used directly by stakeholders, but rather through the spreadsheet model. Presumably it would have been difficult to establish trust in the detailed models directly, but building trust in the higher-level spreadsheet model was more manageable?
I’d be interested to hear whether you agree. Have you had other experiences with counter-examples, where starting simple was not the right strategy?
Thanks Joseph: You are making me think! Starting simple and adding complexity as needed is the way I typically work, but indeed the word ‘simple’ is relative. Simple compared to what? At whatever level of complexity I have begun, though, the model or set of models typically gets more complex as trust and confidence build, and/or as the needs of the ‘job’ expand and change. I can’t think of any modeling exercise I’ve been involved with that began with a model more complex than needed, at least in my opinion. Others may have disagreed, of course, but I was not aware of any. I have, however, certainly experienced adding too much complexity, especially when computers were much slower than they are now. In such cases I’ve had to backtrack, because the added complexity made the model too impractical or too clumsy to use.
My observation today is that, since we now have considerable computing power, many of us tend to get lazy and just crank out the results of thousands of simulations, say for some uncertainty analysis, hoping each simulation is ‘right’, without being forced to think about whether a more fundamental or simpler model might produce results just as good in a more transparent way. Complexity doesn’t necessarily add precision or accuracy. It can, but it’s not a given. We also have to be careful to avoid the illusion of precision when it is not there. Just because our models can produce results to the nth decimal place doesn’t mean we should report them that way to a client or in a journal paper. Reading a report stating that the expected cost twenty years from now will be $43,597,206.81 doesn’t give me much confidence in the modeler. I would personally prefer to see a range of the costs likely to occur in about two decades, expressed just in millions of dollars. Modeling teams must be sure to interpret their model outputs and communicate the degree of uncertainty associated with their results, especially if those results pertain to future events that may be impossible to verify.
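To make that last point concrete, here is a minimal sketch in Python of the difference between a falsely precise point estimate and a rounded range; the cost figures are invented purely for illustration:

```python
import statistics

# Hypothetical simulated 20-year costs (in dollars) from an
# uncertainty analysis; these numbers are made up for illustration.
simulated_costs = [38_400_000, 41_250_000, 43_597_206.81, 47_800_000, 52_100_000]

# Falsely precise: a single expected value reported to the cent.
mean_cost = statistics.mean(simulated_costs)
print(f"Misleading: expected cost = ${mean_cost:,.2f}")

# More honest: the plausible range, rounded to whole millions.
low, high = min(simulated_costs), max(simulated_costs)
print(f"Better: likely cost is roughly ${low / 1e6:.0f} to ${high / 1e6:.0f} million")
```

The second statement conveys the same information at the precision the analysis actually supports.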
You mention the situation where even a simple model may be too complex for some stakeholders to understand. Absolutely. Then it seems to me it is even more critical that a sufficient level of trust be developed between the modeling team and those who receive and are to benefit from the model results.
OK, give me a hard time! Thanks again, Joseph. Pete
Thanks, Pete. You add several nice points: backtracking on complexity may be necessary; starting simple is about transparency as well as trust; and trusting the modeller, rather than the model, is sometimes the more appropriate strategy.
If you’ll allow it, I have two more questions for you:
– How do you know if a model is too complex? Is there anything you look for other than being too impractical or clumsy?
– One reason for starting complex (putting everything you know into the model) is that it minimises systematic errors due to omitted processes, and therefore makes it easier to quantify uncertainty. If the aim is to estimate a range of costs, then a complex model may be more appropriate than a simple one. What do you think of this idea?
Hi Pete
Of course I identify completely with your “starting simple” proposition. How could I not, when you write so persuasively and with such experience? And it is judicious in many circumstances, including when stakeholders are not involved and the goal is scientific understanding of a problem with basic uncertainties about process representation, parameter values and data. We argue this in the purely scientific context in papers in Environmetrics in 1984 and Water Resources Research in 1993. But when goals are unclear and stakeholders have different perspectives, uncertainty becomes rife and being unnecessarily complicated puts a useful outcome at risk.
Amen! I completely agree Tony. Pete