By Randall J. Hunt
How can we give others our hard-won insights so that their work can be more informed, efficient, and effective? As I’ve gotten older, it is something I think about more.
It is widely recognized that the environment is an integrated but also “open” system. As a result, when working on environmental issues we are faced with the unsatisfying fact that we will never know “truth”. We develop an understanding that is consistent with what we currently know and with what we consider state-of-the-practice methods. But we can never be sure that more observations, or different methods, would not lead to different insights.
Even deciding on the true uncertainty to report for an environmental model is uncertain! This lack of affirmation ripples into the creation of guidelines, and into the confidence one can project when transferring insight beyond the areas and times where it was won. Do we really know enough to rule out future competing ideas? Are we sure that actions that work now won’t frivolously spend precious resources and time in future work?
Environmental modeling is a case in point. As Mary Anderson, Bill Woessner, and I discuss in our recent textbook on applied groundwater modeling, the model purpose has to be the ultimate driver for deciding which simplifications of reality are appropriate. One cannot decide on appropriate simplification of an unknowably complex world without knowing why the model is being built in the first place. Consequently, the processes, parameterization and concepts included in a model will need to change if its purpose changes.
Likewise, different environmental settings can require inherently different levels of complexity for appropriate representation. A parameterization that is saliently simple for a homogeneous sandbox may be grossly oversimplified when applied to glacial braided stream sediments.
Such fundamental concerns make one wonder if there are true widely applicable guidelines, protocols, practices, and/or recipes for simulating the environment. Not a happy thought for authors writing a textbook on how to apply modeling approaches to the environment!
During the writing process, it occurred to me that those strenuously objecting to the very idea of general guidelines in a world dominated by site-specific factors were not going away, and that they had a legitimate concern. Yet this issue is no different from that of knowledge itself. A successful experiment does not confer knowledge; there may be many alternatives that give the same (or better) result.
When an experiment fails, however, something is truly learned. Moreover, a hypothesis that does not work at one location is less suited for transferring to other places and times. With this line of thinking, we decided to include a section at the end of each textbook chapter that does not provide guidelines for applied modeling, but rather reports common modeling errors we had seen (and often done ourselves).
This became an immensely satisfying way to pass on knowledge because it is intrinsically a powerful shorthand, a checklist, that concisely conveys the hard-won insight of the authors.
For example, I may not know the correct amount of time to devote to each phase of an environmental modeling project in every case. But I do know that a modeler often gets lost in the world of model construction and testing, leaving insufficient resources for the all-important subsequent activities such as uncertainty analysis, model reporting and documentation, and constructing a coherent and defensible model archive.
Similarly, I may not know the best method to calculate uncertainty for every case that may occur, but I’ve seen cases where not including any discussion of uncertainty has harmed a model’s acceptance and its utility.
What I suggest is akin to the call for scientists to publish more of their “ugly babies” (Shapiro, 2007). Often, just as parents like to show only the attractive pictures of their children, science rewards publications and presentations that show flattering pictures of the underlying research.
But what might be learned if other scientists could see the ugly babies? Those data that did not support the favored hypothesis or dominant paradigm? Likewise, rather than reporting only environmental modeling activities that worked, what if the modelers also included activities or models that did not?
With authors typically having an option to associate online supplemental material with a publication, the ugly baby could have a seat at the table (even if all eyes are primarily on the good-looking sibling). Of course, today’s readers have limited time, so what is related should focus on areas thought to have high transferability and general application. But if done well, this additional hard-won insight can be passed to others, and in turn, help them avoid similar pitfalls.
I believe this approach has untapped potential. Relating common errors was deemed sufficiently important for facilitating good applied modeling that Mary, Bill and I argued successfully that they be distributed without requiring purchase of the book; the supplementary material in our applied groundwater modeling textbook is freely accessible online.
What do you think? Can you think of valuable lessons you have learned when a modeling effort did not turn out as intended? Would it be worth relating those examples? Perhaps, at the end of the day, we can collectively be more successful when given the chance to learn from our individual failures.
Anderson, M. P., Woessner, W. W. and Hunt, R. J. (2015). Applied Groundwater Modeling: Simulation of Flow and Advective Transport (2nd Edition). Academic Press: San Diego, California, United States of America.
Shapiro, A. M. (2007). Publishing our “Ugly Babies”. Groundwater, 45, 6: 655.
Biography: Randall Hunt PhD is a Research Hydrologist and Associate Director of Science at the U.S. Geological Survey Wisconsin Water Science Center. Before coming to the Survey he worked in the private sector as a consultant. His research uses a variety of approaches such as numerical modeling, ion and isotope chemistry, parameter estimation, and stochastic methods. It has emphasized a range of groundwater–surface water settings including wetland, stream, and lake interactions. His work also includes investigation of ecohydrology, spanning areas from the effects of water on aquatic biotic/ecologic communities to how pathogens affect drinking water supplies. He is a member of the Core Modelling Practices pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
This blog post is one of a series resulting from the second meeting in October 2016 of the Core Modelling Practices pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the National Socio-Environmental Synthesis Center (SESYNC).