Integration and Implementation Insights

Five lessons to improve how models serve society

By Andrea Saltelli

Models are mathematical constructs better understood by their developers than by users. So should the public trust models? What insights can help society demand the quality it needs from modeling?

Mathematical modeling is a multiverse, where each scientific discipline adopts its own styles of modeling and quality control. Very little in the way of ‘user instructions’ is available to those affected by modeling practices.

This blog post presents five lessons to improve modeling that were developed as a manifesto by a cross-disciplinary group of natural and social scientists (Saltelli et al., 2020).

Lesson 1: Mind the assumptions

Uncertainty quantification and sensitivity analysis are complementary approaches to assessing the robustness of model predictions. The usefulness of a model depends largely on the accuracy and credibility of its outputs. Yet, because model inputs are rarely known precisely, output values are always subject to some imprecision and uncertainty. Uncertainty analysis is the process of determining the uncertainty in the model output that arises from uncertainty in the input parameters.

An essential complement to uncertainty quantification is sensitivity analysis, which assesses how variations in model outputs can be apportioned to different input sources. Performing global uncertainty and sensitivity analyses is critical to model quality. Conveying the uncertainty associated with model predictions can be as important to decision-making and policy development as the predictions themselves.
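As a minimal sketch of these two steps, consider a toy linear model in Python. The model, input distributions, and coefficients below are illustrative assumptions, not examples from the manifesto; Monte Carlo sampling propagates the input uncertainty, and for a linear model each input's share of the output variance can be computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: y = 2*x1 + x2, with uncertain (normally distributed) inputs.
x1 = rng.normal(0.0, 1.0, n)   # uncertain input 1
x2 = rng.normal(0.0, 2.0, n)   # uncertain input 2
y = 2.0 * x1 + x2

# Uncertainty analysis: the spread in y induced by uncertainty in the inputs.
print(f"output mean {y.mean():.2f}, std {y.std():.2f}")  # std ≈ sqrt(8) ≈ 2.83

# Sensitivity analysis (exact for a linear model): the fraction of output
# variance attributable to each input.
s1 = (2.0 ** 2 * x1.var()) / y.var()
s2 = (1.0 ** 2 * x2.var()) / y.var()
print(f"share of output variance: x1 ≈ {s1:.2f}, x2 ≈ {s2:.2f}")  # each ≈ 0.50
```

For nonlinear models these variance shares no longer decompose so simply; global, variance-based methods such as Sobol' indices generalize the same idea.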

Lesson 2: Mind the hubris

At their core, models are simplified representations of real systems or processes. Simpler models are often preferable to complex ones: they are easier to understand and validate, and their predictions are typically more accurate. Increasing complexity comes at the cost of adding parameters, whose uncertainty propagates to the model outputs.

Yet this is at odds with the current trend toward ever larger and more complex models. The attraction to complexity may reflect the justified ambition of modelers to represent the study system more accurately. But no matter how big or complex a model is, it cannot reflect all of reality.

If models are to fulfill their objectives, modelers must resist treating complexity as a goal in itself and instead build models with an optimal trade-off between complexity and error.
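This trade-off can be seen in a small numerical experiment. The quadratic data-generating process, noise level, and polynomial degrees below are illustrative assumptions: fitting models of growing complexity to the same noisy data, out-of-sample error first falls and then rises again as extra parameters let noise propagate into the predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data-generating process: a quadratic plus observation noise.
def truth(x):
    return 1.0 + 0.5 * x - 0.3 * x ** 2

x_train = rng.uniform(-2, 2, 30)
y_train = truth(x_train) + rng.normal(0.0, 0.3, 30)
x_test = rng.uniform(-2, 2, 200)
y_test = truth(x_test) + rng.normal(0.0, 0.3, 200)

# Fit polynomial models of increasing complexity and measure out-of-sample error.
errs = {}
for degree in (1, 2, 8, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    errs[degree] = float(np.sqrt(np.mean((pred - y_test) ** 2)))
    print(f"degree {degree:2d}: test RMSE {errs[degree]:.3f}")
```

The degree that minimizes test error, here the one matching the underlying process, marks the optimal trade-off: the too-simple model is biased, while the over-parameterized ones chase noise.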

Lesson 3: Mind the framing

Framing refers to the different lenses, worldviews, or underlying assumptions that guide how individuals, groups, and societies perceive a particular issue. Model results will at least partly reflect their creators’ disciplinary orientations, interests, and biases. Critics of model predictions or policy implications will point to these biases to sow public distrust.

How these results are framed and communicated can influence public opinion and steer one policy outcome over another. Modeling practitioners must develop models that are transparent and help model users understand their inner workings and outputs. Successful and transparent framing can support effective results communication and enhance trust with stakeholders.

Lesson 4: Mind the consequences

When appropriately executed, mathematical modeling helps society make smarter decisions. But when not done well, models can lead to wrong or simply unjustified choices. Quantification can backfire. By helping to make complex financial products seem safe but failing to highlight the underlying assumptions clearly, models contributed to the breakdown of global financial markets in 2008.

Society must collectively establish new social norms and ethics of quantification to ensure model predictions contribute to effective decision-making. Modelers must refrain from projecting a false sense of certainty, and decision-makers cannot offload accountability to models just because they fit a pre-established agenda.

Lesson 5: Mind the unknowns

Failure to acknowledge and communicate uncertainties can artificially limit policy options and open the door to unintended consequences. Philosophers have long reflected on the virtue of knowing what is not known. In the 1400s, the German philosopher and mathematician Nicholas of Cusa described this virtue in De Docta Ignorantia, or learned ignorance.

Mathematical modeling often commits the sin of excess precision. Too often, modelers are reluctant to acknowledge uncertainties, fearing that candor will undermine their credibility. In presenting their results, modelers must communicate how uncertainties in the predictions might change the conclusions. Being transparent about uncertainties strengthens public trust in both the models and their sources.

Conclusion

Statistician George E. P. Box famously said, “Essentially, all models are wrong, but some are useful.” Useful models foster understanding. When used appropriately, they make life better and safer in myriad ways. The five lessons above can help ensure mathematical models are responsibly produced and ultimately useful. Each of these lessons highlights the strengths and limits of model outputs; together they will help preserve mathematical modeling as a valuable tool.

Do you have additional lessons to share? Do you have examples to illustrate the lessons above? What are other ways that we can improve both modeling and the ability of decision makers and the public to understand models?

This blog post is adapted from Saltelli A. (2022) Reckoning with uncertainty, social sciences lessons for mathematical modelling. In Dhersin, J-S., Kaper, H., Ndifon, W., Roberts, F., Rousseau, C. and Ziegler, G. M. (Editors), Mathematics for action: supporting science-based decision-making, UNESCO [62236]. (Online):
https://unesdoc.unesco.org/ark:/48223/pf0000380883.locale=en

Reference:
Saltelli, A., Bammer, G., Bruno, I., Charters, E., Di Fiore, M., Didier, E., Espeland, W. N., Kay, J., Lo Piano, S., Mayo, D., Pielke Jr, R., Portaluri, T., Porter, T. M., Puy, A., Rafols, I., Ravetz, J. R., Reinert, E., Sarewitz, D., Stark, P. B., Stirling, A., van der Sluijs, J. and Vineis, P. (2020). Five ways to ensure that models serve society: A manifesto. Nature, 582(7813), 482-484. (Online) (DOI): https://doi.org/10.1038/d41586-020-01812-9

Biography: Andrea Saltelli PhD is a guest researcher at the Centre for the Study of the Sciences and the Humanities at the University of Bergen in Norway. He is mainly focused on sensitivity analysis of model outputs, a discipline where statistical tools are used to interpret the output from mathematical or computational models, and on sensitivity auditing, an extension of sensitivity analysis to the entire evidence-generating process in a policy context.
