Five lessons to improve how models serve society

By Andrea Saltelli


Models are mathematical constructs better understood by their developers than by users. So should the public trust models? What insights can help society demand the quality it needs from modeling?

Mathematical modelling is a multiverse, where each scientific discipline adopts its own styles of modeling and quality control. Very little in the way of ‘user instructions’ is available to those affected by modeling practices.

This blog post presents five lessons to improve modelling that were developed as a manifesto by a cross-disciplinary group of natural and social scientists (Saltelli et al., 2020).

Lesson 1: Mind the assumptions

Uncertainty quantification and sensitivity analysis are complementary approaches to measuring the robustness of model predictions. The usefulness of a model depends largely on the accuracy and credibility of its outputs. Yet, because model inputs are rarely known precisely, output values are always subject to some imprecision and uncertainty. Uncertainty analysis is the process of determining the uncertainty in the model output that is propagated from uncertainty in the input parameters.

An essential complement to uncertainty quantification is sensitivity analysis, which assesses how variation in model outputs can be apportioned to the different input sources. Performing global uncertainty and sensitivity analyses is critical to model quality. Conveying the uncertainty associated with model predictions can be as important to decision-making and policy development as the predictions themselves.
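A minimal sketch of both ideas, using a hypothetical two-input model (the model, the input distributions, and all numbers are illustrative assumptions, not taken from any real study):

```python
import random
import statistics

# Hypothetical model with two uncertain inputs.
def model(a, b):
    return a ** 2 + b

random.seed(0)

# Uncertainty analysis: propagate assumed input distributions to the output.
samples = [(random.gauss(1.0, 0.1), random.gauss(2.0, 0.5)) for _ in range(10_000)]
outputs = [model(a, b) for a, b in samples]
mean_y = statistics.fmean(outputs)
sd_y = statistics.stdev(outputs)

# One-at-a-time sensitivity analysis: vary one input while holding the
# other at its nominal value, and compare the spread each induces alone.
sd_from_a = statistics.stdev(model(a, 2.0) for (a, _) in samples)
sd_from_b = statistics.stdev(model(1.0, b) for (_, b) in samples)

print(f"output: {mean_y:.2f} +/- {sd_y:.2f}")
print(f"spread from a alone: {sd_from_a:.2f}; from b alone: {sd_from_b:.2f}")
```

Here most of the output uncertainty is apportioned to input b, so reducing uncertainty in b would do the most to tighten the prediction — exactly the kind of conclusion a sensitivity analysis is meant to support. (A global variance-based method would be used in practice; the one-at-a-time version is only the simplest illustration.)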

Lesson 2: Mind the hubris

At their core, models are simplified representations of real systems or processes. Simpler models are often preferable to complex ones: they are easier to understand and validate, and their predictions can be more accurate. Increasing complexity comes at the cost of adding parameters, whose uncertainty propagates to the model outputs.

But this is at odds with current trends that see increasingly complex and larger models. This attraction to complexity may reflect the justified ambition of modelers to achieve a more accurate representation of the study system. But no matter how big or complex the model is, it cannot reflect all of reality.

If models are to fulfill their objectives, modelers must resist treating complexity as a goal in itself and, instead, build models with an optimal trade-off between complexity and error.
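The propagation cost of added parameters can be shown with a toy experiment (the additive model and the per-parameter uncertainty of 0.2 are assumed purely for illustration): as uncertain parameters are added, the spread of the output grows even though the model looks "more complete".

```python
import random
import statistics

random.seed(1)

def output_sd(n_params, n_runs=20_000):
    """Output spread of a toy additive model whose n_params inputs are
    each uncertain (assumed standard deviation 0.2 per parameter)."""
    return statistics.stdev(
        sum(random.gauss(1.0, 0.2) for _ in range(n_params))
        for _ in range(n_runs)
    )

for k in (2, 5, 10):
    print(f"{k} uncertain parameters -> output sd ~ {output_sd(k):.2f}")
```

In this sketch the output uncertainty grows roughly with the square root of the number of parameters; in real models the growth can be faster when parameters interact, which is why extra structural detail is not free.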

Lesson 3: Mind the framing

Framing refers to the different lenses, worldviews, or underlying assumptions that guide how individuals, groups, and societies perceive a particular issue. Model results will at least partly reflect their creators’ disciplinary orientations, interests, and biases. Critics of model predictions or policy implications will point to these biases to sow public distrust.

How these results are framed and communicated can influence public opinion and steer one policy outcome over another. Modeling practitioners must develop models that are transparent and help model users understand their inner workings and outputs. Successful and transparent framing can support effective results communication and enhance trust with stakeholders.

Lesson 4: Mind the consequences

When appropriately executed, mathematical modeling helps society make smarter decisions. But when not done well, models can lead to wrong or simply unjustified choices. Quantification can backfire. By helping to make complex financial products seem safe but failing to highlight the underlying assumptions clearly, models contributed to the breakdown of global financial markets in 2008.

Society must collectively establish new social norms and ethics of quantification to ensure model predictions contribute to effective decision-making. Modelers must refrain from projecting a false sense of certainty, and decision-makers cannot offload accountability to models just because they fit a pre-established agenda.

Lesson 5: Mind the unknowns

Failure to acknowledge and communicate uncertainties can artificially limit policy options and open the door to unintended consequences. Philosophers have long reflected on the virtue of knowing what is not known. In the 1400s German philosopher and mathematician Nicholas of Cusa described this in De Docta Ignorantia — learned ignorance.

Mathematical modeling often commits the sin of excess precision. Too often, modelers are reluctant to acknowledge uncertainties, fearing candor undermines their credibility. In presenting their results, modelers must communicate how prediction uncertainties might change the conclusions. Being transparent about uncertainties strengthens public trust, both in the models and their sources.
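One simple way to practice this candor is to report an interval from an ensemble of model runs rather than a single point value. The sketch below uses an entirely hypothetical ensemble (the distribution and its parameters are assumptions for illustration only):

```python
import random
import statistics

random.seed(2)

# Hypothetical ensemble: many runs of the same model under sampled
# assumptions, instead of one "best" run.
runs = sorted(random.gauss(3.2, 0.8) for _ in range(5_000))

median = statistics.median(runs)
lo = runs[int(0.05 * len(runs))]   # 5th percentile
hi = runs[int(0.95 * len(runs))]   # 95th percentile

# Report an interval, not a single seemingly precise number.
print(f"projection: {median:.1f} (90% of runs between {lo:.1f} and {hi:.1f})")
```

Publishing the interval alongside the central estimate makes clear how much the conclusion depends on the assumptions, rather than projecting excess precision.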


Statistician George E. P. Box famously said, “Essentially, all models are wrong, but some are useful.” Useful models foster understanding. When used appropriately, they make life better and safer in myriad ways. The five lessons above can help ensure mathematical models are responsibly produced and ultimately useful. Each lesson highlights the strengths and limits of model outputs; together they can help preserve mathematical modeling as a valuable tool.

Do you have additional lessons to share? Do you have examples to illustrate the lessons above? What are other ways that we can improve both modeling and the ability of decision makers and the public to understand models?

This blog post is adapted from Saltelli A. (2022) Reckoning with uncertainty, social sciences lessons for mathematical modelling. In Dhersin, J-S., Kaper, H., Ndifon, W., Roberts, F., Rousseau, C. and Ziegler, G. M. (Editors), Mathematics for action: supporting science-based decision-making, UNESCO [62236]. (Online):

Saltelli, A., Bammer, G., Bruno, I., Charters, E., Di Fiore, M., Didier, E., Espeland, W. N., Kay, J., Lo Piano, S., Mayo, D., Pielke Jr, R., Portaluri, T., Porter, T. M., Puy, A., Rafols, I., Ravetz, J. R., Reinert, E., Sarewitz, D., Stark, P. B., Stirling, A., van der Sluijs, J., Vineis, P. (2020). Five ways to ensure that models serve society: A manifesto. Nature, 582, 7813: 482-484. (Online) (DOI):

Biography: Andrea Saltelli PhD is a guest researcher at the Centre for the Study of the Sciences and the Humanities at the University of Bergen in Norway. He is mainly focused on sensitivity analysis of model outputs, a discipline where statistical tools are used to interpret the output from mathematical or computational models, and on sensitivity auditing, an extension of sensitivity analysis to the entire evidence-generating process in a policy context.

17 thoughts on “Five lessons to improve how models serve society”

  1. Perhaps not an appropriate question to ask, being someone who is not at all versed in mathematics or quantification: But do you think these lessons apply to non-mathematical models/frameworks? What are the nuances?

    • Dear Huiyuan
      It is surely appropriate, and there are surely important nuances in different families of quantification where models of some sort are used – from the construction of algorithms to the use of statistical models for various kinds of analyses, from mathematical models proper (though there are many families there as well) to composite indicators. I believe that what makes the rules necessary is not so much the style of quantification (or non-quantification, if the model is e.g. a logic model) but its use. Whenever the model has to be seen by someone different from its developers, then excess complexity, neglect of the consequences, possibly forgotten assumptions or implicit frames, or removal of ignorance are all issues to be considered.
      Of course the nuances will kick in when the rules need to be applied in practice.

  2. Perhaps one reason for the relative obscurity of OR, and the ‘soft systems theory’ from which it is partly descended, is precisely its use of skill and judgement in its operations. It thereby doesn’t fit with the ‘vulgar-Cartesian’ paradigm that now dominates policy-relevant science, where junk-numbers are the hallmark of Truth. I recommend ‘Understanding Human Ecology / A Systems Approach to Sustainability’ by Robert Dyball and Barry Newell, for examples of totally qualitative models, which focus on structure and process and thereby provide enlightenment without delusion ([Moderator update in August 2023 – note this comment relates to the 1st edition, whilst this link now goes to the (2023) 2nd edition]:

    • Thanks for the hint Jerry! For the readers interested in the Cartesian paradigm or dream two great readings below. [1] contains a splendid preface by Jerry Ravetz himself.

      [1] Â. G. Pereira and S. Funtowicz, Science, Philosophy and Sustainability: The End of the Cartesian Dream, 2015.
      [2] P. J. Davis and R. Hersh, Descartes’ Dream: The World According to Mathematics. Dover Science Books.

        • Thanks for asking David.
          The books are really worth reading, but if you ask me for a definition “in pills” I would say that the Cartesian dream is the idea of man as master and possessor of nature, of prediction and control, of Bacon’s wonders of science and of Condorcet’s mathématique sociale [1]. If you like reading about these topics, there is a working paper in open access [2] and an interesting video lecture by Daniel Sarewitz [3]. Then there is Toulmin [4], but that would be another book!

          [1] See slides 15-36 here
          [2] E. S. Reinert, M. Di Fiore, A. Saltelli, and J. R. Ravetz, “Altered States: Cartesian and Ricardian dreams,” UCL Institute for Innovation and Public Purpose, London, IIPP WP 2021/07, 2021,
          [4] S. Toulmin, Cosmopolis: The Hidden Agenda of Modernity. University of Chicago Press, 1992.

          • Looking at your very interesting (and colorful) slides, Andrea, I suspect I half agree with you.

            The use of models and algorithms in policy making is certainly wildly and harmfully overdone these days. I enjoyed that you include the so-called social cost of carbon, as I consider it possibly the worst misuse of models in history. It uses a combination of physical, social and economic models to go out an incredible (literally) 300 years to derive these seemingly precise numbers, which are now written into federal policy. 300 years! That this obvious absurdity is now widespread policy is a measure of just how bad the situation is in general.

            On the other hand I am probably still a Cartesian dreamer. In fact my shortest definition of science is “Science is the mathematical description of nature”. Of course description includes discovering and understanding what is to be described, plus discovering the math needed to describe it. Descartes’ was one of the greatest math discoveries, so he is allowed a bit of hype.

            As for “possessing” nature, I suspect that is not a good translation. Today we talk about management and control, to the extent that is possible. As a civil engineer I worked in the US flood control program. We try to manage the impact of hurricanes.

            More deeply I control a small bit of nature by living in a house. I control the temperature and light, as well as fire. I control precipitation and wind by (hopefully) keeping them out. When I go out I continue to try to control nature by wearing clothing. I have successfully camped out at minus 30F.

  3. Dear Andrea
    I was inspired by your artistic description of mathematical modeling: «Mathematical modelling is a multiverse, where each scientific discipline adopts its own styles of modeling and quality control. Very little in the way of ‘user instructions’ is available to those affected by modeling practices». However, in my life I have often met mathematicians who, to put it mildly, were condescending towards representatives of the humanities and social sciences. Perhaps that is why David Wojick’s arguments are so convincing: «Models are useful in science to explore possibilities, but mere possibilities typically do not support policy decision making. Science may not be to the point where it can usefully assist policy decision making in many cases».

    Recently, in this blog there was a discussion about the responsibility of scientists, more precisely, about whether this responsibility ends after the transfer of research results to the customer. In this context, one can philosophize a little over the terms “responsible creation” and “usefulness” of mathematical models.

    I think that the content of these terms largely determines the level of the scientific worldview from which “responsibility” and “usefulness” are perceived. If we state that science does not have a final idea of an object or phenomenon, then we can talk about “limited” liability and the “obvious” usefulness of mathematical models. Many scientists and politicians are satisfied with such conclusions.

    It is important to note that until recently the general level of the scientific worldview was determined by outstanding scientists. The modern scientific worldview is actively shaped by the structure of disciplinary, interdisciplinary, multidisciplinary and transdisciplinary approaches, disciplinary and systems thinking. The strengthening of interdisciplinary interactions contributed to the unification of the possibilities of systems thinking and transdisciplinarity. Such a combination made it possible to transform “limited” liability into “legitimate” liability. In turn, “obvious” utility was transformed into “inevitable” utility.

    Conclusion of philosophizing: it is impossible to refuse the creation of mathematical models. But one should logically systematize the multiverse of mathematical modeling. I am sure that everything will “fall into place” if we start talking about three levels of scientific and social problems: low-threshold, medium-threshold and high-threshold problems. This will allow appropriate lessons and recommendations for mathematical modeling to be compiled for each problem threshold (five lessons for each threshold). In this case, we will really be able to prove and provide the expected “legitimate” liability and “inevitable” utility of mathematical models. Perhaps it will be possible to prepare and implement joint projects of systems transdisciplinary strengthening of actual mathematical modeling?

    • Dear Vladimir
      “A new ethics of quantification must be nurtured, which takes inspiration from a long tradition of sociology of numbers; Pierre Bourdieu and Theodor Porter come to mind […] the distinction between a positivistic and a relativistic philosophy in model validation needs to be overcome for progress to be achieved.”
      This is the close of a recent article of mine [1], to the effect that the voice of the social sciences and humanities urgently needs to be heard in the modelling community (I should have mentioned Alain Desrosières though, my bad).
      I also agree with you that the terms useful and responsible need to be used with clear attention to power structures: useful to whom and responsible toward what are the key questions. As you note, my worldview determines what I understand by these terms, and no technique is neutral in this respect.

      Thanks for sharing your ideas on how to proceed on this.


      [1] A. Saltelli, “Statistical versus mathematical modelling: a short comment,” Nat. Commun., vol. 10, pp. 1–3, 2019, doi: 10.1038/s41467-019-11865-8.

  4. Operational research (OR) is an extensive but little-known field of research which uses mathematical modelling for problem solving and decision making in areas ranging from business and industry to environment and energy. Somehow OR has remained out of focus even for mathematicians. The issues related to assumptions, complexity, framing and uncertainty described in this manifesto have long been discussed in the OR community. They are well recognized and taught in academic curricula. The manifesto refers to behaviour in modelling. Mathematics was long seen as exact and bias-free, but as soon as people start using it for problem solving, human behaviour becomes relevant. The way we use models matters, as the manifesto notes. Today there is very active research in the area of behavioural OR (BOR), whose focus is precisely the human impact on modelling. People interested in following BOR activities, such as conferences, workshops and summer schools, can join the EURO working group on BOR for free at

    For more see, e.g. ,these publications:
    Hämäläinen, R. P., Luoma, J., & Saarinen, E. (2013). On the importance of behavioral operational research: The case of understanding and communicating about dynamic systems. European Journal of Operational Research, 228(3), 623-634.
    Hämäläinen, R. P., Luoma, J., & Saarinen, E. (2014). Mathematical modeling is more than fitting equations. The American Psychologist, 69(6), 633-634.
    Hämäläinen, R. P. (2015). Behavioural issues in environmental modelling–The missing perspective. Environmental Modelling & Software, 73, 244-253.
    Franco, L. A., Hämäläinen, R. P., Rouwette, E. A., & Leppänen, I. (2021). Taking stock of behavioural OR: A review of behavioural studies with an intervention focus. European Journal of Operational Research, 293(2), 401-418.

    Also, for those interested in a discussion around leadership and modelling, see:
    Leadership in participatory modelling by Raimo Hämäläinen, Iwona Miliszewska and Alexey Voinov

    • OR is certainly useful, as is a lot of modeling in well defined and predictable cases. I use modeling for stability analysis in designing large dams. But policy issues tend to involve a degree of unpredictability that likely makes OR ineffective. Is there research on these limits?

      • Dear David,
        Since OR is about real-life problem solving, the issue of unpredictability is naturally of interest in OR. In the OR area of decision and risk analysis it is the core focus of study. Another area is scenario analysis, which tries to model the consequences of, and support decision making under, different future developments. Interactive policy modelling approaches typically consider unpredictable events too.

  5. In addition to uncertainty in input parameters, there is often the case where the phenomenon in question is simply not well enough understood to model well enough to be useful in policy decision making. Models are useful in science to explore possibilities, but mere possibilities typically do not support policy decision making. Science may not be to the point where it can usefully assist policy decision making in many cases.

    Thus in many policy cases the proper assessment is that useful modeling simply cannot be done, because we do not understand the situation well enough to usefully model it. But I have never heard this from the modeling community. They seem to ignore the fundamental distinction between exploration and forecasting. Of course the policy media and practitioners help them ignore it, often making extremely tentative research results sound like forecasts.

    I call this the fallacy of confusing “might be” with “will be” and I see it everywhere.

    • Dear David
      Thanks for this comment and I agree that there are indeed conspicuous cases where one should refrain from modelling. You say “but I have never heard this from the modelling community”. I do not believe that this community ignores the distinction between exploration and forecasting. This is a community of experts in precisely these matters. Alternative explanations could be:
      – The political economy of mathematical modelling: the known asymmetry between developers and users facilitates the over-interpretation of modelling results
      – Mathematical modelling is not a discipline with recognized leaders who can point the finger at malpractice (as happens, e.g., in statistics)
      – Due to the performative power of numbers, worlds are created by the modelling activity which gain a life of their own, i.e. they become ‘things’ in the parlance of sociologists of quantification. Once such worlds are created, contributing to their maintenance and growth becomes a normal technical activity as the absurdity of the original claim is forgotten.
      … there are surely other explanations, but these are the first that come to my mind.

      • Well said, Andrea. This is a great research topic all on its own. The propagation of confusion.

        One can often see it in the sequence from research article to press release, then press coverage, then political commentary, then political speech and finally on to supposedly “common knowledge”. The modeler’s careful caveats in the article are long gone.

    • In OR the concept of a model is understood in a broad sense. Models do not always need to be numerical but they can also be visual structural descriptions of dependencies and relationships. These are widely used in complex policy problems and the OR community does recognize that numerical modelling is not always possible or even needed. The interested reader can find more info in this review of the area of problem structuring in OR:

