Managing innovation dilemmas: Info-gap theory

By Yakov Ben-Haim


To use or not to use a new and promising but unfamiliar and hence uncertain innovation? That is the dilemma facing policy makers, engineers, social planners, entrepreneurs, physicians, parents, teachers, and just about everybody in their daily lives. There are new drugs, new energy sources, new foods, new manufacturing technologies, new toys, new pedagogical methods, new weapon systems, new home appliances and many other discoveries and inventions.

Furthermore, the innovation dilemma occurs even when a new technology is not actually involved. The dilemma arises from new attitudes, like individual responsibility for the global environment, or new social conceptions, like global allegiance and self-identity transcending all nation-states. Even the enthusiastic belief in innovation itself as the source of all that is good and worthy entails a dilemma of innovation.

An innovation’s newness and the uncertainty of its promise for improvement are the source of the dilemma. Tomorrow we will understand the innovation better, its dangers and its benefits, but today we must decide. Without our optimistic belief in the future we would remain forever in the past. But optimism without understanding can be dangerous because the innovation may harbor unanticipated and unpleasant surprises. We need a sensible and responsible method for responding to the endless flow of discovery and invention.

Info-gap theory provides the basis for such a method. Central to the theory is the idea of an information gap: the disparity between what you do know and what you need to know in order to make a responsible decision. Info-gap theory explores the link between the boundlessness of our ignorance and the limitation of our ability to achieve optimal outcomes. Indeed outcome optimization becomes less feasible as uncertainty grows.

The main info-gap approach to managing an innovation dilemma is to make a decision that robustly satisfies critical requirements. The idea is to identify outcomes or consequences that are essential (that must be achieved) and then to choose the alternative that will achieve these outcomes over the widest possible range of potential realizations.

Focusing on reliable achievement of critical goals is different from focusing on achieving the best possible outcome. The considerations differ (and the outcomes may or may not differ). Focusing on optimal outcomes ignores the central importance of reliably achieving critical results in the face of deep uncertainty. In contrast, focusing on robustness against ignorance, while aiming at specific critical goals, enables decision makers to balance the quality of the outcome against the confidence in achieving an acceptable outcome.

Info-gap evaluation of robustness is based on three components: system model, uncertainty model, and performance requirements. The system model may be quantitative or qualitative, and represents the situation of interest, where some parameters or functions are uncertain. The uncertainty model quantifies the uncertainty non-probabilistically, and the horizon of uncertainty is unbounded – there is no known worst case. The performance requirements are those outcomes that must be achieved in order for the decision to be acceptable. The robustness of a proposed decision is the greatest horizon of uncertainty up to which the system model is guaranteed to achieve the specified performance requirements.
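
To make these three components concrete, here is a minimal sketch of a robustness calculation in Python. The scalar system model, the numbers, and the names are illustrative assumptions only, not a specific info-gap application:

```python
# Minimal sketch of an info-gap robustness calculation.
# System model: the outcome of interest is an uncertain quantity u.
# Uncertainty model: at horizon of uncertainty h >= 0, u may deviate from its
#   estimate by up to error_weight * h (h is unbounded: no known worst case).
# Performance requirement: u >= critical.

def robustness(estimate, error_weight, critical):
    """Greatest horizon of uncertainty h at which u >= critical is guaranteed;
    the worst case at horizon h is estimate - error_weight * h."""
    return max((estimate - critical) / error_weight, 0.0)

# An innovation dilemma in miniature: the innovation promises more (estimate 10)
# but is less well understood (error weight 2); the established option promises
# less (estimate 7) but is familiar (error weight 1).
for critical in [2.0, 4.0, 6.0]:
    h_innovation = robustness(10.0, 2.0, critical)
    h_established = robustness(7.0, 1.0, critical)
    print(f"required outcome {critical}: innovation robustness {h_innovation:.1f}, "
          f"established robustness {h_established:.1f}")
```

For modest requirements the established option is more robust; only for ambitious requirements does the innovation dominate. The crossing of the two robustness curves is the formal signature of an innovation dilemma.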

In summary, the info-gap conception of an unknown unknown entails two ideas. First, we don’t know the probabilities of alternative realizations, or at best we know probabilities only very imperfectly. Second, we don’t know how wrong our current understanding is. More precisely, there is no known worst case or, stated pessimistically: if you think things are bad now, they could be much worse. Info-gap theory proposes to manage unknown unknowns by first identifying essential outcomes, and then by prioritizing the decision alternatives in terms of their robustness to uncertainty for achieving those critical outcomes. Rather than aiming at optimal outcomes, when facing deep uncertainty we attempt to achieve necessary outcomes over the widest range of surprise.

Everyone faces deep uncertainty – though we may give it different names: unknown unknowns, info-gaps, severe uncertainty. What deep uncertainties have you faced, and how have you dealt with them? What seems to work, and under what sorts of circumstances?

To find out more:
Ben-Haim, Y. (2018). The Dilemmas of Wonderland: Decisions in the Age of Innovation. Oxford University Press: Oxford, United Kingdom.

See also:
Ben-Haim, Y. (no date) Info-Gap Theory: Decisions Under Severe Uncertainty. Technion Israel Institute of Technology, Haifa, Israel. (Website): https://info-gap.technion.ac.il/ 

Biography: Yakov Ben-Haim PhD initiated and developed info-gap decision theory, a decision-support tool for assessing and selecting policy, strategy, action, or decision in a wide range of disciplines and when facing deep uncertainty. He holds the Yitzhak Moda’i Chair in Technology and Economics at the Technion – Israel Institute of Technology in Haifa, Israel.

This blog post belongs to a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

Published blog posts in the series:

Accountability and adapting to surprises by Patricia Hirl Longstaff
https://i2insights.org/2019/08/27/accountability-and-surprises/

How can we know unknown unknowns by Michael Smithson
https://i2insights.org/2019/09/10/how-can-we-know-unknown-unknowns/

What do you know? And how is it relevant to unknown unknowns? by Mathew Walsh
https://i2insights.org/2019/09/24/knowledge-and-unknown-unknowns/

Scheduled blog posts in the series:

October 22: Creative writing as a journey into the unknown unknown by Lelia Green
November 5: Looking in the right places to identify “unknown unknowns” in projects by Tyson R. Browning
November 19: Blackboxing unknown unknowns through vulnerability analysis by Joseph Guillaume

7 thoughts on “Managing innovation dilemmas: Info-gap theory”

  1. Thanks for an interesting discussion. I would like to offer some potentially useful perspectives by coming at the problem from a couple of alternative directions; it ought not be surprising that the essential problem has popped up all over the place and hence, I would argue, has a widely fragmented relevant literature. I am therefore working on the assumption that bringing perspectives from many disciplinary points of view will help us, while being aware of the possibility of proving myself wrong in the process (it wouldn’t be the first time).

    The first perspective to which I would like to draw attention is economics, though I’ll probably delve into a bit of pure mathematics too since I cannot help myself. Forgive me for giving basic definitions, but it is often helpful given that economics is frequently misunderstood as being about flows of money: economics is about mechanisms for allocating scarce resources, where scarcity means they can potentially be utilized in many different ways, and I specifically intend this allocation as occurring under conditions of uncertainty about the future. That is, I’m adopting Keynes’ basic framing of the problem of investment, which is very similar to how you define the problem here. In short: investors face a fundamental problem of deciding where to invest given that the future is unknown and unknowable. Keynes was interested here in human behaviour, and he noted that humans therefore resort to superstitious measures to psychologically pretend the uncertainty away (there are four points and I can elaborate further, but I think this is enough for now); macro-economic instability is generated endogenously (i.e., from within the system rather than via ‘shocks’ from outside) because of this. In other words, the very uncertainty under which investors operate is generated substantially by the actions of the investors facing it. I like to use this as a simple illustrative case because it shows the logical self-reference that sets up uncertainty in the incompleteness and unsolvability senses: the context contains logical paradoxes, propositions that cannot be resolved as either true or false within that context, and this embodies Keynes’ ontological uncertainty as a fundamental limit to knowing.

    I think you’ve set the problem up on the basis of a distinction between the promise of an innovation along broadly similar lines to Keynes’ investment problem, so economic theory looks very relevant to me. The choice of whether to use the innovation with respect to its unknowns is the same as the investor considering the unknowns of investment options – both are allocations of scarce resources with consequences that matter. Keynes was interested in explaining macroeconomic effects in connection with microeconomic behaviour, so he didn’t really get into how investors might behave better. But others certainly do; for instance, in financial economics, a strategy used for dealing with high-uncertainty investment decision-making is what they sometimes call a ‘barbell’ or ‘dumbbell’ strategy (reflecting its double modes, in contrast to the single mode of expected-returns models). The point is to allocate resources first to hedge against unacceptable failure (for investors this means low-volatility secure assets that don’t pay off much) and then to throw what can be managed into opportunities for disproportionate returns (high-volatility assets with high potential risk but high potential payoff).

    This approach is intrinsically asymmetric (this is related to the asymmetry between short and long positions but details are perhaps for another time). The working hypothesis on the first side of the ‘barbell’ is that intolerable failure is hedged and hence attention is given to checking for refutation of this, while on the second, the working hypothesis is that the bet will fail so attention is devoted to looking for unexpected payoff with respect to further investment decision-making. Of course, I’ve greatly simplified matters here to outline the idea, but to make it more realistic we also have to cast this in a kind of hierarchy of allocation decisions with respect to typically complex investment goals as well as to extend it to a fully dynamic decision process (or control problem more formally) with decision paths over time. It seems to me that there are very similar elements here about the draw of the unknown potential of an innovative approach against the established benefits and drawbacks of using a known approach.
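
    To make the allocation logic concrete, here is a minimal sketch in Python (the survival floor, the returns, and the function name are my own toy assumptions, not a real investment model):

    ```python
    # Barbell allocation sketch: first secure the survival floor, then put
    # the residual into the high-volatility opportunity.

    def barbell_allocation(wealth, floor, safe_return):
        """Stake enough in the safe asset that the floor is met even if the
        risky stake is lost entirely; everything else chases the upside."""
        safe_stake = floor / (1 + safe_return)       # grows to exactly the floor
        risky_stake = max(wealth - safe_stake, 0.0)  # residual goes to the bet
        return safe_stake, risky_stake

    safe, risky = barbell_allocation(wealth=100.0, floor=80.0, safe_return=0.02)
    print(f"safe: {safe:.2f}, risky: {risky:.2f}")   # safe: 78.43, risky: 21.57
    # Worst case: the risky stake returns nothing, yet the floor is still met.
    ```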

    I’ll skip over this lightly: other perspectives I had in mind as I read were epistemological and methodological. In the interests of keeping it short, I’ll just pick up one point: the decision between the innovative approach and an established approach might be understood, and given detailed structure, according to the implications each has for the growth of knowledge and where the decision fits within an overall knowledge development campaign, allowing here too for the fact that in any such campaign there are possibilities for various kinds of failure, including terminal failure and productive failure that leads to new ideas. Perhaps the thing I want to emphasize most, though, is that it might be worth distinguishing between decisions about problem choices, decisions about solution options, and decisions about the evaluation of solution options, all of which play differently because they may involve different sets of underlying paradoxical phenomena. Sometimes the best way to solve a problem is not to select a solution option but to solve the problem of how to change the problem.

    The last part also drew my interest: the questions you pose here are astute. We’ve been gathering various researchers looking into these very questions, albeit primarily from more formal points of view, but the idea here is that the principles are agnostic to disciplinary formulation. The basic idea is that problem circumstances, no matter how much uncertainty they present, can be characterized according to some (typically hidden and abstract) features that remain constant and thereby give bounding conditions to the uncertainty and a basis for reliable decision-making. In effect, an invariant condition defines the limit of effect of the paradoxical phenomena that yield the uncertainty. The hypothesis we have been following is that we can specifically utilize order parameters (i.e., measures that distinguish different regimes of behaviour) in the form of measures of various surprise potentials to construct such characterizations. It isn’t the absolute measure value that matters, nor do these measures reduce everything to a single number. Rather, they provide abstraction mechanisms that describe what is going on in the problem environment at a level of abstraction suited to decision-making, giving multiple handles on the limits on how the potential for surprise can change, with respect specifically to the features that matter for decision outcomes. The ‘features that matter’ part is where the bimodal ‘barbell’ construct comes into play.

    Just a few thoughts as a result of reading your post, so to conclude: thanks very much for taking the time to put together the discussion piece.

  2. I enjoyed this post, Yakov – persuasively written as always!

    There’s a related innovation dilemma that was articulated by David Collingridge in his 1980 book. Its two “horns” are:
    (1) Impacts of a new technology cannot be predicted or understood until the technology is extensively developed and widely used.
    (2) Controlling or changing the technology becomes difficult once it has become extensively developed and widely used.
    Collingridge’s dilemma invokes a temporal trap, by observing that steering a new technology is easier early on, but that also happens to be when we know less.

    A second point that could be worth considering is whether a (dis)utility function scales nonlinearly in the amount of uncertainty. In Steve Lewandowsky et al.’s (2014) paper on uncertainty about global warming, we showed that if the damage function due to warming is convex with respect to warming, then greater uncertainty about the extent of warming implies a more urgent need to abate emissions now. This conclusion directly opposes the widely held intuition that greater uncertainty should be grounds for not acting until we know more.
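
    A toy numeric illustration of the convexity point (my own numbers, not those of the paper): take a convex damage function and compare a narrow and a wide spread of warming outcomes with the same mean.

    ```python
    damage = lambda t: t ** 2      # convex damage in warming t (toy choice)
    narrow = [2.9, 3.0, 3.1]       # low uncertainty about warming, mean 3.0
    wide = [1.0, 3.0, 5.0]         # high uncertainty about warming, mean 3.0
    expected = lambda ts: sum(damage(t) for t in ts) / len(ts)
    print(expected(narrow))        # ~9.0
    print(expected(wide))          # ~11.7: more uncertainty, greater expected damage
    ```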

    References:
    Collingridge, D. (1980) The Social Control of Technology (New York: St. Martin’s Press; London: Pinter).
    Lewandowsky, S., Risbey, J.S., Smithson, M., Newell, B.R., & Hunter, J. (2014) Scientific Uncertainty and Climate Change: Part I. Uncertainty and Unabated Emissions. Climatic Change, 124, 21-37.

    • Hi Mike,

      Thanks for your comment. Your discussion of temporality raises an interesting point. The unidirectionality of time is intimately related to the unidirectionality of learning. While it is true that we can forget (though that takes time too), it always takes time to learn the potentials and pitfalls of innovations. This unidirectionality is the source of innovation dilemmas.

      But now consider the power of prophecy. The prophet says: If you do X, then Y will happen. This induces people to refrain from X, assuming they believe the prophet. Innovation dilemmas can be avoided because the pitfalls of the innovation are foreseen by the prophet.

      This, however, creates a new dilemma. The future now plays out entirely in the present; there is, in a sense, no future because the prophet can foresee the entire line of future events. Time is no longer uni-directional, though it is still one dimensional.

      But if we have no future, if tomorrow is entirely known now, then innovation is no longer possible because innovation is the invention of what is not yet known. But without the possibility of innovation, invention, and discovery, life would be drab and mundane. Humans are what they are because of their sense of time – of the past as well as of the future.

      While prophecy could prevent innovation dilemmas and much pain and loss, it comes with a price because it eliminates innovation altogether.

      Take your choice.

      Yours,

      Yakov

  3. Info-Gap Analysis (IGA) appears to be Decision Analysis without probabilities. But why leave out probabilities if their incorporation improves decision insight and decision quality? There are some very effective methods for acquiring subjective probabilities from subject matter experts (SMEs); in other words, eliciting those probabilities need not be painful for the SME.

    Back to IGA: say a “robust” value is $100, i.e., if $100 is lost in an investment the enterprise goes bankrupt. IGA seems to suggest that all strategies and plans in which $100 may be lost should be eliminated, which seems remarkably risk averse. Introducing probability into the decision calculus would allow the Decision Maker (DM) to make a rational trade based on his/her risk attitude. IGA alone appears to preclude that trade-space from being defined and evaluated. IGA appears to be a perhaps useful but limited method given the availability of tractable DA which incorporates probabilities.

    Reprinted from the LinkedIn group Systems Thinking Network

    • Hi Stephen,

      I fully agree that, if one knows probabilities, then that knowledge should be used. However, there are important situations in which we do not know probabilities. For instance, we do not know future inventions and discoveries (because they have not yet appeared), so we cannot know the related probabilities. We often do not know an adversary’s intentions, or how popular preferences will evolve, or what innovative ideas or processes will emerge. The economist Frank Knight distinguished between risk, for which probabilities are known, and what he called “true uncertainty”, for which probabilities are unknown. The latter category, which has come to be known as Knightian uncertainty, is the realm in which info-gap theory is relevant.

      Regarding risk-aversion in info-gap theory: the decision maker controls the trade-off between how good an outcome is required and how much confidence is required in attaining that outcome. A highly risk-averse DM would aim at a modest outcome in return for high robustness against surprise. A risk-loving DM would accept low robustness against surprise, while aiming at an ambitious outcome.
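
      A toy illustration of this trade-off (the numbers and the scalar model are illustrative only): with an estimated outcome of 10, robustness to surprise falls as the demanded outcome rises.

      ```python
      estimate = 10.0
      for demand in [2.0, 4.0, 6.0, 8.0, 10.0]:
          print(f"demand {demand}: robustness {max(estimate - demand, 0.0)}")
      # A risk-averse DM sits at the modest-demand, high-robustness end of this
      # curve; a risk-loving DM sits at the ambitious, low-robustness end.
      ```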

      Yours,

      Yakov

    • Hi Taylor,

      There are many applications of info-gap theory to dynamical situations – those in which time is a major variable – in diverse fields, including engineering, economics, national security, and more. You will find many citations at info-gap.com.

      Yours,

      Yakov

