Looking in the right places to identify “unknown unknowns” in projects

By Tyson R. Browning

Unknown unknowns pose a tremendous challenge: they are essentially to blame for many of the unwelcome surprises that pop up to derail projects. However, many, perhaps even most, of these so-called unknown unknowns were actually knowable in advance, had project managers only looked in the right places.

For example, investigations following major catastrophes (such as space shuttle disasters, train derailments, and terrorist attacks) and project cost and schedule overruns commonly identify instances where a key bit of knowledge was in fact known by someone working on the project but was never communicated to its top decision makers. In other cases, unknown unknowns emerge from unforeseen interactions among known elements of complex systems, such as product components, process activities, or software systems.

With the right mindset and toolset, we can shine a light into the right holes to uncover the uncertainties that could affect a project’s success. Various tools for directed recognition can help us convert unknown unknowns (unk unks) to known unknowns, as depicted below (figure adapted from Browning and Ramasesh 2015).

[Figure: Directed recognition for converting unknown unknowns (unk unks) to known unknowns, adapted from Browning and Ramasesh 2015]

To be clear, I’m referring to a particular context, a project, which is a “temporary endeavor undertaken to create a unique product, service, or result” (Project Management Institute 2017: 4), although I would expect many of the concepts discussed in this post to apply more generally. In projects, a subset of uncertainties has the potential to affect the project’s success positively or negatively. Positive uncertainties present opportunities, while negative ones (threats) present risks. Many formal techniques exist for managing project risks, but all of them begin by identifying the risks in the first place, i.e., by rendering them as known unknowns.
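Since every formal technique starts from that identification step, it can help to picture what the output of identification looks like. Here is a minimal sketch of a risk register in Python; the fields and example entries are my own illustration, not a prescription from any standard:

```python
from dataclasses import dataclass

@dataclass
class KnownUnknown:
    """One identified uncertainty: a threat (risk) or an opportunity."""
    description: str
    effect: str          # "threat" or "opportunity"
    probability: float   # rough subjective likelihood, 0 to 1
    impact: float        # rough effect on project success if it occurs (e.g., cost in $k)

    def exposure(self) -> float:
        """Conventional screening score: probability times impact."""
        return self.probability * self.impact

# A tiny register; identification is what makes these "known unknowns" at all.
register = [
    KnownUnknown("Key supplier delivers the prototype late", "threat", 0.3, 250.0),
    KnownUnknown("New tooling reduces rework on later builds", "opportunity", 0.2, 120.0),
]

for item in sorted(register, key=lambda r: r.exposure(), reverse=True):
    print(f"{item.effect:<12} exposure={item.exposure():6.1f}  {item.description}")
```

Everything that follows is about getting candidate entries onto such a list in the first place.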

Where to look?

In projects, unknown unknowns can emerge from at least six complex systems: the project’s desired result, the work done to get it (process), the people and teams doing the work (organization), the resources and tools they’re using (tools), the project’s requirements and objectives (goals), and the project’s environment (context). Each of these systems, the first five of which we could also call project subsystems, involves a complex network of related elements. These systems are also related to one another. Many of these systems have been studied in isolation; far less often have they been studied in tandem. These six systems provide a minimal starting point for searching for project risks and opportunities.

What makes unknown unknowns more likely?

Six factors increase the likelihood of unknown unknowns in projects:

  1. Complexity stems from the constituent elements of a system and their interactions. It increases with the number, variety, internal complexity, and lack of robustness of a system’s elements, and with the number, variety, criticality, and internal complexity of the relationships among those elements.
  2. Complicatedness is observer-dependent. It depends on project participants’ abilities to understand and anticipate the project, which in turn depend on the intuitiveness of the project’s structure, organization, and behavior; its newness or novelty; how easy it is to find necessary elements and identify cause-and-effect relationships; and the participants’ aptitudes and experiences.
  3. Dynamism is a system’s propensity to change. Unknown unknowns are more likely to emerge from fast-changing systems.
  4. Equivocality refers to imprecise information. In projects, this may manifest as an aspect of poor communication. It clouds judgment and inhibits decision making.
  5. Mindlessness refers to perceptive barriers that interfere with the recognition of unknown unknowns, such as an overreliance on past experiences and traditions, the inability to detect weak signals, and ignoring input that is inconvenient or unappealing. It includes individual biases and inappropriate filters such as denial and dismissal.
  6. Project pathologies represent structural or behavioral conditions that allow unknown unknowns to remain hidden, including unclear expectations among stakeholders and dysfunctional cultures, such as shooting messengers, covering up failures, discouraging new ideas, and making some topics taboo for discussion.

Knowing places and causes is a good start

When we consider how each of these six factors can affect each of the six project systems, we get a 6×6 grid of places to start looking for lurking unknown unknowns. If we save records from past projects, we do not have to start with a blank sheet, but that history is only a better starting point, not a substitute for fresh scrutiny, because it is dangerous to rely completely on historical data. But how do we plumb these 36 places?
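Before turning to the tools, the grid itself can be laid out as a simple scanning checklist. Here is a minimal sketch in Python; the systems and factors are those named above, while the cell prompts and the idea of storing findings per cell are my own illustration:

```python
import itertools

# The six project systems and six driving factors named in this post.
SYSTEMS = ["result", "process", "organization", "tools", "goals", "context"]
FACTORS = ["complexity", "complicatedness", "dynamism",
           "equivocality", "mindlessness", "pathologies"]

def scan_grid():
    """Enumerate the 36 places to look for lurking unknown unknowns."""
    grid = {}
    for system, factor in itertools.product(SYSTEMS, FACTORS):
        grid[(system, factor)] = {
            "prompt": f"How might {factor} in the project's {system} be hiding uncertainties?",
            "findings": [],  # filled in during reviews, interviews, data mining, etc.
        }
    return grid

grid = scan_grid()
print(len(grid), "cells to review")            # 36
print(grid[("process", "dynamism")]["prompt"])
```

Records from past projects could pre-populate each cell’s findings so that a new scan does not start from a blank sheet, consistent with the caveat above about not relying completely on history.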

Shining the light: tools for directed recognition

Here are eleven types of lights to use for detecting unknown unknowns and converting them into known unknowns:

  1. Decompose the project: Model the project’s subsystems. Understand their structures, how their elements relate to one another, and the sub-factors of complexity (a minimal sketch follows this list).
  2. Analyze scenarios: Construct several different future outlooks and explore their ramifications.
  3. Use checklists: Codify learning from past projects.
  4. Scrutinize plans: Independently review a project’s work plans, schedules, resources, budgets, etc.
  5. Use “long interviews” (Mullins 2007) with project stakeholders, subject matter experts, and other participants to uncover lurking problems and issues. Such interviews probe deep and wide and ask ‘out-of-the-box’ questions to help managers identify latent needs that project stakeholders are unable or unlikely to articulate readily.
  6. Pick up weak signals: Weak signals often come in subtle forms, such as a realization that no one in the organization has a complete understanding of a project, unexplained behaviors, or confusing outcomes. Recognizing and interpreting weak signals requires scanning local and extended networks, mobilizing search parties, testing multiple hypotheses, and probing for further clarity.
  7. Mine data: Electronic data mining can be a particularly powerful tool for extracting implicit, previously unknown, and potentially useful information. By simultaneously reviewing data from multiple projects, data mining could enable project managers to identify the precursors of potential problems.
  8. Communicate frequently and effectively.
  9. Balance local autonomy and central control. Allow bad news to travel ‘up’ in the organization structure. Provide emergency channels through any bureaucracy.
  10. Incentivize discovery. Reward the messenger.
  11. Cultivate an alert culture. Educate about unknown unknowns, where they tend to lurk, and why.
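For item 1, one common way to model a subsystem’s structure is a design structure matrix (DSM): the subsystem’s elements label the rows and columns, and a mark records that one element depends on another. Because many unk unks emerge from interactions among known elements, one-way or poorly understood dependencies are natural first places to probe. The sketch below is a minimal illustration with made-up element names and a crude density measure, not the full modeling approach in the cited papers:

```python
# Minimal design structure matrix (DSM) sketch: rows/columns are elements of one
# project subsystem; a 1 in cell [i][j] means element i depends on element j.
elements = ["sensor", "controller", "actuator", "power supply"]
dsm = [
    # sensor  controller  actuator  power
    [0,       0,          0,        1],   # sensor depends on power
    [1,       0,          1,        1],   # controller depends on sensor, actuator, power
    [0,       1,          0,        1],   # actuator depends on controller, power
    [0,       0,          0,        0],   # power supply depends on nothing modeled here
]

n = len(elements)
density = sum(map(sum, dsm)) / (n * (n - 1))  # crude indicator of interaction density
print(f"{n} elements, interaction density {density:.2f}")

# One-way couplings are easy to leave unexamined: i depends on j, but not vice versa.
for i in range(n):
    for j in range(n):
        if dsm[i][j] and not dsm[j][i]:
            print(f"Check the one-way dependency: {elements[i]} -> {elements[j]}")
```

The same matrix view extends across subsystems, which is the multidomain matrix (MDM) modeling mentioned in the replies below, and the counts of elements and relationships feed directly into the complexity sub-factors listed earlier.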

I always ask my project and risk management students a question. Given two very similar projects, one with a list of 100 identified risks and the other with no risks identified, which project is riskier? The answer is the latter, because its risks remain unknown unknowns. Unfortunately, many executives and managers seem to get this backwards in practice. Individuals and cultures that prefer (or are incentivized) to ignore uncertainties and risks fuel the delusion and deception problems (e.g., Flyvbjerg et al. 2009) that plague many projects and other endeavors. Many unknown unknowns remain so because of our own lack of will to find and face them. The approaches outlined above should be part of the due diligence of any complex project’s planning and execution.

To find out more:
Browning, T. R. and Ramasesh, R. V. (2015). Reducing Unwelcome Surprises in Project Management. MIT Sloan Management Review, 56, 3: 53-62. (Online): http://sloanreview.mit.edu/x/56319 (The ideas and some of the wording in this blog post come from this article.)

Ramasesh, R. V. and Browning, T. R. (2014). A Conceptual Framework for Tackling Knowable Unknown Unknowns in Project Management. Journal of Operations Management, 32, 4: 190-204. (Online) (DOI): http://dx.doi.org/10.1016/j.jom.2014.03.003

References:
Flyvbjerg, B., Garbuio, M. and Lovallo, D. (2009). Delusion and Deception in Large Infrastructure Projects: Two Models for Explaining and Preventing Executive Disaster. California Management Review, 51, 2: 170-193

Mullins, J. W. (2007). Discovering ‘Unk-Unks’: How Innovators Identify the Critical Things They Don’t Even Know that They Don’t Know. MIT Sloan Management Review, 48, 4: 17-21

Project Management Institute. (2017). A Guide to the Project Management Body of Knowledge, 6th ed. Newtown Square, PA: Project Management Institute

Biography: Tyson R. Browning PhD is Professor of Operations Management in the Neeley School of Business at Texas Christian University in Fort Worth, Texas, USA. His primary research is on managing complex projects. Previously, he worked for Lockheed Martin, Honeywell Space Systems, and Los Alamos National Laboratory. He is currently co-Editor-in-Chief of the Journal of Operations Management.

This blog post is part of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

For the five other blog posts already published in this series, see: https://i2insights.org/tag/partner-defence-science-and-technology/

Scheduled blog posts in this series:
November 19: Blackboxing unknown unknowns through vulnerability analysis by Joseph Guillaume
December 3: Yin-yang thinking – A solution to dealing with unknown unknowns? by Christiane Prange and Alicia Hennig
TBA: Detecting non-linear change ‘inside-the-system’ and ‘out-of-the-blue’ by Susan van ‘t Klooster and Marjolijn Haasnoot
January 28, 2020: How can resilience benefit from planning? by Pedro Ferreira
February 11, 2020: Why do we protect ourselves from unknown unknowns? by Bem Le Hunte

25 thoughts on “Looking in the right places to identify “unknown unknowns” in projects”

  1. Thanks for a rich article and to those who offered thoughtful observations. To your point that often there is someone who does know the missing piece that becomes so obvious after the fact, there is the matter of known knowns that are interpreted in different ways. I would suggest a circle for known knowns belongs in the graphic. That could serve to address something that may be embedded in “mindlessness,” that of assumptions. In working with groups of senior leaders, it is frequently surprising how different their assumptions are, how those assumptions affect the project, and how infrequently assumptions are made explicit. Thanks for the work. This is a great reference and guide.

    • Thank you, Jim. I completely agree that differing, implicit assumptions are often a source of unwelcome surprises. Making this more explicit somehow in the framework would be a nice extension.

  2. Thanks for an interesting post. I had a couple of observations. Firstly, I think there might be an insufficiently qualified antecedent to the argument to the effect that unknowns are oftentimes knowable in advance: I appreciate what you are saying about why this is the case but without qualifying just when this does hold before the fact, it becomes easy to come dangerously close to predicting the past when examining past projects and their outcomes, and therefore obtaining a skewed picture of what is possible for projects of the future. As people, we generally appear to have a strong tendency to predict the past: without carefully distinguishing between what was genuinely able to be anticipated in advance and merely assuming that events must have been able to be anticipated simply because they are what happened, we end up denying the very real uncertainty of human endeavours rather than really setting ourselves up better to deal with it. I was thinking about how to tackle this: I would propose to begin by distinguishing between knowing future outcomes and anticipating exposures to types of future outcomes characterised by consequences or properties that matter, with respect to a particular purpose we have in mind. That is, I suggest there is central importance here to separating unknowing into different orders, both opportunity and threat, so that while knowing a future outcome might not be possible, we can nonetheless manage the uncertainty using a recognition that future outcomes having certain effects are possible. I was left thinking that this might help by bringing a mechanism for structuring effective strategies around resource allocations; hence this would uncover the qualifying conditions on inferences I think the position needs.

    The second thought I had is that the construction of the idea makes the tacit assumption that the environment of the project (or, perhaps more accurately, the problem it is thereby trying to address) is fixed in character over time, unchanging, stationary. This comes through noticeably to me in the list of six sources of unknowns, which doesn’t directly include organisational or contextual change or, to make it more general and grander, the evolution of human history. For contexts including Defence, the large resources and long time-frames involved in effecting major capability changes do indeed mean that substantial projects play out against exogenous uncertainties of human historical progression. You could argue that the influence of cycles in the logical structure of nontrivial systems is indicated under the complexity factor, and therefore historical development and contextual change is entailed therein, but I still think this downplays the significance of what I would regard as a primary concern. I tend to think the list of factors does imply there are unknowns generated from within the project itself (i.e. endogenously), but again I tend to think that these endogenous uncertainties are well worth thinking about as primary objects for effective strategy development.

    • Thank you for your thoughts on this. I recommend that you look at the two references, which go into further details that should clarify. In brief, re: your first thought: Yes, it is dangerous to be what in the US we call a “Monday morning quarterback”–someone who says what the (American) football player should have done in yesterday’s game. However, it is often the case that important facts were not actually unknowable, because some people actually did know them at the time–rather they were just not communicated to the decision maker because of the barriers mentioned in the framework (e.g., failing to pay attention to appropriate, weak signals). I also agree that it is important to separate uncertainty into opportunity and threat (i.e., positive or negative effects on project outcomes). The dividing line between the two often depends on a project’s chosen goals (see Browning 2019, Project Management Journal). The connection to resource allocation decisions is spot on.

      Re: your second thought, one of the six driving factors is Dynamism, which is change and can act on the other factors (e.g., Complexity and Complicatedness) and each of the six project systems–including, especially re: your comment, the project’s Context or environment, which is surely changing, as well as the project’s Organization, etc. I do think that cycles in all six of the systems have important implications. Some of these (e.g., product and process) have already been studied (I can provide references upon request); others should be. And cycles that cut across these systems would be especially interesting to study. Some efforts in this direction have begun through the use of multidomain matrix (MDM) modeling and analysis.

      • Thanks for the discussion: I will have a closer look. I started reading some of the references you provided but haven’t returned to this for a couple of weeks with insane rush leading into the Christmas break. A major point for me leading from your article in terms of actionable consequences is what both that which is anticipatable and that which is not means for effective resource allocation. My personal view on dealing with uncertainty in practical terms (though I fully admit here that my personal driving interest here is deeply theoretical) is that while it is much more common to frame matters first in terms of unknowns, and usually implicitly then working towards what it means for decision-making, I’ve long convinced myself that there is much to working essentially in the opposite direction. Of course, this may be debatable, yet I’m confident it is at least worth considering as a methodological construct. I’ve tended to start with decision-making in terms of resource allocations and working out the consequences of any proposed mechanism in terms of sensitivity to uncertainty (susceptibility to threats and inability to leverage opportunities) it entails. The feedback of detecting unacceptable limits can drive improvements to the resource allocation system; thus mechanisms of decision-making are, in this framing, essentially scientific theories that are subjected to test specifically in terms of something like robustness to unknowns. There’s a bit of a sense of this in your article at times, too, so I have the feeling other people are also reaching towards similar principles.

        • I agree that working backwards, as you suggest, is an excellent approach, among others. Typically I recommend that projects/programs keep two types of resource allocations: one for specific uncertainties, risks, or areas and another as a general buffer. The first is spent on various forms of insurance, options, hedging, etc., whereas the latter is merely kept as a reserve account.

  3. Nancy Leveson, in Engineering a Safer World and the articles she has produced, has shown all kinds of real-world scenarios where decomposing misses things. All components can be doing perfectly what they should and failure can still happen. I think her STPA approach could be used here.
    (re-posted from LinkedIn Systems thinking group by Gabriele Bammer)

    • Thank you for highlighting this point about the challenges of decomposition. For this reason, the fuller explanation of the approach emphasizes not only the basic decomposition of each system but also heightened attention to the relationships among the decomposed elements–because it is these relationships (among known elements) from which many unk-unks emerge. The 2014 paper in the Journal of Operations Management (see reference above) discusses the importance of exploring these relationships and their characteristics in order to increase understanding (reduce uncertainty) around their implications.

  4. Decomposing projects as the first step makes the rest of the steps a lot easier. Creating an alert culture can keep the dystopians from concluding that plans were not scrutinized and hence things did not work as intended.
    (re-posted from LinkedIn group International Professors Project by Gabriele Bammer)

  5. I think “data saturation” is reflective of known unknowns. For example, in a study of police brutality, one might ask all the police about it until saturation is reached… without asking any of the victims. Then, expanding, one might ask all the victims until again, it seems that saturation is reached… and so on. On the other hand, I would redefine “theoretical saturation” as something very different – a situation where the theory is measurably completely systemic (as I think I’ve mentioned once or twice) and highly effective in practical application for addressing the problem (as are laws of physics). The theories I’ve analyzed in the social/behavioral sciences where data saturation is claimed tend to have a very low level of systemicity (typically under 20%) and so are not highly useful in practical application for addressing the problem. So, data saturation does not necessarily lead to theoretical saturation (at least in the short run).

  6. Several of the comments mention the challenge of determining how much to invest in uncovering unk unks. How do you know if you’re investing enough? When should you stop? What is too much? These are good questions, and I don’t have a definitive answer. But here are two ideas. First, this is akin to the question, “How much insurance should I buy?” The answer depends on several factors, from the likelihood of problems worth insuring against to the ability of the organization or individual to otherwise absorb any consequences of choosing the wrong amount. Second, we can draw on the idea from research of “theoretical saturation.” This is how those doing qualitative, case study research determine when they’ve studied enough cases to have some reasonable confidence that their findings are reasonably representative and generalizable. If you’ve invested a typical/reasonable amount (due diligence) in the 36 channels described in the post and are not finding anything at all, then it might make sense to stop, whereas if you are uncovering things, you might need to go further. Also, as each of the six driving factors (complexity etc.) increases, the amount of investment in uncovering unk unks should increase, all else being equal. Over time, organizations can collect data and calibrate appropriate amounts of investment for different kinds of projects. When they find a mistake, they can adjust for the future. The point is to pay attention (this does cost something) and learn systematically.

  7. I really dislike the term unknown unknowns as it has become a free pass for some people and organisations who essentially give up when the intellectual effort or social interactions get too tough.

    There is also a philosophical problem with the idea of allocating resources to anything you don’t know about. There could be masses of things we don’t know about, so where do we stop?

    My own thoughts on the subject are set out here. It’s five years old so I would probably expand on it if I rewrote it today but I think the gist of it still stands.

    http://broadleaf.com.au/resource-material/unknown-unknowns/

  8. I fail to see how any of these are “detection” mechanisms. They may serve to heighten awareness and even to demonstrate in hindsight that a given unknown was supposedly “detectable” in the past (if only we knew then what we know now), but they contain no tools for revealing unknowns. All one can do is examine the conditions which allow a given situation to exist and ask what happens if one or more of those conditions change. What is revealed is knowable unknowns. Unknown unknowns cannot be known until change occurs.
    (Reposted from Systems Thinking LinkedIn group by Gabriele Bammer)

    • Thanks for this comment! Some of these tools are indeed ways to “examine the conditions which allow a given situation to exist”–situations of complexity, complicatedness, etc. In retrospect, our research revealed that many unk-unks could have indeed been knowable if people had just examined the right things earlier. The Dynamism factor pertains to change, so more and/or faster change is another indicator that unk-unks are more likely to be lurking.

  9. There are some great suggestions and guidelines for identifying knowable unk unks in this post and the readings it refers to. However, I was hoping there would be some consideration of when it’s actually a good idea to convert unk unks into known unknowns and when it isn’t. The implicit assumption in this post seems to be that this conversion always is an unalloyed good. Writ large, of course, it isn’t, as the following simple thought experiment demonstrates. Imagine suddenly becoming aware of the (practically infinite) variety of things you don’t know, including the (again, practically infinite) variety of your unk unks. The psychological impact of this would be overwhelming and probably mentally paralysing, not least of all because it would include unk unks that we are psychologically unable to face (and therefore are in denial). Therefore, we have a case for placing some limits on this conversion, and guidelines for doing so merit serious thought.

    Also missing is any consideration of whose unk unks are being converted, by whom and for what purposes. Any set of reasonable guides to deciding whether such a conversion is beneficial would need to incorporate these reference-points. In Tracy Kidder’s book, “The Soul of a New Machine” (1981, Little, Brown, & Co.) he describes a manager of an engineering team who says that when he’s given a project with impossible specifications and/or deadlines, he does not take the project to seasoned engineers because they’ll laugh at him and point out that the project is impossible. Instead, he hires “kids” straight out of engineering degrees because the “kids” don’t know that they don’t know what is impossible. And sometimes some of them will achieve the impossible. This manager knows better than to convert the kids’ unk unk into a known unknown. Instead, he exploits it to his and their mutual advantage.

    • Interesting points! There may indeed be some times where it’s better not to know some things. But note that here (using Kidder’s example) the project manager does know the things. He just chooses not to reveal them to his entire team. This is different from him not knowing the things at all. In fact, it’s better that he does know about the situation so he can determine which information is appropriate to reveal to whom.

    • I am interested in the relationship between an “alert culture” and spotting anomalies. Unknown unknowns are simply anomalies, but spotting them is the hard part. One interesting method is Gary Klein’s recognition-primed decision making and his use of cognitive frames in immersive situational awareness and Weickian sensemaking. This is what you could call an “alert culture” doing environmental scanning. It is particularly relevant in High Reliability Organisations and it is a form of Kleinian-Weickian Western Mindfulness, very different to Buddhist or Eastern Mindfulness, because you are trained in how to develop Hyperfocus, Pattern Recognition, Sensemaking, Scanning, Navigation and Wayfinding. John Boyd completed the training and came up with the OODA Loop, Mica Endsley came up with Situational Awareness, and David Snowden came up with the Cynefin Framework.

      • Terrific points. The longer, academic version of this post (the 2014 paper in the Journal of Operations Management) gets more into the points about HROs and alert cultures. I’m also a big fan of the OODA loop (emphasis on speed of its cycles) and situation visibility/awareness.

  10. Thank you for your comment. I fully agree that novelty is important to consider. Actually, it is part of the complicatedness construct, which is the more subjective aspect, because novelty can vary from one person to another. Of course, the more novelty-driven complicatedness in a project’s desired result, process for getting it, organization, resources and tools, goals, and context, the more effort should be invested in directing recognition towards unk-unks.

    • I do not disagree, but the greater the novelty the more likely the unrecognizable unks will occur and be significant. One needs to plan and budget for these. For example, the rapid mobilization of an emergency decision group, where every operating arm is represented by someone who can make major decisions. Budgeting for the unknown is the hardest part.

  11. I suggest a seventh factor that can dramatically increase the likelihood of unk unks. This is novelty. How close, or far, from what has been done before is this project?

    Here is an example, an old study if I remember correctly. In building a chemical plant (or new unit thereof) there is a step called commissioning and startup. This is where everything is in place and the task is to reliably create the desired product. The question was how much of the project budget should be allocated to this task? The study found that it was highly dependent on the novelty of the process. If replicating an existing unit, with modest improvements, then 10%. If scaling up a prototype for the first time, then more like 50%. This is a huge range.

    The point is that one should budget for unk unks, based on the novelty of the project. There are also management steps that can be taken, such as planning for emergencies, which is different from emergency planning.

