Looking in the right places to identify “unknown unknowns” in projects

By Tyson R. Browning

Unknown unknowns pose a tremendous challenge as they are essentially to blame for many of the unwelcome surprises that pop up to derail projects. However, many, perhaps even most, of these so-called unknown unknowns were actually knowable in advance, if project managers had merely looked in the right places.

For example, investigations following major catastrophes (such as space shuttle disasters, train derailments, and terrorist attacks) and major project cost and schedule overruns commonly identify instances where a key piece of knowledge was in fact known by someone working on the project but was never communicated to the project’s top decision makers. In other cases, unknown unknowns emerge from unforeseen interactions among known elements of complex systems, such as product components, process activities, or software systems.

With the right mindset and toolset, we can shine a light into the right places to uncover the uncertainties that could affect a project’s success. Various tools for directed recognition can help us convert unknown unknowns (unk unks) to known unknowns, as depicted below (figure adapted from Browning and Ramasesh 2015).

[Figure: directed recognition for converting unknown unknowns to known unknowns]

To be clear, I’m referring to a particular context, a project, which is a “temporary endeavor undertaken to create a unique product, service, or result” (Project Management Institute 2017: 4), although I would expect many of the concepts discussed in this post to apply more generally. In projects, a subset of uncertainties can positively or negatively affect success. Positive uncertainties present opportunities, while negative ones (threats) present risks. Many formal techniques exist for managing project risks, but all of them begin by identifying the risks in the first place, i.e., by rendering them as known unknowns.

Where to look?

In projects, unknown unknowns can emerge from at least six complex systems: the project’s desired result, the work done to get it (process), the people and teams doing the work (organization), the resources and tools they’re using (tools), the project’s requirements and objectives (goals), and the project’s environment (context). Each of these systems, the first five of which we could also call project subsystems, involves a complex network of related elements. These systems are also related to each other. Many of them have been studied in isolation; they have been studied in tandem far less often. These six systems provide a minimal starting point for searching for project risks and opportunities.

What makes unknown unknowns more likely?

Six factors increase the likelihood of unknown unknowns in projects:

  1. Complexity stems from the constituent elements of a system and their interactions. Complexity increases with the number, variety, internal complexity, and lack of robustness of a system’s elements, and with the number, variety, criticality, and internal complexity of the relationships among those elements.
  2. Complicatedness is observer-dependent. It depends on project participants’ abilities to understand and anticipate the project, which depends on the intuitiveness of the project’s structure, organization, and behavior; its newness or novelty; how easy it is to find necessary elements and identify cause and effect relationships; and the participants’ aptitudes and experiences.
  3. Dynamism is a system’s propensity to change. Unknown unknowns are more likely to emerge from fast-changing systems.
  4. Equivocality refers to imprecise information. In projects, this may manifest as an aspect of poor communication. It clouds judgment and inhibits decision making.
  5. Mindlessness refers to perceptive barriers that interfere with the recognition of unknown unknowns, such as an overreliance on past experiences and traditions, the inability to detect weak signals, and ignoring input that is inconvenient or unappealing. It includes individual biases and inappropriate filters such as denial and dismissal.
  6. Project pathologies represent structural or behavioral conditions that allow unknown unknowns to remain hidden, including unclear expectations among stakeholders and dysfunctional cultures, such as shooting messengers, covering up failures, discouraging new ideas, and making some topics taboo for discussion.

Knowing places and causes is a good start

When we consider how each of these six factors can affect each of the six project subsystems, we get a 6×6 grid of places to start looking for lurking unknown unknowns. If we keep records from past projects, we do not have to start with a blank sheet. Even then, historical data provide only a starting point, because relying on them completely is itself dangerous. But how do we plumb these 36 places?
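As a minimal sketch, the 6×6 grid can be enumerated programmatically, for instance to generate prompts for a risk-review workshop. The factor and subsystem names come from this post; the checklist structure and function names are my own illustrative assumptions, not part of the published framework:

```python
from itertools import product

# Six factors that make unknown unknowns more likely (from the post)
factors = ["complexity", "complicatedness", "dynamism",
           "equivocality", "mindlessness", "project pathologies"]

# Six systems from which unknown unknowns can emerge (from the post)
subsystems = ["desired result", "process", "organization",
              "tools", "goals", "context"]

# Cross the two lists to enumerate all 36 places to look
search_grid = list(product(factors, subsystems))

def review_prompts(grid):
    """Turn each grid cell into a question for a risk-identification workshop."""
    return [f"How might {factor} in the project's {subsystem} "
            f"be hiding risks or opportunities?"
            for factor, subsystem in grid]

prompts = review_prompts(search_grid)
print(len(prompts))  # 36 cells to examine
```

The point of the sketch is simply that the grid is a cross-product: every factor is checked against every subsystem, so no combination is silently skipped.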

Shining the light: tools for directed recognition

Here are eleven kinds of “lights” to use for detecting unknown unknowns and converting them into known unknowns:

  1. Decompose the project: Model the project’s subsystems. Understand their structures, how their elements relate to one another, and the sub-factors of complexity.
  2. Analyze scenarios: Construct several different future outlooks and explore their ramifications.
  3. Use checklists: Codify learning from past projects.
  4. Scrutinize plans: Independently review a project’s work plans, schedules, resources, budgets, etc.
  5. Conduct “long interviews” (Mullins 2007): Interview project stakeholders, subject matter experts, and other participants to uncover lurking problems and issues. Such interviews probe deep and wide, asking “out-of-the-box” questions to help managers identify latent needs that stakeholders are unable or unlikely to articulate readily.
  6. Pick up weak signals: Weak signals often come in subtle forms, such as a realization that no one in the organization has a complete understanding of a project, unexplained behaviors, or confusing outcomes. Recognizing and interpreting weak signals requires scanning local and extended networks, mobilizing search parties, testing multiple hypotheses, and probing for further clarity.
  7. Mine data: Electronic data mining can be a particularly powerful tool for extracting implicit, previously unknown, and potentially useful information. By simultaneously reviewing data from multiple projects, data mining could enable project managers to identify the precursors of potential problems.
  8. Communicate frequently and effectively.
  9. Balance local autonomy and central control. Allow bad news to travel ‘up’ in the organization structure. Provide emergency channels through any bureaucracy.
  10. Incentivize discovery. Reward the messenger.
  11. Cultivate an alert culture. Educate about unknown unknowns, where they tend to lurk, and why.

I always ask my project and risk management students a question: given two very similar projects, one with a list of 100 risks identified and the other with no risks identified, which project is riskier? Unfortunately, many executives and managers seem to get this backwards in practice. Individuals and cultures that prefer (or are incentivized) to ignore uncertainties and risks fuel the delusion and deception problems (e.g., Flyvbjerg et al. 2009) that plague many projects and other endeavors. Many unknown unknowns remain so because of our own lack of will to find and face them. The approaches outlined above should be part of the due diligence of any complex project’s planning and execution.

To find out more:
Browning, T. R. and Ramasesh, R. V. (2015). Reducing Unwelcome Surprises in Project Management. MIT Sloan Management Review, 56, 3: 53-62. (Online): http://sloanreview.mit.edu/x/56319 (The ideas and some of the wording in this blog post come from this article.)

Ramasesh, R. V. and Browning, T. R. (2014). A Conceptual Framework for Tackling Knowable Unknown Unknowns in Project Management. Journal of Operations Management, 32, 4: 190-204. (Online) (DOI): http://dx.doi.org/10.1016/j.jom.2014.03.003

References:
Flyvbjerg, B., Garbuio, M. and Lovallo, D. (2009). Delusion and Deception in Large Infrastructure Projects: Two Models for Explaining and Preventing Executive Disaster. California Management Review, 51, 2: 170-193.

Mullins, J. W. (2007). Discovering ‘Unk-Unks’: How Innovators Identify the Critical Things They Don’t Even Know that They Don’t Know. MIT Sloan Management Review, 48, 4: 17-21.

Project Management Institute. (2017). A Guide to the Project Management Body of Knowledge, 6th ed. Newtown Square, PA: Project Management Institute.

Biography: Tyson R. Browning PhD is Professor of Operations Management in the Neeley School of Business at Texas Christian University in Fort Worth, Texas, USA. His primary research is on managing complex projects. Previously, he worked for Lockheed Martin, Honeywell Space Systems, and Los Alamos National Laboratory. He is currently co-Editor-in-Chief of the Journal of Operations Management.

This blog post is part of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

For the five other blog posts already published in this series, see: https://i2insights.org/tag/partner-defence-science-and-technology/

Scheduled blog posts in this series:
November 19: Blackboxing unknown unknowns through vulnerability analysis by Joseph Guillaume
December 3: Yin-yang thinking – A solution to dealing with unknown unknowns? by Christiane Prange and Alicia Hennig
TBA: Detecting non-linear change ‘inside-the-system’ and ‘out-of-the-blue’ by Susan van ‘t Klooster and Marjolijn Haasnoot
January 28, 2020: How can resilience benefit from planning? by Pedro Ferreira
February 11, 2020: Why do we protect ourselves from unknown unknowns? by Bem Le Hunte

16 thoughts on “Looking in the right places to identify ‘unknown unknowns’ in projects”

  1. Nancy Leveson, in Engineering a Safer World and the articles she has produced, has shown all kinds of real-world scenarios where decomposing misses things. All components can be doing perfectly what they should and failure can still happen. I think her STPA approach could be used here.
    (re-posted from LinkedIn Systems thinking group by Gabriele Bammer)

    • Thank you for highlighting this point about the challenges of decomposition. For this reason, the fuller explanation of the approach emphasizes not only the basic decomposition of each system but also heightened attention to the relationships among the decomposed elements, because it is these relationships (among known elements) from which many unk-unks emerge. The 2014 paper in the Journal of Operations Management (see reference above) discusses the importance of exploring these relationships and their characteristics in order to increase understanding (reduce uncertainty) around their implications.

  2. Decomposing projects being the first step makes the rest of the steps a lot easier. Creating an alert culture can eliminate the dystopians from concluding that plans were not scrutinized and hence things did not work as intended.
    (re-posted from LinkedIn group International Professors Project by Gabriele Bammer)

  3. I think “data saturation” is reflective of known unknowns. For example, in a study of police brutality, one might ask all the police about it until saturation is reached… without asking any of the victims. Then, expanding, one might ask all the victims until, again, it seems that saturation is reached… and so on. On the other hand, I would redefine “theoretical saturation” as something very different: a situation where the theory is measurably completely systemic (as I think I’ve mentioned once or twice) and highly effective in practical application for addressing the problem (as are laws of physics). The theories I’ve analyzed in the social/behavioral sciences where data saturation is claimed tend to have a very low level of systemicity (typically under 20%) and so are not highly useful in practical application for addressing the problem. So, data saturation does not necessarily lead to theoretical saturation (at least in the short run).

  4. Several of the comments mention the challenge of determining how much to invest in uncovering unk unks. How do you know if you’re investing enough? When should you stop? What is too much? These are good questions, and I don’t have a definitive answer. But here are two ideas. First, this is akin to the question, “How much insurance should I buy?” The answer depends on several factors, from the likelihood of problems worth insuring against to the ability of the organization or individual to otherwise absorb any consequences of choosing the wrong amount. Second, we can draw on the research idea of “theoretical saturation.” This is how those doing qualitative, case study research determine when they’ve studied enough cases to have some reasonable confidence that their findings are representative and generalizable. If you’ve invested a typical, reasonable amount (due diligence) in the 36 channels described in the post and are not finding anything at all, then it might make sense to stop, whereas if you are uncovering things, you might need to go further. Also, as each of the six driving factors (complexity etc.) increases, the amount of investment in uncovering unk unks should increase, all else being equal. Over time, organizations can collect data and calibrate appropriate amounts of investment for different kinds of projects. When they find a mistake, they can adjust for the future. The point is to pay attention (this does cost something) and learn systematically.

  5. I really dislike the term unknown unknowns as it has become a free pass for some people and organisations who essentially give up when the intellectual effort or social interactions get too tough.

    There is also a philosophical problem with the idea of allocating resources to anything you don’t know about. There could be masses of things we don’t know about, so where do we stop?

    My own thoughts on the subject are set out here. It’s five years old so I would probably expand on it if I rewrote it today but I think the gist of it still stands.

    http://broadleaf.com.au/resource-material/unknown-unknowns/

  6. I fail to see how any of these are “detection” mechanisms. They may serve to heighten awareness and even to demonstrate in hindsight that a given unknown was supposedly “detectable” in the past (if only we knew then what we know now), but they contain no tools for revealing unknowns. All one can do is examine the conditions which allow a given situation to exist and ask what happens if one or more of those conditions change. What is revealed is knowable unknowns. Unknown unknowns cannot be known until change occurs.
    (Reposted from Systems Thinking LinkedIn group by Gabriele Bammer)

    • Thanks for this comment! Some of these tools are indeed ways to “examine the conditions which allow a given situation to exist”: situations of complexity, complicatedness, etc. In retrospect, our research revealed that many unk-unks could indeed have been knowable if people had just examined the right things earlier. The Dynamism factor pertains to change, so more and/or faster change is another indicator that unk-unks are more likely to be lurking.

  7. There are some great suggestions and guidelines for identifying knowable unk unks in this post and the readings it refers to. However, I was hoping there would be some consideration of when it’s actually a good idea to convert unk unks into known unknowns and when it isn’t. The implicit assumption in this post seems to be that this conversion always is an unalloyed good. Writ large, of course, it isn’t, as the following simple thought experiment demonstrates. Imagine suddenly becoming aware of the (practically infinite) variety of things you don’t know, including the (again, practically infinite) variety of your unk unks. The psychological impact of this would be overwhelming and probably mentally paralysing, not least of all because it would include unk unks that we are psychologically unable to face (and therefore are in denial). Therefore, we have a case for placing some limits on this conversion, and guidelines for doing so merit serious thought.

    Also missing is any consideration of whose unk unks are being converted, by whom and for what purposes. Any set of reasonable guides to deciding whether such a conversion is beneficial would need to incorporate these reference-points. In Tracy Kidder’s book, “The Soul of a New Machine” (1981, Little, Brown, & Co.) he describes a manager of an engineering team who says that when he’s given a project with impossible specifications and/or deadlines, he does not take the project to seasoned engineers because they’ll laugh at him and point out that the project is impossible. Instead, he hires “kids” straight out of engineering degrees because the “kids” don’t know that they don’t know what is impossible. And sometimes some of them will achieve the impossible. This manager knows better than to convert the kids’ unk unk into a known unknown. Instead, he exploits it to his and their mutual advantage.

    • Interesting points! There may indeed be some times where it’s better not to know some things. But note that here (using Kidder’s example) the project manager does know the things. He just chooses not to reveal them to his entire team. This is different from him not knowing the things at all. In fact, it’s better that he does know about the situation so he can determine which information is appropriate to reveal to whom.

  8. Thank you for your comment. I fully agree that novelty is important to consider. Actually, it is part of the complicatedness construct, which is the more subjective aspect, because novelty can vary from one person to another. Of course, the more novelty-driven complicatedness in a project’s desired result, process for getting it, organization, resources and tools, goals, and context, the more effort should be invested in directing recognition towards unk-unks.

    • I do not disagree, but the greater the novelty, the more likely unrecognizable unk unks are to occur and be significant. One needs to plan and budget for these. For example, the rapid mobilization of an emergency decision group, where every operating arm is represented by someone who can make major decisions. Budgeting for the unknown is the hardest part.

  9. I suggest a seventh factor that can dramatically increase the likelihood of unk unks. This is novelty. How close, or far, from what has been done before is this project?

    Here is an example, an old study if I remember correctly. In building a chemical plant (or new unit thereof) there is a step called commissioning and startup. This is where everything is in place and the task is to reliably create the desired product. The question was how much of the project budget should be allocated to this task? The study found that it was highly dependent on the novelty of the process. If replicating an existing unit, with modest improvements, then 10%. If scaling up a prototype for the first time, then more like 50%. This is a huge range.

    The point is that one should budget for unk unks, based on the novelty of the project. There are also management steps that can be taken, such as planning for emergencies, which is different from emergency planning.
