How can we know unknown unknowns?

By Michael Smithson


In a 1993 paper, philosopher Ann Kerwin elaborated a view on ignorance that has been summarized in a 2×2 table describing crucial components of metacognition (see figure below). One margin of the table consisted of “knowns” and “unknowns”; the other comprised the adjectives “known” and “unknown”. Crosstabulating these produced “known knowns”, “known unknowns”, “unknown knowns”, and “unknown unknowns”. The latter two categories have caused some befuddlement. What does it mean to not know what is known, or to not know what is unknown? And how can we convert either of these into their known counterparts?

Source: Adapted from Kerwin (1993) by Smithson in Bammer et al. (2008)

In this post, I will concentrate on unknown unknowns, what they are, and how they may be identified.

Attributing ignorance

To begin, no form of ignorance can be properly considered without explicitly tracking who is attributing it to whom. With unknown unknowns, we have to keep track of three viewpoints: the unknower, the possessor of the unknowns, and the claimant (the person making the statement about unknown unknowns). Each of these can be oneself or someone else.

Various combinations of these identities generate quite different states of (non)knowledge and claims whose validities also differ. For instance, compare:

  1. A claims that B doesn’t know that A doesn’t know X
  2. B claims that A doesn’t know that A doesn’t know X
  3. A claims that A doesn’t know that B doesn’t know X
  4. A claims that A doesn’t know that A doesn’t know X

The first two could be plausible claims, because the claimant is not the person who doesn’t know that someone doesn’t know X. The last two claims, however, are problematic because they require self-insight that seems unavailable. How can I claim I don’t know that I don’t know X? The nub of the problem is self-attributing false belief. I am claiming one of two things. First, I may be saying that I believe I know X, but my belief is false. This claim doesn’t make sense if we take “belief” in its usual meaning; I cannot claim to believe something that I also believe is false. The second possible claim is that my beliefs omit the possibility of knowing X, but this omission is mistaken. If I’m not even aware of X in the first place, then I can’t claim that my lack of awareness of X is mistaken.

Current unknown unknowns would seem to be claimable by us only about someone else, and therefore current unknown unknowns can be attributed to us only by someone else. Straightaway this suggests one obvious means of identifying our own unknown unknowns: cultivate the company of people whose knowledge-bases differ sufficiently from ours that they are capable of pointing out things we don’t know that we don’t know. However, most of us don’t do this. Instead, the literature on interpersonal attraction and friendships shows that we gravitate towards others who are just like us, sharing the same beliefs, values, prejudices and, therefore, blind spots.

Different kinds of unknown unknowns

There are different kinds of unknown unknowns, each requiring different “remedies”. The distinction between matters that we mistakenly think that we know about, versus matters that we’re unaware of altogether, is probably the most important distinction among types of unknown unknowns. Its importance stems from the fact that these two kinds have different psychological impacts when they are attributed to us and require different readjustments to our view of the world.

1. False convictions

A shorthand term for the first kind of unknown unknown is a “false conviction”. This can be a matter of fact that is overturned by a credible source. For instance, I may believe that tomatoes are vegetables but then learn from my more botanically literate friend that the tomato is, botanically, a fruit. Or it can be an assumption about one’s depth of knowledge that is debunked by raising the standard of proof: I may be convinced that I understand compound interest, but when someone asks me to explain it to them I realize that I can’t provide a clear explanation.

What makes us vulnerable to false convictions? A major contributor is over-confidence about our stock of knowledge. There is considerable evidence that most people believe they understand the world in much greater breadth, depth, and coherence than they actually do. In a 2002 paper, psychologists Leonid Rozenblit and Frank Keil coined a phrase for this: the “illusion of explanatory depth”. They found that this kind of overconfidence is greatest in explanatory knowledge about how things work, whether in natural processes or artificial devices. They were also able to rule out self-serving motives as a primary cause of the illusion. Instead, it arises mainly because our scanty knowledge-base gets us by most of the time, we are seldom called upon to explain our beliefs in depth, and even if we intend to check our beliefs, the opportunities for first-hand testing of many of them are very limited. Moreover, scanty knowledge also limits the accuracy of our assessments of our own ignorance: greater expertise brings with it greater awareness of what we don’t know.

Another important contributor is hindsight bias: the feeling, after learning about something, that we knew it all along. In the 1970s, cognitive psychologists such as Baruch Fischhoff ran experiments asking participants to estimate the likelihoods of outcomes of upcoming political events. After the events had occurred or failed to occur, participants were asked to recall the likelihoods they had originally assigned. When an event had actually happened, participants tended to over-estimate how likely they had thought it would be.

Nevertheless, identifying false convictions and ridding ourselves of them is not difficult in principle, provided that we’re receptive to being shown to be wrong and are able to resist hindsight bias. We can self-test our convictions by checking their veracity via multiple sources, by subjecting them to more stringent standards of proof, and by assessing our ability to explain the concepts underpinning them to others. We can also prevent false convictions by being less willing to leap to conclusions and more willing to suspend judgment.

2. Unknowns we aren’t aware of at all

Now let’s turn to the second kind of unknown unknown: the unknowns we aren’t aware of at all. This type of unknown unknown gets us into rather murky territory. A good example is denial, which we may contrast with unknown unknowns that stem from mere unawareness. The distinction is slightly tricky, but a good indicator is whether we’re receptive to the unknown when it is brought to our attention. A climate-change activist whose friend is adamant that the climate isn’t changing will likely think of her friend as a “climate-change denier” in two senses: he is denying that the climate is changing, and he is also in denial about his ignorance on that issue.

Can unknown unknowns be beneficial or even adaptive?

One general benefit simply arises from the fact that we don’t have the capacity to know everything. The circumstances and mechanisms that produce unknown unknowns act as filters, with both good and bad consequences. Among the good consequences is the avoidance of paralysis—if we were to suspend belief about every claim we couldn’t test first-hand we would be unable to act in many situations. Another benefit is spreading the risks and costs involved in getting first-hand knowledge by entrusting large portions of those efforts to others.

Perhaps the grandest claim for the adaptive value of denial was made by Ajit Varki and Danny Brower in their book on the topic. They argued that the human capacity for denial was selected (evolutionarily) because it enhanced the reproductive capacity of humans who had evolved to the point of realising their own mortality. Without the capacity to be in denial about mortality, their argument goes, humans would have been too fearful and risk-averse to survive as a species. Whether convincing or not, it’s a novel take on how humans became human.


Having taken us on a brief tour through unknown unknowns, I’ll conclude by summarizing the “antidotes” available to us.

  1. Humility. A little over-confidence can be a good thing, but if we want to be receptive to learning more about what we don’t know that we don’t know, a humble assessment of the little that we know will pave the way.
  2. Inclusiveness. Consulting others whose backgrounds are diverse and different from our own will reveal many matters and viewpoints we would otherwise be unaware of.
  3. Rigor. Subjecting our beliefs to stricter standards of evidence and logic than everyday life requires of us can quickly reveal hidden gaps and distortions.
  4. Explication. One of the greatest tests of our knowledge is to be able to teach or explain it to a novice.
  5. Acceptance. None of us can know more than a tiny fraction of all there is to know, and none of us can attain complete awareness of our own ignorance. We are destined to sail into an unknowable future, and accepting that makes us receptive to surprises, novelty, and therefore converting unknown unknowns into known unknowns. That unknowable future is not just a source of anxiety and fear, but also the font of curiosity, hope, aspiration, adventure, and freedom.

Bammer, G., Smithson, M. and the Goolabri Group. (2008). The nature of uncertainty. In G. Bammer and M. Smithson (eds.), Uncertainty and Risk: Multi-Disciplinary Perspectives. Earthscan: London, United Kingdom: 289-303.

Kerwin, A. (1993). None too solid: Medical ignorance. Knowledge, 15, 2: 166-185.

Rozenblit, L. and Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 5: 521-562.

Varki, A. and Brower, D. (2013). Denial: Self-deception, false beliefs, and the origins of the human mind. Hachette: London, United Kingdom.

Biography: Michael Smithson PhD is a Professor in the Research School of Psychology at The Australian National University. His primary research interests are in judgment and decision making under ignorance and uncertainty, statistical methods for the social sciences, and applications of fuzzy set theory to the social sciences.

This blog post is part of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

Published blog posts in the series:
Accountability and adapting to surprises by Patricia Hirl Longstaff

Scheduled blog posts in the series:
September 24: What do you know? And how is it relevant to unknown unknowns? by Matthew Welsh
October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim

17 thoughts on “How can we know unknown unknowns?”

    • The concept of an unknown unknown certainly can include “unknowables”, as can known unknowns. I may believe, for instance, that until it occurs the time and manner of my demise are unknowable. In that case I’d be declaring a known unknown that I think is unknowable. If I believe that no-one can foretell when and how they will die, then I’ll consider anyone who thinks they can foretell this to possess a false conviction (i.e., an unknown unknown) that happens to concern what is or isn’t knowable.

  1. In a new paper in Environment & Planning A, I link unknown unknowns to present knowledge being rendered completely obsolete by cascading changes to beliefs, attitudes and behaviours made by diverse actors in response to – and in anticipation of others’ responses to – new developments. This requires not a mere revision and updating of the probabilities of known possibilities already residing within an ex-ante defined state space, but the complete destruction and reframing of the space. It can be done by combining qualitative and quantitative scenario techniques.

    Derbyshire, J. (2019) Answers to questions on uncertainty in geography: Old lessons and new scenario tools. Environment & Planning A (in press).

    Reposted from Twitter (by Gabriele Bammer)

    • This should be a very interesting paper, on a relatively neglected topic. Biologists who are trying to assess the biodiversity of an ecosystem are engaged in the construction of a state space, and I’ve taken some of their methods, heuristics, and intuitions into the study of how other people do this. In preliminary experiments I’ve found that in the absence of prior beliefs about the nature of the state space, people construct it in a similar (and sensible) way to biologists as they sample “species” from an environment. However, if they have prior beliefs or stereotypes about the state space these override all other heuristics.

      • interesting – on a related note, I’ve found that when people lack more concrete information, they tend to “fill in” the knowledge gaps with their assumptions/biases/etc. Which, in turn, leads them to make decisions based on those unfounded biases (not an optimal situation). I am looking forward to a world where people can understand the structure of knowledge and use that to identify directions for seeking more complete and more useful knowledge. For an environmental example… imagine studying an ecosystem; perhaps using “the food chain” as a kind of structure. If there is a missing link in the chain, you would know that there is something about that ecosystem you do not understand… and that would give you some idea about “where” to look to find the knowledge to fill that gap.

  2. Great post Michael, and a very interesting conversation.

    I would note that good interdisciplinary practices encourage most if not all of your recommended antidotes.

    Picking up on something Steve said, I wonder if an important type of unknown unknown involves not knowing that one thing we know something about affects another thing that we know something about. We might all, for example, have been aware that there was a cultural backlash among some people against migration and other social changes. And we also know that democracy depends on some degree of mutual respect and open communication. But we might not have imagined that the former could threaten the latter.

    This sort of unknown unknown — where we simply don’t imagine how one thing might encourage unexpected changes in another — might be the most important for planners to grapple with. My sense of history is that historical surprises generally stem from some interaction among phenomena studied in different disciplines. And if planners engage with disciplinary experts one at a time they are unlikely to come to know what they need to know. Disciplinary silos generate unknown unknowns. The antidote for this sort of unknown unknown would seem to be increased interdisciplinarity and a conscious effort to map how each phenomenon we study might affect others. This is no easy task, to be sure, but is probably more manageable than it might seem at first glance.

    • I agree that good interdisciplinary practices would include my “antidotes”– As would good practices within any discipline I’m acquainted with.
      I also concur with your observation that an important subcategory of unknown unknowns involves unforeseen relationships between things that we already know about. Medical drug research provides a rich set of examples, in the form of unanticipated “interactions” between medications. Testing for the efficacy of a drug on its own truncates our ability to know how it might affect or be affected by other drugs present in a person. So in addition to increased interdisciplinarity, we could add a recommendation that RCTs [randomised controlled trials] be extended to incorporate tests of these interactive effects, at least for the co-presence of commonly-used medications.

    • ‘where we simply don’t imagine how one thing might encourage unexpected changes in another’: That is what I meant when I said that unknown unknowns come about from ‘cascading changes to beliefs, attitudes and behaviours made by diverse actors in response to – and in anticipation of others’ responses to – new developments’. If you’re a government and you enact a particular policy, the policy essentially represents an inherent prediction about how people are going to respond to it. Since the aggregate response depends on people trying to anticipate and respond to other people’s anticipated responses, and so on over multiple levels, it becomes impossible to know what the outcome will be. The effect is an unknown unknown resulting from a policy created on the assumption that the outcome is knowable and predictable.

  3. Thanks Michael, Thanks Dr Steve
    I’m often totally lost when it comes to the Philosophy of Knowledge, but can it not be that the unknower and the possessor of the unknowns are identical, and would it not be meaningful in that case to assert that “A doesn’t know that A doesn’t know X”, where X is not something specific? True, A would have to know (at least how to name) X in order to assert that A doesn’t know X, and thus A would know that A doesn’t know X, but what if the something that is not known escapes knowledge to the point that it cannot be assigned an X? What if X cannot currently be identified? Would it not be meaningful and true to say that “A claims that A doesn’t know the things that A has yet to identify”? While this may appear to be a tautology, it would also reinforce Dr Steve’s example based on “categories of knowledge related to Workers, Raw Material, Products, Customers, and Natural Environment”. What if, as a company executive, I failed to realise that one of the missing categories might be “Executives”? That might mean that I now know that I don’t know how the recommendations are going to affect me personally, but if I hadn’t identified Executives as a category of impact, I would still be in the position of unknown unknowns with regards to the impact upon my own role …? Maybe humility, and an acceptance of Aristotle’s comment that “The more you know, the more you realise you don’t know”, is indeed the way forward. Especially when it is noted that you don’t know what you don’t know, until you do …

    • Leila, I agree that this meta-cognitive material can be slippery stuff. Let’s see what happens to “A claims that A doesn’t know that A doesn’t know X” when I use your example, so it becomes “I don’t know that I don’t know that ‘Executives’ is a missing category”. This statement still doesn’t make sense, because it suggests that instead I actually do know that ‘Executives’ is a missing category. I think what you’ve done is to put it in the past tense, i.e., “I didn’t know that I didn’t know that ‘Executives’ is a missing category”, in which case it does seem plausible. Often the primary path to resolving our unknown unknowns is with hindsight.

      • Hi Mike.
        Great post. Your response here is preempting my post (due on the 24th) where I add to the confusion with a discussion of how the 2×2 structure isn’t quite as clear as we sometimes pretend. In particular, focusing on the fact that things regularly move between these categories and not just in the directions we think. We can realise that something was missing as you describe above but we can also forget or fail to recall things that we were aware of previously. Whether a particular thing is known or unknown (etc), can thus be very much dependent on the context in which the ‘knower’ finds themselves.

        • Hi Matt,
          Great to hear that you’re going to be posting on the 24th– I’ll look forward to that. I agree that there are fuzzy boundaries in the 2×2 structure and often a fair amount of “drift” between its categories. As you’d know, recall is strongly influenced by contextual cues so that an unknown unknown in one situation can become, e.g., a known unknown or an unknown known in another.

    • Hi Lelia – yes – we can recognize that something is missing if the assigned category/box is empty, but it is much more difficult to recognize that we are missing a category! What we have found effective is to look at the structure of knowledge – a very simple example here:

      Basically, we can create a map (e.g. boxes and arrows) to represent our knowledge (theory, policy model, etc). On that map, we know that there should be at least two arrows pointing at each box from other boxes. So, any place where a box does not have at least two arrows pointing to it, we know that there is something missing (a kind of “gap analysis”) and we know to start looking (although, not necessarily what we will find).
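
The in-degree check described in this comment can be sketched in code. This is a hypothetical illustration only: the map data and the two-arrow threshold follow the comment above, not any published tool, and the box names are borrowed from Steve’s company example.

```python
# Sketch of the "gap analysis" described above: represent a knowledge map
# as boxes (concepts) and arrows (causal links), then flag any box with
# fewer than two incoming arrows as a place where something is missing.

def find_gaps(boxes, arrows, min_incoming=2):
    """Return boxes with fewer than `min_incoming` arrows pointing at them.

    boxes:  iterable of concept names
    arrows: iterable of (source, target) pairs
    """
    incoming = {box: 0 for box in boxes}
    for source, target in arrows:
        incoming[target] += 1
    return [box for box, n in incoming.items() if n < min_incoming]

# Invented example map: "Products" has only one incoming arrow, so some
# influence on it is probably missing from the map.
boxes = ["Workers", "Raw Material", "Products", "Customers"]
arrows = [
    ("Workers", "Products"),
    ("Raw Material", "Workers"),
    ("Customers", "Workers"),
    ("Products", "Customers"),
    ("Workers", "Customers"),
    ("Customers", "Raw Material"),
    ("Products", "Raw Material"),
]
print(find_gaps(boxes, arrows))  # -> ['Products']
```

The check tells us *where* to look for an unknown, as the comment notes, not *what* we will find there.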

  4. Good stuff – always good to keep one’s mind open to new learning!

    A few related points… First, it is often more effective to think in terms of “useful knowledge” (rather than simply how much knowledge we have). Second, using a “practical mapping” approach, it is possible to objectively identify “blank spots” on our maps – places where we can focus our efforts to identify our unknowns. This improves our ability to know what we don’t know… and improve our level of useful knowledge. Third (for those who are serious meta-thinking geeks), we’ve made some progress in identifying unknown unknowns by using an “orthogonality” perspective. Way-too-simply (the way-too-long paper is under submission now), let’s say there are a few categories of knowledge accepted as useful (say, categories of knowledge related to Workers, Raw Material, Products, Customers, and Natural Environment). Now, let’s say that you (as a company executive) are handed a report with recommendations for changing how your company is run. You look at the report and see that there is lots of information about how things will change in every category but one: there is nothing about how the changes will affect the Natural Environment. Bingo – you have some idea of where to look for your unknown unknown knowledge. Of course, you should choose your categories carefully – the more categories there are (and the more abstract they are), the less likely you are to miss some unknowns.
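
The category-coverage idea in this comment amounts to a simple set difference. A minimal sketch, assuming the category names used in the comment (they are illustrative, not a standard taxonomy):

```python
# Compare the categories a report actually addresses against an agreed
# checklist of knowledge categories; any missing category marks where to
# look for unknown unknowns.

EXPECTED_CATEGORIES = {
    "Workers", "Raw Material", "Products", "Customers", "Natural Environment",
}

def missing_categories(report_sections, expected=EXPECTED_CATEGORIES):
    """Return, sorted, the expected categories the report says nothing about."""
    return sorted(expected - set(report_sections))

# The executive's report covers everything except the Natural Environment.
report = ["Workers", "Raw Material", "Products", "Customers"]
print(missing_categories(report))  # -> ['Natural Environment']
```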

    • Steve, you’ve raised some interesting points. Thinking in terms of “useful” knowledge can indeed be effective, but only if you already know enough to have a fairly accurate (and suitably inclusive) definition of “usefulness”. That said, the observation you’ve made about choosing your categories carefully bears on one of my favourite kinds of unknowns: Sample space ignorance. This is when we don’t have a complete list of all the possible states or outcomes. Examples of this are a zoologist who is trying to identify the species of animals in a heretofore unexplored environment, or a software developer trying to identify all of the bugs in a complex piece of software.
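
One standard way to quantify sample space ignorance of the zoologist’s kind is the Good-Turing estimate, in which the fraction of sightings that are singletons (species seen exactly once) approximates the probability that the next sighting is a species not yet on the list. A minimal sketch, with invented data:

```python
# Good-Turing estimate of the unseen portion of a sample space: the share
# of observations that were one-off sightings approximates the chance the
# next observation is a new species.
from collections import Counter

def unseen_probability(observations):
    """Estimate the probability that the next sighting is a new species."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

sightings = ["ant", "ant", "beetle", "moth", "ant", "wasp", "beetle", "fly"]
print(unseen_probability(sightings))  # 3 singletons in 8 sightings -> 0.375
```

When the estimate is high, the constructed state space is still far from complete; when it approaches zero, further sampling is unlikely to reveal new states.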

      • Mike – Good points. For usefulness, our knowledge should consist of measurable concepts/variables and we should understand the causal connections between them.
