What do you know? And how is it relevant to unknown unknowns?

By Matthew Welsh


How can we distinguish between knowledge and ignorance and our meta-knowledge of these – that is, whether we are aware that we know or don’t know any particular thing? The common answer is the 2×2 trope of: known knowns; unknown knowns; known unknowns; and unknown unknowns.

For those interested in helping people navigate a complex world, unknown unknowns are perhaps the trickiest of these to explain – partly because the moment you think of an example, the previously “unknown unknown” morphs into a “known unknown”.

My interest here is to demonstrate that this 2×2 division of knowledge and ignorance is far less crisp than we often assume.

This is because knowledge is not something that exists in the world but rather in individual minds. That is, whether something is ‘known’ depends not on whether someone, somewhere, knows it, but on whether this person, here and now, does.

What an individual ‘knows’, however, is not static. Obviously, we learn new things: unknowns becoming known. But we also forget things: knowns becoming unknown, whether permanently or temporarily.

Further, whether we remember particular things is contextual – how questions are posed to us and other factors affect our memory processes. This alters what parts of our memory we search and how – changing the likelihood of our ‘finding’ different pieces of knowledge. That is, we can (and do) fail to recall things that we, in another sense, ‘know’.

To take a simple example, if I ask you to list all possible reasons for your car not starting, you will produce a list of possibilities informed by recent experience and other contextual effects. This list is unlikely to be exhaustive. Instead, it will be a subset of the potential list you could produce with a perfect search of your memory (and rational extrapolation). This larger list will, itself, be a subset of a complete list that would include mechanisms that you have never encountered or don’t understand.

When I then ask you to decide how likely each of these possible causes is, the missing items are all, epistemologically, unknown unknowns – things you are unaware you need to estimate. Some, however, would have been known unknowns in different circumstances – had your cognitive processing been triggered from a different starting point and turned up a different set of alternatives. For example, if you forget to include “out of petrol” on your list, the probability of this as a reason for your car not starting is an unknown unknown – despite you knowing (in another sense) that this is a possible reason.
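To make the structure of this example concrete, here is a toy sketch in Python (the causes and probabilities are invented purely for illustration): when probabilities are elicited only over the causes you recalled, the forgotten causes implicitly receive zero and the recalled ones absorb their share.

```python
# Toy illustration (invented numbers): probabilities elicited over a
# recalled subset of causes silently assign zero to forgotten causes.

# A 'complete' list of reasons a car might not start, with notional
# true probabilities (made up purely for illustration).
true_probs = {
    "flat battery": 0.40,
    "starter motor failure": 0.15,
    "ignition fault": 0.15,
    "out of petrol": 0.20,
    "immobiliser glitch": 0.10,
}

# The subset a person happens to recall in this context --
# "out of petrol" has slipped their mind.
recalled = ["flat battery", "starter motor failure", "ignition fault"]

# Elicited estimates are typically normalised over the recalled set,
# so each recalled cause absorbs the forgotten causes' share.
recalled_mass = sum(true_probs[c] for c in recalled)
elicited = {c: true_probs[c] / recalled_mass for c in recalled}

for cause, p in elicited.items():
    print(f"{cause}: true {true_probs[cause]:.2f}, elicited {p:.2f}")

# Any cause not on the list -- here, 'out of petrol' -- implicitly
# gets probability zero: an unknown unknown for this estimate, even
# though the person 'knows', in another sense, that it is possible.
```

The numbers themselves don’t matter; the point is that the elicited distribution is conditional on whatever the memory search happened to return.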

The literature on decision making is rife with examples of context affecting people’s cognition and, by extension, what they ‘know’ at any given point in time. This complexity is too easily swept under the rug when we think about knowledge as if it exists in the world rather than in the mind.

Given this, asking ‘what do you know?’ seems an oversimplification. To understand what we know, we have to consider when we know things and the processes underlying how we know things – and that is before we even start on the thorny problem of differentiating knowledge from belief.

Questions
Is the 2×2 structure more of a help or a hindrance in thinking about knowns and unknowns? What other categories/distinctions might a complete model need to include?

Biography: Matthew Welsh PhD is a Senior Research Fellow at the Australian School of Petroleum, University of Adelaide, Australia. His research focusses on how people’s inherent cognitive processes affect their judgements, estimates and decisions and the implications of this for real-world decision making. He is the author of Bias in Science and Communication: a field guide – a guide for scientists and science communicators interested in understanding how biases arise, can be identified and countered.

This blog post is part of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

Published blog posts in the series:

Accountability and adapting to surprises by Patricia Hirl Longstaff
https://i2insights.org/2019/08/27/accountability-and-surprises/

How can we know unknown unknowns by Michael Smithson
https://i2insights.org/2019/09/10/how-can-we-know-unknown-unknowns/

Scheduled blog posts in the series:

October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim
October 22: Creative writing as a journey into the unknown unknown by Lelia Green
November 5: Looking in the right places to identify “unknown unknowns” in projects by Tyson R. Browning

26 thoughts on “What do you know? And how is it relevant to unknown unknowns?”

  1. Dear Matthew,
    The subject of your message remains relevant.
    It is worth noting that the topic is so complex that you rely on nuances in the construction of phrases and sentences to make sense of it. It is these nuances that prompt associations in the reader, allowing an overall picture to form and your reasoning to be evaluated.
    Perhaps you will be interested in reasoning that reduces the complexity of the topic and allows a move from the nuances of phrases to a definite methodology.

    First, it is necessary to settle the question of what the unknown unknown is:
    • is it an independent substance?
    • is it a shadow cast by an independent substance?
    • is it an object that we do not perceive?
    • is this an object that we cannot identify?

    Secondly, it is necessary to clarify what the known unknown is:
    • is this an object?
    • is it the interaction of objects?
    • is this a process?
    • is this the relationship between an object, objects and processes?

    Thirdly, it is necessary to clarify what the known known is:
    • is this knowledge?
    • is this an opinion?

    Fourthly, it is necessary to decide from which position the relationship to the known and the unknown should be considered:
    • from a philosophical position, associated with the possibility of knowing unknown unknowns;
    • from an intellectual position, associated with the identification of unknown knowns;
    • from a professional position, associated with the solution of professional tasks and the participation of known unknowns;
    • from a trivial position, associated with ensuring the real-life activities of people and the participation of known knowns.

    Various combinations of elements of the first, second, third and fourth will form a conditionally closed problem space and context that will allow you to reasonably use the appropriate methodology or successfully do without it.
    I am sure that an intellectual position, associated with the identification of unknown knowns, and a professional position, associated with the solution of professional tasks and the participation of known unknowns, are useful for specialists in defence science and technology.
    Simply put, specialists want to know what they want to know and what they need to know in order to make a decision or to develop such a solution. What they want and need to know should be knowledge, not opinion. This is knowledge about the relationships between an object, objects and processes. Knowledge is the accumulated stock of the relevant scientific discipline. Consequently, a methodology capable of obtaining such knowledge, in conjunction with the academic disciplines, should belong to a scientific discipline that draws on the possibilities of systems and transdisciplinary thinking. Perhaps that will be systems transdisciplinarity?

    Reply
  2. Hello Matthew, thank you for this great post. Further to Stephen’s comments in regard to Dave Snowden’s Cynefin framework (https://www.youtube.com/watch?v=N7oz366X0-8), one of Snowden’s seven rules of managing knowledge is “We only know what we know when we need to know it.” This reflects the dynamic and contextual nature of knowledge that you discuss in your post.

    The seven rules are the distinctions that underpin the Cynefin framework. They are:

    1. Knowledge can only be volunteered, it cannot be conscripted.

    2. We only know what we know when we need to know it.

    3. In the context of real need few people will withhold their knowledge.

    4. Everything is fragmented.

    5. Tolerated failure imprints learning better than success.

    6. The way we know things is infrequently the same as the way we report we know things.

    7. We always know more than we can say, and we will always say more than we can write down.

    Snowden provided a very good explanation of these rules in a recent presentation to the Systems Integration Knowledge Management (SIKM) Leaders Community. The presentation slides are available at: https://www.slideshare.net/SIKM/lets-start-to-manage-knowledge-not-information

    An important aspect that Snowden emphasises is that because of the dynamic and contextual nature of knowledge, knowledge capture should occur in real time.

    Reply
  3. The common problem with ‘framing’ a concept, e.g. in a 2×2 matrix, is the tendency to misinterpret superficially meaningless classifications as mere ‘stuffing’ of the proposed structure for the sake of completeness. The unknown knowns and, in particular, the unknown unknowns are a good example of apparent conceptual overloading of a dimension, seemingly present only to complement the framing and make it coherent.

    Since the ‘unknown’ side of the axis doesn’t lend itself to classification of facts – that is, it by definition remains empty, as Matthew Welsh observed – its purpose can seem counterintuitive. Yet in practical application it proves instructive as a placeholder that expands the scope of investigation: a function that points the analysis or questioning beyond the known unknowns, which are always front and centre of investigations. In effect, the unknown quadrants open the search to discovery of a more comprehensive view of the system.

    Reply
    • Hi Piotr.

      Thanks for your comment.
      I agree that the inclusion of the unknowns is very useful for guiding thinking – necessary, in fact. What I’m wondering is whether the 2×2 structure is sufficient or whether it needs more nuance and clarity around what belongs in which category (and when) and possibly a way to distinguish knowledge and belief?

      Cheers,
      Matthew.

      Reply
      • Indeed, the very classification of data as facts or knowledge versus beliefs and assumptions is the essence of this technique, especially as a dimension of a diagram. The notion of ‘unknown’, represented by the empty quadrants – in effect a standing question, ‘what else?’ – can be uncovered or reconsidered in the context of this vector.

        Reply
  4. Matthew, I am not as deep into this subject as you and your commenters. Really. But it did bring up something that has been bothering me. How can we tell if a decision-maker is wrong? I am guessing you will say it has something to do with context but I would love to hear you discuss it. So….
    Person X hears news from a source they believe is trustworthy and that news is part of what they already believe is going on. Person Y hears news from a source they believe is trustworthy and that news is part of what they already believe is going on. They are looking at the same problem and they do not have lots of time to figure out how they know stuff. How would you counsel them? Are they acting on known-knowns? Or something else?

    Reply
    • Hi Patricia.
      Thanks for the question. I think it comes down to the extra nuance that we need to add to complete the 2×2 structure. As it stands – and as Mike and David pointed out in their comments – it doesn’t differentiate between knowledge and belief.

      In your example, both X and Y have what an epistemologist would call “justified beliefs” (if I’m remembering my philosophy correctly) – that is, beliefs they hold for what we would consider ‘good’ reasons – a trusted source and information that accords with pre-existing beliefs.

      The problem is that a justified belief is not, necessarily, true and, if it isn’t true, it wouldn’t be considered knowledge (philosophically speaking). The classic example is looking at a clock sometime during the afternoon and seeing that it says 2 o’clock. You now have a justified belief that it is 2pm based on the fact that you know it is after noon and that clocks are, generally, reliable. If, however, this clock stopped an hour ago and it is really 3pm, your justified belief is false and thus you do not ‘know’ the time.

      So… when X and Y are acting on their beliefs, it is possible that, strictly defined, one is acting on a known known and the other on an unknown unknown – as that person thinks they know the truth but is wrong and therefore isn’t aware that they are missing information.

      From a practical point of view, though, we often have to act on our beliefs without being able to confirm their truth and acting on a justified belief is what we would expect a rational person to do. In my book, I even have a little section entitled “Gullibility is Good” – reflecting the fact that most of our learning comes from fairly reliable social sources and thus fact-checking all of the information you receive is far too cognitively demanding for us to attempt.

      Given this, the counsel I would offer people is to remember that we have a number of biases that make us more likely to uncritically accept information that accords with our pre-existing beliefs (e.g., confirmation bias; the fluency effect) and that, therefore, they should consider the possibility that their justified beliefs will be wrong more often than they expect. That is, you need to consider that there is a non-zero probability that your ‘known known’ is actually an unknown of some stripe and plan accordingly.
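      As a minimal sketch of one way to put a number on that advice – assuming Python, and with the 10% error rate below invented purely for illustration – you can shrink a stated probability towards a neutral base rate to allow for the chance that the belief itself is mistaken:

```python
# Toy sketch: discount a 'certain' belief by an assumed chance that
# the belief itself is wrong (the 10% error rate is an invented,
# illustrative figure, not a measured one).

def discounted(p_belief: float, p_error: float, p_base: float = 0.5) -> float:
    """Linearly shrink a stated probability towards a base rate,
    allowing for the possibility that the 'known known' is mistaken."""
    return (1 - p_error) * p_belief + p_error * p_base

# You feel certain (p = 1.0) that your trusted source is right, but
# allow a 10% chance that this 'known known' is actually wrong:
print(discounted(1.0, 0.10))  # 0.95 -- leaves room for surprise
```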

      Cheers,
      Matthew.

      Reply
  5. Hi Matthew,

    Great article. Very relevant to decision making for complex issues. One approach that could help with practical application of decisions about unknown unknowns is scenario building. This is how it could work:

    In the first round of collaboration, some stakeholders build scenarios using their current knowledge whilst a second group of stakeholders remains blind to these scenarios. For the first group of stakeholders the scenarios are considered known unknowns but for the second group it is likely that at least some of the scenario elements are unknown unknowns (i.e. beyond their experience or knowledge).

    In the second stage of collaboration, the second group of stakeholders receives the scenarios and applies them to their policy recommendations. This process tests whether the recommended policies can withstand, and adapt to, a range of unknown events.

    We know that we can’t test every unknown unknown like this, because we will never know all the surprises that could occur, but we can ensure that there are sufficient mechanisms within the policy recommendations to allow sensible adaptation.

    Reply
    • Hi Bonnie.

      That is an interesting approach and would, I think, be a good test of the resilience of policies to unknown unknowns. I think diversity would be key here – bigger, more diverse groups should generate more known unknowns during scenario building and thus provide a better test of the second group’s policies – but it could provide some benefit even with just a pair.

      Cheers,
      Matthew.

      Reply
      • Sorry Matthew, I am so immersed in the field that I forget that it is an unknown unknown 🙂 to some.

        Cynefin is just a label chosen by David Snowden for a framework we can use to understand what we can and do know about a system. There is an extensive body of work that flows from it.

        The short and very meager introduction is that it is useful to distinguish between:
        – Ordered systems, which behave according to ordered principles. These might be [a] Obvious – so straightforward that we can assume all concerned will understand them – or [b] Complicated (reserving the word complex for now) – where we need a lot of time or specialist expertise to understand them, manage them and make predictions about how they will behave, but there is order at work; and
        – Unordered systems, in which the way the component parts interact is itself changing, so we see not only the effect of one component on others but also the cause–effect relationships evolving over time, with new features emerging and others fading away. These might be [a] Complex – operating on a long enough timescale that it is worth trying to understand and follow the emergent process, so that we can gain insights, exercise influence in the short term and understand how things happened even if we could not have predicted them, which means we need to keep updating our understanding – or [b] Chaotic – changing so fast that it is impossible to gain any understanding of what will happen, or even, afterwards, of why things happened.

        The word unorder is used so that disorder can be reserved for the state of being unconscious of the distinction between order and unorder or lacking sufficient information to know which one you face.

        Examples of chaos are rare. It is usually not tolerated. Someone steps in to clamp down and impose order but some rather trite examples of the other states might be:
        – Obvious, if your car runs out of fuel you need to fill the tank
        – Complicated, if the electronic ignition is misfiring you need a specialist mechanic to fix it
        – Complex, if you invest in an electric car you have some idea where you can use it and be assured of recharging it without being stranded, but the interaction of climate change activism, social attitudes, car manufacturers’ developments, government regulation and other things makes it very hard, even with the best expertise and a lot of time for analysis, to understand whether you will be able to use one everywhere you would like next year – and we don’t know if the need to use a personal car will remain a serious concern in ten years’ time.

        It took me a couple of years to internalise the implications of the Cynefin domains but it opened me to insights I have not seen on offer anywhere else. Here are a few links that might be useful.

        http://broadleaf.com.au/resource-material/complexity-resources/

        http://broadleaf.com.au/resource-material/complexity-whats-new/

        http://broadleaf.com.au/news/complex-systems-methods-and-ict-projects/

        Reply
  6. Hi Matthew – thanks for the post – really interesting.
    It provoked a thought that I wanted to test with you – you say that knowledge is held by individuals – however, as humans we can store information outside ourselves. Bodies of knowledge, like scientific knowledge, or at least bodies of information, exist whether an individual knows about them or not. Take a book (or a manual about car mechanics!) – even if everyone who wrote that manual is no longer around, doesn’t the knowledge still exist even if completely unknown by all individuals? Would this make the knowledge known and unknown at the same time?
    I wonder whether it might be useful to use the 2×2 at different scales – individual, project, body of knowledge – this way it could be known and unknown at the same time, but at different scales…?
    Melissa

    Reply
    • Hi Melissa.

      Thanks for the comment!

      I think you’ve hit on a key distinction – which echoes what people call institutional knowledge. There definitely is a sense in which we regard things as being ‘known’ if they are written down somewhere – and this is how we often think about the 2×2 structure, with things becoming ‘known’ once anyone knows them. The question, though, is how relevant that is when a person is involved in a decision. The answer, I think, has to be driven by the availability of that knowledge to the person.

      Take a (completely-up-to-date) example: during the dark ages in Europe, the knowledge of the Greeks and Romans wasn’t technically lost. It still existed – in the Middle East. From the point of view of a western European, though, would we consider the discoveries of Archimedes ‘known’? I would be inclined to think of the information as still existing but that information not being known. The same goes at a more local scale within an organisation. While someone may have knowledge of a particular event, if the person making the decision doesn’t, then it has to be regarded as an unknown. Given this, organisational structures, SOPs [standard operating procedures], etc, have to be designed to funnel information to decision makers so that information and the knowledge of others is transferred to them.

      Cheers,
      Matthew.

      Reply
  7. Very nice post, Matt, and an excellent question at the end. Permit me to introduce two additional complexities into the meta-cognitive mix: standards and burdens of proof. How we decide when we know or don’t know something depends on the standard of proof we apply. When I ask classes of undergraduates how many of them “know” the Earth isn’t flat, almost all hands go up. When I then ask how many of them could prove it, nearly all hands go down. By asking this, I’ve raised the standard of proof. These standards are socially constructed, and obvious examples of these constructions can be found in law, science, and policy.

    Likewise, to decide whether something “is known” we have to apply both a standard and a burden of proof. The latter refers to who or what is landed with the responsibility for making the case that something is known. Again, legal trials provide an obvious example (the prosecution must prove the defendant’s guilt, given the prior presumption of innocence), and the precautionary principle provides another (it must not be left to the environment to prove harm is being caused by action X). The locus and status of the burden of proof also are socially constructed.

    The socially constructed nature of standards and burdens of proof is important because any consensus among stakeholders about what is or isn’t known usually requires an agreement about what standards and burdens of proof shall be applied. Many of the more honest disagreements about matters such as the reality of human-caused climate change are fuelled by unresolved differences among laypeople, scientists, and politicians regarding standards and burdens.

    Reply
    • Hi Mike.

      You make an excellent point – one that I deliberately dodged when writing my post!

      My first draft did start considering the difference between belief and knowledge. I still have flashbacks to my Epistemology classes discussing when a belief qualifies as knowledge – criteria like it being ‘justified’ and ‘indefeasible’ – but I decided that the added complexity was detracting from the main point that I wanted to make.

      If we want to have a complete model of knowledge and ignorance, though, it definitely needs to be considered – both at the philosophical level and at the practical level of burdens of proof that you introduce.

      Cheers,
      Matthew.

      Reply
  8. There is also the problem of what Mark Twain described as the things we know that ain’t so. That is, things we believe we know that in fact are false. This is a common problem but decision theory tends to ignore it (as does economics).

    Reply
    • Hi David.

      Yes that is definitely a problem that needs to be incorporated into our discussions of knowledge and ignorance. (One that, as per my response to Mike’s comment above, I deliberately left out of my post.)

      How to distinguish between beliefs and knowledge is the subject of many an Epistemological text and I’m not sure that there is always a good answer. You can, as you state, believe things that aren’t true but you can also believe things that are true – but for the wrong reasons. Neither of those counts as knowledge, philosophically speaking, but they have very different practical implications.

      A lot of what we consider ‘knowledge’ probably isn’t, technically speaking – like Mike’s example of his students ‘knowing’ that the Earth is round. As social creatures, we have a tendency towards gullibility. Not in an entirely negative sense but in that we are often willing to accept information given to us by others at face value. This means that a lot of what we consider our ‘knowledge’ probably hasn’t been critically examined.

      Cheers,
      Matthew.

      Reply
  9. Not only is individual knowledge not static, the world is not static either. When something becomes known about, it leads to adaptations to behaviour which then invalidate the knowledge that gave rise to them in the first place, throwing up new contexts.

    Reply
    • Hi James.

      Absolutely true!

      Even without considering feedback loops like the one you describe, changes in the world change what is true. Given that the most basic definition of knowledge is a “true belief”, this means that changes in the world change whether you know something or not.

      A simple example might be “knowing” that the Cray Titan is the fastest super-computer in the world. If you believed this in 2012, then that would qualify as knowledge as it was, at that time, true. If you still believe that now, however, then your belief is mistaken and thus no longer knowledge.

      Cheers,
      Matthew.

      Reply
      • I think the context specificity angle is interesting because, if you take something like a (literal) black swan (i.e. not of the Taleb variety, but an actual black swan), their existence was a surprise to people arriving from a context thousands of miles away, but aboriginal people had known about their existence for centuries. Perhaps for them, a white swan would have been a surprise!

        Reply
    • A standard feature of the complex domain, see post on Cynefin.

      Simply by observing and exploring a system we can change it. Think of what happens with community consultation. As soon as the fact that the consultation is being undertaken becomes known, people think more deeply about the topic and start to shift their positions.

      The most effective way to operate in such an environment is to acknowledge that context is everything and is constantly shifting, so we need to keep exploring, try to minimise the disruptive effects of those explorations so we can get a grip on how to make the changes we do want to see, and make change incrementally with frequent review, update and redirection. There are a lot of parallels with agile working, but that has been overtaken by consultants making it a commodity, so it’s not necessarily the best analogy.

      Reply
      • And any domain including people will be complex. Your comment reminded me of the Hawthorne Effect in psychology – the (unsurprising) finding that observing people changes their behaviour. This was observed in a study on variables affecting productivity in a factory (Hawthorne Works), which was initially taken as showing that pretty much all of the interventions increased productivity. Later explanations (labelled the Hawthorne Effect) centred on the fact that the workers, knowing they were being experimented on, paid increased attention to their environs and responded positively to any change. This can be a big problem in action research designs – where the enthusiasm of the researcher for the novel change stimulates the participants to do better over and above any effect of the planned intervention.
        Cheers,
        Matthew.

        Reply
