How can we know unknown unknowns?

By Michael Smithson


In a 1993 paper, philosopher Ann Kerwin elaborated a view on ignorance that has been summarized in a 2×2 table describing crucial components of metacognition (see figure below). One margin of the table consisted of “knowns” and “unknowns”. The other margin comprised the adjectives “known” and “unknown”. Crosstabulating these produced “known knowns”, “known unknowns”, “unknown knowns”, and “unknown unknowns”. The latter two categories have caused some befuddlement. What does it mean to not know what is known, or to not know what is unknown? And how can we convert either of these into their known counterparts?

[Figure: a 2×2 matrix crosstabulating “known” and “unknown” with “knowns” and “unknowns”, yielding known knowns, known unknowns, unknown knowns, and unknown unknowns]
Source: Adapted from Kerwin (1993) by Smithson in Bammer et al. (2008)

In this post, I will concentrate on unknown unknowns, what they are, and how they may be identified.

Attributing ignorance

To begin, no form of ignorance can be properly considered without explicitly tracking who is attributing it to whom. With unknown unknowns, we have to keep track of three viewpoints: the unknower, the possessor of the unknowns, and the claimant (the person making the statement about unknown unknowns). Each of these can be oneself or someone else.

Various combinations of these identities generate quite different states of (non)knowledge and claims whose validities also differ. For instance, compare:

  1. A claims that B doesn’t know that A doesn’t know X
  2. B claims that A doesn’t know that A doesn’t know X
  3. A claims that A doesn’t know that B doesn’t know X
  4. A claims that A doesn’t know that A doesn’t know X

The first two could be plausible claims, because the claimant is not the person who doesn’t know that someone doesn’t know X. The last two claims, however, are problematic because they require self-insight that seems unavailable. How can I claim I don’t know that I don’t know X? The nub of the problem is self-attributing false belief. I am claiming one of two things. First, I may be saying that I believe I know X, but my belief is false. This claim doesn’t make sense if we take “belief” in its usual meaning; I cannot claim to believe something that I also believe is false. The second possible claim is that my beliefs omit the possibility of knowing X, but this omission is mistaken. If I’m not even aware of X in the first place, then I can’t claim that my lack of awareness of X is mistaken.
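To see the combinatorics at a glance, here is a minimal sketch (in Python; my own illustration, not anything from Kerwin or the wider literature) that enumerates who the claimant, the unknower, and the possessor can be, and flags the combinations that would require the problematic self-attribution of a false belief described above:

    # Minimal sketch: enumerate claims of the form
    #   "CLAIMANT claims that UNKNOWER doesn't know that POSSESSOR doesn't know X"
    # and flag those that require the claimant to attribute a false belief to
    # themselves (i.e., claimant == unknower). The roles and labels are illustrative.
    from itertools import product

    PEOPLE = ("A", "B")  # oneself and someone else

    for claimant, unknower, possessor in product(PEOPLE, repeat=3):
        status = "problematic" if claimant == unknower else "plausible"
        print(f"{claimant} claims that {unknower} doesn't know that "
              f"{possessor} doesn't know X -> {status}")

Run as written, it prints all eight combinations; the four numbered above appear among them, with the first two flagged as plausible and the last two as problematic.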

Current unknown unknowns would seem to be claimable by us only about someone else, and therefore current unknown unknowns can be attributed to us only by someone else. Straightaway this suggests one obvious means of identifying our own unknown unknowns: cultivate the company of people whose knowledge-bases differ sufficiently from ours that they are capable of pointing out things we don’t know that we don’t know. However, most of us don’t do this. Instead, the literature on interpersonal attraction and friendships shows that we gravitate towards others who are just like us, sharing the same beliefs, values, prejudices and, therefore, blind spots.

Different kinds of unknown unknowns

There are different kinds of unknown unknowns, each requiring different “remedies”. The most important distinction is between matters that we mistakenly think we know about and matters that we’re unaware of altogether. Its importance stems from the fact that these two kinds have different psychological impacts when they are attributed to us and require different readjustments to our view of the world.

1. False convictions

A shorthand term for the first kind of unknown unknown is a “false conviction”. This can be a matter of fact that is overturned by a credible source. For instance, I may believe that a tomato is a vegetable but then learn from my more botanically literate friend that, botanically speaking, it is a fruit. Or it can be an assumption about one’s depth of knowledge that is debunked by raising the standard of proof—I may be convinced that I understand compound interest, but then someone asks me to explain it to them and I realize that I can’t provide a clear explanation.

What makes us vulnerable to false convictions? A major contributor is over-confidence about our stock of knowledge. Considerable evidence has been found for the claim that most people believe they understand the world in much greater breadth, depth, and coherence than they actually do. In a 2002 paper, psychologists Leonid Rozenblit and Frank Keil coined a phrase to describe this: the “illusion of explanatory depth”. They found that this kind of overconfidence is greatest in explanatory knowledge about how things work, whether in natural processes or artificial devices. They were also able to rule out self-serving motives as a primary cause of the illusion. Instead, the illusion of explanatory depth arises mainly because our scanty knowledge-base gets us by most of the time, we are not often called upon to explain our beliefs in depth, and even if we intend to check them, the opportunities for first-hand testing of many beliefs are very limited. Moreover, scanty knowledge also limits the accuracy of our assessments of our own ignorance—greater expertise brings with it greater awareness of what we don’t know.

Another important contributor is hindsight bias, the feeling after learning about something that we knew it all along. In the 1970s, cognitive psychologists such as Baruch Fischhoff ran experiments asking participants to estimate the likelihoods of outcomes of upcoming political events. After these events had occurred or failed to occur, the participants were asked to recall the likelihoods they had originally assigned. Participants tended to over-estimate how likely they had thought an event would be if it actually happened.

Nevertheless, identifying false convictions and ridding ourselves of them is not difficult in principle, providing that we’re receptive to being shown to be wrong and are able to resist hindsight bias. We can self-test our convictions by checking their veracity via multiple sources, by subjecting them to more stringent standards of proof, and by assessing our ability to explain the concepts underpinning them to others. We can also prevent false convictions by being less willing to leap to conclusions and more willing to suspend judgment.

2. Unknowns we aren’t aware of at all

Finally, let’s turn to the second kind of unknown-unknown, the unknowns we aren’t aware of at all. This type of unknown unknown gets us into rather murky territory. A good example of it is denial, which we may contrast with the type of unknown unknown that is merely due to unawareness. This distinction is slightly tricky, but a good indicator is whether we’re receptive to the unknown when it is brought to our attention. A climate-change activist whose friend is adamant that the climate isn’t changing will likely think of her friend as a “climate-change denier” in two senses: he is denying that the climate is changing and also in denial about his ignorance on that issue.

Can unknown unknowns be beneficial or even adaptive?

One general benefit simply arises from the fact that we don’t have the capacity to know everything. The circumstances and mechanisms that produce unknown unknowns act as filters, with both good and bad consequences. Among the good consequences is the avoidance of paralysis—if we were to suspend belief about every claim we couldn’t test first-hand we would be unable to act in many situations. Another benefit is spreading the risks and costs involved in getting first-hand knowledge by entrusting large portions of those efforts to others.

Perhaps the grandest claim for the adaptive value of denial was made by Ajit Varki and Danny Brower in their book on the topic. They argued that the human capacity for denial was selected (evolutionarily) because it enhanced the reproductive capacity of humans who had evolved to the point of realising their own mortality. Without the capacity to be in denial about mortality, their argument goes, humans would have been too fearful and risk-averse to survive as a species. Whether convincing or not, it’s a novel take on how humans became human.

Antidotes

Having taken us on a brief tour through unknown unknowns, I’ll conclude by summarizing the “antidotes” available to us.

  1. Humility. A little over-confidence can be a good thing, but if we want to be receptive to learning more about what we don’t know that we don’t know, a humble assessment of the little that we know will pave the way.
  2. Inclusiveness. Consulting others whose backgrounds are diverse and different from our own will reveal many matters and viewpoints we would otherwise be unaware of.
  3. Rigor. Subjecting our beliefs to stricter standards of evidence and logic than everyday life requires of us can quickly reveal hidden gaps and distortions.
  4. Explication. One of the greatest tests of our knowledge is to be able to teach or explain it to a novice.
  5. Acceptance. None of us can know more than a tiny fraction of all there is to know, and none of us can attain complete awareness of our own ignorance. We are destined to sail into an unknowable future, and accepting that makes us receptive to surprises, novelty, and therefore converting unknown unknowns into known unknowns. That unknowable future is not just a source of anxiety and fear, but also the font of curiosity, hope, aspiration, adventure, and freedom.

References:
Bammer, G., Smithson, M. and the Goolabri Group. (2008). The nature of uncertainty. In: Bammer, G. and Smithson, M. (eds.), Uncertainty and Risk: Multi-Disciplinary Perspectives. Earthscan: London, United Kingdom: 289-303.

Kerwin, A. (1993). None too solid: Medical ignorance. Knowledge, 15, 2: 166-185

Rozenblit, L. and Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 5: 521-562

Varki, A. and Brower, D. (2013). Denial: Self-deception, false beliefs, and the origins of the human mind. Hachette: London, United Kingdom

Biography: Michael Smithson PhD is a Professor in the Research School of Psychology at The Australian National University. His primary research interests are in judgment and decision making under ignorance and uncertainty, statistical methods for the social sciences, and applications of fuzzy set theory to the social sciences.

This blog post is part of a series on unknown unknowns, produced as part of a collaboration between the Australian National University and Defence Science and Technology.

Published blog posts in the series:
Accountability and adapting to surprises by Patricia Hirl Longstaff
https://i2insights.org/2019/08/27/accountability-and-surprises/

Scheduled blog posts in the series:
September 24: What do you know? And how is it relevant to unknown unknowns? by Matthew Welsh
October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim

29 thoughts on “How can we know unknown unknowns?”

  1. Hello Michael, many thanks for your great post, which I’ve been pleased to share with the knowledge management community (https://realkm.com/2019/10/11/how-can-we-know-unknown-unknowns/).

    The knowledge management (KM) field offers further antidotes to unknown unknowns through tools and techniques that allow people in organisations to share their experiences in dealing with surprises. This is done with the aim of helping others to either prevent the same surprises from happening again, or if that’s impossible, to better deal with them when they unexpectedly arise. Examples of useful tools and techniques include “lessons learned” and “communities of practice”. The popular “lessons learned” approach involves documenting failures and what was learnt from them and then making these lessons both accessible to employees and a reference for training activities in the organisation. A “community of practice” is a formal or informal network of people with a similar job focus or interests that regularly meets or communicates to share knowledge, including experiences of dealing with unknown unknowns. Communities of practice can operate either within organisations or across a number of organisations, and either locally or at wider geographic scales including internationally. Further information on KM tools and techniques can be found in the KM texts referenced below.

    However, the KM field isn’t currently adequately addressing one of the antidotes you list – inclusiveness. KM strategies are effectively engaging internal stakeholders in organisations, but not external stakeholders (for example as I argue in https://realkm.com/2019/07/08/getting-to-the-heart-of-the-problems-with-boeing-takata-and-toyota-part-2-current-approaches-to-km-arent-adequately-addressing-complexity/). Indeed, the KM community’s own decision-making processes are falling down in this regard (for example as I argue in https://realkm.com/2017/12/21/km-standard-controversy-can-the-km-profession-unite-in-support-of-the-new-standard/).

    Your post is very valuable in helping to further educate the KM community in regard to the importance of inclusiveness, as part of an overall approach to effectively dealing with unknown unknowns.

    Some notable KM texts:

    Dalkir, K. (2013). Knowledge management in theory and practice. Routledge.

    Rao, M. (2012). Knowledge management tools and techniques. Routledge.

    Uriarte, F.A. (2008). Introduction to knowledge management. ASEAN Foundation, Jakarta, Indonesia.

    Young, R. (2010). Knowledge management tools and techniques manual. Asian Productivity Organization. https://www.apo-tokyo.org/publications/ebooks/knowledge-management-tools-and-techniques-manual-pdf-2mb/

    • The concept of an unknown unknown certainly can include “unknowables”, as can known unknowns. I may believe, for instance, that until it occurs the time and manner of my demise are unknowable. In that case I’d be declaring a known unknown that I think is unknowable. If I believe that no-one can foretell when and how they will die, then I’ll consider anyone who thinks they can foretell this to possess a false conviction– i.e., an unknown unknown– that happens to concern what is or isn’t knowable.

  2. In a new paper in Environment & Planning A, I link unknown unknowns to present knowledge being rendered completely obsolete by cascading changes to beliefs, attitudes and behaviours made by diverse actors in response to – and in anticipation of others’ responses to – new developments. This requires NOT a mere revision and updating of the probabilities of known possibilities already residing within an ex-ante defined state space, but the complete destruction and reframing of the space. It can be done by combining qualitative & quantitative scenario techniques.

    Derbyshire, J. (2019) Answers to questions on uncertainty in geography: Old lessons and new scenario tools. Environment & Planning A (in press).

    Reposted from Twitter (by Gabriele Bammer)

    • This should be a very interesting paper, on a relatively neglected topic. Biologists who are trying to assess the biodiversity of an ecosystem are engaged in the construction of a state space, and I’ve taken some of their methods, heuristics, and intuitions into the study of how other people do this. In preliminary experiments I’ve found that in the absence of prior beliefs about the nature of the state space, people construct it in a similar (and sensible) way to biologists as they sample “species” from an environment. However, if they have prior beliefs or stereotypes about the state space these override all other heuristics.

      • interesting – on a related note, I’ve found that when people lack more concrete information, they tend to “fill in” the knowledge gaps with their assumptions/biases/etc. Which, in turn, leads them to make decisions based on those unfounded biases (not an optimal situation). I am looking forward to a world where people can understand the structure of knowledge and use that to identify directions for seeking more complete and more useful knowledge. For an environmental example… imagine studying an ecosystem; perhaps using “the food chain” as a kind of structure. If there is a missing link in the chain, you would know that there is something about that ecosystem you do not understand… and that would give you some idea about “where” to look to find the knowledge to fill that gap.

  3. Great post Michael, and a very interesting conversation.

    I would note that good interdisciplinary practices encourage most if not all of your recommended antidotes.

    Picking up on something Steve said, I wonder if an important type of unknown unknown involves not knowing that one thing we know something about affects another thing that we know something about. We might all, for example, have been aware that there was a cultural backlash among some people against migration and other social changes. And we also know that democracy depends on some degree of mutual respect and open communication. But we might not have imagined that the former could threaten the latter.

    This sort of unknown unknown — where we simply don’t imagine how one thing might encourage unexpected changes in another — might be the most important for planners to grapple with. My sense of history is that historical surprises generally stem from some interaction among phenomena studied in different disciplines. And if planners engage with disciplinary experts one at a time they are unlikely to come to know what they need to know. Disciplinary silos generate unknown unknowns. The antidote for this sort of unknown unknown would seem to be increased interdisciplinarity and a conscious effort to map how each phenomenon we study might affect others. This is no easy task, to be sure, but is probably more manageable than it might seem at first glance.

    • I agree that good interdisciplinary practices would include my “antidotes” – as would good practices within any discipline I’m acquainted with.
      I also concur with your observation that an important subcategory of unknown unknowns involves unforeseen relationships between things that we already know about. Medical drug research provides a rich set of examples, in the form of unanticipated “interactions” between medications. Testing for the efficacy of a drug on its own truncates our ability to know how it might affect or be affected by other drugs present in a person. So in addition to increased interdisciplinarity, we could add a recommendation that RCTs [randomised controlled trials] be extended to incorporate tests of these interactive effects, at least for the co-presence of commonly-used medications.

    • ‘where we simply don’t imagine how one thing might encourage unexpected changes in another’: That is what I meant when I said that unknown unknowns come about from ‘cascading changes to beliefs, attitudes and behaviours made by diverse actors in response to – and in anticipation of others’ responses to – new developments’. If you’re a government and you enact a particular policy, the policy essentially represents an inherent prediction about how people are going to respond to it. Since the aggregate response depends on people trying to anticipate and respond to other people’s anticipated responses, and so on over multiple levels, it becomes impossible to know what the outcome will be. The effect is an unknown unknown resulting from a policy created on the assumption that the outcome is knowable and predictable.

    • I’m reminded of a paper from early last year that offers interesting insights into good interdisciplinary practices that could facilitate the level of knowledge sharing needed to address this learning and understanding need. The paper examines two neighbouring disciplines that are highly influential in management and organization studies – innovation studies and project management research. The authors found that these research disciplines largely ignored each other’s contributions for several decades, but in recent years there has been greater cross-referencing and mutual recognition. They argue that two inter-related approaches are behind this trend: the adoption of meta-theories and community-building initiatives across disciplinary boundaries. Two meta-theories – theories of organizational learning and social practice theories of organizing – have recently influenced project management and innovation research and created bridges between the two disciplines. Further, joint community-building activities, such as conferences and workshops, have brought project management and innovation researchers together to share knowledge, debate ideas, and confront each other’s assumptions and agendas.

      Davies, A., Manning, S., & Söderlund, J. (2018). When neighboring disciplines fail to learn from each other: The case of innovation and project management research. Research Policy, 47(5), 965-979.

      https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3132518

      • Bruce – looks like a very interesting paper. I look forward to reading it!

        I’ve recently had a paper accepted for publication that provides a clearer and more direct method for integrating/synthesizing knowledge between disciplines: Wallis, S. E. (2020). The missing piece of the integrative studies puzzle. Interdisciplinary Science Reviews, (in press).

        If you (or anyone) would like a copy, please email me at: SWallis@ProjectFAST.org

        Another approach is “theory knitting” – one paper here: https://journals.sagepub.com/doi/pdf/10.1177/1356389015607712

        Key, however, is defining what knowledge “is.” We like to evaluate the “usefulness” of knowledge on three dimensions. 1 – Supported by data. 2 – Having a more coherent/systemic internal structure. 3 – Relevance/meaningfulness to the person/group and their situation. The higher the score on all three, the more useful the knowledge. If something is high in only one or another of those three dimensions, it may cause confusion – a pretense of knowledge. For example, Ohm’s law has great structure, and is supported by lots of data, but it is not relevant to most people (electrical engineers excepted). The prophecies of Nostradamus are supported by little data, have poor structure, but seem relevant to some people’s lives.
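        As a toy illustration of that three-dimension idea (this is my own sketch, not something from the papers mentioned in this thread, and the min() aggregation is my simplifying assumption), one could score an item of knowledge on data support, internal structure, and relevance, and let any weak dimension cap the overall score:

            # Toy sketch: score "usefulness of knowledge" on the three dimensions
            # described above, each on a 0-1 scale. Items and scores are made up
            # purely for illustration; using min() is an assumption, chosen so that
            # one weak dimension caps the overall score.
            def usefulness(data: float, structure: float, relevance: float) -> float:
                return min(data, structure, relevance)

            examples = {
                "Ohm's law (for a non-engineer)": (0.9, 0.9, 0.1),
                "Nostradamus's prophecies": (0.1, 0.2, 0.7),
            }
            for name, dims in examples.items():
                print(f"{name}: usefulness = {usefulness(*dims):.2f}")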

        I suspect that part of the problem in connecting knowledge between diverse groups/organizations/fields is a sort of chicken-and-egg problem. Each side sees the other in ways that are rather fuzzy/simplistic. So, neither side really understands what the other has… or needs… or wants. For a (dare I say) simplistic example, one person might look at another and say something like, “You are human, therefore you should adopt my religious beliefs” (yes – I view religious beliefs as a kind of knowledge… albeit low structure, low data, high meaning/relevance).

        A potential approach would be to ask representatives of each group to create a practical map – a causal knowledge map – including most or all of the concepts that make up their understanding (the known-knowns of each group are the unknown-knowns of the other group). Then, the two maps may be compared to identify overlaps. Integrating/synthesizing/knitting those maps often results in an increase of useful knowledge.

        Also, while different groups often have different versions/views of the world, we rarely find situations where those views directly contradict one another using the practical mapping process. Mainly because our structural conventions pull people out of their traditional (confusing – unknown unknown?) terminology.

        Thanks,

        Steve

        • Hi Steve, thanks for your reply. I’m very interested to read your paper, so I’ll send through an email.

          In regard to the knowledge mapping approach you mention at the end of your reply, causal knowledge maps would be a very useful input, but they won’t surface all of the knowledge that people hold. This is because, as Dave Snowden advises, “we always know more than we can say, and we will always say more than we can write down” (see my comment on Matthew Welsh’s post at https://i2insights.org/2019/09/24/knowledge-and-unknown-unknowns/#comment-29566).

          Narrative techniques are an effective way of accessing and revealing more of people’s knowledge, and a good example is anecdote circles (http://www.anecdote.com/pdfs/papers/Ultimate_Guide_to_ACs_v1.0.pdf). An example of the use of anecdote circles in conjunction with knowledge mapping, in this case information flows mapping rather than causal knowledge mapping, is our use of them as part of the knowledge strategy process for Australia’s natural resource management organisations (https://realkm.com/2016/07/29/case-study-a-knowledge-strategy-process-for-natural-resource-management-organisations/). Interdisciplinary knowledge sharing and learning is a different context to the development of a knowledge strategy, so a different process design would be needed, but this example gives a sense of how narrative enquiry could form a key part of that design.

          • Bruce – I very much agree that it is difficult to capture tacit knowledge. We have also been developing methods to surface and clarify that knowledge – from a causal mapping perspective. Of course, tacit knowledge is not perfect either; so, there is an iterative process that includes putting the knowledge into practice.

            Hmmm…. I’m thinking that among all the techniques we’ve been talking about, some may be ‘objectively’ better than others. So, it would be interesting to develop a scale of knowledge representation effectiveness.

            However, I’m also starting to think in terms of context – there is good reason to use a technique that is subjectively more comfortable for a client. What if, early in a client engagement, consultants were to conduct an assessment of “how” the client typically/commonly/comfortably generates, represents and shares its knowledge? Is it by formal reports, PowerPoint, magical thinking, anecdotes, stories, statistics, concept maps, causal maps? What is their “culture of knowledge”? Are questions encouraged at meetings or only asked behind closed doors? Understanding that might make it easier to help them (first) improve their existing process, then to step up to the “next level” of knowledge.

      • Those are interesting and worthwhile points, Bruce, and the paper you’ve cited certainly is relevant here. Mikael Klintman has a recent book out on “knowledge resistance”, in which he attempts a general overview of why and when people resist knowledge or insights available to them. Many of the points he raises could be applied productively to the question of when disciplines do and don’t learn from each other. While (in my opinion) his treatment isn’t especially deep, it is balanced and he does bring together various explanations and questions that bear on this issue. For example, on the one hand he presents some persuasive arguments that knowledge resistance can be functional, both at the individual and socio-cultural levels. On the other hand, he also poses questions about when it is dysfunctional, e.g. “If we include facts and knowledge into what distinguishes ‘tribes’– including modern ones– does this mean that we have to accept that every tribe has its own truth?” (pg. 82).

        Reference:
        Klintman, M. (2019). Knowledge Resistance: How We Avoid Insight from Others. Manchester University Press.

        • Many thanks Michael for your reply. Klintman’s book looks to be essential reading for me, and his question about truths that you quoted has stimulated much thought. So much so that when my comment passed a thousand words in length I decided it was best to write it up as an article, which I’ve just published at https://realkm.com/2019/10/30/case-study-how-polarized-debates-can-be-the-result-of-rational-deliberation-and-how-they-can-be-resolved/

          To summarise the article, no, we don’t have to accept that every tribe has its own truths, particularly when those truths conflict with established science. However, we do have to accept that, as scientifically unsound as they may be, those truths are likely to be the product of rational thought processes. When we accept that polarized views can be the result of rational deliberation by rational actors, it changes how we frame debates, which leads to the identification and implementation of better solutions. We move from criticism and blame to looking at how we can cooperatively assist groups who hold truths that are radically opposed to our own to broaden their awareness and understanding and effectively engage in decision-making.

          My evidence for this comes from a recent University of Pennsylvania paper (Singer et al. 2019). The authors conclude that “even though group polarization looks like it must be the product of human irrationality, polarization can be the result of fully rational deliberation with natural human limitations.” Through the use of an agent-based model of group deliberation, they found that group coherence is a rational memory-management strategy for memory-limited agents. This finding appears to at least in part parallel Klintman’s observations.

          In the article, I put forward a case study from my past work that clearly illustrates this rational polarization. The case study also supports the findings of the paper I cited in my previous comment (Davies, Manning, & Söderlund 2018). These findings are that interdisciplinary learning can be facilitated by the adoption of meta-theories and community-building initiatives across disciplinary boundaries.

          A couple of additional notes in regard to aspects of the article. The first is that in the article I state that I’m not a fan of blackbox approaches because I think they work against the degree of understanding and learning needed for adequate ownership over the outcomes of a decision. However, I note that there’s a forthcoming article in this series on blackboxing unknown unknowns, so I may well be about to have my own truths in this regard challenged. The second is that the problem-solving communication skills that I mention as having been learnt nearly 30 years ago at the University of Queensland were taught by Bob Dick, who I’ve seen comment on this blog. I’m most grateful to Bob for this teaching, as these skills have been invaluable throughout my career.

          Davies, A., Manning, S., & Söderlund, J. (2018). When neighboring disciplines fail to learn from each other: The case of innovation and project management research. Research Policy, 47(5), 965-979. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3132518

          Singer, D. J., Bramson, A., Grim, P., Holman, B., Jung, J., Kovaka, K., … & Berger, W. J. (2019). Rational social and political polarization. Philosophical Studies, 176(9), 2243-2267. http://www.pgrim.org/articles/rationalpolarization.pdf

          • very good example! Reminds me that there have been many conflicting views (and polarized groups) in the history of science. In today’s world, we need approaches like this to resolve those issues relatively quickly.

            • Bruce, your RealKM article presents a very nice example underpinning your main arguments regarding polarization and its potential functions. Back in the 1980s Donald T. Campbell had a similar insight and generated a productive line of discussion about how best to progress our understanding of complex problems, when he claimed that such progress is most rapid for a “disputatious community of scholars”. By this he meant rigorous intellectual debates, of course, rather than ad hominem attacks.

            As long as researchers could continue high-level debates from multiple perspectives, Campbell believed, the best ideas eventually would survive the most severe criticisms. Even if a consensus never was attained, the resulting collection of ideas would have great value. He was a strong advocate of evaluating policies and programs from a variety of frameworks, but he did not descend into relativism. Instead, he argued for a type of “evolutionary epistemology”.

            Reference:
            Campbell, D. T. (1984). Can we be scientific in applied science? In R. F. Connor, D. G. Altman, & C. Jackson (Eds.), Evaluation Studies Review Annual (Vol. 9, pp. 26-48). Newbury Park, CA: Sage.

            • Many thanks Michael for the introduction to Donald Campbell’s work. I’m having difficulty tracking down that paper, so I was wondering if it might be possible to be emailed a copy? Gabriele Bammer is very welcome to share my email address.

              From summaries of Campbell’s work that I’ve located, his approaches in regard to the evolution of knowledge through the selection and refutation of ideas and the need to involve the widest possible range of stakeholders in the generation of those ideas appear to have some parallels at least to Henry Chesbrough’s concept of open innovation, to the deliberative / direct democracy approaches, and also to what is now being called co-creation (which is an approach that I’ve been using since first being exposed to it nearly 30 years ago – see https://realkm.com/2019/04/26/case-studies-in-complexity-part-2-ipswich-heritage-program/). I wonder what are your thoughts on these parallels?

              It’s good to raise the issue of relativism, because I consider it to be a risk in both deliberative / direct democracy and co-creation. In a paper in the recent Evidence & Policy special issue on co-creation, Graeme Nicholas and colleagues use Critical Systems Heuristics as the basis for developing a framework of the critical heuristics of co-creation in research. These heuristics have the potential to assist in preventing relativism, for example by asking what should be the key measures of success and upon what core values and assumptions should a deployment of a co-creation approach in research be based.

              What I like about Nicholas and colleagues’ heuristics is that they seek to establish values and boundaries without stifling the openness that is vital to successful co-creation. As part of this, Nicholas and colleagues also propose that the heuristics are used for “critical reflection and improved practice … rather than … simply as design or evaluation criteria.” For example, in regard to measures of success, they state that “We are not suggesting assessment against fidelity to set criteria (as in a process evaluation); rather, we suggest that a dynamic and repeated use of the question of success criteria is at the heart of co-creation.”

              However, it would be very interesting to see what Campbell’s evolutionary epistemology approach could bring to the dialogue on the heuristics that Nicholas and colleagues are hoping to encourage.

              Looking more broadly than just research, frameworks such as that proposed by Nicholas and colleagues are very important, because in the face of controversial and unscientific ideas across society, the current reaction is to act to stifle those ideas and the people putting them forward. All that this does is stoke and further entrench polarization. A much more successful approach (and I say that because I’ve done it) is to engage those people and their ideas in dialogue and constructive debate, so that not only do the best ideas rise to the top as Campbell advises, but that the proponents of the rejected ideas understand and accept this rejection. This can be done for even the most extreme ideas, as I discuss in https://realkm.com/2018/05/25/the-intellectual-dark-web-is-nothing-new-but-highlights-a-critical-issue/

              Chesbrough, H. (2012). Open innovation: Where we’ve been and where we’re going. Research-Technology Management, 55(4), 20-27.

              Nicholas, G., Foote, J., Kainz, K., Midgley, G., Prager, K., & Zurbriggen, C. (2019). Towards a heart and soul for co-creative research practice: a systemic approach. Evidence & Policy: A Journal of Research, Debate and Practice, 15(3), 353-370. https://www.ingentaconnect.com/contentone/tpp/ep/2019/00000015/00000003/art00003

              • Alas, my hard copy of Campbell’s chapter went missing a while ago (a bibliopathic colleague absconded with it). However, the ANU library seems to have the entire Evaluation Studies Review Annual set of volumes. So when I return from China (early December) I’ll see if I can hunt it down and scan it.

  4. Thanks Michael, Thanks Dr Steve
    I’m often totally lost when it comes to the Philosophy of Knowledge, but can it not be that the unknower and the possessor of the unknowns are identical, and would it not be meaningful in that case to assert that “A doesn’t know that A doesn’t know X”, where X is not something specific? True, A would have to know (at least how to name) X in order to assert that A doesn’t know X, and thus A would know that A doesn’t know X, but what if the something that is not known escapes knowledge to the point that it cannot be assigned an X? What if X cannot currently be identified? Would it not be meaningful and true to say that “A claims that A doesn’t know the things that A has yet to identify”? While this may appear to be a tautology, it would also reinforce Dr Steve’s example based on “categories of knowledge related to Workers, Raw Material, Products, Customers, and Natural Environment”. What if, as a company executive, I failed to realise that one of the missing categories might be “Executives”? That might mean that I now know that I don’t know how the recommendations are going to affect me personally, but if I hadn’t identified Executives as a category of impact, I would still be in the position of unknown unknowns with regards to the impact upon my own role …? Maybe humility, and an acceptance of Aristotle’s comment that “The more you know, the more you realise you don’t know”, is indeed the way forward. Especially when it is noted that you don’t know what you don’t know, until you do …

    • Leila, I agree that this meta-cognitive material can be slippery stuff. Let’s see what happens to “A claims that A doesn’t know that A doesn’t know X” when I use your example, so it becomes “I don’t know that I don’t know that ‘Executives’ is a missing category”. This statement still doesn’t make sense, because it suggests that instead I actually do know that ‘Executives’ is a missing category. I think what you’ve done is to put it in the past tense, i.e., “I didn’t know that I didn’t know that ‘Executives’ was a missing category”, in which case it does seem plausible. Often the primary path to resolving our unknown unknowns is with hindsight.

      • Hi Mike.
        Great post. Your response here is preempting my post (due on the 24th) where I add to the confusion with a discussion of how the 2×2 structure isn’t quite as clear as we sometimes pretend. In particular, focusing on the fact that things regularly move between these categories and not just in the directions we think. We can realise that something was missing as you describe above but we can also forget or fail to recall things that we were aware of previously. Whether a particular thing is known or unknown (etc), can thus be very much dependent on the context in which the ‘knower’ finds themselves.

        • Hi Matt,
          Great to hear that you’re going to be posting on the 24th– I’ll look forward to that. I agree that there are fuzzy boundaries in the 2×2 structure and often a fair amount of “drift” between its categories. As you’d know, recall is strongly influenced by contextual cues so that an unknown unknown in one situation can become, e.g., a known unknown or an unknown known in another.

    • Hi Lelia – yes – we can recognize that something is missing if the assigned category/box is empty, but it is much more difficult to recognize that we are missing a category! What we have found effective is to look at the structure of knowledge – a very simple example here: [LINK REMOVED as no longer operational – methodspace.com making-sense-knowledge-explosion-knowledge-mapping]

      Basically, we can create a map (e.g. boxes and arrows) to represent our knowledge (theory, policy model, etc). On that map, we know that there should be at least two arrows pointing at each box from other boxes. So, wherever a box does not have at least two arrows pointing to it, we know that there is something missing (a kind of “gap analysis”) and we know to start looking (although not necessarily what we will find).
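      A minimal sketch of that gap-analysis rule (my own illustration of the rule as stated, with a made-up map) is to list the arrows between boxes and flag any box with fewer than two incoming arrows:

          # Sketch of the gap-analysis rule above: flag any box (concept) that has
          # fewer than two arrows pointing at it. The map is made up for illustration.
          from collections import defaultdict

          arrows = [  # (cause box, effect box)
              ("training", "worker skill"),
              ("worker skill", "product quality"),
              ("raw material quality", "product quality"),
              ("product quality", "customer satisfaction"),
          ]

          incoming = defaultdict(int)
          boxes = set()
          for cause, effect in arrows:
              boxes.update((cause, effect))
              incoming[effect] += 1

          for box in sorted(boxes):
              if incoming[box] < 2:
                  print(f"Possible gap: '{box}' has {incoming[box]} incoming arrow(s).")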

  5. Good stuff – always good to keep one’s mind open to new learning!

    A few related points… First, it is often more effective to think in terms of “useful knowledge” (rather than simply how much knowledge we have). Second, using a “practical mapping” approach, it is possible to objectively identify “blank spots” on our maps – places where we can focus our efforts to identify our unknowns. This improves our ability to know what we don’t know… and improve our level of useful knowledge. Third (for those who are serious meta-thinking geeks) we’ve made some progress in identifying unknown unknowns by using an “orthogonality” perspective. Way-too-simply (the way-too-long paper is under submission now), let’s say there are a few categories of knowledge accepted as useful (let’s say that these are categories of knowledge related to Workers, Raw Material, Products, Customers, and Natural Environment). Now, let’s say that you (as a company executive) are handed a report with recommendations for changing how your company is run. You look at the report and see that there is lots of information about how things will change in every category but one… there is nothing about how the changes will change the Natural Environment. Bingo – you have some idea of where to look for your unknown unknown knowledge. Of course, you should choose your categories carefully – the more categories there are (and the more abstract they are), the less likely you are to miss some unknowns.
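    A rough sketch of that category check (my own toy illustration, reusing the example categories above; the report contents are invented) could be as simple as comparing the categories a report actually addresses against the agreed list and treating anything untouched as a place to look:

        # Toy sketch of the category-coverage check described above.
        CATEGORIES = {"Workers", "Raw Material", "Products", "Customers", "Natural Environment"}

        report_sections = {  # categories the (invented) report actually discusses
            "Workers": "Shift patterns will change ...",
            "Raw Material": "New supplier contracts ...",
            "Products": "Two product lines retired ...",
            "Customers": "Pricing changes for key accounts ...",
        }

        for category in sorted(CATEGORIES - set(report_sections)):
            print(f"Nothing in the report about '{category}' - somewhere to look for unknown unknowns.")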

    • Steve, you’ve raised some interesting points. Thinking in terms of “useful” knowledge can indeed be effective, but only if you already know enough to have a fairly accurate (and suitably inclusive) definition of “usefulness”. That said, the observation you’ve made about choosing your categories carefully bears on one of my favourite kinds of unknowns: Sample space ignorance. This is when we don’t have a complete list of all the possible states or outcomes. Examples of this are a zoologist who is trying to identify the species of animals in a heretofore unexplored environment, or a software developer trying to identify all of the bugs in a complex piece of software.
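      For a concrete feel for how that zoologist might put a number on the species not yet encountered, here is a small sketch of one standard estimator from the ecology literature, the bias-corrected Chao1 richness estimator (the comment doesn’t name any particular method, so this choice and the sample data are my own):

          # Sketch: bias-corrected Chao1 estimate of total "species" richness, a
          # standard ecological tool for gauging how many types remain unseen after
          # sampling; the same idea is sometimes applied to counting software bugs.
          # The sample below is invented for illustration.
          from collections import Counter

          def chao1(observations):
              counts = Counter(observations)
              s_obs = len(counts)                             # types seen at least once
              f1 = sum(1 for c in counts.values() if c == 1)  # seen exactly once
              f2 = sum(1 for c in counts.values() if c == 2)  # seen exactly twice
              return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

          sample = ["ant", "ant", "beetle", "moth", "moth", "moth", "wasp", "spider"]
          print(f"Observed: {len(set(sample))} types, Chao1 estimate: {chao1(sample):.1f}")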

      • Mike – Good points. For usefulness, our knowledge should consist of measurable concepts/variables and we should understand the causal connections between them.

