Accountability and adapting to surprises

By Patricia Hirl Longstaff

We have all been there: something bad happens and somebody (maybe an innocent somebody) has their career ruined in order to prove that the problem has been fixed. When is blame appropriate? When is the blame game not only the wrong response, but damaging for long-term decision making?

In a complex and adapting world, errors and failure are not avoidable. The challenges decision-makers and organizations face are sometimes predictable but sometimes brand new. Adapting to surprises requires more flexibility, fewer unbreakable rules, more improvisation and deductive tinkering, and a lot more information about what’s going right and going wrong. But getting there is not easy because this challenges some very closely held assumptions about how the world works and our desire to control things.

Let’s not kid ourselves. Sometimes people do really dumb things that they should be blamed for. What we need is to be more discriminating about when finding blame and accountability is appropriate. Blame is often appropriate where known dangers have been ignored. But it may not be appropriate as a reaction to surprises such as black swans (possible but unlikely events), unknown unknowns (never happened before and not predicted), and problems that have emerged from underlying processes over which the person in charge has no control.

Whenever someone is blamed in a modern organization it becomes a story that is told and retold in an effort to understand its meaning. In some cases, the energy it takes to fix and apportion blame has little payback and is diverted from processes that would lead to future adaptation. The people in these systems often try to resist similar surprises by creating new rules or constraints on the system – tragically, these are often constraints that will rob the system of resilience in the long run. For example, making more rules for the people who have to deal with surprises reduces their ability to adapt.

Perhaps more importantly, outside stakeholders expect accountability. Everybody rightly expects that people who cause problems because they are lazy or incompetent or corrupt should be held accountable – or blamed and punished – in order to make sure the organization is working properly. There is an assumption that one bad cog caused the problem, so if you get rid of it the whole machine will work perfectly again – it’s just a matter of finding the bad cog. But maybe it’s time to re-examine the idea of causality in complex organizations that have to operate under high uncertainty.

Engineers know that their technical systems can be full of surprises and in need of resilience strategies. They conclude that management in these systems requires:

“…experience, intuition, improvisation, expecting the unexpected, examining preconceptions, thinking outside the box, and taking advantage of fortuitous events. Each trait is complementary, and each has the character of a double-edged sword” (Nemeth 2008: 7).

This is consistent with modern definitions of human intelligence. Smart people exhibit dynamic behaviors in the face of surprises, adapt to their environment and learn from experience (Sternberg 2002). That makes their actions somewhat unpredictable. And if they aren’t following standard operating procedure and there is a bad outcome they might expect to be the next victim of the blame game. Note also that all of these strategies require that managers really know what’s going on. If their people do not report changes or unexpected outcomes because they are afraid they will be blamed and punished, it will make surprises almost inevitable.

Some will be surprised to learn that it is often difficult (or impossible) to pinpoint one cause for surprises or malfunctions in complex technical or human systems. A lengthy investigation by independent parties is likely to come up with a list of things that contributed to the incident. Many of these things will indicate problems with the system and not with individuals in the system. But if it is the system, then is the person in charge of the system at fault? Who is accountable? We often demand accountability because we think it will improve performance. But if the

“…accounting is perceived as illegitimate, … intrusive, insulting, or ignorant of the real work, then the benefits of accountability will vanish or backfire. Effects include decline in motivation, excessive stress and attitude polarization…” (Woods et al., 2010: 226).

They also include defensive posturing, obfuscation of information, protectionism, and mute reporting systems.

Accountability can be seen as forward-looking while blame is backward-looking. Error in complex adapting systems is inevitable, blame is not. We need to reconsider internal and external blame games for more effective decision making.

What has your experience been? Do you have additional suggestions for identifying when blame is and is not appropriate? What ideas do you have for learning from surprises and maintaining accountability in complex adapting systems?

References:
Nemeth, C. P. (2008). Resilience Engineering: The Birth of a Notion. In: E. Hollnagel, C. P. Nemeth and S. Dekker (eds.), Resilience engineering perspectives. Vol. 1: Remaining sensitive to the possibility of failure, Ashgate Publishing: Surrey, United Kingdom, and Burlington, Vermont, United States of America.

Sternberg, R. (ed.). (2002). Why Smart People Can Be So Stupid. Yale University Press: New Haven, Connecticut, United States of America.

Woods, D., Johannesen, L., Cook, R. and Dekker, S. (2010). Behind Human Error. 2nd Edition, CRC Press: Boca Raton, Florida, United States of America.

Biography: Patricia Hirl Longstaff is Senior Research Fellow at the Moynihan Institute of Global Affairs, The Maxwell School of Citizenship and Public Affairs, Syracuse University, USA. Her research has focused on the resilience of institutions when confronted with unknown unknowns. Current research also includes artificial intelligence.

This blog post is the first of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

Scheduled blog posts in the series:
September 10: How can we know unknown unknowns? by Michael Smithson
September 24: What do you know? And how is it relevant to unknown unknowns? by Matthew Welsh
October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim

61 thoughts on “Accountability and adapting to surprises”

  1. Thank you, Patricia, for your blog.
    Your blog brings up lots of thoughts about accountability. Many years of my life have been dedicated to the railway industry. From a practitioner’s point of view, any industry which involves risk to the general public should have rules and regulations. When events happen, you have to react quickly, and the rules and regulations help you to do this. Also, a team leader is essential to take responsibility for the team’s actions and avoid delays.
    Before events happen, there is a need to continuously look for signs that can give clues that events may happen, and at this stage knowledge, imagination, information and communication are essential. Link signs together in different scenarios, and make rules on how to defend against every scenario. This, in case events do happen, will help to deal with the situation in the quickest and most productive way.

    • Thank you, Inara. I feel better knowing that people in the railroad industry think ahead! I imagine that sometimes the technology, the people, and weird situations all combine to make bad things happen. Sorting it all out can take time, and we hope that the people closest to the problem do not simply get blamed. I like planning in advance for “what if” things, but sometimes they are just surprises. That is where your knowledge, imagination and communication will come in handy. Thanks again.

  2. Very interesting reflections, Patricia, on how to think more deeply about what accountability means in this increasingly VUCA (volatile, uncertain, complex and ambiguous) world. Your arguments reminded me of the importance of psychological safety (Amy Edmondson, Harvard) and vulnerability (Brené Brown, who first started her research on shame), key strategies for enabling learning organisations that foster connection, creativity and engagement. These concepts are well aligned with Google’s Project Aristotle, which discovered that one of the top factors accounting for teams that perform at their best is psychological safety: this means that people not only make more mistakes but are also willing to discuss them publicly and learn together. It also entails enabling vulnerability, the emotion we experience when we feel uncertainty, risk and emotional exposure. Maybe it’s time to revisit what being accountable is, including co-accountability in environments where interdependence is the way problems are defined and solutions co-created.

    • Yes! Let’s do it. Defining “accountability” (the ability to account) as applying to more than one person or group where they are part of an interdependent organization is a goal worth reaching for. I really like the ideas of safety and vulnerability. You are not going to take risks if your career is at risk. So the big question is: is this interdependent organization going to account for any failure to the world? How can you do this in short sentences they can use in news reports? Thanks!

      • I would think that the way to forge this is to co-create a safe holding place for a group of organisations and/or leaders willing to discuss failure and identify ways forward to co-create new ways of being accountable, together. The solution is probably a set of prototypes developed by those who dare create a new path and with support of a funder or group of funders that also dare.

        • I’m in! It is not just academics but real organizations and/or leaders who are willing to defend people who fail when that failure helps move everybody along. The hard part is getting them to defend people in public – where everyone wants a scapegoat. So teaching the public this lesson looks to be equally important. Thanks

          • Yes, I am thinking more widely, not just researchers. Let’s have this idea on our radar and see if we find institutions willing to engage. Thanks!

  3. Thanks for your interesting post. The idea of blaming somebody, or as you said an innocent somebody, in a failure situation reminds me of the way we think about problem-solving (in this case the failure is the problem). We tend to ignore the fact that the situation has happened within a system and needs to be analysed with a systems thinking mindset. Blaming one person and putting all the responsibility for the failure on him/her means we ignore the real drivers of the problem within the organization. It is the common ‘Fixes that Fail’ archetype (more in Braun, 2002, The System Archetypes), which leads to experiencing the same failure again and again and blaming individuals again and again.

    When facing unknowns and complex situations, decision making should be based on collective knowledge and happen within a reflective cycle rather than as individual decision making; in this case, failure is a part of the learning process (if the reflection happens).

    If we can find somebody to blame, it might be because the system is designed in a way that one person is responsible for the consequences. I don’t think this system design will work efficiently in facing unknowns. So we need to shift from the question of “When is blame appropriate?” to “Why does the system need blame?” Why is there a need to find a person and blame him/her and forget about the drivers? Focusing too much on when blame is appropriate might feed our biases as experts to think we are always right (even in failures) and prevent proper reflection.

    • Fateme, several good points here. Several years ago I learned from a member of the Royal Navy (who is an Australian) that the phrase “not on my watch” comes from ships where pretty much everything was within hearing distance of the person on watch. So much has changed and yet we still hold a person who is “in charge” to be blameworthy. Unless they set up the system that made it fail, maybe we should move on to something like the “drivers”, as you say.

  4. Hi Patricia, thanks for your really insightful blog.

    The thing that really jumped out at me was the double-edged sword quote: the characteristics necessary to manage unknowns and surprises also, as a consequence, make people somewhat unpredictable. I agree with your comment that “all of these strategies require that managers really know what’s going on”. I was involved in a couple of projects where we were taking some risks and doing things that hadn’t been done before. For us – in our very diverse research team – it wasn’t so much the managers as the project sponsors who were really key. We kept them closely informed on the projects, and this really helped when we did get surprised and when things went wrong – they were able to give the projects a bit of protection.

    We have recently gone back and had a good look at these projects, looking at the different things that went wrong. We used the Integration and Implementation Sciences framework (Bammer 2013) to try to ‘diagnose’ the things that went wrong. It was insightful to describe each problem and then try to understand what it was a failing of. We found it was a mixture of systems and individuals, and often hard, as you say, to pinpoint exact causes, but the framework gave us a structure for analysing what went wrong.

    Regards, Melissa

    • Melissa, glad you found a way to diagnose what went wrong. What did you do to (for?) the people who were found to have had something to do with the failure? The US military has something called an “after-action report”. It can be important for analysis but sometimes the people are not named because it would kill their careers. In your case, it may be difficult to find some sort of “fault” among individuals in such a diverse group. Unless they were lazy (in which case they should be blamed) they should have some sort of protection from blame. Any idea of how to do this?

  5. Dear Pat

    Thanks for your post. It reminds me of the difference between air crash investigations, where the aim is “improving aviation safety and not to apportion blame” (https://theconversation.com/heres-how-airplane-crash-investigations-work-according-to-an-aviation-safety-expert-113602), and the somewhat more adversarial approach to examining medical errors, reflecting a need to defend against a possible malpractice action. (But see ‘The Checklist Manifesto’ [2009] by surgeon Atul Gawande for a more optimistic account of how medicine might learn from mistakes.)

    Following on from the previous discussion on safe failing, the search for total organisational efficiency may well be as inefficient as the search for someone to blame when things go wrong. The contemporary preoccupation with productivity and eliminating waste has many strengths but several weaknesses. Dictating a process or procedure as not simply ‘the best way’ but ‘the only way’ removes options for alternatives and may misrepresent the complex as simple. In a situation where there are unknowns, inflexibility risks trammeling the way forward for people who know they work in unpredictable contexts.

    In seeking to avoid duplication and inefficiency, today’s organisations may also be undermining resilience. Redundancy is only redundant when things go as expected: there’s a reason why people talk about ‘a belt and braces approach’ to keeping up trousers! Preparing for the advent of the unexpected, let alone of unknown unknowns, allows unshackled organisations to show their strengths.

    Adaptability may well be a function of slack in the system; a little bit of play in the constraints upon people and the things they do. Perhaps one of the hidden costs of a blame-based commitment to conformity is a reduction in creativity and flexibility. The trick may be to combine more freedoms in some areas with checklists retained for processes that determine life or death — like flying an aircraft, or open-heart surgery.

    • And even flying an aircraft and open-heart surgery can require a bit of creativity when dealing with a complex system. But of course, you are right: some things do require some ability to move around. And you are SO right about resilience. Efficiency is often the enemy of resilience. I have written about this and it is not without controversy. Do you decide that the organization will fail because competitors are more efficient, or let it fail when it cannot bounce back due to lack of redundancy? I guess the answer may be to decide the odds of unknowns. But those are getting bigger all the time. So….

    • Hi Lelia, I worry that the no-blame approach in aviation won’t survive the current US Department of Justice criminal investigations into the recent Boeing 737 MAX disasters. While Boeing has certainly made mistakes that it should never have made, I’m confident that it didn’t intend to do the wrong thing; rather, I argue that it didn’t successfully manage complexity, as I discuss in https://realkm.com/getting-to-the-heart-of-the-problems-with-boeing-takata-and-toyota/

      • Dear Bruce
        I get the impression that I rushed in where angels would fear to tread! I agree that the complexity located at the interface of technology and human decision making is a potentially fatal complicator of activity in both aviation and medicine. But of the two, to date, aviation has made the better job of untangling and publicising the impact of unknown unknowns at the point where fatalities occurred, and the unknowns became visible. Great work, though, on the chilling tale of Boeing 737 MAX. As someone who flies too much, thank you!

  6. Blame is an interesting subject for discussion about innovation in public policies. I argue that an experimental government changes how we think about failure (Ansell and Gayer 2016, Peters 2018). Usually, admitting failure means becoming a scapegoat and the centre of the blame game. But in order to accelerate learning and deal with uncertainty, there is a need to allow for learning from failure when exploring possible new ways of addressing complex problems (wicked problems).

    One of the ways that I have attempted to work with this premise is to reframe (good) ideas as testable hypotheses. The problem with ideas is that even the good ones often fail when they are confronted with reality. In practice, ideas are never fully formed, but need to work in and adapt to a dynamic system.

    The process of policy design needs to be continually updated and adjusted, and each action would be seen as an opportunity to learn more about how to adapt to changing circumstances. Public policies become hypotheses, and management actions become experiments to test those hypotheses (testing in the action process itself) (Folke et al., 2005). The iterative updates associated with the generative experiments involve a constant negotiation to move towards a solution that satisfies the different stakeholders. It is unlikely that a generative experiment will advance without a certain degree of shared agreement on the problem itself and the desirability of learning about it. Bos, Brown, and Farrely (2013) argue that a shared learning agenda is an essential starting point for a public policy experiment.

    Governments are using a wide range of experimental methodologies as an important strategy to address environmental challenges and construct solutions in times of uncertainty and divergent interests (Laakso et al. 2017; Ansell and Bartenberger, 2016, Voß and Schroth 2018; Jordan et al. 2018; McFadgen and Huitema 2018).

    • There are starting to be some policy recommendations in line with your thinking. Wired Magazine (not a technophobe) published an opinion piece that suggested algorithms be subject to some sort of performance standard, like medicines that are tested before they become available to people. Do they actually work for what they are supposed to do? And are they safe – do they work with other code? For other public policy, it is said to be bad if regulation changes before people get used to it. Maybe the sort of policy testing that you suggest (like testable hypotheses) will take some of the uncertainty away and foreclose the opportunity for a blame game.

  7. Patricia’s attention to surprises is a welcome addition to any discussion about unknown unknowns—Surprise and ignorance go hand in hand. However, there’s a blind-spot in her account, as often is the case when people think about unknowns. This blind-spot (which I’ve written about on several occasions) is the tendency to think of unknowns as unremittingly negative.

    Absent from Patricia’s account is anything about adapting to good surprises. These do happen, and people have to adapt to them too. In fact, most of us are in the business of producing good surprises for each other. There are “occasioned” surprises such as gift-giving or surprise-parties, and “unoccasioned” ones such as the surprises that are the backbone of much entertainment (plot-twists in novels or movies, for instance). There are “windfall” surprises, such as an unexpected pay rise or award, an out-of-the-blue visit from a long-absent friend, or a longed-for pregnancy.

    Moreover, many of us cherish these surprises and resent it when they are denied to us. Were you irritated when someone let slip the contents of the Christmas gift you were going to receive? Do you hate encountering plot-spoilers before you’ve seen the movie? Did you wish the doctor had not insisted on letting you know the sex of your unborn child? Welcome to a large majority of humankind.

    Good surprises bring with them a positive affective “boost”, which undoubtedly is one of the main reasons that many of us like them. They also have prosocial consequences. The counterpart of looking for someone or something to blame for bad surprises is searching for the responsible party to thank. Good surprises produce gratitude, which in turn has nice social byproducts. Those of us who engineer good surprises for friends and family are engaging in a strongly adaptive prosocial enterprise that strengthens important social bonds. Likewise, at the heart of shared entertainment are shared surprises, which again reinforce connectivity within the group sharing the entertainment.

    However, there is a curious glitch in human adaptation to good surprises, and it inheres in our impulse to make sense out of them. In the process of searching for who or what to blame for bad surprises, we are trying to make sense out of those surprises and being able to make sense of a traumatic event decreases its negative emotional impact on us. Exactly the same kind of sense-making process is underway when we try to account for good surprises, but it has an emotional effect that most of us aren’t aware of.

    Timothy Wilson and his colleagues nailed this in a neat set of experiments, which they reported in a 2005 paper. As the authors put it, “the cognitive processes used to make sense of positive events reduce the pleasure people obtain from them”. Wilson et al. (2008) put these two effects together in a general account of how sense-making of (“explaining away”) events weakens their emotional impact, be that emotional impact positive or negative. The sting in the tail of their account is their evidence that, when asked, most of us predict that if we come to understand the reasons behind a pleasant surprise our pleasures will increase. We’re wrong, but that’s no surprise—Wilson and others have comprehensively demonstrated that we have very poor awareness of our own mental processes.

    Wilson, T. D., Centerbar, D. B., Kermer, D. A., & Gilbert, D. T. (2005). The pleasures of uncertainty: prolonging positive moods in ways people do not anticipate. Journal of personality and social psychology, 88(1), 5.
    Wilson, T. D., & Gilbert, D. T. (2008). Explaining away: A model of affective adaptation. Perspectives on Psychological Science, 3(5), 370-386.

    • Nice observation, Michael: I do think people have a strong tendency to wrongly equate uncertainty with threat when, in fact, it manifests as opportunity just as often. I think the discussion here is, by way of its premise about attribution of blame for failure, automatically set to examine the threat manifestations of uncertainty rather than the opportunity manifestations. Your comment makes me wonder whether this tendency varies culturally, and also whether we are somehow wired to attribute only the negative outcomes of uncertainty to uncertainty, perhaps for evolutionary reasons regarding an asymmetry in the price of failing to recognize each. I don’t have knowledge of these things, so it’d be of great interest to me to see people who do posting on these topics.

    • Too true, Michael. I hate it when people spoil surprises. But I imagine that when an engineer or a soldier gets a happy surprise they will, sooner or later, try to figure out why.

  8. This is an interesting alternative perspective into the question of dealing with unknowns that is probably not sufficiently thought about in these terms. My comment here perhaps needs a short introduction about viewpoint: I have been working more in developing mathematical perspectives on the underlying nature of deep uncertainty, which arises because of the presence of the possibility of self-reference in the system manifesting it. This self-reference yields underlying paradoxes, usually hidden, and frequently assumed away, which yield limitations on knowing within the context of the system. Common examples we see in economics and in Defence and military decision-making are the cyclic dependencies over time between decisions made with respect to expected future conditions and the dependencies of future conditions on decisions made in the present.

    Systems without the possibility of self-reference are tidy and stable and predictable, and technically trivial, and a central observation here is that when decision-making is based on such assumptions (whether formed up mathematically or not), and these assumptions are not accurate of the real problem environment, failure is, in general, not neat and graceful, but sudden, unpredictable and possibly catastrophic. Problem environments with the possibility of self-reference typically go through long periods of seeming to match these stability assumptions, before then entering rapid periods of change (phase changes); these periods of apparent stability are highly deceptive to eyes attuned to looking for stable patterns on which to base decisions by prediction of future states.

    On this basis, I greatly support your basic point questioning the laying of blame as attributing, perhaps often wrongly, the cause of failure to incompetence when it may simply be due to the inherently unstable, phase-change-prone nature of the problem environment. I would like to expand on this a little, however: we may be dealing with three kinds of error rather than two. The first, as you have noted, is genuine laziness and incompetence (which may be attributable to groups as well as individuals, in what we sometimes describe as something like “cultural failures”). The second is failure due to the sudden unpredictable changes in the environment that mark phase change events, which again you describe nicely.

    The third kind of failure, and the one with which I am personally particularly interested, is failure to understand how to set up decision-making parameters appropriately to understand sufficiently the nature of the environment and how to therefore set decision-making on a reliable foundation. Here, I think, attribution of responsibility for failure is much murkier: there are certainly cases of where this manifests as culpable failure, but there are many more, particularly where failure is systemic, where the culprit is lack of knowledge. To be fair: we probably should consider knowledge about how to qualify, quantify, bound and respond to such types of uncertainty to be still in its infancy, especially in organisational practices but even in pure mathematics, theoretical computer science, economics theory and physics. After reading your post, I was left with the conclusion that the lines of culpability in this murky Third Failure category should and must shift over time as we advance in our understanding of uncertainty. It is one thing to fail because of a poverty in the state of human knowledge, quite another to fail in the face of accessible knowing by which to obtain reliable guidance.

    In circumstances when the problem environment manifests deep uncertainty – which most obviously manifests as unpredictability of future states or relevant properties of future states to outcomes – reliable decision-making can still be obtained. The key to seeing how is to recognise that strong assumptions about the problem environment to the effect that it is stable and statistically predictable constitute *invariant* conditions, or invariant assumptions. These are properties that remain constant despite the instability in the evolution of actual states. While it is not possible to deal with arbitrary uncertainty, there are any number of weaker, usually hidden and non-obvious, invariant assumptions by which to characterise problem environments and to use as bases for obtaining reliable decision-making in the environments that support those invariants.

    As something of an aside: some of the work in which I am involved is looking at entropy measures for quantifying changing potentials for surprise as a basis for formulating hidden invariant conditions for difficult problem domains. The idea here is that entropic order parameters are tools for *abstraction*; that is, for aggregating away from the myriad of noisy details to yield overall pictures of the properties of a system over various timeframes of interest (a system may have different properties depending on the timeframes, which fits because decisions are bound to timeframes of consideration of their effects). (A small illustrative sketch follows at the end of this comment.)

    As our investigation into such invariant conditions progresses, and especially as we establish applicability of both invariants and general methods of elucidating them in actual decision-making problems, it is fair to say that the changing boundaries of knowing should show up as changing delineations for culpability for failure.
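
    To make the entropy idea above a little more concrete, here is a minimal sketch (in Python) of the kind of calculation involved. The state labels, window contents and the use of plain Shannon entropy are illustrative assumptions for this discussion only; they are not taken from the work Darryn describes.

        import math
        from collections import Counter

        def shannon_entropy(observations):
            """Shannon entropy (in bits) of the empirical distribution of discrete observations.
            A more even spread of observed states gives higher entropy, i.e. a larger
            'potential for surprise' in this toy sense."""
            counts = Counter(observations)
            n = len(observations)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Two hypothetical observation windows of system states.
        stable_window = ["nominal"] * 95 + ["degraded"] * 5
        volatile_window = ["nominal"] * 40 + ["degraded"] * 35 + ["failed"] * 25

        print(round(shannon_entropy(stable_window), 2))    # 0.29 bits: little room for surprise
        print(round(shannon_entropy(volatile_window), 2))  # 1.56 bits: much more room for surprise

    Watching how such a measure drifts across successive windows is one crude way of noticing that an assumed “stable” regime may be breaking down, which is in the spirit of the invariant-tracking idea described above.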

    • Wow, Darryn. This is deep. I am sure everyone reading this learned a lot. I am happy to admit that there are at least three kinds of blame/accountability. Your work gives many of us a new picture of the kinds of information that must be in hand to make decisions that involve what appear to be unknowns. As you continue your investigations, perhaps we can talk more about blame and how you would explain it to the people who have to make these decisions. Thank you for your truly interesting comment.

  9. Hi Patricia.

    Very thought-provoking post!

    I think that two psychological effects add to the difficulty of determining whether blame is appropriate or not. The first is outcome bias (Baron & Hershey, 1988) – our tendency to judge the quality of a decision by its outcome rather than by the quality of the decision process used. Given a complex and uncertain world, we have to make decisions based on incomplete information and, even making the best decision with that available information, you can still get a bad outcome.

    Once you have a bad outcome, however, people will often assume that you made the wrong decision even if it was the best possible decision at the time it was made.

    This is compounded by hindsight bias (Fischhoff & Beyth, 1975). Once we have seen the outcome of a decision (or other event), it becomes much easier to determine causality – or, at least, to construct a causal story. This fools us into thinking that the outcome was more predictable than it actually was. Thus, the decision-maker’s failure to predict it becomes a personal failing – because it is now easy for us to see how the decision led to the bad outcome, we think that they should have seen it too.

    At an organisational level, rewarding and penalising people based on outcomes rather than decision quality thus makes risk aversion an employee survival strategy (Welsh & Begg, 2008) – whereas, from a corporate point of view, appropriate risk taking would be more beneficial.

    Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of personality and social psychology, 54(4), 569.
    Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once—future things. Organizational Behavior and Human Performance, 13(1), 1-16.
    Welsh, M. B., & Begg, S. H. (2008). Modeling the economic impact of individual and corporate risk attitude. In SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.

    • Thank you, Matthew. You introduce a new idea here – risk taking can be very good for organizations. How you calculate that risk can be the cause of blame – particularly when there are a lot of unknowns in the mix. How would you treat somebody who took such a risk?
      You also bring up the “causal” problem because “hindsight bias … fools us into thinking that the outcome was more predictable than it actually was.” How would you explain this to a TV reporter?

      • So, a simple but accurate explanation of hindsight bias?

        I would probably start with the observation that what people are trying to do is learn true relationships in a complex world.

        In order to do so, we make some guesses about which events could occur given our starting conditions and possible causal connections.

        We then watch to see which predicted events actually occur.

        Once an event has occurred, though, it becomes clearer which of the starting conditions and potential causes were actually relevant and which were not. That is, we can create a causal explanation for why a particular event occurred.

        Having this explanation, it makes sense – given our limited cognitive abilities and the goal of gaining true knowledge of the world – for us to forget the other, possible causal explanations that turned out not to be true.

        So, when we try to remember what we had predicted before we observed the event, this is often contaminated by our current knowledge. Knowing what the ‘truth’ is now, it is hard for us to recreate our original state of ignorance and, instead, we project our current knowledge back in time – assuming that, because we now know what caused an event, we would easily have predicted it then.

        How’s that? Maybe too long for a reporter?

        • I think the last sentence of the last paragraph is about right for reporters. Maybe sometime we can get some real journalists in a room and explain complex systems and unknown unknowns to them. Maybe they will be less willing to chase blame in those situations. Thanks again, Matthew.

    • Hello Matthew, I agree that “hindsight bias” exists, with Taleb also stating that our cognitive biases cause us to create explanations for “Black Swan” events after the fact that make these events appear to be much more predictable than they actually were.

      However, I’ve also seen very clear examples of negative “surprises” that should not have been so if the responsible organisations had done things differently. I document one such example in the “The ugly: The Murray-Darling Basin Plan” section of the article at https://realkm.com/2018/03/23/km-standard-controversy-lessons-from-the-environment-sector-in-regard-to-open-inclusive-participatory-processes/

      As I discuss in that article section, at the time when images of the guide to the Murray-Darling Basin Plan being burnt by angry farmers were splashed across Australian television screens, I was managing a large-scale river recovery program for another major Australian river system, the Hawkesbury-Nepean. Even though it had to deal with a number of extremely difficult and complex issues, and even trialled a number of the key measures used in the Murray-Darling Basin Plan, the Hawkesbury-Nepean program was uncontroversial because of effective stakeholder engagement, an agile approach, comprehensive risk management, and horizon scanning ( https://web.archive.org/web/20150926165015/https://innovation.govspace.gov.au/files/2014/05/Horizon-Scanning.pdf ). This suite of approaches meant that potential problems were identified early, and then addressed and monitored.

      As I reveal in the article section, horizon scanning meant that not only was I immediately aware of potential risks to the Hawkesbury-Nepean program, but also that I could clearly see the rising backlash against the Murray-Darling Basin Plan, and indeed, warned some colleagues about it. This isn’t wisdom in hindsight.

      The backlash against the Murray-Darling Basin Plan isn’t a “Black Swan” event. Rather, it’s a “Grey Rhino”, about which Michele Wucker warns in https://www.thegrayrhino.com/black-swan-to-gray-rhino/ that “Once you start looking at how many crises began with clear but essentially ignored warning signals, it becomes strikingly clear how often we miss opportunities to head off predictable problems.”

      • Bruce gives us another piece of the puzzle: horizon scanning. Even if your problem is slightly different it can give you clues. If your problem has a rather long lead time this is a good idea. But can you “blame” someone for not horizon scanning? Is it like not getting enough information to make the decision? For problems that do not have that kind of time it may be impossible.

        • Thanks Pat, interesting questions. Sadly, the Murray-Darling Basin is in serious ecological decline, with critical ecosystems collapsing. Stakeholders are also more divided than they’ve ever been. Am I distressed and angry about this? Yes, because I know that things would have been different if the Murray-Darling Basin Authority (MDBA) had appropriately engaged all basin stakeholders and actively monitored information channels for signs of discontent, as I was doing successfully at the same time. But do I blame them for not doing this? No, because I’m confident that everyone in the MDBA was doing the best that they could, and I know from the responses to the articles I publish and comments in forums and networks that many people whose work involves dealing with complexity lack sufficient knowledge about how to deal with it effectively.

          If someone knows the right thing to do but then actively chooses not to do it, then I would consider that blame is an appropriate response, particularly if there are serious negative consequences. But if, as Bob Dick says in his comment, they’ve sought to act for the common good and inadvertently end up doing the wrong thing due to a lack of knowledge, then I would consider that blame is an inappropriate response. Yes, an argument against this is that people have a responsibility to actively seek all of the knowledge they need to successfully carry out their work. But this knowledge can be an unknown unknown for them, so governments, education providers, professional associations and other relevant organisations have an education and awareness-raising responsibility. This blog is an excellent initiative in this regard.

      • An excellent contribution, Bruce. Iterative planning does help to find bugs. I hope no one gets into trouble for not finding someone to blame as part of the iterative process.

      • Hi Bruce.
        You make a good point – some events certainly are (or should have been) predictable in advance and, in those cases, attributing blame may well be appropriate.
        Even without considering genuine Black Swan events, though, complexity means that predicting outcomes can be very difficult and our cognitive processes can make it hard for many people to divorce what they know now from what they knew before the event was observed. (This is probably an adaptive function of memory – given a goal of learning true causal relationships from the world, it makes sense that, once you have uncovered a causal structure, you should discard/forget your prior, false beliefs to avoid confusion.)
        It probably, however, makes us more willing to assign blame than we would be if we accurately remembered how difficult some of these predictions originally were.
        Cheers,
        Matthew.

        • Hi Matthew, many thanks for your further reply and apologies for my slow response. Your advice in regard to outcome and hindsight bias has provided very useful insights in regard to a case study that I’ve just written up, where I see these biases as potentially applying just as much to the claiming of success as the directing of blame. The case study is at https://realkm.com/2019/10/04/case-studies-in-complexity-part-5-queensland-land-clearing-campaign/ (corrections/criticisms welcome).

          I think we still need to try to learn from past mistakes (and successes), so the question that comes to mind now is how to deal with the cognitive biases that can cloud such learning. One potential approach to dealing with cognitive biases is to design processes that bypass or eliminate the potential for them to arise, and for example this is what I’ll be putting forward as the solution to the cognitive biases I alert to in the article at https://realkm.com/2019/07/18/getting-to-the-heart-of-the-problems-with-boeing-takata-and-toyota-part-3-toyota-takata-and-cognitive-biases/

          However, I also notice with great interest that you conduct cognitive bias training, and I wonder if I could find out more about this?

          Many thanks, Bruce.

          • Hi Bruce.
            Yes, I have conducted cognitive bias training for some companies – mostly oil and gas ones. These focus on awareness of the biases as a necessary (but not sufficient) step in reducing bias. Then there are specific steps that can be taken to reduce particular biases. For example, the petroleum industry is very interested in overconfidence in range estimates (when estimated parameter ranges contain the true value far less often than they are expected to) and there are a number of practical steps that can be used to reduce this effect. Mostly these focus around different ways of eliciting estimates from experts.
            Adapting this approach to other industries would be a matter of identifying the key, ‘bugbear’ biases for that industry and refocussing the debiasing on those.
            Cheers,
            Matthew.

            • Many thanks Matthew for your reply. Cognitive biases impact significantly on people’s knowledge, so they are an important factor to consider in knowledge management, which is the perspective I’m writing from. There’s growing societal awareness of cognitive biases, and also what looks to be a reasonable evidence base in regard to how they come about and the impact that they have (e.g. as listed by the Decision Lab at https://thedecisionlab.com/biases). However, there doesn’t yet appear to be a strong body of evidence in regard to successful interventions.

              A notable related example is in relation to unconscious bias training (UBT). Last year, considerable public and media attention was focused on Starbucks’ closure of 8,000 stores across the US for the conduct of UBT in the wake of a race incident at a Philadelphia Starbucks (https://www.nytimes.com/2018/04/17/business/starbucks-arrests-racial-bias.html). In the wake of the incident, it was also revealed that the Philadelphia police use UBT (https://youtu.be/gRHkAXiqfVQ). However, a recent systematic review (https://www.researchgate.net/publication/333154319_Interventions_designed_to_reduce_implicit_prejudices_and_implicit_stereotypes_in_real_world_contexts_A_systematic_review) and research report [Moderator update – In November 2023, this link was no longer available and so the link structure has been left in place but the active link deleted: www. equalityhumanrights. com / en/publication-download/unconscious-bias-training-assessment-evidence-effectiveness] have found mixed results for the benefits of this training.

              This makes your work on cognitive bias training very valuable, and if possible it would be great to read more about it (papers, reports, etc), and to also see it promoted to practitioners.

          • Hi again Bruce.
            Yes, outcome bias is definitely involved in claiming credit for good outcomes too. A jaded person might suggest it explains why senior management are convinced of the quality of their own decision making – after all, they have likely been promoted based on a series of good outcomes!
            I had a quick look at your article and, yes, it does seem like the biases provide some insight into people’s behaviour in this situation.
            Cheers,
            Matthew.

  10. “Accountability can be seen as forward-looking while blame is backward-looking. Error in complex adapting systems is inevitable, blame is not. We need to reconsider internal and external blame games for more effective decision making.” There’s a lot packed into this small paragraph. I sort of see where you are going with the forward-vs-backward-looking distinction; but I also think we need to consider blame and accountability — at the same time — in terms of the internal-external distinction.

    I take accountability viewed from the outside as mostly a game of legitimation. We view an organization as legitimate only if it demonstrates accountability. Internal accountability, on the other hand, seems to me to be a game of self-regulation. These are linked, in that organizations that fail to regulate themselves effectively (and so to lack internal accountability) lose legitimacy and open themselves up to external regulation. I’m thinking of social media companies right now.

    Accountability viewed in these ways could be both forward-looking — trying to make sure people in organizations act responsibly — and backward-looking — we see (from the outside) that you’ve acted responsibly in the past, so we trust you to regulate yourself.

    Blame is a separate concept. It’s about holding someone responsible for their (bad) actions. Blame could also be forward-looking, however, as fear of punishment. If I anticipate being blamed, I may not act responsibly.

    So, I think I disagree that accountability is only forward-looking and blame only backward-looking. I find the internal/external games distinction more fruitful. I enjoyed this thought-provoking piece, however.

    • An interesting idea. It is the kind of debate that makes my non-academic friends crazy – but I think it is very helpful. We agree that an organization must tell its people what will be good and bad. The individuals and the organization are both “accountable” to someone – they are “able to account” for their actions (both good and bad). It is this idea of accountability that gets all messed up when people in the organization hide things that might get them “blamed” or held accountable.
      Thank you for an interesting comment. I thought a lot about it.

  11. Similar to others, my experience has taught me that blame is a zero-sum game for all involved. I have found that learning from mistakes is essential so it is important to hold people accountable for certain types of mistakes (e.g., those that do not involve malice, or are repeated in nature), but embedded in this process should be an opportunity to be forgiven. This means, however, that it is important to promote a culture of forgiveness among the members of a group or an organization. It is also important for organizational leaders to cultivate honesty and trust as well as to be culpable when one is responsible for mistakes.

    • Rebecca, you bring up two very important points: trust and forgiveness. We spent a lot of time at my institution trying to define “trust” in many disciplines. It was not easy. But we decided that the basics were essentially “trust but verify” and “trust the things you trusted before.” For individuals in an organization, blame of a trusted person is, as you say, a zero-sum game. We should all talk more about forgiveness. How do you publicly forgive someone (or an organization) who made a decision with a bad outcome? What would be the criteria?

  12. Hello all – I’m new to the discussion of unknown unknowns, but following this thread reminds me of some of the themes that are discussed in social learning/collaborative learning circles, and something which I have begun to see in some data that were gathered not too long ago by colleagues of mine (and which we are still in the process of analyzing!). In particular – the comments below on safe to experiment and learning from failure….

    What I have noticed is that in our (collective) obsession with X causing Y modeling approaches, we often fail to capture and appreciate the underlying, socially complex mechanisms that give rise to a Y ‘effect’ and that are incredibly important. What I mean is this – sometimes the end result (what we see initially as a failure in our data results) is really because we lack sufficient longitudinal data – social complexity takes a long time to play out – and our short-term data gathering and modeling efforts lead us to conclude we’re witnessing a failure.

    What might be happening is that understanding is building…slowly. And eventually this will lead to a learning outcome.

    • Christina,

      A couple of thoughts.

      Unknown unknowns: Taleb’s ‘Black Swan’ is worth a look.

      “What I mean is this – sometimes the end result (what we see initially as a failure in our data results) is really because we lack sufficient longitudinal data” … consider the expressions ‘emergence’ or ‘praxis’. Sometimes, things are not finished … they are just still happening, which can bamboozle the “X causing Y” mind-set.

      “What might be happening is that understanding is building…slowly” … yes, and at the moment many of the things we have to act on are still ‘building’, which means we have to act on imperfect information. So, you do the best you can at the time and spend a lot of energy on M&E (monitoring and evaluation) to check that the intended and unintended consequences of the actions have utility.

      My opinion: We have to learn to work/engage better with process, rather than wait for finality (when the conventional data set can be analysed and used as the basis for action).

    • Welcome, Christina. Please read the comment above by Darryn Reid. It seems likely to be helpful for your project. The idea that important information is not knowable yet has many applications. So how will you explain to people that a decision with a bad outcome is due to lack of longitudinal data or complex social interactions? I really want to know this. It is one thing to explain unknowns to ourselves, but how do we explain them to the folks out there? In any case, think about it. And thank you for your new comment!

  13. It seems that the argument in favour of blame here is utilitarian, but that the consequences of blame in any particular situation are uncertain. There is even uncertainty as to whether an individual should have expected the surprise.

    My gut reaction, like others’, is to invoke the precautionary principle and avoid blame altogether (while still considering the potential for separately removing some people involved).

    However, it does make me think that this is simply another situation where one decision under uncertainty has led to the need to make another one, and that if we stay within a utilitarian framework, we have a variety of tools at our disposal to decide whether to blame or not…
    Have you come across situations where net benefits of blame are sufficiently clear, and a level-headed decision to lay blame has been successfully made?

    • Thank you, Joseph. Not many of our readers will be familiar with the variety of tools available in the utilitarian framework (at least I am not). Perhaps you could enlighten us? I think most people would agree that blaming an individual for a decision based on laziness in getting relevant information, or on self-interest, would be reasonable. For organizations such as drug companies, failing to get the relevant information on the safety of a drug is a pretty clearly blameworthy decision. If I am a public employee and I award a contract to my nephew, that (in most countries) would be blameworthy. But those are the easy ones. After that it gets harder.

      • In short, a utilitarian framework involves making decisions based on the outcomes of a decision. If the outcomes are known with certainty, then minimising costs, maximising benefits, or maximising net benefits would suffice.

        If outcomes are uncertain, one approach is to maximise expected utility, i.e. assign values and probabilities to each outcome and pick the alternative that provides the best outcome on average, accounting for those probabilities (see the sketch at the end of this comment):
        https://i2insights.org/2019/06/04/economics-lessons-for-managing-uncertainty/

        If probabilities cannot be easily assigned, we might still want to make explicit the possible consequences in the form of scenarios, to better inform our final decision.
        https://i2insights.org/2019/07/02/designing-scenarios-for-decisions/

        With your examples, I would guess that in cases where the consequences of blame are weighed before making a decision, the main focus is actually norm enforcement, i.e. encouraging certain behaviours that are assumed to be useful rather than actually digging deeper into the consequences of those norms for the specific case (and double loop learning to change/nuance the norm).
        Because of this social aspect, quantitative analyses of anticipated consequences of blame are probably very rare.
        It is important enough to discourage laziness and nepotism that we don’t even try to examine other side effects of assigning blame. An organisation might even receive public criticism if they’re seen to hesitate before making the “obvious” decision.

        I don’t know whether blame has already been analysed in economics/game theory using this kind of framework, but Robin Hanson talks quite a bit about the weight we give to social over other consequences http://www.overcomingbias.com/
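
        As a toy illustration of the expected-utility approach mentioned earlier in this comment: the alternatives, probabilities and utility values below are invented purely for the example (they are not from the linked posts), but the mechanics – weight each outcome’s utility by its probability and choose the alternative with the highest average – are the standard ones.

          # Toy expected-utility comparison of two ways of responding to a failure.
          # All probabilities and utilities are illustrative placeholders.
          alternatives = {
              "blame_individual": [
                  (0.6, -20),   # deters genuine negligence
                  (0.4, -80),   # chills reporting, so future surprises stay hidden
              ],
              "no_blame_system_review": [
                  (0.7, 30),    # surfaces systemic causes, improves adaptation
                  (0.3, -10),   # slower, may look externally like weak accountability
              ],
          }

          def expected_utility(outcomes):
              """Probability-weighted average utility of one alternative."""
              return sum(p * u for p, u in outcomes)

          for name, outcomes in alternatives.items():
              print(name, round(expected_utility(outcomes), 1))   # -44.0 and 18.0
          best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
          print("choose:", best)                                  # no_blame_system_review

        In practice, of course, the hard part is agreeing on the outcomes, probabilities and utilities in the first place, which is where the scenario and norm-enforcement points above come in.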

        • Nice, thank you. I get the feeling that most people follow the norms rather than digging deeper into the consequences of those norms. If they do that, could you blame them for a bad result?

  14. Patricia asked:

    “What has your experience been? Do you have additional suggestions for identifying when blame is and is not appropriate? What ideas do you have for learning from surprises and maintaining accountability in complex adapting systems?”

    Thanks, Patricia. As you imply, I think blame is overused. In opening a discussion on its use I think you raise important points. Before responding, I’d like to add another couple of dimensions to the discussion. These are especially from a practitioner’s point of view …

    1. High reliability organisations are organisations that have very low error rates despite the constant threat of disaster. According to Karl Weick and Kathleen Sutcliffe, one of the characteristics of such organisations is that they try to avoid blame. (Managing the unexpected, Jossey-Bass, 2007.) People are not punished for a mistake. Instead they are commended for admitting that they made a mistake.

    2. W. Edwards Deming developed an approach towards quality of organisational outcomes. He believed that a substantial proportion of variation in quality is due to the system, though we tend to blame the individual. Number 8 of his 14 points to improve quality is: “Drive out fear so that everyone may work effectively for the company”. (Out of the crisis, MIT, 1986).

    My own view (and I’m speaking only for myself) is that on balance it is better to avoid blame as much as possible. Instead, I favour attending to learning from whatever happens, and trying to understand the system within which it occurs.

    To return to Patricia’s question …

    It may sometimes be true that resolving an issue requires removing the cause. Is the cause found after examination to be an individual? Perhaps then it is appropriate that the individual is removed from the situation. If this can be done without blame, so much the better.

    If I had to nominate some principle by which to operate when issues arise, it would be this … Is the intent of a person to contribute to the common good? Then, as far as possible, seek a solution to the issue that does not disadvantage that person. Do what is necessary to reduce the recurrence of the issue.

    I also think it is appropriate that there are known sanctions for certain behaviours that diminish collaborative action. From Robert Plomin’s work (Blueprint, Allen Lane, 2018), it seems that Homo sapiens’ collaborative instincts may have arisen because, among hunter-gatherers, actions against the common good were sanctioned. I assume, therefore, that a commitment to the common good is part of our hard wiring.

    • Thank you, Bob. I appreciate a practitioner’s point of view! It is you, after all, that this is meant for. I am a big fan of Weick and Sutcliffe. You may want to check out the Resilience Engineering Association, which uses their ideas and many more that you would approve of. I am really intrigued by the idea of the “common good.” How would you identify this in a military organization? At Google? If we could identify it we would be a long way toward avoiding the blame game.

      • Pat, you wrote:

        > … I am really intrigued by the idea of the “common good.” How would
        > you identify this in a military organization? At Google? If we could
        > identify it we would be a long way toward avoiding the blame game.

        Yes, that’s my view too.

        My guess is that a belief in the common good is a part of our hunter-gatherer inheritance. Hunter-gatherers lived in relatively small clans or villages. For them I would guess that the common good applied mainly within their own tribe.

        (I understand, though, that Indigenous Australian tribes had “boundary protocols” to extend collaboration to adjoining tribes, and beyond.)

        I assume therefore that it may not be too difficult to appeal to notions of the common good within a tribe. That applies to our current “tribe”, which could be a military organisation, Google, or something else. Or sometimes it seems to apply only to a single team within a larger tribe. Some governance structures seem to favour competition rather than collaboration between teams (or even mainly between individuals).

        If that is a reasonable guess, it might explain the tension between traditional structures (including leader-follower relations) and some mostly-recent innovations. Part of the tension could be between a belief that the common good applies only within the team or organisation, or to something much broader than that.

        The topic deserves much more than these speculations. I plan to extend them, eventually, in other forums.

        Perhaps it is issues such as these that help to explain the high involvement and the rich discussion that you’ve catalysed.

        • I really hope so, Bob. Blame seems to be the munition that kills a lot of people who thought they were acting for the common good. I will look forward to your work.

  15. I forgot to mention: when I tried it, we developed an organisational culture of promoting the notion that ‘failure’ was inevitable in some instances, but that the collective learning from the failures – as well as the successes – contributed to a longer-term win for all involved.

    • Failure is how we learn. An organization that allows failure without blame is likely to be successful. Who would you want working for you – someone who learned from a good failure or someone who won every time? Artificial intelligence learns both from what works and what does not. Thank you, Christopher, for this insightful comment.

  16. Thanks. At a Dave Snowden talk I heard him discuss ‘safe to fail’ experiments (these are sometimes called ‘fast fail’). I have tried this and it worked really well. I’ve just done a Google search of ‘safe to fail experiments’ and there are far more hits than I remember last time I looked. The link is here:

    https://www.google.com/search?source=hp&ei=xYRkXcfOMsOvkwXo2aSYCQ&q=safe+to+fail+experiments&oq=safe+to+fail&gs_l=psy-ab.1.1.0l3j0i22i30l7.7343.10446..12112…0.0..0.405.3498.2-11j1j1……0….1..gws-wiz…..0..0i131.vbRKNc1seG8

    • Again, thank you for bringing the idea of failure to the nature of learning. Too bad most academic work that gets published reports only the successes. A look at some well set-up failures would save a lot of time.

      • Advice in regard to how these safe-fail experiments can be used in practice is put forward in an ODI (UK Overseas Development Institute) Background Note (https://cdn.odi.org/media/documents/8287.pdf), which advises that (page 5):

        “For effective programming in complex situations, setting learning objectives may be as important as performance objectives, and interventions should be designed to actively test hypotheses. Plans need to be clear on how elements of the intervention will test and confirm, disconfirm, or refine key hypotheses; for example Snowden advocates plans to include ‘safe-fail experiments’: small interventions designed to test ideas for dealing with a problem where it is acceptable for these interventions to fail.

        The usefulness of these hypotheses needs to be reviewed regularly in the light of experience and/or changes in context. This requires an ‘iterative’ planning model, which foresees the revision and adaptation of plans through successive implementation cycles or learning loops.”

        Further advice is provided in the “Adaptive strategy development” section on page 9.

        The ODI has published a number of useful papers on complexity, and with all ODI papers being published under a Creative Commons license I’m progressively republishing them in series form as I don’t think the original papers have received the wide attention they deserve. The series published to date are https://realkm.com/exploring-the-science-of-complexity-series/ (completed) and https://realkm.com/planning-and-strategy-development-in-the-face-of-complexity-series/ (in progress).

