Integration and Implementation Insights

Accountability and adapting to surprises

By Patricia Hirl Longstaff


We have all been there: something bad happens and somebody (maybe an innocent somebody) has their career ruined in order to prove that the problem has been fixed. When is blame appropriate? When is the blame game not only the wrong response, but damaging for long-term decision making?

In a complex and adapting world, errors and failure are not avoidable. The challenges decision-makers and organizations face are sometimes predictable but sometimes brand new. Adapting to surprises requires more flexibility, fewer unbreakable rules, more improvisation and deductive tinkering, and a lot more information about what’s going right and going wrong. But getting there is not easy because this challenges some very closely held assumptions about how the world works and our desire to control things.

Let’s not kid ourselves. Sometimes people do really dumb things that they should be blamed for. What we need is to be more discriminating about when finding blame and accountability is appropriate. Blame is often appropriate where known dangers have been ignored. But it may not be appropriate as a reaction to surprises such as black swans (possible but unlikely events), unknown unknowns (never happened before and not predicted), and problems that have emerged from underlying processes over which the person in charge has no control.

Whenever someone is blamed in a modern organization it becomes a story that is told and retold in an effort to understand its meaning. In some cases, the energy it takes to fix and apportion blame has little payback and is diverted from processes that would lead to future adaptation. The people in these systems often try to resist similar surprises by creating new rules or constraints on the system – tragically, these are often constraints that will rob the system of resilience in the long run. For example, making more rules for the people who have to deal with surprises reduces their ability to adapt.

Perhaps more importantly, outside stakeholders expect accountability. Everybody rightly expects that people who cause problems because they are lazy or incompetent or corrupt should be held accountable – or blamed and punished – in order to make sure the organization is working properly. There is an assumption that one bad cog caused the problem, and that if you get rid of it the whole machine will work perfectly again – it’s just a matter of finding the bad cog. But maybe it’s time to re-examine the idea of causality in complex organizations that have to operate under high uncertainty.

Engineers know that their technical systems can be full of surprises and in need of resilience strategies. They conclude that management in these systems requires:

“…experience, intuition, improvisation, expecting the unexpected, examining preconceptions, thinking outside the box, and taking advantage of fortuitous events. Each trait is complementary, and each has the character of a double-edged sword” (Nemeth 2008: 7).

This is consistent with modern definitions of human intelligence. Smart people exhibit dynamic behaviors in the face of surprises, adapt to their environment and learn from experience (Sternberg 2002). That makes their actions somewhat unpredictable. And if they aren’t following standard operating procedure and there is a bad outcome, they might expect to be the next victim of the blame game. Note also that all of these strategies require that managers really know what’s going on. If their people do not report changes or unexpected outcomes because they are afraid of being blamed and punished, surprises become almost inevitable.

Some will be surprised to learn that it is often difficult (or impossible) to pinpoint one cause for surprises or malfunctions in complex technical or human systems. A lengthy investigation by independent parties is likely to come up with a list of things that contributed to the incident. Many of these things will indicate problems with the system and not with individuals in the system. But if it is the system, then is the person in charge of the system at fault? Who is accountable? We often demand accountability because we think it will improve performance. But if the

“…accounting is perceived as illegitimate, … intrusive, insulting, or ignorant of the real work, then the benefits of accountability will vanish or backfire. Effects include decline in motivation, excessive stress and attitude polarization…” (Woods et al., 2010: 226).

They also include defensive posturing, obfuscation of information, protectionism, and mute reporting systems.

Accountability can be seen as forward-looking while blame is backward-looking. Error in complex adapting systems is inevitable; blame is not. We need to reconsider internal and external blame games for more effective decision making.

What has your experience been? Do you have additional suggestions for identifying when blame is and is not appropriate? What ideas do you have for learning from surprises and maintaining accountability in complex adapting systems?

References:
Nemeth, C. P. (2008). Resilience Engineering: The Birth of a Notion. In: E. Hollnagel, C. P. Nemeth and S. Dekker (eds.), Resilience Engineering Perspectives. Vol. 1: Remaining Sensitive to the Possibility of Failure. Ashgate Publishing: Surrey, United Kingdom, and Burlington, Vermont, United States of America.

Sternberg, R. (ed.). (2002). Why Smart People Can Be So Stupid. Yale University Press: New Haven, Connecticut, United States of America.

Woods, D., Johannesen, L., Cook, R. and Dekker, S. (2010). Behind Human Error. 2nd Edition, CRC Press: Boca Raton, Florida, United States of America.

Biography: Patricia Hirl Longstaff is Senior Research Fellow at the Moynihan Institute of Global Affairs, The Maxwell School of Citizenship and Public Affairs, Syracuse University, USA. Her research has focused on the resilience of institutions when confronted with unknown unknowns. Current research also includes artificial intelligence.

This blog post is the first of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

Scheduled blog posts in the series:
September 10: How can we know unknown unknowns? by Michael Smithson
September 24: What do you know? And how is it relevant to unknown unknowns? by Matthew Welsh
October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim
