Guest Post by Geary Sikich (first posted on CERM® RISK INSIGHTS – reposted here with permission)
Introduction
There’s always a story that explains why an event is a “Black Swan” – after the fact. Where are the “Black Swan” prognosticators before events occur? I see “after the fact” articles and statements such as “we knew that was coming” or “we predicted this” – all after the fact. Projections and predictions generally never seem to hold sway until after the fact, and events never seem to occur when the prognosticator specifies – is it just that the timing never works out until after the fact? From Nostradamus to the “Bible Code,” we are enamored with predictions – after the fact. Does there seem to be a theme developing here?
There seem to be so many “Black Swans” circling that we really shouldn’t call them “Black Swans” anymore. It seems that there are many “experts” today who are jumping on the bandwagon and laying claim to some aspect of, or permutation of, the “Black Swan” concept. Some have labeled themselves “Master Black Swan Hunters.” My question is, “What constitutes a Master Black Swan Hunter?” Does this mean that we have “Apprentice Black Swan Hunters” and “Certified Black Swan Hunters?” There must be some criteria used to determine when one has achieved the refulgent “Master Black Swan Hunter” status. I mean, how does one reach such an exalted status? I guess that being able to identify the unknown unknowns – the highly improbable and extremely rare events – before they happen is, in itself, a “Black Swan Event,” albeit, in most cases, after the fact. Even Nassim Taleb, the author of the famous book “The Black Swan: The Impact of the Highly Improbable,” has not yet mastered this. Or perhaps he is just not claiming the title “Master Black Swan Hunter” yet.
The definition of a Black Swan, according to Nassim Taleb, author of “The Black Swan: The Impact of the Highly Improbable,” is:
“A black swan is a highly improbable event with three principal characteristics: it is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was.”
If we take the three principal characteristics – unpredictability, massive impact, and explaining the event away after the fact – and assess each, perhaps we too can become “Master Black Swan Hunters.”
The Problem
There is a general lack of knowledge when it comes to rare events with serious consequences, due simply to the rarity with which such events occur. In his book, Taleb states that “the effect of a single observation, event or element plays a disproportionate role in decision-making, creating estimation errors when projecting the severity of the consequences of the event. The depth of consequence and the breadth of consequence are underestimated, resulting in surprise at the impact of the event.” To quote again from Taleb:
“The problem, simply stated (which I have had to repeat continuously) is about the degradation of knowledge when it comes to rare events (“tail events”), with serious consequences in some domains I call “Extremistan” (where these events play a large role, manifested by the disproportionate role of one single observation, event, or element, in the aggregate properties). I hold that this is a severe and consequential statistical and epistemological problem as we cannot assess the degree of knowledge that allows us to gauge the severity of the estimation errors. Alas, nobody has examined this problem in the history of thought, let alone try to start classifying decision-making and robustness under various types of ignorance and the setting of boundaries of statistical and empirical knowledge. Furthermore, to be more aggressive, while limits like those attributed to Gödel bear massive philosophical consequences, but we can’t do much about them, I believe that the limits to empirical and statistical knowledge I have shown have both practical (if not vital) importance and we can do a lot with them in terms of solutions, with the “fourth quadrant approach”, by ranking decisions based on the severity of the potential estimation error of the pair probability times consequence (Taleb, 2009; Makridakis and Taleb, 2009; Blyth, 2010, this issue).”
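As a rough, hypothetical illustration of that last point – ranking decisions by the pair probability times consequence and by how sensitive that figure is to an error in the probability estimate – consider the short Python sketch below. The scenarios, probabilities and dollar figures are invented for the example, and this is not Taleb’s fourth quadrant method itself, only a toy version of the ranking idea:

```python
# Toy illustration (not Taleb's actual method) of ranking decisions by the
# exposure "probability x consequence", and by how much a modest error in the
# estimated probability of a rare event distorts that figure.
# All names and numbers below are hypothetical.

decisions = {
    # name: (estimated probability of the adverse event, consequence in $M)
    "data-center outage":   (0.05, 10),
    "key supplier failure": (0.02, 50),
    "regional flood":       (0.001, 500),
}

def expected_impact(p, consequence):
    """Naive expected impact: probability times consequence."""
    return p * consequence

def estimation_swing(p, consequence, rel_error=0.5):
    """Spread in expected impact if the probability estimate is off by
    +/- rel_error (rare-event probabilities are rarely known this well)."""
    low = expected_impact(p * (1 - rel_error), consequence)
    high = expected_impact(p * (1 + rel_error), consequence)
    return high - low

# Rank decisions by how badly an estimation error could mislead us.
for name, (p, c) in sorted(decisions.items(),
                           key=lambda kv: estimation_swing(*kv[1]),
                           reverse=True):
    print(f"{name:22s} expected impact ${expected_impact(p, c):6.2f}M, "
          f"swing from a 50% probability error ${estimation_swing(p, c):6.2f}M")
```

The point of the toy ranking is that the decisions most exposed to estimation error are not necessarily the ones with the largest naive expected impact.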
You need to look at long spans of history to catch sight of “rare” events that are all too often forgotten, although they turn out to be far more common and similar than people seem to think. For example, Deepwater Horizon – many called it a Black Swan, yet the Aban Pearl, a rig that went down off the coast of Venezuela a few months later, barely got any press coverage. I guess that distance makes it harder to see the Swan?
Process vs. End Results
A process is defined as a series of actions or steps taken in order to achieve a particular end. Interruptions to the way you do business – disruptive events – leave processes in a state of discontinuity. However, if we continue to focus Business Continuity planning on the recovery of “mission critical” processes, we may end up in an “Activity Trap” as defined by George Odiorne. He describes the “Activity Trap” as:
The Activity Trap is the abysmal situation people find themselves in when they start out toward an important and clear objective, but in an amazingly short time, become so enmeshed in the activity of getting there that they forget where they are going.
Once-clear goals may evolve into something else, while the activity remains the same and becomes an end in itself. In other words, the activity persists, but toward a false goal. Meanwhile all this activity eats up resources, money, space, budgets, savings, and human energy like a mammoth tapeworm. While it’s apparent that the Activity Trap cuts profits and fails to achieve missions, it has an equally dangerous side effect on people: they shrink personally and professionally.
Interruptions to your business happen more often than you can imagine. The loss of customers, supplier issues, competition, new technology, etc., can all disrupt a business and threaten its existence. Realize that it is all about survivability (end results), not about process recovery.
Resilience, Antifragility, Recovery vs. Risk Parity
There is a lot of talk about resilience and what it means. It is a goal that we should all strive to achieve – to be resilient. However, resilience depends on bouncing back – after the fact. Antifragility, a term coined by Nassim Taleb in his latest book, “Antifragile,” is resilience renamed and put on steroids. Taleb even admits to coming up with the name antifragile because he did not like the term resilience.
In a recent conversation, my colleague John Stagl pointed to an article by Bob Freitag in the Natural Hazard Observer. Freitag makes a good point about resilience:
There are many definitions of resilience. Here I will borrow from the field of social ecology, defining resilience as the ability of an individual or community to adapt or transform in response to stress and shocks—rather than just “bouncing back,” undergoing undesirable change, or collapsing. Important to this definition is that resilience demands a focal point. What is resilient to one stressor may not be resilient to others. One’s resilience may depend on another’s collapse. Any change may bring benefits to some, hardships to others.
The ability to transform in response to stress and shocks seems to me a better approach to resilience than what is currently practiced in the industry. This leads me to recovery. While given tacit attention in the planning process – generally a few paragraphs and some discussion of organizational change – recovery is not really addressed. Few, if any, would be able to cite a drill, exercise or simulation that focused on recovery and began to address the complexity of recovery.
Change the paradigm – think risk parity instead of resilience, antifragility and recovery. Risk parity is an approach that focuses on the allocation of risk, usually defined by exposure, velocity and volatility, rather than on the allocation of assets to the risk. The risk parity approach asserts that when asset allocations are adjusted (leveraged or deleveraged) to the same risk level, risk parity is created, resulting in more resistance to discontinuity events.
The principles of risk parity will be applied differently according to the risk appetite, goals and objectives of the organization and can yield different results for each organization over time.
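To make the allocation idea concrete, the sketch below shows one common simplification of risk parity: weighting each exposure by the inverse of its estimated volatility so that each contributes roughly the same risk. The asset classes and volatility figures are assumptions made purely for illustration, not a recommendation or a description of any particular program:

```python
# Minimal risk-parity-style sketch: weight each exposure by the inverse of its
# volatility so every exposure contributes roughly equal risk to the whole.
# The names and annualised volatility estimates below are hypothetical.

volatilities = {
    "equities":    0.18,
    "bonds":       0.05,
    "commodities": 0.22,
}

inverse_vol = {name: 1.0 / vol for name, vol in volatilities.items()}
total = sum(inverse_vol.values())
weights = {name: iv / total for name, iv in inverse_vol.items()}

for name, w in weights.items():
    # risk contribution (weight * volatility) comes out equal for every line
    print(f"{name:12s} weight {w:5.1%}  risk contribution {w * volatilities[name]:.4f}")
```

In a portfolio setting, the result would then be leveraged or deleveraged to the desired overall risk level; the organizational analogue is rebalancing attention and resources across risks until no single exposure dominates.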
The greatest failure of most business continuity and enterprise risk management programs is that they cannot de-center. That is, they cannot see the risk from different perspectives internally or externally. Poor or no situation awareness generates a lack of expectancies, resulting in inadequate preparation for the future – until after the fact.
Being Able to Answer the “Why” Question
As my colleague John Stagl often points out, senior management is focused on getting the answer to two questions in a crisis or a situation of discontinuity. These are:
Why Didn’t the Plan Work? – In Croatian there is a phrase, “Ni luk jeo, ni luk mirisao.” The phrase basically means to deny one’s involvement in something – to insist you have nothing to do with the situation. The literal translation is “I haven’t eaten the onion nor smelled it” (I haven’t been involved in this at all). Example: Question: “Business Continuity Planner, you have generated a BIA and a BCP that give us a lot of data; do you think that is appropriate in the current crisis?” Answer: “Ni luk jeo, ni luk mirisao – I haven’t been involved in this at all.” It is also a poor excuse to offer that your program is not as mature as it should be, or to state that the plan was based on “best practices” (copying “best practices” from other companies is more dangerous than helpful).
Why Did the Plan Work? – While you may have dodged a bullet, taking credit for why the plan worked can be a double-edged sword. Yes, it worked, but generally speaking, not as it was written or intended. How many times have you seen someone in a crisis stop to read their plan? Heck, just getting some organizations to bring their plans to a drill is a major accomplishment. The plan worked because you were able to respond with action and get results.
Conclusion
Business Continuity has its roots in the year 1790, when the US Coast Guard was established under the Treasury Department to ensure continuity of waterways. The Coast Guard’s mission was (and still is) to protect US waters, keep goods and services flowing, regulate US waterway vessels and respond to incidents – Business Continuity at its core.
But to conclude this article I will go back a bit further, to The Ingenious Gentleman, Don Quixote of La Mancha. We must ask ourselves, “Are we tilting at windmills – attacking imaginary enemies – engaged in after-the-fact identification of ‘Black Swans’?” The phrase “tilting at windmills” is sometimes used to describe confrontations in which adversaries are incorrectly perceived, or courses of action based on misinterpreted or misapplied heroic, romantic or idealistic justifications. It may also connote an importune, unfounded and vain effort against confabulated adversaries for a vain goal.
Management must be able to identify the most important unanswered question that can make or break the organization when disruption occurs. They must be able to say how that question was identified, and explain the process by which the question will be answered, the time required to answer it, how much money it will take and what the allocation of other resources will be. They also need to know how to recognize when they have answered the question.
In an ever-changing world where only a third of excellent organizations stay that way over the long-term, and where even fewer are able to implement successful change programs, leaders are in need of continuity plans that provide the tools necessary to survive. The organization that can execute in a crisis transforms in response to discontinuity and has the vitality to survive over the long-term. Ultimately, building a culture of continuity within your organization’s “Value Chain” is an intangible asset that competitors copy at their peril and that enables you to skilfully adapt to situations of discontinuity faster than others—giving you the ultimate competitive advantage.
About the Author
Geary Sikich – Entrepreneur, consultant, author and business lecturer
Contact Information: E-mail: G.Sikich@att.net or gsikich@logicalmanagement.com. Telephone: 1-219-922-7718.
Geary Sikich is a seasoned risk management professional who advises private and public sector executives on developing risk buffering strategies to protect their asset base. With an M.Ed. in Counseling and Guidance, Geary’s focus is human capital: what people think, who they are, what they need and how they communicate. With over 25 years in management consulting as a trusted advisor, crisis manager, senior executive and educator, Geary brings unprecedented value to clients worldwide.
Geary is well-versed in contingency planning, risk management, human resource development, “war gaming,” as well as competitive intelligence, issues analysis, global strategy and identification of transparent vulnerabilities. Geary began his career as an officer in the U.S. Army after completing his BS in Criminology. As a thought leader, Geary leverages his skills in client attraction and the tools of LinkedIn, social media and publishing to help executives in decision analysis, strategy development and risk buffering.
Geary has a passion for helping executives, risk managers, and contingency planning professionals leverage their brand and leadership skills by enhancing decision making skills, changing behaviors, communication styles and risk management efforts. A well-known author, his books and articles are readily available on Amazon, Barnes & Noble and the Internet.