RCA has an image problem and needs a public relations agent to reshape its reputation in the healthcare industry! RCA is primarily viewed as a reactive tool. This perception stems from how we have been conditioned by various regulatory agencies, which require us to do RCA only under very specific circumstances (usually after something very bad has occurred). When such ‘Sentinel Events’ occur, we pull out the microscope to take a deeper look using our respective RCA tools. Used this way, RCA is viewed as a ‘Money-Taker’ because it appears only to consume people’s time and resources when they already feel overloaded. Rarely is the CEO asking for an ROI associated with an RCA.
We hear more and more about High Reliability Organizations/Organizing (HROs) today and their mindfulness of the future. Why do we have to wait for bad things to happen to apply RCA? Why can’t we apply RCA in a more versatile manner to unacceptable risks, near misses or chronic failures that do not rise to the severity of a ‘Sentinel/Reportable Event’? In this paper, we will explore how to use proactive approaches to quantifiably measure the impacts of undesirable outcomes over time and determine the Significant Few (the 20% of events costing us 80% of our risk and/or dollar losses). In this fashion we use RCA proactively to provide an actual, measurable Return-on-Investment (ROI) that will quickly and dramatically improve patient safety!
The Big Picture: The History and Applicability of High Reliability Organizations
In healthcare today, the term ‘High Reliability Organizations’ or HROs has become mainstream. Much of this can be attributed to the text Managing the Unexpected (Weick and Sutcliffe, 2007). In the opening pages of this text (page 2) the authors state the following:
“Our basic message is that expectations can get you into trouble unless you create a mindful infrastructure that continually does all of the following:
1. Tracks small failures
2. Resists oversimplification
3. Remains sensitive to operations
4. Maintains capabilities for resilience
5. Takes advantage of shifting locations of expertise”
These are very powerful concepts that are often misunderstood in their translation into effective implementation. While powerful, these concepts are not new to many industries outside of healthcare. Such concepts were the foundation of Reliability Engineering approaches as applied to the aviation and nuclear industries as early as the 1950s.
In 1972 Allied Chemical Corporation formed the first Research and Development (R&D) group to explore, design and implement a Corporate Reliability Engineering Department for its 300 facilities around the world. In this Reliability Approach, the keys to Reliability included ‘Priority’ and ‘Proaction’, both defined below.
Allied, at the time, pioneered the practical transition of Reliability principles from Aviation to heavy continuous process manufacturing.
In the mid-1990s, the transition of such Reliability principles from the U.S. Space Program began to be integrated into healthcare via The Joint Commission’s (TJC) Failure Modes & Effects Analysis (FMEA) and Root Cause Analysis (RCA) requirements. Rick Croteau, Patient Safety Advisor/Joint Commission International, was a systems engineer with the U.S. Space Program in the 1960s before becoming a surgeon. He understood from his work in systems engineering that ‘people make mistakes, and that is not a cause of failure but rather a condition of function that must be incorporated into the design of systems’. As a result of his efforts, in 1996 The Joint Commission issued A Framework for Root Cause Analysis in Response to a Sentinel Event. This was the beginning of the formal transition of Reliability principles into the healthcare sector.
What is understood in these transitions is that all organizations are systems. All systems have inputs, a transformation of those inputs in some form or fashion, and desired outputs. Given this, these enduring Reliability principles apply anywhere and are completely transferable.
This is not a hard concept to understand, but it is very difficult to make a reality. We will define ‘Priority’ as:
“Management has decided to support achieving Reliability behaviors”
This requires a cultural shift from leadership that recognizes proaction as being more desirable than becoming better reactors.
Leadership cannot just talk about Reliability; their actions have to demonstrate a serious effort to implement the required infrastructure to support proactive behaviors. Such ‘seriousness’ comes in the form of signing checks (funding) to educate senior staff down to those who interact with patients on a day-to-day basis. Leadership will also demonstrate sincerity by establishing a Reliability Policy with related procedures for proper application and associated system infrastructure to ensure success. This makes Reliability a requirement for the organization instead of a talking point with no negative consequence if the proactive behaviors are not demonstrated.
The rank and file often perceive that leadership ‘never seems to have the time and budget to do things right, but always seems to have the time and budget to do things again’. Making proaction a priority reverses this paradigm, demonstrated by actions and not just words.
In a nutshell, Reliability = Proaction = No Surprises!
We will define ‘Proaction’ as:
‘Any activity that will 1) improve operations, 2) prevent equipment, process or human failure, or 3) lessen the consequence of failure’
When analyzing this definition in the context of RCA, some may see a paradox when contrasting it with their own current definition of RCA. This article is not intended to debate the definition or effectiveness of any particular RCA approach, but rather the effective application of such an approach to the proper candidates for RCA.
As the title of this article suggests, RCA is currently viewed predominantly as a reactive tool. Therefore, the RCA task itself carries a negative connotation within a facility that is required by regulation to perform such activities under certain circumstances. Where does this negative connotation come from?
When looking at current regulatory requirements for conducting RCA, under which conditions are we required to do a full-blown RCA?
In healthcare, The Joint Commission requirements state: “Such events are called ‘sentinel’ because they signal the need for immediate investigation and response.” A certain threshold of pain must be incurred before the RCA requirement goes into effect and actions are taken.
The regulatory requirements themselves encourage such a reactive paradigm. This raises a question: how much authority do we want our regulators to have in making us ‘do the right thing’?
Do we want or need a regulatory body to make us more proactive, as opposed to doing it because we realize it is the right thing to do? In the manufacturing industries, they make widgets. In healthcare, the product is quality of life. If there is anywhere in our society where proaction should be an expectation, it is healthcare, where lives are at stake every day.
Proaction, where RCA is concerned, is when we apply RCA to events that pose unacceptable risks but have NOT yet incurred consequences (the ‘pain’ that triggers a regulatory requirement to take action).
What are such opportunities to apply proaction? Such events might include:
1. unacceptable risks as defined by FMEA results,
2. chronic failures that occur so frequently they become viewed as a cost of doing business and
3. near misses where we got ‘lucky’ and stopped an error chain before we suffered ‘pain’, so no RCA was required.
Now we need to focus on quantifying and prioritizing proactive candidates for RCA. The following approaches will be discussed in an overview format: 1) Failure Modes and Effects Analysis (FMEA) and 2) Opportunity Analysis (OA).
FAILURE MODES AND EFFECTS ANALYSIS (FMEA)
FMEA is currently required by TJC (LD.04.04.05, element of performance 10), which reads: “at least every 18 months, the hospital selects one high-risk process and conducts a proactive risk assessment.” This was recently changed from being required every 12 months. Does this demonstrate a priority for proaction?
The universal measure of risk is:
Severity (S) x Probability (P) = Criticality (or Risk Prioritization Number [RPN])
This analysis involves looking at the steps in a process flow diagram and looking for ways in which failures could occur that would interrupt the quality and continuity of the overall process flow (See Figure 1).
Figure 1: Sample FMEA Worksheet
So when a new process or plan is being developed, conducting an FMEA seeks to identify the vulnerabilities in our process or plan and allows us to design out the flaws before the plan is put into action. This is a truly proactive tool because it assesses risk and identifies the ‘Significant Few’: the 20% or less of the potential failure modes accountable for 80% or more of the potential risk (See Figure 2).
(Used with Permission by Reliability Center, Inc.)
Figure 2: Sample Significant Few Results from OA on IV Antibiotic Omissions
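To make the criticality math concrete, here is a minimal sketch of an FMEA ranking in Python. The failure modes and their severity/probability values are hypothetical sample ratings on common 1-10 scales; they are not drawn from any actual FMEA or from the figures in this article.

```python
# Minimal FMEA criticality sketch. The failure modes and their 1-10
# severity/probability ratings below are hypothetical sample values.
failure_modes = [
    # (failure mode, severity, probability)
    ("Wrong patient label applied",     9, 4),
    ("Specimen hemolyzed in transport", 5, 7),
    ("Order entry step skipped",        7, 2),
    ("Collection tube under-filled",    4, 3),
]

# Criticality (RPN) = Severity (S) x Probability (P)
ranked = sorted(
    ((name, s * p) for name, s, p in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

# Show each mode's cumulative share of total risk; the top few modes
# typically carry most of it (the 'Significant Few').
total_risk = sum(rpn for _, rpn in ranked)
cumulative = 0
for name, rpn in ranked:
    cumulative += rpn
    print(f"{name}: RPN={rpn} ({cumulative / total_risk:.0%} of risk covered)")
```

In a real FMEA the team would rate every step of the process flow diagram this way and focus its redesign effort on the modes at the top of the ranking.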
With this information in hand, we can then apply our effective RCA approach to understand why the risks are so high. When latent or systemic causes are identified and corrective actions properly implemented, our risks will be mitigated to an acceptable level (after all, no system is failsafe!).
OPPORTUNITY ANALYSIS (OA)
While OA is not required by any regulatory agency, it is the most effective tool in any organization for identifying qualified candidates for RCA and associating an ROI with each. The primary difference between an FMEA and an OA is that an OA seeks to identify failures that are occurring in a given system over a year’s time.
The OA measure of loss is:
Frequency/Yr x Impact/Occurrence = Total Annual Loss
In this approach the following steps take place:
1. Map out a process flow diagram of the process chosen to analyze.
2. Define what a ‘failure’ is for that process.
3. Define ‘assumptions’ for the costs associated with each failure (e.g., labor, lengths of hospital stay, downtime costs, supplies/materials, etc.)
4. Obtain input (e.g., failure modes and frequencies) to fill in the blanks on the OA spreadsheet from those closest to the work in the process chosen (See Figure 3)
5. Calculate the Significant Few (See Figure 4)
6. Conduct RCA on the Significant Few
Figure 3: Sample OA Worksheet
(Used with Permission by Reliability Center, Inc)
Figure 4: Sample Significant Few Results from OA on Blood Redraws
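The spreadsheet math behind steps 4 and 5 can be sketched as follows. The failure modes, frequencies and per-occurrence cost figures here are hypothetical illustration values, not data from the figures above.

```python
# Sketch of the OA spreadsheet math. All failure modes, frequencies and
# per-occurrence impacts below are hypothetical sample assumptions.
oa_rows = [
    # (failure mode, occurrences/yr, impact $/occurrence)
    ("Blood redraw - hemolysis",     4_000, 300),
    ("Blood redraw - mislabel",      1_200, 300),
    ("IV antibiotic dose omission",    600, 450),
    ("Pump alarm nuisance response", 2_500,  20),
]

# Total Annual Loss = Frequency/Yr x Impact/Occurrence
losses = sorted(
    ((name, freq * impact) for name, freq, impact in oa_rows),
    key=lambda row: row[1],
    reverse=True,
)

# Significant Few: the smallest set of modes covering ~80% of annual loss.
grand_total = sum(loss for _, loss in losses)
cumulative, significant_few = 0, []
for name, loss in losses:
    if cumulative / grand_total < 0.80:
        significant_few.append(name)
    cumulative += loss

print(f"Total annual loss: ${grand_total:,}")
print("Significant Few (RCA candidates):", significant_few)
```

With these sample numbers, two of the four modes account for over 80% of the annual loss, and those two become the RCA candidates of step 6.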
The OA approach makes a financial business case for conducting RCA on events that have not passed through the regulatory threshold of pain. These chronic failures and near misses, if left unchecked, pose additional risk of contributing to future sentinel-type events.
As an example, in a 225-bed acute care facility, an OA was conducted on the blood drawing process. This study was commissioned by the CFO to find cost reduction opportunities. When the OA was completed, it was found that the average cost of a redraw was about $300 and the total number of redraws per year was 10,013. This demonstrated a loss of over $3,000,000/yr that was hidden in plain sight. It was accepted as a cost of doing business and therefore never questioned. It was not harming anyone, so there was no regulatory requirement to address it. Was it the right thing to do? Ask that CFO.
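The arithmetic behind that loss figure is a direct application of the OA formula:

```python
# Total Annual Loss = Frequency/Yr x Impact/Occurrence,
# using the redraw figures cited in the example above.
redraws_per_year = 10_013
avg_cost_per_redraw = 300  # dollars
total_annual_loss = redraws_per_year * avg_cost_per_redraw
print(f"${total_annual_loss:,}/yr")  # → $3,003,900/yr
```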
These concepts, approaches and tools can be applied in any organization, anywhere, once some type of performance gap has been identified (e.g., compliance, frequency of occurrence, costs, patient satisfaction indexes, readmissions, six sigma projects, claims, falls, etc.). A performance gap is simply the difference between a desired state and a current state.
In conclusion, RCA needs a face lift simply because most do not understand its true versatility and capability in practical application. Since we are typically required to apply it only under duress, it carries a negative association. However, if we approach true RCA with more open-mindedness to its proactive capabilities, we will create an environment where personnel will not run from RCA meetings (or stress out about attending them) but will embrace their proactive nature and be a part of preventing consequences, instead of getting better at responding to them.
About Reliability Center, Incorporated (RCI)
RCI is an international leader in bringing scientific and Reliability engineering principles to solving problems, correcting failures and preventing future human errors in the workplace. RCI has been a leader in successfully applying these skills in the healthcare, manufacturing and government sectors for over 44 years. RCI helps caregivers and healthcare managers improve patient care and safety while reducing litigation risk. For more information, visit www.reliability.com
Weick, Karl E. and Sutcliffe, Kathleen M. 2007. Managing the Unexpected. San Francisco: Jossey-Bass.
 Latino, Charles J., 1981. Strive for Excellence…The Reliability Approach, Morristown, NJ. Allied Chemical Corporation.
Croteau, Rick J., M.D. 2010. Root Cause Analysis in Healthcare: Tools and Techniques. Oakbrook Terrace: Joint Commission Resources.
 Latino, Charles J. 1985. Reliability Concepts Workshop. Hopewell, VA. Reliability Center, Inc.
 Sentinel Event Policy and Procedures. http://www.jointcommission.org/Sentinel_Event_Policy_and_Procedures/, (Accessed June 10, 2013)
Latino, Robert J. 2009. Patient Safety: The PROACT Root Cause Analysis Approach. Case Study #3. Boca Raton: Taylor & Francis.