Veteran professionals in the Reliability field view every business as a system. All systems have 1) inputs, 2) a transformation of those inputs in some form or fashion and 3) outputs. Just think about that for a minute; think about your schools, banks, manufacturing plants, small businesses…they are all systems.
Hospitals are systems also. However, I find they traditionally operate within silos, making it difficult to learn from others within their own systems (organizations), as well as from those outside them. In a hospital, in its most basic form, a system may look like the following:
Input = Patient presents with some combination of undesirable symptoms
Transformation = Patient is assessed, diagnosed and a treatment plan implemented
Output = Patient is discharged in a better condition than when they entered the hospital.
NOTE: Oftentimes, however, the disease/condition is not treatable given known treatments/technologies, and a patient may pass away as a result. This is unique to the healthcare (HC) industry, unlike the physical sciences of engineering: there are still many unknowns about the human condition, and we learn along the way.
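To make that parallel concrete, here is a minimal sketch in Python. The names and placeholder logic are entirely hypothetical (not a real clinical or plant model); the point is only that a hospital and a manufacturing operation share the same input → transformation → output shape:

```python
# Minimal sketch: hypothetical names and placeholder logic only.
# The point is the shared shape: inputs -> transformation -> output.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SystemModel:
    name: str
    inputs: List[str]
    transformation: Callable[[List[str]], str]

    def run(self) -> str:
        # The output is whatever the transformation produces.
        return self.transformation(self.inputs)


def treat(symptoms: List[str]) -> str:
    # Stand-in for assess -> diagnose -> treat -> discharge.
    return f"patient discharged after treatment for: {', '.join(symptoms)}"


def brew(raw_materials: List[str]) -> str:
    # Stand-in for a manufacturing transformation.
    return f"finished product made from: {', '.join(raw_materials)}"


hospital = SystemModel("hospital", ["fever", "cough"], treat)
brewery = SystemModel("brewery", ["barley", "hops", "water"], brew)

for system in (hospital, brewery):
    print(f"{system.name}: {system.run()}")
```

The details of each transformation differ enormously; the structure does not, and that shared structure is what lets Reliability principles travel between industries.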
Given this very simplified macro view of a hospital as a system, why wouldn’t the principles of Reliability from manufacturing apply?
Disciplined Reliability Engineering (RE) has been around for more than 50 years. Its roots are embedded in the aerospace and nuclear sectors. True RE principles are timeless and field-proven in heavy manufacturing as fiscally sound business practices, which not only optimize the bottom line but also dramatically improve personnel safety.
For this reason, Reliability Departments are commonplace in such progressive manufacturing industries. The formal release of ISO Standard 55000 (Asset Management – Overview, Principles and Terminology) further supported the standardization of such comprehensive and holistic Reliability principles around the world.
Why are such proven principles not standard in our healthcare system?
For the purposes of keeping this discussion at a macro level, I view Reliability as becoming a master at Proaction and Systems Thinking. Maintenance people/First Responders are critical to any operation, but they are what I call ‘today’ people. They handle the ‘here and now’ of today.
Reliability personnel are in charge of ‘tomorrow’. They are the ‘proact-ers’. They try to prevent the consequences the ‘react-ers’ have to respond to. So a great deal of Reliability involves becoming proficient at risk assessment, measurement and prediction. If we can foresee the signals of impending failure (consequences), we can take planned actions to prevent the failure. When we can schedule such actions, it is much less expensive than having to deal with the unplanned outcomes under emergency conditions.
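As a rough illustration of that cost argument, the sketch below compares the expected annual cost of running an asset to failure against funding a planned intervention. All probabilities and dollar figures are made up for the example; in practice they would come from a site’s own failure history and cost data.

```python
# Illustrative only: all probabilities and costs below are made up.
# Risk is treated as probability-of-failure x cost-of-consequence,
# comparing a reactive (run-to-failure) stance with a proactive one.

def expected_annual_cost(p_failure: float,
                         unplanned_cost: float,
                         planned_cost: float,
                         p_failure_after_task: float):
    """Return (reactive, proactive) expected annual costs."""
    reactive = p_failure * unplanned_cost
    proactive = planned_cost + p_failure_after_task * unplanned_cost
    return reactive, proactive


# Hypothetical asset: 30% annual chance of failure, a $250k emergency
# repair (parts, overtime, lost production), versus a $20k planned task
# that cuts the failure probability to 5%.
reactive, proactive = expected_annual_cost(
    p_failure=0.30,
    unplanned_cost=250_000,
    planned_cost=20_000,
    p_failure_after_task=0.05,
)

print(f"Run-to-failure expected cost: ${reactive:,.0f}")   # $75,000
print(f"Proactive plan expected cost: ${proactive:,.0f}")  # $32,500
```

The numbers are invented, but the structure of the argument is the point: planned, scheduled action is bought at a known, modest cost, while unplanned failure is paid for at emergency prices.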
Many in healthcare have the paradigm that outsiders don’t understand that HC is ‘different’. Therefore, they feel many of the successful approaches from other industries will not work in HC. I find this to be a restraining paradigm that prevents true strides in patient safety from materializing, because, as discussed, Reliability views all organizations as ‘systems’, and those approaches transfer.
Let’s take diagnosis error, for example. This is an enormous issue in the U.S. healthcare system today. Of what is reported, about 1 in 10 diagnoses is in error (missed, wrong or delayed). The Society to Improve Diagnosis in Medicine reports that diagnostic error kills between 40,000 and 80,000 patients a year. Such cases are often what funnel into our court systems as malpractice cases (that is a whole other topic…the impact of our tort system on clinical decision-making).
The term ‘diagnosis error’ itself has not been universally defined, nor have thresholds been established to identify such errors or error rates. Oftentimes such errors are called ‘complications’ and the patient never knows an error occurred. So how can we even get a handle on the magnitude of the diagnosis error rate if we can’t define and identify what a diagnosis error is?
I see every diagnosis made as a decision by a single, unique care provider. It is a conclusion drawn from their assessment of the patient and the evidence provided: reports from the various tests ordered, health history and the symptoms the patient presents with. A diagnosis is an evidence-based decision by an individual, shaped by their own practices, experience, successes and training.
What about pilots and nuclear operators? When they have to react to unexpected conditions in their working environments, don’t they have to go through the same information gathering process to make a decision? Is this their ‘diagnosis’ of the situation? What disciplined thought processes do they have to go through in their minds to assess, diagnose and take appropriate correct actions to produce successful outcomes? To me, the human reasoning process is similar, even though the conditions will be different.
Does the fact that, for pilots and nuclear operators (and others in such critical occupations), a wrong decision could take their own lives as well impact their decisions? A doctor’s diagnosis of a patient will not often threaten the doctor’s own life. Does that make a difference in the soundness of the decision? Would they make a different decision if it were their own life (or that of their family) at stake? I don’t know, as I am not an MD and do not sit in their seats to know such pressures.
I am in no way minimizing the critical role of healthcare providers. I have the utmost respect for this profession. I am trying to draw a correlation between viewing other businesses as a system, and viewing a hospital as a system.
Doctors also have the burden of dealing with external influences (sociotechnical factors) that impact their desire to diagnose properly the first time (e.g. – insurance companies and regulators). We as laymen may simply see a ‘diagnosis’ as a one-time action; however, the reality is that a diagnosis is an evolving process.
Insurance companies often force an initial diagnosis to be entered into an EHR (Electronic Health Record) system via a code, in order for the physician/hospital to be reimbursed accordingly. The reality is that the final diagnosis does not always match the original and/or working diagnosis, because additional testing/information has come to light and shown the condition to be something different. However, the initial diagnosis is sometimes never removed from the EHR system and ends up following that patient for the rest of their life.
Many of the manufacturing businesses that I have dealt with over the past 30+ years make ‘things’. They make oil, paper, beer, steel, power, etc. I have worked with service companies whose end product is customer satisfaction (e.g. – package delivery, banking, telecommunications, etc.). Every business produces a deliverable of some type to its customer.
The product of a hospital (to me) is quality of life! It is far more important than the widgets made by the industries I deal with most of the time. Those industries have proven that true Reliability concepts (including the role of human factors/human performance sciences in decision-making/reasoning, via Root Cause Analysis) are extremely profitable while making people safer. Given that, why does healthcare lag manufacturing in this area? Why doesn’t it lead, considering its ‘product’ is the most important?
We all have a vested interest in our healthcare system as we will all use it at some point in our lives. This is why I believe that learning and sharing such best practices is in everyone’s best interest.
I think the core of understanding all undesirable outcomes is understanding why well-intentioned decision-makers make poor decisions at the time they make them. If we focus our efforts on this reasoning process, we will uncover the deficient management/organizational systems (e.g. – policies, procedures, practices, management oversight, human factors/human performance systems, training systems, purchasing systems, etc.) that help trigger the bad decisions.
Simply blaming the decision-maker and disciplining them is a recipe for the recurrence of the decision by someone else, using the same deficient systems and reasoning.
At some point, leadership will have to look in the mirror and consider that they may be part of the problem.
From a Reliability perspective, the fewer the surprises (unexpected interruptions to reliable operations), the more productive and safe the system will be for all involved. This makes everyone happy (finance, stakeholders, employees/patients, community and shareholders)!