Data is good. Quality data is better. “Big data” is even fashionable. But it won’t help you solve a “small data” problem.
Let me guess. The quality of your CMMS data is not great, but if it were, you could really do something with it. For decades, in fact. Just need to re-tweak those failure codes!
And recently, you were very tempted by a consultant’s new “data-driven” approach that promised to deliver staggering results. And why not?
Because it’s not a “big data” problem.
Some problems cannot be solved by “big data”
The truth is that the complex stochastic behavior of your production system cannot be tamed by a single algorithm. Nor can its behavior be known via the collection of “unstructured” data. Figure 1 shows an example of the unstructured data contained within your CMMS.
Not convinced? Take a look at the OREDA handbook: an amazing collection of 39,000 failure and 73,000 maintenance records for 17,000 equipment units. All the high-quality data the heart desires. But can you do anything useful with it?
“Significance and structure” are more important than “quality and quantity”
Don’t get me wrong; data is still good. But the focus on data “quality” and “quantity” is, in this case, misplaced. The analysis of a complex stochastic system requires the focus to shift to data “significance” and “structure”. Figure 2 illustrates the derivation of “significant” data from the CMMS data shown in Figure 1.
“Significant” data, when appropriately structured, will describe the behavior of the production system and thereby enable its optimization. Data “structure” is understood to be the rules and relationships that define how an individual data value impacts the system behavior.
Data may be structured, for example, by placing it within the context of a model that simulates the production system behavior. Refer, for example, to Figure 3. The original CMMS data (Figure 1) has been turned into significant data (Figure 2) and is given “structure” in Figure 3.
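To make the derivation concrete, here is a minimal sketch of turning raw CMMS work orders into one kind of “significant” data: times between failures. The equipment tag, field names, dates, and work-order types below are all hypothetical, not from any specific CMMS export.

```python
from datetime import date

# Hypothetical CMMS export: raw work orders for one pump tag.
# Field names and values are illustrative only.
work_orders = [
    {"tag": "P-101A", "date": date(2020, 3, 14), "type": "corrective"},
    {"tag": "P-101A", "date": date(2021, 1, 2),  "type": "preventive"},
    {"tag": "P-101A", "date": date(2021, 8, 30), "type": "corrective"},
    {"tag": "P-101A", "date": date(2023, 2, 11), "type": "corrective"},
]

# "Significant" data: times between failures (days), keeping only
# corrective work orders as proxies for functional failures.
failures = sorted(wo["date"] for wo in work_orders if wo["type"] == "corrective")
tbf_days = [(b - a).days for a, b in zip(failures, failures[1:])]
print(tbf_days)  # → [534, 530]
```

Note how little “significant” data survives: four work orders yield only two inter-failure intervals. That is the small data problem in miniature.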
The “small data” problem and its implications
Parameters describing a pump’s stochastic behavior are estimated in Figure 2. The confidence intervals are broad, owing to the small quantity of available data. This is the standard situation in a process plant environment.
The uncertainty is NOT a problem. It is THE problem. It is THE NATURE of the process plant reliability engineering problem. A small data problem.
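To illustrate just how broad those intervals get, here is a sketch of a classical MTBF estimate from a hypothetical sample of three inter-failure times, under an assumed exponential failure model. The chi-square quantiles are standard table values for 6 degrees of freedom; the data is invented for illustration.

```python
# Hypothetical inter-failure times (days) for one pump, as might be
# derived from CMMS records; three failures is a typical sample size.
tbf_days = [534, 530, 480]
n = len(tbf_days)
total_time = sum(tbf_days)

# Point estimate of MTBF (exponential model: MLE = total time / failures).
mtbf_hat = total_time / n

# Exact 95% CI for a failure-truncated exponential test uses the
# chi-square distribution with 2n degrees of freedom. The quantiles
# below are standard table values for 2n = 6 degrees of freedom.
chi2_lower_q = 1.237    # chi-square 0.025 quantile, 6 dof
chi2_upper_q = 14.449   # chi-square 0.975 quantile, 6 dof
mtbf_lower = 2 * total_time / chi2_upper_q
mtbf_upper = 2 * total_time / chi2_lower_q

print(f"MTBF estimate: {mtbf_hat:.0f} days")
print(f"95% CI: [{mtbf_lower:.0f}, {mtbf_upper:.0f}] days")
```

The point estimate is about 515 days, but the 95% interval runs from roughly 214 to 2,496 days: the upper bound is more than ten times the lower bound. No failure code cleanup will fix that; only more operating experience, or smarter engineering, will.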
The small data problem implies that mitigating measures need to be stochastically robust to compensate for the inherent uncertainty. For example, the installation of a redundant pump is a stochastically robust mitigating measure.
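What “stochastically robust” means can be sketched with steady-state availability arithmetic. Assuming a known repair time and independent, identical pumps, the calculation below sweeps the MTBF across a hypothetically broad confidence interval; all numbers are illustrative.

```python
def availability(mtbf, mttr):
    """Steady-state availability of a single repairable unit."""
    return mtbf / (mtbf + mttr)

mttr = 5.0  # days to repair; assumed known, for illustration only
results = []
# Sweep MTBF across a broad confidence interval (lower bound,
# point estimate, upper bound) typical of a small failure sample.
for mtbf in (214.0, 515.0, 2496.0):
    a1 = availability(mtbf, mttr)
    # Two identical pumps in parallel (1oo2, independent failures):
    # the system is down only when both pumps are down.
    a2 = 1 - (1 - a1) ** 2
    results.append((mtbf, a1, a2))
    print(f"MTBF {mtbf:6.0f} d: single {a1:.4f}, redundant {a2:.6f}")
```

The single pump’s availability swings visibly across the interval; the redundant pair stays above 99.9% even at the pessimistic end. The design decision holds up no matter where the true parameter lies, and that is the robustness that matters.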
In Figure 3, a simple system is modeled. The simple model may be placed within a larger model, and so on, until eventually the entire production system is modeled. Hundreds of small data problems.
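One common way those nested models compose, assuming independent subsystems in a series production line, is simple multiplication of availabilities. The subsystem names and values below are invented for illustration.

```python
# Sketch: subsystem availability estimates (each the output of its own
# small-data model) rolled up into a system-level estimate, assuming
# independence. All names and values are illustrative.
subsystems = {
    "feed pumps (1oo2)": 0.9995,
    "compressor": 0.985,
    "cooling loop": 0.995,
}

# Series structure: the line produces only if every subsystem is up.
system_availability = 1.0
for name, a in subsystems.items():
    system_availability *= a

print(f"System availability: {system_availability:.4f}")  # → 0.9796
```

The roll-up also shows where to focus: the system can be no better than its weakest subsystem, so the compressor, not the already-redundant pumps, dominates the result.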
Do not despair! Just apply the principles of systems reliability engineering!
So, what have we learned? Let me summarize:
- Your CMMS is full of insignificant, unstructured data.
- You have not started collecting the “significant” data that describes your production system.
- You have no way of structuring your “significant” data.
- You have no time to solve the hundreds of small data problems.
But there is no need to despair. The CMMS contains valuable data that can be used to generate significant data. Your team has a heap of knowledge, experience and intuition that can be used to generate significant data, e.g. via expert interviews. And the principles of systems reliability engineering will help you prioritize where to start and the level of detail required; not all subsystems are worth optimizing!
RAMS Mentat GmbH has developed an innovative technical and systems engineering approach – and supporting tools – that enables the reliability and safety performance of an entire production system to be optimized under capital investment, operational and maintenance cost constraints.