Lifetime Evaluation vs. Measurement. Part 2.
Guest post by Oleg Ivanov
The result of life testing can be either a measurement or an evaluation of the lifetime.
Measurement of the lifetime requires a lot of testing to failure. The results provide us with the life (time-to-failure) distribution of the product itself. It is long and expensive.
Evaluation of the lifetime does not require as many test samples, and these tests can be run without failures. It is faster and cheaper [1]. A drawback of evaluation is that it does not give us the lifetime distribution. An evaluation checks only the lower bound of reliability, and the interpretation of its results depends on the method of evaluation (the number of samples, the test conditions, and the test time).
The properties of measurements and evaluations are different (“the lifetime distribution” vs. “the lower bound”), so the applications of measurement and evaluation of the lifetime must be different as well. We see this difference with complex systems. For example, we can use measurements (life distributions) directly in a reliability block diagram model of a complex system, whereas using evaluations directly in calculations for the system may be incorrect.
There is an old parable, “The Blind Men and the Elephant”:
Six blind men studied an unknown phenomenon – in this case, an elephant. The first blind man touched the elephant’s leg and reported that the unknown phenomenon was similar to a tree trunk. The second blind man touched the elephant’s stomach and said that the elephant was like a wall. The third blind man touched the elephant’s ear and asserted that the phenomenon was precisely like a fan. The fourth blind man touched the elephant’s tail and described the elephant as a piece of rope. The fifth blind man felt the elephant’s tusks and declared the phenomenon to be a spear. The sixth blind man touched the elephant’s snout and with great fear announced that the phenomenon was a snake.
Although each man’s subjective experience was true, it was not the totality of truth. In our case, we cannot determine the reliability of a system from evaluations of the reliability of its components using a reliability block diagram model. Does this seem heretical?
Here is a simple example:
Case 1. We tested 30 two-component products without failures. The estimate of the product reliability for the test time is R = 0.9 with CL = 0.95, using the well-known expression 1 − CL = R^n for binomial tests without failures.
Case 2. We also tested, separately, 30 samples of component 1 and 30 samples of component 2 of this two-component product, again without failures. The estimate of the component 1 reliability for the test time is R = 0.9 with CL = 0.95, using 1 − CL = R^n. The estimate of the component 2 reliability for the test time is likewise R = 0.9 with CL = 0.95, using 1 − CL = R^n.
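As a quick numeric check, the zero-failure bound comes from solving 1 − CL = R^n for R. Here is a minimal Python sketch (the function name is mine, for illustration only):

```python
def zero_failure_reliability(n: int, cl: float) -> float:
    """Lower reliability bound from n binomial tests with zero failures,
    solving 1 - CL = R**n for R."""
    return (1.0 - cl) ** (1.0 / n)

# Case 1: 30 two-component products tested as systems, no failures
print(zero_failure_reliability(30, 0.95))  # ~0.905

# Case 2: each component tested separately, 30 samples, no failures
print(zero_failure_reliability(30, 0.95))  # ~0.905 per component
```

Rounded to one decimal place this gives the R = 0.9 quoted above; the table below carries the unrounded value 0.904966147.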
“A lot of research has been done on how to estimate the system reliability and its confidence intervals from its subsystem test data” [2]. There are methods for evaluating system reliability from component reliabilities when there are no failures. The expression 1 − CL = R^n defines the distribution of the reliability of a component. Guo et al. [2] have shown that this distribution is the same as a beta distribution β(r; a = n, b = 1), where a and b are the parameters of the beta distribution.
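This equivalence is easy to verify numerically, because the CDF of β(r; a = n, b = 1) is r^n. A sketch of such a check (assuming SciPy is available; this snippet is mine, not from [2]):

```python
from scipy.stats import beta

n, cl = 30, 0.95
# For Beta(n, 1) the CDF is P(R <= r) = r**n, so the (1 - CL) quantile
# reproduces the zero-failure bound R = (1 - CL)**(1/n).
print(beta.ppf(1.0 - cl, n, 1))  # ~0.904966, equals (0.05)**(1/30)
print(beta.mean(n, 1))           # n/(n+1) ~ 0.967742
print(beta.var(n, 1))            # ~0.000975546
```

These values match the component rows of the table below.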
The calculation of the component reliability is shown in the first two rows of the following table:
| Component | N  | Mean     | Variance    | a        | b        | R (CL = 0.95) |
|-----------|----|----------|-------------|----------|----------|---------------|
| 1         | 30 | 0.967742 | 0.000975546 | 30       | 1        | 0.904966147   |
| 2         | 30 | 0.967742 | 0.000975546 | 30       | 1        | 0.904966147   |
| System    | –  | 0.936524 | 0.001828198 | 29.51588 | 2.000521 | 0.853739992   |
The mean and the variance of the component reliabilities, calculated as for a beta distribution, are shown as well. The mean and the variance of the reliability of a series system formed from the two components are given in the third row of the table. The distribution of the system reliability is approximated by a beta distribution β(r; a ≈ 29.5, b ≈ 2) having the same first two moments. The evaluation of the reliability of the product is R = 0.85 with CL = 0.95.
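The system row can be reproduced in a few lines of Python. This is a sketch of the moment-matching step under the assumption of two independent components in series (SciPy assumed available; not code from [2]):

```python
from scipy.stats import beta

# Each component's reliability after 30 zero-failure tests is Beta(30, 1)
m, v = beta.mean(30, 1), beta.var(30, 1)

# Moments of the series-system reliability R1 * R2 (independent components):
# E[R1*R2] = E[R1]E[R2], and Var uses E[(R1*R2)^2] = E[R1^2]E[R2^2]
m_sys = m * m
v_sys = (v + m**2) ** 2 - m_sys**2

# Beta(a, b) with the same first two moments: a + b = m(1 - m)/v - 1
k = m_sys * (1.0 - m_sys) / v_sys - 1.0
a_sys, b_sys = m_sys * k, (1.0 - m_sys) * k
print(a_sys, b_sys)                  # ~29.51588, ~2.000521

# One-sided lower bound at CL = 0.95: the 5th percentile of Beta(a, b)
print(beta.ppf(0.05, a_sys, b_sys))  # ~0.8537
```

This reproduces the a, b, and R (CL = 0.95) values in the system row of the table.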
The method is good, but questions arise. There is a big difference between the estimates R = 0.9 and R = 0.85. Can you explain this difference? The CL is the same, and the components have shown an identical ability to pass the tests, whether in a product or separately. (We are certain that the test environment of the components was reproduced completely.) In practice, we must run more tests at the component level to obtain the same reliability.
It would be correct if we had measured a reliability of R = 0.9 for component 1 and R = 0.9 for component 2 and had calculated the reliability of the product as R = 0.9 × 0.9 = 0.81. For evaluations, however, it is incorrect, because we are merely multiplying our mistakes. These mistakes do not depend on the method used to evaluate the system reliability from the component reliabilities; they are made at the level of the component reliability, as in the story about the elephant.
Why does this happen? The “worst-case” method that we use for deriving evaluations is the reason. The expression 1 − CL = R^n is a special case of this method.
- [1] http://nomtbf.com/2016/03/lifetime-evaluation-v-measurement/
- [2] Huairui Guo, Sharon Honecker, Adamantios Mettas, and Doug Ogden (ReliaSoft), “Reliability Estimation for One-Shot Systems with Zero Component Test Failures,” presented at RAMS 2010, January 25–28, San Jose, CA.
Anderson says
Interestingly, if the two components (shown in the table) were tested together as a system with no failures, then Rs at 95% CL would be 0.904966147.
This verifies your statement that it is incorrect to multiply component reliabilities to evaluate a system reliability.
[Note: my major technology is one-shot devices that rarely fail when used within specification limits.]
Oleg Ivanov says
Hi Anderson,
Yes, it is. Testing the system as a whole is “Case 1” above.