What if the installed base or the cohorts in successive periods have different reliabilities because of nonstationarity? What does that do to forecasts, estimates, reliability predictions, diagnostics, spares stock levels, maintenance plans, etc.? Assuming stationarity is equivalent to assuming all installed base, cohorts, or ships have the same reliability functions. At what cost? Assuming a constant failure rate is equivalent to assuming everything has exponentially distributed times to failure. At what cost?
Benefits and Costs of Assumptions?
What are the benefits of reliability assumptions (in addition to obvious convenience)?
- Assuming a constant failure rate makes it easy to compute the failure rate: failure counts divided by numbers at risk. You don't even need lifetime data! MTBF predictions are easy too (see the sketch after this list):
MTBF prediction = 1/Σ(component failure rate × fudge factor)
https://sites.google.com/site/fieldreliability/would-you-like-constant-failure-rate/
- Assuming stationarity means you can use all the data to estimate reliability or survival functions, with distribution assumptions (exponential, Weibull, lognormal, etc.) or without (nonparametric), which reduces the variance of the estimates.
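As a minimal sketch of that parts-count arithmetic (the component names, rates, and fudge factors below are made-up illustrative values, not from any handbook):

```python
# Parts-count MTBF prediction under the constant-failure-rate assumption.
# Component failure rates (failures per million hours) and "fudge factors"
# (quality/environment multipliers) are hypothetical illustrative values.
component_rates = {"capacitor": 0.05, "resistor": 0.01, "ASIC": 0.90}  # per 1e6 hours
fudge_factors   = {"capacitor": 2.0,  "resistor": 1.0,  "ASIC": 4.0}

# System failure rate = sum of (component rate * fudge factor); MTBF = 1/rate.
system_rate = sum(component_rates[c] * fudge_factors[c] for c in component_rates)
mtbf_hours = 1e6 / system_rate
print(f"Predicted system failure rate: {system_rate:.3f} per 1e6 hours")
print(f"Predicted MTBF: {mtbf_hours:,.0f} hours")
```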
What are the costs of assumptions?
- It’s not easy to quantify the cost of assumptions, even faulty ones. I have examples of faulty assumptions (the shortage of AVDS-1790 tank engines for M88A1s in the Gulf War), but I am looking for examples of their costs.
- The sine qua non of scientific publication is that enough data and information should be published so that readers could verify the results, if they had the inclination, time, and $$$ (UC Berkeley Public Health Department lecture, circa 1970). Publications exclude data that may contradict assumptions.
- The cost of software specialized to fit the assumptions, and the time required to learn how to use it.
- The taxpayer costs of research grants, reviews, and standards incurred in the promulgation of standards and regulations based on questionable assumptions.
I can’t think of an all-purpose measure of assumption costs, but there is an all-purpose measure of information: entropy https://accendoreliability.com/can-estimate-reliability-without-life-data/.
Kullback-Leibler divergence [aka relative entropy] quantifies the information gained by not assuming stationarity or not assuming exponential distributions (constant failure rate) https://en.wikipedia.org/wiki/Kullback–Leibler_divergence. Its formula is ΣP(t)*ln(P(t)/Q(t)), where P(t) and Q(t) are discrete probability distributions and Q(t) is the assumed probability density function discretized over the same intervals as P(t); e.g., Q(t)=λexp(−λt)Δt for the exponential. Bang-per-buck could be regarded as (d Decision/d Information)*(d Information/d $$$). Do your decisions depend on information? Do you use all the information in your data?
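A minimal sketch of that computation; the P values are a hypothetical nonparametric estimate, and Q discretizes an exponential with an assumed rate λ over the same unit intervals:

```python
import math

# Hypothetical nonparametric pmf estimate P(t) over unit age intervals
# t = 1..5 (made-up values that sum to 1).
P = [0.10, 0.30, 0.25, 0.20, 0.15]

lam = 0.4  # assumed exponential failure rate for Q (hypothetical)
# Discretize the exponential over the same unit intervals:
# Q(t) = exp(-lam*(t-1)) - exp(-lam*t), renormalized over the 5-period window.
Q = [math.exp(-lam * (t - 1)) - math.exp(-lam * t) for t in range(1, 6)]
total = sum(Q)
Q = [q / total for q in Q]

# D(P||Q) = sum P(t) * ln(P(t)/Q(t)): information lost by assuming Q.
kl = sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)
print(f"Kullback-Leibler divergence D(P||Q) = {kl:.4f} nats")
```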
A survival function is “the ‘probability’ that a patient, device, or other object of interest will ‘survive’ past a certain time” [age] [Wikipedia]. It’s the same as a reliability function. Actuaries, biostatisticians, and reliability engineers believe you must have lifetime data to make nonparametric estimates of survival or reliability functions [Ulpian (220 AD), Lifetime Data Analysis journal, Lawless, Klein and Moeschberger, etc.]. (The list of reliability citations that claim lifetime data is required is too long to include here.)
People have been making nonparametric estimates of reliability from population ships and returns counts, without unwarranted assumptions, since 1990 [George; Oscarsson and Hallberg]. People have been making nonparametric survival function estimates from case and death counts for epidemics since AIDS and SARS [George; Harris and Rattner; Chan].
Nonparametric failure rate function estimates make it evident whether the failure rate is constant as a function of age. Have we checked ships and returns counts or case and death counts for stationarity? Not until now. Make nonparametric estimates of reliability and survival functions for each period’s ships or for each cohort. Compare the reliability and survival function estimates, cohort by cohort, to see whether stationarity is a viable assumption.
Table 1 shows some early COVID-19 data and the maximum likelihood estimates of the survival (reliability) function G(t), P[Death date − Case report date > t], and the time-specific actuarial rates a(t). The differences among the table 1 actuarial rates indicate that the actuarial death rate is not constant.
| Period | Cases | Deaths | G(t) | a(t) |
|--------|-------|--------|--------|--------|
| 1 | 5 | 0 | 0 | 0 |
| 2 | 23 | 1 | 0.0255 | 0.0255 |
| 3 | 78 | 2 | 0.0255 | 0 |
| 4 | 252 | 6 | 0.0255 | 0 |
| 5 | 442 | 13 | 0.0294 | 0.0040 |
| 6 | 398 | 17 | 0.0342 | 0.0049 |
| 7 | 488 | 26 | 0.0342 | 0 |
| 8 | 564 | 29 | 0.0342 | 0 |
| 9 | 546 | 26 | 0.0342 | 0 |
| 10 | 672 | 25 | 0.0342 | 0 |
| 11 | 606 | 26 | 0.0342 | 0 |
| 12 | 747 | 21 | 0.0342 | 0 |
| 13 | 917 | 24 | 0.0342 | 0 |
| 14 | 954 | 22 | 0.0342 | 0 |
| 15 | 1096 | 23 | 0.0342 | 0 |
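A minimal check, in Python, that the G(t) column follows from compounding the actuarial rates (the column increases from 0, so G(t) is read here as cumulative mortality, the complement of survival):

```python
# Reproduce table 1's G(t) column from its actuarial rates a(t):
# G(t) = 1 - prod_{s <= t} (1 - a(s)).
a = [0, 0.0255, 0, 0, 0.004, 0.0049, 0, 0, 0, 0, 0, 0, 0, 0, 0]

survivors = 1.0
for t, rate in enumerate(a, start=1):
    survivors *= 1.0 - rate
    print(f"t={t:2d}  a(t)={rate:.4f}  G(t)={1.0 - survivors:.4f}")
# Prints G(2)=0.0255, G(5)=0.0294, G(6)=0.0342, matching table 1.
```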
Figure 1 is a “broom” chart, a graph of cumulative distribution, reliability, or survival function estimates from successive subsets of the data: the first two periods, the first three periods, etc., up to all periods, the longest line. (Jerry Ackaret taught me that name for my charts.) Figure 1 shows that the full 15-period data set was not stationary: different period cohorts had different survival functions. The steadily decreasing cumulative distribution function (maximum likelihood) estimates, as the subsets add more recent periods, indicate nonstationarity, perhaps due to improvement in treatment or changes in data reporting.
Schweitzer Engineering Laboratories (SEL) products used EEPROMs made by Catalyst Semiconductor. Some EEPROMs lost their little memories after Catalyst Semiconductor shrank the die size in early 1999. Later EEPROMs were OK. Catalyst Semiconductor was not interested in process control using broom charts for early warning. ON Semiconductor bought Catalyst Semiconductor in 2008.
Estimate Reliability Functions for Each Cohort
To estimate the variance of demands for replacement parts, I adapted the “bootstrap” [Efron, https://en.wikipedia.org/wiki/Bootstrapping_(statistics)]. The bootstrap variance is computed by sampling from the estimated distribution of times to failure (presumably estimated from lifetime data) and computing the variance of the samples. To bootstrap simulated samples of a nonstationary process, without life data, I needed to estimate the reliability functions for each cohort, NOT the broom chart estimates from cohort subsets (periods 1 and 2; periods 1, 2, and 3; etc.). So I made spreadsheets for maximum likelihood and least squares reliability function estimates from each cohort.
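A minimal sketch of that bootstrap; the cohort sizes and the cohort-specific failure-age distributions below are hypothetical stand-ins for the spreadsheet estimates:

```python
import random

random.seed(1)

# Hypothetical cohort sizes (units shipped in periods 1..3) and cohort-specific
# failure-age pmfs over ages 1..4; the last entry is "survives the window".
cohorts = {1: 50, 2: 80, 3: 60}
pmf = {
    1: [0.02, 0.03, 0.04, 0.05, 0.86],
    2: [0.01, 0.02, 0.03, 0.04, 0.90],
    3: [0.01, 0.01, 0.02, 0.03, 0.93],
}
ages = [1, 2, 3, 4, None]  # None = no failure within the window
target_period = 4          # forecast replacement demand in this calendar period

demands = []
for _ in range(2000):  # bootstrap replications
    demand = 0
    for s, n in cohorts.items():
        for age in random.choices(ages, weights=pmf[s], k=n):
            # A unit shipped in period s failing at age t fails in period s+t-1.
            if age is not None and s + age - 1 == target_period:
                demand += 1
    demands.append(demand)

mean = sum(demands) / len(demands)
var = sum((d - mean) ** 2 for d in demands) / (len(demands) - 1)
print(f"Bootstrap demand in period {target_period}: mean={mean:.2f}, var={var:.2f}")
```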
The least squares estimate minimizes the sum of squared errors (SSE) between observed and hindcast deaths, summed over period cohorts. Hindcasts are actuarial forecasts of each period cohort’s deaths as functions of the actuarial rates, with different actuarial rates for each period cohort. The sums of the hindcasts from each cohort (from period 1; periods 1 and 2; periods 1, 2, and 3; etc.) estimate each period’s deaths. Excel Solver finds the actuarial rates that minimize the SSE.
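The Solver step translates to any nonlinear optimizer. Here is a minimal least squares sketch with scipy.optimize, using table 1’s counts; for brevity it fits a single shared set of actuarial rates (the stationary simplification) rather than the per-cohort rates described above, and its hindcast ignores depletion of survivors, a reasonable approximation when rates are small:

```python
import numpy as np
from scipy.optimize import minimize

# Table 1 data: cases (cohort sizes) and deaths by period.
cases  = np.array([5, 23, 78, 252, 442, 398, 488, 564,
                   546, 672, 606, 747, 917, 954, 1096])
deaths = np.array([0, 1, 2, 6, 13, 17, 26, 29, 26, 25, 26, 21, 24, 22, 23])
n = len(cases)

def hindcast(a):
    """Actuarial hindcast: expected deaths in period j are the sum over
    cohorts s <= j of cases[s] * a[j - s], where a[0] is the age-1 rate.
    (Depletion of survivors is ignored; rates here are small.)"""
    d_hat = np.zeros(n)
    for j in range(n):
        for s in range(j + 1):
            d_hat[j] += cases[s] * a[j - s]
    return d_hat

def sse(a):
    return np.sum((deaths - hindcast(a)) ** 2)

res = minimize(sse, x0=np.full(n, 0.01), bounds=[(0, 1)] * n)
print("Least squares actuarial rates a(t):", np.round(res.x, 4))
```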
The figure 3 maximum likelihood estimates by period cohort use the fact that each period cohort’s deaths can be modeled as Poisson outputs of an M(t)/G(s)/Infinity self-service system, which has a nonstationary Poisson input (M(t)), and that the sum of independent Poisson processes (cohorts) is also Poisson. I wrote formulas for the Poisson likelihood of deaths in each period in terms of the times to death from each period cohort. I used Excel Solver iteratively to find maximum likelihood estimates of the probability distributions G(s;t) for each cohort s = 1, 2, …, 15. (Solver allows no more than 200 variables, and the maximization involved 225 variables, so I reran Solver on different subsets of periods and ages.)
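The Poisson maximum likelihood version swaps the SSE objective for a negative log likelihood. This sketch continues the previous one (reusing its imports, data, and hindcast function) and keeps the same stationary simplification in place of the per-cohort G(s;t):

```python
# Poisson maximum likelihood variant: deaths in each period are Poisson
# with mean equal to the actuarial hindcast, so minimize the negative
# log likelihood. Reuses np, minimize, deaths, n, and hindcast() above.
def neg_log_likelihood(a):
    mu = np.maximum(hindcast(a), 1e-12)  # guard against log(0)
    # log L = sum_j [deaths_j * log(mu_j) - mu_j]  (dropping the constant log d!)
    return -np.sum(deaths * np.log(mu) - mu)

res_ml = minimize(neg_log_likelihood, x0=np.full(n, 0.01), bounds=[(0, 1)] * n)
print("Maximum likelihood actuarial rates a(t):", np.round(res_ml.x, 4))
```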
The nearly vertical cumulative distribution function estimate is from period 1, the oldest cohort, with only 5 cases. It is an outlier but doesn’t affect simulations much. All estimates are of G(s,t) for t = 1, 2, …, 15, even though the cohorts from periods 2, 3, 4, …, 15 did not contribute deaths of ages 15, 14, …, 2, respectively. The magic of maximum likelihood is that it maximizes the probability of what was observed using the variables allowed to vary, G(s,t) for t = 1, 2, …, 15, using all the information in the observations. Maximum likelihood acts as if there is some structure among the cohorts, not necessarily stationarity.
Figure 4 shows the Kullback-Leibler divergence of the nonparametric distribution function estimated for each cohort from the estimate from all cohorts (the G(t) column in table 1). The period 1 estimate is based on only five cases and was much larger than the other period cohort estimates. Figure 4 shows little divergence after the first period and before the 7th or 8th periods.
Figure 5 shows the Kullback-Leibler divergence of the nonparametric distribution function estimated for each period cohort from an exponential distribution estimated from the same cohort, with a different failure rate for each cohort. The vertical scales of figures 4 and 5 are the same. The Kullback-Leibler divergence of the period cohort survival functions from the all-cohort survival function (figure 4) is approximately 1/4 the divergence of the cohort survival functions from the cohort-specific exponential distributions (figure 5). Imagine the divergence from assuming the same single exponential distribution for all cohorts!
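A minimal sketch of the figure 4 versus figure 5 comparison, with made-up per-cohort CDF estimates standing in for the spreadsheet values; it computes each cohort’s divergence from the pooled estimate and from a cohort-specific, moment-matched exponential:

```python
import math

def kl(p, q):
    """D(P||Q) in nats for discrete pmfs on the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pmf_from_cdf(cdf):
    """Unit-interval pmf from a discrete CDF, normalized to the deaths
    observed within the window."""
    p = [cdf[0]] + [cdf[t] - cdf[t - 1] for t in range(1, len(cdf))]
    total = sum(p)
    return [x / total for x in p]

# Hypothetical per-cohort CDF estimates G(s,t), t = 1..5 (made-up values).
cohort_cdfs = [
    [0.010, 0.020, 0.028, 0.032, 0.034],
    [0.008, 0.018, 0.026, 0.031, 0.034],
    [0.005, 0.012, 0.020, 0.027, 0.034],
]
# Unweighted average as a stand-in for the all-cohort estimate.
pooled = [sum(c[t] for c in cohort_cdfs) / 3 for t in range(5)]
q_pooled = pmf_from_cdf(pooled)

for s, cdf in enumerate(cohort_cdfs, start=1):
    p = pmf_from_cdf(cdf)
    mean_age = sum((t + 1) * pi for t, pi in enumerate(p))
    lam = 1.0 / mean_age  # moment-matched exponential rate for this cohort
    q_exp = [math.exp(-lam * t) - math.exp(-lam * (t + 1)) for t in range(5)]
    total = sum(q_exp)
    q_exp = [x / total for x in q_exp]
    print(f"cohort {s}: D(P||pooled)={kl(p, q_pooled):.4f}  "
          f"D(P||exponential)={kl(p, q_exp):.4f}")
```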
Advice!
Don’t do as I did; check your data for nonstationarity before assuming all cohorts have the same reliability functions. I should have checked for stationarity. For this data set, assuming stationarity had approximately 1/4 the Kullback-Leibler divergence of assuming constant failure rates. Don’t assume a constant failure rate! If you want me to check your data for stationarity or for some distribution, send it to me. Ships (installed base by age) and returns (complaints, repairs, failures, even spares sales) counts are statistically sufficient!
References
Chan, Kwun Chuen Gary, “Survival Analysis Without Survival Data: Connecting Length-Biased and Case-Control Data,” Biometrika, Vol. 100, No. 3, doi: 10.1093/biomet/ast008, 2013
George, L. L., “Field Reliability Estimation Without Life Data,” ASA, SPES Newsletter, pp. 13−15, Dec. 1999 (For more, see https://sites.google.com/site/fieldreliability/)
Harris, Carl M. and Edward Rattner, “Estimating and projecting regional HIV/AIDS cases and costs, 1990−2000: A case study,” Interfaces, Vol. 27, No. 5, pp. 38−53, 1997
Klein, J. P. and Moeschberger, M. L., Survival Analysis: Techniques for Censored and Truncated Data, Springer, New York, 2003
Lawless, Jerald, Statistical Models and Methods for Lifetime Data, 2nd ed., Wiley, ISBN 978-0471372158, 2002
Lifetime Data Analysis journal, www.springer.com/journal/10985/
Oscarsson, Patric and Örjan Hallberg, “ERIVIEW 2000 – A Tool for the Analysis of Field Statistics,” Ericsson Telecom AB, 2000
Ulpian, “Life Table,” https://en.wikipedia.org/wiki/Ulpian%27s_life_table