Myron Tribus’ UCLA Statistical Thermodynamics class introduced me to entropy, -SUM[p(t)ln(p(t))]. (p(t) is the probability of state t of a system.) Professor Tribus later advocated maximum-entropy reliability estimation, because it “…best represents the current state of knowledge about a system…” [Principle of maximum entropy – Wikipedia] Caution! This article contains statistical neurohazards.
Claude Shannon wrote that entropy (with log base 2) measures information in bits, “…an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel.” [Beirlant et al.]
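As a quick check on the arithmetic, here is a minimal Python sketch; the four-state distribution is made up purely for illustration. The natural-log sum is the entropy formula above, and the same sum with log base 2 is Shannon’s compression limit in bits per symbol.

```python
import numpy as np

# Entropy of a discrete state distribution p(t).
# These probabilities are made up purely for illustration.
p = np.array([0.5, 0.25, 0.125, 0.125])

entropy_nats = -np.sum(p * np.log(p))    # -SUM[p(t) ln p(t)], natural log
entropy_bits = -np.sum(p * np.log2(p))   # same sum, log base 2

print(f"{entropy_nats:.4f} nats, {entropy_bits:.2f} bits per symbol")
# 1.2130 nats, 1.75 bits: no lossless code can average fewer than
# 1.75 bits per symbol for this source.
```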
Maximum likelihood estimation is one way to estimate reliability from data. It maximizes the joint probability density of the observed data, PRODUCT[p(t)], e.g., for observed failures at ages t. That is equivalent to maximizing the log likelihood SUM[ln(p(t))], i.e., minimizing -SUM[ln(p(t))]. Maximum entropy reliability estimation maximizes the entropy -SUM[p(t)ln(p(t))], which is the expected value of the negative log likelihood -ln(p(t)). Fine, if you have life data, ages at failures t, censored or not.
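Here is a minimal sketch of the exponential special case, assuming complete (uncensored) failure ages and made-up data. It is not the article’s method, just the textbook fact that the exponential maximizes differential entropy among nonnegative life distributions with a given mean, so with only a mean constraint the maximum-entropy rate coincides with the exponential MLE, 1/mean.

```python
import numpy as np

# Illustrative failure ages in hours -- made-up, uncensored data.
ages = np.array([120.0, 340.0, 455.0, 610.0, 980.0])
n = ages.size
mean_age = ages.mean()

# Maximum likelihood for an exponential life model:
# lambda_hat = n / SUM[t] maximizes PRODUCT[lambda * exp(-lambda * t)].
lam_mle = n / ages.sum()                      # = 1 / mean_age
log_lik = n * np.log(lam_mle) - lam_mle * ages.sum()

# Maximum entropy: among life distributions on [0, inf) with the observed
# mean, the exponential with rate 1/mean maximizes differential entropy
# -INTEGRAL[p(t) ln p(t) dt], which equals 1 - ln(lambda) for an exponential.
lam_maxent = 1.0 / mean_age
entropy = 1.0 - np.log(lam_maxent)

print(f"MLE rate     : {lam_mle:.6f} per hour, log-likelihood {log_lik:.2f}")
print(f"Max-ent rate : {lam_maxent:.6f} per hour, entropy {entropy:.3f} nats")
# With only a mean constraint, the two estimates coincide.
```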