#### with co-author Mark Fiedeldey

Working with data, we often choose a model to represent that data. We then use the data to estimate the parameters of the chosen model and we then calculate a confidence interval about the model’s parameters. The confidence interval gives us a numerical assessment of how certain we are, based on our data, of the true value of the distribution parameter we estimated.

Let’s work through a simple numerical example to see where it leads. Suppose we have the following set of 6 data points:

| Data |
| --- |
| 75.02 |
| 70.01 |
| 66.42 |
| 87.75 |
| 72.21 |
| 75.95 |

This data could be times to failure from a reliability test, some dimension of a set of randomly chosen parts from a production lot, or some other source. In any case, we have chosen to model this characteristic with the normal distribution based on prior information. We want to estimate the average value of this characteristic and calculate the 90% confidence bounds which will define our confidence interval. The average value is a straightforward calculation with the result being

$$ \displaystyle\large \bar{X}=74.56 $$

The two-sided confidence bounds for a small sample (n ≤ 30) that is approximately normally distributed are found from

$$ \displaystyle\large \bar{X}\pm t_{\frac{\alpha}{2},n-1}\frac{S}{\sqrt{n}} $$

where

**t** is the critical value from the t distribution

**α** is one minus the confidence level

**S** is the sample standard deviation of the data

**n** is the sample size

The standard deviation of the data, like the average, is straightforward. For our example data set, the standard deviation is

$$ \displaystyle\large S=7.33 $$

To find the critical value of t, note that our confidence is 90%, or 0.90. Therefore α, which equals 1 − confidence, equals 0.10. From a t-distribution table, the critical value t(α/2, n−1) = t(0.05, 5) equals 2.015.
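If SciPy is available, the same critical value can be looked up programmatically instead of from a printed t table; a minimal sketch:

```python
# Critical t value for a two-sided 90% confidence interval with n = 6.
# scipy.stats.t.ppf is the inverse CDF (quantile function) of the t distribution.
from scipy import stats

alpha = 1 - 0.90
t_crit = stats.t.ppf(1 - alpha / 2, df=5)  # t(0.05, 5)
print(round(t_crit, 3))  # 2.015
```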

Substituting the values into the formula for the confidence bounds gives

Lower bound = 68.53

Upper bound = 80.59
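The whole calculation is easy to reproduce in a few lines of Python; a sketch using the article's data, with SciPy assumed only for the t critical value:

```python
# Reproduce the example: sample mean, sample standard deviation, and the
# two-sided 90% confidence interval for the mean of 6 data points.
import math
from scipy import stats

data = [75.02, 70.01, 66.42, 87.75, 72.21, 75.95]
n = len(data)
xbar = sum(data) / n
# Sample standard deviation (divide by n - 1)
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

alpha = 1 - 0.90
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-sided critical value
half_width = t_crit * s / math.sqrt(n)

# Matches the article: mean 74.56, S 7.33, CI (68.53, 80.59)
print(f"mean = {xbar:.2f}, S = {s:.2f}, t = {t_crit:.3f}")
print(f"90% CI: ({xbar - half_width:.2f}, {xbar + half_width:.2f})")
```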

From this we state that we are 90% confident that the population mean of the characteristic we measured is between 68.53 and 80.59. In one sense, this is an accurate statement; in another, it isn't accurate at all. While the population mean is not known to us, it exists and it is a fixed value. The population mean is therefore either inside our interval or it isn't. There is no probability statement to make about this single interval.

The concept of a confidence interval was developed by Jerzy Neyman in his 1937 paper "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". The process followed in our example is fine, but it provides only one result. We have one set of data, which gave us one mean, one standard deviation, and one confidence interval. In order to apply any probability statement, we need to repeat the process many, many times. If we obtained another independently selected set of 6 measurements and computed another 90% confidence interval, we would have two intervals as results. If we then repeated the process again and again, many times over, we would have a very large set of confidence intervals. At that point we could state that 90% of the intervals in our very large set contain the population mean. Nothing more. We don't know which intervals in our set contain the population mean, only that 90% of them do. In addition, we can state nothing about any single confidence interval in our very large set of intervals.

The convergence of the percentage of intervals containing the population mean to the selected confidence level is easily demonstrated through simulation. Below is a chart showing the result of 5,000 replications of the process described above. Given a normal distribution with a mean of 75 and a standard deviation of 7, the simulation program randomly draws a set of 6 values. The 90% confidence interval about the mean is computed as described above and whether the interval contains the population mean of 75 or not is recorded. The graph shows the overall percentage of intervals that contain the population mean as the 5,000 replications are completed.

As shown in the graph, the percentage of confidence intervals that include the population mean starts at 100%, meaning the first interval included the population mean. The percentage remains fairly erratic even after about 1,000 replications. Eventually, though, it converges to approximately 90%, the confidence level chosen at the start of the simulation.
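The simulation just described can be sketched in a few lines; the population parameters (mean 75, standard deviation 7, samples of 6, 5,000 replications) come from the article, while the seed and implementation details are my own assumptions:

```python
# Coverage simulation: repeatedly sample n = 6 from Normal(75, 7), build each
# 90% confidence interval, and count how often the interval contains the
# true mean. The running percentage converges toward 90%.
import math
import random
from scipy import stats

random.seed(1)  # reproducibility only; not part of the original description
MU, SIGMA, N, REPS = 75, 7, 6, 5000
t_crit = stats.t.ppf(0.95, df=N - 1)  # two-sided 90% critical value

hits = 0
for rep in range(1, REPS + 1):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = sum(sample) / N
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (N - 1))
    half = t_crit * s / math.sqrt(N)
    if xbar - half <= MU <= xbar + half:
        hits += 1

coverage = hits / REPS
print(f"coverage after {REPS} replications: {coverage:.1%}")
```

With any reasonable seed the final coverage lands close to the nominal 90%, illustrating that the confidence level describes the long-run behavior of the procedure, not any single interval.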

Confidence intervals can be easily misinterpreted and we, as quality and reliability professionals, need to understand the calculations we make and use caution when drawing conclusions about our results.

## Authors’ Biographies

Mark Fiedeldey is a senior reliability engineer and an ASQ member. He is an ASQ-certified reliability engineer and quality engineer. He can be reached at www.linkedin.com/in/mark-fiedeldey.

Larry George says

OK I learned about confidence intervals at UC Berkeley. From Neyman’s contemporaries.

BUT what if you have population data??? The mean is the mean, the variance is the variance, and the distribution is an empirical distribution, unless you force the data to fit into some model.

I’ve been using ships and returns counts to make nonparametric estimates of discrete field reliability and failure rate functions; ships and returns are population data.

Sure there may be lack of confidence about the future; are the ships and returns processes stationary? If not, I have a case for confidence in future parameters or distributions. Tolerance or prediction intervals deal with that.