
Our work as quality and reliability engineers, and in countless other technical roles across every industry, relies heavily on the instrumentation we use. Torque meters, tensile testers, micrometers, spectrometers, and coordinate measuring machines provide critical data about the variation within the processes we design and maintain.
But these tools execute measurement processes which, like all processes, introduce variation into the results they generate. This fact – that every gage contributes variation to the values it reports – is the basis for Measurement Systems Analysis (MSA), a collection of statistical tools and approaches designed to isolate and quantify sources of measurement error.
The distribution of measurement errors for a given gage, that is, the distribution of values obtained by subtracting the true or accepted value of a component from the gage’s reported measurement of that component, can typically be modeled as normal.
For example, imagine we use a pair of shop-floor digital micrometers to measure the outside diameters of 50 engine valve lifters. Then we measure those same 50 valve lifters on a high-end laser scan micrometer with sub-micron accuracy. Because the laser micrometer is far more accurate than the shop micrometers, we may choose to accept its output as true, and therefore regard any deviation between the two instruments as shop micrometer gage error. We can often model that error as normally distributed with an estimable arithmetic average and standard deviation.
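Under that assumption, the parameters of the error distribution can be estimated directly from the paired readings. The numbers below are invented for illustration (a real study would use all 50 pairs); the laser values serve as the accepted "true" diameters.

```python
import statistics

# Hypothetical paired readings (mm) for five of the valve lifters.
# The laser micrometer values are accepted as true.
laser = [22.001, 22.004, 21.998, 22.002, 22.000]
shop  = [22.003, 22.007, 22.000, 22.006, 22.002]

# Gage error = shop reading minus accepted value
errors = [s - t for s, t in zip(shop, laser)]

bias_estimate   = statistics.mean(errors)   # average shift of the error distribution
spread_estimate = statistics.stdev(errors)  # dispersion of the error distribution
```

With enough paired readings, `bias_estimate` and `spread_estimate` characterize the normal model of the shop micrometer's error.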
The six most discussed sources of measurement error — linearity, stability, bias, repeatability, reproducibility, and resolution — fall into two categories: those that shift the measured value away from the true value and those that widen the dispersion of measured values around the true value.
Linearity, stability, and bias are measurement errors that affect the average of the error distribution. In other words, these errors tend to shift the measurement result away from the true or accepted value. On the other hand, repeatability, reproducibility, and resolution affect the standard deviation of the error distribution, since they tend to widen the dispersion of measured values around the true or accepted value. Let’s take a more detailed look at each of these types of measurement error.
Linearity describes how consistently the measurement system responds across the range of measurements. A non-linear measurement system might have errors that change depending on the value being measured. For instance, weight scales tend to be less accurate at heavier weights than lighter ones.
Stability reflects the consistency of measurements over time. Instability can arise from wear and tear of the equipment, a lack of cleaning and maintenance, or operator variability over time. As an example, Rockwell hardness testers tend to drift over time due to wear and the accumulation of fine debris in the mechanical levers that generate the test load.
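A basic stability screen is to measure a fixed reference on a schedule and compare recent readings against earlier ones; a sustained shift suggests drift. The monthly hardness checks below are hypothetical, and a proper study would use a control chart rather than this two-group comparison.

```python
import statistics

# Hypothetical monthly checks of the same reference test block (HRC)
monthly = [60.1, 60.0, 60.2, 60.1, 60.4, 60.5, 60.6, 60.8]

# Simple screen: compare the first half of the history to the second half.
early = statistics.mean(monthly[:4])
late  = statistics.mean(monthly[4:])
drift = late - early   # a sustained positive shift suggests instability
```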
Bias is the systematic error that causes measurements to consistently deviate from the true value. For example, a scale might always show a weight that is 2 grams higher than the actual weight. Prominent causes of bias are gages that are not “zeroed out” or calibrated correctly.
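Bias can be estimated by repeatedly measuring a certified reference and averaging the deviation from its accepted value. The readings below are invented to mirror the 2-gram example.

```python
import statistics

# Ten hypothetical readings of a certified 100.000 g reference mass
readings = [102.1, 101.9, 102.0, 102.2, 101.8,
            102.1, 102.0, 101.9, 102.1, 101.9]

# Bias = average reading minus accepted value
bias = statistics.mean(readings) - 100.0   # a consistent ~2 g offset
```

A bias of this kind is usually correctable by re-zeroing or recalibrating the gage.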
Repeatability refers to the variation in measurements when the same operator uses the same measurement system under the same conditions repeatedly. It reflects the inherent variability in the system itself. Design flaws, limitations of manufacturing, fluctuations in ambient conditions and more lead to repeatability errors.
Reproducibility describes the variability when different operators use the same measurement system under similar conditions. It focuses on operator-to-operator or setup-to-setup variability. Operators who manually apply varying levels of torque to the thumbwheel of a pair of calipers for instance, introduce reproducibility error into its results.
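Repeatability and reproducibility are typically separated in a gage R&R study. A minimal sketch of the idea, using hypothetical data for two operators each measuring the same part five times, is:

```python
import statistics

# Hypothetical: two operators measure the same part five times each (mm)
operator_a = [10.01, 10.02, 10.00, 10.01, 10.02]
operator_b = [10.05, 10.06, 10.04, 10.05, 10.06]

# Repeatability: within-operator spread, pooled here as the square root
# of the average within-operator variance
within_var = statistics.mean([statistics.variance(operator_a),
                              statistics.variance(operator_b)])
repeatability = within_var ** 0.5

# Reproducibility: spread between the operator averages
means = [statistics.mean(operator_a), statistics.mean(operator_b)]
reproducibility = statistics.stdev(means)
```

A full gage R&R study would use more operators, more parts, and an ANOVA-based decomposition, but the contrast is the same: repeatability lives within each operator's readings, reproducibility between them.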
Resolution refers to the smallest change in a measured value that a system can detect. Insufficient resolution results in rounding errors and the inability to detect fine variations. A 3-place digital display, for instance, will fail to capture finer variations even if the gage itself is capable of more precise measurements.
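The effect of a limited display can be seen by rounding a higher-resolution internal value to three decimal places; the figures below are hypothetical.

```python
# A gage internally resolves five decimal places, but its display shows three.
true_reading = 12.70062
displayed = round(true_reading, 3)   # the last two digits are lost

# Two parts that differ by less than the display resolution look identical:
part_a, part_b = 12.70040, 12.70009
indistinguishable = round(part_a, 3) == round(part_b, 3)
```

Real variation between the two parts exists, but the display cannot report it, so it is invisible to anyone reading the gage.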
Understanding these six types of measurement error is foundational to applying the MSA techniques through which these errors can be isolated, quantified, and with corrective activities, possibly reduced.