The term Measurement Systems Analysis refers to a collection of experimental and statistical methods designed to evaluate the error introduced by a measurement system and the resulting usefulness of that system for a particular application.
Measurement systems range from the simplest of gages like steel rulers to the most complex, multi-sensor measurement systems. Yet regardless of their sophistication, all gages are flawed and fail to deliver a perfectly accurate result to their users. This idea is best expressed by an equation fundamental to measurement science,
Y = T + e
where
Y = the resulting value of the measurement process, i.e., what the gage reads
T = the true, often unknown, measurement of the object under evaluation
e = the error introduced by the measurement system
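As a small illustration of this model, the Python sketch below simulates five gage readings of a single part; the true value and error distribution are hypothetical, chosen only to show how the reading Y combines the true value T with random error e.

import random

# Hypothetical example of Y = T + e: a part whose true dimension
# is 2.5000 in, measured by a gage with roughly normal error
T = 2.5000           # true value (unknown in practice)
gage_sd = 0.0010     # assumed standard deviation of the gage's error

for _ in range(5):
    e = random.gauss(0, gage_sd)   # error introduced by the measurement system
    Y = T + e                      # what the gage reads
    print(f"reading: {Y:.4f}   (error: {e:+.4f})")

In practice, of course, only Y is observable; the task of measurement systems analysis is to characterize e without ever knowing T exactly.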
Measurement errors come in a variety of forms, such as instability, which is the drift of a gage over time, and nonlinearity, which is the inconsistency of a gage across its measurement span. But because of their applicability to a multitude of commonly used gages, two forms of measurement system error, repeatability and reproducibility, are the most discussed and analyzed among quality engineers and calibration specialists. In fact, these errors are often evaluated together in a specialized study called Gage R&R Analysis.
Repeatability, also known as Equipment Variation (EV), refers to the variation in measurements when the same operator uses the same measurement system under the same conditions repeatedly. EV is caused by a wide range of issues such as friction within the gage mechanisms, dust and debris, minor design flaws, wear in the gage components, and more. These sources of variation result in the random dispersion of measurement values around the true measurement of the component under test.
Reproducibility, also known as Appraiser Variation (AV), refers to the variation in measurements when different operators use the same measurement system under similar conditions. Consider as an example the telescoping bore gage shown in Figure 1.
These gages are widely used in metal cutting processes, particularly in deep bore applications such as engine block cylinders. To use this gage, the operator twists the knurled knob open to release the spring-loaded rods at the T-end of the gage, allowing them to contract and extend freely. The operator then inserts the T-end into the bore at the desired depth, allowing the rounded rod ends to touch the bore wall. Once the gage is correctly positioned, the operator twists the knurled knob closed, locking the rods in place. The operator then removes the gage from the bore and measures the distance between the rod ends with a micrometer or similar tool.
Each manual step in this process – positioning the gage within the bore, locking the rods in place, measuring the distance between the rod ends, etc. – introduces another source of difference between operators. Compounded together, these differences form AV.
Gage R&R Analysis is performed using one of two major methods: the Average and Range method and the ANOVA method. Most fill-in-the-blank Excel-based Gage R&R templates rely on the simpler Average and Range method, which is also easier for quality professionals to learn because of its similarities to process control charting.
The study design involves first selecting the number of parts, number of operators, and number of measurement trials per part. For instance, a 10x3x3 study requires 10 parts x 3 operators x 3 trials per part for a total of 90 measurements.
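As a sketch of that layout, the short Python example below (part numbers and operator labels are assumed for illustration) enumerates the 90 part-operator-trial combinations of a 10x3x3 study:

import itertools
import random

# Hypothetical 10x3x3 layout: 10 parts x 3 operators x 3 trials
parts = range(1, 11)
operators = ["A", "B", "C"]        # operator labels assumed
trials = range(1, 4)

# Every part-operator-trial combination: 10 x 3 x 3 = 90 measurements
runs = list(itertools.product(parts, operators, trials))
random.shuffle(runs)               # studies are typically run in random order
print(len(runs))                   # 90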
From these data, the study facilitator calculates various average and range values that quantify the differences between parts, between trials, and between operators. These differences are then converted into estimates of standard deviation using statistical constants called K factors.

For instance, the Average and Range method requires the facilitator to calculate the range of measured values for each appraiser-part combination. In the 10x3x3 example, this results in 30 ranges. The grand average of those ranges (called R-bar) represents the typical dispersion among repeated measurements of the same parts with the same gage. Using a K factor called K1, R-bar is converted into EV, an estimate of the standard deviation of the repeatability error for that measurement system. A similar method is used to calculate AV, the estimated standard deviation of the reproducibility error for the measurement system.
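The sketch below shows those two conversions on simulated data. The K1 and K2 values are the constants commonly tabulated for 3 trials and 3 appraisers, and should be confirmed against a published MSA reference table; the measurement data itself is randomly generated for illustration only.

import numpy as np

# Hypothetical 10 x 3 x 3 study: data[part, operator, trial]
rng = np.random.default_rng(0)
true_parts = np.linspace(2.495, 2.505, 10)              # assumed true part values
data = true_parts[:, None, None] + rng.normal(0, 0.001, size=(10, 3, 3))
n_parts, _, n_trials = data.shape

# Range for each appraiser-part combination: 10 x 3 = 30 ranges
ranges = data.max(axis=2) - data.min(axis=2)
r_bar = ranges.mean()                                   # grand average range (R-bar)

K1 = 0.5908   # commonly tabulated constant for 3 trials (assumed)
K2 = 0.5231   # commonly tabulated constant for 3 appraisers (assumed)

EV = r_bar * K1                                         # repeatability (equipment variation)

# Reproducibility: spread of the operator averages, adjusted to
# remove the repeatability already contained in those averages
op_means = data.mean(axis=(0, 2))                       # one average per operator
x_diff = op_means.max() - op_means.min()
AV = max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0) ** 0.5

print(f"R-bar = {r_bar:.5f}   EV = {EV:.5f}   AV = {AV:.5f}")

Note the adjustment term in the AV calculation: because each operator's average still contains repeatability error, that portion is subtracted out before the reproducibility estimate is reported.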
The Average and Range method of Gage R&R analysis is often the first method measurement professionals learn as a means of formally evaluating gage error. More comprehensive methods include the ANOVA method of Gage R&R analysis and a host of other tools used to evaluate linearity, stability, and bias. But regardless of the chosen method, recognizing that the output of a gage is only a “partial truth” is the key to understanding the importance of measurement systems analysis. By acknowledging and quantifying the inherent variability in any measurement system, quality professionals can make informed decisions, improve processes, and ensure data-driven insights lead to meaningful actions. Ultimately, the goal is not just to evaluate the gage but to enhance confidence in the measurements that drive critical business and engineering decisions.
Ray Harkins is the General Manager of Lexington Technologies in Lexington, North Carolina. He earned his Master of Science from Rochester Institute of Technology and his Master of Business Administration from Youngstown State University. He also teaches manufacturing and business-related skills such as Quality Engineering Statistics, Reliability Engineering Statistics, Failure Modes and Effects Analysis (FMEA), and Root Cause Analysis and the 8D Corrective Action Process through the online learning platform, Udemy. He can be reached via LinkedIn at linkedin.com/in/ray-harkins or by email at the.mfg.acad@gmail.com.