The best reliability-performing systems start the design process by controlling variability: the variability of the materials and processes involved throughout the product lifecycle. Reliability performance is the result of the decisions made throughout the design process.
When the team focuses on understanding and minimizing variability, the design becomes robust and reliable.
One of the activities in Army basic training is learning to fire a rifle. Most of the instruction is about consistency. After our first attempt to hit a target with 10 shots, a drill sergeant would look at the results. In my case, the holes in the paper target were widely scattered across the right side, and some shots missed the target entirely. He sighed and told me to focus on taking aim, breathing, and squeezing the trigger exactly the same way each time. He said we first had to get a consistent process, a tight shot group; then he could help me adjust my aim to center the shot group on the center of the target.
It’s the same with statistical process control. We start by looking at the range chart; if it is not stable and consistent, the chart of the average readings will not help improve the process. First, get a tight and consistent shot group.
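To make that order of operations concrete, here is a minimal sketch (using the standard control chart constants for subgroups of five readings) that computes limits for both charts; the range chart gets judged first.

```python
import numpy as np

# A2, D3, D4 are the standard control chart constants for subgroups of 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Control limits for X-bar and R charts; subgroups shaped (k, 5)."""
    xbar = subgroups.mean(axis=1)                       # subgroup averages
    r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
    x_dbar, r_bar = xbar.mean(), r.mean()
    # Check the range chart first: if R is unstable, the X-bar
    # limits below are not meaningful.
    r_limits = (D3 * r_bar, D4 * r_bar)
    x_limits = (x_dbar - A2 * r_bar, x_dbar + A2 * r_bar)
    return r_limits, x_limits
```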
In designing for reliability, it’s the same again. We make decisions all through the process focused on the functions and performance of the product, and the decision process may include time-to-market and cost considerations.
What separates reliable designs from not-so-good ones is the inclusion of the impact of variability in those decisions.
Measurement variability
The source of all our information is data based on the measurements we or our suppliers make.
Every measurement system adds some amount of measurement error. Errors may include bias, linearity, stability, repeatability, and reproducibility.
A great first step for any measurement system you will rely on for design decisions is a Gage Repeatability and Reproducibility (Gage R&R) study. There are a few ways to conduct one, and the easiest is the average and range method. It breaks down the measurement error contributed by the appraisers and by the equipment, and reports the proportion of the tolerance consumed by measurement error.
In general, if the measurement error consumes more than 10% of the tolerance related to the measurements, the measurement system is not adequate for the task.
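For illustration, here is a minimal sketch of the average and range calculations, assuming a study with two appraisers and two trials per part; the K1 and K2 constants shown are the classic 5.15-sigma values for that layout, and should be looked up for other layouts.

```python
import numpy as np

def gage_rr(data, tolerance, k1=4.56, k2=3.65):
    """Average and range method Gage R&R (sketch).

    data: array shaped (appraisers, parts, trials).
    k1, k2: constants for 2 trials and 2 appraisers (5.15-sigma basis);
    use the table values that match your study layout.
    """
    appraisers, parts, trials = data.shape

    # Equipment variation (repeatability): based on the average range
    # of repeated readings of the same part by the same appraiser.
    ranges = data.max(axis=2) - data.min(axis=2)
    ev = ranges.mean() * k1

    # Appraiser variation (reproducibility): based on the spread of
    # the appraiser averages, less the repeatability already counted.
    x_diff = np.ptp(data.mean(axis=(1, 2)))
    av = np.sqrt(max((x_diff * k2) ** 2 - ev ** 2 / (parts * trials), 0.0))

    grr = np.hypot(ev, av)
    return {"EV": ev, "AV": av, "GRR": grr,
            "% of tolerance": 100 * grr / tolerance}
```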
Calibration alone is not sufficient to minimize measurement error. Learn about and conduct Gage R&R studies to really understand and improve the data you collect and use to make decisions, especially in the design process.
Tolerance analysis
I cringe when reviewing a drawing or set of specifications where all the tolerances are set at a blanket value. This implies that every tolerance is as important as every other, which may occasionally be true, yet is often neither possible nor necessary for the design and resulting system to function correctly.
We know there are many sources of variability when creating components or parts. That, in part, is the purpose of tolerances: to acknowledge the amount of variation that will be present and to limit it such that the system still functions.
In the design process, setting balanced tolerances creates a robust and reliable product that performs even with the random set of actual sizes and values of the assembled components. Thus, a crucial step in the design process is to understand the variability of the components, parts, and assembly processes.
In many cases, we already know the expected variability; in others, we have to collect measurements and estimate the range of variation that will occur.
Worst case analysis
Many design teams, when not simply using a default value, set tolerances with either worst case analysis or root sum squared (RSS) analysis. Worst case is conservative, as it evaluates the ability of the design to function even if every part in the collection is at its extreme value.
Will the circuit still work if the resistor is at its maximum value and the capacitor is at the low end of its range of values? The same applies to mechanical systems.
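As a hypothetical mechanical example, consider a one-dimensional gap stack (the dimensions and tolerances below are invented for illustration):

```python
# Hypothetical gap stack: a housing cavity minus three stacked parts,
# each with a nominal dimension and a symmetric +/- tolerance (mm).
nominals   = [50.0, 20.0, 15.0, 14.5]   # housing, part A, part B, part C
tolerances = [0.3, 0.1, 0.1, 0.1]

gap_nominal = nominals[0] - sum(nominals[1:])   # 0.50 mm
gap_swing   = sum(tolerances)                   # 0.60 mm: tolerances add directly

print(f"gap = {gap_nominal:.2f} +/- {gap_swing:.2f} mm")
# Worst case predicts a minimum gap of -0.10 mm, i.e. interference,
# even though every part is within a reasonable individual tolerance.
```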
RSS analysis
While worst case analysis is conservative and fairly easy to implement, it is possible that, given the technology and assembly processes, the design will not function or cannot even be assembled (e.g., hole alignment) under worst case conditions.
Instead, we count on the very low probability of every part in a system being at its worst case value at once. Since that is unlikely to occur, an RSS analysis provides a way to combine the standard deviations of the part variations.
While not as conservative as using the absolute worst case values, it limits failures to only a small fraction of the total systems created. Basically, it reflects that most parts will be near the nominal or target value and few will fall near the extremes.
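Continuing the hypothetical stack from above, the RSS estimate combines the same tolerances as the square root of the sum of their squares, assuming each tolerance represents the same multiple of its standard deviation:

```python
import math

tolerances = [0.3, 0.1, 0.1, 0.1]         # same hypothetical stack as above

rss = math.sqrt(sum(t ** 2 for t in tolerances))
print(f"RSS stack-up: +/- {rss:.3f} mm")  # ~0.346 mm versus 0.60 mm worst case
# The 0.50 mm nominal gap now stays positive for nearly all assemblies.
```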
Monte Carlo analysis
A third method is more accurate, though it requires more information. Monte Carlo analysis lets us simulate a hole alignment, for example, using the distributions of the part variation.
Yes, this takes knowing each part’s variation distribution, not just the mean and standard deviation, yet it allows the analysis to accurately reflect the spread of the values without assuming they are normally distributed.
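Here is a minimal sketch of that style of analysis, run on the same hypothetical gap stack; the distributions below are invented for illustration, and notably not all of them are normal:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                     # simulated assemblies

# Invented distributions for the same stack: note the non-normal parts.
housing = rng.normal(50.0, 0.10, N)
part_a  = rng.uniform(19.9, 20.1, N)            # flat within its tolerance
part_b  = rng.triangular(14.9, 15.05, 15.1, N)  # skewed toward the high side
part_c  = rng.normal(14.5, 0.03, N)

gap = housing - (part_a + part_b + part_c)
print(f"P(interference) = {np.mean(gap < 0):.4%}")
print(f"gap quantiles (0.1%, 99.9%): {np.quantile(gap, [0.001, 0.999])}")
```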
Process control and capability
We often assume statistical process control is a manufacturing tool to monitor and control the assembly process. The tools and techniques apply in the design process as well, where they form the basis for setting tolerances and for design approaches that accommodate the naturally occurring variation in the parts.
To design a robust product, we need to know the variability of the parts and assembly process, and the best way to determine these values is to measure production, at which point it is often too late to alter the fundamental design. Thus, we need to collect variation data from similar assembly processes, from our suppliers’ processes, and through experimentation.
Like measurement systems, the processes that create parts and systems vary. Not all variation is bad, but too much often is very bad. Uncontrolled or unstable variation will lead to product failures.
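As a sketch of how measured variation feeds back into design decisions, here is an illustrative process capability calculation; the 10.0 +/- 0.1 mm spec and the pilot-run data are invented, and the usual caveat applies that the process must first be stable:

```python
import numpy as np

def capability(samples, lsl, usl):
    """Estimate Cp and Cpk from measured parts (assumes a stable,
    roughly normal process; lsl/usl are the spec limits)."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp  = (usl - lsl) / (6 * sigma)              # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # penalizes off-center processes
    return cp, cpk

# Invented pilot-run data for a 10.0 +/- 0.1 mm dimension
rng = np.random.default_rng(7)
parts = rng.normal(10.02, 0.025, 50)
cp, cpk = capability(parts, lsl=9.9, usl=10.1)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # a Cpk of 1.33+ is a common target
```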
Early in the design, focusing on understanding and controlling variation allows us to select the best parts, design stable assembly processes, and create products with a ‘tight shot group’ that hits the reliability (cost, yield, schedule, and function) targets.
Summary
It’s in the design process that variability really matters.
This is just a summary of the tools your design team should be using to identify variability and design reliable products. Good measurements and stable processes allow meaningful tolerances, which then convey the design intent. Products that accommodate the variation of parts and processes are robust and reliable.
These systems do not occur by chance; it takes a focus on understanding and minimizing variability to achieve them.