Calculating Acceleration Factor
Chris and Fred discuss accelerated testing – and how we come up with acceleration factors. Accelerated testing is great! It allows us to compress an entire lifecycle into a short test duration so we can quickly understand the reliability characteristics of our system … ONLY if we know what we are doing. Keen to learn more?
Join Chris and Fred as they discuss accelerated testing as it relates to a real-life problem. A listener sent us a message about an accelerated testing scenario where ‘software’ was providing ‘weird’ results. At one test level, the software suggested one distribution. At another level, it suggested a different one. So what is going on?
- If the failure mechanism is the same, we should see the same distribution. For example, if we are accelerating fatigue by increasing the rate of cycles or the cyclic stress, we should typically see the lognormal distribution describe the cycles to failure at each stress level. If we accelerate temperature, and the underlying dominant chemical failure mechanism doesn’t change, then again we should see the same distribution describe time to failure. So if we see two different time-to-failure distributions, this immediately suggests that the accelerating stress is actually changing the dominant failure mechanism. But there is another explanation.
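To see what ‘same mechanism, same distribution’ looks like in data, here is a minimal simulation sketch. All numbers are made up for illustration: two stress levels share one lognormal shape parameter, and only the median life shifts.

```python
import numpy as np

# Hypothetical illustration: under one failure mechanism, stress scales
# life but the distribution shape stays put. Here the lognormal sigma
# (shape) is shared by both stress levels; only the median moves.
rng = np.random.default_rng(42)
sigma = 0.4  # assumed shape parameter, common to both stress levels

for stress, median in (("low ", 5000.0), ("high", 1000.0)):
    # Simulate times to failure at this stress level.
    t = rng.lognormal(mean=np.log(median), sigma=sigma, size=10_000)
    log_sd = np.log(t).std()  # estimate of the shared shape parameter
    print(f"{stress} stress: median life ~{np.median(t):.0f} h, "
          f"log-sd ~{log_sd:.2f}")
```

If a fit at one stress level gives a very different shape than at another, that is the hint that the mechanism itself may have changed.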
- 8 points … is not enough. So no wonder the software in this case was struggling! The Weibull distribution can mimic lots of other probability distributions. So if the software gives you a ‘mixed’ suggestion and you only have a small number of data points, this makes sense. So … you need more data.
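A quick way to convince yourself that 8 points can’t separate candidate distributions is to simulate it. This sketch (assumed parameters, using scipy) fits both a lognormal and a Weibull to data that is truly lognormal, and compares the fitted log-likelihoods at two sample sizes:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: the Weibull can mimic the lognormal closely, so
# with only 8 failure times the 'best' distribution is nearly a coin
# flip, while a larger sample separates them. Parameters are assumed.
rng = np.random.default_rng(0)

def loglik(dist, data, **fit_kwargs):
    """Maximum log-likelihood of `dist` fitted to `data`."""
    params = dist.fit(data, **fit_kwargs)
    return dist.logpdf(data, *params).sum()

for n in (8, 200):
    t = rng.lognormal(mean=4.0, sigma=0.5, size=n)  # true model: lognormal
    ll_ln = loglik(stats.lognorm, t, floc=0)
    ll_wb = loglik(stats.weibull_min, t, floc=0)
    print(f"n={n:3d}: lognormal LL={ll_ln:.1f}, Weibull LL={ll_wb:.1f}, "
          f"gap={ll_ln - ll_wb:.2f}")
```

With 8 points the log-likelihood gap is typically tiny, so whichever distribution the software ‘picks’ is driven by noise, not physics.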
- We need more than two stress levels. Accelerated testing involves increasing stresses. But to get accelerated testing data ‘back’ to actual use conditions, we need to create what we call Acceleration Factors (AFs). And AFs are based on a Physics of Failure (PoF) model – which we typically need to assume, based on a good understanding of how things fail. And to make sure our PoF model is right, we need at least three different stress levels: two points will always fit a straight line, but a third lets us check that we are on the money.
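As a concrete example of an AF built from an assumed PoF model, here is an Arrhenius-style sketch. The activation energy and temperatures are illustrative assumptions, not values from the episode:

```python
import math

# Boltzmann constant in eV/K (used by the Arrhenius model).
BOLTZMANN_EV = 8.617e-5

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between use and stress temperatures.

    Temperatures are in Celsius; ea_ev is an *assumed* activation energy
    for the dominant chemical failure mechanism.
    """
    t_use = t_use_c + 273.15      # convert to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Evaluate at three stress levels: if the assumed PoF model is right,
# log(life) plotted against 1/T across these levels should fall on a
# straight line - which is exactly why two levels aren't enough to check.
for t in (85.0, 105.0, 125.0):
    print(f"{t:.0f} C -> AF = {acceleration_factor(40.0, t):.1f}")
```

The AF tells you how many hours of use-condition life each hour at the stress condition represents, under the assumed model.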
- … and we need to be careful of censored data. We don’t know in this scenario if the 8 failure points we have came from a larger number of systems where many units were still working at the end of the test. This brings us to another problem, where we toss out data we think ‘doesn’t fit.’ For example, do we throw out ‘early’ failures because they are ‘quality, not reliability’ issues? Do we throw out a couple of other failures because they are ‘outliers’? You really shouldn’t, unless you can conduct a Root Cause Analysis (RCA) and identify that the test itself caused that failure.
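Here is a hedged sketch of why censoring matters. The failure times and survivor counts below are invented, but they show how a Weibull fit that ignores still-working units differs from one that accounts for them via the survival function:

```python
import numpy as np
from scipy import stats, optimize

# Invented data: 8 observed failures plus 12 units still running when
# the test stopped at 1000 h (right-censored observations).
failures = np.array([210., 340., 420., 510., 640., 730., 850., 990.])
censored = np.full(12, 1000.0)

def neg_loglik(params):
    """Negative Weibull log-likelihood with right censoring."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # Failures contribute the density; survivors contribute the
    # probability of lasting past their censoring time (log-survival).
    ll = stats.weibull_min.logpdf(failures, shape, scale=scale).sum()
    ll += stats.weibull_min.logsf(censored, shape, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_loglik, x0=[1.0, 800.0], method="Nelder-Mead")
shape_hat, scale_hat = res.x

# Naive fit that throws the survivors away entirely:
naive_shape, _, naive_scale = stats.weibull_min.fit(failures, floc=0)
print(f"censoring-aware scale ~{scale_hat:.0f} h, "
      f"naive scale ~{naive_scale:.0f} h")
```

Dropping the survivors makes the characteristic life look much shorter than it really is, which is exactly the kind of distortion that can confuse distribution-fitting software.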
- … and also – be careful with the software. Software is a tool. It doesn’t replace our capacity to make decisions. You need to look more deeply into the results, and perhaps accept that you have a small amount of information, which (quite rightly) causes all manner of problems when it comes to trying to guess the ‘best’ distribution.
Enjoy an episode of Speaking of Reliability, where you can join friends as they discuss reliability topics ranging from design for reliability techniques to field data analysis approaches.