Years ago a client asked for help in reducing the amount of reliability testing they did for each project. They had a sense that some of the testing wasn’t useful. What they wanted to know was how to select the appropriate testing and be sure they wouldn’t miss anything important.
Do you conduct a lot of reliability and environmental testing? Does each test generally result in no failures and no actionable information? Is the sample size for each test small (providing little chance of finding failure modes that occur in less than 20% of units) due to constraints on the prototype budget or time?
And, does anyone really use the results of the testing to make decisions?
Why do we do reliability testing?
The answer to this question should be clear for each test you plan and execute. If not, or if the reasons have been lost to lore, then it’s time to find an answer.
Reliability testing is expensive. It should have a clear connection to an objective of the organization.
Traditionally we do reliability testing for a few reasons.
- To discover design or process weaknesses
- To demonstrate or estimate the useful life
- To meet a customer requirement/request
If your first response to ‘why’ is ‘because that is what we always do’ or ‘that is what is expected’, then it’s time to reset your thinking on reliability testing.
I suggest that you should only do reliability testing that is
- directly connected to making a decision in the program
- designed to provide meaningful information
- capable of providing useful engineering information
- a direct requirement from a customer (hopefully they pay for it, too!)
Let’s take a look at each of these suggestions.
Connect to a decision
The very first reliability test I was asked to design included a specific request for completion by a specific date.
The date was a major design review to determine whether the new program would continue in development. A pivotal piece of information was the estimated useful lifetime of the new design. Providing an answer after the design review date would delay the program at best, or cause its cancellation due to uncertainty around the product’s reliability.
Another example involves a team that conducted discovery testing and regularly uncovered design flaws and weaknesses.
The design team ignored the results as they did not understand the nature of the testing. It was only a few years later when someone noticed field failures matching the previously discovered issues that the test results were recognized as useful.
Use the results
Design reviews and major milestones are common key points for decisions. Another point of decision may be the selection of a technology, material or vendor. Another set of decisions may involve design freeze date, marketing material freeze, or the setting of product launch regions and dates.
If the test is not connected to someone making a decision, then to whom do you send the results? If no one is going to use the test results, I suggest you do not conduct the testing.
Design to create meaningful information
Sample size matters.
If I test two products and one fails, I have evidence of a 50% failure rate (roughly).
If I test ten products and one fails, I have evidence of a 10% failure rate.
For many products, a ten percent failure rate would be a show stopper. Yet when testing two products, you have a very low probability of finding a 10% issue; likewise, when testing ten units, finding a 1% problem is unlikely.
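To make this concrete, here is a minimal sketch of the binomial math behind those statements; the sample sizes and failure-rate values are illustrative, not data from any particular test.

```python
def prob_at_least_one_failure(n: int, p: float) -> float:
    """P(at least one of n units fails) when a fraction p carry the flaw."""
    return 1.0 - (1.0 - p) ** n

# Illustrative values: small samples rarely reveal rare failure modes.
for n in (2, 10, 30):
    for p in (0.10, 0.01):
        print(f"n={n:2d}, p={p:.0%}: chance of seeing a failure = "
              f"{prob_at_least_one_failure(n, p):.1%}")
```

With two units, a 10% failure mode shows up only about 19% of the time; even thirty units catch a 1% mode barely one time in four.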
For discovery testing (HALT, for example), get failures and carry the failure analysis to root cause. There is no value in ‘passing’ HALT by not finding failures.
In every test, do the math and determine the chance of finding an important defect rate. You should assume that only a small proportion of units have the flaw that will lead to failure; not every unit will have every flaw, since there is too much variation in materials, assembly, and testing.
Do the math
Do the math – this includes confidence intervals, hypothesis testing, sample size calculations, and design of experiments as needed. Detecting failure mechanisms that occur in only 1% of units may take careful test design. It may require using subsystems, coupons, or less expensive test vehicles.
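As one example of that math, here is a sketch of the sample size question: how many units must be tested to have a given confidence of seeing at least one failure from a mechanism affecting a fraction p of units. The p and confidence values are illustrative assumptions.

```python
import math

def units_needed(p: float, confidence: float) -> int:
    """Smallest n such that P(at least one failure among n units) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(units_needed(p=0.01, confidence=0.90))  # 230 units for a 1% mechanism
print(units_needed(p=0.01, confidence=0.50))  # 69 units for a coin-flip chance
```

Numbers like these are often what justify testing subsystems or coupons instead of complete prototypes.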
If it’s not important to find very small probabilities of failure, then what is the limit? What level of detection does the testing require? If the results are not going to be used or understood, it’s likely that we should not do the test.
Engineering Information
Ideally, every test will provide information that is meaningful and useful.
Beyond a complete test report, are there meaningful results the engineering team can use to address the issues found? Or sufficient information to serve as a foundation for future improvements or testing?
If conducting a life test, for example, the results should include the foundation or confirmation of a time-to-failure model. Plus, any failures that occur should receive complete root cause analysis, both to confirm that the expected failure mechanisms actually occurred and to highlight potential design changes that would extend the useful life.
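As a sketch of what the foundation of a time-to-failure model might look like in practice, here is a two-parameter Weibull fit using scipy. The failure times are made-up illustrative values, and a real analysis would also need to handle censored (suspended) units.

```python
from scipy import stats

# Hypothetical complete failure times from a life test, in hours.
hours_to_failure = [410.0, 520.0, 613.0, 698.0, 780.0, 891.0, 1020.0]

# Fix the location parameter at zero for the common two-parameter Weibull.
shape, _, scale = stats.weibull_min.fit(hours_to_failure, floc=0)
print(f"beta (shape) = {shape:.2f}, eta (scale) = {scale:.0f} hours")

# B10 life: the time by which 10% of units are expected to have failed.
print(f"B10 = {stats.weibull_min.ppf(0.10, shape, scale=scale):.0f} hours")
```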
Consider a drop test
For a drop test on a microphone-like device, we included a finite element simulation. The simulated and observed points of fracture coincided, allowing the mechanical engineering team to understand the failures observed and to quickly make improvements.
The drop test plan included estimates and assumptions about product use.
This included expected drop heights and number of drops. It also included testing to failure (in some cases over 100 drops from 3 meters), which enabled an ongoing test that could detect changes in the assembly process long before other testing methods would (we used a CUSUM control chart to monitor the drops-to-failure values).
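For readers unfamiliar with CUSUM, here is a minimal sketch of a one-sided chart watching for a sustained drop in drops-to-failure; the target, slack, threshold, and data values are all illustrative assumptions, not figures from the actual project.

```python
def cusum_low(values, target, slack, threshold):
    """One-sided lower CUSUM; returns indices where a downward shift is signaled."""
    s, signals = 0.0, []
    for i, x in enumerate(values):
        # Accumulate how far observations fall below target, less a slack allowance.
        s = max(0.0, s + (target - slack) - x)
        if s > threshold:
            signals.append(i)
            s = 0.0  # reset after signaling
    return signals

drops_to_failure = [105, 98, 112, 101, 95, 88, 84, 79, 82, 75]
print(cusum_low(drops_to_failure, target=100, slack=5, threshold=30))  # -> [7, 9]
```

The chart accumulates small shortfalls, so a gradual assembly-process drift triggers a signal well before any single unit looks alarming.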
Reliability testing rarely stands alone or gets used only once. The results should provide a foundation for many decisions, not just a pass/fail verdict.
Customer Requirement
It happens that customers make specific reliability testing requests or requirements.
It may be a condition of sale. Meet their request, and whenever possible extend the test plan to connect the test to a decision (not just the customer’s), make the test meaningful by doing the math, and work to get meaningful engineering results.
The customer is not buying a test result; they are buying your product. They want a reliable product, and one method to help make that happen is to request specific testing. While a customer-requested test may not be useful, it may be necessary.
Selecting the right reliability tests
In summary, the minimum set of testing that provides value to decision makers is often the right set of tests.
Nothing more.