Why Do Reliability Testing
Reliability testing is expensive. The results are often not conclusive.
Yet we spend billions on environmental, accelerated, growth, step stress and other types of reliability tests. We bake, shake, rattle and roll prototypes and production units alike. We examine the collected data in hopes of glimpsing the future.
Reliability testing consumes time and resources. Yet the results are dismissed, revised, reworked, or ignored. Testing reveals a unique failure mechanism and it is waved away as a fluke. Failure analysis is not performed to determine the root cause of each failure.
We are either doing too much testing or not the right reliability testing.
To Learn Something
Any reliability test is performed, or should be performed, to answer a question. The results of this specific test will help us:
- Understand what will fail (find weakest elements of the system)
- Estimate the time until solder joints fail due to thermal cycling stress
- Calculate the rate of wear of the brake pads under xyz set of conditions
- Confirm a set of assumptions used in a finite element analysis based simulation
- Measure and model the strength of a specific flange (strength variability measurements)
Or, something along those lines.
The objective of testing is too often just to see what happens, to run some tests, or to comply with a specific standard. Too often testing is done with no audience: no one to read the report, no one to take action one way or the other on the results.
Testing done because it is the same suite of tests we run on all products is rarely useful. Stop it.
If the reliability testing results arrive too late to alter the design, there is little chance to improve the design based on the results. If the test design has three units run a gauntlet of stresses, and none fail, what have you learned? Is it sufficient information to make a decision if we have 60% confidence the unit is at least 73% reliable over the testing conditions?
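Those numbers come from the standard success-run (zero-failure) relationship between sample size, confidence, and demonstrated reliability. A minimal sketch, with illustrative function names:

```python
# Success-run (zero-failure) testing: if n units run the test and none fail,
# the confidence C and the demonstrated lower bound on reliability R are
# related by C = 1 - R**n.
def demonstrated_reliability(n_units: int, confidence: float) -> float:
    """Reliability lower bound demonstrated when n_units all pass with zero failures."""
    return (1 - confidence) ** (1 / n_units)

# Three units, no failures: at 60% confidence we can only claim roughly
# 73% reliability over the testing conditions.
r = demonstrated_reliability(3, 0.60)
print(f"Demonstrated reliability: {r:.1%}")
```

Run the numbers before the test starts: if the demonstrated reliability falls short of what the decision requires, the test as designed cannot answer the question.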
Do the results provide evidence that the unit being tested will or will not experience a specific failure mechanism? Does the stress being applied have the capability to excite the failure of interest, or does having no failures mean the design is robust enough?
Make sure each reliability test started has the ability to answer the question that needs an answer.
To Compare to Objectives
One reason to conduct testing is to check whether vendor A is better than vendor B. Another is to determine whether changes to the design actually make the product more robust to the same stress. Another is to compare the results to the reliability objectives.
All good reasons to run a test. There is a decision required: vendor A or B, did the design change fix the problem, or have we met the goals yet?
How many of your reliability tests have such a clear objective and meaningful connection to a decision? If your testing takes this approach, will the results, when they become available, provide sufficient evidence to permit a decision? Shades of sample size calculations and statistical knowledge lurk here.
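The sample size question can be answered before the test is approved. Rearranging the same success-run formula gives the number of units that must all pass to demonstrate a reliability goal at a stated confidence; a minimal sketch, with an illustrative function name:

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Units that must all pass (zero failures) to demonstrate the given
    reliability at the given confidence, from C = 1 - R**n solved for n."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 90% reliability at 90% confidence takes 22 units, all passing.
n = zero_failure_sample_size(reliability=0.90, confidence=0.90)
print(f"Units required: {n}")
```

If the required sample size is larger than the budget allows, that is worth knowing before the test runs, not after.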
To Assure Performance
Let’s say a customer requirement is to run a test to demonstrate the reliability of the solution. You take a look at the test requirements and recognize that the applied stress, say temperature variation, focuses on a failure mechanism that doesn’t exist in your product. It may have been a problem with previous designs.
The applied stress will not cause failures (the new design will easily pass).
We also know the life-limiting failure mechanism is accelerated by humidity cycling. Since the customer-required test doesn’t include humidity cycling, the chance of the test exciting this failure mechanism is very low.
We will be able to meet the customer’s test requirement, yet, given our current understanding of the application environment, which includes humidity cycling, the product will likely not meet the life expectations.
What would you do? Inform the customer of the concern around humidity cycling stress? Say nothing, as the customer is always right? Add a line to the user’s manual to control humidity during use?
What question is the customer’s test trying to answer? What is it they really want to know?
Does Your Reliability Test Add Value?
In short, each test should inspire action, a course of action, a path to improvement. Each test should provide sufficient information to permit decisions. Each test has to add value, to be of use.
One way to ensure each test has the capability to add value is to ask before starting the test:
- Who needs the test results and for what decision?
- Will the test results provide the right information to help that person make a decision?
- Will the test provide meaningful results – will it answer the right questions?
If the reliability test has little potential value, produces inconclusive results, or no one needs the results to move forward, then why do the test?
How do you justify the investment in reliability testing? Are you getting the value you should from each test?