
Dealing with No Failures
Abstract
Mojan and Fred tackle the common engineering dilemma of what to do when a test concludes with zero failures. They discuss how to “artificially” assume a failure to benchmark against existing models and how to determine if your product is actually better than expected or if your test was simply flawed.
Key Points
Join Mojan and Fred as they discuss strategies for interpreting “perfect” test results and how to use them to validate or challenge your reliability models.
Topics include:
Assumed Failure Analysis: When no failures occur, you can’t build a curve; instead, assume a failure at the next time interval and plot it on a Weibull chart to see if it falls to the left (worse) or right (better) of your predicted model line.
Verifying Test Integrity: Before celebrating zero failures, you must ask the “sanity check” questions: Was the equipment actually plugged in? Did it truly have the opportunity to fail? Was the stress level high enough to trigger the expected mechanism?
Redefining the Result: Zero failures might mean you are missing a different failure mechanism entirely, or it might mean your definition of failure is too narrow.
The Decision-Making Mandate: Even with a “no-failure” dataset, engineers are still required to make a decision. Using assumed failure percentiles allows for a structured, data-driven way to provide a “confidence” statement to the program.
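The assumed-failure idea from the topics above can be sketched in code. This is a minimal illustration, not the hosts' exact procedure: it assumes the first of `n_units` fails just past the end of the test, places that point using Benard's median-rank approximation, and checks whether it falls to the right (better) or left (worse) of a predicted Weibull model line with assumed shape `beta` and scale `eta`.

```python
import math

def weibull_time_for_unreliability(beta, eta, F):
    """Time at which a Weibull(beta, eta) model predicts unreliability F."""
    return eta * (-math.log(1.0 - F)) ** (1.0 / beta)

def assumed_failure_check(n_units, t_assumed, beta, eta):
    """Assume the first failure occurs at t_assumed (just past test end).

    Plotting position for failure i=1 of n via Benard's median-rank
    approximation: F = (i - 0.3) / (n + 0.4).
    """
    F = (1 - 0.3) / (n_units + 0.4)
    t_model = weibull_time_for_unreliability(beta, eta, F)
    # A point to the right of the model line means the observed (assumed)
    # result is better than the model predicted; to the left means worse.
    verdict = "better than model" if t_assumed > t_model else "worse than model"
    return F, t_model, verdict

# Example: 10 units survive a 500-hour test with zero failures; the model
# is Weibull with beta = 2, eta = 1000 hours.
F, t_model, verdict = assumed_failure_check(10, 500.0, 2.0, 1000.0)
print(f"plot position F = {F:.4f}, model time = {t_model:.0f} h, {verdict}")
```

Here the model expects its first-rank unreliability to be reached well before 500 hours, so an assumed failure at 500 hours lands to the right of the model line, supporting a "better than predicted" reading.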
Enjoy an episode of Speaking of Reliability, where you can join friends as they discuss reliability topics ranging from design-for-reliability techniques to field data analysis approaches.

Show Notes
Ask a question or send along a comment.