The bane of our existence is one thing: generating enough data to demonstrate statistical confidence. Every reliability engineer, every project manager, every Director and VP has the same moment of panic in a new product development program. In synchronicity they put their heads in their hands. It happens when the required number of test units and the calendar time needed to demonstrate the required confidence in the reliability goal are calculated. The answer is usually about ten times more units than can be acquired and about twice as long as the entire product development program timeline.
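For reference, here is a minimal sketch of where those numbers come from, assuming a zero-failure pass/fail (success-run) test for the unit count and an exponential, constant-failure-rate life model for the test time. The example figures (95% reliability, 90% confidence, a 10,000-hour MTBF goal) are illustrative only, not from any specific program.

```python
import math

def success_run_sample_size(reliability, confidence):
    """Units required to demonstrate `reliability` at `confidence`
    with zero failures in a pass/fail (success-run) test:
    n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

def zero_failure_test_hours(mtbf_goal, confidence):
    """Total unit-hours required to demonstrate an MTBF goal at the given
    confidence with zero failures, assuming an exponential life model:
    T = -MTBF * ln(1 - C) (chi-square bound, 2 degrees of freedom)."""
    return -mtbf_goal * math.log(1 - confidence)

print(success_run_sample_size(0.95, 0.90))    # 45 failure-free units
print(zero_failure_test_hours(10_000, 0.90))  # ~23,026 unit-hours of testing
```

Those unit counts and test hours grow quickly as the reliability goal and confidence level rise, which is exactly where the panic comes from.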
It’s important to note that this is a universal issue across all types of products in just about every industry. Fully demonstrating high confidence in a reliability goal requires a significant amount of test time with units that fully represent the production-equivalent design. That isn’t possible in today’s rapid product cycles. Soon after the manufacturing prototypes have been made, we are headed to full production. By the time prototypes are available, the data generated for confidence calculations usually comes from sub-assembly test programs and the very beginning of a system reliability growth program.
Information such as statistical confidence in a reliability goal serves a single purpose: to create a quantifiable metric for program and business decisions. This metric does not state what the reliability of the product is. It measures how confident we are in the product’s ability to meet the reliability goal. It’s a measurement of our knowledge of the product, or better stated, of how well what we know translates to the larger population. If we are making a decision at the end of the development program, “to release or not release?”, we do not need a single preselected confidence value. If we feel that a certain level of risk is acceptable with regard to the total product picture, then that is fine. It’s one piece of data in a big decision.
What is important to me is that the team can demonstrate to leadership, and to any outside evaluators (regulatory organizations), that they used the time and resources available as wisely as possible. Evidence of this is a plan directed by smart investigation, using risk assessment and investigative testing tools. Resources should be directed to the areas of greatest technical or historical risk. That is a difficult set of targets to define. The goal of reliability isn’t to make a design work; that was already accomplished. It is to reduce the effects of variability on the most vulnerable areas.
I also advise resisting the temptation to apply a new method you found in an article that makes it possible to demonstrate 90% confidence in a 99% reliability goal with three units and four weeks of test time. Those articles and papers are everywhere, and for one good reason: everybody is desperate for a quicker, better way. It’s the same reason you see new diet and exercise fads every year. The entire world is desperate for a way to lose weight while eating whatever they want and doing one simple exercise for five minutes a day. It takes a long time to demonstrate confidence in a mass-produced product; there’s no way around it. Good planning will ensure the right information is generated early on.
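To put numbers on that example, under the same zero-failure success-run assumption sketched above: three failure-free units demonstrate only about 3% confidence in 99% reliability, and getting to 90% confidence that way takes roughly 230 failure-free units.

```python
import math

R = 0.99   # reliability goal from the claim above
# Confidence actually demonstrated by three failure-free units (C = 1 - R**n):
print(1 - R**3)                                      # ~0.03, i.e. about 3%
# Failure-free units needed for 90% confidence in 99% reliability:
print(math.ceil(math.log(1 - 0.90) / math.log(R)))   # 230
```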
-Adam
Feel free to contact us about our services at www.apexridge.com
Kirk Gray says
Good article Adam. You have underlined the reasons that we must get more information on the strengths and weaknesses of products with the few samples that are allocated for testing. It is also important to know that many causes of unreliability arise during production, when a manufacturing “excursion” introduces a latent defect into what was a robust design when first tested.
Adam Bahret says
Thank you. I don’t see enough risk analysis early in programs to ensure resources are applied efficiently.
Thanks again
Adam