As reliability engineers, we have a large number of tools available.
From project planning and system modeling to data analysis, test planning, risk identification, and defect discovery, we have techniques, procedures, and algorithms to help us identify and solve reliability problems.
We also have many ways to apply an individual reliability task.
We could do an exploratory drop test to see what, if anything, happens. Or we could conduct a full characterization study of the force signature of drops onto different surfaces and faces from a range of heights. Or something in between.
We have options and thus need to make choices.
Are you using the right tool for the specific situation you are facing?
Fitting reliability tasks to specific situations
There is more than one way to solve a problem or answer a question.
That is true, yet some approaches are more efficient and accurate than others.
The trouble with having so many tools available is that we may avoid thinking through whether the particular approach we are using is appropriate for the specific issue. How do you know if what you’re doing is the right thing to be doing?
Let’s look at setting up an accelerated life test to determine the time to failure distribution for a new solder joint design. You are faced with choices for the approach.
- You could use existing models and simply run a confirmation test to validate a portion of the existing model.
- You could make simplifying assumptions based on similar solder systems and conduct a single stress test.
- You could assume the Norris-Landzberg model applies and use step stress to get results a bit faster.
- You could focus on crack propagation, measure crack length versus number of cycles, and create a new model.
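As a rough illustration of the step-stress option above, the Norris-Landzberg acceleration factor takes only a few lines to compute. The constants in this sketch (m ≈ 1/3, n ≈ 1.9, Ea/k ≈ 1414 K) are values often quoted for SnPb solder; they are assumptions here, not validated properties of a new solder joint design, which is exactly the point of the choice.

```python
import math

def norris_landzberg_af(f_field, f_test, dt_field, dt_test,
                        tmax_field_c, tmax_test_c,
                        m=1.0 / 3.0, n=1.9, ea_over_k=1414.0):
    """Acceleration factor AF = N_field / N_test under Norris-Landzberg.

    The defaults for m, n, and Ea/k are values often quoted for SnPb
    solder and are assumptions here -- they do not automatically apply
    to a new solder joint design.
    """
    tmax_field = tmax_field_c + 273.15  # peak temperatures in kelvin
    tmax_test = tmax_test_c + 273.15
    return ((f_field / f_test) ** m          # cycling frequency term
            * (dt_test / dt_field) ** n      # temperature-swing term
            * math.exp(ea_over_k * (1.0 / tmax_field - 1.0 / tmax_test)))

# Hypothetical use conditions versus a 0 to 100 C chamber cycle
af = norris_landzberg_af(f_field=24.0, f_test=48.0,     # cycles per day
                         dt_field=30.0, dt_test=100.0,  # delta T in C
                         tmax_field_c=45.0, tmax_test_c=100.0)
```

With these assumed inputs one chamber cycle stands in for roughly fifteen field cycles; if the model does not hold for your joint, that ratio, and every projection built on it, is wrong.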
You have more choices than the few quickly listed here.
You also have constraints on time, resources, samples, and test facilities. Just as with any recommendation for the application of a specific reliability engineering tool, the approach should maximize the chance of achieving meaningful results given the constraints.
If you make the wrong choice, say, assuming the Norris-Landzberg model is correct when in fact it does not apply to your situation, the results you produce will not reflect the actual time to failure for the new solder joint design. The joints could fail earlier, leading to significant and costly field failures, or last longer, resulting in no problems at all.
The choice of testing approach, like any choice of approach we make, may have catastrophic consequences, or may turn out close enough that it is not a big deal.
One problem we have is the long delay before we get feedback on our task choices.
Using the same plan as last time
Beyond specific approaches for test planning, this same issue arises when laying out a reliability project plan.
We may have just successfully worked with a team to launch a reliable new widget. The tools implemented helped the team make decisions that resulted in the production of a reliable system that, so far, is meeting customer expectations.
Just because the set of tools in the previous product's plan worked does not mean the same approach, using the same tools, will work again.
The next product development project is different. New materials, new processes, maybe a focus on cost reduction, possibly a focus on a new market. It has a different set of constraints and risks.
Pulling the old plan off the shelf and repeating the same actions will not produce the same results.
In the previous project that created a new platform, for example, the team did not have any field data or experience with what kinds of failures may occur. HALT was a wonderful tool to help the team discover the lurking failure mechanisms to address.
In the next project, with a focus on cost reducing the same platform, the team has a long list of issues left over from the previous work, plus a few new problems discovered by customers.
Here the team is starting with a Pareto chart full of issues to address. Sure, HALT may reveal a few more issues, yet it is less important in the cost-reduction project than it was in the prior one.
Focus on the outcomes
One way to improve the value of each tool selected for your project is to focus on how that specific tool will provide the results your team needs to make a decision.
What are the specific risks, constraints, and timelines for decisions? Focus on lining up reliability activities that enhance the team's ability to make decisions that improve the resulting reliability performance.
Focus on the results each tool creates, not on simply having accomplished the task.
Creating a parts count prediction for a circuit board is an accomplished task; yet if no one reads or uses the resulting prediction to make a decision, other than checking it off a to-do list, then I suggest the prediction should not have been done.
Make sure each reliability task adds value
The focus on decisions is key.
The resulting information and recommendations from the array of reliability tools have to enable and encourage the entire team to meet the reliability objectives and customer expectations.
If conducting an accelerated life test will cost $100k, the results should inform decisions worth $1 million: decisions such as changing the package or the manufacturing process, or whether to start shipping. Big decisions.
If the FMEA study will tie up 5 engineers for two days, the resulting focus provided to the team should save more than 10 days of work for that same team. While there are no guarantees that any specific reliability activity will create value, the selected tool should have the best chance of being valuable. Consider and compare the other options available to confront the challenges of creating a reliable product, then select the tools most likely to provide value.
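The cost-versus-decision-value reasoning above can be sketched as a rough expected-value check. The dollar figures and the probability below are illustrative assumptions, not data; the point is that a task pays off only when its results have a credible chance of changing a decision.

```python
def expected_task_value(cost, decision_value, p_changes_decision):
    """Rough expected value of running a reliability task.

    A task creates value only when its results actually change a
    decision; p_changes_decision is your judgment of how likely that
    is. All inputs in the example below are illustrative assumptions.
    """
    return p_changes_decision * decision_value - cost

# A $100k accelerated life test informing a $1M decision,
# judged 30% likely to change the outcome
value = expected_task_value(cost=100_000,
                            decision_value=1_000_000,
                            p_changes_decision=0.3)
# a positive value suggests the task has a credible chance
# of paying for itself; a negative value suggests skipping it
```

The same check applies to the FMEA example: two days of five engineers' time is the cost, and the saved rework is the decision value.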
Do you have a set of go-to tools that you always add to your project plan? Is that wise?
How do you select which activities to accomplish? Have you challenged the standard list built into every plan recently? Let me know by adding your thoughts and comments below.
Finding Value (book)