When I joined Hewlett-Packard in 1988, I was assigned to a team that was working on a design-for-manufacturability (DFM) manual for printed circuit board (PCB) designers.
Our primary objective was to provide performance and cost information that could be used to guide decisions about different design options.
My favorite project during that time was a predictive model to estimate the manufacturing yield of a PCB design based on a composite “complexity” metric.
Because we were an internal supplier, I was able to look at the actual lot yields for hundreds of active part numbers with known design parameters. So it seemed like a fairly straightforward exercise to experiment with different regression models and find the best fit between complexity and yield.
This turned out to be a lot more complicated than I expected, mainly because manufacturing yields are not normally distributed.
The simple arithmetic mean of a bunch of individual lot yields is pretty much meaningless, and the time interval between lots meant that the process itself wasn’t the same each time.
(For those who care about the technical details: when you plot the individual lot yields for a large population of lots, the distribution looks a lot like the Weibull distribution from reliability engineering, which isn’t really surprising if you think about it.)
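If you want to see what that kind of check looks like, here’s a rough sketch in Python with synthetic lot yields. The real data isn’t available here, and fitting the yield loss rather than the yield itself is just one reasonable way to set it up:

```python
# Rough sketch: check whether lot-to-lot yield variation looks Weibull.
# In practice lot_yields would come from factory records; here it's synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
lot_yields = 1.0 - stats.weibull_min.rvs(c=1.3, scale=0.05, size=500, random_state=rng)

# Fit a Weibull to the yield *loss* (1 - yield) so the support stays positive.
loss = 1.0 - lot_yields
shape, loc, scale = stats.weibull_min.fit(loss, floc=0.0)
print(f"Fitted Weibull: shape={shape:.2f}, scale={scale:.3f}")

# Goodness-of-fit check against the empirical distribution.
d_stat, p_value = stats.kstest(loss, "weibull_min", args=(shape, loc, scale))
print(f"K-S statistic={d_stat:.3f}, p-value={p_value:.3f}")
```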
What I was really looking for was the theoretical maximum yield enabled by a given set of design parameters, but what I had were the actual lot yields, each of which was influenced by the inherent variability of parts and manufacturing processes, including workmanship.
For the purposes of the DFM manual, it wasn’t necessary to predict the actual yield; it was enough to provide a model to compare the theoretical yield for two or more design options.
I was in for a lot of data crunching, but in the end, we got a useful model. More than ten years later I still saw copies of that DFM manual on the desks of PCB designers.
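Just to illustrate what that kind of relative comparison might look like, here’s a purely hypothetical sketch. It assumes a simple first-order model of the form yield = exp(-k × complexity), which is not the actual model from the manual, and the constant and complexity scores are made up:

```python
# Hypothetical illustration only: compare two design options on relative
# theoretical yield, using a simple first-order model yield = exp(-k * C),
# where C is the composite complexity score. k and the scores are invented.
import math

def predicted_max_yield(complexity: float, k: float = 0.005) -> float:
    """Theoretical maximum yield for a given composite complexity score."""
    return math.exp(-k * complexity)

option_a = 42.0   # e.g., wider traces, fewer drilled holes
option_b = 55.0   # e.g., denser routing on a smaller board

ya = predicted_max_yield(option_a)
yb = predicted_max_yield(option_b)
print(f"Option A: {ya:.1%}   Option B: {yb:.1%}   relative (A/B): {ya / yb:.2f}")
```

The absolute numbers don’t mean anything; what matters is the ratio between the options, which is the kind of comparison the manual was meant to support.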
So what’s my point?
The point is that product quality depends on both the design and all the steps required to create a product from that design.
Eliminating special causes and reducing the variability of parts and processes can help approach the theoretical maximum yield, but the design establishes an upper limit of quality that cannot be exceeded in the real world without improving the design itself.
This is another reason why it’s so important to stabilize and ultimately freeze the design so the manufacturing processes can stabilize before the ramp to full production.
By the way, I don’t think any practical, cost-effective design can guarantee 100% yield at the factory or zero field failures.
A single lot may have 100% yield, but that’s just a sample of a larger population of all possible combinations of part and process variation, assuming the same process each time.
If you could do a Monte Carlo simulation that accounts for all sources of manufacturing variability (assuming only common causes), and run the simulation to a very large number of trials, you could come up with a pretty good estimate of the maximum yield.
But even if all the processes were running at six sigma, you would still have some small fraction of non-conformance (at the conventional six-sigma level, with the usual 1.5-sigma shift, that works out to about 3.4 defects per million opportunities).
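Here’s a bare-bones sketch of that kind of Monte Carlo estimate. The parameter names, tolerances, and sigma levels are all invented for illustration; the point is just that even when every parameter sits comfortably inside its spec limits, the combined non-conformance never quite reaches zero:

```python
# Bare-bones Monte Carlo: sample common-cause variation for a few
# hypothetical parameters, check each trial against its spec limits,
# and estimate the theoretical maximum yield. All numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 1_000_000

# (nominal, process sigma, lower spec limit, upper spec limit)
parameters = {
    "trace_width_mm":    (0.150, 0.004, 0.130, 0.170),   # +/- 5 sigma
    "hole_position_mm":  (0.000, 0.015, -0.060, 0.060),  # +/- 4 sigma
    "solder_volume_pct": (100.0, 3.0, 88.0, 112.0),      # +/- 4 sigma
}

in_spec = np.ones(n_trials, dtype=bool)
for nominal, sigma, lsl, usl in parameters.values():
    samples = rng.normal(nominal, sigma, n_trials)
    in_spec &= (samples >= lsl) & (samples <= usl)

max_yield = in_spec.mean()
print(f"Estimated maximum yield: {max_yield:.4%}")
print(f"Estimated fallout: {(1.0 - max_yield) * 1e6:.0f} parts per million")
```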