It’s hard enough to get people to focus on quality when their company’s name is on the product.
It’s even more challenging when the design and/or manufacturing of the product is outsourced. How do you effectively manage product quality indirectly through suppliers and subcontractors?
In one of my previous jobs, my responsibilities included managing a factory quality organization at one of the world’s leading contract manufacturers.
This was a very high-volume production environment, and our customer required weekly and sometimes daily monitoring of several key performance metrics for all their CMs.
These were also the focus of our quarterly business reviews attended by high-level management on both sides, so managing these metrics got a lot of attention.
Cost, inventory, and throughput metrics are fairly easy to understand and manage, but quality is not as straightforward.
Ultimately, quality as the user experiences it, measured by field failures, return rates, and other warranty costs, is a trailing indicator that isn't detectable until weeks or months after the product leaves the factory.
It’s obviously not practical to do full functional and life testing on every finished product, so manufacturers (and sometimes their customers) sample the finished products and perform measurements and abbreviated tests that are designed to give everyone a high level of confidence about the quality of the larger population.
The yields and defect rates from these end-of-line tests and audits are typically used as a proxy measure for product quality.
This is necessary and useful; however, these measures alone can't isolate the supplier's contribution to product quality, particularly when the customer provides the product design, part specifications, and sometimes even the manufacturing process design.
What is the specific contribution of the supplier and how can that be measured?
The supplier is clearly accountable for those elements of the value delivery system that they directly control, and in the case of quality, this includes all sources of production variability.
Suppliers should be measured according to their understanding and management of these sources of variability.
Where the customer provides the original design, someone (depending on the contractual relationship) must perform an analysis of the design to identify the critical part and performance specifications that must be controlled, ideally expressed as variable data (continuous measurements) rather than attribute data (pass/fail counts).
It’s the unique responsibility of the supplier to implement a process that can deliver products with dimensions and other characteristics that vary randomly according to common causes and are in conformance with the specs.
That means the supplier must implement measurement and tracking of the critical part characteristics to build control charts, assess the variability of the production process, identify and eliminate special causes to establish a stable process, and then assess the capability of the process to meet the specs.
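To make that concrete, here is a minimal sketch of the control-chart step, using an individuals (I-MR) chart with the standard d2 = 1.128 constant for moving ranges of two. The measurements and the dimension itself are hypothetical, for illustration only.

```python
# Sketch of an individuals (I-MR) control chart check on a critical
# dimension. Data and limits are illustrative, not from a real process.

def control_limits(samples):
    """Return (center, lcl, ucl) for an individuals chart."""
    n = len(samples)
    center = sum(samples) / n
    # Average moving range between consecutive measurements
    mr_bar = sum(abs(samples[i] - samples[i - 1]) for i in range(1, n)) / (n - 1)
    sigma_est = mr_bar / 1.128   # within-process sigma estimate (d2 for n=2)
    return center, center - 3 * sigma_est, center + 3 * sigma_est

def out_of_control(samples):
    """Indices of points beyond the 3-sigma limits: candidate special causes."""
    _, lcl, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Hypothetical measurements of a critical dimension (mm)
dims = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.60, 10.01, 9.98]
print(out_of_control(dims))  # [7] -- the 10.60 reading flags a special cause
```

A point outside the limits doesn't explain the special cause; it tells the supplier where to go investigate before the process can be declared stable.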
This is simpler when the supplier's output is a single part, but it's harder to manage when the output is a fully assembled product.
End-of-line tests and measurements are too late and don’t provide enough detail to understand special causes of variability.
The challenge to suppliers is to use the initial design analysis and information from failed units to identify intermediate or in-process measures that can provide an early indicator of quality.
Lack of value for failure analysis
Unfortunately, this kind of failure analysis is not typically valued in a manufacturing environment that’s focused on meeting the production schedule, where failed units are reworked and retested until they can be added to the outgoing shipment.
It’s ironic because one of the biggest opportunities to improve throughput and productivity is to spend a little time understanding and eliminating the causes of test failures.
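That understanding can start with something as simple as tallying failure causes from the rework log to see which ones dominate. A minimal sketch, using hypothetical failure codes:

```python
# Sketch of a failure-cause Pareto from a (hypothetical) rework log:
# counting which causes dominate is the first step toward eliminating
# them rather than endlessly reworking and retesting.
from collections import Counter

rework_log = ["solder_bridge", "missing_part", "solder_bridge",
              "cold_joint", "solder_bridge", "missing_part"]

pareto = Counter(rework_log).most_common()
print(pareto)  # [('solder_bridge', 3), ('missing_part', 2), ('cold_joint', 1)]
```

The top entry points to the biggest single opportunity to improve both quality and throughput at once.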
The supplier must demonstrate their ability to understand and manage the variability of the individual process steps that are critical to final product quality.
Sources of variability include incoming parts, materials, consumables, and sub-assemblies; the performance of operators, tools, jigs, and fixtures; and production environmental conditions such as temperature, humidity, and cleanliness.
What does this mean for people who want to assess the quality performance of their suppliers?
Assuming the critical parameters and performance characteristics have already been defined, the supplier should be able to present evidence that all processes that contribute to those parameters and characteristics are stable and in control.
By the way, that should be a necessary criterion for the start of production. If the processes are not stable, the supplier should be able to explain what they're doing to eliminate special causes, supported by sound statistical and engineering analysis.
If the processes are stable but not capable of meeting specifications, that requires a collaborative investigation to determine how to modify the process, again something that should be done before the start of production.
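The capability check itself is a small calculation once stability is established. A sketch using the common Cpk index, with a hypothetical dimension and spec limits; 1.33 is a widely used acceptance threshold, though the contract should state the actual requirement:

```python
# Sketch of a process-capability (Cpk) check for a stable process,
# against hypothetical spec limits of 10.00 +/- 0.15 mm.
import statistics

def cpk(samples, lsl, usl):
    """Cpk: how well a stable process fits within the spec limits."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical stable production run of a critical dimension (mm)
run = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.02, 10.01, 9.98]
value = cpk(run, lsl=9.85, usl=10.15)
print(round(value, 2))  # 2.45 -- comfortably above a 1.33 threshold
```

Note the order matters: Cpk is only meaningful after control charts show the process is stable, because the standard deviation of an unstable process isn't a reliable predictor of future output.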
Under no circumstances should a supplier tamper with a process that’s unstable (they should be removing special causes instead), or otherwise take unauthorized action in response to a quality problem.
If you’re not asking for this information, then you don’t understand your supplier’s responsibility for quality.