One of the most persistent points of confusion in quality engineering is the difference between traditional statistical process capability analysis and the Six Sigma approach. Specifically, why does Six Sigma define a “six sigma” process as having 3.4 defective parts per million (DPPM), when a straightforward application of statistical tables suggests that six standard deviations from the mean should correspond to a far lower defect rate—about 2 parts per billion? The answer lies in what Six Sigma practitioners call the 1.5 sigma shift.
The Statistical Reality
For any normal distribution, the probability of exceeding six standard deviations from the mean—either in the upper or lower tail—is extraordinarily small. Using a standard Z-table or the following formula in Excel:
=2*(1-NORM.S.DIST(6,TRUE))*10^9
we find that only about 2 parts per billion (PPB) of units should be outside the six sigma limits. This is orders of magnitude smaller than the well-known Six Sigma claim of 3.4 parts per million (PPM).
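For readers who prefer a scripting environment, the same check can be done in Python. This is a minimal sketch assuming SciPy is available (the article's own calculation uses Excel's NORM.S.DIST):

```python
from scipy.stats import norm

# Two-sided tail area beyond +/- 6 standard deviations of a
# standard normal distribution, expressed in parts per billion.
p_outside = 2 * norm.sf(6)           # norm.sf(x) = 1 - CDF(x)
print(f"{p_outside * 1e9:.2f} PPB")  # prints ~1.97 PPB
```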
The 1.5 Sigma Shift: The Six Sigma Interpretation
To reconcile this discrepancy, Six Sigma methodology applies a 1.5 sigma shift to all process capability calculations. The reasoning is that no matter how rigorous a short-term capability study might be, long-term process variation is inevitable due to factors like tool wear, operator variation, material inconsistencies, and environmental changes. To account for this, Six Sigma assumes that a process operating at a measured short-term level of 6 sigma will actually degrade over time and behave more like a 4.5 sigma process in the long run.
By shifting the process mean 1.5 sigma toward a specification limit, Six Sigma recalculates the defect rate using Z = 4.5. Only the near-side tail matters after the shift—the opposite tail now sits 7.5 sigma away and contributes a negligible amount—so the calculation is one-sided:
=(1-NORM.S.DIST(4.5,TRUE))*10^6
This results in approximately 3.4 defective parts per million—the widely cited Six Sigma benchmark for process excellence.
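The same arithmetic in Python—again a sketch assuming SciPy—makes the one-sided nature of the shifted calculation explicit:

```python
from scipy.stats import norm

# After shifting the mean 1.5 sigma toward one spec limit, the near
# tail sits at 4.5 sigma and the far tail at 7.5 sigma.
near_tail = norm.sf(4.5)  # ~3.40e-6
far_tail = norm.sf(7.5)   # ~3.2e-14, negligible
print(f"{(near_tail + far_tail) * 1e6:.2f} DPPM")  # prints ~3.40 DPPM
```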
Is the 1.5 Sigma Shift Justified?
Here’s where things get controversial. The 1.5 sigma shift is not a universal statistical truth—it’s an empirical rule of thumb. While some long-term degradation is expected in most processes, the exact amount of shift is highly dependent on the specific process and operating conditions. Applying a fixed 1.5 sigma adjustment to all processes, in all industries, under all conditions is an oversimplification.
In high-precision industries such as aerospace, medical device manufacturing, and semiconductor fabrication, where process controls are exceptionally tight, a 1.5 sigma shift may significantly overestimate long-term variation. Conversely, in industries with highly variable inputs and complex processes, it may understate the true long-term drift.
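To see how sensitive the long-term estimate is to the assumed shift, consider this illustrative sketch (the shift values are hypothetical, SciPy assumed): for a process measured at 6 sigma short-term, the predicted defect rate moves by orders of magnitude as the assumption changes.

```python
from scipy.stats import norm

# Long-term one-sided defect estimate for a process measured at a
# short-term level of 6 sigma, under different assumed mean shifts.
for shift in (0.0, 0.5, 1.0, 1.5, 2.0):
    dppm = norm.sf(6.0 - shift) * 1e6
    print(f"assumed shift = {shift:.1f} sigma -> {dppm:.4g} DPPM")
```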
Why This Matters for Quality and Reliability Professionals
If you’re working in reliability or quality engineering, it’s crucial to recognize whether a given capability analysis assumes the 1.5 sigma shift or not. Many online DPPM conversion tables and software tools bake in this assumption, which can lead to misleading conclusions if applied blindly.
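A quick way to tell which convention a table uses is to reproduce both columns side by side, as in this sketch (SciPy assumed). If a table lists 6 sigma as 3.4 DPPM, the 1.5 sigma shift is baked in:

```python
from scipy.stats import norm

# Raw two-sided defect rates versus the shifted one-sided rates
# found in most Six Sigma conversion tables.
print(f"{'sigma':>5} {'raw DPPM':>12} {'shifted DPPM':>14}")
for z in range(2, 7):
    raw = 2 * norm.sf(z) * 1e6        # no shift, both tails
    shifted = norm.sf(z - 1.5) * 1e6  # 1.5 sigma shift, near tail only
    print(f"{z:>5} {raw:>12.4g} {shifted:>14.4g}")
```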
So the next time you’re evaluating a process capability report or using Six Sigma metrics to drive decision-making, ask yourself:
- Is this defect rate based on a raw statistical calculation or a Six Sigma-adjusted estimate?
- Does the 1.5 sigma shift reflect the actual long-term variation of this process?
- Would a data-driven approach using historical process drift be more appropriate than applying a blanket assumption?
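One way to answer that last question is to estimate the shift directly from your own data, by comparing the sigma level implied by within-subgroup (short-term) variation against the one implied by overall (long-term) variation. The sketch below is purely illustrative: the function name, the simulated data, and the spec limit are all hypothetical, and the s-bar estimate omits the usual c4 bias correction.

```python
import numpy as np

def observed_shift(subgroups: np.ndarray, usl: float) -> float:
    mean = subgroups.mean()
    sigma_st = subgroups.std(axis=1, ddof=1).mean()  # within-subgroup (short-term)
    sigma_lt = subgroups.ravel().std(ddof=1)         # overall (long-term)
    z_st = (usl - mean) / sigma_st                   # short-term sigma level
    z_lt = (usl - mean) / sigma_lt                   # long-term sigma level
    return z_st - z_lt                               # observed degradation

rng = np.random.default_rng(0)
drifting_means = rng.normal(0.0, 0.5, size=(50, 1))  # mean wanders between subgroups
data = rng.normal(loc=drifting_means, scale=1.0, size=(50, 5))
print(f"observed shift: {observed_shift(data, usl=6.0):.2f} sigma")
```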
The Bottom Line
The 1.5 sigma shift is a useful heuristic, but it is not a universal law of nature. While it provides a convenient way to account for long-term process variation, it should be used with a critical eye and an understanding of the specific process dynamics at play. When in doubt, examine real historical data rather than relying solely on theoretical adjustments.
Understanding the nuances of process capability metrics is essential for making informed, data-driven decisions in quality and reliability engineering. Whether you accept the Six Sigma definition or prefer a more traditional statistical interpretation, being aware of the 1.5 sigma shift debate will help you navigate the complex landscape of process performance analysis with confidence.

Ray Harkins is the General Manager of Lexington Technologies in Lexington, North Carolina. He earned his Master of Science from Rochester Institute of Technology and his Master of Business Administration from Youngstown State University. He also teaches manufacturing and business-related skills such as Quality Engineering Statistics, Reliability Engineering Statistics, Failure Modes and Effects Analysis (FMEA), and Root Cause Analysis and the 8D Corrective Action Process through the online learning platform, Udemy. He can be reached via LinkedIn at linkedin.com/in/ray-harkins or by email at the.mfg.acad@gmail.com.