
by Steven Wachs

Proper control chart selection is critical to realizing the benefits of Statistical Process Control. Many factors should be considered when choosing a control chart for a given application. These include: [Read more…]
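The article's full list of factors is behind the link. As a rough sketch of the kind of selection logic those factors typically feed, here is a minimal illustration based on common textbook SPC guidelines; the thresholds and chart names below are standard conventions, not the article's own list:

```python
def select_control_chart(data_type, subgroup_size, count_type=None):
    """Suggest a control chart using common textbook guidelines (illustrative only).

    data_type: "variables" (measurements) or "attributes" (counts/classifications)
    subgroup_size: number of observations per sample
    count_type: for attributes data, "defectives" (bad items) or "defects" (occurrences)
    """
    if data_type == "variables":
        if subgroup_size == 1:
            return "I-MR chart (individuals and moving range)"
        elif subgroup_size <= 8:
            return "Xbar-R chart (averages and ranges)"
        else:
            return "Xbar-S chart (averages and standard deviations)"
    elif data_type == "attributes":
        if count_type == "defectives":
            return "p or np chart (fraction or number of defective items)"
        else:
            return "c or u chart (count of defects per unit)"
    raise ValueError("unknown data type")

print(select_control_chart("variables", subgroup_size=5))  # Xbar-R chart
```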
by Ray Harkins

This strange word, andragogy, was popularized in the early 1970s by educational researcher Malcolm Knowles. It is etymologically rooted in two Greek words: “aner”, meaning “man”, and “agogos”, meaning “to lead”. Fused together, andragogy means “leading men” or, to paraphrase, leading or educating adults. Andragogy is often contrasted with pedagogy, which typically refers to the education of children. [Read more…]
by Oleg Ivanov

Them that die’ll be the lucky ones.
~ Robert Louis Stevenson
This post is a continuation of the series “Is the HALT a Life Test or not?”
Test two samples and demonstrate reliability R = 0.99 over the lifetime with CL = 0.99. This is real. But what price do we pay, besides a test lasting three lifetimes (which, by the way, can be significantly accelerated)? [Read more…]
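For readers who want to check the arithmetic: under the common zero-failure (success-run) model, testing n units for m design lifetimes with an assumed Weibull shape parameter beta is treated as equivalent to testing n·m^beta units for one lifetime, so CL = 1 − R^(n·m^beta). A minimal sketch; the beta = 5 value is my illustrative assumption, which happens to reproduce the triple-lifetime duration mentioned above:

```python
import math

def lifetime_multiplier(n, R, CL, beta):
    """Test duration (in design lifetimes) for a zero-failure test of n units
    to demonstrate reliability R at confidence CL, assuming Weibull shape beta.
    Success-run model: CL = 1 - R**(n * m**beta), solved for m."""
    return (math.log(1 - CL) / (n * math.log(R))) ** (1 / beta)

# beta = 5 is an assumed shape parameter for illustration, not from the article
m = lifetime_multiplier(n=2, R=0.99, CL=0.99, beta=5)
print(round(m, 2))  # ~2.97, i.e., roughly a triple-lifetime test
```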
by Steven Wachs

Statistically based DOE provides several advantages over more simplistic approaches such as “one-factor-at-a-time” experimentation. This article will explore the first two of these advantages in a bit more detail; the second two will be discussed in the next article. [Read more…]

Myron Tribus’ UCLA Statistical Thermodynamics class introduced me to entropy, -SUM[p(t)ln(p(t))], where p(t) is the probability of state t of a system. Professor Tribus later advocated maximum-entropy reliability estimation, because it “…best represents the current state of knowledge about a system…” [Principle of maximum entropy – Wikipedia]. Caution! This article contains statistical neurohazards.
Claude Shannon wrote that entropy (log base 2) represents information bits, “…an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel.” [Beirlant et al.]
Maximum likelihood estimation is one way to estimate reliability from data. It maximizes the probability density function of the observed data, PRODUCT[p(t)], e.g., for observed failures at ages t. That is equivalent to maximizing SUM[ln(p(t))], i.e., minimizing -SUM[ln(p(t))]. Maximum-entropy reliability estimation maximizes the entropy -SUM[p(t)ln(p(t))]. That’s the same as maximizing the expected value, -SUM[p(t)ln(p(t))], of the negative log likelihood -ln(p(t)). Fine, if you have life data: ages at failures t, censored or not. [Read more…]
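As a small numerical check of that relationship (my example, using an exponential life model rather than anything from the article): at the maximum likelihood estimate, the average negative log likelihood of the data equals the entropy of the fitted distribution, since entropy is the expected value of -ln(p(t)).

```python
import math
import random

random.seed(1)
# Simulated failure ages from an exponential life distribution (illustration only)
ages = [random.expovariate(0.5) for _ in range(1000)]

# Maximum likelihood estimate of the exponential failure rate: 1 / (mean age)
lam = len(ages) / sum(ages)

# Average negative log likelihood, -(1/n) SUM[ln p(t)], with p(t) = lam*exp(-lam*t)
avg_nll = -sum(math.log(lam) - lam * t for t in ages) / len(ages)

# Differential entropy of the fitted exponential distribution: 1 - ln(lam)
entropy = 1 - math.log(lam)

print(round(avg_nll, 6), round(entropy, 6))  # identical at the MLE
```

For the exponential model the agreement is exact at the MLE, which makes it a convenient sanity check of the "entropy = expected negative log likelihood" identity.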
by Steven Wachs

Statistical Process Control charts have been called the Voice of the Process. Progressive manufacturers utilize control charts to “listen” to their processes so that potentially harmful changes are detected and rectified quickly. However, not all SPC programs deliver their full potential, as there are many elements that must be done well to achieve maximum utility. Highly effective SPC programs combine technical competencies, such as using an appropriate chart and sample size for the application, with effective management techniques, such as enabling operator buy-in and involvement. This article identifies ten keys that unleash the power of SPC. [Read more…]
by Larry George

Ralph Evans was editor of the IEEE Transactions on Reliability from 1969 until 2004. He was a very good editor for my 1977 article, and he used me as a reviewer because I was critical of BS and academic exercises. Ralph moved to University Retirement Community, Davis, CA. He died in 2013 (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587564). I wish I’d known he lived nearby so I could have visited and argued with him.
Ralph’s editorials [1 and 2] pled, “Data, Data, Oh Where Art Thou Data?” He wrote, “Field-data are largely garbage. I believe they deserve all the negative thinking possible.” “True field-data are wonderful, much better than fancy equations. Unfortunately, they are very difficult to get. Thus data from the field are largely garbage because they do not represent what really happened.” [Read more…]
by Ray Harkins

Aside from meeting specific requirements within quality standards such as ISO 9001 and ISO 13485, well-designed quality system metrics can also serve as meaningful indicators of the strengths and weaknesses of your organization’s processes. As a quality manager, I often consider how precisely our quality system objectives and other metrics describe the effectiveness of our quality processes. Certain metrics, such as customer-reported DPPM (defective parts per million) and customer survey results, usually indicate your customers’ satisfaction with quality. As metrics like these are tracked over time, managers get a general sense of improvement or decline. Composite measures such as these, however, do not discriminate between quality assurance (preventive) and quality control activities. [Read more…]
by Steven Wachs

Process Stability and Process Capability are both extremely important aspects of any manufacturing process. The concepts behind them, and the relationship between them, are often misunderstood. This article attempts to clarify both ideas and how they relate. [Read more…]
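As a bare-bones numerical sketch of the distinction (the data and specification limits below are made up for illustration, and in practice sigma is estimated from within-subgroup variation rather than the overall standard deviation): stability is judged against control limits computed from the process's own variation, while capability compares that variation to the specification.

```python
import statistics

# Hypothetical measurements and specification limits (illustrative values only)
data = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 10.0, 9.9]
LSL, USL = 9.0, 11.0

mu = statistics.mean(data)
sigma = statistics.stdev(data)  # simplification; SPC uses within-subgroup estimates

# Stability: do all points fall within the 3-sigma control limits?
LCL, UCL = mu - 3 * sigma, mu + 3 * sigma
stable = all(LCL <= x <= UCL for x in data)

# Capability: how does the process spread compare to the specification width?
Cp = (USL - LSL) / (6 * sigma)
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)

print(f"stable={stable}, Cp={Cp:.2f}, Cpk={Cpk:.2f}")
```

A process can be stable yet incapable (consistent but too wide for the spec), or capable on paper yet unstable, which is why the two questions must be asked separately.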
by Carl S. Carlson

“Many ideas grow better when transplanted into another mind than the one where they sprang up.”
~ Oliver Wendell Holmes
In the international FMEA community, one of the hot topics is how much of an FMEA can be automated versus how much needs to be team-based. Some experts say the future of FMEA requires an automated approach, as systems are getting more and more complex. Others say FMEA must always be grounded in a team of subject matter experts, narrowly focused on the highest priority issues.
In this article, I will share my thoughts on why FMEA needs to be team-based, and what elements can be prepopulated or automated.

When planning and designing an experiment, it may be tempting to try to accomplish all the objectives in a single experiment. The thinking is often that experimentation is time consuming and expensive, so one experiment must be better than multiple experiments.
However, it is generally a good idea to plan for multiple experiments, which is often a much more efficient approach. We like to think of experimentation as a methodology that is best implemented in phases. We define the experimental phases as: [Read more…]
by Larry George

My wife and I were in Firestone-Walker Brewery (Buellton, California) after Solvang Danish Days. (That’s me playing in the Solvang Village band.) My wife was comparing an Adam Firestone photo on the wall with a man at a table. I was admiring a woman with balletic posture seated near the bar. The balletic woman picked up a pizza, delivered it to the man, and sat with him. My wife went over and asked the man if he was Adam Firestone. He was, with his sister Polly. While my wife chatted with them, I did not engage, because I was responsible for FORD recalling the Firestone tire sizes that Firestone did NOT recall. [Read more…]
by Ray Harkins

Researchers in psychology and other social sciences have long been aware of the observer effect—a phenomenon that occurs when the subject of a study alters their behavior because they are aware of the observer’s presence. Researchers typically design their experiments to reduce or eliminate this effect to avoid skewing the results of the study. Beyond the realm of research, though, an understanding of the observer effect and its applications is valuable wherever people’s actions are being evaluated. [Read more…]
by Steven Wachs

Experimentation is frequently performed using trial-and-error approaches, which are extremely inefficient and rarely lead to optimal solutions. Furthermore, when it’s desired to understand the effect of multiple variables on an outcome (response), “one-factor-at-a-time” trials are often performed. Not only is this approach inefficient, it also inhibits the ability to understand and model how multiple variables interact to jointly affect a response. Statistically based Design of Experiments provides a methodology for optimally developing process understanding via experimentation. [Read more…]
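To make the interaction point concrete, here is a minimal 2x2 full factorial sketch (the responses are made-up numbers for illustration, not from the article): four runs estimate both main effects and the interaction between the factors.

```python
# 2^2 full factorial in coded units: each run is (level of A, level of B)
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
# Hypothetical responses for the four runs (illustrative values only)
y = [10.0, 14.0, 12.0, 22.0]

# Effect estimates: contrast of responses over the +/- levels, divided by 2
effect_A = sum(a * yi for (a, _), yi in zip(runs, y)) / 2
effect_B = sum(b * yi for (_, b), yi in zip(runs, y)) / 2
effect_AB = sum(a * b * yi for (a, b), yi in zip(runs, y)) / 2

print(effect_A, effect_B, effect_AB)  # 7.0, 5.0, 3.0
```

The nonzero AB effect means the effect of A depends on the level of B, which is exactly what one-factor-at-a-time trials, varying A only while B is held fixed, can never estimate.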
by Steven Wachs

Ask people involved with the design and manufacture of a product the following question: “What is Quality?” Many if not most of the responses will be some form of the following: “Quality is ensuring that our products meet the customer (or engineering) specifications.” Unfortunately, this leads to a “conformance to specifications” or a “Product Control” approach to quality. [Read more…]