by Steven Wachs

Analyzing the Experiment (Part 6) – Prediction Uncertainty and Model Validation

In the last article, we explored the use of contour plots and other tools (such as a response optimizer) to help us quickly find solutions to our models.  In this article, we look at the uncertainty in these predictions.  We also discuss model validation, which ensures that the technical assumptions inherent in the modeling process are satisfied.

We again start by revisiting the battery life DOE example discussed in the previous article.  Recall that we used the optimizer to find a wall thickness that would produce a target battery life of 45.  As a reminder, we constrained the solution to use a Lithium battery, since this made the response (battery life) insensitive to changes in ambient temperature.  In the table below, we see that the wall thickness (uncoded) should be set at 1.44 mm.

An important question for any model is how well it predicts.  Suppose we actually produced a batch of Lithium batteries with the wall thickness set to 1.44 mm.  Would we always expect to get a battery life of exactly 45?  The answer is no!  Our model is not perfect (not all of the variation is explained), and even if it were, the model effects and parameters are based on average responses.  The solution we found predicts that the average battery life will be 45.0 if we set the wall thickness to 1.44 mm.

At the bottom of the output above, we see both a 95% Confidence Interval (CI) and a 95% Prediction Interval (PI).  The confidence interval tells us how much uncertainty exists in the average response.  Thus, if we produce a lithium battery with a wall thickness of 1.44 mm, the range 43.475 to 46.525 has a 95% probability of containing the true average response.  The prediction interval is always wider because it provides the range over which we may expect individual response values to fall.  Thus, for the same battery, the range 41.339 to 48.661 has a 95% probability of containing individual values.  This is how much variation we could expect, given the uncertainty in our model.  Note that the model uncertainty is a function of both the amount of data used to build the model and the experimental error observed in the study.
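
To make the distinction concrete, here is a minimal sketch of how both intervals can be computed with the statsmodels package in Python.  The data are made up for illustration (they are not the article's actual DOE results), and a simple one-factor regression stands in for the full DOE model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data for illustration -- not the article's actual DOE results.
    rng = np.random.default_rng(1)
    thickness = np.repeat([1.0, 1.2, 1.4, 1.6, 1.8], 4)   # wall thickness (mm)
    life = 20 + 17.5 * thickness + rng.normal(0, 1.5, thickness.size)
    data = pd.DataFrame({"thickness": thickness, "life": life})

    # Fit a simple regression model of battery life on wall thickness.
    model = smf.ols("life ~ thickness", data=data).fit()

    # Predict at thickness = 1.44 mm; alpha=0.05 gives 95% intervals.
    pred = model.get_prediction(pd.DataFrame({"thickness": [1.44]}))
    frame = pred.summary_frame(alpha=0.05)
    print(frame[["mean", "mean_ci_lower", "mean_ci_upper"]])  # CI on the average response
    print(frame[["obs_ci_lower", "obs_ci_upper"]])            # PI on individual values

The confidence interval (mean_ci) narrows as more data are collected, while the prediction interval (obs_ci) also reflects the run-to-run experimental error and so remains wider.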

It is important to recognize the degree of uncertainty in predictive models when using them to make predictions, set specifications, and so on.

Next, let’s look at model validation.  The basic reasons for validating the model are summarized below.  To perform the validation, we calculate the residuals associated with each treatment.

Model validation is performed to:

  • test important assumptions in the modeling procedure
  • test for significant non-linearities (2-level designs assume linearity)
  • understand the magnitude of errors in model predictions

Residuals are the differences between the actual responses and the model’s predicted responses (the differences result from model lack of fit as well as experimental error).

The residuals (errors) are calculated by any DOE software program, but they are not difficult to compute.  We’ll look at a simple example.

The experiment above is a 3-factor, 2-level design.  The right-most column contains the observed response, and the fitted model is shown at the bottom (X3 and the X1X3 interaction were significant).  For each row, we can plug the coded values of X1 and X3 into the model and compute the predicted result.  For the first row, both X1 and X3 are “low” (−1), so we have:

y-hat = 20 + 4.5(−1) − 2.5(−1)(−1) = 20 − 4.5 − 2.5 = 13

Thus, the predicted value is 13 and the observed value is 13, so the residual is 13 − 13 = 0.  The residuals for the remaining rows are calculated similarly.
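
A short Python sketch of this calculation follows.  Only the model coefficients and the first observed value (13) come from the article; the remaining observed responses are hypothetical placeholders, since the full table is not reproduced here.

    from itertools import product
    import pandas as pd

    # Full 2^3 factorial in coded units (-1 = low, +1 = high).
    design = pd.DataFrame(list(product([-1, 1], repeat=3)),
                          columns=["X1", "X2", "X3"])

    # Observed responses: only the first value (13) comes from the article;
    # the rest are hypothetical placeholders for the table values.
    design["y_obs"] = [13, 26, 14, 28, 17, 23, 19, 21]

    # The article's fitted model: X3 and the X1*X3 interaction were significant.
    design["y_hat"] = 20 + 4.5 * design["X3"] - 2.5 * design["X1"] * design["X3"]

    # Residual = observed response - predicted response.
    design["residual"] = design["y_obs"] - design["y_hat"]
    print(design)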

Once we compute all the residuals, we can determine how large they are and also graph them on various plots to determine whether there are any significant violations of the model assumptions.  The list below summarizes how residuals should behave if our modeling assumptions are satisfied; a plotting sketch follows the list.

Residuals should:

  • average zero
  • follow a normal distribution
  • exhibit no pattern relative to the predicted response
  • exhibit no pattern relative to run order
  • exhibit no pattern relative to factor levels
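
Here is a minimal sketch of these diagnostic plots using matplotlib and scipy, continuing from the design table in the previous sketch.  The run order shown is hypothetical; in practice it comes from the experiment’s randomization log.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    residuals = design["residual"].to_numpy()
    fitted = design["y_hat"].to_numpy()
    run_order = np.arange(1, len(residuals) + 1)  # hypothetical run sequence

    fig, axes = plt.subplots(2, 2, figsize=(9, 7))

    # Normal probability plot: points near a straight line support normality.
    stats.probplot(residuals, dist="norm", plot=axes[0, 0])

    # Residuals vs. fitted values: look for random scatter around zero.
    axes[0, 1].scatter(fitted, residuals)
    axes[0, 1].axhline(0, linestyle="--")
    axes[0, 1].set(xlabel="Fitted value", ylabel="Residual")

    # Residuals vs. run order: trends suggest drift during the experiment.
    axes[1, 0].plot(run_order, residuals, marker="o")
    axes[1, 0].axhline(0, linestyle="--")
    axes[1, 0].set(xlabel="Run order", ylabel="Residual")

    # Residuals vs. a factor's coded levels: no pattern should appear.
    axes[1, 1].scatter(design["X3"], residuals)
    axes[1, 1].axhline(0, linestyle="--")
    axes[1, 1].set(xlabel="X3 (coded level)", ylabel="Residual")

    plt.tight_layout()
    plt.show()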

Let’s consider some examples of residual plots.

The upper-left plot shows a normal probability plot, which is used to check whether the residuals are reasonably described by a normal distribution.  Since the points fall close to a straight line on this plot, the normality assumption is satisfied.  The upper-right plot shows residuals vs. the model’s predicted (fitted) values.  We are looking for random scatter around zero, and this one looks fine.

Let’s look at a few examples of model violations.  In the plot below (residuals vs. predicted values), the model tends to over-predict for smaller predicted values and under-predict for larger predicted values.  A valid model should not exhibit this pattern; it should predict similarly well across the range of predicted values.

In the same kind of plot below, we see that the variability of the residuals is not constant across the range of predicted values.  This condition is called heteroskedasticity, and it means that the size of the model errors changes significantly across the range of predicted values.  Constant variance of the residuals is a requirement of the modeling method.

Below is a plot of residuals vs. the levels of a given factor.  Just as with predicted values, we should not see a pattern across factor levels.

Finally, we should look at the residuals vs. the run order of the experiment.  Non-random patterns may indicate that a change occurred during the conduct of the experiment (assuming the run order was randomized).  For example, in the plot below, the first half of the runs looks very different from the second half with regard to predictive ability.

Violations of these rules may indicate non-linear responses, missing important factors, or other issues.  Non-constant variance across the range or a lack of normality can often be corrected by transforming the response values before developing the predictive model.
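
As a minimal sketch of one common remedy, scipy’s Box-Cox routine estimates a power transformation of the response (a lambda near 0 is effectively a log transform).  The response values below are hypothetical.

    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    # Hypothetical positive responses whose spread grows with their mean.
    y = np.array([13.0, 26.0, 14.0, 28.0, 17.0, 23.0, 19.0, 21.0])

    # Box-Cox estimates the power transform that best normalizes the data;
    # a lambda near 0 is effectively a log transform.
    y_trans, lam = stats.boxcox(y)
    print(f"estimated lambda = {lam:.2f}")

    # Refit the DOE model on y_trans instead of y, then back-transform
    # predictions with inv_boxcox(prediction, lam) for reporting.

After refitting the model on the transformed response, predictions can be back-transformed to the original units for reporting.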

