
by Dianna Deeney

QDD 057 Design Input & Specs vs. Test & Measure Capability


When defining design inputs and specifications, what does a design engineer need to consider about the test and measure capability? What are the typical ways that we assess the variability that a test or measurement introduces into our result?

 

 

View the Episode Transcript

What’s the insight to action?

  • Continue to recognize that there's variation in everything, including how we test and measure our products. We need to understand how that variability affects the results, because we're making decisions based on those results.
  • Remember the rule of thumb: an engineer in a similar position 10 years from now should be able to pick up your test and measurement system and reproduce the results you were getting.
  • Check your test methods. Even if you're using standard test and measurement methods, double check that they're validated for the range that you need for your design.


Explore More

Get a deeper dive into these topics through these other QDD episodes:

The Way We Test Matters

Statistical vs. Practical Significance

What is ‘Production Equivalent’ and Why Does it Matter?

Episode Transcript

You’re listening to an installment of the Quality during Design “Versus Series”. In this series, we’re comparing concepts within quality and reliability to better understand them and how they can affect product design engineering. We have eight episodes in this series, which means we’ll be reviewing at least 16 topics. Let’s get started. Hello and welcome to Quality during Design, the place to use quality thinking to create products others love, for less. My name is Dianna Deeney. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join the conversation. Visit qualityduringdesign.com and subscribe.

Welcome to Quality during Design, for products others love, for less. I’m your host, Dianna Deeney. This is the place where we talk about quality and reliability engineering concepts, methods, and techniques, and how they fit into product design development and design engineering. Today we’re comparing a couple of topics: design inputs and specifications versus the test and measurement capability, and how we’re assessing those design inputs and specs.

Just a couple of episodes ago, we talked about how there’s variability in the manufacturing process and how that compares to the design specifications and limits that we’re setting. We concluded that there’s variation in everything, and that also includes how we’re testing and measuring the characteristics of our product. How we measure and how we test also introduces variability. In fact, let’s take a look at the definition of measurement from ASQ.

ASQ defines measurement as “an approximation or estimate of the value of the specific quantity subject to measurement, which is complete only when accompanied by a quantitative statement of its uncertainty.” What’s important about this definition is that a measurement is really just an estimate of whatever we’re measuring, and that we need to understand its uncertainty and the variation that the measurement method itself is introducing.
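
As a small illustration of that definition, a result can be reported as an estimate together with an expanded uncertainty. This is only a sketch: the numbers and the uncertainty components below are made up.

```python
import math

# Hypothetical example: reporting a length measurement with its uncertainty.
# The observed value is only an estimate; the measurement statement is
# complete once a quantitative uncertainty accompanies it.
observed_mm = 10.02          # average of repeated readings (the estimate)
u_repeat_mm = 0.012          # standard uncertainty from repeated readings
u_cal_mm = 0.008             # standard uncertainty from the gage's calibration

# Combine independent standard uncertainties in quadrature, then expand
# with a coverage factor k = 2 (roughly 95% coverage).
u_combined = math.sqrt(u_repeat_mm**2 + u_cal_mm**2)
U_expanded = 2 * u_combined

print(f"Length = {observed_mm:.2f} mm ± {U_expanded:.3f} mm (k = 2)")
```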

Understanding that measurement and test introduce variability into the data is important for product design engineering for several reasons. One is that we’re setting design specifications and limits for our products, so we need to consider the test and measurement capability: are we able to test it, and are we able to measure it, to the significance that we’re defining our spec against? We are also controlling risk. Sometimes we define a detection control to control the risk of something bad happening. Is that control, and is that measurement, adequate to control the risk? Is our test effective at measuring our design?
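
One common way to ask whether a gage can support a given spec is a precision-to-tolerance (P/T) ratio, which compares the measurement system’s spread to the spec window. A minimal sketch, with made-up limits and a commonly cited acceptance guideline:

```python
# Hypothetical check of measurement capability against design spec limits.
usl, lsl = 25.40, 25.00      # upper and lower spec limits (mm)
sigma_gage = 0.015           # std dev of the measurement system (e.g., from an R&R study)

# Precision-to-tolerance ratio: measurement spread vs. the spec window.
pt_ratio = 6 * sigma_gage / (usl - lsl)
print(f"P/T ratio = {pt_ratio:.0%}")

# Widely used guideline: under ~10% is good, 10-30% may be acceptable,
# over 30% means the gage consumes too much of the tolerance to support
# accept/reject decisions against these limits.
if pt_ratio > 0.30:
    print("Measurement system is likely inadequate for this spec.")
```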

We also need to consider consistency in our test and our measurement and its outputs over time, and this can matter for a lot of future development work and for different reasons during product design development. One is proving stability of design outputs over time. Or maybe we’re developing the next generation of a product and it needs to perform comparably to what we’ve had in the field for years. Are we changing materials? If so, is the strength the same over time, from when we started to after we’ve preconditioned it or put it through some stress testing?

Level of detail is another thing that we need to think about when we’re thinking about the variability introduced by our test and measurement methods. For test plans, protocols, and measuring instructions, there is a rule of thumb: another engineer 10 years from now should be able to replicate your results. Why this rule of thumb? Because it helps to control our test and measurement systems so that when we’re looking at the results, they’re not confounded by unnecessary variability in the way we test and measure the product.

So how do we assess the way that we test and measure? How do we figure out what uncertainty is being introduced by how we’re testing and measuring? There are several measures of how we test and measure.

When considering the whole measuring process and system, we can evaluate it against its precision and accuracy. Precision has to do with repeatability and reproducibility: when we measure the same part with our measuring process and system, are we able to consistently get the same result? When we’re evaluating our measuring process for accuracy, we’re looking at what we’re observing, what we’re measuring, versus a reference standard. Can we measure the parts accurately over a range of different part sizes, and is the measurement stable over time?
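
As a small illustration of those two questions, repeated readings of a reference standard give a simple estimate of precision (repeatability) and accuracy (bias against the certified value). The data here are fabricated:

```python
import statistics

# Fabricated repeated measurements of a reference standard with a known,
# certified value.
reference_value = 50.000   # certified value of the standard (mm)
readings = [50.012, 49.998, 50.021, 50.007, 49.995, 50.015, 50.003, 50.010]

# Precision: do we get the same result when we measure the same part?
repeatability_sd = statistics.stdev(readings)

# Accuracy: how far is what we observe from the reference standard?
bias = statistics.mean(readings) - reference_value

print(f"Repeatability (std dev): {repeatability_sd:.4f} mm")
print(f"Bias vs. reference:      {bias:+.4f} mm")
# Checking bias at several reference sizes addresses linearity; repeating
# the study over time addresses stability.
```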

When we’re thinking about equipment and instruments, we can evaluate those for their consistency and their sensitivity and readability. Consistency is the ability of the equipment to give us the same reading on the instrument scale when the same dimension is measured. Sensitivity and readability are more constant factors of the equipment and instrument.

Now, of course, those are measures of how we measure things. Precision, accuracy, consistency, sensitivity, and readability are all things we consider when we’re designing and devising a way to measure our product. So how do we control those measurement processes, systems, equipment, and instruments so that we’re confident whenever we use them? To evaluate the precision of our measuring process or system, we can do an R&R study (a repeatability and reproducibility study). To ensure accuracy and consistency, we can rely on calibration. To ensure the whole measuring process, with its equipment and instruments, is in control, we might rely on a measurement assurance protocol, or MAP.
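
An R&R study splits the measurement variation into repeatability (the same operator remeasuring the same part) and reproducibility (differences between operators). The sketch below is a simplified variance-components illustration on fabricated data, not a full AIAG-style study:

```python
import statistics

# Fabricated data: measurements[operator][part] -> repeated readings (mm).
measurements = {
    "op_A": {"part1": [10.01, 10.02, 10.00], "part2": [10.11, 10.12, 10.10]},
    "op_B": {"part1": [10.04, 10.05, 10.03], "part2": [10.14, 10.15, 10.13]},
}

# Repeatability: pooled within-cell variance (same operator, same part).
cell_variances = [statistics.variance(reps)
                  for parts in measurements.values()
                  for reps in parts.values()]
var_repeatability = statistics.mean(cell_variances)

# Reproducibility (simplified): spread between the operator averages.
operator_means = [statistics.mean([x for reps in parts.values() for x in reps])
                  for parts in measurements.values()]
var_reproducibility = statistics.variance(operator_means)

grr_sd = (var_repeatability + var_reproducibility) ** 0.5
print(f"Repeatability sd:   {var_repeatability ** 0.5:.4f} mm")
print(f"Reproducibility sd: {var_reproducibility ** 0.5:.4f} mm")
print(f"Combined gage R&R:  {grr_sd:.4f} mm")
```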

What’s today’s insight to action? Continue to recognize that there’s variation in everything, including how we test and measure our products. We need to understand how that variability affects the results, because we’re making decisions based on those results. Remember the rule of thumb: an engineer in a similar position 10 years from now should be able to pick up your test and measurement system and reproduce the results you were getting. And even if standard test and measurement methods are used, double check that they’re validated for the range that you need for your design.

If you like the content in this episode, visit qualityduringdesign.com, where you can subscribe to the weekly newsletter to keep in touch. This has been a production of Deeney Enterprises. Thanks for listening.

 

Filed Under: Quality during Design

About Dianna Deeney

Dianna is a senior-level Quality Professional and an experienced engineer. She has worked over 20 years in product manufacturing and design and is active in learning about the latest techniques in business.

Dianna promotes strategic use of quality tools and techniques throughout the design process.
