by Dianna Deeney

QDD 052 Discrete Data vs. Continuous Data

Discrete Data vs. Continuous Data

Once we’ve decided to control something (think of our prevention and detection controls), we then need to decide how to measure it. Different controls may need different measuring requirements, which can give us discrete or continuous data.

We treat these data types differently when collecting them, determining sample sizes, and analyzing the results. Tune in to learn more about how to take the next step in defining controls: figuring out how to measure them and considering the data.

 


When we're creating design specifications, we're thinking about how they link back to our controls. And we think forward to how someone is going to measure them.

No matter which method we choose to collect data, we need to verify that the controls are judged or measured consistently, that equipment and tools are capable at the correct level of significance, and – if it's a qualitative measure – that different people will come to the same conclusion.

Another QDD episode you might like:
Designing Specs for QA

 

Citations

Post graphic attribution:
Background vs vector created by macrovector – www.freepik.com
Data vector created by storyset – www.freepik.com

Video graphic attribution:
Warehouse worker photo created by aleksandarlittlewolf – www.freepik.com
Yes/no icon vector created by starline – www.freepik.com

 

Episode Transcript

You’re listening to an installment of the “Quality during Design Versus Series”. In this series, we’re comparing concepts within quality and reliability to better understand them and how they can affect product design engineering. We have eight episodes in this series, which means we’ll be reviewing at least 16 topics. Let’s get started. Hello and welcome to Quality during Design, the place to use quality thinking to create products others love for less. My name is Dianna Deeney. I’m a senior level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join the conversation. Visit qualityduringdesign.com and subscribe.

Hi, welcome to Quality during Design, for products others love for less. I am your host, Dianna Deeney. We are in a "Quality during Design Versus Series," where we're comparing at least two different quality topics and learning how we can apply them to product design. In the previous episode, we talked about controls, specifically prevention and detection controls: the kinds of things that we want to implement within our design and in our users' process that will help detect or prevent potential failures and reduce the risk that our product poses to its own performance, usability, and other safety measures. Well, once we've defined a control, we then need to figure out how to measure it. So today we're talking about two different types of data that we can use for measurement to verify the effectiveness of our controls. Specifically, we're talking about discrete data and continuous data.

Now, as design engineers, we need to handle, interpret, and translate a lot of different data. Our data could be coming from our own formative or summative studies about our users with our product, or it could be coming from a third party about our users. We need to be able to analyze that data and translate it into customer needs or requirements. We also have data from bench-top testing and from our test lab. We could also be looking at data that our suppliers are giving us: if we're evaluating a component for our design and they give us data, we need to be able to interpret it properly so that we understand whether it's going to be appropriate for our device. And we are also setting specifications, limits, and tolerances that are important to the functionality, safety, and performance of our design. We need to consider what type of data we'll need to properly monitor our design controls, because we also need to communicate that to others.

Now, why do we need to categorize our data, the type of data that we're going to be collecting? I asked you to categorize your controls for your designs last week. And now I want you to think about what category your data lies in. Well, categorizing these things helps us plan ahead, and it makes us pay attention to the details that matter. If not now, when we're working on our design specs, then definitely later, when we're deciding on next steps or passing it off for somebody else to measure for control. Choosing the right kind of data, be it discrete or continuous, affects a few things. It affects how and when we collect it. It also determines sample sizes: how many do we need to be able to make a certain confidence statement about the results? And it'll also dictate what types of analyses can be performed.

So let's go ahead and define what discrete data is versus continuous data. Discrete data can be thought of as counts of categories, like the number of failures in a given time period, the number of cycles until the first failure, or the number of defects on a part. Discrete data could also be a proportion, like the proportion nonconforming. It can also be binary: yes/no, good/bad, pass/fail. Attribute data is a type of discrete data. Discrete data is also known as qualitative because we're collecting information about the "quality" of our product. Usually, discrete data is easy and quick to collect: think of check sheets. Continuous data, on the other hand, is measured using a continuous scale, like time to failure, length, weight, diameter, or even temperature. Continuous data is also known as variable or quantitative because it's a measure of a quantity of something. Continuous data usually requires the use of measurement tools or equipment.
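As an illustration only (the episode itself doesn't include code), here's a minimal Python sketch of the two data types using made-up values: a check-sheet-style defect count with a proportion nonconforming for discrete data, and a handful of measured diameters for continuous data.

```python
# Hypothetical illustration: discrete (attribute) vs. continuous (variable) data.

# Discrete data: defect counts per part from a check sheet (made-up values).
defects_per_part = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0]
nonconforming = sum(1 for d in defects_per_part if d > 0)
proportion_nonconforming = nonconforming / len(defects_per_part)
print(f"Proportion nonconforming: {proportion_nonconforming:.2f}")  # 0.30

# Continuous data: measured shaft diameters in millimeters (made-up values).
diameters_mm = [10.02, 9.98, 10.01, 10.05, 9.97, 10.00]
mean_diameter = sum(diameters_mm) / len(diameters_mm)
print(f"Mean diameter: {mean_diameter:.3f} mm")
```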

When we're collecting data to measure the effectiveness of the controls that we put in place to control a risk or a failure, we need to take a closer look at our control and its purpose. Are we preventing something or detecting it? That might determine when we're going to be measuring our control. What's the criticality associated with that control? Is it really severe, or is it high? That could affect the precision and the confidence that we want in the results. Discrete data is usually associated with low-precision measurements: think of visual standards and visual inspections for the quality of a product. Visual quality standards are pictures or diagrams that explain to all users and all inspectors what's acceptable and what's not. Anything that is qualitative is usually a low-precision measurement. Because of that, it also requires a higher sample size so that we can achieve the desired confidence and statistical significance when we're analyzing the data. On the other hand, continuous data is considered high precision: we are using equipment and tools to measure something about our product. Because of its high-precision nature, we usually need fewer samples in order to claim the confidence and statistical significance that we'd want when we're analyzing the data. No matter what we're measuring, whether it's discrete or continuous, we need to ensure that our test methods are validated and that any gauge R&R studies are performed.
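To make the sample-size point concrete, here's a hedged sketch using the widely used zero-failure (success-run) sample-size formula for attribute, pass/fail data; the confidence and reliability values are illustrative and aren't prescribed by the episode. With continuous (variables) data and an assumed distribution, the same kind of confidence statement can typically be supported with far fewer samples.

```python
import math

# Zero-failure (success-run) sample size for attribute (pass/fail) data:
#   n = ln(1 - confidence) / ln(reliability)
# Illustrative values only.
confidence = 0.90   # desired confidence in the result
reliability = 0.95  # reliability (proportion conforming) to demonstrate

n_attribute = math.ceil(math.log(1 - confidence) / math.log(reliability))
print(f"Pass/fail samples needed with zero failures allowed: {n_attribute}")  # 45
```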

Now, those are some of the things we can start to think about as we're designing measures for our controls. But now let's look forward a little bit into how we want to analyze the different data types. To visualize discrete data, we can plot it on a bar chart. A bar chart has categories on the x-axis and frequency on the y-axis. A Pareto chart uses a bar chart. Our discrete data can be further analyzed using a probability mass function, or PMF. The output of this is a probability at a specific value. Binomial and Poisson distributions are common with discrete data. For continuous data, we can visualize it using a histogram. A histogram is different from a bar chart: a bar chart has categories, while a histogram has intervals of data. We can further analyze our continuous data with a probability density function, or PDF, to get the probability of an outcome: we calculate the area under the curve between two points. If you remember calculus and integrals, that's the method that we use for continuous data. Normal, Weibull, lognormal, and exponential are types of distributions used with continuous data. Whether we're looking at discrete or continuous data, both can use a cumulative distribution function, or CDF. It plots a cumulative probability from zero to one along the y-axis for any given value along the x-axis.
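As a sketch of these analysis tools (assuming SciPy is available; all parameters are hypothetical), the PMF returns a probability at a specific value for discrete data, the PDF is integrated between two points for continuous data (equivalent to differencing the CDF), and both data types have a CDF.

```python
from scipy import stats

# Discrete: the probability mass function (PMF) gives a probability at a specific value.
# Hypothetical example: exactly 2 defectives in a sample of 20 at a 5% defective rate.
p_exactly_2 = stats.binom.pmf(k=2, n=20, p=0.05)
print(f"P(exactly 2 defectives) = {p_exactly_2:.3f}")

# Continuous: the probability density function (PDF) is integrated between two points,
# which is the same as taking the difference of the cumulative distribution function (CDF).
# Hypothetical example: diameter ~ Normal(10.00 mm, 0.05 mm); P(9.9 mm <= x <= 10.1 mm).
diameter = stats.norm(loc=10.0, scale=0.05)
p_between = diameter.cdf(10.1) - diameter.cdf(9.9)
print(f"P(9.9 mm <= diameter <= 10.1 mm) = {p_between:.4f}")

# Both data types have a CDF: cumulative probability from 0 to 1 at any given value.
print(f"Binomial CDF at 2: {stats.binom.cdf(2, n=20, p=0.05):.3f}")
print(f"Normal CDF at 10.0 mm: {diameter.cdf(10.0):.3f}")
```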

Something to watch for: if our discrete data happens to be numeric, it would not be appropriate for us to treat it as continuous data. For example, say we have a survey scale from one to five. We would not want to report the analysis as a mean and standard deviation. We could report it as 13% of respondents answered one, 20% of respondents answered two, and so on.
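A small illustrative sketch (made-up one-to-five survey responses): report the proportion of respondents per rating category rather than a mean and standard deviation.

```python
from collections import Counter

# Hypothetical responses on a 1-to-5 survey scale (ordinal: discrete data with digits).
responses = [1, 2, 2, 4, 5, 3, 2, 4, 4, 5, 3, 3, 4, 5, 1]

# Summarize as a proportion per category instead of a mean and standard deviation.
counts = Counter(responses)
total = len(responses)
for rating in sorted(counts):
    print(f"Rated {rating}: {counts[rating] / total:.0%} of respondents")
```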

So, what is today's insight to action? And what can you do with what we've been talking about today? When we're creating design specifications, we're thinking about how they link back to our controls. And we think forward to how someone is going to measure them. No matter which method we choose to collect data, we need to verify that the controls are judged or measured consistently, that equipment and tools are capable at the correct level of significance, and, if it's a qualitative measure, that different people will come to the same conclusion.

If you like the content in this episode, visit qualityduringdesign.com, where you can subscribe to the weekly newsletter to keep in touch. This has been a production of Deeney Enterprises. Thanks for listening!

 

Filed Under: Quality during Design
