by Dianna Deeney

QDD 031 5 Aspects of Good Reliability Goals and Requirements

Good reliability requirements are going to drive our design decisions relating to the concept, the components, the materials, and more. So the time to start defining reliability requirements is early in the design process. But what makes a well-defined reliability requirement? There are five aspects it should cover: do you know what they are?

We’ll describe what makes a good reliability requirement and give examples of common (but not good) requirements.

 

View the Episode Transcript

 

One of the first steps in defining any good requirements is characterizing our product’s use and operating needs. This means knowing who is going to be using our product, in what way, and in what type of environment. This is one of the reasons why I promote an early usability engineering cycle. This can be done in tandem with a technical assessment of any new design. Both the usability engineering and technical assessment cycles can be part of the concept evaluation phase. 

The 5 aspects of a good reliability requirement:

  1. measurement of time
  2. reliability at specific points in time
  3. a desired confidence level
  4. a definition of failure
  5. the operating and environmental conditions

Our example from this podcast: 99% reliability in system start-up to at least 300 rpm is required after 600 on-off cycles of operation with 95% confidence when operating in an environment with a temperature range of –15℃ to 40℃.

How does this example fit within our 5 aspects of a good reliability requirement? (See the sketch after this list for one way to record these aspects.)

  1. cycles
  2. 99% reliability after 600 on-off cycles
  3. 95% confidence
  4. failure defined as system start-up of less than 300 rpm
  5. temperature range of –15℃ to 40℃ (assuming no other environmental or user factors)
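
One way to keep these five aspects together as the requirement evolves is to record them in a single structured object. Below is a minimal Python sketch of the example requirement above; the class and field names are my own, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class ReliabilityRequirement:
        """The 5 aspects of a reliability requirement (illustrative structure)."""
        time_measure: str          # 1) measurement of time (cycles, hours, km, batches, ...)
        reliability_target: float  # 2) reliability at a specific point in time
        at_time: float             #    ...the point in time, in units of time_measure
        confidence: float          # 3) desired confidence level
        failure_definition: str    # 4) what counts as a failure
        conditions: str            # 5) operating and environmental conditions

    # The example requirement from this episode
    startup_requirement = ReliabilityRequirement(
        time_measure="on-off cycles",
        reliability_target=0.99,
        at_time=600,
        confidence=0.95,
        failure_definition="system start-up below 300 rpm",
        conditions="ambient temperature of -15 C to 40 C",
    )
    print(startup_requirement)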

Which one of these aspects of reliability requirements have you struggled with in the past? Leave me a comment on this blog. 

Citations

noMTBF.com: a site dedicated to the eradication of the misuse of MTBF, with posts and pages of examples explaining the pitfalls of this type of reliability requirement.

Reliability Requirements and Specifications is an article that steps through the mathematical implications of different types of reliability requirements and explains why they are good or not good. [“Reliability Requirements and Specifications.” Reliability HotWire, iss. 80, Oct. 2007. Reliasoft Corporation. www.weibull.com/hotwire/issue80/relbasics80.htm. Accessed 29 Sep 2021.]

Episode Transcript

Good reliability requirements are going to drive our design decisions relating to the concept, the components, the materials, and other stuff. So, the moment to start defining reliability requirements is early in the design process. But, what makes a well-defined reliability requirement? There are five aspects it should cover: do you know what they are? Let’s review, after this brief introduction.

Hello, and welcome to Quality during Design, the place to use quality thinking to create products others love, for less. My name is Dianna. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in, and then join the conversation at QualityDuringDesign.com.

Today, we’ll describe what makes a good reliability requirement. In general, one of the first steps in defining any good requirements is characterizing our product’s use and operating needs. This means knowing who is going to be using our product, in what way, and in what type of environment. This is one of the reasons why I promote an early usability engineering cycle. This can be done in tandem with a technical assessment of any new design. Both the usability engineering and technical assessment cycles can be part of the concept evaluation phase.

Think of the product’s use and operating needs in terms of the stresses that our product is going to see once it’s released to the market. These stresses are introduced through the users themselves, and the environment that our product is exposed to.

This could include things like temperature and humidity. It’s knowing things like: our product is supposed to function with vibration in an area with exposure to extreme temperature cycles.

It could also be that our product is cycled on and off multiple times; my car has an auto-off function. It turns off the engine while I’m stopped at a traffic light. Its ignition switch is cycling on and off multiple times during my drives, so that switch for the engine is going to have to be more durable, or more reliable, than the one in my 20-year-old family van that doesn’t have that feature.

Is our product expected to have routine maintenance? If so, then we can start to define what type of maintenance we’re expecting, or, really, what our customers could be expecting to have to do for maintenance of our product.

And we can consider our users, too. Are we designing a product where a user is expected to twist a handle on our design? Is our product intended to help users that lack strength for some reason (perhaps they are recovering from a surgery or are otherwise injured or impaired)? Or are our users professional mechanics who are used to using manual hand tools? The reliability of our twist mechanism design may depend on our user. So, users may be a factor in setting the reliability requirements.

Once we understand our users, our use environment, and what our design is supposed to do, then we can start defining good reliability requirements. We can start with what we know and adjust as we move through our design process and learn more. But the more we can settle early on, the better.

Good requirements are measurable. What do good reliability requirements include? They should include 5 aspects. Let’s talk about each of these.

1) Measurement of time: Doesn’t have to be clock or calendar time. Could be cycles, distance, or number of batches. We use the measure that is associated with the aging of the product. Let’s assume we’re designing a product that gets switched on and off, measured in cycles. Our reliability requirement will include a measurement of on-off cycles.

2) Reliability at specific points in time: It may help us to think of reliability as just one minus the probability of failure. And, it can be multi-step, too, including different reliabilities at different points in time. Continuing our example of our product, with a reliability measured in cycles: 99% reliability is required after 600 on-off cycles of operation. If we wanted to make this a multi-step reliability requirement, we could add: 95% reliability is required at the end of 1,000 on-off cycles.
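
As a quick numerical illustration of reliability at specific points in time (this sketch and its Weibull parameters are mine, not from the episode), we can plug 600 and 1,000 cycles into a life model and read off the reliability at each point:

    import math

    def weibull_reliability(t, shape, scale):
        """Reliability function of a Weibull life model: R(t) = exp(-(t/scale)**shape)."""
        return math.exp(-((t / scale) ** shape))

    # Hypothetical parameters, chosen only to illustrate the idea (cycles to failure)
    shape, scale = 2.0, 6000.0

    for cycles in (600, 1000):
        print(f"R({cycles} cycles) = {weibull_reliability(cycles, shape, scale):.3f}")

With these made-up parameters the model gives about 99% reliability at 600 cycles and about 97% at 1,000 cycles, so it would meet both steps of the example requirement.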

3) A desired confidence level: If we don’t specify a confidence level, then we can assume a 50% confidence level. I’ve never had a team define their confidence level in anything at a 50% level. Who wants to be ½ confident? We can state our desired confidence level as part of our reliability requirement. The confidence level that we choose can be dependent upon customer perception, the effect on the overall function of our product, or how serious it is if our product doesn’t work (to name a few). Why do we add a confidence level? Because there’s variation in everything, both in how we make product and how we measure it. Setting a confidence level accounts for the variability we’re going to see in our test data. Continuing with our example product: 99% reliability is required after 600 on-off cycles of operation with 95% confidence.
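
One common place that confidence level shows up is in planning a demonstration test. For a zero-failure (success-run) plan, the number of units to test is n = ln(1 - C) / ln(R), rounded up. A minimal sketch for the 99% reliability, 95% confidence target in our example:

    import math

    def success_run_sample_size(reliability, confidence):
        """Units to test (zero failures allowed) to demonstrate `reliability` at `confidence`."""
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    n = success_run_sample_size(reliability=0.99, confidence=0.95)
    print(n)  # 299 units, each run for 600 on-off cycles with no failures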

4) A definition of failure. What is considered a failure, or what is the function of that part supposed to be? Systems degrade, but at what functional point is performance no longer acceptable? For our example, our measurement is on-off cycles. What is considered a failure? Is it that it no longer turns on at all? Or, is it that it needs to meet a specific revolutions per minute? Maybe our product will no longer work at all if it doesn’t spin fast enough. Continuing our example, our reliability requirement has evolved to: 99% reliability in system start-up to at least 300 rpm is required after 600 on-off cycles of operation with 95% confidence.
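
A failure definition like this can be written down as a simple pass/fail rule and applied to test data. A minimal sketch using the 300 rpm start-up threshold (the measured values are invented):

    STARTUP_RPM_LIMIT = 300  # from the failure definition: must start up to at least 300 rpm

    def is_failure(startup_rpm):
        """A unit fails if its start-up speed does not reach the 300 rpm limit."""
        return startup_rpm < STARTUP_RPM_LIMIT

    # Hypothetical start-up measurements after 600 on-off cycles
    measured_rpm = [412, 388, 295, 406, 310]
    failures = [rpm for rpm in measured_rpm if is_failure(rpm)]
    print(f"{len(failures)} failure(s) out of {len(measured_rpm)} units tested: {failures}")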

There is one last thing we need to include. We need to clearly state the

5) Operating and environmental conditions: This involves more of our usability engineering information that we talked about earlier in this episode: what are the external stress factors? What is our preventive maintenance? What is the experience level of our product’s users and operators? This can be added to fully describe our reliability requirement. There are some different ways we could state this. We could state it as an average value of stress, or a high-stress value that corresponds with most of our users; we could use a high-low limit; we could describe it using profiles of two or more stresses; or we could describe it as a distribution. To describe it with a distribution, we could say something like, “when operating in an environment that follows a normal distribution with a mean of 45℃ and a standard deviation of 10℃.”
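
Stating the environment as a distribution lets us do useful arithmetic with it later. For example (the 60℃ design limit below is hypothetical), with a temperature that follows a normal distribution with a mean of 45℃ and a standard deviation of 10℃, we can estimate how much field use falls above that limit:

    from statistics import NormalDist

    # Environmental stress stated as a distribution, per the example wording above
    ambient_temperature = NormalDist(mu=45, sigma=10)  # degrees C

    design_limit_c = 60  # hypothetical design limit
    fraction_above = 1 - ambient_temperature.cdf(design_limit_c)
    print(f"About {fraction_above:.1%} of field use is expected above {design_limit_c} C")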

Building on the example we’ve developed throughout these 5 steps, our final reliability requirement could be: 99% reliability in system start-up to at least 300 rpm is required after 600 on-off cycles of operation with 95% confidence when operating in an environment with a temperature range of –15℃ to 40℃. That is a long-worded requirement, but it addresses the 5 areas that we should cover when defining reliability requirements.

Now that we’ve set some expectations, what are NOT good reliability requirements?

A statement like “our product must meet or exceed customer expectations” is not a good reliability requirement. Of course we want our customers to be happy. But this type of requirement lacks any measurable targets and doesn’t include any of the 5 aspects we just talked about. We need to take this very broad idea and get specific about our product. Maybe we’ll start with understanding our customer expectations, which we can then translate into numerical measures.

MTTF, MTBF, or any other variation of a mean time to ‘something’ is also not a good reliability requirement. This type of requirement really bugs reliability engineers, even though we may see it commonly used. Mean time to failure (or between failures) is just that: a MEAN, an average. We can have a product where one unit fails at 4 cycles and another at 8 cycles, giving an MTTF of 6 cycles. We can also have a second product where one unit fails at 2 cycles and another at 10 cycles, giving the same MTTF: 6 cycles. But we wouldn’t expect these two different products, with the same MTTF, to perform the same in the field [likely not!]. Also, using MTTF as part of a specification assumes a constant failure rate. If we’re using MTTF, then we’ll need to demonstrate (or verify) through test that the product does follow a constant failure rate. MTTF and MTBF do not adequately describe the failure rate function of a product. On this podcast blog, I’ll include links to articles and websites by reliability engineers that beg us to stop using this metric. They also include a breakdown of the mathematics, so check them out.
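
Here’s a quick numerical version of that MTTF pitfall, using the failure times from the example above: both products have the same MTTF, but very different scatter and very different early-life behavior.

    from statistics import mean, stdev

    product_a = [4, 8]   # failure times in cycles
    product_b = [2, 10]  # failure times in cycles

    for name, failure_times in (("A", product_a), ("B", product_b)):
        early = sum(1 for t in failure_times if t <= 2)  # failures by 2 cycles
        print(f"Product {name}: MTTF = {mean(failure_times)} cycles, "
              f"std dev = {stdev(failure_times):.1f} cycles, "
              f"failures by 2 cycles = {early}")

Same MTTF of 6 cycles, yet product B has already lost half its units by 2 cycles while product A hasn’t lost any.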

To end on an up-note, we talked about 5 aspects of a good reliability requirement: measurement of time, reliability at specific points in time, a desired confidence level, a definition of failure, and the operating and environmental conditions. We can certainly start to define these during the concept evaluation phase when we start both the usability engineering and technical assessment cycles at the front-end of our design process.

What’s today’s insight to action? Take a look at the requirements for your new design. Do your reliability requirements include these 5 aspects? If not, can you fill in the blanks with what you know? If you don’t know the answers, those might be gaps you should start investigating. Defining good reliability requirements helps ensure that your product is one your customers love, helps direct your component and design decisions, and will most certainly help you with the testing of your product.

Please visit this podcast blog and others at qualityduringdesign.com. Subscribe to the weekly newsletter to keep in touch. If you like this podcast or have a suggestion for an upcoming episode, let me know. You can find me at qualityduringdesign.com, on LinkedIn, or you could leave me a voicemail at 484-341-0238. This has been a production of Denney Enterprises. Thanks for listening!

Filed Under: Quality during Design, The Reliability FM network
