
by Dianna Deeney

QDD 096 After the ‘Storm: Compare and Prioritize Ideas

We’re in the 5th episode of our series about generating ideas with our team toward action. The first two episodes were all about idea generation, the 3rd was about grouping and exploring ideas, and the 4th was about screening ideas. Now we’ll look at ways to compare ideas.

Our scenario is the same: we’re just after brainstorming, at the point where we have many ideas and no next steps.

Let’s compare ideas with our team so we can move toward action. We explore these Quality Tools and how to use them after a brainstorming or other idea-generating team activity:

  • paired comparison
  • prioritization matrix
  • DMRCS



Reminders when evaluating ideas with a team

We need to Mind our Mindset

Recognize that it’s difficult to turn the ideas from a brainstorming activity into actions for next steps.

We’re handling ideas systematically with our team to get the maximum benefit from our creative phase.

We want to resist the itch for a quick decision on the best idea; rushing would undermine our efforts toward creativity and innovative ideas.

We aren’t looking to eliminate ideas. We’re looking to develop them into the best solution we think there could be.

 

Yes, we approach these activities in the spirit of developing creative ideas. We say things like, “That’s a great idea; what can we do to make it work?” or “What is it about this idea we can use?”

No, we don’t want to simply eliminate ideas. We try to avoid jumping first to statements like, “That’s a great idea, but here’s why it won’t work.”

Discussing Ideas

We’d like consensus on a clear option, which is that place where everyone supports the decision, even if it wasn’t their first choice.

We discuss in order to clarify ideas. If an idea isn’t clear, we make sure everyone understands the information behind it.

We don’t need to pressure anyone to change votes, but we do need to ensure we’re all voting on the same idea, or the same understanding of an idea.


Compare Ideas

Paired Comparison

We can compare ideas pair-wise against each other, applying a weight to record how much better we think one idea is than the other. The sketch below shows how the tally works.
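
To make the mechanics concrete, here’s a minimal sketch in Python. The idea names and pairing results are illustrative assumptions; the 1–3 weights follow the minimal/moderate/major scale described in the episode transcript below.

```python
# Minimal paired-comparison tally (idea names and results are illustrative).
# Weights follow a three-level scale: minimal = 1, moderate = 2, major = 3.
from itertools import combinations

ideas = ["Alpha", "Bravo", "Charlie"]

# For each pair, the team records which idea won and by how much.
pair_results = {
    ("Alpha", "Bravo"): ("Bravo", 2),    # Bravo is moderately better
    ("Alpha", "Charlie"): ("Alpha", 1),  # Alpha is minimally better
    ("Bravo", "Charlie"): ("Bravo", 3),  # Bravo is better in a major way
}

# Each winning idea gets its weight added to its running score.
scores = {idea: 0 for idea in ideas}
for pair in combinations(ideas, 2):
    winner, weight = pair_results[pair]
    scores[winner] += weight

# Rank ideas by total score, highest first.
for idea, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{idea}: {score}")
# Bravo: 5, Alpha: 1, Charlie: 0
```

The list of ideas comes out ranked in order of team preference, with Bravo’s total of 2 + 3 = 5 matching the worked example in the transcript.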

 

Prioritization Matrix

Have a lot of criteria and a lot of choices? Consider iterating through paired comparisons to get to a weighted ranking of ideas based on all the criteria (see the sketch after these steps):

  1. Weight Criteria
  2. Compare all ideas against each criterion
  3. Score and rank each idea considering the weights
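
Here’s a minimal sketch of that weighting-and-scoring arithmetic in Python; the criteria, weights, and scores are illustrative assumptions, not values from the episode.

```python
# Minimal prioritization-matrix sketch (criteria, weights, and scores
# are illustrative assumptions).

# Step 1: weight the criteria, e.g., from a paired comparison of the criteria.
criteria_weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}

# Step 2: compare all ideas against each criterion (higher is better).
idea_scores = {
    "Alpha": {"cost": 3, "speed": 1, "risk": 2},
    "Bravo": {"cost": 1, "speed": 3, "risk": 3},
}

# Step 3: score each idea as the weighted sum across criteria, then rank.
totals = {
    idea: sum(criteria_weights[c] * s for c, s in scores.items())
    for idea, scores in idea_scores.items()
}
for idea, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{idea}: {total:.2f}")
# Alpha: 2.20, Bravo: 2.00
```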

 

DMRCS

We can also take a disciplined approach with DMRCS.

  • Define
  • Measure
  • Reduce
  • Combine
  • Select

 


Other Quality during Design podcast episodes you might like:

Try this Method to Help with Complex Decisions (DMRCS)


Learn more about a prioritization matrix through ASQ:

What is a Decision Matrix? Pugh, Problem, or Selection Grid | ASQ


Previous episodes in our series about generating ideas with our team toward action:

Episode 1: Ways to Gather Ideas with a Team

Episode 2: Product Design with Brainstorming, with Emily Haidemenos (A Chat with Cross Functional Experts)

Episode 3: After the ‘Storm: Group and Explore Ideas

Episode 4: After the ‘Storm: Pareto Voting and Screening Methods




Episode Transcript

This is our fifth episode in our series about generating ideas with our team toward action. The series started at the beginning of 2023; in the first two episodes, we generated ideas and discussed different ways that we can approach our problem or our goal in order to generate ideas. We’re now at the point where we have a lot of ideas and we can’t really define next steps. We have to narrow down our choice in order to take action. We have talked about ways to screen and select ideas using quality tools: affinity diagrams, two-by-two charts, and multivoting. Today we’re going to talk about paired comparison. More about paired comparison after this brief introduction.

Hello and welcome to Quality during Design, the place to use quality thinking to create products others love, for less. Each week we talk about ways to use quality during design, engineering, and product development. My name is Dianna Deeney. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join us. Visit qualityduringdesign.com.

Do you know what 12 things you should have before a design concept makes it to the engineering drawing board, where you’re setting specifications? I’ve got a free checklist for you, and you can do some assessments of your own. Where do you stack up with the checklist? You can log into a learning portal to access the checklist and an introduction to more information about how to get those 12 things. To get this free information, just sign up at qualityduringdesign.com. On the homepage, there’s a link in the middle of the page. Just click it and say, “I want it.”

Welcome back. We’re using quality tools as visual aids to do team-based activities, to take a creative phase of teamwork and move it toward decisions where we can take next steps. We’re not really looking to get a quick decision and just eliminate a bunch of ideas. Instead, we’re handling ideas systematically so that we can get the most benefit from our creative phase. What is it about this idea we can use? What can we do to make this idea work? Here’s our scenario for this episode. We have done a team brainstorming activity or idea-generation activity, and we’ve already taken some steps to prioritize or group the ideas or reduce our list. With some projects, there are a lot of ideas, so we need to group them and further explore them before we can decide what actions to take. In that case, we may have used an affinity diagram team sorting method, and if we needed to further explore ideas, we used a fishbone or a tree diagram.

All of these are visual quality tools. If our team needed to think through ideas together, then we likely used a list reduction activity to reduce the number of ideas and combine like ideas. There are times we’ll do a brainstorming session, group ideas in a two-by-two chart, like urgent versus important or value versus effort, and then decide which action to take, and then we’re done. We move on with actions.

And sometimes, still, there are too many good ideas, so we use the brain trust of the team to vote on what they think are the top priorities. We use multivoting to narrow the list of choices and further discuss options. Now what? We may still need to make a final decision. One way we can do this is to use paired comparisons.

In paired comparisons, we’re going to compare ideas against each other: idea versus idea, in pairs or sets. This does something different from the other methods we’ve used. In the two-by-two chart, we compared ideas against criteria. In multivoting, we compared ideas against a goal. In paired comparisons, we’re comparing ideas against each other, and we’re ranking ideas based on individual preferences rather than overall criteria.

Here’s what I mean. For each pair of ideas, we’re going to make a comparison. We’re going to use a three-level comparison like major, moderate, or minimal, or we could use a two-level comparison. We’re assessing each idea against the other in this comparison, so we would say that this idea is moderately better than the other idea, or this idea is minimally better than the other idea. We’re not only choosing which idea is the better of the two, but we’re also assigning a weight to it: how much better is this idea than the other idea? The better the idea is, the higher the value we assign; each idea that wins a comparison gets that value added to its rank. Here’s an example. We compare Idea Alpha against Idea Bravo, and we think Bravo is moderately better, so we assign Bravo a moderately better ranking and give it a value of two. Then we compare Idea Charlie against Idea Bravo, and we think Bravo is better in a major way, so we assign Idea Bravo a value of three. Now Idea Bravo has a total ranking value of two plus three equals five. In the end, we’ll have our list of ideas ranked in order of preference. I’ll post an example of paired comparison on the podcast blog; the example compares five ideas, and it’ll be easier to see how this all fits together and how paired comparison works.

We can compare ideas systematically with our team. We could also combine this activity with multivoting. Either way, we are comparing ideas and giving them a rank and priority based on which one is better and by how much. Our goal is team consensus, which again is that place where everyone supports the decision even if it wasn’t their first choice. We’re going to look for a clear option, so we need to ensure that we’re all voting on the same ideas, or the same understanding of the ideas. In last week’s episode on multivoting, we talked about the importance of getting the decider involved, and they can get involved in the paired comparison process, too. Another way to get the decider involved is to have the best idea, and then an alternative idea, ready with the pros and cons of each. The paired comparison approach will give you that listing of ideas in order of rank.

What if the decision that we face is really complicated or expensive? Which supplier should we use, for example? Or we have options for how we want to run our experiment: which options should we choose to get the maximum benefit out of the activity?

When the decision has competing objectives and we have multiple criteria that we can measure our ideas against, then we want to do something different than a binary yes/no or pass/fail. We can use a prioritization matrix, which adds weighted value to both the criteria and the options. To do this, we have a goal that is clearly defined. We have criteria that the decision must meet, and then we do a paired comparison of the criteria first to give them a weighted value. Then we compare the ideas against the criteria using those weighted values. We can do this as a team, or we can do this like we do affinity diagrams and multivoting: individuals give their weights, they’re tallied after each step, and big differences are discussed for understanding. This method requires more work and a lot more matrices. Not only are we comparing criteria against each other, we’re comparing the ideas against each individual criterion, and then we’re combining it all in the end to give us a ranking of the ideas. But when we have a decision that is complicated or where there’s a lot at stake, these extra steps could mean a better decision.

Another, more robust method is DMRCS, short for define, measure, reduce, combine, and select. It’s modeled after the Six Sigma DMAIC. DMRCS gives us a structured understanding of competitive choices. This method may add more rigor to our problem statement. It also helps us slow down to consider how we’re going to measure our options, and then to consider whether the measures we choose link back to what we’re trying to accomplish.

The analysis is more rigorous, too. With appropriate measures, we can start to analyze our ideas graphically for comparisons, with plots like scatter plots and mixture plots. We get into more detail about DMRCS in another podcast episode, which I’ll link to in the blog. In that episode, I quoted a statistician, and I think it applies at the end of this episode, too. Dr. Anderson-Cook warned about some assumptions when using any of these decision-making models. She says, “Sometimes even the best decision-making can yield an undesirable outcome in the post-decision assessment. Try to separate the outcome from the process of making the decision. Sometimes you’re just unlucky; that should not negate the fact that you followed a sound process to make the decision. The opposite is also possible. You can see a great outcome despite a non-thoughtful process, but you shouldn’t always count on this working out.”

In other words, using a team to generate ideas and then systematically following through on those ideas with the team in order to make a decision or come to a conclusion can maximize all that creativity that our team has. If we rush to a decision about which option to take without thoroughly exploring the ideas that our team created, we’re cutting the creation process short, and we may end up with an idea that was quick to come to but may not be the best or most creative idea we could have come up with.

What’s today’s insight to action? Paired comparison is a way that we can systematically compare ideas against each other and against multiple criteria. And we can use a DMRCS process, which allows us to have a structured understanding of many competitive choices. By using some of these quality tools on the back end of a brainstorming session, where we’re doing quiet brainstorming and individually coming up with ideas, we’re giving ourselves the best chance to come up with the best idea: using frameworks to share, discuss, prioritize, and decide on the best way forward. Listen in to next week’s episode, where I’ll be talking about design sprints. A lot of what we’ve been covering in this mini-series folds nicely into that. I look forward to joining you then.

If you like this topic or the content in this episode, there’s much more on our website, including information about how to join our signature coaching program, the Quality during Design Journey. Consistency is important, so subscribe to the weekly newsletter. This has been a production of Deeney Enterprises. Thanks for listening.

 

Filed Under: Quality during Design


