
Selecting Tests
Abstract
Dianna and Fred discuss the critical process of selecting appropriate reliability tests, maximizing value while avoiding the common pitfalls of over-testing or testing the wrong parameters.
Your Reliability Engineering Professional Development Site
Host of Quality during Design podcast and co-host of the Speaking of Reliability podcast.
This author's archive lists contributions of articles and episodes.
Dianna is a senior-level Quality Professional and an experienced engineer. She has worked over 20 years in product manufacturing and design and is active in learning about the latest techniques in business.
Dianna promotes strategic use of quality tools and techniques throughout the design process.
by Dianna Deeney


In this episode, we analyze Tesla’s battery development as a case study. We delve into their use of five clear-cut constraint categories that define failure conditions upfront: the Economic filter, Performance filter, Scalability filter, Resource filter, and System filter.
We discuss the challenges engineers face in letting go of projects due to the sunk cost fallacy, where prior investments irrationally influence future choices, leading to the creation of “zombie projects”.
Learn why defining explicit kill criteria before development begins is a vital, often overlooked exercise that saves resources and ensures rational decision-making.
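If it helps to see the idea concretely, here is a minimal sketch of kill criteria written as explicit, testable filters before development starts. The metric names and thresholds are invented for illustration; the episode doesn't prescribe these values.

```python
# Hypothetical kill criteria, one per constraint category. Each filter
# returns True when the project trips that failure condition.
KILL_CRITERIA = {
    "economic":    lambda m: m["cost_per_kwh"] > 100,       # too expensive to build
    "performance": lambda m: m["cycle_life"] < 1000,        # degrades too fast
    "scalability": lambda m: m["units_per_year"] < 50_000,  # can't make it at volume
    "resource":    lambda m: m["engineer_months"] > 120,    # burns too much team time
    "system":      lambda m: not m["fits_existing_pack"],   # doesn't integrate
}

def kill_review(metrics):
    """Return the filters a project trips; an empty list means keep going."""
    return [name for name, tripped in KILL_CRITERIA.items() if tripped(metrics)]

project = {"cost_per_kwh": 85, "cycle_life": 800, "units_per_year": 200_000,
           "engineer_months": 60, "fits_existing_pack": True}
print(kill_review(project))   # ['performance'] -> kill or rescope, no zombie project
```

Because the criteria are written down before any money is spent, the review is a lookup rather than a negotiation, which is exactly how it sidesteps the sunk cost fallacy.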

We turn late-stage design surprises into a strategic plan by assigning explicit confidence levels, stacking evidence, and using the three-dial model of time, cost, and confidence boost. We show how to work backward from a system test to cheaper steps that drive faster, clearer decisions.
• applying the three dials of time, cost, confidence
• sequencing with the work-backwards strategy
• avoiding over-testing, under-testing, and testing the wrong things
• turning confidence into a team communication tool
• practical next steps to build the confidence muscle
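For readers who think in code, here is one way to sketch the stacking idea. Treating each test as an independent chance of catching the problem is an illustrative assumption, not the episode's exact math, and the tests, costs, and numbers are made up.

```python
# Hypothetical sketch: stack imperfect evidence until it clears a
# confidence target, taking the cheaper steps first.

def stacked_confidence(pieces):
    """Combine independent pieces of evidence (each 0-1) into one confidence."""
    miss = 1.0
    for c in pieces:
        miss *= 1.0 - c        # probability that every piece misses the failure
    return 1.0 - miss

# Three dials per candidate test: time, cost, and confidence boost.
candidates = [
    {"test": "bench test",        "weeks": 1, "cost": 500,    "boost": 0.50},
    {"test": "accelerated aging", "weeks": 3, "cost": 4_000,  "boost": 0.85},
    {"test": "full system test",  "weeks": 8, "cost": 20_000, "boost": 0.95},
]

# Work backwards from the expensive system test: take cheaper steps first
# and stop as soon as the stacked confidence clears the target.
target, plan, boosts = 0.90, [], []
for t in sorted(candidates, key=lambda t: t["cost"]):
    plan.append(t["test"])
    boosts.append(t["boost"])
    conf = stacked_confidence(boosts)
    if conf >= target:
        break

print(plan, round(conf, 2))   # the two cheaper tests alone clear the 90% target
```

The point of the sketch is the stopping rule: once the stacked evidence crosses the target, spending more on the big system test buys confidence you no longer need.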

Late-stage design just hit a snag—now comes the moment that separates guesswork from great engineering. We walk through a clear, repeatable method to investigate unexpected failures and make high-impact decisions with confidence. Instead of hunting for a perfect test, we set a confidence target and stack multiple forms of imperfect evidence until we close the gap.
If you’re navigating late-stage product development and want a calm, methodical way to move from 40% to 90% confidence, this framework will help you choose the next best step, allocate limited time and budget, and know when to stop.
Join the Substack for monthly guides, templates, and Q&A where I help you apply these to your specific projects. Visit qualityduringdesign.substack.com.

We break down why risk analyses often become risk theater and replace them with a simple, practical impact vs likelihood matrix that guides action. From quick wins to high-stakes unknowns, we show how to calibrate effort, buy the right learning, and move with confidence.
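As a taste of the idea, here is an illustrative sketch of a matrix that guides action. The 1-3 scoring, cell labels, and suggested actions are assumptions for demonstration, not the template from the series.

```python
# A 2x2 impact vs. likelihood matrix where every cell names an action,
# so scoring a risk immediately tells you what to do about it.
ACTIONS = {
    ("low",  "low"):  "accept and monitor",
    ("low",  "high"): "quick win: fix it now, it's cheap",
    ("high", "low"):  "buy learning: run a cheap test to reduce uncertainty",
    ("high", "high"): "act first: mitigate before proceeding",
}

def triage(impact, likelihood, threshold=2):
    """Score a risk 1-3 on both axes, then read the action off the matrix."""
    cell = ("high" if impact >= threshold else "low",
            "high" if likelihood >= threshold else "low")
    return ACTIONS[cell]

print(triage(impact=3, likelihood=1))   # a high-stakes unknown -> buy learning
```

Notice that the high-impact, low-likelihood cell doesn't ask for mitigation; it asks you to buy learning first, which is the calibration the episode is about.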
Subscribe to get the template and how-to, plus join us on this 3-month series.

If you reach for the nearest “risk” template, it might cause more problems.
There are two very different jobs we ask risk tools to do. In this episode, we talk about how to pick the one that actually moves your project forward.
Along the way, we call out organizational risks that belong in resilience planning, not product FMEAs.

Dianna and Fred discuss the implementation and effectiveness of Quality Management Systems (QMS) and how they impact organizational performance.

Dianna and Fred discuss the impacts and challenges presented by the adoption of artificial intelligence (AI) for customer service.

Big changes, clearer focus, and more ways to learn together. We’re tightening our cadence to two episodes a month and building monthly themes that travel across the podcast, blog, and a new Substack home—so you can go beyond ideas and into practice with tools, Q&A, and live community sessions.
Here’s what’s new and why it matters. The podcast keeps its familiar format, but now each month has a focused theme that carries into Substack deep dives. Subscribers get comprehensive guides, open Q&A weeks where we answer your specific questions in the comments, and a one-hour live chat each month to pressure-test methods on real scenarios. It’s a smarter learning loop: listen, explore, apply—then come back with better questions. You’ll also get access to the strategy vault filled with templates, worksheets, and facilitation guides.
We’re also thrilled to announce the launch of Pierce the Design Fog, a practical playbook for product, engineering, and UX teams who need structure without losing speed or humanity. With models like the concept space and ADEPT framework, you’ll align cross-functional teams, turn insights into actionable design inputs, and make confident calls under uncertainty. There’s a companion card deck—Concept Quest: Design Discovery—that acts like a portable facilitator, with prompts and instructions to guide workshops. Pre-order before October 14, 2025, to enter the card deck giveaway and bring these methods straight into your team’s next session.
Subscribe to the show, check out the Substack at qualityduringdesign.substack.com, and leave a review to help more builders find us.

How do you balance customer wants with project constraints? If your customer-facing teammates are saying, "Our customers want this, that, and the other thing," which wants do you prioritize over others?
Not all features are equal in the eyes of our customers. And not all features are value-added, either.
In this episode, we delve into how to prioritize customer wants using the powerful Kano Model, a tool that maps customer satisfaction against the implementation of product features.
You’ll learn how to differentiate between essential and non-essential features, ensuring that your design truly resonates with your customers. This episode walks through the intricacies of the Kano Model’s two-by-two matrix and the different satisfaction levels represented by various lines and curves.
Too complex? We break it down. Prioritize your features based on their impact on the customer, using their voice. Then, consider how well you want to implement each one in your design using the Kano Model.
Get ready for practical tips and proven strategies to enhance your product’s value while managing cost, time, and design trade-offs. This episode is an introduction to the Kano Model for design.
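If you want to play with the classification itself, here is a minimal sketch that follows the standard Kano evaluation table: each customer answers a functional question ("How do you feel if the feature is present?") and a dysfunctional one ("...if it is absent?"), and the answer pair maps to a category. The feature names and survey responses are invented for illustration.

```python
from collections import Counter

def kano_category(functional, dysfunctional):
    """Classify one answer pair; answers are like/expect/neutral/tolerate/dislike."""
    f, d = functional, dysfunctional
    if f == d and f in ("like", "dislike"):
        return "questionable"   # contradictory answers
    if f == "like" and d == "dislike":
        return "performance"    # more is better, less is worse
    if f == "like":
        return "attractive"     # delighter: presence pleases, absence is tolerated
    if d == "dislike":
        return "must-be"        # taken for granted: absence dissatisfies
    if f == "dislike" or d == "like":
        return "reverse"        # some customers prefer NOT having it
    return "indifferent"

responses = {   # hypothetical (functional, dysfunctional) answers per feature
    "auto-save": [("expect", "dislike"), ("neutral", "dislike")],
    "dark mode": [("like", "neutral"), ("like", "tolerate")],
}
for feature, pairs in responses.items():
    votes = Counter(kano_category(f, d) for f, d in pairs)
    print(feature, "->", votes.most_common(1)[0][0])
    # auto-save -> must-be, dark mode -> attractive
```

The categories are where the prioritization bite comes from: must-be features are non-negotiable, performance features reward investment, and attractive features are where you can differentiate.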


We explore the critical transition from concept development to engineering solutions in product development, highlighting quality tools that bridge the gap between customer needs and technical design inputs.

Dianna and Carl discuss team creativity techniques, especially relating to FMEA.


It’s important to evaluate the customer’s use process during concept development.
Rather than focusing solely on what your product does, understanding how users will interact with it creates opportunities to design more intuitive, enjoyable experiences. By mapping out the steps users take from beginning to end using process flowcharts, development teams gain clarity on inputs, outputs, and the journey between them.
Whether you need to simplify complex steps, compare competitor approaches, or identify critical-to-quality elements, these analytical methods help prioritize design decisions based on what truly matters to users.
The goal is creating products that feel intuitive and natural, preventing those awkward validation testing moments when engineers want to shout, “You’re doing it wrong!” When we evaluate the use process early, we develop products others love while minimizing costly redesigns and user frustration.

Dianna and Carl discuss the relationship between Hazard Analysis (HA) and Failure Mode and Effects Analysis (FMEA).