Perils of Assuming a Model
Abstract
Chris and Fred discuss some of the issues you face when selecting a (potentially incorrect) model, and how much uncertainty still remains once you have. Hate statistics … but need a nice and easy introduction to this sort of thing?
Key Points
Join Chris and Fred as they discuss some of the issues and problems we face when we select a model.
Topics include:
- Confidence is a measure of you. Look at the video below! It shows what happens to our confidence in a (Weibull) probability distribution as we get more data: the contour lines around the ‘bell’ curve get much ‘tighter’. This is the statistical effect more data can have on your confidence that you understand the nature of the underlying probability distribution. (A minimal code sketch of this effect appears after this list.)
- But you can be falsely confident as well. What happens if we assume the wrong model? Let’s do exactly the same thing we did above, but instead of using a curve that aligns with the density of the data points, use a clearly incorrect model — in this case, the Exponential distribution. (See the second sketch after this list.)
- And this podcast is all about confidence … and how it affects everything from what you measure and test to what you specify and design into your system.
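The episode demonstrates the first point visually. As a rough stand-in, here is a minimal Python sketch (not from the episode; the true Weibull parameters, sample sizes, and the `bootstrap_spread` helper are all hypothetical) showing how the scatter in fitted Weibull parameters shrinks as you collect more data — the code analogue of the contour lines tightening.

```python
# Sketch only: synthetic Weibull data and a simple bootstrap, assuming
# numpy and scipy are available. Parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
TRUE_SHAPE, TRUE_SCALE = 2.0, 100.0  # hypothetical "true" Weibull parameters

def bootstrap_spread(n_samples, n_boot=200):
    """Fit a Weibull to n_samples failure times, then bootstrap the fit
    to see how much the estimated parameters scatter."""
    data = stats.weibull_min.rvs(TRUE_SHAPE, scale=TRUE_SCALE,
                                 size=n_samples, random_state=rng)
    shapes, scales = [], []
    for _ in range(n_boot):
        resample = rng.choice(data, size=n_samples, replace=True)
        c, loc, scale = stats.weibull_min.fit(resample, floc=0)
        shapes.append(c)
        scales.append(scale)
    return np.std(shapes), np.std(scales)

# More data -> tighter spread in the fitted parameters.
for n in (10, 50, 500):
    sd_shape, sd_scale = bootstrap_spread(n)
    print(f"n={n:4d}  spread in shape ~ {sd_shape:.3f}  spread in scale ~ {sd_scale:.2f}")
```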
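For the second point, here is a similarly hedged sketch (again not the episode's own example) that fits both the correct Weibull family and the wrong Exponential family to the same synthetic data. The Exponential fit converges happily, but its B10 life (10th-percentile life) and AIC reveal how far off a confidently-fitted wrong model can be.

```python
# Sketch only: compare a correct (Weibull) and incorrect (Exponential)
# model fitted to the same synthetic failure data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = stats.weibull_min.rvs(2.0, scale=100.0, size=200, random_state=rng)

# Correct family: Weibull (location fixed at zero, 2 fitted parameters)
c, _, scale_w = stats.weibull_min.fit(data, floc=0)
# Wrong family: Exponential (location fixed at zero, 1 fitted parameter)
_, scale_e = stats.expon.fit(data, floc=0)

for name, dist, k in [("Weibull", stats.weibull_min(c, scale=scale_w), 2),
                      ("Exponential", stats.expon(scale=scale_e), 1)]:
    aic = 2 * k - 2 * np.sum(dist.logpdf(data))  # AIC = 2k - 2 log-likelihood
    print(f"{name:12s} B10 life ~ {dist.ppf(0.10):7.1f}   AIC ~ {aic:8.1f}")
```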
Enjoy an episode of Speaking of Reliability, where you can join friends as they discuss reliability topics. Join us as we discuss topics ranging from design for reliability techniques to field data analysis approaches.