by Fred Schenkelberg

Body of Knowledge 2009 version

ASQ’s CRE Body of Knowledge


Taken in its entirety from http://prdweb.asq.org/certification/control/reliability-engineer/bok on April 6th, 2016. This is the 2009 update to the BoK.

Reprinted with permission from American Society for Quality ©2008 ASQ, www.asq.org. No further distribution allowed without permission.

The topics in this Body of Knowledge include additional detail in the form of subtext explanations and the cognitive level at which the questions will be written. This information will provide useful guidance for both the Examination Development Committee and the candidates preparing to take the exam. The subtext is not intended to limit the subject matter or be all-inclusive of what might be covered in an exam. It is intended to clarify the type of content to be included in the exam. The descriptor in parentheses at the end of each entry refers to the highest cognitive level at which the topic will be tested. A more comprehensive description of cognitive levels is provided at the end of this document.

I. RELIABILITY MANAGEMENT (18 Questions)

I. A. Strategic management

  1. Benefits of reliability engineering 
    Describe how reliability engineering techniques and methods improve programs, processes, products, systems, and services. (Understand)
  2. Interrelationship of safety, quality, and reliability 
    Define and describe the relationships among safety, reliability, and quality. (Understand)
  3. Role of the reliability function in the organization 
    Describe how reliability techniques can be applied in other functional areas of the organization, such as marketing, engineering, customer/product support, safety and product liability, etc. (Apply)
  4. Reliability in product and process development
    Integrate reliability engineering techniques with other development activities, concurrent engineering, corporate improvement initiatives such as lean and six sigma methodologies, and emerging technologies. (Apply)
  5. Failure consequence and liability management 
    Describe the importance of these concepts in determining reliability acceptance criteria. (Understand)
  6. Warranty management 
    Define and describe warranty terms and conditions, including warranty period, conditions of use, failure criteria, etc., and identify the uses and limitations of warranty data. (Understand)
  7. Customer needs assessment 
    Use various feedback methods (e.g., quality function deployment (QFD), prototyping, beta testing) to determine customer needs in relation to reliability requirements for products and services. (Apply)
  8. Supplier reliability
    Define and describe supplier reliability assessments that can be monitored in support of the overall reliability program. (Understand)

I. B. Reliability program management

  1. Terminology 
    Explain basic reliability terms (e.g., MTTF, MTBF, MTTR, availability, failure rate, reliability, maintainability). (Understand)
  2. Elements of a reliability program 
    Explain how planning, testing, tracking, and using customer needs and requirements are used to develop a reliability program, and identify various drivers of reliability requirements, including market expectations and standards, as well as safety, liability, and regulatory concerns. (Understand)
  3. Types of risk 
    Describe the relationship between reliability and various types of risk, including technical, scheduling, safety, financial, etc. (Understand)
  4. Product lifecycle engineering 
    Describe the impact various lifecycle stages (concept/design, introduction, growth, maturity, decline) have on reliability, and the cost issues (product maintenance, life expectation, software defect phase containment, etc.) associated with those stages. (Understand)
  5. Design evaluation 
    Use validation, verification, and other review techniques to assess the reliability of a product’s design at various lifecycle stages. (Analyze)
  6. Systems engineering and integration
    Describe how these processes are used to create requirements and prioritize design and development activities. (Understand)

I. C. Ethics, safety, and liability

  1. Ethical issues 
    Identify appropriate ethical behaviors for a reliability engineer in various situations. (Evaluate)
  2. Roles and responsibilities 
    Describe the roles and responsibilities of a reliability engineer in relation to product safety and liability. (Understand)
  3. System safety 
    Identify safety-related issues by analyzing customer feedback, design data, field data, and other information. Use risk management tools (e.g., hazard analysis, FMEA, FTA, risk matrix) to identify and prioritize safety concerns, and identify steps that will minimize the misuse of products and processes. (Analyze)

II. PROBABILITY AND STATISTICS FOR RELIABILITY (27 Questions)

II. A. Basic concepts

  1. Statistical terms 
    Define and use terms such as population, parameter, statistic, sample, the central limit theorem, etc., and compute their values. (Apply)
  2. Basic probability concepts 
    Use basic probability concepts (e.g., independence, mutually exclusive, conditional probability) and compute expected values. (Apply)
  3. Discrete and continuous probability distributions
    Compare and contrast various distributions (binomial, Poisson, exponential, Weibull, normal, log-normal, etc.) and their functions (e.g., cumulative distribution functions (CDFs), probability density functions (PDFs), hazard functions), and relate them to the bathtub curve (see the code sketch following this list). (Analyze)
  4. Poisson process models 
    Define and describe homogeneous and non-homogeneous Poisson process models (HPP and NHPP). (Understand)
  5. Non-parametric statistical methods 
    Apply non-parametric statistical methods, including median, Kaplan-Meier, Mann-Whitney, etc., in various situations. (Apply)
  6. Sample size determination
    Use various theories, tables, and formulas to determine appropriate sample sizes for statistical and reliability testing. (Apply)
  7. Statistical process control (SPC) and process capability
    Define and describe SPC and process capability studies (Cp, Cpk, etc.), their control charts, and how they are all related to reliability. (Understand)
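
As a study aid (not part of the ASQ text), the following minimal Python sketch relates the four functions named in item 3 for a two-parameter Weibull distribution. The beta and eta values are illustrative assumptions only; it uses only the standard library.

    import math

    def weibull_functions(t, beta, eta):
        """Return (pdf, cdf, reliability, hazard) for a two-parameter Weibull at time t."""
        reliability = math.exp(-(t / eta) ** beta)                   # R(t) = exp[-(t/eta)^beta]
        cdf = 1.0 - reliability                                      # F(t) = 1 - R(t)
        pdf = (beta / eta) * (t / eta) ** (beta - 1) * reliability   # f(t) = dF/dt
        hazard = pdf / reliability                                   # h(t) = f(t) / R(t)
        return pdf, cdf, reliability, hazard

    # Beta < 1, = 1, and > 1 correspond to the decreasing, constant, and increasing
    # hazard-rate regions of the bathtub curve.
    for beta in (0.5, 1.0, 3.0):
        print(beta, weibull_functions(100.0, beta, eta=500.0))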

II. B. Statistical inference

  1. Point estimates of parameters 
    Obtain point estimates of model parameters using probability plots, maximum likelihood methods, etc. Analyze the efficiency and bias of the estimators. (Evaluate)
  2. Statistical interval estimates 
    Compute confidence intervals, tolerance intervals, etc., and draw conclusions from the results (see the code sketch following this list). (Evaluate)
  3. Hypothesis testing (parametric and non-parametric)
    Apply hypothesis testing for parameters such as means, variance, proportions, and distribution parameters. Interpret significance levels and Type I and Type II errors for accepting/rejecting the null hypothesis. (Evaluate)
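
Not part of the ASQ text: a hedged Python sketch of one common interval estimate from item 2, the chi-square confidence interval on MTBF for an exponential (constant failure rate) model from a time-truncated test. It assumes scipy is available and that at least one failure was observed; the data values are illustrative.

    from scipy.stats import chi2

    def mtbf_confidence_interval(total_time, failures, confidence=0.90):
        """Two-sided chi-square interval for exponential MTBF (time-truncated test)."""
        alpha = 1.0 - confidence
        lower = 2.0 * total_time / chi2.ppf(1.0 - alpha / 2.0, 2 * failures + 2)
        upper = 2.0 * total_time / chi2.ppf(alpha / 2.0, 2 * failures)
        return lower, upper

    # Illustrative data: 10,000 unit-hours of testing with 4 failures.
    print(mtbf_confidence_interval(10_000.0, 4))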

III. RELIABILITY IN DESIGN AND DEVELOPMENT (26 Questions)

III. A. Reliability design techniques

  1. Environmental and use factors 
    Identify environmental and use factors (e.g., temperature, humidity, vibration) and stresses (e.g., severity of service, electrostatic discharge (ESD), throughput) to which a product may be subjected. (Apply)
  2. Stress-strength analysis 
    Apply the stress-strength analysis method of computing the probability of failure, and interpret the results (see the code sketch following this list). (Evaluate)
  3. FMEA and FMECA 
    Define and distinguish between failure mode and effects analysis and failure mode, effects, and criticality analysis and apply these techniques in products, processes, and designs. (Analyze)
  4. Common mode failure analysis 
    Describe this type of failure (also known as common cause mode failure) and how it affects design for reliability. (Understand)
  5. Fault tree analysis (FTA) and success tree analysis (STA) 
    Apply these techniques to develop models that can be used to evaluate undesirable (FTA) and desirable (STA) events. (Analyze)
  6. Tolerance and worst-case analyses 
    Describe how tolerance and worst-case analyses (e.g., root of sum of squares, extreme value) can be used to characterize variation that affects reliability. (Understand)
  7. Design of experiments
    Plan and conduct standard design of experiments (DOE) (e.g., full-factorial, fractional factorial, Latin square design). Implement robust-design approaches (e.g., Taguchi design, parametric design, DOE incorporating noise factors) to improve or optimize design. (Analyze)
  8. Fault tolerance 
    Define and describe fault tolerance and the reliability methods used to maintain system functionality. (Understand)
  9. Reliability optimization 
    Use various approaches, including redundancy, derating, trade studies, etc., to optimize reliability within the constraints of cost, schedule, weight, design requirements, etc. (Apply)
  10. Human factors 
    Describe the relationship between human factors and reliability engineering. (Understand)
  11. Design for X (DFX) 
    Apply DFX techniques such as design for assembly, testability, maintainability, environment (recycling and disposal), etc., to enhance a product’s producibility and serviceability. (Apply)
  12. Reliability apportionment (allocation) techniques 
    Use these techniques to specify subsystem and component reliability requirements. (Analyze)
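
As a hedged illustration of item 2 (not part of the ASQ text), here is a minimal Python sketch of normal-normal stress-strength interference, where reliability is the probability that strength exceeds stress. The means and standard deviations are illustrative assumptions.

    from math import erf, sqrt

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """P(strength > stress) for independent, normally distributed stress and strength."""
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF evaluated at z

    # Illustrative values only (e.g., psi).
    print(interference_reliability(mu_strength=50_000, sd_strength=4_000,
                                   mu_stress=35_000, sd_stress=3_000))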

III. B. Parts and systems management

  1. Selection, standardization, and reuse 
    Apply techniques for materials selection, parts standardization and reduction, parallel modeling, software reuse, including commercial off-the-shelf (COTS) software, etc. (Apply)
  2. Derating methods and principles
    Use methods such as S-N diagram, stress-life relationship, etc., to determine the relationship between applied stress and rated value, and to improve design. (Analyze)
  3. Parts obsolescence management 
    Explain the implications of parts obsolescence and requirements for parts or system requalification. Develop risk mitigation plans such as lifetime buy, backwards compatibility, etc. (Apply)
  4. Establishing specifications
    Develop metrics for reliability, maintainability, and serviceability (e.g., MTBF, MTBR, MTBUMA, service interval) for product specifications. (Create)

IV. RELIABILITY MODELING AND PREDICTIONS (22 Questions)

IV. A. Reliability modeling

  1. Sources and uses of reliability data
    Describe sources of reliability data (prototype, development, test, field, warranty, published, etc.), their advantages and limitations, and how the data can be used to measure and enhance product reliability. (Apply)
  2. Reliability block diagrams and models 
    Generate and analyze various types of block diagrams and models, including series, parallel, partial redundancy, time-dependent, etc. (see the code sketch following this list). (Create)
  3. Physics of failure models 
    Identify various failure mechanisms (e.g., fracture, corrosion, memory corruption) and select appropriate theoretical models (e.g., Arrhenius, S-N curve) to assess their impact. (Apply)
  4. Simulation techniques
    Describe the advantages and limitations of the Monte Carlo and Markov models. (Apply)
  5. Dynamic reliability
    Describe dynamic reliability as it relates to failure criteria that change over time or under different conditions. (Understand)
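
Not part of the ASQ text: a minimal Python sketch of the series and parallel (full redundancy) block diagram formulas from item 2, assuming independent blocks; the block reliabilities are illustrative.

    from math import prod

    def series_reliability(blocks):
        """All blocks must survive: R = product of the R_i."""
        return prod(blocks)

    def parallel_reliability(blocks):
        """At least one block must survive: R = 1 - product of (1 - R_i)."""
        return 1.0 - prod(1.0 - r for r in blocks)

    # Illustrative: two 0.95 blocks in series feeding a redundant pair of 0.90 blocks.
    front_end = series_reliability([0.95, 0.95])
    back_end = parallel_reliability([0.90, 0.90])
    print(series_reliability([front_end, back_end]))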

IV. B. Reliability predictions

  1. Part count predictions and part stress analysis
    Use parts failure rate data to estimate system- and subsystem-level reliability (see the code sketch following this list). (Apply)
  2. Reliability prediction methods 
    Use various reliability prediction methods for both repairable and non-repairable components and systems, incorporating test and field reliability data when available. (Apply)
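
Not part of the ASQ text: a hedged sketch of a parts count prediction (item 1) under the usual series-system, constant-failure-rate assumption. The part names and failure rates below are hypothetical, not taken from any handbook.

    # Hypothetical failure rates in failures per million hours (FPMH): (rate per part, quantity).
    parts = {
        "ceramic capacitor": (0.002, 120),
        "film resistor":     (0.001, 240),
        "connector":         (0.050, 8),
        "microcontroller":   (0.200, 1),
    }

    lambda_system = sum(rate * qty for rate, qty in parts.values())   # series assumption
    mtbf_hours = 1e6 / lambda_system                                  # constant failure rate
    print(f"system failure rate = {lambda_system:.3f} FPMH, MTBF = {mtbf_hours:,.0f} h")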

V. RELIABILITY TESTING (24 Questions)

V. A. Reliability test planning

  1. Reliability test strategies
    Create and apply the appropriate test strategies (e.g., truncation, test-to-failure, degradation) for various product development phases. (Create)
  2. Test environment
    Evaluate the environment in terms of system location and operational conditions to determine the most appropriate reliability test. (Evaluate)

V. B. Testing during development
Describe the purpose, advantages, and limitations of each of the following types of tests, and use common models to develop test plans, evaluate risks, and interpret test results. (Evaluate)

  1. Accelerated life tests (e.g., single-stress, multiple-stress, sequential stress, step-stress) (see the code sketch following this list)
  2. Discovery testing (e.g., HALT, margin tests, sample size of 1)
  3. Reliability growth testing (e.g., test, analyze, and fix (TAAF), Duane)
  4. Software testing (e.g., white-box, black-box, operational profile, and fault-injection)
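
Not part of the ASQ text: a minimal Python sketch of the Arrhenius acceleration factor often used to plan temperature-accelerated life tests (item 1). The activation energy and temperatures are illustrative assumptions.

    from math import exp

    BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

    def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
        """Acceleration factor between use and stress temperatures (Arrhenius model)."""
        t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
        return exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

    # Illustrative: Ea = 0.7 eV, use at 40 C, stress test at 85 C.
    print(arrhenius_acceleration_factor(0.7, 40.0, 85.0))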

V. C. Product testing
Describe the purpose, advantages, and limitations of each of the following types of tests, and use common models to develop product test plans, evaluate risks, and interpret test results. (Evaluate)

  1. Qualification/demonstration testing (e.g., sequential tests, fixed-length tests)
  2. Product reliability acceptance testing (PRAT)
  3. Ongoing reliability testing (e.g., sequential probability ratio test [SPRT])
  4. Stress screening (e.g., ESS, HASS, burn-in tests)
  5. Attribute testing (e.g., binomial, hypergeometric) (see the code sketch following this list)
  6. Degradation (wear-to-failure) testing
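
Not part of the ASQ text: a minimal Python sketch of the binomial zero-failure ("success run") calculation sometimes used for attribute testing (item 5): the number of units tested without failure needed to demonstrate a reliability target at a stated confidence. The target values are illustrative.

    from math import ceil, log

    def zero_failure_sample_size(reliability, confidence):
        """Units tested failure-free to demonstrate `reliability` at `confidence` (binomial)."""
        return ceil(log(1.0 - confidence) / log(reliability))

    # Illustrative: demonstrate 95% reliability with 90% confidence.
    print(zero_failure_sample_size(0.95, 0.90))   # 45 units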

VI. MAINTAINABILITY AND AVAILABILITY (15 Questions)

VI. A. Management strategies

  1. Planning 
    Develop plans for maintainability and availability that support reliability goals and objectives. (Create)
  2. Maintenance strategies 
    Identify the advantages and limitations of various maintenance strategies (e.g., reliability-centered maintenance (RCM), predictive maintenance, repair or replace decision making), and determine which strategy to use in specific situations. (Apply)
  3. Availability tradeoffs 
    Describe various types of availability (e.g., inherent, operational), and the tradeoffs in reliability and maintainability that might be required to achieve availability goals (see the code sketch following this list). (Apply)
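
Not part of the ASQ text: a minimal Python sketch of inherent and operational availability (item 3). The MTBF, MTTR, and downtime values are illustrative.

    def inherent_availability(mtbf, mttr):
        """A_i: considers only corrective repair time."""
        return mtbf / (mtbf + mttr)

    def operational_availability(uptime, downtime):
        """A_o: total uptime over total time, including logistics and administrative delays."""
        return uptime / (uptime + downtime)

    # Illustrative: MTBF = 1,000 h and MTTR = 4 h; one year with 120 h of total downtime.
    print(inherent_availability(1_000.0, 4.0))
    print(operational_availability(8_760.0 - 120.0, 120.0))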

VI. B. Maintenance and testing analysis

  1. Preventive maintenance (PM) analysis 
    Define and use PM tasks, optimum PM intervals, and other elements of this analysis, and identify situations in which PM analysis is not appropriate. (Apply)
  2. Corrective maintenance analysis 
    Describe the elements of corrective maintenance analysis (e.g., fault-isolation time, repair/replace time, skill level, crew hours) and apply them in specific situations. (Apply)
  3. Non-destructive evaluation 
    Describe the types and uses of these tools (e.g., fatigue, delamination, vibration signature analysis) to look for potential defects. (Understand)
  4. Testability 
    Use various testability requirements and methods (e.g., built-in tests (BITs), false-alarm rates, diagnostics, error codes, fault tolerance) to achieve reliability goals. (Apply)
  5. Spare parts analysis 
    Describe the relationship between spare parts requirements and reliability, maintainability, and availability requirements. Forecast spare parts requirements using field data, production lead time data, inventory and other prediction tools, etc. (see the code sketch following this list). (Analyze)
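
Not part of the ASQ text: a hedged sketch of a Poisson spares-provisioning calculation related to item 5, giving the probability that a stock of spares covers demand over a support period under a constant failure rate. All numbers are illustrative.

    from math import exp, factorial

    def spares_protection_level(failure_rate, period_hours, units_installed, spares):
        """P(demand <= spares) under a Poisson demand model."""
        expected_demand = failure_rate * period_hours * units_installed
        return sum(expected_demand**k * exp(-expected_demand) / factorial(k)
                   for k in range(spares + 1))

    # Illustrative: 5e-5 failures/hour per unit, one-year horizon, 10 installed units.
    for spares in range(0, 9):
        print(spares, round(spares_protection_level(5e-5, 8_760.0, 10, spares), 3))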

VII. DATA COLLECTION AND USE (18 Questions)

VII. A. Data collection

  1. Types of data 
    Identify and distinguish between various types of data (e.g., attributes vs. variable, discrete vs. continuous, censored vs. complete, univariate vs. multivariate). Select appropriate data types to meet various analysis objectives. (Evaluate)
  2. Collection methods 
    Identify appropriate methods and evaluate the results from surveys, automated tests, automated monitoring and reporting tools, etc., that are used to meet various data analysis objectives. (Evaluate)
  3. Data management 
    Describe key characteristics of a database (e.g., accuracy, completeness, update frequency). Specify the requirements for reliability-driven measurement systems and database plans, including consideration of the data collectors and users, and their functional responsibilities. (Evaluate)

VII. B. Data use

  1. Data summary and reporting
    Examine collected data for accuracy and usefulness. Analyze, interpret, and summarize data for presentation using techniques such as trend analysis, Weibull, graphic representation, etc., based on data types, sources, and required output (see the code sketch following this list). (Create)
  2. Preventive and corrective action
    Select and use various root cause and failure analysis tools to determine the causes of degradation or failure, and identify appropriate preventive or corrective actions to take in specific situations. (Evaluate)
  3. Measures of effectiveness
    Use various data analysis tools to evaluate the effectiveness of preventive and corrective actions in improving reliability. (Evaluate)
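
Not part of the ASQ text: a hedged Python sketch of one common data-summary technique from item 1, a least-squares Weibull fit using Benard's median rank approximation. It assumes numpy is available and complete (uncensored) failure data; the failure times are illustrative.

    import numpy as np

    def weibull_least_squares_fit(times):
        """Estimate Weibull beta and eta from complete failure data via median ranks."""
        t = np.sort(np.asarray(times, dtype=float))
        n = len(t)
        ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # Benard's median rank approximation
        x = np.log(t)                                       # ln(t)
        y = np.log(-np.log(1.0 - ranks))                    # ln(-ln(1 - F)), linearized CDF
        beta, intercept = np.polyfit(x, y, 1)               # slope is the shape parameter beta
        eta = np.exp(-intercept / beta)                     # intercept = -beta * ln(eta)
        return beta, eta

    # Illustrative failure times (hours).
    print(weibull_least_squares_fit([105, 230, 312, 420, 515, 690, 810, 990]))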

VII. C. Failure analysis and correction

  1. Failure analysis methods 
    Describe methods such as mechanical, materials, and physical analysis, scanning electron microscopy (SEM), etc., that are used to identify failure mechanisms. (Understand)
  2. Failure reporting, analysis, and corrective action system (FRACAS)
    Identify the elements necessary for a FRACAS to be effective, and demonstrate the importance of a closed-loop process that includes root cause investigation and follow up. (Apply)

Levels of Cognition
based on Bloom’s Taxonomy – Revised (2001)

In addition to content specifics, the subtext for each topic in this BOK also indicates the intended complexity level of the test questions for that topic. These levels are based on “Levels of Cognition” (from Bloom’s Taxonomy – Revised, 2001) and are presented below in rank order, from least complex to most complex.

Remember
Recall or recognize terms, definitions, facts, ideas, materials, patterns, sequences, methods, principles, etc.

Understand 
Read and understand descriptions, communications, reports, tables, diagrams, directions, regulations, etc.

Apply 
Know when and how to use ideas, procedures, methods, formulas, principles, theories, etc.

Analyze
Break down information into its constituent parts and recognize their relationship to one another and how they are organized; identify sublevel factors or salient data from a complex scenario.

Evaluate
Make judgments about the value of proposed ideas, solutions, etc., by comparing the proposal to specific criteria or standards.

Create 
Put parts or elements together in such a way as to reveal a pattern or structure not clearly there before; identify which data or information from a complex set is appropriate to examine further or from which supported conclusions can be drawn.
