Second in a series exploring sample exam questions.
If you have other ways to sort out these questions, please comment so we can all learn and compare approaches.
6. A parts-count reliability prediction is calculated by summing only the
(A) parts failure rates
(B) number of parts in the system
(C) variances of the part failure rates
(D) parts failure rates with application stress
Answer
There is a range of prediction techniques. The parts-count method is the simplest one that I know of, and it considers only the part failure rates. There are methods that add quality factors, stress factors, and other modifiers; that approach is called the parts stress method (or similar).
B is summing the number of parts. The part count may be roughly correlated with reliability, or the inverse of failure rate (more parts generally means less reliability), yet that is not a prediction method.
C sums variances, and as far as I know there is no method (statistical, prediction, or otherwise) that uses a sum of variance values this way. Also, it is often very difficult to find variance values for failure rates beyond very crude estimates.
D includes the application stress, which is a step more complicated than the parts-count prediction method.
A is correct. It is a feature of the exponential distribution that permits the simple addition of failure rates.
$latex \displaystyle R(t)=\left( e^{-\lambda_{1}t} \right)\left( e^{-\lambda_{2}t} \right)\cdots \left( e^{-\lambda_{n}t} \right)=e^{-\left( \lambda_{1}+\lambda_{2}+\cdots +\lambda_{n} \right)t}$
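To make the parts-count calculation concrete, here is a minimal Python sketch; the part names and failure-rate values are made up for illustration only, not taken from any handbook.

```python
# A minimal sketch of a parts-count prediction, assuming each part has
# a constant (exponential) failure rate. Rates are in failures per
# million hours; all values here are illustrative.
part_failure_rates = {
    "resistor": 0.002,
    "capacitor": 0.005,
    "ic": 0.040,
}

# Parts-count prediction: the system failure rate is simply the sum
# of the individual part failure rates.
system_lambda = sum(part_failure_rates.values())

# With a constant failure rate, MTBF is the reciprocal of lambda.
system_mtbf = 1 / system_lambda

print(f"System failure rate: {system_lambda:.3f} failures per million hours")
print(f"System MTBF: {system_mtbf:.1f} million hours")
```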
Reference: O’Connor, Patrick D. T., Practical Reliability Engineering, 4th ed., Halsted Press, 2002, pp. 164-165.
7. A part has a constant hazard rate. If preventive maintenance is used, the part’s failure probability will be affected in which of the following ways?
(A) It will increase.
(B) It will decrease to a fixed value greater than zero.
(C) It will decrease to zero.
(D) It will remain the same.
Answer
The only life data distribution with a constant hazard rate is the exponential distribution. The key word here is 'constant', which implies that the rate of failure does not depend on time or anything else. The arrival rate of failures is constant.
Therefore, if the hazard rate is constant, the part's probability of failure does not change; it neither increases nor decreases, making A, B, and C incorrect. The reliability value will still change with time, as the part continues to have a chance to fail, and the more time (chances to fail), the lower the part's reliability over the duration.
D is correct: the chance to fail, or failure probability, will not change, as it is given as constant. (Note: in practice I find this to be a simplifying assumption that is often wrong and leads to poor decisions.)
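A short numerical check of this memoryless behavior, using a made-up constant hazard rate:

```python
import math

lam = 0.001  # constant hazard rate, failures per hour (illustrative)
dt = 100.0   # look-ahead window, hours

def p_fail_in_window(t, lam=lam, dt=dt):
    """P(fail in (t, t + dt] | survived to t) for an exponential life."""
    R = lambda x: math.exp(-lam * x)   # reliability function
    return (R(t) - R(t + dt)) / R(t)   # reduces to 1 - exp(-lam * dt)

# The conditional failure probability is the same whether the part is
# brand new or has already run 10,000 hours, so preventive replacement
# (restoring the part to "new") changes nothing.
print(p_fail_in_window(0))       # new part: ~0.0952
print(p_fail_in_window(10_000))  # aged part: identical value
```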
Reference: O’Connor, Patrick D. T., Practical Reliability Engineering, 4th ed., Halsted Press, 2002, pp. 403-404.
8. The use of experimental design techniques early in the process development stage generally results in
(A) increased personnel and product costs
(B) increased product development time
(C) decreased variability around target requirements
(D) decreased start-up process yields
Answer
I may be reading into the question a little by translating the phrase 'experimental design techniques' as meaning design of experiments (DOE). The other key phrase is 'the process development stage', which implies the process to manufacture or assemble the product.
A is not correct, as using well-thought-out experiments generally reduces personnel and product costs. One of the hallmarks of well-designed experiments is that you get meaningful results quickly.
B is not correct on two counts. First, the question focuses on the process development stage, which is a different stage than product development. Yes, the two might overlap or be related in some manner, yet the question is focused on process development. Second, even when DOE is used during product development, it generally helps the development team identify and resolve issues efficiently and takes less time.
D may seem plausible yet is not as good an answer as C. Yield often relies on the specifications, the technology, and the process variability. The use of DOE tends to improve the product or process robustness to day-to-day variations in stresses, environment, and materials. The improved process tends to improve yields, yet that is only one contribution to the resulting yield.
C is correct. Experimental design techniques tend to focus on minimizing variability, or the response of the process or product to naturally occurring variability. The experiments help us identify the factors and their settings that maximize the ability to hit the target despite the variability that occurs. This is done by focusing controls on the factors that most influence the results.
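To make 'experimental design techniques' a bit more concrete, here is a minimal Python sketch that lays out a two-level full factorial design; the factor names and levels are assumptions for illustration only.

```python
from itertools import product

# Three process factors at low/high levels (all values illustrative).
factors = {
    "temperature": (150, 180),  # degrees C
    "pressure": (30, 50),       # psi
    "cure_time": (10, 20),      # minutes
}

# Every combination of the levels: 2^3 = 8 experimental runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"run {i}: {run}")

# Measuring the response at each run and estimating the main effects
# and interactions points to the factor settings that keep the output
# on target with the least variability.
```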
Reference: Montgomery, Douglas C., Design and Analysis of Experiments, 7th ed., New York: John Wiley and Sons, 2009, p. 7.
9. A qualification test is planned to establish whether a unit meets required minimum mean time between failures (MTBF). Which of the following tools can be used to estimate the chance that the unit will pass the test even if its true MTBF is below the required level?
(A) Operating characteristic curve
(B) Block diagram
(C) Fault tree
(D) Transition state matrix
Answer
Key elements of the question include 'test … planned', 'minimum MTBF' (i.e., failure rate), and 'chance'. In essence, we want to estimate the likelihood of either passing a bad unit or failing a good unit. That should remind you of type I and type II errors, or the alpha and beta risks.
B (block diagram) and C (fault tree) are not correct, as both are modeling tools for representing system reliability. Both tools can estimate reliability performance and may or may not include confidence. It might be possible, although difficult, to estimate performance on a particular test with them, yet that is not these tools' primary purpose.
D is not correct. I had to look up 'transition state matrix' (my first inclination was that it has something to do with Markov or Petri net modeling, which again is not related to test planning). A state-transition matrix is related to control theory, not to test planning directly.
A is correct. An operating characteristic (OC) curve plots the probability of accepting the unit (or lot) based on the test results against the unknown actual percent defective (or failure rate). The OC curve directly permits determining the probability of accepting the unit based on a test even if the actual failure rate (the inverse of MTBF) is higher than expected or desired.
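As a rough illustration, here is a minimal Python sketch of one common form of OC curve for a fixed-duration MTBF demonstration test; the test length, allowed failure count, and MTBF values are all assumptions for illustration.

```python
import math

T = 10_000  # total test hours (assumption)
c = 2       # maximum failures allowed to pass (assumption)

def p_accept(mtbf, T=T, c=c):
    """Probability of passing the test at a given true MTBF.

    With a constant failure rate, the failure count during the test is
    Poisson with mean T / MTBF; we pass if the count is at most c.
    """
    mean = T / mtbf
    return sum(math.exp(-mean) * mean**k / math.factorial(k)
               for k in range(c + 1))

# Sweeping the true MTBF traces out the OC curve. Note the unit can
# still pass with appreciable probability even when its true MTBF is
# below a required 5,000 hours.
for mtbf in (2_000, 3_000, 5_000, 10_000):
    print(f"true MTBF {mtbf:>6}: P(accept) = {p_accept(mtbf):.3f}")
```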
Reference: O’Connor, Patrick D. T., Practical Reliability Engineering, 4th ed., Halsted Press, 2002, p. 354.
10. The reliability block diagram of a system is shown in the following figure with component reliability noted in each block.
What is the reliability of the system?
(A) 0.670
(B) 0.726
(C) 0.804
(D) 0.820
Answer
This is a classic question that requires knowledge of reliability block diagram series and parallel structures and formulas. As shown, it takes a sequence of reductions or simplifications to determine the system reliability. Eventually, we want a simple model that is all series elements or all parallel elements. In this case, block A is in series with the remainder of the blocks. There are two sets of blocks (B, C and D, E) whose members are in series (B is in series with C, for example), and the B, C set is in parallel with the D, E set.
First, reduce the B, C series to a single block, and do the same for the D, E series; these two reduced blocks form the parallel structure. A series structure is reduced by simply finding the product of the elements in series: in this case, (0.70)(0.80) = 0.56. Since the D, E blocks have the same values, they also reduce to 0.56.
Second, reduce the two elements in parallel. The B, C set has a reliability of 0.56, as does the D, E set. The formula shortcut is to find the product of the unreliabilities (one minus reliability), then convert back to reliability (again, one minus). This works out to
1 – [ ( 1 – 0.56 )( 1 – 0.56 ) ] = 1 – [ (0.44)(0.44) ] = 1 – 0.1936 = 0.8064
Third, we now have two elements in series: block A and the combined value of blocks B, C, D, and E. Find the product of the series elements: (0.9)(0.8064) = 0.72576.
Notice I didn't round off along the way, keeping at least four digits, one more than the available responses show. Round off only the final result, to the precision of the possible responses. Rounding to three decimal places, we get 0.726, which is B.
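The same reduction takes only a few lines of code; here is a minimal Python sketch, assuming block A has reliability 0.9 and the remaining blocks have the values used above.

```python
from functools import reduce

def series(*rs):
    """Series blocks: all must work, so multiply the reliabilities."""
    return reduce(lambda a, b: a * b, rs)

def parallel(*rs):
    """Parallel blocks: all must fail, so multiply the unreliabilities."""
    return 1 - reduce(lambda a, b: a * b, (1 - r for r in rs))

r_bc = series(0.70, 0.80)        # 0.56
r_de = series(0.70, 0.80)        # 0.56
r_paths = parallel(r_bc, r_de)   # 0.8064
r_system = series(0.9, r_paths)  # 0.72576

print(round(r_system, 3))  # 0.726, response B
```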
C is close to the 0.8064 result of just the combined elements B, C, D, and E, omitting block A.
A is close to the result if you do not use the unreliability values when combining the parallel sets (B, C and D, E).
D is close to the result if you treat block A and the combined blocks B, C, D, and E as being in parallel.
None of the incorrect responses exactly matches these quick miscalculations, yet they may be close enough to lead you to select the wrong response. Of course, there may well be other ways to do the calculations incorrectly and find a match among the wrong responses.
Reference: Ireson, W. Grant, Clyde F. Coombs, Jr. and Richard Y. Moss, eds., Handbook of Reliability Engineering and Management, 2nd ed., New York: McGraw-Hill Professional, 1996, pp. 6.39-6.41. ISBN 0070127506