I needed multivariate fragility functions for seismic risk analysis of nuclear power plants. I didn’t have any test data, so Lawrence Livermore Lab paid “experts” for their opinions! I set up the questionnaires, asked for percentiles, salted the sample to check for bias, asked for percentiles of conditional fragility functions to estimate correlations, and fixed pairwise correlations to make legitimate multivariate correlation matrixes. Subjective percentiles provide more distribution information than parameter or distribution assumptions, RPNs, ABCD, high-medium-low, or RCM risk classifications.
Seismic fragility refers to the strength-at-failure of components or structures under earthquake loads. A seismic fragility function is the probability of failure as a function of earthquake response stress, often peak ground acceleration. Post-earthquake inspections record damage and provide inputs for estimating subjective fragility functions and the correlations of strengths-at-failures. Correlations are necessary for seismic risk analyses, because seismic stresses and components’ strengths-at-failures are dependent.
A fragility function is the cumulative distribution function of strength-at-failure, the complement of reliability as a function of stress intensity. Fragility functions are needed for reliability analyses as P[component Failure] = P[Stress > strength] = ∫R(x)dG(x), x=0 to ∞, where R(x) = P[Stress > x] and G(x) is the fragility function [Kennedy et al., Kapur and Lamberson, NUREG/CR 3558, https://accendoreliability.com/what-to-do-with-obsolescent-nuclear-engineers/]. Also, electrical generation failure (LOLP, Loss Of Load Probability) is modeled as P[Load > Capacity] or a multivariate version [George 1998, George and Wells].
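As a concrete illustration of the stress-strength integral, here is a minimal numerical sketch; the lognormal stress and normal strength-at-failure distributions are assumptions chosen only to show the computation, not values from any fragility database.

```python
# Minimal sketch of P[component failure] = ∫ R(x) dG(x); distributions are hypothetical.
import numpy as np
from scipy import stats, integrate

stress = stats.lognorm(s=0.5, scale=0.4)    # assumed stress distribution (e.g., PGA in g); R(x) = stress.sf(x)
strength = stats.norm(loc=1.0, scale=0.2)   # assumed strength-at-failure; fragility G(x) = strength.cdf(x)

# ∫ R(x) dG(x) = ∫ P[Stress > x] g(x) dx, where g is the density of strength-at-failure
p_fail, _ = integrate.quad(lambda x: stress.sf(x) * strength.pdf(x), 0, np.inf)
print(f"P[component failure] ≈ {p_fail:.4f}")
```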
Why multivariate? Seismic risk analysis of power plants is reliability computation for systems whose component failures depend on earthquake responses and whose strengths-at-failures may be statistically dependent. The system failure model is the multivariate probability P[g(Stress, Strength) = failure], where g(Stress, Strength) is the system “structure” function from fault-tree analysis. Components’ strength data is summarized in estimates of multivariate fragility functions of component strength random variables, including their correlations. This multivariate stress-strength analysis has been incorporated into risk analysis standards [ASME/ANS, ANSI/ANS, NRC, NUREG/CR 4334, IAEA TECDOC, EPRI].
Where’s the fragility data? Fragility functions can be estimated from subjective opinions on their percentiles. Subjective percentiles may be less biased than opinions about distribution parameters [NAP, Hora et al., and Karvetski et al.]. This article provides solutions to fragility function estimation using subjective percentiles and test data. How should:
- subjective percentiles be used to estimate subjective fragility functions?
- dependence be estimated?
- subjective percentiles be combined with test data?
- fragility functions be combined for several failure modes into a composite fragility function?
- inherent randomness and uncertainty due to lack of knowledge be represented?
- correlations be estimated for multivariate fragility functions?
Subjective percentiles are assumed to be independent estimates of percentiles; i.e., percentiles from different people are regarded as statistically independent.
This paper shows how to quantify:
- Least-squares parameter estimators for normal and lognormal fragility functions, based on subjective percentiles; the method is applicable to any invertible cumulative distribution function,
- Composite fragility function combining several failure modes,
- Estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles,
- Weighted least-squares estimators when subjective percentiles have higher variation at higher percentiles,
- Weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data, and
- Least squares correlation estimates and Frobenius distance to nearest positive definite correlation matrix.
Expert opinions on subjective percentiles could be used to estimate subjective reliability functions too. Integrate a subjective reliability function from zero to infinity to obtain a subjective MTBF estimate!
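If you want to try that, here is a minimal sketch assuming a hypothetical lognormal subjective reliability function; only the integration is the point.

```python
# Sketch: MTBF = ∫ R(t) dt, t from 0 to ∞, for an assumed lognormal reliability function
import numpy as np
from scipy import stats, integrate

life = stats.lognorm(s=0.8, scale=1000.0)     # hypothetical subjective life distribution (median 1000 hours)
mtbf, _ = integrate.quad(life.sf, 0, np.inf)  # integrate the reliability (survival) function
print(f"subjective MTBF ≈ {mtbf:.0f} hours (check: distribution mean = {life.mean():.0f})")
```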
Introduction to Subjective Fragility Functions
In 1980, there was little relevant component strength-test data for seismic risk analysis. Components may have been acceptance tested but not tested to failure. Or they may have been tested to failure but not under earthquake loads. Kennedy et al. reported on experts’ opinions on means and 10th percentiles. I asked for experts’ opinions on fragility function percentiles.
Questionnaires were sent to 253 experts to obtain strength-at-failure percentile estimates of nuclear power plant components in earthquakes. Forty experts returned 120 questionnaires on 31 component categories. The questionnaires asked for the 10th, 50th, and 90th percentiles of the fragility functions for the three failure modes judged most likely by each expert. Three percentiles seemed adequate to represent the spread of distributions without straining credibility and to test goodness of fit. The questionnaires also asked experts for their self-credibility weights.
The objectives were to estimate components’ fragility functions and their correlations. This requires estimation of fragility functions for each failure mode, combining modal fragility function estimates, and deriving correlations. Pairwise correlations had to be checked to see whether correlation matrixes were positive definite. Because the fragility function estimates come from subjective percentiles and test data, the uncertainty due to lack of knowledge about the true strength also had to be quantified.
Estimating Fragility Functions from Percentiles
We used least squares to estimate fragility function parameters for normal, lognormal, and exponential fragility functions. The estimators were easily modified to accept weighted data. The weights were credibility ratings given by the respondents or by the data analyst.
Define X(i,q) as the q-th subjective percentile given by the i-th expert. The assumed model for X(i,q) is
X(i,q) = x(q) + E(i,q),
where x(q) is the q-th population percentile of the reference population, assumed to be the aggregation of subjective fragility functions for all experts, and E(i,q) is a random variable with E[E(i,q)] = 0 and Var[E(i,q)] = σ_E². The subjective percentiles from each expert were assumed to be uncorrelated. For each percentile, the model assumed the i-th expert’s opinion was randomly selected from the population of opinions of all experts. This is a simplification; it is doubtful that an expert’s opinions about three percentiles are uncorrelated, nor can experts be expected to be independent sources of information.
Table 1 shows an example of parameter inputs for simulating expert opinion percentiles for equivalent lognormal and normal fragility functions. The means and standard deviations are small to avoid blowing up spreadsheet values and to avoid negative normal lower percentiles. Fragility function parameter estimates could be rescaled to compare with handbook values and to input to seismic risk analyses.
Table 1. Inputs for simulating expert opinion percentiles. The “Range”-name parameters are for simulation of a mean and standard deviation for each expert. The lognormal parameters (ln(X)) correspond with the normal parameters (X).
Parameter | Lognormal | Normal |
Mean | 1 | 2.718418 |
Stdev | 0.01 | 0.027185 |
RangeMean | 0.05 | 0.135921 |
RangeStdev | 0.0005 | 0.001359 |
Table 2. Simulated expert opinion percentiles.
Percentile | Experts 1–8 |
Lognormal | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
10% | 2.701 | 2.682 | 2.746 | 2.675 | 2.724 | 2.667 | 2.709 | 2.666 |
50% | 2.736 | 2.716 | 2.780 | 2.708 | 2.759 | 2.701 | 2.743 | 2.700 |
90% | 2.772 | 2.751 | 2.815 | 2.743 | 2.794 | 2.735 | 2.778 | 2.734 |
Normal | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
10% | 2.709 | 2.617 | 2.674 | 2.735 | 2.645 | 2.687 | 2.621 | 2.686 |
50% | 2.743 | 2.651 | 2.709 | 2.769 | 2.680 | 2.721 | 2.655 | 2.721 |
90% | 2.778 | 2.686 | 2.745 | 2.804 | 2.715 | 2.755 | 2.690 | 2.755 |
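Tables 1 and 2 came from a spreadsheet simulation. Here is one plausible reading of it in Python; how each expert’s mean and standard deviation are drawn from the “Range” parameters is my assumption (uniform within ±Range/2), so the values will not reproduce Table 2 exactly.

```python
# One plausible simulation of expert opinion percentiles per Table 1; the uniform draws for each
# expert's mean and standard deviation are an assumption, not necessarily the original spreadsheet's.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mean, stdev = 2.718418, 0.027185              # normal parameters (Table 1)
range_mean, range_stdev = 0.135921, 0.001359  # "Range" parameters (Table 1)
z = norm.ppf([0.10, 0.50, 0.90])

n_experts = 8
mu_i = rng.uniform(mean - range_mean / 2, mean + range_mean / 2, n_experts)
sigma_i = rng.uniform(stdev - range_stdev / 2, stdev + range_stdev / 2, n_experts)

X = mu_i[:, None] + sigma_i[:, None] * z[None, :]   # X(i,q) = mu_i + sigma_i * z(q)
print(np.round(X.T, 3))                             # rows: 10%, 50%, 90%; columns: experts 1..8
```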
The parameter estimators for an assumed fragility function minimize the sum of squared differences between the sample percentiles and the corresponding percentiles of the hypothesized fragility function. Let θ denote the vector of parameters of the assumed fragility function F(x, θ), and let F⁻¹(q, θ) denote its inverse. The objective is to find the value of θ that minimizes SUM[SUM[(X(i,q)-F⁻¹(q,θ))²]], where the sums are over experts i=1,2,…,n and percentiles q. It is not necessary to have the same set of percentiles q for all experts. The number of percentiles q must be at least as large as the number of parameters in θ.
Assuming the population fragility function can be approximated by the normal distribution, the parameters to be estimated are the mean and standard deviation, μ and σ. The percentiles of the normal distribution are x(q) = μ+σz(q), where z(q) is the q-th percentile of the standard normal distribution. The objective is to find estimates of μ and σ that minimize SUM[SUM[(X(i,q)-μ-σz(q))²]].
Assuming experts give the 10th, 50th, and 90th percentiles, the normal equations yield the parameter estimators:
μ = (Xbar(.1)+Xbar(.5)+Xbar(.9))/3,
where Xbar(q) is the average of the q-th subjective percentiles offered by the n experts, and
σ = SUM[(X(i,.9)-X(i,.1))/(2nz(.9)); i=1,2,…,n].
If the assumed fragility function is lognormal with E[ln(X)] = μ and Var[ln(X)] = σ², the objective function is SUM[SUM[(ln(X(i,q))-μ-σz(q))²]], where the sums are over i=1,2,…,n and q. The estimators are:
μ = (AVERAGE(ln(X(.1))) + AVERAGE(ln(X(.5))) + AVERAGE(ln(X(.9))))/3 and
σ = SUM[(ln(X(i,.9))-ln(X(i,.1)))/(2nz(.9)); i=1,2,…,n].
Under either assumption, the estimators of μ and σ are unbiased. Their variances are σ_E²/(3n) and σ_E²/(2nz(.9)²) respectively. An estimate of the variation between experts is
σ_E² = SUM[SUM[(X(i,q)-μ-σz(q))²]]/(3n-2), the residual sum of squares divided by its degrees of freedom, where the sums are over i=1,2,…,n and q.
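A short sketch of these closed-form estimators, applied to the Table 2 normal percentiles (for the lognormal case, apply the same code to ln(X)):

```python
# Least-squares estimators of mu and sigma from 10th/50th/90th subjective percentiles
# (normal fragility function); the data are the normal rows of Table 2.
import numpy as np
from scipy.stats import norm

# X[i] = (10th, 50th, 90th) percentiles from expert i
X = np.array([
    [2.709, 2.743, 2.778], [2.617, 2.651, 2.686], [2.674, 2.709, 2.745], [2.735, 2.769, 2.804],
    [2.645, 2.680, 2.715], [2.687, 2.721, 2.755], [2.621, 2.655, 2.690], [2.686, 2.721, 2.755],
])
n = X.shape[0]
z = norm.ppf([0.10, 0.50, 0.90])

mu_hat = X.mean()                                        # (Xbar(.1) + Xbar(.5) + Xbar(.9))/3
sigma_hat = np.sum(X[:, 2] - X[:, 0]) / (2 * n * z[2])   # SUM[(X(i,.9) - X(i,.1))]/(2 n z(.9))
resid = X - (mu_hat + sigma_hat * z)                     # residuals against fitted percentiles
var_E = np.sum(resid ** 2) / (3 * n - 2)                 # between-expert variance estimate

print(f"mu ≈ {mu_hat:.4f}, sigma ≈ {sigma_hat:.4f}, var_E ≈ {var_E:.6f}")
```

The output reproduces the Table 3 normal-column estimates, roughly 2.706 and 0.027.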
Table 3 and Figure 1 show the results assuming lognormal or normal fragility functions. The Table 3 estimates agree pretty well with the Table 1 parameter inputs. The fragility function curves in Figure 1 have similar shapes, because the lognormal and normal parameters in Table 1 were chosen that way.
Table 3. Parameter estimates from subjective percentiles in table 2.
Parameter | Lognormal | Normal |
Mean | 1.0045 | 2.7064 |
Stdev | 0.0099 | 0.0269 |
Var(Mean) | 2.87 | 0.006 |
Var(Stdev) | 1.66 | 0.0004 |
Experts’ opinions on percentiles gave estimates of mean strengths of common materials used in nuclear power plants that were pretty good, resembling handbook values. But experts overestimated standard deviations by as much as a factor of two [Kececioglu et al.]. Let me know if you want the spreadsheet that did the tables and graph, or if the hypothesized fragility function is exponential or some other distribution.
Combining Fragility Functions for Several Failure Modes
If a component can fail in several modes, it is of interest to find a single-variate fragility function which describes the component failure in the weakest mode. Figure 2 is a fault tree of component failure, when a component can fail in two modes.
Here are two methods for determining the fragility function for component failure when its marginal fragility function for each failure mode is estimated. (The methods for two failure modes are easily extended to more modes.) If failure in either mode is caused by the same response variable such as earthquake, combining modes is easy.
Define S1 and S2 as the component’s strengths (capabilities to resist failure) in modes 1 and 2, respectively. The marginal fragility functions of S1 and S2 are F1(s) and F2(s). Let R denote the stress seen by the component; it must have the same units as S1 and S2. The response cumulative distribution function is FR(r). The probability of component failure is P[S1<R OR S2<R] = ∫P[S1<s OR S2<s|R=s]dFR(s), where the integral is over s from 0 to infinity.
If S1 and S2 are independent, P[Component Failure] = ∫(1-P[S1>s|R=s]P[S2>s|R=s])dFR(s) =
∫(1-(1-F1(s))(1-F2(s)))dFR(s). The combined fragility function is 1-(1-F1(s))(1-F2(s)).
If the responses causing the two failure modes differ but come from a common response, then transform responses R1 and R2 from the common response R. For example, R1 and R2 may be maximum displacement and peak velocity, both of which are related to peak acceleration, R. Let g1(R) and g2(R) be known functions relating the responses to the common response R. The combined fragility function is 1-(1-F1(g1(s)))(1-F2(g2(s))).
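For readers who prefer code, here is a minimal sketch of the combined fragility function under independence; the marginal distributions and the transforms g1, g2 are illustrative assumptions.

```python
# Combined fragility for two independent failure modes driven by a common response s:
# 1 - (1 - F1(g1(s))) * (1 - F2(g2(s))). Distributions and transforms are hypothetical.
import numpy as np
from scipy.stats import lognorm

F1 = lognorm(s=0.4, scale=1.2).cdf   # mode 1 marginal fragility
F2 = lognorm(s=0.5, scale=1.6).cdf   # mode 2 marginal fragility
g1 = lambda s: 1.0 * s               # hypothetical transform of common response (e.g., displacement from PGA)
g2 = lambda s: 0.8 * s               # hypothetical transform (e.g., velocity from PGA)

def combined_fragility(s):
    """P[fail in mode 1 OR mode 2 | common response = s], assuming independent modes."""
    return 1.0 - (1.0 - F1(g1(s))) * (1.0 - F2(g2(s)))

print(np.round(combined_fragility(np.linspace(0.1, 3.0, 6)), 3))
```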
Using Grouped and Weighted Subjective Percentiles for Estimating Fragility Functions and Quantifying Uncertainty Due to Lack of Knowledge
The least squares procedure assumes identically distributed errors for each of the subjective percentiles. Such estimates are not always identically distributed. Experts could be estimating percentiles for different types of components within the same generic component category (e.g., different sized valves). Similarly, experts could be grouped by their background and experience. Consequently, subjective percentiles from experts in different groups may not be identically distributed. Also, opinions from different experts could be weighted differently, by the experts themselves (self or peer weighting) or by the person conducting the survey.
The analysis of a model with group effects and weighted experts estimates the fragility function parameters and the variation between experts within a group and between groups [Working and Hotelling, Karvetski et al.]. The latter estimates can help to quantify “uncertainty” in the parameter estimators due to differences between experts. [Please refer to the articles by George and Mensing for these methods, or ask me to reproduce them in spreadsheet form.]
Using Subjective Percentiles and Test Data
This section describes a least squares and a Bayesian method for combining subjective percentiles and test data. The least squares method treats the test data as an empirical distribution function with percentiles at each observed failure strength. These percentiles are put into the weighted sum of squared deviations just like subjective percentiles. The Bayesian method uses the subjective percentiles to estimate a prior distribution of the fragility function parameters. The test data is assumed to be a random sample from the fragility function. The Bayes method gives the posterior distribution of the fragility function parameters. Substituting the expected values (Bayes estimates) of the parameters into the fragility function yields a posterior Bayes estimate of the fragility function.
The least squares method inputs consist of subjective percentiles and test data. The measurement unit of the test data must be the same as that of the subjective percentiles. The objective is to estimate the parameters of the fragility function. The parameter estimators minimize the weighted sum of squared deviations between the fragility function and the percentiles of the empirical distribution of the test data or the subjective percentiles.
The inputs are strengths-at-failures in increasing order, X(i), i = 1,2,…,k. The original sample may be larger, of size m, including survivors of stresses greater than X(k). Subjective percentiles are denoted X(i,q).
The objective function for estimating the mean and variance of strength-at-failure, assuming strength is normally distributed, is WD*SUM[(X(i)-μ-σ*z(i/m))²]+WS*SUM[SUM[(X(i,q)-μ-σ*z(q))²]], to be minimized over μ and σ, where z(i/m) is the standard normal percentile at the empirical plotting position of the i-th ordered failure (Table 5 uses (i-0.5)/k). The weights WD and WS must be specified by the user.
For example, assume WD = WS = 1, and the 10th, 50th, and 90th percentiles are given by each expert. The solutions were derived analytically [George and Mensing]. I programmed the objective function for direct minimization using Excel Solver, for any assumed invertible distribution.
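Here is an equivalent sketch using scipy instead of Excel Solver, with the Table 5 test data, the Table 2 normal subjective percentiles, and plotting positions (i-0.5)/k as in Table 5; the optimizer and starting values are my choices.

```python
# Sketch: minimize WD*SUM[(X(i)-mu-sigma*z(p_i))^2] + WS*SUM[SUM[(X(i,q)-mu-sigma*z(q))^2]]
# Test data from Table 5; subjective percentiles from Table 2 (normal rows).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

test = np.array([2.3387, 2.4970, 2.5998, 2.6885, 2.7756, 2.8703, 2.9884, 3.1907])
k = len(test)
z_test = norm.ppf((np.arange(1, k + 1) - 0.5) / k)   # 6.25%, 18.75%, ..., 93.75%

X_subj = np.array([
    [2.709, 2.743, 2.778], [2.617, 2.651, 2.686], [2.674, 2.709, 2.745], [2.735, 2.769, 2.804],
    [2.645, 2.680, 2.715], [2.687, 2.721, 2.755], [2.621, 2.655, 2.690], [2.686, 2.721, 2.755],
])
z_q = norm.ppf([0.10, 0.50, 0.90])
WD, WS = 1.0, 1.0

def objective(theta):
    mu, sigma = theta
    return (WD * np.sum((test - mu - sigma * z_test) ** 2)
            + WS * np.sum((X_subj - mu - sigma * z_q) ** 2))

res = minimize(objective, x0=[test.mean(), test.std()])
print("combined mu, sigma:", np.round(res.x, 4))
```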
Table 4. Inputs for example of combining test data and subjective percentiles. Eight failures out of a sample of size 10 were simulated from a lognormal distribution. Expert opinions on the 10th, 50th, and 90th percentiles were simulated from the normal distribution with parameters corresponding to the lognormal distribution.
WD | 1 | Test data weight |
WS | 1 | Subjective opinion weight |
Samplem | 10 | Number of units tested, including survivors |
CombMean | 2.7169 | Estimate of combined mean |
CombStdev | 0.2697 | Estimate of combined standard deviation |
Table 5. Sample data and residuals using formula WD*SUM[(X(i)-μ-σ*z(i/m))²]+WS*SUM[SUM[(X(i,q)-μ-σ*z(q))²]]
Failure | Percentile | Test data | Residuals squared |
1 | 6.25% | 2.3387 | 0.00106 |
2 | 18.75% | 2.4970 | 5.07E-05 |
3 | 31.25% | 2.5998 | 0.000593 |
4 | 43.75% | 2.6885 | 0.001599 |
5 | 56.25% | 2.7756 | 0.003444 |
6 | 68.75% | 2.8703 | 0.00724 |
7 | 81.25% | 2.9884 | 0.016933 |
8 | 93.75% | 3.1907 | 0.060952 |
The combined mean and standard deviation estimates from Table 4, 2.732 and 0.2739, agree pretty well with the normal parameters used to simulate the lognormal test data and the normal expert opinions.
The Bayesian procedure to incorporate test data assumes a lognormal fragility function and the joint conjugate prior distribution for lognormal mean and variance. If the test data is not censored or truncated, then the posterior joint distribution of lognormal mean and variance has the same form as the prior distribution.
The Bayesian procedure involves using the subjective percentiles to determine initial values for the parameters of the prior distribution, and then using the test data to evaluate the posterior marginal means for the lognormal mean and variance. These Bayes estimators are then used as estimators for the parameters of the lognormal fragility function. Please refer to the reports by George and Mensing. I will program them if someone sends subjective opinions and test data.
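As a generic illustration (not George and Mensing’s formulas), the sketch below applies the standard conjugate normal-inverse-gamma update to the mean and variance of ln(strength); the prior hyperparameter values are placeholders that would, in practice, be set from the subjective percentiles.

```python
# Conjugate (normal-inverse-gamma) Bayes update of the lognormal fragility parameters,
# i.e., the mean and variance of ln(strength). Prior hyperparameters below are placeholders,
# not George & Mensing's values.
import numpy as np

# prior: sigma^2 ~ Inverse-Gamma(alpha0, beta0); mu | sigma^2 ~ N(mu0, sigma^2/kappa0)
mu0, kappa0, alpha0, beta0 = 1.0, 3.0, 2.5, 0.02

y = np.log([2.3387, 2.4970, 2.5998, 2.6885, 2.7756, 2.8703, 2.9884, 3.1907])  # ln(test data)
n, ybar, ss = len(y), np.mean(y), np.sum((y - np.mean(y)) ** 2)

# standard conjugate posterior update
kappa_n = kappa0 + n
mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
alpha_n = alpha0 + n / 2
beta_n = beta0 + ss / 2 + kappa0 * n * (ybar - mu0) ** 2 / (2 * kappa_n)

post_mean_mu = mu_n                      # Bayes estimate of the mean of ln(strength)
post_mean_var = beta_n / (alpha_n - 1)   # Bayes estimate of the variance of ln(strength)
print(post_mean_mu, post_mean_var)
```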
Multivariate Fragility Functions and Correlations
Earthquake responses are vibrations, so fragility functions could be and have been estimated from tests such as shaker table data. However, testing individual components or structures will never quantify dependence or even correlation unless multiple units are tested simultaneously on the same shaker table. That is reason to use field data from earthquake damage!
It is reasonable to expect that system failure probability increases, even at an increasing rate, with correlations ρ of component fragilities, depending on the system “structure” function: ∂P[System failure]/∂ρ and ∂²P[System failure]/∂ρ² are positive. It is important to quantify the effects of components’ strength-at-failure dependence [Baker, Bradley and Lee]. However, most seismic risk analyses involve creative methods to avoid quantifying dependence [Fleming and Mikschl, Mosleh et al., NUREG/CR-5485 and 6268]. Probabilistic risk analysis (PRA) randomized parameters of simple risk analyses to represent uncertainty, lack of knowledge, and mathematically convenient approximations.
For an example of using expert opinions to estimate correlations, suppose one value of a conditional probability is estimated; assume it is the median of the conditional normal distribution, i.e., P[X1≤x1|X2=x2] = 0.5. Then the correlation is ρ = σ2*(μ1-x1)/(σ1*(μ2-x2)), as in Table 6. If the given conditional probability p differs from 50%, then solve x1 = μ1+σ1*ρ*(x2-μ2)/σ2+z(p)*σ1*SQRT(1-ρ²) for ρ.
Table 6. Example of correlation computation from a conditional normal percentile.
X1 | X2 | |
Mean | 1 | 1 |
Stdev | 0.2 | 0.2 |
x1|x2 | 0.8 | 0.5 |
Correlation | 0.400 | 0.400 |
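A two-line check reproduces the Table 6 correlation:

```python
# Correlation from a conditional median, using the Table 6 values
mu1, mu2, s1, s2 = 1.0, 1.0, 0.2, 0.2
x1, x2 = 0.8, 0.5
rho = s2 * (mu1 - x1) / (s1 * (mu2 - x2))
print(rho)   # 0.400, matching Table 6
```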
I asked for subjective percentiles of conditional distributions of strengths-at-failures, P[X1 > a|X2 = b], for a set of percentiles {a, b}. I checked for logical consistency: P[X1 > a|X2 = b] should increase in b (and decrease in a), because correlation is probably positive. How to extract correlations from conditional distribution percentiles?
Assume jointly (log)normally distributed strengths-at-failures X1 and X2. Minimize SUM[Observed – Expected]² or SUM[(Observed – Expected)²/Expected] over all experts and percentiles a and b. “Observed” = opinions on P[X1 > a|X2 = b], and “Expected” = the conditional (log)normal probability P[X1 > a|X2 = b] as a function of the correlation ρ (an ugly formula, derived in Mathematica, copied and translated into an Excel cell formula and VBA). That yields pairwise correlations, which do not necessarily yield a positive definite (legitimate) correlation matrix.
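Here is a sketch of that least-squares fit in Python rather than Excel/VBA; the marginal parameters and the “observed” expert opinions are made up for illustration, and “Expected” is the conditional normal exceedance probability written out explicitly.

```python
# Fit rho by least squares to subjective conditional exceedance probabilities P[X1 > a | X2 = b].
# Marginal parameters and the "observed" opinions below are hypothetical.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

mu1, s1 = 1.0, 0.2   # marginal parameters of X1 (from the fragility estimates)
mu2, s2 = 1.0, 0.2   # marginal parameters of X2

def expected(a, b, rho):
    """P[X1 > a | X2 = b] for jointly normal (X1, X2) with correlation rho."""
    cond_mean = mu1 + rho * s1 * (b - mu2) / s2
    cond_sd = s1 * np.sqrt(1.0 - rho ** 2)
    return norm.sf(a, loc=cond_mean, scale=cond_sd)

# hypothetical expert opinions: (a, b, opinion on P[X1 > a | X2 = b])
opinions = np.array([(1.0, 0.8, 0.35), (1.0, 1.2, 0.65), (0.9, 0.8, 0.55), (1.1, 1.2, 0.45)])

def sse(rho):
    a, b, obs = opinions[:, 0], opinions[:, 1], opinions[:, 2]
    return np.sum((obs - expected(a, b, rho)) ** 2)

fit = minimize_scalar(sse, bounds=(-0.999, 0.999), method="bounded")
print(f"least-squares rho ≈ {fit.x:.3f}")
```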
Check whether the correlation matrix assembled from the pairwise correlations is positive definite. I used a spreadsheet to compute the determinants of the leading principal minors of the matrix. If they are all positive, then the matrix is positive definite. If NOT, find the nearest positive definite matrix by Frobenius distance and input that matrix. Alternatively, the maximum entropy covariance matrix estimate is positive definite. Contact pstlarry@yahoo.com for help with either alternative.
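A sketch of the positive-definiteness check and one common repair, eigenvalue clipping, which gives the Frobenius-nearest positive semidefinite matrix (rescaling the diagonal afterward is an approximation; Higham’s alternating-projections method enforces unit diagonals exactly). The example matrix is made up.

```python
# Check positive definiteness of a pairwise-correlation matrix and repair it by eigenvalue clipping.
# The example matrix is hypothetical; clipping approximates the Frobenius-nearest correlation matrix.
import numpy as np

R = np.array([[1.00, 0.95, 0.10],
              [0.95, 1.00, 0.90],
              [0.10, 0.90, 1.00]])     # hypothetical pairwise correlations (not positive definite)

eigvals = np.linalg.eigvalsh(R)
print("positive definite?", bool(np.all(eigvals > 0)))

if np.any(eigvals <= 0):
    w, V = np.linalg.eigh(R)
    w_clipped = np.clip(w, 1e-8, None)          # drop negative eigenvalues
    R_near = V @ np.diag(w_clipped) @ V.T       # Frobenius-nearest positive semidefinite matrix
    d = np.sqrt(np.diag(R_near))
    R_near = R_near / np.outer(d, d)            # rescale to unit diagonal (approximation)
    print(np.round(R_near, 3))
```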
Fragility Correlations from Seismic Damage Records
Seismic failure data is preferred to expert opinions. Post-earthquake inspections record damage and provide data for estimating seismic fragility functions and the correlations of strengths-at-failures. The collected earthquake damage data includes estimated peak ground accelerations in the neighborhoods [Anagnos, PG&E and SCE substations, and SQUG, nuclear power plant (or similar) components]. This earthquake damage data and local PGA (Peak Ground Acceleration) are statistically sufficient to estimate seismic fragility functions, correlations, and sample uncertainty in the estimates. NUREG/CR parameter estimates based on subjective opinions and shaker table data differ substantially from earthquake-based parameter estimates [Anagnos, SQUG].
Table 7. Compare NRC NUREG/CR parameters (columns 2 and 3) and lognormal fragility parameter estimates PGA (g) (columns 5 and 6).
Component | Median | Log Stdev | NUREG/CR No. | Median | Log Stdev | Source |
Transformer | 1.386 | 0.1 | 4659 | 1.34 | 0.99 | Anagnos |
Circuit Breaker | 7.63 | 0.48 | 3558 | 9.09 | 2.00 | Anagnos |
Disconnect Switch | 2.33 | 0.47 | 4659 | 1.40 | 0.82 | Anagnos |
Bus Support | 0.84 | 0.63 | Anagnos | |||
Diesel Generator | 0.63 or 0.92 | 0.25 or 0.35 | 4334 App. D | 1.05 | 1.9 | SQUG |
The maximum likelihood method finds parameter values that maximize the probability of the seismic damage observations (the likelihood). Maximum likelihood was also used to represent the bivariate probabilities for correlation estimation. Figure 3 shows least squares fragility function estimates that minimize the squared differences between observed and expected proportions of seismic damage failures. Both methods give parameter estimates in terms of strengths-at-failures in PGA (g). Tables 8 and 9 show the correlation estimates computed using Mathematica, Excel Solver, and VBA [George 2015].
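For the univariate case, here is a minimal sketch of the maximum likelihood fit of a lognormal fragility function to binary damage records with local PGA; the records below are invented for illustration, not the Anagnos or SQUG data, and the bivariate likelihood used for correlations is not shown.

```python
# MLE of a lognormal fragility function (median, log stdev beta) from binary damage records.
# The damage/PGA records below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

pga    = np.array([0.15, 0.22, 0.30, 0.35, 0.48, 0.60, 0.75, 0.90, 1.10, 1.40])  # local PGA (g)
failed = np.array([0,    0,    0,    1,    0,    1,    1,    0,    1,    1])      # 1 = damaged

def neg_log_likelihood(theta):
    log_median, beta = theta
    p = norm.cdf((np.log(pga) - log_median) / beta)   # fragility evaluated at each observed PGA
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[np.log(0.5), 0.5],
               method="L-BFGS-B", bounds=[(None, None), (1e-3, None)])
median, beta = np.exp(res.x[0]), res.x[1]
print(f"median ≈ {median:.2f} g, log stdev ≈ {beta:.2f}")
```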
Table 8. Logarithmic strength-at-failure fragility function correlations for pairs of like components.
Earthquake | Component Pair | Correlation |
Whittier Narrows | Transformer | 0.7139 |
Whittier Narrows | Diesel Generator | 0.991 |
2-Whittier Narrows | Transformer | 0.9517 |
Santa Barbara | Transformer | 0.9999? |
Loma Prieta | Transformer | 0.9906
Loma Prieta | Bus Support | 0.9598
Northridge | Transformer | 0.6979 |
Alaska 1964 | Diesel Generator | 0.335 |
San Fernando | Bus Support | 0.7295 |
Chile 1985 | Diesel Generator | 0.909 |
Coalinga 1983 | Diesel Generator | 0.372 |
Table 9. Logarithmic strength-at-failure correlations for pairs of transformers and bus supports
Earthquake | Correlation |
Whittier Narrows | 0.8549 |
Loma Prieta | 0.9906 |
Northridge | 0.3983 |
Recommendation
Armchair probability or risk classifications used for screening components, such as FMECA and RCM, do not provide enough information to quantify fragility functions, reliability functions, failure rate functions, or their dependence [Aven]. Subjective opinions, supplemented with seismic failure data conditional on earthquake, do provide the subjective fragility functions and correlation matrices required for seismic risk analyses. Earthquake data are available and preferred. I will help use them and combine them with subjective percentiles.
REFERENCES
Anagnos, Thalia, “Development of an Electrical Substation Equipment Performance Database for Evaluation of Equipment Fragilities,” San Jose State Univ. and PG&E, April 1999
ANSI/ANS (American National Standards Institute/American Nuclear Society). “Probabilistic Seismic Hazard Analysis,” ANSI/ANS-2.29-2008, La Grange Park, Illinois, 2008
ASME/ANS “Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications,” Standard RA-Sb-2013
Terje Aven, “Probabilities and Background Knowledge as a Tool to Reflect Uncertainties in Relation to Intentional Act,” Reliability Engineering and System Safety, Volume 119, pp. 229-234, November 2013
Baker, Jack W. “Introducing Correlation among fragility functions for multiple components,” 14th WCEE, Beijing, Oct. 2008
Bradley, Brendon A. and Dominic S. Lee, “Component Correlations in Structure-Specific Seismic Loss Estimation,” Earthquake Engng. Struct. Dyn., vol. 39, pp. 237–258, DOI 10.1002/eqe.937, 2010
EPRI (Electric Power Research Institute), “A Methodology for Assessment of Nuclear Power Plant Seismic Margin,” EPRI Report NP-6041-SL, Revision 1, Palo Alto, CA, 1991
Karl N. Fleming and Thomas J. Mikschl, “Technical Issues in the Treatment of Dependence in Seismic Risk Analysis,” NEA/CSNI/R(99)28
L. L. George, “Seismic Fragility Function Estimation,” Test Engineering and Management, Vol. 77, No 4, pp. 16-21, Aug.-Sept. 2015
L. L. George, “Multivariate Mechanical Reliability,” ASQ Reliability Review, Vol. 18, No. 4, Dec. 1998
L. L. George and J. E. Wells, “The Reliability of Systems of Dependent Components,” Proceedings of ASQC National Meeting, San Francisco, April 1981
L. L. George and R. W. Mensing, “Using Subjective Percentiles and Test Data for Estimating Fragility Functions,” UCRL 81547, DoE Statistical Symposium Berkeley, CA, Oct. 1980, https://www.osti.gov/biblio/6688601-OhdPDF/?msclkid=7ace2b2fbcf511ecbf2da408f04a4947
L. L. George and R. W. Mensing, “Using Subjective Percentiles to Quantify Uncertainty and Estimate Correlations of Fragility Functions,” UCRL-86224, DOE Statistical Symposium, Idaho Falls, ID, Oct. 1982
Stephen C. Hora et al. “Median Aggregation of Distribution Functions,” Decision Analysis, Vol. 10, issue 4, Dec. 2013, http://pubsonline.informs.org/doi/abs/10.1287/deca.2013.0282
IAEA TECDOC-1937, “Probabilistic Safety Assessment for Seismic Events,” 2020
K. C. Kapur and L. Lamberson, Reliability in Engineering Design, Wiley, New York, 1977
Chris Karvetski et al., “Probabilistic Coherence Weighting for Optimizing Expert Forecasts,” Decision Analysis, Vol. 10, issue 4, Dec. 2013, http://pubsonline.informs.org/doi/abs/10.1287/deca.2013.0279
D. B. Kececioglu, R. E. Smith, and E. A. Selstad, “Distribution of Strength in Sample Fatigue and Associated Reliability,” Ann. Reliability and Maintainability Conference, Detroit, MI, pp.659-672, July 1970
R.P. Kennedy, C. A. Cornell, R. D. Campbell, H.F. Perla, “Probabilistic Seismic Safety Study of an Existing Nuclear Power Plant,” Nucl. Eng. Des., 59:315–338, 1980
NAP, “Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts,” National Academies Press, 1997
NUREG/CR-3558 “Handbook of nuclear power plant seismic fragilities, Seismic Safety Margins Research Program,” https://doi.org/10.2172/5313138, Nuclear Regulatory Commission, Washington, DC., Dec. 1983
NUREG/CR-4334, “An Approach to the Quantification of Seismic Margins in Nuclear Power Plants,” U.S. NRC, August 1985
NUREG/CR 4659, “Seismic Fragility of Nuclear Power Plant Components (Phase 2). Switchgear, I and C Panels (NSSS) and Relays,” by K. K. Bandyopadhyay, C. H. Hofmayer, K. M. Kassir, and S. E. Pepper, Brookhaven and US NRC, 1990
NUREG/CR-5485, “Guidelines on Modeling Common-Cause Failures in Probabilistic Risk Assessment,” by A. Mosleh, D. M. Rasmuson, F. M. Marshall, INEL and Univ. of Maryland, Nov. 1998
NUREG/CR-6268 Rev. 1, “Common-Cause Failure Database and Analysis System: Event Data Collection, Classification, and Coding,” by T. E. Wierman, D. M. Rasmuson, and A. Mosleh, INEL, 2007
NUREG/CR-6372, “Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts,” Senior Seismic Hazard Analysis Committee, two volumes, US NRC, 1997
SQUG, “Summary of Seismic Adequacy of Twenty Classes of Equipment Required for the Safe Shutdown of Nuclear Plants,” EQE Engineering [for EPRI on behalf of Seismic Qualification Utility Group] NP-7149-D, March 1991
H. Working and H. Hotelling, “Application of the Theory of Error to the Interpretation of Trends,” J. Am. Stat. Assn., Suppl. (Proc.), vol. 24, pp. 73-85, 1929.
Larry George says
Thanks to Fred for publishing the article about subjective fragility function estimation. It is a legitimate statistical exercise when there is no alternative (or when data costs too much, takes too much time, or is too scarce, and when the consequences of using expert opinions are not too risky).
However, see https://www.linkedin.com/groups/1857182/ for “Predict MTBF without life data using subjective percentiles.” I posted it two weeks ago in the “No MTBF” LinkedIn group as a joke.