
by Larry George

Proportional Hazard Reliability Deterioration? 


“Broom” charts are reliability function estimates from different or successive production cohorts. Their differences may contain actionable information. How can we quantify and use that information? This article provides an alternative to the traditional Duane, AMSAA, and Crow reliability growth models, based on Cox’ proportional hazards model for test or field reliability data. This article provides:

  • Broom charts that show reliability growth or deterioration
  • Reliability growth references, including credit to my UC Berkeley professors
  • Proportional hazards (PH) model(s) of reliability growth with vs. without lifetime data
  • Suggestions for what to do about reliability deterioration

Suppose successive test samples or production cohorts have reliability growth or deterioration, caused by TAAF (Test, Analyze And Fix), configuration, environment, stress, or vendor changes, or ??? Suppose product cohorts have “proportional hazard” functions. 

Biostatisticians estimate “hazard” functions, also known as failure rate functions in reliability lingo. Proportional hazard functions seem like a reasonable assumption, because generations of products have similar designs, parts, production processes, customers, environments, and lifetimes.

Reliability growth may not be MTBF growth, because most products’ lives are less than MTBF. We need ways to quantify reliability growth or deterioration, besides MTBF growth.

This article shows a PH model from Fred’s monthly production cohorts (ships) and their lifetime data in the Kaplan-Meier Nevada table 1. It also compares the PH model of Fred’s monthly failure counts without lifetime data; i.e., cohort lifetimes vs. monthly return sums, using just monthly ships and failure counts, available (with some work) from the revenue and cost data required by GAAP.

Reliability Growth Illustrated?

Each line in figure 1 is a nonparametric reliability estimate from each month’s production (cohort). The figure’s legend is cohort production month (7 months of data). Later months’ lines are shorter because there is less data from more recent months. The shorter lines from more recent months’ cohorts appear more reliable, because later cohorts’ lines lie higher than earlier cohorts’ lines.

Figure 1. Reliability “broom” chart shows reliability function estimates from each month’s cohort, Dec. 2009 to May 2010, improving. 
Figure 2. Catalyst Semiconductor EEPROMs’ reliability broom chart from quarterly cohorts

Figure 2 shows a broom chart of EEPROM reliability deterioration in January 1999. The EEPROM vendor shrunk the die size. Some January EEPROMs lost their little memories.

An alternative to broom charts for detecting problems is proposed by Wu and Meeker, “Early Detection of Reliability Problems Using Information from Warranty Databases”. They “…recommend a nonparametric approach based on warranty report counts modeled with a Poisson distribution with report intensities that depend on production period and number of periods in service. This is equivalent to fitting a piece-wise exponential distribution to the available data and does not require specification or use of a particular distributional form for the time-to-(failure)-report distribution.” This seems equivalent to the assumptions behind USAF Logistics Command actuarial methods and Poisson demand for engines circa 1974 [George 1978]. This does not quantify reliability growth in an actionable way. 

Reliability Growth References

J. T. Duane observed that test time T divided by the number of failures N(T), T/N(T), plotted on log-log paper averaged linear. That meant cumulative MTBF = βT^α, where T is the total test time, for parameters α and β. ReliaSoft says Duane published that in a 1962 paper [Duane 1964]. I found early reliability growth references by my UC Berkeley professors William Jewell and Richard Barlow [Jewell 1963, Barlow and Scheuer 1966].
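Duane’s log-log observation amounts to a linear regression of log cumulative MTBF on log test time. A minimal sketch, using synthetic data invented here for illustration (exact values on the Duane line, so the fit recovers the parameters; real N(T) would be integer counts):

```python
import math

def fit_duane(times, n_failures):
    """Least-squares fit of cumulative MTBF = b * T^a on log-log scale.
    times[i]: cumulative test time T at each observation;
    n_failures[i]: cumulative failure count N(T).  Returns (a, b)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(t / n) for t, n in zip(times, n_failures)]  # log cum. MTBF
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = math.exp(ybar - a * xbar)
    return a, b

# Synthetic observations lying exactly on the line with a = 0.4, b = 2.0
times = [100.0, 300.0, 1000.0, 3000.0]
counts = [t / (2.0 * t ** 0.4) for t in times]
a, b = fit_duane(times, counts)
```

A growth slope a between 0 and 1 is the usual Duane signature of MTBF growth during TAAF testing.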

It is convenient to have an MTBF growth model when testing to achieve an MTBF specification. That way one could predict the date on which reliability testing will show the MTBF specification has been met with specified confidence. Larry Crow extended the MTBF model to produce confidence limits on MTBFs. “The model analyzes the reliability growth progress within each test phase…” http://reliawiki.org/index.php/Crow-AMSAA_(NHPP). That inspired software, standards, and more reliability growth articles. The reference by MacDiarmid and Morris contains a lot of reliability growth history and standards.

Duane-Crow-AMSAA reliability growth is really MTBF growth. It is for test-to-failure sample scenarios. It is irrelevant if reliability is good, because few field failures occur at ages much less than MTBF. MTBF growth is irrelevant for infant mortality. MTBF growth in HALT (Highly Accelerated Life Testing) may not translate into reality, because highly accelerated testing may change the shape of failure rate functions.

Want to verify reliability (not just MTBF) growth resulting from test data, TAAF, or product or component improvements over time, tests, or renewal counts? See the NIST/SEMATECH e-handbook “How can you model reliability growth?” for some alternatives to MTBF growth, https://www.itl.nist.gov/div898/handbook/apr/section1/apr19.htm.

Now there’s an R program, “ReliaGrowR”, for Reliability Growth Analysis (RGA). The project implements core reliability growth models, including Duane, Crow-AMSAA, piecewise NHPP (Non-Homogeneous Poisson Process), and piecewise NHPP with changepoint detection. ReliaGrowR is available on the Comprehensive R Archive Network (CRAN) and was verified with example analyses, unit tests, and cross-platform checks to ensure reliability and stability [Govan].

In a January 2026 memo, the USAF forbade the use of actuarial failure rate functions (equivalent to nonparametric reliability functions) and actuarial forecasts. The memo requires the use of average “Removals”/Time (Failures/Time), https://accendoreliability.com/progress-in-usaf-engine-logistics/. Imagine reliability growth by monitoring average Failures/Time in successive calendar intervals. Good luck with that.

Reliability Estimates: Kaplan-Meier by Cohort

Table 1 shows Fred’s monthly sales or ships and corresponding monthly failures by cohort. The bottom row failure sums contain reliability information but do not indicate which cohorts they came from. Assume Fred’s monthly failures were dead-forever, not renewals or relevations. (“Relevations” are renewals or restorations to some good-as-old state [Krakowski].)

Table 1. Lifetime Data, in the form of Nevada Table, from “Nevada Charts to Gather Data”, by Fred Schenkelberg, https://accendoreliability.com/nevada-charts-gather-data/.

Month  Ships   Jan  Feb  Mar  Apr  May  Jun
Jan     3519     3    6    3    7   10    3
Feb     6292          4    8   20   35   24
Mar     7132               8   14   25   31
Apr     5633                    4   13    6
May     4222                         5    8
Jun     4476                              6
Sums   31274     3   10   19   45   88   78

Reliability estimates in table 2 and figure 3, by cohort month of production, show deterioration and perhaps improvement in May and June! How to quantify cohort changes in reliability function estimates?

Table 2. Reliability estimates, by cohort, horizontally in each row, for ages 1-6 months. 

Month  Ships   Age 1   Age 2   Age 3   Age 4   Age 5   Age 6
Jan     3519  0.9991  0.9974  0.9966  0.9946  0.9917  0.9909
Feb     6292  0.9994  0.9981  0.9949  0.9893  0.9855
Mar     7132  0.9989  0.9969  0.9934  0.9890
Apr     5633  0.9975  0.9931  0.9875
May     4222  0.9969  0.9955
Jun     4476  0.9982
Figure 3. Broom chart shows successive cohorts’ reliability estimates from table 2 
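The table 2 estimates can be reproduced from Nevada table 1 with a Kaplan-Meier product-limit calculation per cohort. A minimal sketch, assuming dead-forever failures and censoring only at the data cutoff (in which case the product-limit estimate telescopes to survivors/ships):

```python
def km_reliability(ships, failures):
    """Kaplan-Meier reliability by age for one production cohort.
    ships: cohort size; failures: failure counts at ages 1, 2, ... months.
    With dead-forever failures and censoring only at the data cutoff,
    the risk set shrinks only by prior failures."""
    at_risk, r, out = ships, 1.0, []
    for d in failures:
        r *= 1.0 - d / at_risk   # conditional survival at this age
        at_risk -= d
        out.append(r)
    return out

# January cohort from Nevada table 1: 3519 ships, failures by age 1-6
r_jan = km_reliability(3519, [3, 6, 3, 7, 10, 3])
# [round(x, 4) for x in r_jan] -> [0.9991, 0.9974, 0.9966, 0.9946, 0.9918, 0.9909]
```

This matches the January row of table 2 to rounding (table 2 shows 0.9917 at age 5, a truncation of 0.99176).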

Proportional Hazards Model of Cohort Reliability Growth

Cox’ proportional hazards (PH) models are popular in social sciences and medical science to assess associations between external variables (factors) and reliability, https://en.wikipedia.org/wiki/Proportional_hazards_model/. A PH model of failure rate function is a(t;z,β) = ao(t)Exp[z*β], where ao(t) is an underlying failure rate function, z could be 1, 2, 3,… cohorts, changes, improvements, fixes, etc., and β is a regression coefficient. Failure rate functions could remain proportional, because generations of products use similar designs, parts, production processes, customers, and environments.

Professor William Jewell proposed a failure rate function a(t; z) = β1*ao(t) + β2*a(t|z), where t is time between failures and z is the time history of failures X(1), X(1)+X(2), … That is an example of a PH model, because ln[ao(t)Exp[z*β]] = ln[ao(t)] + z*β, if ln[ao(t)] is linear.

Table 3. Check proportionality! The table shows cohort failure rates and the ratios of successive monthly cohorts’ failure rates (Jan/Feb, Feb/Mar, etc.), a(t; z)/a(t; z+1), z = 1, 2, …, 5.

       Age 1       Age 2       Age 3       Age 4       Age 5       Age 6
Jan  0.00085251  0.00031786  0.00217672  0.0024542   0.0017516   0.0013404
Feb  0.00047293  0.00085468  0.00091614  0.0015233   0.0010013
Mar  0.00056439  0.00155595  0.00202924  0.0006999
Apr  0.00152489  0.00280722  0.00153532
May  0.00275587  0.00177040
Jun  0.00162285

Ratios   Jan/Feb   Feb/Mar   Mar/Apr   Apr/May   May/Jun
Age = 1  1.34101   1.34159   0.26786   0.3564    0.73316
Age = 2  0.56675   0.64709   0.90466   1.27589
Age = 3  0.45133   0.44109   0.63521
Age = 4  0.80717   3.11925
Age = 5  1.72276

Ratios of actuarial failure rates vary! The average ratio of 0.974 is pretty close to 1.0, but the standard deviation is 0.729, so the coefficient of variation is about 75%. Proceed with the PH model fit anyway, for later comparison with the fit without lifetime data, using just the ships and bottom-row sums from Nevada table 1!
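The summary statistics quoted for the ratios can be checked directly; the values below are copied from the table 3 ratios:

```python
import math

# The 15 ratios of successive cohorts' failure rates, from table 3
ratios = [1.34101, 1.34159, 0.26786, 0.3564, 0.73316,
          0.56675, 0.64709, 0.90466, 1.27589,
          0.45133, 0.44109, 0.63521,
          0.80717, 3.11925,
          1.72276]

n = len(ratios)
mean = sum(ratios) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))  # sample s.d.
cv = sd / mean
# round(mean, 3) -> 0.974, round(sd, 3) -> 0.729, cv about 0.75
```

The single large ratio (3.11925, Apr/May at age 4, where failure counts are small) drives most of the dispersion.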

Use least squares to fit the cohort actuarial failure rates, derived from the table 2 reliability functions, with the PH model a(t; z,β) = ao(t)Exp[z*β], by changing β. Minimize the sum of squared errors (SSE) between the estimated cohort actuarial rates and the PH model. The minimizing β = 0.11177 indicates failure rate increases with cohort, confirming the reliability decrease in figure 3!

Table 4. Least squares to fit the PH model to the cohort actuarial rates from table 3: a(t; z,β)=ao(t)Exp(z*β) where ao(t) is the actuarial failure rate from the Kaplan-Meier reliability estimates from table 2 and z=cohorts 1,2,…,6. 

Cohort z:    1         2         3         4         5         6
Age 1     0.001037  0.00116   0.001297  0.00145   0.001622  0.001813
Age 2     0.002005  0.002242  0.002507  0.002803  0.003135
Age 3     0.002257  0.002524  0.002823  0.003157
Age 4     0.004861  0.005435  0.006078
Age 5     0.003920  0.004384
Age 6     0.000961

SSE = 3.5704E-05    β = 0.11177
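The least-squares fit can be done without a spreadsheet Solver. For each trial β, the baseline ao(t) that minimizes SSE at each age has a closed form, so only β needs a one-dimensional search. A sketch, using synthetic ragged cohort rates with a known β (the article’s actual rates come from table 3; the grid search is a crude stand-in for Solver):

```python
import math

def ph_sse(rates, beta):
    """SSE between observed cohort failure rates and the PH model
    a(t; z) = a0(t) * Exp[z * beta].  rates[z-1][t] is cohort z's rate at
    age t (ragged: later cohorts are observed at fewer ages).  At each age
    the least-squares baseline is a0 = sum(r*e)/sum(e*e), e = exp(z*beta)."""
    max_age = max(len(r) for r in rates)
    sse = 0.0
    for t in range(max_age):
        obs = [(math.exp(z * beta), r[t])
               for z, r in enumerate(rates, start=1) if t < len(r)]
        a0 = sum(r * e for e, r in obs) / sum(e * e for e, r in obs)
        sse += sum((r - a0 * e) ** 2 for e, r in obs)
    return sse

def fit_beta(rates, lo=-1.0, hi=1.0, steps=20000):
    """Grid search for the SSE-minimizing beta."""
    grid = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    return min(grid, key=lambda b: ph_sse(rates, b))

# Synthetic ragged rates from a0 = [0.001, 0.002, 0.003] with beta = 0.1
a0, beta_true = [0.001, 0.002, 0.003], 0.1
rates = [[a0[t] * math.exp(z * beta_true) for t in range(4 - z)]
         for z in range(1, 4)]
beta_hat = fit_beta(rates)
```

A positive fitted β means later cohorts have proportionally higher failure rates, i.e., reliability deterioration.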

Without Lifetime Data? 

Lifetime data is not required to quantify reliability growth, despite what people think! In a LinkedIn poll, 91% said lifetime data is required to estimate reliability or survival functions, and 9% said lifetime data is NOT required. Nobody said “Don’t Know” [George, May 2023].

Periodic ships and returns or failure counts (bottom row sums from Nevada table 1) are statistically sufficient to make nonparametric reliability estimates [George, May 2022]. The ships and returns data could come from revenue and service cost (e.g., spares sales), data required by GAAP. 
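With dead-forever failures, the expected return count in calendar period j is a convolution of cohort ship counts with the age-specific failure probabilities f(1), f(2), …: returns(j) = Σ ships(i)·f(j−i+1). The system is triangular, so f can be recovered by forward substitution. A crude sketch of why ships and returns are statistically sufficient (George’s actual estimators use maximum likelihood or constrained least squares; this naive version just clamps negatives to zero):

```python
def failure_pdf(ships, returns):
    """Recover age-specific failure probabilities f(1..k) from monthly
    cohort ship counts and calendar-month return sums, assuming
    dead-forever failures: returns[j] = sum_i ships[i] * f[j - i].
    Solves the triangular system by forward substitution; negatives
    (possible with noisy counts) are naively clamped to zero."""
    f = []
    for j, r in enumerate(returns):
        accounted = sum(ships[i] * f[j - i] for i in range(1, j + 1))
        f.append(max(0.0, (r - accounted) / ships[0]))
    return f

# Consistent synthetic check: f = [0.01, 0.02, 0.03] is exactly recovered
f = failure_pdf([100, 200, 300], [1.0, 4.0, 10.0])
```

Applied to the Nevada table’s ship counts and bottom-row sums, the naive age-6 estimate comes out negative (clamped to zero here), which is one reason constrained maximum-likelihood estimation is used in practice.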

To fit the PH model, compute reliability and failure rate function estimates from successive cohorts, excluding earlier cohorts and their failure sums as shown in table 5. Figure 4 plots the maximum likelihood reliability function estimates. These estimates should not be compared with figure 3 reliability estimates, because cohorts are defined differently. Nevertheless, figure 4 shows reliability deterioration similar to figure 3. 

Table 5. Ships and column failure sums, starting with cohort 1 in rows 2-7, cohort 2 in rows 8-12, etc., from Nevada table 1. The “Failures” column in rows 2-7 is the same as the table 1 bottom-row sums.

Month  Ships  Failures
Jan     3519      3
Feb     6292     10
Mar     7132     19
Apr     5633     45
May     4222     88
Jun     4476     78
Feb     6292      4
Mar     7132     16
Apr     5633     38
May     4222     78
Jun     4476     75
Mar     7132      8
Etc.    5633     18

Figure 4. Reliability function estimate computed from successively later cohorts. Longest line is from Jan-June and shortest line is from June. 

Table 6. Ratios of successive failure rate function estimates, without lifetime data.

Ratio:  1/2       2/3       3/4       4/5       5/6
Age 1   1.80262   0.37191   2.37597   1.61104   1.74926
Age 2   0.83795   0.54930   0.45147   2.17652
Age 3   0.37012   0.55427   1.32170
Age 4   0.55333   1.58564
Age 5   1.69817

Table 7. PH β computed using Solver to minimize SSE between the cohort failure rate function estimates (similar to table 3) and the proportional hazards model (values shown).

Cohort:    1         2         3         4         5
Age 1   0.001008  0.001190  0.001407  0.001663  0.001966
Age 2   0.000559  0.000661  0.000781  0.000923
Age 3   0.000667  0.000789  0.000932
Age 4   0.001802  0.002130
Age 5   0.003257

SSE = 8.564E-06    β = 0.167108539

Ratios of actuarial failure rates vary! The average ratio of 1.2006 is pretty close to 1.0, but β with lifetime data by cohort = 0.11177 vs. β without lifetime data = 0.16711. Note also that SSE = 8.564E-06 without lifetime data is smaller than SSE = 3.57E-05 with the lifetime data in the Nevada table.

There’s a price for not having lifetime data. β without lifetime data = 0.16711 may exaggerate increases in failure rate function estimates. Maybe that’s good for early warning. Maybe it’s biased by earlier, larger cohorts. At least it quantifies the trend in failure rate function estimates.

If you don’t have lifetime data, ships and returns counts are statistically sufficient and available from revenue and service cost data required by GAAP (and some work)! Is the difference in β with vs. without lifetime data worth the cost of lifetime data? Lifetime data requires tracking products and service parts by name, serial number, and ages at first use to first failure and survivors’ ages. Lifetime data presumes dead-forever, and may not include renewal or recurrent processes.

Try Evanco’s PH model for software reliability, for comparison without lifetime data [Evanco]. For other attempts, see the articles by Folorunso et al. and Strunz et al. PH models for factors have FDA approval!

What Could We Do About Reliability Deterioration?

If reliability is deteriorating: Why? What did it? When? How to fix it? Cost? Broom charts may identify outlying cohorts as in figure 2. Use “Statistical Reliability Control” [George, Aug. 2023]. 

You could relate β to bang per buck, dβ/d$$$, for alternative improvements and their $$$ costs per unit change in β, and use budget-constrained optimization to maximize reliability as a function of β. β is a simple scalar: one measure for all cohorts, although later cohorts are smaller and have shorter lifetimes and fewer failures. You could make the PH parameters z and β into vectors, a(t; z, β) = ao(t)Exp[Σβ(j)*z(j)], j = 1, 2, …, to incorporate more z-factors besides cohort in the PH model.

Kullback-Leibler divergence (bits), as in “Statistical Reliability Control”, might be a better indicator of reliability deterioration than a proportional hazards model, https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence. Kullback-Leibler divergence measures the difference between an old probability density function p(t) and a new one q(t): Σp(t)log(p(t)/q(t)), over some range t = 1, 2, …, k. It is intended for pairwise comparisons of probability density functions over the same range(s), usually with Σp(t) = Σq(t) = 1.0. In the reliability growth context, you may only have the new q(t) for small t = k, maybe t = cohorts k-1, k.
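In discrete form, with p(t) an old cohort’s failure density by age and q(t) a new cohort’s, the divergence in bits is a one-liner; a minimal sketch with invented densities:

```python
import math

def kl_bits(p, q):
    """Kullback-Leibler divergence sum p(t)*log2(p(t)/q(t)), in bits.
    p and q are discrete densities over the same range, each summing to 1;
    q(t) must be positive wherever p(t) is."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical densities diverge by 0 bits; a shifted density does not:
d0 = kl_bits([0.5, 0.5], [0.5, 0.5])    # 0.0
d1 = kl_bits([0.5, 0.5], [0.25, 0.75])  # about 0.2075 bits
```

Larger divergence between successive cohorts’ failure densities flags a bigger change, without assuming proportional hazards.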

What Can You Do to Improve Reliability?

Ask yourself and your customers whether the product’s reliability is good enough. That requires knowing the product’s reliability: https://accendoreliability.com/reliability-from-current-status-data/. I have had some success by offering to share a vendor part’s reliability estimate in our products, if the vendor would share all their customers’ ships and returns (failure) counts. I provided estimates of the vendor part’s reliability in our products vs. the vendor part’s population reliability.

Designers claim that their designs determine reliability. A Sun Computer design engineer asked, “Why do we need you? We design reliability into our products.” I gave him nonparametric estimates of field failure rate functions for several products (1996), showing infant mortality and wearout beginning within 12 months. Process, shipping, installation, training, and usage all conspire to deteriorate design reliability.

What if Renewal Process?

Duane-Crow-AMSAA reliability growth is NOT for renewal, recurrent, repair, or replacement process data. Reliability growth implies renewals are better than old. What if recurrent processes have independent but not identical distributions for successive lifetimes? Assume Cox’ proportional hazards model of failure rate functions for successive lifetimes? It’s been done before. 

MCF is the Mean Cumulative (failures) Function, E[N(t)], the expected number of failures in calendar or operating time t. MCF is for repairable systems or recurrent processes [Nelson, Ascher and Feingold, Trindade]. Reliability growth or deterioration shows as curvature of the MCF as time progresses. But what if you don’t have lifetime data?
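For a fleet all observed over a common window, the sample MCF at time t is just the average cumulative failure count per system. A sketch with invented failure times (real MCF estimators also handle staggered entry and censoring [Nelson, Trindade]):

```python
def mcf(fleet_failure_times, t):
    """Sample mean cumulative function at time t: average cumulative
    failure count per system, for a fleet of repairable systems all
    observed over the same window."""
    n = len(fleet_failure_times)
    return sum(sum(1 for ft in times if ft <= t)
               for times in fleet_failure_times) / n

# Two repairable systems: failures at months 1 and 3, and at month 2
fleet = [[1, 3], [2]]
# mcf(fleet, 2) -> 1.0 ; mcf(fleet, 3) -> 1.5
```

Plotting mcf against t gives the staircase whose curvature (convex up = deterioration, concave = growth) is the MCF diagnostic.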

What if successive failures are renewal (independent, good-as-new, lifetimes) or relevation processes? Relevation means repair after failure brings product back to life good-as-old or somewhere between good-as-old and good-as-new or perhaps even better. Refer to “Hysterecal Renewal Processes” for some examples [George, Aug. 2023].

Stay tuned to www.accendoreliability.com for how to estimate cohort proportional hazards reliability growth for renewal or recurrent processes, with or without lifetime data!

Reliability Growth References

Barlow, Richard E. and Ernest M. Scheuer, “Reliability Growth During a Development Testing Program,” Technometrics, Vol. 8, pp. 53-60, 1966

Larry H. Crow, “Confidence Interval Procedures for Reliability Growth Analysis”, Technical Report No. 197, June 1977

J. T. Duane, “Learning Curve Approach to Reliability Monitoring”, IEEE Transactions on Aerospace, vol. 2, pp. 563-566, 1964

Robert Easterling, “The Assessment of System and Component Reliabilities Based on Both System and Component Test Results”, Note 3 of Sandia Lab Probability and Statistics Notes, Dec. 1970 

Easterling, R. G., and R. R. Prairie, “Combining Component and System Information”, Technometrics, vol. 13, no. 2, pp. 271-280, JSTOR, https://doi.org/10.2307/1266789, 1971

Robert G. Easterling, Mainak Mazumdar, Floyd Spencer, and Kathleen Diegert, “System-Based Component Test Plans and Operating Characteristics: Binomial Data”, Technometrics, vol. 33, no. 3, 1991

P. B. Govan, “ReliaGrowR: Modeling and Plotting Functions for Reliability Growth Analysis,” Reliability and Maintainability Symposium (RAMS), Miramar Beach, FL, USA, pp. 1-6, doi: 10.1109/RAMS50514.2026.11424445, 2026

Joe Alex Granado and Tongdan Jin, “Spare Provisioning for System Maintenance under Reliability Growth-A Case Study”, INFORMS meeting, Austin, TX, Nov. 2010

William S. Jewell, “A General Framework for Learning Curve Reliability Growth Models,” UC Berkeley Operations Research Center, AFOSR-81-8122, April 1963  

William S. Jewell, “Reliability Growth as an Artifact of Renewal Testing”, University of California, ORC 78-9, Operations Research Center, Berkeley, June 1978 or October 1978

Krakowski, M., “The Relevation Transform and a Generalization of the Gamma Distribution Function”, Revue Francaise d’Automatique, Informatique et Recherche Operationnelle, vol. 7, pp. 107-120, doi:10.1051/ro/197307V201071, 1973

Preston R. MacDiarmid and Seymour F. Morris, “Reliability Growth Testing Effectiveness”, RADC-TR-84-20, AD-A141 232, Jan. 1984

MIL-HDBK-189c, “Reliability Growth Management”, June 2011

MIL-STD-1635, “Reliability Growth Testing”, Feb. 1978

ReliaSoft, “Reliability Growth & Repairable System Data Analysis Reference”, https://help.reliasoft.com/reference/reliability_growth_and_repairable_system_analysis/pdfs/rga_ref.pdf, 1992-2005

Huaiqing Wu and William Q. Meeker, “Early Detection of Reliability Problems Using Information From Warranty Databases”, March 2001

MCF References

Stephen A. Smith and Shmuel Oren, “Reliability Growth of Repairable Systems”, Naval Research Logistics Quarterly, Vol. 27, Issue 4, pp. 539-547, Dec. 1980

Jan Block, Alireza Ahmadi, Tommy Tyrberg, and Uday Kumar, “Fleet-Level Reliability Analysis of Repairable Units: A Non-Parametric Approach using the Mean Cumulative Function”, International Journal of Performability Engineering, Vol. 9, No. 3, pp. 333-344, May 2013

David Trindade and S. Nathan, “Field Data Analysis for Repairable Systems: Status and Industry Trends”, in Misra, K.B. (ed.), Handbook of Performability Engineering, Springer, London, https://doi.org/10.1007/978-1-84800-131-2_26, 2008

Harold Ascher and Harry Feingold, Repairable Systems Reliability: Modeling, Inference, Misconceptions and Their Causes, Marcel Dekker, 1984

References to PH and Alternatives?

David R. Cox, “Regression Models and Life-Tables”. Journal of the Royal Statistical Society, Series B. vol. 34 (2), pp. 187–220, doi:10.1111/j.2517-6161.1972.tb00899.x, JSTOR 2985181, MR 0341758, 1972

W. M. Evanco, “Using a Proportional Hazards Model to Analyze Software Reliability,” STEP ’99. Proceedings Ninth International Workshop Software Technology and Engineering Practice, Pittsburgh, PA, USA, pp. 134-141, doi: 10.1109/STEP.1999.798487, 1999

Serifat Folorunso, Richard Oluwaseun Kehinde, Ibrahim Arionola Fayemi, and Sukurat Salam, “Deep Learning-Based Survival Analysis and Recurrence Prediction in Breast Cancer Patients Using Clinical and Genomic Data,” SCOPUA Journal of Applied Statistical Research, Vol.2, Issue 1, https://doi.org/10.64060/JASR.v2i1.4, March 2026 

Richard Strunz and Jeffrey W. Herrmann, “Planning, Tracking, and Projecting Reliability Growth: A Bayesian Approach”, Proc. Ann. Reliability & Maintainability Symposium, 2012

References by L. L. George

“Revaluation of the Air Force Actuarial Method for Forecasting Engine Requirements,” Proceedings of the Annual Reliability and Maintainability Symposium, pp. 7-10, January, 1978 

“Credible Reliability Test Planning”, https://accendoreliability.com/credible-reliability-test-planning/, March 2022

“How Can You Estimate Reliability Without Life Data?”, (Myron Tribus) https://accendoreliability.com/can-estimate-reliability-without-life-data/, May 2022

“Is Lifetime Data Required”, https://accendoreliability.com/poll-is-life-data-required/, May 2023

“Statistical Reliability Control”, https://accendoreliability.com/statistical-reliability-control/, August 2023

“Proportional Hazards Reliability of Hysterecal Recurrent Processes”, https://accendoreliability.com/proportional-hazards-reliability-of-hysterecal-recurrent-processes/, August 2023

Filed Under: Articles, on Tools & Techniques, Progress in Field Reliability?

About Larry George

UCLA engineer and MBA, UC Berkeley Ph.D. in Industrial Engineering and Operations Research with minor in statistics. I taught for 11+ years, worked for Lawrence Livermore Lab for 11 years, and have worked in the real world solving problems ever since for anyone who asks. Employed by or contracted to Apple Computer, Applied Materials, Abbott Diagnostics, EPRI, Triad Systems (now http://www.epicor.com), and many others. Now working on actuarial forecasting, survival analysis, transient Markov, epidemiology, and their applications: epidemics, randomized clinical trials, availability, risk-based inspection, Statistical Reliability Control, and DoE for risk equity.
