
The McNamara Fallacy

… and why we still make rubbish products that break a lot

by Christopher Jackson

Modern militaries don’t win many wars these days. The most dominant, well-funded, highly trained armies have consistently lost (or at least not won) the Korean War, the Vietnam War, the Afghanistan War, arguably the Iraq War, and plenty of others. And many dominant, well-funded, highly trained companies consistently spit out unreliable or unimaginative products that smaller and less dogmatic companies have no problem bettering.

To understand why, let’s look at a man called Robert McNamara.

He earned a degree in economics and an MBA from Harvard Business School, where he ultimately taught accounting as its youngest professor. He was really good with statistics and worked in the United States Army Air Forces (USAAF) Office of Statistical Control, focusing on the data and details of supporting operations throughout World War II. He was one of several colleagues from this office who were then hired by the Ford Motor Company after the war, where they turned around a chaotic management system.

We owe our ‘addiction’ to spreadsheets to Robert McNamara, whose management style was labelled ‘scientific’ due to its focus on statistics and numbers. And it worked. But this is not where the story ends.

Let’s move on to Vietnam

In late 1960, President-elect John F. Kennedy interviewed McNamara, who was president of the Ford Motor Company at the time, to offer him the role of either Secretary of Defense or Secretary of the Treasury. McNamara chose Defense. The Vietnam War was already well underway in practice, if not in name. The US had been supporting French and then South Vietnamese efforts to combat ‘communist threats’ in the region, but really wasn’t doing a lot. In August 1964, McNamara was in charge when the US fabricated an attack by North Vietnamese (communist) forces on one of its naval vessels in the Gulf of Tonkin off the Vietnamese coast, which McNamara and politicians alike used to argue for the full-scale US-led ground offensive that began the following year.

And one of the most useless, costly, deadly wars started, with its legacy of deformed children (from defoliation chemicals), maimed farmers (from land mines), and plenty of other issues still wreaking havoc today.

We are still talking about how bad it is today

The Vietnam War became known as ‘McNamara’s War’ due to the control he exerted over it, a title he originally relished. But several disparaging terms that we still use today emerged from this passage in geopolitical history. The term “credibility gap” was coined to describe the general population’s growing skepticism of politicians and government officials because of the lies they were being told (like those about what happened in the Gulf of Tonkin). The term “micromanagement” emerged in the 1970s to describe several different leadership styles, including McNamara’s.

But then it gets personal. “McNamara’s Folly” was the name given to a program that recruited more than 300,000 soldiers who would otherwise have failed the Army’s mental or medical standards (many of whom could barely speak English). They died at three times the rate of other combat soldiers.

And finally, there was the “McNamara Fallacy,” which is perhaps the most useful in terms of what we can learn about the way we do things today.

Cars aren’t rifles. Or soldiers.

The “McNamara fallacy,” also known as the “quantitative fallacy,” describes the tendency to make decisions based solely on observations that are easily quantifiable, while ignoring or dismissing everything that is not. McNamara frustrated many senior officers with his focus on measuring largely logistic, administrative, or numerical quantities. This can work really well in factories that manufacture cars, but not in the complex, human-centric terrain of modern warfare. So more important things, such as morale, enemy motivations, and civilian sentiment, were not considered during military planning. At all.

McNamara was well known for permanently sidelining officers who asked him to stop focusing on statistics and start looking at the more qualitative aspects of warfare. On the other hand, officers who shared his zealot-like focus on statistics were promoted and rewarded. Enter General William Westmoreland, who became commander of the US Military Assistance Command, Vietnam (MACV).

Westmoreland’s strategy was (statistically) simple: drive up the body count. Specifically, kill (lots) more of them than they kill of us. This was perfect for McNamara, who didn’t comprehend warfare at all, but who could stand behind numbers when arguing that things were going well (and when arguing for more funds).

But … this was disastrous. Military units were judged on how many enemy combatants they killed. This took the focus away from real military objectives (like targeting resupply routes, winning over the hearts and minds of the local population, and so on). Preventing an enemy offensive by outmaneuvering them or denying them access to key terrain was not worth it (no body count). Allowing them to start an offensive that risked soldiers on both sides, but would inevitably add to the unit’s body count, was worth it.

Military units were given body count quotas, which are hard to meet when your enemy is skilled at moving and evading. So civilians were killed and counted as enemy forces, or the numbers were simply inflated to placate the respective headquarters. Not only did this achieve nothing militarily, it also doomed the Vietnam War to failure, as the indiscriminate killing hardened the local Vietnamese against the US-led forces. Communist forces could easily recruit more soldiers, and they used those soldiers to achieve meaningful military objectives.

People who don’t understand something NEED statistics. People who do understand something USE statistics.

There is no such thing as an “omni-genius.” Nevertheless, there is an entire class of people who have mastered the “omni-genius aesthetic.” There are governmental secretaries who move on to become chancellors of universities. There are generals who walk into the boardrooms of aeronautics companies upon military retirement. Magazine editors who are appointed as CEOs of theme park companies. None of them have any experience in “building” the organization they now run from the ground up. All of them become ticking time bombs, presiding over unmitigated disasters that they simply have no capability to prevent.

When people who are used to being treated as luminaries find themselves in charge of things they don’t understand, they tend to adopt the “McNamara Fallacy.” When you don’t understand something, having someone put a number in a table to quantify some sort of performance can be intoxicating. If there are warning signs of impending strategic problems for the company, then as long as they don’t affect the statistics in the table (like quarterly profit), we wait until catastrophe forces devastating change.

But it gets worse.

People who don’t understand, and don’t want to understand, work hard to keep it that way. This means carefully selecting statistics that are the easiest to understand, perhaps the easiest to achieve, and that do little more than provide continual reassurance that nothing is ever wrong (even when it is).

Making high-quality and reliable products is as bad as it gets

The second easiest thing to (attempt to) quantify in the world of reliability and quality is the “mean time between failures (MTBF).” The easiest thing to quantify is whether something passed a test (or not).

Let’s say that you need to manufacture components (like the steel balls used in ball bearings) with a high degree of accuracy. A steel ball outside allowable tolerances is called “defective.” Perhaps we can tolerate 100 defective steel balls out of every million. “Fake omni-geniuses” don’t like this. They want to demand that there are no defective steel balls. But it is not possible to eliminate all defects, just as we can never eliminate car accidents, stillbirths, and so on.

But this is where it gets trickier. If you still want a statistic to say that the steel balls are OK, you would need to measure (perhaps) 1 million steel balls to confirm that no more than 100 are defective. You can’t measure many fewer than that: the same limit applied to 1,000 steel balls would require fewer than 0.1 of them to be defective, and you can’t measure 0.1 of a steel ball. Testing 1 million steel balls is usually too expensive or time-consuming. So we inevitably test fewer.

So if a “fake omni-genius” who can’t demand zero defective steel balls still wants a statistic they like, we might only be able to measure 100 steel balls out of every batch we manufacture. We might then present this “qualification test” to give our “fake omni-genius” some assurance that everything is fine. But in reality, low-quality steel balls, with up to 10,000 defective out of every million, have a reasonable chance of passing a test that looks for zero defective steel balls out of 100 (the sketch below puts a number on it). So measuring 100 steel balls is very unlikely to uncover problems across all the steel balls we manufacture. This is great if we want to have meaningless meetings where we avoid mentioning any problems.
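To put a rough number on that claim, here is a minimal sketch, assuming simple binomial sampling and using defect rates chosen purely for illustration, of how often a batch passes a “zero defective out of 100” check:

```python
# Probability that a batch passes a "zero defects in a sample of n" check,
# assuming each ball is independently defective with the same probability.

def pass_probability(defects_per_million: float, sample_size: int) -> float:
    """Chance of finding zero defective balls in a random sample from the batch."""
    p_defective = defects_per_million / 1_000_000
    return (1 - p_defective) ** sample_size

for dpm in (100, 1_000, 10_000):
    print(f"{dpm:>6} defects per million, n=100: "
          f"pass probability = {pass_probability(dpm, 100):.2f}")

# Approximate output:
#    100 defects per million, n=100: pass probability = 0.99
#   1000 defects per million, n=100: pass probability = 0.90
#  10000 defects per million, n=100: pass probability = 0.37
```

A batch a hundred times worse than the 100-per-million target still passes roughly a third of the time, so the check reassures rather than informs.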

Then there is the MTBF. Components that suffer early “infant mortality” failures can have the same MTBF as components that wear out and fail through the accumulation of damage. This means that two products can have the same warranty period and the same MTBF, yet 50% of one product will fail within the warranty period, compared to less than 1% of the other.
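That difference is easy to demonstrate. The sketch below compares an infant-mortality-dominated component with a wear-out-dominated one that share the same MTBF; the Weibull shape parameters, the 50,000-hour MTBF, and the one-year warranty are illustrative assumptions of my own, not figures from any particular product:

```python
# Two components with the same MTBF but very different warranty performance,
# modelled with Weibull distributions (illustrative parameters only).
from math import gamma, exp

MTBF = 50_000      # hours, identical for both components
WARRANTY = 8_760   # hours, roughly one year of continuous operation

def fraction_failed(beta: float, mtbf: float, t: float) -> float:
    """Weibull CDF at time t, with the scale parameter chosen so the mean equals mtbf."""
    eta = mtbf / gamma(1 + 1 / beta)   # scale parameter implied by the mean
    return 1 - exp(-((t / eta) ** beta))

infant_mortality = fraction_failed(beta=0.5, mtbf=MTBF, t=WARRANTY)  # early failures dominate
wear_out = fraction_failed(beta=3.0, mtbf=MTBF, t=WARRANTY)          # damage accumulates slowly

print(f"Failed within warranty (infant mortality): {infant_mortality:.1%}")  # roughly 45%
print(f"Failed within warranty (wear-out):         {wear_out:.1%}")          # roughly 0.4%
```

Both components would show exactly the same MTBF on a data sheet, yet one generates roughly a hundred times the warranty returns of the other.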

But the MTBF is relatively easy to measure, or at least to approximate. And the ‘best’ part about it is that there are lots of different statistical approaches that spit out an MTBF number. Changing the assumptions behind any of them allows a different (better) MTBF number to be created. And virtually every reliability ‘data’ sheet provided by a supplier will express the product’s reliability in terms of the MTBF.

Don’t believe me? There are still LED lightbulb manufacturers who advertise a 20+ year MTBF even though we know that is not possible. But that’s nothing compared to small electrical component manufacturers that routinely advertise MTBFs that exceed 1000 years. [1]
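The arithmetic behind that claim is worth spelling out. A quick sketch, using the 100 FIT figure quoted in footnote [1], converts a quoted FIT rate into the implied MTBF:

```python
# Converting a quoted FIT rate into the implied MTBF, as in footnote [1].
FIT = 100                          # failures per billion device-hours
failure_rate = FIT / 1e9           # failures per hour
mtbf_hours = 1 / failure_rate      # 10,000,000 hours
mtbf_years = mtbf_hours / 8_760    # 8,760 hours in a year
print(f"{FIT} FIT implies an MTBF of {mtbf_hours:,.0f} hours, or about {int(mtbf_years):,} years")
# 100 FIT implies an MTBF of 10,000,000 hours, or about 1,141 years
```

No component is ever observed for anything like a thousand years, of course; the figure is an extrapolation from a statistical model, which is exactly the kind of number this article is warning about.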

It’s not WHAT you think, it’s HOW you think

Sometimes, simple is harder than complex. We can’t all be good at everything. It doesn’t matter how simple or complex something is; if it is outside your skill set, you can’t do it. It doesn’t matter what public office you have held, how many military medals you have, or if you’re an amazing magazine editor. You can’t just slide into an organization that you don’t understand and dominate its leadership in a healthy way that achieves good outcomes.

That’s exactly what happened with Robert McNamara. Many people attest that he was a fundamentally good and decent person. But his success came in a very specific field where he found the right things to measure, and the right ways to manage them. He did not know much about warfare, which is a problem if you are the US Secretary of Defense. He actively walked away from people who were trying to explain some of the intricacies of success on the battlefield. His opponents understood those intricacies, which is why they won. And millions of soldiers and their families paid the price.

We are awash with false statistics when it comes to making high-quality, highly reliable products. It is easy to choose the ones we like, ignore the ones we don’t, and manipulate each and every one of them to console us that everything is fine.

The best statistics are those we set specifically to see if we might have a problem in the future. Statistics can provide ‘early warning,’ help you ‘admire’ a problem, or be manipulated to convince you that everything is fine. 

When it comes to steel balls, you want someone who knows metallurgy inside and out. Someone who knows how to get the alloy composition and the temperature right, maintain the slip plates on the production lines, and do lots of other things. He or she is far less interested in measuring steel balls to find the defective ones, and far more interested in measuring the key factors that create the ‘perfect’ steel ball. But the only thing a “fake omni-genius” can ever comprehend is a simple, imperfect test at the end of everything that nominally tells us if everything is OK (or not).

When it comes to design and manufacture, you can never rest from trying to refine the process. Manufacturing equipment changes as it ages and sometimes deteriorates, just like your military adversary does on the battlefield. You should never ‘test to pass,’ but instead test to learn. 

And the right person for the job might not look particularly sharp in a suit. Or be the youngest professor ever at Harvard. But they might be one of the younger employees on staff. Seniority is linked to age and experience. Critical thinking is not.

But there are lessons here for all organizations: focus on the statistics that matter. Statistics that tell us how things will be in the future, not those that assign a grade to what we have done in the past. You should never avoid or eliminate statistics altogether; the right ones contain information you can’t get from anywhere else.

We should finish by going back to Robert McNamara. He was asked to leave the office of Secretary of Defense in 1968, a little over three years after the 1964 Gulf of Tonkin incident that triggered what we now know as the Vietnam War. McNamara ended up being Secretary of Defense for only three of the roughly ten years that “McNamara’s War” raged. He left office somewhat broken, realizing that he simply did not know what to do to win the war, or at least make it stop.

McNamara was not stupid. He was not evil. He was just the wrong person for the job, who used statistics to hide this fact. We are all guilty of using statistics to hide our inadequacies.

The trick is not letting our inadequacies become a statistic for someone else to write about.


[1] The examples referred to are manufacturers who specify a failure rate of 100 FITs, or “failures in time,” for things like capacitors. One “failure in time” is defined as one failure per billion hours of operation. 100 FITs therefore implies 100 failures per billion hours, or 1 failure per 10 million hours. 10 million hours is roughly 1,141 years.



About Christopher Jackson

Chris is a reliability engineering teacher ... which means that after working with many organizations to make lasting cultural changes, he is now focusing on developing online, avatar-based courses that will hopefully make the 'complex' art of reliability engineering into a simple, understandable activity that you feel confident of doing (and understanding what you are doing).


Comments

  1. Johan says

    September 26, 2025 at 6:03 AM

    Don’t know if you caught the film The Fog of War, which featured an older, more broken McNamara. It was hard to watch.

    This article changes everything I thought I knew about Imposter Syndrome. At the one end of the spectrum, there is the high-achiever suffering from crippling self-doubt. At the other, there is false confidence that cripples everything and everyone else. And at both ends and the middle is the need for every stakeholder in the economy to challenge themselves for as long as they are working.

    Personally, I’m not opposed to the idea of promoting a high-achiever to a job they know very little about, but your point here is well taken. It’s a sobering thought.

