by Anne Meixner

Applying S@ Faults with a Simulator: An Introduction

When I introduced you to the Stuck at Fault Model, I stated that the size of VLSI devices necessitated the use of Electronic Design Automation (EDA) tools to support testing.

My first full-time job at IBM exposed me to the world of test and to their EDA tools.

In the mid-1980s, testing of logic devices relied upon the S@ fault model. Three common software tools were fault simulation, automatic test pattern generation, and fault diagnosis.

This article provides an introduction to fault simulation, since the other two tools can be viewed as applications built upon a fault simulator.

Fault Simulator Components

Engineers created fault simulators to cope with the complexity of computer devices (computer boards and silicon devices), whose sheer size collided with the need to manufacture good parts. One can manually perform a fault simulation on a logic gate and, with some effort, on a small combinational circuit such as an adder. You may have noticed the tedium of propagating each fault to an observable output. Hence, this is an ideal task to let a computer do for you.
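To make the manual exercise concrete, here is a minimal Python sketch (an illustration of the idea, not any particular EDA tool) that injects a stuck-at-0 fault on input a of a two-input AND gate and exhaustively applies all four patterns to see which one propagates the fault to the output:

```python
from itertools import product

def and_gate(a, b):
    """Fault-free two-input AND gate."""
    return a & b

def and_gate_a_sa0(a, b):
    """The same gate with input 'a' stuck-at-0: it behaves as if a were always 0."""
    return 0 & b

# Apply every input pattern; a pattern detects the fault when the good
# and faulty responses differ at the observable output.
for a, b in product([0, 1], repeat=2):
    good, faulty = and_gate(a, b), and_gate_a_sa0(a, b)
    marker = "  <-- detects a S@0" if good != faulty else ""
    print(f"a={a} b={b}: good={good} faulty={faulty}{marker}")
```

Only the pattern a=1, b=1 exposes the fault; every other pattern produces the same output from the good and faulty gates.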

The diagram below illustrates conceptually the inputs, the fault model, and the outputs.

[Figure: Concept Fault Sim — the inputs, fault model, and outputs of a fault simulator]

S@ Fault Simulator Goals

Fault simulators perform the analysis with a specific representation of the Device Under Test (DUT). The S@ model has most often been applied at the logic gate level, as introduced in the previous articles. Naturally, you could also apply it at the transistor and interconnect level. The fault simulator needs to compile a list of faults based upon the DUT description and the fault model used. The diagram assumes you already have a set of input stimuli to assess. With such stimuli, the simulator assesses how the faulty versions of the circuit behave. You can consider the S@ fault simulator an optimized case of a logic simulator: it needs to be more efficient than a logic simulator because it performs not only the good DUT simulation but also many faulty DUT simulations.
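As a sketch of the fault-list step, assume a toy netlist representation of my own invention: a list of gates, each with named input and output nets. Real tools read standard netlist formats and also collapse equivalent faults, but the core idea is simply two S@ faults (stuck-at-0 and stuck-at-1) per net:

```python
# Hypothetical toy netlist for z = (a AND b) OR c
netlist = [
    {"type": "AND", "inputs": ["a", "b"], "output": "n1"},
    {"type": "OR",  "inputs": ["n1", "c"], "output": "z"},
]

def build_fault_list(netlist):
    """Enumerate a stuck-at-0 and a stuck-at-1 fault on every net in the design."""
    nets = set()
    for gate in netlist:
        nets.update(gate["inputs"])
        nets.add(gate["output"])
    return [(net, stuck_value) for net in sorted(nets) for stuck_value in (0, 1)]

faults = build_fault_list(netlist)
print(faults)
# [('a', 0), ('a', 1), ('b', 0), ('b', 1), ('c', 0), ('c', 1),
#  ('n1', 0), ('n1', 1), ('z', 0), ('z', 1)]
```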

The goals of the fault simulator include assessing the percentage of modeled faults detected by the applied stimulus and providing the faulty response per input stimulus per modeled fault. Creating the fault list is a straightforward process. At its core, a fault simulator propagates faults through the provided netlist. The efficiency of a fault simulator depends upon maximizing what is learned each time a stimulus is applied to the netlist. Its effectiveness can be assessed in terms of the netlist size it can handle, its ability to analyze multiple faults in parallel, and the compactness of its results.
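Building on the toy netlist and fault list sketched above, here is a minimal serial fault simulator. Production fault simulators use parallel-pattern or concurrent techniques rather than re-simulating the whole circuit once per fault, but the bookkeeping is the same: compare each faulty response against the good response, record the detections, and report coverage. The simulate helper and the two example patterns below are illustrative assumptions, not any tool's actual interface.

```python
def simulate(netlist, pattern, fault=None):
    """Evaluate the netlist for one input pattern, optionally with one net tied to a stuck-at value."""
    values = dict(pattern)                  # primary input values, e.g. {"a": 1, "b": 1, "c": 0}
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]         # fault injected on a primary input
    for gate in netlist:                    # netlist assumed to be in topological order
        ins = [values[n] for n in gate["inputs"]]
        # only AND / OR gates appear in this toy example
        values[gate["output"]] = int(all(ins) if gate["type"] == "AND" else any(ins))
        if fault and gate["output"] == fault[0]:
            values[fault[0]] = fault[1]     # fault injected on an internal or output net
    return values["z"]                      # observable output

patterns = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 1}]
detected = set()
for fault in faults:                        # 'faults' comes from the fault-list sketch above
    for pattern in patterns:
        if simulate(netlist, pattern) != simulate(netlist, pattern, fault):
            detected.add(fault)
            break                           # fault dropping: stop once a fault is detected
coverage = 100.0 * len(detected) / len(faults)
print(f"S@ fault coverage: {coverage:.1f}%")
print("Undetected faults:", sorted(set(faults) - detected))
```

With only these two patterns, the five stuck-at-0 faults are detected and none of the stuck-at-1 faults are, so the reported coverage is 50%; the undetected-fault list tells you where additional patterns are needed.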

Metrics for S@ Fault Simulators

You’ll often hear that the S@ fault coverage for a set of test vectors is 95.5%. This is the most common metric because it answers the question everyone wants answered. For example, if the fault list contains 2,000 faults and the test vectors detect 1,910 of them, the coverage is 1,910/2,000 = 95.5%. Other metrics are useful for understanding where improvements are possible: the faults not detected and the redundant faults.

Reminder on Model Limitations

The S@ fault model has been around since the 1950s and was applied extensively in the early days of computers and VLSI devices. While knowing the S@ fault coverage of test patterns is necessary today, it is not a sufficient assessment of what is needed to test modern electronics. The underlying electronics technology has changed significantly over the decades: from single bipolar transistors to CMOS VLSI devices to deep-submicron devices. Manufacturing defects differ for each of these technologies and have morphed over the years. The S@ fault model does not accurately reflect how all defects can manifest as faulty electrical behavior. Over the last three decades there has been an evolution of the fault models used in digital testing. Stay tuned for articles later this year that will inform you more on this topic.

Are you enjoying what you have been learning at Testing 1 2 3? Do you think that your co-workers would benefit from learning more? Then consider contacting me about training tailored to your company’s needs.

Meanwhile, remember testing takes time and thoughtful application,

Anne Meixner, PhD

You can find course notes from university classes on VLSI testing online. The course notes from Dr. Andre Ivanov’s EECE 578 class, Integrated Circuit Design for Test (2008), provide a good introduction.

In 1966 John Paul Roth described one of the first algorithms for fault propagation. During my master’s studies I took a class on test taught by Dr. Roth. He taught an informative class in which we had to program a test-related algorithm in a language of our choice. I chose to teach myself APL for this assignment. He often brought in a camera to take a picture of the whiteboards as he taught the class.

  1. J. P. Roth, “Diagnosis of Automata Failures: A Calculus and a Method,” IBM Journal of Research and Development, vol. 10, no. 4, pp. 278–291, July 1966.

Abstract: The problem considered is the diagnosis of failures of automata, specifically, failures that manifest themselves as logical malfunctions. A review of previous methods and results is first given. A method termed the “calculus of D-cubes” is then introduced, which allows one to describe and compute the behavior of failing acyclic automata, both internally and externally. An algorithm, called the D-algorithm, is then developed which utilizes this calculus to compute tests to detect failures. First a manual method is presented, by means of an example. Thence, the D-algorithm is precisely described by means of a program written in Iverson notation. Finally, it is shown for the acyclic case in which the automaton is constructed from AND’s, NAND’s, OR’s and NOR’s that if a test exists, the D-algorithm will compute such a test.
