
A listing in reverse chronological order of articles by:
by Semion Gengrinovich
A brief introduction to the statistical hypothesis test called the t-test. Useful when examining if there is a difference between the means of two groups.
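As a quick illustration of the idea, here is a minimal sketch using SciPy's ttest_ind on two made-up groups of measurements; the data, group labels, and seed are assumptions for illustration, not from the article.

```python
# Minimal sketch of a two-sample t-test with SciPy (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=100.0, scale=5.0, size=30)   # e.g., times to failure, design A (assumed)
group_b = rng.normal(loc=104.0, scale=5.0, size=30)   # e.g., times to failure, design B (assumed)

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the two group means differ.
```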
by Semion Gengrinovich
Why is confidence level so important in engineering test data analysis?
The name itself gives a very good hint: the confidence level expresses how much confidence we can place in the data analysis. In the next graph, you can see 10 samples and a fitted two-parameter (2p) Weibull distribution at a 95% confidence level.
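For readers who want to try something similar themselves, here is a hedged sketch of one way to fit a two-parameter Weibull to ten samples and bootstrap rough 95% confidence intervals for the parameters. The failure times below are invented, and this is not necessarily the exact method behind the graph in the article.

```python
# Fit a 2-parameter Weibull and bootstrap 95% confidence intervals (illustrative data).
import numpy as np
from scipy import stats

times = np.array([45., 62., 70., 81., 95., 102., 110., 124., 140., 165.])  # hypothetical failure times

shape, loc, scale = stats.weibull_min.fit(times, floc=0)   # 2p Weibull: location fixed at 0
print(f"Point estimate: beta = {shape:.2f}, eta = {scale:.1f}")

rng = np.random.default_rng(1)
boot = []
for _ in range(2000):
    resample = rng.choice(times, size=times.size, replace=True)
    c, _, s = stats.weibull_min.fit(resample, floc=0)
    boot.append((c, s))
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"95% CI for beta: [{lo[0]:.2f}, {hi[0]:.2f}]")
print(f"95% CI for eta : [{lo[1]:.1f}, {hi[1]:.1f}]")
```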
by Christopher Jackson
There might not ever be a better demonstration of the saying that …
… a fish rots from its head.
Boeing is responsible for the half-baked Maneuvering Characteristics Augmentation System (MCAS) that was forced into its new 737 Max aircraft. This involved a decidedly awful attempt to convince the Federal Aviation Administration (FAA) that there was no need to subject said aircraft to all the checks and balances that you need to go through if it is in fact a brand-new and different type of plane. Which it was. This resulted in the deaths of 346 passengers and crew (along with plenty of claims that it was pilot error). And just to be clear, Boeing has since admitted that its employees defrauded the FAA during the original certification process – an admission it was not required to make if it was able to complete a three-year period of increased monitoring and reporting. Which it could not.
by Semion Gengrinovich
Exploring the differences between HALT and ALT, or Highly Accelerated Life Test and Accelerated Life Test. Plus, when to use each.
by Semion Gengrinovich
There seems to be a big misconception across industries about the two terms "reliability" and "durability". So, first things first, I did a quick Google search to find their definitions:
Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time or will operate in a defined environment without failure. (Source: American Society for Quality (ASQ))
Durability is the ability of a physical product to remain functional, without requiring excessive maintenance or repair, when faced with the challenges of normal operation over its design lifetime. (Source: Wikipedia)
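To connect the two definitions, reliability is literally a probability of surviving a stated period, which can be written down directly. Here is a minimal sketch assuming a Weibull life model; the shape, characteristic life, and design-life values are made up for illustration.

```python
# Illustrative only: "reliability at the design lifetime" under an assumed Weibull model.
import math

beta = 1.8            # Weibull shape (assumed)
eta = 12000.0         # Weibull characteristic life in hours (assumed)
design_life = 8760.0  # one year of continuous operation, in hours (assumed)

# Reliability function: R(t) = exp(-(t/eta)**beta)
reliability = math.exp(-((design_life / eta) ** beta))
print(f"Probability of surviving the design lifetime: {reliability:.3f}")
```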
by Christopher Jackson
A simple way of looking at our brain is by dividing it into the conscious, subconscious and unconscious minds. The conscious mind is all about what we are actively thinking about in the here and now. We might be navigating as we drive through the countryside. We might decide to take an exit from the main road because our conscious mind has worked out that the map we are looking at is showing us that’s what we need to do to get to where we want to go.
by Semion Gengrinovich
A brief discussion of the differences and similarities between accelerated life testing (ALT) and durability testing. For one difference, ALT uses high-stress conditions to shorten the time to failure, whereas durability testing typically uses whole products under normal use conditions.
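One common way the "high stress shortens time to failure" idea is quantified is with an acceleration factor, for example the Arrhenius model for temperature. Here is a small sketch; the activation energy and temperatures are assumed values for illustration only.

```python
# Hedged sketch: Arrhenius acceleration factor often used in ALT planning.
import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K
Ea = 0.7                  # assumed activation energy, eV
T_use = 25.0 + 273.15     # normal use temperature, K (assumed)
T_test = 85.0 + 273.15    # accelerated test temperature, K (assumed)

# AF = exp((Ea/k) * (1/T_use - 1/T_test))
af = math.exp((Ea / BOLTZMANN_EV) * (1.0 / T_use - 1.0 / T_test))
print(f"Acceleration factor: {af:.1f}")
print(f"1,000 h at 85 C is roughly equivalent to {1000 * af:,.0f} h at 25 C")
```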
by Debasmita Mukherjee
The normal distribution is the most common in real-life scenarios, such as modeling a reliability performance parameter at a specific time. The Central Limit Theorem (CLT) shows why the normal distribution occurs so often.
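A quick way to see the CLT in action is to average samples drawn from a clearly non-normal population and watch the skewness of the sample means shrink toward zero as the sample size grows. A minimal simulation sketch follows; the exponential population and the sample sizes are arbitrary choices for illustration.

```python
# Minimal CLT demonstration: means of samples from a skewed population become nearly normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.exponential(scale=1.0, size=100_000)   # strongly skewed, not normal

for n in (2, 10, 50):
    # Draw many samples of size n and compute their means
    means = rng.choice(population, size=(5000, n)).mean(axis=1)
    skewness = stats.skew(means)
    print(f"n = {n:3d}: skewness of sample means = {skewness:+.3f} (closer to 0 = more normal)")
```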
by Christopher Jackson
When I was a bright-eyed, motivated (younger) officer in the Australian Army, one of my many tasks when deployed overseas was to raise paperwork to formally request 'battlefield material' to be sent back home from whatever country we were in. 'Battlefield material' included a range of mementos, keepsakes, and things you would typically see in a museum, to add to the historical collections of my battalions and regiments back home.
by Semion Gengrinovich
Ernst Hjalmar Waloddi Weibull (18 June 1887 – 12 October 1979) was a Swedish engineer, scientist, and mathematician. (Source: Wikipedia)
by Christopher Jackson
For thousands of years, doctors treated virtually every skin ailment by ‘letting’ or draining the blood of the patient. Leeches are really good at doing this as they quite literally drink up the allegedly ‘poisoned’ blood that is being removed. Of course, by the late 1800s, science had advanced to the point where it was realized that this was nonsense, and so leeches fell out of favour in the world of medicine.
But that same scientific revolution saw the development of drugs like heroin and cocaine to cure everything from schizophrenia through to children's coughs. With doctors prescribing these drugs left, right, and centre, a worldwide epidemic of drug-addiction misery was spawned.
by Debasmita Mukherjee
Knowing how the data is distributed is critical to modeling failure times and life in reliability analysis. Every distribution is unique and suited to different types of reliability data.
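One practical way to act on this is to fit several candidate life distributions to the same data and compare the fits, for example with AIC. Here is a hedged sketch with invented failure times; the candidate list and the fixed zero location parameter are assumptions for illustration.

```python
# Compare candidate life distributions on the same failure data via maximum likelihood and AIC.
import numpy as np
from scipy import stats

times = np.array([120., 180., 205., 260., 310., 350., 420., 500., 610., 780.])  # hypothetical

candidates = {
    "weibull_min": stats.weibull_min,
    "lognorm": stats.lognorm,
    "expon": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(times, floc=0)          # keep the location at 0 for life data
    loglik = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                       # free parameters (location was fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:12s} AIC = {aic:.1f}")
# The lowest AIC suggests the best-fitting candidate for this data set.
```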
by Semion Gengrinovich
The history of Design of Experiments (D.O.E) can be traced back to the work of various individuals, including Genichi Taguchi, a Japanese engineer and statistician. Taguchi made significant contributions to the field, particularly in the area of robust design, which aimed to improve the quality of products and processes. His work was influenced by the need for quality improvement in post-World War II Japan. Taguchi’s methodology, known as the Taguchi methods, was based on the concept of “robust parameter design,” which aimed to make processes and products insensitive to environmental factors or other variables that were difficult to control.
by Christopher Jackson
It is no small irony that a software application designed to protect IT systems from malicious actors was behind the biggest IT outage in the history of computers. A company called Crowdstrike provides a 'Falcon Sensor' product that is intended to scan computers that use Microsoft operating systems for vulnerabilities. And this product is deployed so deeply into its host operating system that it has access to the 'kernel,' which is the program that runs the basic code linking applications to the computer hardware (like memory, the central processing unit, and other devices). Unfortunately, a Falcon Sensor update that Crowdstrike sent to its customers had a bug that was not picked up by its own validation program (because it, too, had a bug). And unfortunately, the faulty update accessed a 'forbidden' part of memory, causing the infamous BSOD or 'blue screen of death.' So airlines, hospitals, banks, hotels and lots of other companies simply couldn't operate.
by Semion Gengrinovich
And again, there is no single answer to such a simple question; it strongly depends on what type of test you need to conduct.
It is also very important to understand which stage of the product design you are in. Usually, at the very early stages of design there are many unexpected failures; when the design is mature enough, failures become predictable; and then there is one last period, called the wear-out/aging stage.
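Those three periods are often described with the Weibull shape parameter: beta below 1 for early failures, around 1 for the mature, predictable period, and above 1 for wear-out/aging. Here is a small illustrative sketch; the parameter values and time points are assumed.

```python
# Illustrative sketch of the three life stages via the Weibull hazard function.
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.array([100.0, 1000.0, 5000.0])  # arbitrary time points, hours
for beta, stage in [(0.5, "early failures"), (1.0, "useful life"), (3.0, "wear-out/aging")]:
    rates = weibull_hazard(t, beta, eta=2000.0)  # eta assumed
    print(f"beta = {beta}: {stage:15s} hazard at t={t.astype(int)} -> {np.round(rates, 5)}")
# beta < 1: hazard decreases over time; beta = 1: constant; beta > 1: hazard increases.
```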