Today we look at a couple of different ways to visualize the distribution of your data.

Understanding the distribution of your data is useful for engineers across many tasks. There are many ways to get a feel for the distribution of the data you're interested in, and density curves are one of them.

Let’s look at a scenario where density curves come in handy. One common scenario in reliability engineering involves comparing two distributions: the strength of a product and a source of stress the product could be subjected to over its lifetime. Comparing those two distributions gives you an idea of both product performance and product reliability over time. More specifically, you do not want the two distributions to overlap, and ideally there is a safety margin between them that accounts for changes over the product’s lifetime, e.g. decay in strength.
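For the special case where both strength and stress are normally distributed and independent, this comparison has a well-known closed form: P(stress > strength) = Φ((μ_stress − μ_strength) / √(σ_strength² + σ_stress²)). A minimal base-R sketch, using illustrative parameter values (not the dataset built below):

```r
# Stress-strength interference for two independent normal distributions.
# P(stress > strength) = pnorm((mu_stress - mu_strength) / sqrt(sd_strength^2 + sd_stress^2))
mu_strength <- 38; sd_strength <- 5   # illustrative strength distribution
mu_stress   <- 20; sd_stress   <- 2   # illustrative stress distribution

p_fail <- pnorm((mu_stress - mu_strength) / sqrt(sd_strength^2 + sd_stress^2))
p_fail   # very small here, since the means are far apart relative to the spread
```

The real example below uses a lognormal stress distribution, so this formula does not apply exactly there; it is only meant to show the underlying idea of stress-strength interference.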

Let’s look at a dataset of product strength values along with values of some quantifiable stress factor. For demonstration purposes, we will create an example dataset called **product_stress**, see below:

```r
# WEEK 007: DATA DISTRIBUTION PLOTTING WITH KERNEL DENSITY ESTIMATION

# 0. INSTALL PACKAGE AND LOAD LIBRARIES ----
install.packages("overlapping")

library(tidyverse)
library(sherlock)
library(overlapping)

# 1. SET.SEED() FOR REPRODUCIBILITY ----
set.seed(132535)

# 2. CREATE DATASET ----
product_stress <- tibble(strength = rnorm(n = 100, mean = 38, sd = 5),
                         stress   = rlnorm(n = 100, meanlog = 3, sdlog = 0.1))

# 2.1 TRANSFORM DATASET FOR PLOTTING ----
product_stress_pivoted <- product_stress %>%
    pivot_longer(cols = everything(), names_to = "type", values_to = "value")
```

When we plot the values using sherlock’s **draw_categorical_scatterplot()** function, it becomes clear that there is overlap between the two distributions.

```r
# 3. PLOTTING ----
# 3.1 CATEGORICAL SCATTERPLOT ----
product_stress_pivoted %>%
    draw_categorical_scatterplot(y_var          = value,
                                 grouping_var_1 = type,
                                 group_color    = TRUE,
                                 alpha          = 0.3)
```

Let’s plot this in a different way. How about density curves? Enter *kernel density estimation*, which is really just a fancy name for a smoothing method that produces density plots following the actual distribution of the data. Have you ever seen those silly normal curves plotted over a histogram of clearly non-normal, skewed data? I have, and I do not like it very much, to put it lightly.

With kernel density estimation, you can plot density curves that follow the actual distribution of the data.
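Under the hood this is the same kernel density estimate that base R’s **density()** computes: each data point is replaced by a small kernel (Gaussian by default), and the kernels are summed into a smooth curve whose shape is controlled by a bandwidth parameter. A quick base-R sketch with simulated skewed data:

```r
# Kernel density estimation on deliberately skewed (lognormal) data.
set.seed(42)
skewed <- rlnorm(200, meanlog = 3, sdlog = 0.4)

d <- density(skewed)   # Gaussian kernel; bandwidth picked by bw.nrd0() by default
d$bw                   # the smoothing bandwidth that was selected
plot(d, main = "Kernel density estimate of skewed data")
```

A larger bandwidth gives a smoother (possibly oversmoothed) curve, a smaller one follows the data more closely; **geom_density()** exposes the same knob through its `bw` and `adjust` arguments.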

The **geom_density()** plotting function from *ggplot2* does exactly what I just described, so let’s see how to use it:

```r
# 3.2 DENSITY PLOT ----
product_stress_pivoted %>%
    ggplot(aes(value)) +
    geom_density(aes(fill = type), alpha = 0.2, color = "grey80") +
    scale_x_continuous(limits = c(0, 60), labels = scales::number_format(accuracy = 1)) +
    theme_sherlock() +
    scale_fill_sherlock()
```

Looks pretty *smooth*, right? (pun intended!)

We’ve got an overlap problem, which is not really good news for product performance and reliability, even if these are simply estimates.

One might be tempted to calculate the probability of such an event happening, and if I were you, I would want to as well. Here’s how you do it using the *overlapping* package.

```r
# 4. JOINT PROBABILITY CALCULATION (OVERLAPPING AREA) ----
two_distributions <- as.list(product_stress)

joint_probability <- overlapping::overlap(two_distributions)$OV
joint_probability
```

resulting in an output of `[1] 0.01393566`

OK, so the probability of such an event happening given the estimates we are working with is about **1.39%**.
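One caveat worth keeping in mind: the overlapping coefficient measures the shared area under the two estimated density curves, which is related to, but not the same thing as, the probability that a random stress value actually exceeds a random strength value. If that exceedance probability is what you care about, a quick Monte Carlo sketch using the same distribution parameters as the example dataset gives an estimate:

```r
# Monte Carlo estimate of P(stress > strength), using the same
# distribution parameters that generated the example dataset.
set.seed(132535)
n <- 1e6
strength <- rnorm(n, mean = 38, sd = 5)
stress   <- rlnorm(n, meanlog = 3, sdlog = 0.1)

p_exceed <- mean(stress > strength)   # proportion of cases where stress exceeds strength
p_exceed
```

The exceedance probability comes out considerably smaller than the overlapping area, which is typical: the two quantities answer different questions, so pick the one that matches your reliability question.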

Tweaking the code for the density curves then gives us a plot where the calculated probability is also displayed. Pretty neat: everything in one plot.

```r
# 5. ADDING JOINT PROBABILITY TO DENSITY PLOT ----
product_stress %>%
    pivot_longer(cols = everything(), names_to = "type", values_to = "value") %>%
    ggplot(aes(value)) +
    geom_density(aes(fill = type), alpha = 0.2, color = "grey80") +
    scale_x_continuous(limits = c(0, 60), labels = scales::number_format(accuracy = 1)) +
    theme_sherlock() +
    scale_fill_sherlock() +
    annotate(geom  = "text", x = 40, y = 0.09,
             label = str_glue("Joint probability: {joint_probability %>% scales::percent(accuracy = 0.01)}"),
             color = "grey50", size = 5)
```

This was just one of the many examples where density curves can be really useful, and now you know how to make them in R. I would love to hear what other uses you could have for this in your line of work.

In this week’s edition we learned how to create density plots using the **geom_density()** function and went over a brief reliability engineering example. Hope you enjoyed this edition.

Resources for this week’s edition:

- Code
- sherlock package
- geom_density() documentation
- overlapping package

Resources for learning R:

- R for Data Science: a very thorough reference book by Hadley Wickham, the creator of the tidyverse. Absolutely free of charge and full of relevant examples and practice tests.
- ggplot2 reference book: a super detailed online book on the *ggplot2* plotting package.
- My favorite R course, Business Science DS4B101-R: I learned R mainly through this course. Highly recommended if you want to get up to speed and beyond in a relatively short time! It has everything one will need, from data cleaning to data visualization to modeling. This course is especially useful for engineers trying to learn or get good at R, as it heavily focuses on the fundamentals but goes way beyond just that. Note: this is an affiliate link, meaning you get a *hefty discount* if you purchase a course, and I receive a small commission.

Larry George says:

Your model, P[Fail] = P[Stress > Strength], extends to multi-component, multivariate examples where failure is defined by a structure function g(basic events) in which each basic event is Stress > Strength. It accounts for dependence among stresses (e.g., earthquake) or strengths (similar components): https://accendoreliability.com/what-to-do-with-obsolescent-nuclear-engineers/#more-461419. I coded nuclear power plant risk analysis in FORTRAN (it used to be available on the Internet: https://ftaassociates.files.wordpress.com/2018/12/J.-Wells-L.-George-and-G.-E.-Cummings-Phase-1-Report%E2%80%94Systems-Analysis-Project-VII-Seismic-Safety-Margins-Research-Program-NUREGCR-2015-Vol.-8-Rev-1-November-1983.pdf/) and implemented it in an Excel workbook. I’ll translate it to R if anyone wants it and has a real need. Loss-of-load probability on an electrical distribution system? Network reliability?