We use a sample to estimate a parameter from a population. Sometimes the sample just doesn't have the ability to discern a change when it actually occurs.

In hypothesis testing, we establish a null and alternative hypothesis. We are setting up an experiment to determine if there is sufficient evidence that a process has changed in some way. The Type II Error, $-\beta-$, is the probability of not concluding the alternative hypothesis is true when in reality it is true.

The power, $-1-\beta-$, reflects the ability of the sample to correctly lead us to conclude that a change has occurred when in reality it has.

## The Difference to Detect Matters

With the Type I Error we are concerned with the distribution related to the null hypothesis, which we assume hasn't changed. If in reality there isn't any change and the null hypothesis is true, the error occurs when the sample drawn leads us to believe a change has occurred. In essence, the sample came mostly from a tail of the distribution; while this is rare, it is possible. It is about the probability of the sample not representing the mean of the underlying population.

For the Type II Error, we now need to consider a second distribution. In this error, we are assuming that the process or the underlying distribution has in fact shifted or changed. The error is that, based on the sample, we incorrectly conclude there is insufficient evidence that the change occurred.

The hard part here is we do not know the mean of the shifted distribution. We don't know how much of an actual change has occurred. We are doing an experiment to determine if a change occurred or not. So, in order to calculate the probability of a Type II Error and the associated power, we need to consider how much of a change we want to be able to detect.

Before being able to calculate the power of the hypothesis test, we first need to determine the change that we desire to detect. Also, consider that the null hypothesis distribution mean has some variability, and the shifted mean of the new distribution likewise has some variability. If the difference of interest is small compared to the variability of the two distributions, the two distributions overlap significantly. This results in a large probability that the sample could still be from the null hypothesis distribution.

Now if the change of interest is a large change in the mean value compared to the variability, the two distributions do not overlap very much. This suggests it would be very unlikely that a sample drawn from the new, shifted distribution could have come from the null hypothesis distribution.

For the same sample size, the larger the difference we desire to detect, the higher the power. The further apart the two distributions are, the less likely it is that a sample could come from the tail of the null hypothesis distribution, as the actual population (assuming the shift actually occurred) has altered the probability of the sample items coming from what could be considered the null hypothesis distribution.
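As a quick numeric sketch of this idea (the values of $-\alpha-$, $-\sigma_{\bar{y}}-$, and the differences below are illustrative assumptions, not from the text), Python's standard library `NormalDist` shows one-sided power rising as the difference to detect grows:

```python
# Illustrative sketch: one-sided power for several differences to detect,
# holding the standard error of the mean fixed (i.e., same sample size).
# alpha = 0.05 and sigma_ybar = 2.0 are assumed values for illustration.
from statistics import NormalDist

std_normal = NormalDist()                 # standard normal: mean 0, sd 1
alpha = 0.05
sigma_ybar = 2.0                          # standard error of the sample mean
z_alpha = std_normal.inv_cdf(1 - alpha)   # one-sided critical value

# Power = 1 - beta = 1 - P(z < z_alpha - diff / sigma_ybar)
diffs = (1, 2, 4, 8)
powers = [1 - std_normal.cdf(z_alpha - d / sigma_ybar) for d in diffs]
for d, p in zip(diffs, powers):
    print(f"difference to detect {d}: power {p:.3f}")
```

Running this, the printed power increases with each larger difference, matching the intuition above: the further apart the two distributions sit relative to the variability, the higher the power.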

## Calculating Power

Let's denote the null hypothesis distribution mean as $-\mu_{0}-$, and the shifted new actual distribution mean as $-\mu_{a}-$. To determine $-\beta-$ we are interested in the probability that z is less than:

$$ z_{\alpha}-\frac{\left|\mu_{0}-\mu_{a}\right|}{\sigma_{\bar{y}}} $$

The resulting difference is a z-value; using a standard normal table, we can estimate the area under the curve below that z-value. This estimates the probability that a sample from the new, shifted distribution still falls within the non-rejection region of the original test, which is $-\beta-$.

For a one-sided test we can calculate the power with:

$$ 1-\beta=1-P\left(z<z_{\alpha}-\frac{\left|\mu_{0}-\mu_{a}\right|}{\sigma_{\bar{y}}}\right) $$

For a two-sided test the calculation of power becomes:

$$ 1-\beta=1-P\left(z<z_{\alpha/2}-\frac{\left|\mu_{0}-\mu_{a}\right|}{\sigma_{\bar{y}}}\right) $$
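The two formulas above can be sketched as a small Python function. This is a minimal sketch assuming a z-test on the sample mean with known $-\sigma-$; the helper name `power` and the example numbers are illustrative, and the standard library's `NormalDist` supplies the normal quantile and CDF:

```python
# Sketch of the one-sided and two-sided power formulas above, using the
# standard library's NormalDist for the normal quantile (inv_cdf) and CDF.
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

def power(mu0, mua, sigma, n, alpha=0.05, two_sided=False):
    """Power of a z-test to detect a shift from mu0 to mua."""
    sigma_ybar = sigma / n ** 0.5          # standard error of the sample mean
    z_crit = std_normal.inv_cdf(1 - (alpha / 2 if two_sided else alpha))
    # beta = P(z < z_crit - |mu0 - mua| / sigma_ybar)
    beta = std_normal.cdf(z_crit - abs(mu0 - mua) / sigma_ybar)
    return 1 - beta

# Assumed example: detect a shift from 100 to 103 with sigma = 5 and n = 25
print(round(power(100, 103, 5, 25), 3))                  # one-sided
print(round(power(100, 103, 5, 25, two_sided=True), 3))  # two-sided
```

The two-sided version simply swaps in $-z_{\alpha/2}-$ for $-z_{\alpha}-$, matching the formula above, so for the same inputs it always yields somewhat less power than the one-sided test.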
