I recently received a question concerning what sample size to use when assessing call center calls. There was not a lot of information in the request, so my answer was rather general, and I thought it might provide some insight to others facing sample size questions of their own.
—
Hi Noma,
First off, I’m not sure I fully understand the question, in part because I have not worked in a call center environment. I do understand sampling, though, and can offer some ideas to help solve your problem.
The first question is about how many calls per day you and your fellow assessors should sample. It is clear that the six assessors are not able to cover all the calls of the 20 call center agents. What is missing from the question is what you are measuring: customer satisfaction, correct resolution of the customer’s issue, adherence to call protocols, or something else? Be very clear about the measure. For the sake of providing a response, let’s say you are able to judge whether or not the agent appropriately addressed the caller’s issue. That is a binary response: a call is either considered good or not (pass/fail). While this may oversimplify your situation, it is instructive for sampling.
Recalling some basic terms from statistics, remember that a sample is taken from a defined population in order to characterize or understand that population. Here a sample of calls is assessed, and you are interested in what portion of calls are done adequately (pass). If you could measure all calls, that would provide the answer, yet a limit on resources requires that we use sampling to estimate the population proportion of adequate calls.
Next, consider how sure you want to be that the results of the sample reflect the true and unknown population value. For example, if you assessed no calls and simply guessed at the result, there would be little confidence in that result. Confidence, in one sense, represents the likelihood that the sample’s result falls within a stated range of the true value. A 90% confidence means that if we repeatedly drew samples from the population, the sample result would fall within the confidence bound (close to the actual and unknown value) 90% of the time. It also means the estimate will be wrong 10% of the time due to errors caused by sampling. This error is simply the finite chance that the sample happens to draw a disproportionate number of ‘pass’ or ‘fail’ calls, so the sample does not accurately reflect the true population.
Setting the confidence reflects how much risk you are willing to take of the sample providing an inaccurate result. A higher confidence requires more samples.
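To make the idea of confidence concrete, here is a small simulation sketch in Python. The true pass rate, the number of calls per sample, and the interval method (the common normal-approximation interval) are all assumptions I chose for illustration:

```python
import random
from statistics import NormalDist

# Illustrative simulation: repeatedly sample calls from a population with a
# known pass rate and count how often a 90% confidence interval built from
# each sample actually contains the true value. The pass rate and sample
# size are assumptions made up for this sketch.
random.seed(1)

TRUE_PASS_RATE = 0.85           # unknown in practice; assumed here
CALLS_PER_SAMPLE = 100
N_TRIALS = 10_000
z = NormalDist().inv_cdf(0.95)  # two-sided 90% confidence -> z ~ 1.645

covered = 0
for _ in range(N_TRIALS):
    passes = sum(random.random() < TRUE_PASS_RATE for _ in range(CALLS_PER_SAMPLE))
    p_hat = passes / CALLS_PER_SAMPLE
    half_width = z * (p_hat * (1 - p_hat) / CALLS_PER_SAMPLE) ** 0.5
    if p_hat - half_width <= TRUE_PASS_RATE <= p_hat + half_width:
        covered += 1

print(f"interval covered the true rate in {covered / N_TRIALS:.1%} of samples")
```

Over many repeated samples, the reported coverage should land close to the 90% confidence level, which is exactly what confidence promises.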
Here is a simple sample size formula that may be useful in some situations.
$$ \large\displaystyle n=\frac{\ln (1-C)}{\ln (\pi )}$$
n is the sample size
C is the confidence, where 90% is expressed as 0.9
π is the proportion considered passing, in this case good calls
ln is the natural logarithm
If we want 90% confidence that at least 90% of all calls are judged good (pass), then we need at least 22 monitored calls.
This formula is a special case of the binomial sample size calculation and assumes that there are no failed calls in the calls monitored.
This means that if we assess 22 calls and none fail, we have at least 90% confidence that the population has at least 90% good calls. If there is a failed call among the 22 assessments, we have evidence of less than 90% confidence of at least 90% good calls. This approach does not estimate the actual proportion, yet it provides a way to detect whether the proportion falls below a set level.
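As a quick check on that arithmetic, here is a minimal sketch of the formula in Python (the function name is my own, for illustration):

```python
from math import ceil, log

def success_run_sample_size(confidence: float, pi: float) -> int:
    """Success-run sample size: n = ln(1 - C) / ln(pi), rounded up.
    All n monitored calls must pass for the claim to hold."""
    return ceil(log(1 - confidence) / log(pi))

# 90% confidence that at least 90% of calls are good -> 22 monitored calls
print(success_run_sample_size(0.90, 0.90))  # 22
```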
If the intention is to estimate the population proportion of good vs bad calls, then we use a slightly more complex formula.
$$ \large\displaystyle n=\frac{z_{\alpha/2}^{2}\,\pi (1-\pi )}{E^{2}}$$
π is the same, the proportion of good calls vs bad calls.
z is the value of the standard normal distribution that leaves an area of α/2 in the upper tail. For 90% confidence we have 90% = 100%(1-α), thus α is 0.1 and the corresponding z value is 1.645.
E is the accuracy, or margin of error, of the result. It defines the range about the resulting estimate within which the true population value should reside. A higher value of E reduces the number of samples needed, yet the result may be further from the true value than desired.
The value of E depends on the standard deviation of the population; if that is not known, just use an estimate from previous measurements or run a short experiment to determine a reasonable estimate. If the proportion of bad calls is the same from day to day and from agent to agent, then the standard deviation may be relatively small. If, on the other hand, there is agent-to-agent and day-to-day variation, the standard deviation may be relatively large and should be carefully estimated.
The z value is directly related to the confidence and affects sample size as discussed above.
Notice that π, the proportion of good calls, appears in the formula. Thus, if you are taking the sample to estimate an unknown π, assume π is 0.5 when determining sample size. This generates the largest possible sample size (π(1-π) is maximized at π = 0.5) and permits an estimate of π with confidence 100%(1-α) and accuracy E or better. If you know π from previous estimates, then use it to help reduce the sample size slightly.
Let’s do an example. Given we want 90% confidence, alpha is 0.1 and z α/2 is 1.645. Let’s assume we do not have an estimate for π and will use 0.5 for π in the equation. Lastly, we want the final estimate based on the sample to be within 0.1 (estimate of π +/- 0.1), thus E is 0.1. Running the calculation, we find we need to sample 68 calls to meet the constraints of confidence and accuracy. If that is still more than you can assess, relaxing the accuracy or accepting more sampling risk (a higher E or a lower confidence) reduces the required sample size.
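Here is a matching sketch of this second formula in Python; the z value comes from NormalDist in the standard library, and the function name is again my own:

```python
from math import ceil
from statistics import NormalDist

def proportion_sample_size(confidence: float, pi: float, E: float) -> int:
    """n = z_{alpha/2}^2 * pi * (1 - pi) / E^2, rounded up."""
    alpha = 1 - confidence
    z = NormalDist().inv_cdf(1 - alpha / 2)  # 1.645 for 90% confidence
    return ceil(z ** 2 * pi * (1 - pi) / E ** 2)

print(proportion_sample_size(0.90, 0.5, 0.10))  # 68 calls
print(proportion_sample_size(0.90, 0.5, 0.05))  # 271 calls
```

Note how halving E roughly quadruples the sample size, since E enters the formula squared.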
It may turn out that a daily sample with acceptable confidence and accuracy is not possible; in that case, sample as many calls as you can. The results over a few days may accumulate enough of a sample to provide an estimate.
One consideration with the normal approximation of the binomial distribution behind this second sample size formula: it breaks down when either nπ or n(1-π) is less than 5. If either value is less than 5, the approximation is poor and the resulting confidence interval is of little value. If you are in this situation, use the binomial distribution directly rather than the normal approximation.
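One way to use the binomial directly is an attribute-style plan: find the smallest n such that, if no more than a chosen number of failed calls is observed, the pass rate is demonstrated at the stated confidence. Here is a sketch under that framing (function names are mine):

```python
from math import comb

def binom_cdf(n: int, k: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def exact_sample_size(pi0: float, confidence: float, max_failures: int) -> int:
    """Smallest n such that observing <= max_failures failed calls demonstrates,
    with the stated confidence, a pass rate of at least pi0."""
    alpha = 1 - confidence
    n = max_failures + 1
    while binom_cdf(n, max_failures, 1 - pi0) > alpha:
        n += 1
    return n

print(exact_sample_size(0.90, 0.90, 0))  # 22, matching the success-run result
print(exact_sample_size(0.90, 0.90, 1))  # 38, if one failed call is allowed
```

With max_failures = 0 this reproduces the 22-call success-run result; allowing a single failure raises the requirement to 38 monitored calls.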
One last note: in most sampling cases, the overall size of the population doesn’t matter much. A population of about 100 or more is close enough to infinite that we do not consider the population size. Sampling from a small population, though, may require special treatment, such as deciding between sampling with or without replacement, plus adjustments to the basic sample size formulas.
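For completeness, one common adjustment for a small population of size N (this is the standard finite-population correction, not something specific to the question at hand) rescales a sample size n computed from the formulas above:

$$ \large\displaystyle {{n}_{adj}}=\frac{n}{1+\frac{n-1}{N}}$$

For example, drawing the 68-call sample from a population of only 200 calls would shrink the requirement to 68/(1 + 67/200), or roughly 51 calls.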
Choosing the right sample size depends, to a great extent, on what you want to know about the population. In part, you need to know the final result to calculate the ‘right’ sample size, so it is often just an estimate. Using the above equations and concepts, you can minimize the risk of an inconclusive result, yet determining the right sample size will always be an evolving process for each situation.
Related:
Sample Size – success testing (article)
Three considerations for sample size (article)
Statistical Terms (article)