Why Use ‘n-1’ Instead of ‘n’ in Sample Variance? (Full Explanation and Python Example)
People learning statistics for the first time often ask why the sample variance has ‘n-1’ in the denominator instead of ‘n’. The reason we use n-1 in the formula for the sample standard deviation, rather than the n used in the formula for the population standard deviation, has to do with statistical inference and the concept of degrees of freedom.
First, we have to understand what variance is telling us. Variance is the expectation of the squared deviation of a random variable from its mean (population or sample), and the standard deviation is simply its square root. Informally, variance measures how far a set of numbers is spread out from its mean value. In addition, we know that the deviations from the mean always sum to zero.
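As a quick illustration of that definition, the variance is just the mean of the squared deviations, and the raw deviations cancel out to zero (the data values here are made up for demonstration):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # illustrative data
mean_x = x.mean()                        # 5.0

var_by_hand = np.mean((x - mean_x) ** 2)  # mean of squared deviations
print(var_by_hand, np.var(x))             # 4.0 4.0 -- same thing

print((x - mean_x).sum())                 # 0.0 -- deviations always sum to zero
```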
Thus, specifying the values of any (n-1) of the quantities determines the remaining one.
For example, take n = 4 with values x = (4, 5, 7, 8), so the sample mean is 6 and the first three deviations are -2, -1, and 1. Then automatically we know x_4 — mean of x = 2, because the deviations must sum to zero. So only three of the four deviations are free to vary. Hence, ‘n-1’ is the degrees of freedom.
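The n = 4 example can be sketched in code: once the first three deviations are known, the fourth is forced (the specific values here are illustrative):

```python
import numpy as np

x = np.array([4.0, 5.0, 7.0, 8.0])  # illustrative sample, n = 4
dev = x - x.mean()                  # deviations from the sample mean: [-2, -1, 1, 2]

print(dev.sum())                    # 0.0 -- deviations sum to zero

# So the last deviation is determined by the other three:
recovered = -dev[:3].sum()
print(recovered, dev[3])            # 2.0 2.0 -- not free to vary
```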
When we calculate the sample standard deviation, we are estimating the population standard deviation based on a sample of observations. However, because we only have a finite sample of observations, our estimate is subject to some amount of uncertainty. Specifically, we have one less piece of information than the number of observations we have (i.e., we lose one degree of freedom) when we use the sample mean to estimate the population mean.
To account for this, we use n-1 in the denominator of the formula for the sample standard deviation instead of n. This adjustment (called Bessel’s correction) makes our estimate slightly larger, which accounts for the additional uncertainty introduced by estimating the population mean from the sample. By using n-1 in the denominator, we get an unbiased estimator of the population variance (the standard deviation itself remains slightly biased, though less so than with n), which allows us to make more accurate inferences about the population from which the sample was drawn.
Consider a population consisting of the integers 0 through 10.
import numpy as np
x = range(11)
pop_mean = np.mean(x)
# np.std divides by n by default, giving the population standard deviation
pop_std = np.std(x)
# use np.std(sample, ddof=1) to divide by n-1 and get the sample standard deviation
print(pop_mean, pop_std)
# output: 5.0 3.1622776601683795
Because we have the whole population, we know that the true mean is µ = 5 and the true standard deviation is σ ≈ 3.162277.
Please copy the following code to run on your own to see how it goes~
import random
std_all = []
popstd_all = []
for i in range(100):
    sample = random.choices(x, k=3)
    popstd_all.append(np.std(sample))           # divide by n
    std_all.append(np.std(sample, ddof=1))      # divide by n-1
print('Std by "n":', np.mean(popstd_all))
print('Std by "n-1":', np.mean(std_all))
# Std by "n": 2.2842874881103503
# Std by "n-1": 2.7700906468856896
# (your numbers will differ from run to run, since the samples are random)
You can see that dividing by n-1 tends to give a better estimate of the true standard deviation σ ≈ 3.162277.
Follow and upvote would help~