Chebyshev’s theorem, also known as Chebyshev’s inequality, states that for any dataset, the proportion of values that lie within K standard deviations of the mean is at least 1 − 1/K².
Here, K is any number greater than one; it does not have to be an integer. For example, if K is 1.5, at least 56% of the data values lie within 1.5 standard deviations of the mean. If K is 2, at least 75% of the data values lie within two standard deviations of the mean, and if K is 3, at least 89% of the data values lie within three standard deviations of the mean.
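The bound above is straightforward to compute. The sketch below (the function name `chebyshev_bound` is illustrative, not from the article) evaluates 1 − 1/K² for the three values of K discussed:

```python
def chebyshev_bound(k: float) -> float:
    """Minimum proportion of values within k standard deviations of the mean."""
    if k <= 1:
        raise ValueError("k must be greater than 1")
    return 1 - 1 / k**2

for k in (1.5, 2, 3):
    print(f"k = {k}: at least {chebyshev_bound(k):.1%} of values lie within {k} standard deviations")
```

Running this prints 55.6% for K = 1.5, 75.0% for K = 2, and 88.9% for K = 3, matching the rounded figures quoted above.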
Interestingly, Chebyshev’s theorem bounds the proportion of data falling both inside (a minimum proportion) and outside (a maximum proportion) a given number of standard deviations. If K is 2, the theorem guarantees that at least 75% of the data values lie within two standard deviations of the mean and, equivalently, that at most 25% of the data values lie more than two standard deviations from the mean. It is important to understand that this theorem provides bounds, not exact proportions: the actual share of data within K standard deviations can be anywhere from the bound up to 100%.
One of the advantages of this theorem is that it applies to datasets with normal, skewed, or unknown distributions. In contrast, the empirical (three-sigma) rule can only be used for datasets with a normal distribution.
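To illustrate that the guarantee holds even for a skewed distribution, the hedged sketch below (the choice of an exponential distribution and the sample size are illustrative assumptions, not from the article) draws a right-skewed sample and checks the observed proportions against the Chebyshev bounds:

```python
import random
import statistics

# Illustrative right-skewed dataset: samples from an exponential distribution.
random.seed(42)
data = [random.expovariate(1.0) for _ in range(10_000)]

mu = statistics.mean(data)
sigma = statistics.pstdev(data)

for k in (1.5, 2, 3):
    within = sum(1 for x in data if abs(x - mu) <= k * sigma) / len(data)
    bound = 1 - 1 / k**2
    print(f"k = {k}: observed {within:.1%} within k standard deviations (bound: {bound:.1%})")
    # Chebyshev guarantees the observed proportion is at least the bound.
    assert within >= bound
```

For this skewed sample the observed proportions comfortably exceed the bounds, which is expected: Chebyshev's inequality is a worst-case guarantee, so real datasets usually beat it by a wide margin.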