The range rule of thumb is a simple statistical tool for understanding and interpreting the standard deviation. For a known standard deviation, the range rule gives a rough estimate of the minimum and maximum typical (or "usual") values in a dataset. It rests on the principle that about ninety-five percent of all values in a dataset lie within two standard deviations of the mean, so the typical minimum is the mean minus two standard deviations and the typical maximum is the mean plus two standard deviations. Consider marks scored by students with a mean of fifty and a standard deviation of fifteen. Using the formula, the minimum and maximum typical scores are roughly 50 − 2 × 15 = 20 and 50 + 2 × 15 = 80. This indicates that the marks scored by the majority of students would fall between 20 and 80; anything below or above this range is considered an outlier. Conversely, a known range can be used to estimate an unknown standard deviation. For instance, if the range of exam scores is known, the standard deviation can be estimated by dividing the range by four. Despite its simplicity, the range rule of thumb occasionally fails to predict outliers in a given dataset.
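To make the arithmetic concrete, here is a minimal sketch in Python; the function names (typical_range, estimate_std_from_range) are illustrative, not part of any standard library, and the exam-score figures are the ones used above.

```python
def typical_range(mean, std_dev):
    """Estimate the minimum and maximum 'usual' values via the range rule of thumb."""
    # Roughly 95% of values lie within two standard deviations of the mean.
    return mean - 2 * std_dev, mean + 2 * std_dev

def estimate_std_from_range(minimum, maximum):
    """Estimate an unknown standard deviation from a known range."""
    # The range rule approximates the standard deviation as range / 4.
    return (maximum - minimum) / 4

# Example: exam marks with mean 50 and standard deviation 15.
low, high = typical_range(50, 15)
print(low, high)                        # 20 80 -> most marks fall in this interval

# Reverse direction: estimate the standard deviation from the observed range.
print(estimate_std_from_range(20, 80))  # 15.0
```

Values outside the interval returned by typical_range would be flagged as potential outliers under this rule, with the caveat noted above that the rule is only a rough screen.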