6.14: Central Limit Theorem

The central limit theorem, abbreviated as CLT, is one of the most powerful and useful ideas in all of statistics. The central limit theorem for sample means says that if you repeatedly draw samples of a given size, calculate their means, and create a histogram of those means, the resulting histogram will tend to have an approximately normal bell shape. In other words, as sample sizes increase, the distribution of the sample means follows the normal distribution more closely.
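For example, a minimal simulation sketch of this behavior (assuming NumPy and Matplotlib are available; the exponential population, the sample size of 30, and the 10,000 repetitions are arbitrary illustrative choices):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# A clearly non-normal (right-skewed) population: exponential with mean 1.
population = rng.exponential(scale=1.0, size=100_000)

# Repeatedly draw samples of size n (with replacement) and record each mean.
n = 30
sample_means = np.array([
    rng.choice(population, size=n, replace=True).mean()
    for _ in range(10_000)
])

# The histogram of the sample means is approximately bell-shaped around the
# population mean, even though the population itself is strongly skewed.
plt.hist(sample_means, bins=50, density=True)
plt.xlabel("Sample mean (n = 30)")
plt.ylabel("Density")
plt.title("Sampling distribution of the sample mean")
plt.show()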

The sample size, n, required to be "large enough" depends on the original population from which the samples are drawn (as a rule of thumb, the sample size should be at least 30, or the data should come from a normal distribution). If the original population is far from normal, more observations are needed before the sample means or sums become approximately normal, as the sketch below illustrates. Sampling is done with replacement.
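One way to see this dependence on n is to check how quickly the skewness of the sample means shrinks as the sample size grows; a rough sketch, assuming NumPy and SciPy are available and again using an arbitrary right-skewed (exponential) population:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
population = rng.exponential(scale=1.0, size=100_000)  # strongly right-skewed

# The skewness of the distribution of sample means shrinks as n grows;
# a population far from normal needs a larger n before the sample means
# look approximately normal.
for n in (5, 30, 100):
    means = np.array([
        rng.choice(population, size=n, replace=True).mean()
        for _ in range(5_000)
    ])
    print(f"n = {n:3d}: skewness of sample means = {skew(means):.3f}")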

It would be difficult to overstate the importance of the central limit theorem in statistical theory. Knowing that sample means behave in a predictable way, even when the underlying data are not normally distributed, is a powerful tool.

This limiting normal distribution has the same mean as the original distribution, and its variance equals the original variance divided by the sample size. The standard deviation is the square root of the variance, so the standard deviation of the sampling distribution equals the standard deviation of the original distribution divided by the square root of n. The variable n is the number of values that are averaged together, not the number of times the experiment is done.
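In symbols, the sampling distribution of the mean has mean μ and standard deviation σ/√n (the standard error). A quick numerical check of these two facts (a sketch assuming NumPy is available; the exponential population and the sample size of 40 are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(2)
population = rng.exponential(scale=2.0, size=200_000)
mu, sigma = population.mean(), population.std()

n = 40
sample_means = np.array([
    rng.choice(population, size=n, replace=True).mean()
    for _ in range(20_000)
])

# The mean of the sample means is close to mu, and their standard
# deviation is close to sigma / sqrt(n).
print("population mean:         ", round(mu, 3))
print("mean of sample means:    ", round(sample_means.mean(), 3))
print("sigma / sqrt(n):         ", round(sigma / np.sqrt(n), 3))
print("std dev of sample means: ", round(sample_means.std(), 3))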

This text is adapted from OpenStax, Introductory Statistics, Section 7.0 The Central Limit Theorem.

This text is adapted from OpenStax, Introductory Statistics, Section 7.1 The Central Limit Theorem for Sample Means (Averages).