1.13: Significance Testing: Overview


Significance testing is a set of statistical methods used to test whether a claim about a parameter is valid. In analytical chemistry, significance testing is used primarily to determine whether the difference between two values arises from determinate or random errors. A change in the measurement protocol, the analyst, or the sample itself can cause a deviation from the expected result. When a deviation or outlier is suspected, we need to be able to confirm mathematically that it comes from a determinate source and that the affected observation can therefore be logically omitted from the analysis.

Two hypotheses are used as criteria for significance testing. The null hypothesis (H0) states that the values being compared do not differ significantly from each other; in other words, any difference between the two values is ascribed to indeterminate error. The alternative hypothesis (HA) states that the compared values are not equal and that the difference is too large to be explained by indeterminate error.
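For example, when an experimental mean (x̄) is compared with an accepted reference value (μ), the hypotheses can be written as H0: x̄ = μ and HA: x̄ ≠ μ.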

Before the test is performed, the hypotheses must be stated and a significance level (α) must be set. The test statistic, calculated from the sample mean and standard deviation, is then compared with the tabulated critical value for the chosen significance level and for a one- or two-tailed test. If the calculated test statistic exceeds the critical (tabulated) value, the null hypothesis is rejected, and we conclude that the difference between the two values cannot be explained by random, indeterminate error.
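As a concrete illustration, the sketch below runs this procedure as a two-tailed, one-sample t-test in Python with SciPy. The replicate measurements, the reference value of 3.00, and the significance level of 0.05 are hypothetical choices made for the example, not values from the text.

# Minimal sketch of the significance-testing procedure (two-tailed, one-sample t-test).
import numpy as np
from scipy import stats

measurements = np.array([3.08, 3.04, 3.02, 3.01, 3.10])  # hypothetical replicate results
expected_mean = 3.00                                      # accepted reference value (assumed)
alpha = 0.05                                              # chosen significance level

n = len(measurements)
x_bar = measurements.mean()
s = measurements.std(ddof=1)                              # sample standard deviation

t_calc = abs(x_bar - expected_mean) / (s / np.sqrt(n))    # calculated test statistic
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)             # tabulated (critical) value, two-tailed

if t_calc > t_crit:
    print("Reject H0: the difference cannot be explained by indeterminate error.")
else:
    print("Retain H0: the difference is attributable to indeterminate error.")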

In one-tailed significance testing, the alternative hypothesis specifies that the observed value is either higher or lower than the expected value, but not both. In two-tailed significance testing, the alternative hypothesis simply states that the observed value is not equal to the expected value, without regard to direction.
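This choice affects the critical value used in the comparison. The following sketch contrasts the one- and two-tailed critical t values at the same significance level; the degrees of freedom are chosen arbitrarily for illustration.

# One- vs two-tailed critical t values at the same alpha (illustrative df).
from scipy import stats

alpha, df = 0.05, 4
t_one_tailed = stats.t.ppf(1 - alpha, df)      # HA: value is higher (or lower) than expected
t_two_tailed = stats.t.ppf(1 - alpha / 2, df)  # HA: value is simply not equal to expected
print(t_one_tailed, t_two_tailed)              # approximately 2.13 and 2.78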

Significance testing can be applied to different statistical parameters of one or more data sets, and the tests are given different names depending on the parameter or purpose. Significance testing is frequently used to compare an experimental mean with an expected value or to compare the means of two different data sets; these tests are known as t-tests. Significance tests can also be performed on the variances of two data sets, in which case the test is known as an F-test. If a significance test is used to identify outliers, it is called a Q-test.
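The sketch below illustrates the other two tests named above in the same style; all data values are hypothetical, and the Q critical value for n = 10 at the 95% confidence level is taken from standard tables rather than from the text.

# F-test: compare the variances of two hypothetical data sets.
import numpy as np
from scipy import stats

set_a = np.array([10.2, 10.4, 10.1, 10.5, 10.3])
set_b = np.array([10.0, 10.9, 10.2, 10.7, 10.4])
s2_a = np.var(set_a, ddof=1)
s2_b = np.var(set_b, ddof=1)
f_calc = max(s2_a, s2_b) / min(s2_a, s2_b)       # larger variance in the numerator
f_crit = stats.f.ppf(0.975, dfn=4, dfd=4)        # two-tailed critical F, alpha = 0.05
print("F-test rejects H0:", f_calc > f_crit)

# Q-test (Dixon's Q): check whether the smallest value is an outlier.
data = sorted([0.167, 0.177, 0.181, 0.181, 0.182, 0.183, 0.184, 0.186, 0.187, 0.189])
q_calc = (data[1] - data[0]) / (data[-1] - data[0])  # gap to nearest neighbour / full range
q_crit = 0.466                                       # tabulated Q for n = 10, 95% confidence
print("Q-test rejects the suspect value:", q_calc > q_crit)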