Shapiro–Wilk test

The Shapiro–Wilk test is a test of normality in frequentist statistics. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.[1]

Theory

The Shapiro–Wilk test tests the null hypothesis that a sample x1, ..., xn came from a normally distributed population. The test statistic is:

W = {\left(\sum_{i=1}^n a_i x_{(i)}\right)^2 \over \sum_{i=1}^n (x_i-\overline{x})^2}

where

  • x_{(i)} (with parentheses enclosing the subscript index i) is the ith order statistic, i.e., the ith-smallest number in the sample;
  • \overline{x} = \left( x_1 + \cdots + x_n \right) / n is the sample mean;
  • the constants a_i are given by[1]
(a_1,\dots,a_n) = {m^{\mathsf{T}} V^{-1} \over (m^{\mathsf{T}} V^{-1}V^{-1}m)^{1/2}}
where
m = (m_1,\dots,m_n)^{\mathsf{T}}\,
and m_1,\ldots,m_n are the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution, and V is the covariance matrix of those order statistics.

The user may reject the null hypothesis if W falls below the critical value for the chosen significance level; small values of W indicate departure from normality.
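The statistic can be illustrated directly from the definition. The sketch below is a rough approximation only: it substitutes the Shapiro–Francia simplification a = m/‖m‖ (which ignores the covariance matrix V) and approximates the expected order statistics m_i with Blom's plotting positions, rather than using the exact Shapiro–Wilk coefficients:

```python
import numpy as np
from scipy import stats

def w_statistic(x):
    """Approximate Shapiro-Wilk W using the Shapiro-Francia
    simplification a = m/||m|| (the exact test also involves V^-1)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Blom's approximation to the expected standard-normal order statistics m_i
    m = stats.norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))
    a = m / np.sqrt(np.sum(m**2))
    numerator = np.sum(a * x) ** 2
    denominator = np.sum((x - x.mean()) ** 2)
    return numerator / denominator

rng = np.random.default_rng(0)
print(w_statistic(rng.normal(size=200)))       # near 1 for normal data
print(w_statistic(rng.exponential(size=200)))  # smaller for skewed data
```

Because sum(a) = 0 and ‖a‖ = 1, the Cauchy–Schwarz inequality guarantees 0 < W ≤ 1, with values near 1 indicating agreement with normality.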

Interpretation

The null hypothesis of this test is that the population is normally distributed. If the p-value is less than the chosen alpha level, the null hypothesis is rejected and there is evidence that the data are not from a normally distributed population. Conversely, if the p-value is greater than the chosen alpha level, the null hypothesis that the data came from a normally distributed population cannot be rejected. For example, at an alpha level of 0.05, a data set with a p-value of 0.02 leads to rejection of the null hypothesis that the data are from a normally distributed population.[2] However, the test is sensitive to sample size:[3] with very large samples, even small, practically unimportant departures from normality can yield a statistically significant result. A Q–Q plot should therefore be examined alongside the test.

Power analysis

In a Monte Carlo simulation comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests, Shapiro–Wilk had the best power for a given significance level, followed closely by Anderson–Darling.[4]

Approximation

Royston proposed an alternative method of obtaining the coefficient vector, providing an algorithm for computing its values directly, which extended the usable sample size to 2000.[5] This technique is used in several software packages including R,[6] Stata,[7][8] SPSS and SAS.[9]
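SciPy's implementation is, to the author's understanding, also based on Royston's approximation (an assumption worth checking against the SciPy documentation), so it handles sample sizes well beyond the original tabulated coefficients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# A sample size within the range covered by Royston's extension
large_sample = rng.normal(size=2000)

w, p = stats.shapiro(large_sample)
print(f"n=2000: W={w:.4f}, p={p:.4f}")
```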

References

  1. Shapiro, S. S.; Wilk, M. B. (1965). "An analysis of variance test for normality (complete samples)". Biometrika. 52 (3–4): 591–611. p. 593
  2. (full citation not recoverable)
  3. (full citation not recoverable)
  4. Razali, Nornadiah; Wah, Yap Bee (2011). "Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests". Journal of Statistical Modeling and Analytics. 2 (1): 21–33.
  5. Royston, Patrick (1982). "An extension of Shapiro and Wilk's W test for normality to large samples". Applied Statistics. 31 (2): 115–124.
  6. shapiro.test documentation, R stats package
  7. swilk documentation, Stata manual
  8. Shapiro–Wilk and Shapiro–Francia tests for normality
  9. SAS PROC UNIVARIATE documentation (tests for normality)