Approximate entropy

In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data.[1]

For example, consider two series of data:

series 1: (10, 20, 10, 20, 10, 20, 10, 20, 10, 20, 10, 20, ...), which alternates 10 and 20.
series 2: (10, 10, 20, 10, 20, 20, 20, 10, 10, 20, 10, 20, 20, ...), which takes the value 10 or 20 at random, each with probability 1/2.

Moment statistics, such as mean and variance, will not distinguish between these two series, and neither will rank order statistics. Yet series 1 is "perfectly regular": knowing that one term has the value 20 enables one to predict with certainty that the next term will have the value 10. Series 2 is randomly valued: knowing that one term has the value 20 gives no insight into what value the next term will have.
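
To make this concrete, here is a quick check (an illustrative Python snippet, not part of the original article; the random series is one draw of the kind described):

    import numpy as np

    rng = np.random.default_rng(0)               # arbitrary seed
    series1 = np.tile([10, 20], 30)              # 10, 20, 10, 20, ...
    series2 = rng.choice([10, 20], size=60)      # 10 or 20 at random, p = 1/2

    # Moment statistics are (essentially) identical for the two series,
    # so they cannot tell the regular series from the random one.
    print(series1.mean(), series1.var())         # 15.0 25.0
    print(series2.mean(), series2.var())         # close to 15.0 and 25.0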

Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures.[1] However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise,[2] so it is not practical to apply these methods to experimental data. ApEn was developed by Steve M. Pincus to handle these limitations by modifying an exact regularity statistic, Kolmogorov–Sinai entropy. ApEn was initially developed to analyze medical data, such as heart rate,[1] and its applications later spread to finance,[3] psychology,[4] and human factors engineering.[5]

The algorithm

Step 1: Form a time series of data u(1), u(2), \ldots, u(N). These are N raw data values from measurements equally spaced in time.

Step 2: Fix m, an integer, and r, a positive real number. The value of m represents the length of the compared run of data, and r specifies a filtering level.

Step 3: Form a sequence of vectors \mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(N-m+1) in \mathbf{R}^m, real m-dimensional space, defined by \mathbf{x}(i) = [u(i), u(i+1), \ldots, u(i+m-1)].

Step 4: Use the sequence \mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(N-m+1) to construct, for each i, 1 \le i \le N-m+1,

C_i^m(r) = (\text{number of } \mathbf{x}(j) \text{ such that } d[\mathbf{x}(i), \mathbf{x}(j)] \le r)/(N-m+1)

in which d[\mathbf{x}, \mathbf{x}^*] is defined as

d[\mathbf{x}, \mathbf{x}^*] = \max_a |u(a) - u^*(a)|

The u(a) are the m scalar components of \mathbf{x}. Here d represents the distance between the vectors \mathbf{x}(i) and \mathbf{x}(j), given by the maximum difference in their respective scalar components. Note that j takes on all values, so the match provided when i = j will be counted (the subsequence is matched against itself).
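
Concretely, d is just the maximum componentwise difference (the Chebyshev distance); a minimal illustration in Python (the function name dist is an arbitrary choice):

    def dist(x, y):
        # d[x, x*] = max_a |u(a) - u*(a)|
        return max(abs(a - b) for a, b in zip(x, y))

    print(dist([85, 80], [80, 89]))  # 9: the larger of |85-80| = 5 and |80-89| = 9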

Step 5: Define

\Phi^m(r) = (N-m+1)^{-1} \sum_{i=1}^{N-m+1} \log(C_i^m(r))

Step 6: Define approximate entropy (\mathrm{ApEn}) as

\mathrm{ApEn} = \Phi^m(r) - \Phi^{m+1}(r),

where \log is the natural logarithm, for m and r fixed as in Step 2.

Parameter selection: typically choose m = 2 or m = 3; the value of r depends greatly on the application, and in practice it is often set to a fixed fraction (for example, 0.2) of the standard deviation of the data.

An implementation on Physionet,[6] which is based on Pincus,[2] uses d[x(i), x(j)] < r, whereas the original article uses d[x(i), x(j)] \le r in Step 4. While this difference can matter for artificially constructed examples, it is usually not a concern in practice.
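
The steps above translate directly into code. The following is a minimal NumPy sketch (the function name approx_entropy and the vectorised pairwise-distance computation are illustrative choices, not from the original article); it uses the d[x(i), x(j)] \le r convention of Step 4 and counts self-matches:

    import numpy as np

    def approx_entropy(u, m=2, r=3):
        # ApEn(m, r) of a 1-D series u, following Steps 1-6 above.
        u = np.asarray(u, dtype=float)
        N = len(u)

        def phi(m):
            # Step 3: vectors x(i) = [u(i), ..., u(i + m - 1)]
            x = np.array([u[i:i + m] for i in range(N - m + 1)])
            # Step 4: C_i^m(r) = fraction of j with max-norm distance <= r
            # (self-matches i == j are included, as in the original definition)
            d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
            C = np.count_nonzero(d <= r, axis=1) / (N - m + 1)
            # Step 5: average of the natural logarithms of C_i^m(r)
            return np.mean(np.log(C))

        # Step 6: ApEn = Phi^m(r) - Phi^{m+1}(r)
        return phi(m) - phi(m + 1)

With m = 2 and r = 3 this reproduces the worked heart-rate example below.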

The interpretation

The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations.[7] A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.

One example

[Figure: Illustration of the heart rate sequence]

Suppose N = 51, and the sequence consists of 51 samples of heart rate equally spaced in time:

S_N = \{85, 80, 89, 85, 80, 89, \ldots\}

(i.e., the sequence is periodic with a period of 3). Let's choose m = 2 and r = 3 (the values of m and r can be varied without affecting the result).

Form a sequence of vectors:

\mathbf{x}(1) = [u(1)\, u(2)] = [85\, 80]
\mathbf{x}(2) = [u(2)\, u(3)] = [80\, 89]
\mathbf{x}(3) = [u(3)\, u(4)] = [89\, 85]
\mathbf{x}(4) = [u(4)\, u(5)] = [85\, 80]

Distance is calculated as follows:

d[\mathbf{x}(1), \mathbf{x}(1)] = \max_a |u(a)-u^*(a)| = 0 < r = 3

Note that |u(2)-u(3)| = 9 > |u(1)-u(2)| = 5, so

d[\mathbf{x}(1), \mathbf{x}(2)] = \max_a |u(a)-u^*(a)| = |u(2)-u(3)| = 9 > r = 3

Similarly,

d[\mathbf{x}(1), \mathbf{x}(3)] = |u(2)-u(4)| = 5 > r
d[\mathbf{x}(1), \mathbf{x}(4)] = |u(1)-u(4)| = |u(2)-u(5)| = 0 < r

Therefore, the \mathbf{x}(j) such that d[\mathbf{x}(1), \mathbf{x}(j)] \le r are \mathbf{x}(1), \mathbf{x}(4), \mathbf{x}(7), \ldots, \mathbf{x}(49) (every third vector, since the sequence has period 3), and the total number is 17.

C_1^2(3) = \frac{17}{50}
C_2^2(3) = \frac{17}{50}
C_3^2(3) = \frac{16}{50}
C_4^2(3) = \frac{17}{50}, \ldots

Note that in Step 4 the index j ranges over 1 \le j \le N-m+1 = 50, so the \mathbf{x}(j) such that d[\mathbf{x}(3), \mathbf{x}(j)] \le r are \mathbf{x}(3), \mathbf{x}(6), \mathbf{x}(9), \ldots, \mathbf{x}(48), and the total number is 16.

\Phi^2(3) = (50)^{-1} \sum_{i=1}^{50} \log(C_i^2(3)) \approx -1.0982
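
Explicitly, 34 of the 50 indices i (those with i \equiv 1 or 2 \pmod 3) have C_i^2(3) = 17/50 and the remaining 16 have C_i^2(3) = 16/50, so the sum evaluates to

\Phi^2(3) = \frac{1}{50}\left(34 \log\frac{17}{50} + 16 \log\frac{16}{50}\right) \approx -1.0982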

Then we repeat the above steps for m=3. First form a sequence of vectors:

\mathbf{x}(1) = [u(1)\, u(2)\, u(3)] = [85\, 80\, 89]
\mathbf{x}(2) = [u(2)\, u(3)\, u(4)] = [80\, 89\, 85]
\mathbf{x}(3) = [u(3)\, u(4)\, u(5)] = [89\, 85\, 80]
\mathbf{x}(4) = [u(4)\, u(5)\, u(6)] = [85\, 80\, 89]

By calculating the distances between the vectors \mathbf{x}(i) and \mathbf{x}(j) for 1 \le i, j \le 49, we find that the vectors satisfying the filtering level have the following characteristic:

d[\mathbf{x}(i), \mathbf{x}(i+3)] = 0 < r

Therefore,

C_1^3(3) = \frac{17}{49}
C_2^3(3) = \frac{16}{49}
C_3^3(3) = \frac{16}{49}
C_4^3(3) = \frac{17}{49}, \ldots

\Phi^3(3) = (49)^{-1} \sum_{i=1}^{49} \log(C_i^3(3)) \approx -1.0982
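
Explicitly, 17 of the 49 indices i (those with i \equiv 1 \pmod 3) have C_i^3(3) = 17/49 and the remaining 32 have C_i^3(3) = 16/49, so

\Phi^3(3) = \frac{1}{49}\left(17 \log\frac{17}{49} + 32 \log\frac{16}{49}\right) \approx -1.0982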

Finally,

\mathrm{ApEn} = \Phi^2(3) - \Phi^3(3) \approx -0.000010997

The value is essentially zero; the slightly negative result is a finite-record artifact of the differing normalizations for m = 2 and m = 3. It implies the sequence is regular and predictable, which is consistent with the observation.
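
As a sanity check, running the illustrative approx_entropy sketch from the algorithm section on this sequence reproduces these values:

    import numpy as np

    u = np.tile([85, 80, 89], 17)        # the 51-sample period-3 sequence
    print(approx_entropy(u, m=2, r=3))   # approximately -1.0997e-05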

Advantages

The advantages of ApEn include:[2]

  • Lower computational demand. ApEn can be designed to work for small data samples (N < 50 points) and can be applied in real time.
  • Less effect from noise. If data are noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present.

Applications

ApEn has been applied to classify EEG in psychiatric diseases, such as schizophrenia,[8] epilepsy,[9] and addiction.[10]

Limitations

The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of \log(0) in the calculations. This step introduces a bias, which causes ApEn to have two poor properties in practice:[11]

  • First, ApEn is heavily dependent on the record length and is uniformly lower than expected for short records.
  • Second, it lacks relative consistency. That is, if ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested.

References

  1. Pincus, S. M.; Gladstone, I. M.; Ehrenkranz, R. A. (1991). "A regularity statistic for medical data analysis". Journal of Clinical Monitoring and Computing 7 (4): 335–345.
  2. Pincus, S. M. (1991). "Approximate entropy as a measure of system complexity". Proceedings of the National Academy of Sciences 88 (6): 2297–2301.
  3. [citation not recoverable from this copy]
  4. [citation not recoverable from this copy]
  5. [citation not recoverable from this copy]
  6. ApEn implementation, PhysioNet.
  7. [citation not recoverable from this copy]
  8. [citation not recoverable from this copy]
  9. [citation not recoverable from this copy]
  10. [citation not recoverable from this copy]
  11. Richman, J. S.; Moorman, J. R. (2000). "Physiological time-series analysis using approximate entropy and sample entropy". American Journal of Physiology. Heart and Circulatory Physiology 278 (6): H2039–H2049.