# Spectral density

The spectral density of a fluorescent light as a function of optical wavelength shows peaks at atomic transitions.
The voice waveform over time (left) has a broad audio power spectrum (right).

The power spectrum $S_{xx}(f)$ of a time series $x(t)$ describes the distribution of frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of a signal (including noise), analyzed in terms of its frequency content, is called its spectrum.

When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating $x^2(t)$ over the time domain, as dictated by Parseval's theorem.

The spectrum of a physical process $x(t)$ often contains essential information about the nature of $x$. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field $E(t)$ as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.

However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.

## Explanation

Any signal that can be represented as an amplitude that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. Additionally, there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal that is not simply sinusoidal. A continuous spectrum may also show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power, as would be produced by a notch filter.

In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in watts per hertz (W/Hz).[1]

When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of $\mathrm{V^2\,Hz^{-1}}$ for the PSD and $\mathrm{V^2\,s\,Hz^{-1}}$ for the ESD (energy spectral density)[2] even though no actual "power" or "energy" is specified.

Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of $\mathrm{V\,Hz^{-1/2}}$.[3] This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.

For random vibration analysis, units of $\mathrm{g^2\,Hz^{-1}}$ are frequently used for the PSD of acceleration. Here $g$ denotes the g-force.[4]

Mathematically it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.

## Definition

### Energy spectral density

Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing;[5] that is, the energy of a signal $x(t)$ is

$\int\limits_{-\infty}^\infty |x(t)|^2\, dt.$

The energy spectral density is most suitable for transients—that is, pulse-like signals—having a finite total energy. In this case, Parseval's theorem[6] gives us an alternate expression for the energy of the signal in terms of its Fourier transform, $\hat{x}(f)=\int\limits_{-\infty}^\infty e^{-2\pi ift}x(t)\, dt:$

$\int\limits_{-\infty}^\infty |x(t)|^2\, dt = \int\limits_{-\infty}^\infty |\hat{x}(f)|^2\, df.$

Here $f$ is the frequency in Hz, i.e., cycles per second. Often used is the angular frequency $\omega=2\pi f$. Since the integral on the right-hand side is the energy of the signal, the integrand $|\hat{x}(f)|^2$ can be interpreted as a density function describing the energy per unit frequency contained in the signal at the frequency $f$. In light of this, the energy spectral density of a signal $x(t)$ is defined as[6]

$S_{xx}(f) = |\hat{x}(f)|^2$

As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2/Z$, so the total energy is found by integrating $V(t)^2/Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density $S_{xx}(f)$ at frequency $f$, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^2/Z$ has units of $\mathrm{V^2\,\Omega^{-1}}$, the energy $E(f)$ has units of $\mathrm{V^2\,s\,\Omega^{-1}} = \mathrm{J}$, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has units of $\mathrm{J\,Hz^{-1}}$, as required. In many situations, it is common to forgo the step of dividing by $Z$ so that the energy spectral density instead has units of $\mathrm{V^2\,s\,Hz^{-1}}$.

This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values $x_n$ such as a signal sampled at discrete times $x_n=x(n\,\Delta t)$:

$S_{xx}(f) = (\Delta t)^2\left|\sum_{n=-\infty}^\infty x_n e^{-2\pi i f n \Delta t}\right|^2= (\Delta t)^2\, \hat x_d(f)\hat x_d^*(f),$

where $\hat x_d(f)$ is the discrete Fourier transform of $x_n$ and $\hat x_d^*(f)$ is the complex conjugate of $\hat x_d(f)$ . The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit $\Delta t\rightarrow 0$; however, in the mathematical sciences, the interval is often set to 1.
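As a numerical illustration (the Gaussian pulse and sampling interval below are arbitrary choices), the discrete energy spectral density can be checked against Parseval's theorem using the FFT:

```python
import numpy as np

# A pulse-like (finite-energy) signal sampled at interval dt (illustrative values)
dt = 0.01                            # sampling interval in seconds
t = np.arange(-5, 5, dt)
x = np.exp(-t**2)                    # Gaussian pulse

# Energy computed in the time domain: integral of |x(t)|^2 dt
energy_time = np.sum(np.abs(x)**2) * dt

# Discrete ESD: S_xx(f) = dt^2 |x_d(f)|^2, evaluated via the FFT
X = np.fft.fft(x)
S_xx = dt**2 * np.abs(X)**2          # energy spectral density at FFT frequencies
df = 1.0 / (len(x) * dt)             # frequency spacing of the FFT grid
energy_freq = np.sum(S_xx) * df      # integral of S_xx(f) df

# Parseval's theorem: both energies agree
assert np.allclose(energy_time, energy_freq)
```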

### Power spectral density

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, such as stationary processes, one must rather define the power spectral density (PSD); this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time $x(t)$ (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the terminals of a 1 ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by $x^2(t)$ watts.

The average power P of a signal $x(t)$ over all time is therefore given by the following time average:

$P = \lim_{T\rightarrow \infty} \frac 1 {2T} \int_{-T}^T x(t)^2\,dt.$

Note that a stationary process, for instance, may have a finite power but an infinite energy. After all, energy is the integral of power, and the stationary signal continues over an infinite time. That is the reason that we cannot use the energy spectral density as defined above in such cases.
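This distinction can be illustrated numerically (the amplitude and frequency below are arbitrary): a sinusoid has infinite energy over all time, but its time-averaged power converges to $A^2/2$:

```python
import numpy as np

# Time average of x(t)^2 over a long window approximates the average power
A, f0 = 3.0, 5.0                     # illustrative amplitude and frequency
dt = 0.001
t = np.arange(0, 100, dt)            # a long but finite window
x = A * np.sin(2 * np.pi * f0 * t)

P = np.mean(x**2)                    # discrete analogue of (1/2T) * integral of x^2 dt
print(P)                             # close to A^2 / 2 = 4.5
```

The total energy of this signal grows without bound as the window lengthens, but the power `P` stays finite, which is why the PSD rather than the ESD applies here.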

In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(\omega)$; however, for many signals of interest the Fourier transform does not formally exist.[N 1] Because of this complication, one can instead work with a truncated Fourier transform $\hat{x}_T(\omega)$, where the signal is integrated only over a finite interval [0, T]:

$\hat{x}_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T x(t) e^{-i\omega t}\, dt.$

Then the power spectral density can be defined as[8][9]

$S_{xx}(\omega) = \lim_{T \rightarrow \infty} \mathbf{E} \left[ | \hat{x}_T(\omega) | ^ 2 \right].$

Here E denotes the expected value; explicitly, we have[9]

$\mathbf{E} \left[ | \hat{x}_T(\omega) |^2 \right] = \mathbf{E} \left[ \frac{1}{T} \int\limits_0^T x^*(t) e^{i\omega t}\, dt \int\limits_0^T x(t') e^{-i\omega t'}\, dt' \right] = \frac{1}{T} \int\limits_0^T \int\limits_0^T \mathbf{E}\left[x^*(t) x(t')\right] e^{i\omega (t-t')}\, dt\, dt'.$

In the latter form (for a stationary random process), one can make the change of variables $\Delta t = t-t'$ and with the limits of integration (rather than [0,T]) approaching infinity, the resulting power spectral density $S_{xx}(\omega)$ and the autocorrelation function of this signal are seen to be Fourier transform pairs (Wiener–Khinchin theorem). The autocorrelation function is a statistic defined as $\gamma(\tau)=\langle X(t) X(t+\tau)\rangle$ (or more generally as $\gamma(\tau)=\langle X(t) X^*(t+\tau)\rangle$ in the case that X(t) is complex-valued). Provided that $\gamma(\tau)$ is absolutely integrable (which is not always true)[10],

$S_{xx}(\omega)=\int_{-\infty}^\infty \,\gamma(\tau)\,e^{-i\omega\tau}\,d \tau=\hat \gamma(\omega).$

Many authors use this equality to actually define the power spectral density.[11]
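For a finite discrete record, the analogous identity holds exactly in circular form: the periodogram equals the DFT of the circular sample autocorrelation. A NumPy sketch (the record length and noise realization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)         # one realization of white noise
N = len(x)

# Periodogram: |x_hat(f)|^2 / N
X = np.fft.fft(x)
periodogram = np.abs(X)**2 / N

# Circular sample autocorrelation gamma(tau) = (1/N) sum_t x(t) x(t+tau mod N)
gamma = np.array([np.mean(x * np.roll(x, -tau)) for tau in range(N)])

# Discrete, circular Wiener-Khinchin relation: periodogram = DFT of gamma
assert np.allclose(periodogram, np.fft.fft(gamma).real, atol=1e-8)
```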

The power of the signal in a given frequency band $[f_1, f_2]$ (or $[\omega_1,\omega_2]$) can be calculated by integrating over frequency. Since $S_{xx}(-\omega) = S_{xx}(\omega)$, an equal amount of power can be attributed to positive and negative frequencies, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):

$P_\mathsf{bandlimited} = 2 \int_{f_1}^{f_2}\,S_{xx}(2\pi \! f) \, df = \frac{1}{\pi} \int_{\omega_1}^{\omega_2}\,S_{xx}(\omega) d\omega$
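A discrete sketch of this band-power calculation (a pure tone whose frequency falls exactly on an FFT bin, so its power $A^2/2$ is recovered without leakage; all values are illustrative):

```python
import numpy as np

A, f0, fs, N = 2.0, 50.0, 1000.0, 1000
t = np.arange(N) / fs
x = A * np.sin(2 * np.pi * f0 * t)   # total power A^2/2 = 2.0

# Two-sided periodogram-style PSD estimate (density scaling, x^2 per Hz)
X = np.fft.fft(x)
S = np.abs(X)**2 / (fs * N)
freqs = np.fft.fftfreq(N, 1 / fs)
df = fs / N

# Integrate over the positive-frequency band [40, 60] Hz and double it
band = (freqs >= 40) & (freqs <= 60)
P_band = 2 * np.sum(S[band]) * df
print(P_band)                        # approximately A^2 / 2 = 2.0
```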

More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the truncated Fourier transform defined above over the finite time interval (0, T) is not evaluated in the limit of T approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than 1/T are not sampled, and results at frequencies which are not an integer multiple of 1/T are not independent. Using just a single such time series, the estimated power spectrum will be very "noisy"; however, this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of x(t) evaluated over the specified time window.

This definition of the power spectral density can be generalized to discrete time variables $x_n$. As above we can consider a finite window of $1\le n\le N$ with the signal sampled at discrete times $x_n=x(n\Delta t)$ for a total measurement period $T=N \Delta t$. Then a single estimate of the PSD can be obtained through summation rather than integration:

$\tilde{S}_{xx}(\omega)=\frac{(\Delta t)^2}{T}\left|\sum_{n=1}^N x_n e^{-i\omega n \Delta t}\right|^2$.

As before, the actual PSD is achieved when N (and thus T) approach infinity and the expected value is formally applied. In a real-world application, one would typically average this single-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval T approach infinity (Brown & Hwang[12]).
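A sketch of this averaging for white noise, whose true PSD is flat (the record length and number of trials are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 256, 200

# Single-realization periodograms of unit-variance white noise
periodograms = []
for _ in range(trials):
    x = rng.standard_normal(N)
    periodograms.append(np.abs(np.fft.fft(x))**2 / N)
periodograms = np.array(periodograms)

single = periodograms[0]             # one noisy estimate
averaged = periodograms.mean(axis=0) # ensemble average over realizations

# The true PSD of unit-variance white noise is flat at 1; averaging pulls
# the estimate toward it (a single periodogram has ~100% relative std)
print(single.std(), averaged.std())
assert averaged.std() < single.std()
```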

If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.

#### Properties of the power spectral density

Some properties of the PSD include:[13]

• The spectrum of a real-valued process (or even a complex process using the above definition) is real and an even function of frequency: $S_{xx}(-\omega) = S_{xx}(\omega)$.
• If the process is continuous and purely indeterministic, the autocovariance function can be reconstructed by using the inverse Fourier transform.
• The PSD can be used to compute the variance (net power) of a process by integrating over frequency:
$\text{Var}(X_n) = \gamma_0 = 2 \int_0^{\infty}\! S_{xx}(\omega) \, d\omega.$
• Being based on the Fourier transform, the PSD is a linear function of the autocovariance function in the sense that if $\gamma$ is decomposed into two functions $\gamma(\tau) = \alpha_1 \gamma_1(\tau) + \alpha_2 \gamma_2(\tau)$, then
$S_{xx} = \alpha_1 S_{xx,1} + \alpha_2 S_{xx,2}.$

The integrated spectrum or power spectral distribution $F(\omega)$ is defined as[14]

$F(\omega)= \int _{-\infty}^\omega S_{xx}(\omega')\, d\omega'.$

### Cross-spectral density

Given two signals $x(t)$ and $y(t)$, each of which possess power spectral densities $S_{xx}(\omega)$ and $S_{yy}(\omega)$, it is possible to define a cross-spectral density (CSD) given by

$S_{xy}(\omega) = \lim_{T\rightarrow\infty} \mathbf{E}\left\{\left[F_x^T(\omega)\right]^*F_y^T(\omega)\right\}.$

The cross-spectral density (or 'cross power spectrum') is thus the Fourier transform of the cross-correlation function.

$S_{xy}(\omega) = \int_{-\infty}^{\infty} R_{xy}(t) e^{-i \omega t} dt = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} x(\tau) \cdot y(\tau+t) d\tau \right] \, e^{-i \omega t} dt,$

where $R_{xy}(t)$ is the cross-correlation of $x(t)$ and $y(t)$.

By an extension of the Wiener–Khinchin theorem, the cross-spectral density $S_{xy}(\omega)$ is the Fourier transform of the cross-covariance function.[15] In light of this, the PSD is seen to be a special case of the CSD for $x(t) = y(t)$.

For discrete signals xn and yn, the relationship between the cross-spectral density and the cross-covariance is

$S_{xy}(\omega)=\frac{1}{2\pi}\sum_{n=-\infty}^\infty R_{xy}(n)e^{-i\omega n}$
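A discrete sketch of a single-record CSD estimate (the delay and noise level are illustrative); with $y = x$ it reduces to the real, non-negative PSD:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512
x = rng.standard_normal(N)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(N)  # a delayed, noisy copy of x

X, Y = np.fft.fft(x), np.fft.fft(y)

# Single-record cross-spectral density estimate: conj(X) * Y / N
S_xy = np.conj(X) * Y / N

# With y = x, the CSD reduces to the PSD, which is real and non-negative
S_xx = np.conj(X) * X / N
assert np.allclose(S_xx.imag, 0, atol=1e-10)
assert np.all(S_xx.real >= 0)

# Unlike the PSD, the CSD is complex; its phase encodes the relative
# timing (here, the 5-sample delay) between the two signals
print(np.angle(S_xy[1]))
```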

## Estimation

The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.

The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
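Assuming SciPy is available, a minimal sketch of Welch's method; for white noise, the area under the estimated density approximates the signal variance:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 1000.0
x = rng.standard_normal(8192)        # white noise, variance close to 1

# Welch's method: average periodograms of overlapping windowed segments
f, Pxx = signal.welch(x, fs=fs, nperseg=512)  # one-sided density (x^2 per Hz)

# The area under the PSD estimate approximates the signal variance
df = f[1] - f[0]
area = np.sum(Pxx) * df
print(area)                          # close to np.var(x), i.e. about 1
```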

## Properties

• The spectral density of $f(t)$ and the autocorrelation of $f(t)$ form a Fourier transform pair (for PSD versus ESD, different definitions of autocorrelation function are used). This result is known as Wiener–Khinchin theorem.
• One of the results of Fourier analysis is Parseval's theorem which states that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, the total energy:
$\int_{-\infty}^\infty \left| f(t) \right|^2\, dt = \int_{-\infty}^\infty ESD(\omega)\, d\omega.$
The above theorem holds true in the discrete cases as well. A similar result holds for power: the area under the power spectral density curve is equal to the total signal power, which is $R(0)$, the autocorrelation function at zero lag. This is also (up to a constant which depends on the normalization factors chosen in the definitions employed) the variance of the data comprising the signal.
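The discrete form of these statements can be verified directly (illustrative white-noise data):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(1024)
x -= x.mean()                        # zero-mean, so average power equals variance
N = len(x)

# Discrete Parseval: (1/N) sum |x_n|^2  ==  (1/N^2) sum |X_k|^2
X = np.fft.fft(x)
power_time = np.mean(x**2)
power_freq = np.sum(np.abs(X)**2) / N**2
assert np.allclose(power_time, power_freq)

# R(0), the sample autocorrelation at zero lag, is the same quantity
R0 = np.mean(x * x)
assert np.allclose(R0, power_time)
```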

## Example calculation

Suppose $x_n$, from $n=0$ to $N-1$ is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

\begin{align} x_n &= \sum_k A_k\cdot \sin(2\pi \nu_k n + \phi_k)\\ &= \sum_k \left(\overbrace{a_k}^{A_k \sin(\phi_k)} \cos(2\pi \nu_k n) + \overbrace{b_k}^{A_k \cos(\phi_k)} \sin(2\pi \nu_k n)\right) \end{align}

The variance of $x_n$ is, for a zero-mean function as above, given by $\frac 1N \sum_{n=0}^{N-1} x_n^2$. If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as $N\rightarrow \infty$. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data.

$\lim _ {N\rightarrow \infty} \frac 1N \sum_{n=0}^{N-1} x_n^2.$

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

$x(t) = \sum_k A_k\cdot \sin(2\pi \nu_k t + \phi_k)$

and

$\lim _ {T\rightarrow\infty} \frac 1{2T} \int_{-T}^T x(t)^2 dt.$

The root mean square of $\sin$ is $1/\sqrt{2}$, so the variance of $A_k \sin(2\pi \nu_k t + \phi_k)$ is $A_k^2 / 2$. Hence, the contribution to the average power of $x(t)$ coming from the component with frequency $\nu_k$ is $A_k^2 / 2$. All these contributions add up to the average power of $x(t)$.

Then the power carried at frequency $\nu_k$ is $A_k^2/2$, and the cumulative power distribution function $S(\nu)$ is

$S(\nu) = \sum _ {k : \nu_k < \nu} A_k^2/ 2.$

$S$ is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of $x$, and the value of each jump is the power or variance of that component.
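A small sketch of this step function for illustrative amplitudes and frequencies:

```python
import numpy as np

# A sum of sinusoids with illustrative amplitudes A_k and frequencies nu_k
A = np.array([1.0, 2.0, 0.5])
nu = np.array([3.0, 7.0, 11.0])

def S(freq):
    """Cumulative power distribution: sum of A_k^2/2 over all nu_k < freq."""
    return np.sum(A[nu < freq]**2 / 2)

# S is non-decreasing; each jump equals the power of one component
print(S(5.0))    # only the 3 Hz component: 1.0^2/2 = 0.5
print(S(20.0))   # total power: 0.5 + 2.0 + 0.125 = 2.625
assert S(20.0) == np.sum(A**2 / 2)
```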

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of $\tau$, we can take the covariance of $x(t)$ with $x(t+\tau)$, and define this to be the autocorrelation function $c$ of the signal (or data) $x$:

$c (\tau) = \lim _ {T\rightarrow\infty} \frac 1{2T} \int_{-T}^T x(t) x(t+\tau) dt.$

If it exists, it is an even function of $\tau$. If the average power is bounded, then $c$ exists everywhere, is finite, and is bounded by $c(0)$, which is the average power or variance of the data.

It can be shown that $c$ can be decomposed into periodic components with the same periods as $x$:

$c(\tau) = \sum_k \frac 12 A_k^2 \cos (2\pi \nu_k \tau).$

This is in fact the spectral decomposition of $c$ over the different frequencies, and is related to the distribution of power of $x$ over the frequencies: the amplitude of a frequency component of $c$ is its contribution to the average power of the signal.

The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
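The decomposition of $c$ above can be checked numerically against the time-average definition of $c(\tau)$ (the amplitudes, frequencies, phases, and lag below are illustrative):

```python
import numpy as np

A = np.array([1.0, 2.0])
nu = np.array([0.05, 0.11])           # cycles per sample (below Nyquist, 0.5)
phi = np.array([0.3, 1.2])

n = np.arange(200000)                 # long record to approximate the time average
x = np.sum(A[:, None] * np.sin(2*np.pi*nu[:, None]*n + phi[:, None]), axis=0)

# Empirical autocorrelation at lag tau vs. the closed form
# c(tau) = sum_k (A_k^2 / 2) cos(2 pi nu_k tau)
tau = 7
c_emp = np.mean(x[:-tau] * x[tau:])
c_theory = np.sum(A**2 / 2 * np.cos(2 * np.pi * nu * tau))
print(c_emp, c_theory)                # the two values nearly coincide
```

Note that the phases $\phi_k$ drop out of the autocorrelation, as the closed form predicts.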

## Related concepts

• The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
• The spectral edge frequency of a signal is an extension of the previous concept to any proportion instead of two equal parts.
• The spectral density is a function of frequency, not a function of time. However, the spectral density of small windows of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
• A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part as a function of frequency). For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts, amplitude versus frequency and phase versus frequency (or less commonly, as real and imaginary parts of the transfer function). The impulse response (in the time domain) $h(t)$ cannot generally be uniquely recovered from the amplitude spectral density part alone without the phase function. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See spectral phase and phase noise.

## Applications

### Electrical engineering

Spectrogram of an FM radio signal with frequency on the horizontal axis and time increasing upwards on the vertical axis.

The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.

The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
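Assuming SciPy is available, a sketch of this idea: the spectrogram of a chirp (a signal whose frequency sweeps upward) shows a spectral peak that rises over time:

```python
import numpy as np
from scipy import signal

# A chirp: frequency sweeps from 50 Hz to 200 Hz over 2 seconds
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = signal.chirp(t, f0=50, t1=2, f1=200)

# Spectrogram: squared magnitude of the STFT over successive short windows
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)

# The ridge of the spectrogram tracks the rising instantaneous frequency
peak_first = f[np.argmax(Sxx[:, 0])]
peak_last = f[np.argmax(Sxx[:, -1])]
print(peak_first, peak_last)
assert peak_first < peak_last
```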

## Notes

1. Some authors (e.g. Risken[7] ) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
$\langle \hat x(\omega) \hat x^\ast(\omega') \rangle = 2\pi\,f(\omega)\,\delta(\omega-\omega')$,
where $\delta(\omega-\omega')$ is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.

## References

1. Gérard Maral (2003). VSAT Networks. John Wiley and Sons. ISBN 0-470-86684-5.
2. Michael Peter Norton and Denis G. Karczub (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press. ISBN 0-521-49913-5.
3. Michael Cerna and Audrey F. Harvey (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF).
4. Alessandro Birolini (2007). Reliability Engineering. Springer. p. 83. ISBN 978-3-540-49388-4.
5. Oppenheim; Verghese. Signals, Systems, and Inference. pp. 32–4.
6. Stein, Jonathan Y. (2000). Digital Signal Processing: A Computer Science Perspective. Wiley. p. 115.
7. Hannes Risken (1996). The Fokker–Planck Equation: Methods of Solution and Applications (2nd ed.). Springer. p. 30. ISBN 9783540615309.
8. Fred Rieke, William Bialek, and David Warland (1999). Spikes: Exploring the Neural Code (Computational Neuroscience). MIT Press. ISBN 978-0262681087.
9. Scott Millers and Donald Childers (2012). Probability and Random Processes. Academic Press. pp. 370–5.
10. The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: $\gamma$ does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical Cumulative distribution function) instead of functions. If $\gamma$ is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
11. Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN 1-4020-7395-X.
12. Robert Grover Brown & Patrick Y.C. Hwang (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons. ISBN 0-471-12839-2.
13. Storch, H. von; F. W. Zwiers (2001). Statistical Analysis in Climate Research. Cambridge University Press. ISBN 0-521-01230-9.
14. Wilbur B. Davenport and William L. Root (1987). An Introduction to the Theory of Random Signals and Noise. IEEE Press, New York. ISBN 0-87942-235-1.
15. William D. Penny (2009). "Signal Processing Course, chapter 7".