Diagonally dominant matrix

In mathematics, a matrix is said to be diagonally dominant if for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if

|a_{ii}| \geq \sum_{j\neq i} |a_{ij}| \quad\text{for all } i, \,

where a_{ij} denotes the entry in the ith row and jth column.
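
This condition is straightforward to check numerically. The following is a minimal sketch in Python (the function name is_diagonally_dominant and the use of NumPy are illustrative choices, not part of the definition):

    import numpy as np

    def is_diagonally_dominant(A, strict=False):
        """Check row diagonal dominance of a square matrix A."""
        A = np.asarray(A)
        diag = np.abs(A.diagonal())
        off = np.abs(A).sum(axis=1) - diag  # sum of the magnitudes of the other entries in each row
        return bool(np.all(diag > off) if strict else np.all(diag >= off))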

Note that this definition uses a weak inequality, and is therefore sometimes called weak diagonal dominance. If a strict inequality (>) is used, this is called strict diagonal dominance. The unqualified term diagonal dominance can mean both strict and weak diagonal dominance, depending on the context.[1]

Variations

The definition in the first paragraph sums entries across rows. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down columns, this is called column diagonal dominance.
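
Since the columns of A are the rows of its transpose, column diagonal dominance can be checked by applying a row test to the transpose; for instance, reusing the illustrative function sketched above:

    def is_column_diagonally_dominant(A, strict=False):
        # Column dominance of A is row dominance of its transpose.
        return is_diagonally_dominant(np.asarray(A).T, strict=strict)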

If an irreducible matrix is weakly diagonally dominant, but in at least one row (or column) is strictly diagonally dominant, then the matrix is irreducibly diagonally dominant.

Examples

The matrix

\mathbf A = \begin{bmatrix}
3 & -2 & 1\\
1 & -3 & 2\\
-1 & 2 & 4\end{bmatrix}

gives

|a_{11}| \ge |a_{12}| + |a_{13}|   since   |3| \ge |-2| + |1|
|a_{22}| \ge |a_{21}| + |a_{23}|   since   |-3| \ge |1| + |2|
|a_{33}| \ge |a_{31}| + |a_{32}|   since   |4| \ge |-1| + |2|.

Because the magnitude of each diagonal entry is greater than or equal to the sum of the magnitudes of the other entries in its row, A is diagonally dominant.

The matrix

\mathbf B = \begin{bmatrix}
-2 & 2 & 1\\
1 & 3 & 2\\
1 & -2 & 0\end{bmatrix}

gives

|b_{11}| < |b_{12}| + |b_{13}|   since   |-2| < |2| + |1|
|b_{22}| \ge |b_{21}| + |b_{23}|   since   |3| \ge |1| + |2|
|b_{33}| < |b_{31}| + |b_{32}|   since   |0| < |1| + |-2|.

Because |b_{11}| and |b_{33}| are each less than the sum of the magnitudes of the other entries in their rows, B is not diagonally dominant.

The matrix

\mathbf C = \begin{bmatrix}
-4 & 2 & 1\\
1 & 6 & 2\\
1 & -2 & 5\end{bmatrix}

gives

|c_{11}| > |c_{12}| + |c_{13}|   since   |-4| > |2| + |1|
|c_{22}| > |c_{21}| + |c_{23}|   since   |6| > |1| + |2|
|c_{33}| > |c_{31}| + |c_{32}|   since   |5| > |1| + |-2|.

Because the magnitude of each diagonal entry is strictly greater than the sum of the magnitudes of the other entries in its row, C is strictly diagonally dominant.
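
All three examples can be confirmed with the illustrative checker sketched earlier (matrix values taken from above):

    A = [[ 3, -2, 1], [ 1, -3, 2], [-1,  2, 4]]
    B = [[-2,  2, 1], [ 1,  3, 2], [ 1, -2, 0]]
    C = [[-4,  2, 1], [ 1,  6, 2], [ 1, -2, 5]]

    print(is_diagonally_dominant(A))               # True (weakly)
    print(is_diagonally_dominant(B))               # False
    print(is_diagonally_dominant(C, strict=True))  # True (strictly)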

Applications and properties

A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix[2]) is non-singular. This result is known as the Lévy–Desplanques theorem.[3] For strictly diagonally dominant matrices, it can be proved using the Gershgorin circle theorem.
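
A quick numerical illustration of the connection (a sketch assuming NumPy, using the matrix C from the examples above): strict dominance keeps every Gershgorin disc away from the origin, so 0 cannot be an eigenvalue and the determinant is nonzero.

    import numpy as np

    C = np.array([[-4, 2, 1], [1, 6, 2], [1, -2, 5]], dtype=float)
    radii = np.abs(C).sum(axis=1) - np.abs(C.diagonal())  # Gershgorin radii
    print(np.all(np.abs(C.diagonal()) > radii))  # True: no disc contains 0
    print(np.linalg.det(C))                      # -150.0, nonzero as predicted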

A Hermitian diagonally dominant matrix A with real non-negative diagonal entries is positive semidefinite. (Proof: Let the diagonal matrix D contain the diagonal entries of A. Connect A and D+I via the segment of matrices M(t) = (1-t)(D+I) + tA for t \in [0,1]. Each M(t) with t < 1 is strictly diagonally dominant, hence nonsingular. Since \det M(t) is continuous on [0,1], never zero on [0,1), and \det M(0) = \det(D+I) > 0, it follows that \det(A) = \det M(1) \ge 0. Applying this argument to the principal minors of A, positive semidefiniteness follows by Sylvester's criterion.)
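
As a small numerical illustration of this statement (a sketch assuming NumPy; the matrix S is an illustrative example): a real symmetric weakly diagonally dominant matrix with non-negative diagonal entries has no negative eigenvalues.

    import numpy as np

    # Symmetric, weakly diagonally dominant, non-negative diagonal:
    S = np.array([[ 2, -1,  1],
                  [-1,  3, -2],
                  [ 1, -2,  3]], dtype=float)
    print(np.linalg.eigvalsh(S))  # all eigenvalues are >= 0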

If the symmetry requirement is eliminated, such a matrix is not necessarily positive semidefinite; for example,

\begin{pmatrix} -\sqrt 5 & 2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} -\sqrt 5 \\ 2 \\ 1 \end{pmatrix} = 10 - 5\sqrt 5 < 0.

However, the real parts of its eigenvalues are non-negative (see Gershgorin's circle theorem).
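
The counterexample is easy to verify numerically (a sketch assuming NumPy):

    import numpy as np

    M = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1]], dtype=float)
    x = np.array([-np.sqrt(5), 2, 1])
    print(x @ M @ x)             # 10 - 5*sqrt(5), approximately -1.18
    print(np.linalg.eigvals(M))  # eigenvalues 0, 1, 2: real parts non-negative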

Similarly, a Hermitian strictly diagonally dominant matrix with real positive diagonal entries is positive definite: it equals the sum of a Hermitian weakly diagonally dominant matrix with real non-negative diagonal entries (which is positive semidefinite by the above) and xI for some positive real number x (which is positive definite).
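
The decomposition in this argument can be made concrete (a sketch assuming NumPy; the matrix P and the shift x are illustrative): subtracting a small enough multiple of the identity from a strictly dominant matrix leaves a weakly dominant one.

    import numpy as np

    P = np.array([[4, -1, 1], [-1, 5, 2], [1, 2, 6]], dtype=float)  # symmetric, strictly dominant
    radii = np.abs(P).sum(axis=1) - P.diagonal()
    x = (P.diagonal() - radii).min()  # largest shift keeping P - x*I weakly dominant
    A = P - x * np.eye(3)             # weakly dominant, non-negative diagonal: PSD
    print(x > 0, np.linalg.eigvalsh(A).min() >= -1e-12)  # True True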

No (partial) pivoting is necessary for a strictly column diagonally dominant matrix when performing Gaussian elimination (LU factorization).
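
This can be seen by running elimination without row exchanges and observing that the pivots stay nonzero; a minimal sketch assuming NumPy (the matrix is an illustrative strictly column diagonally dominant example):

    import numpy as np

    def lu_no_pivot(A):
        """Doolittle LU factorization with no row exchanges."""
        U = np.array(A, dtype=float)
        n = len(U)
        L = np.eye(n)
        for k in range(n - 1):
            # Strict column dominance guarantees the pivot U[k, k] is nonzero.
            L[k+1:, k] = U[k+1:, k] / U[k, k]
            U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
        return L, U

    A = [[5, 1, 1], [2, -6, 2], [1, 2, 7]]  # strictly column diagonally dominant
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, np.array(A, dtype=float)))  # True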

The Jacobi and Gauss–Seidel methods for solving a linear system converge if the matrix is strictly (or irreducibly) diagonally dominant.
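
A minimal Jacobi iteration sketch (assuming NumPy; the matrix and right-hand side are illustrative) shows the convergence on a strictly diagonally dominant system:

    import numpy as np

    def jacobi(A, b, iterations=100):
        """Jacobi iteration x <- D^{-1} (b - R x), where A = D + R."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        D = A.diagonal()
        R = A - np.diag(D)
        x = np.zeros_like(b)
        for _ in range(iterations):
            x = (b - R @ x) / D
        return x

    A = [[4, 1, -1], [2, 5, 1], [1, -1, 3]]  # strictly diagonally dominant
    b = [3, 8, 3]
    x = jacobi(A, b)
    print(np.allclose(np.asarray(A, dtype=float) @ x, b))  # True: iteration converged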

Many matrices that arise in finite element methods are diagonally dominant.

A slight variation on the idea of diagonal dominance is used to prove that the pairing on diagrams without loops in the Temperley–Lieb algebra is nondegenerate.[4] For a matrix with polynomial entries, one sensible definition of diagonal dominance is that the highest power of q appearing in each row appears only on the diagonal. (The evaluations of such a matrix at large values of q are diagonally dominant in the above sense.)
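
This polynomial notion of dominance can also be checked mechanically; a sketch assuming SymPy (the function name and the example matrix are illustrative):

    import sympy as sp

    q = sp.symbols('q')

    def poly_diagonally_dominant(M, var):
        """The highest power of `var` in each row must occur only on the diagonal."""
        M = sp.Matrix(M)
        for i in range(M.rows):
            d = sp.degree(M[i, i], var)
            if any(sp.degree(M[i, j], var) >= d for j in range(M.cols) if j != i):
                return False
        return True

    print(poly_diagonally_dominant([[q**2 + 1, q], [q - 1, q**3]], q))  # True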

Notes

  1. For instance, Horn and Johnson (1985, p. 349) use it to mean weak diagonal dominance.
  2. Horn and Johnson, Thm 6.2.27.
  3. Horn and Johnson, Thm 6.1.10. This result has been independently rediscovered dozens of times. A few notable ones are Lévy (1881), Desplanques (1886), Minkowski (1900), Hadamard (1903), Schur, Markov (1908), Rohrbach (1931), Gershgorin (1931), Artin (1932), Ostrowski (1937), and Furtwängler (1936). For a history of this "recurring theorem" see: Taussky, Olga (1949). "A recurring theorem on determinants". The American Mathematical Monthly 56 (10): 672–676. Another useful history is in: Schneider, Hans (1977). "Olga Taussky-Todd's influence on matrix theory and matrix theorists". Linear and Multilinear Algebra 5 (3): 197–224.
  4.

References

  • Gene H. Golub & Charles F. Van Loan. Matrix Computations, 1996. ISBN 0-8018-5414-8
  • Roger A. Horn & Charles R. Johnson. Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2 (paperback).
