Rank–nullity theorem

In mathematics, the rank–nullity theorem of linear algebra, in its simplest form, states that the rank and the nullity of a matrix add up to the number of columns of the matrix. Specifically, if A is an m-by-n matrix (with m rows and n columns) over some field, then[1]

\operatorname{rk}(A) + \operatorname{nul}(A) = n .

This applies to linear maps as well. Let V and W be vector spaces over some field and let T : V → W be a linear map. Then the rank of T is the dimension of the image of T and the nullity of T is the dimension of the kernel of T, so we have

\operatorname{dim}(\operatorname{im} (T)) + \operatorname{dim} (\operatorname{ker} (T)) = \operatorname{dim} (V) ,

or, equivalently,

\operatorname{rk}(T) + \operatorname{nul}(T) = \operatorname{dim}(V).
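
For a concrete matrix the identity can be checked numerically. The following sketch uses NumPy; the 3 × 4 matrix A is an arbitrary example of our own, and the nullity is obtained as n minus the computed rank.

  import numpy as np

  # An arbitrary 3-by-4 real matrix (m = 3 rows, n = 4 columns).
  A = np.array([[1., 2., 3., 4.],
                [2., 4., 6., 8.],
                [1., 0., 1., 0.]])

  m, n = A.shape
  rank = np.linalg.matrix_rank(A)   # rk(A)
  nullity = n - rank                # nul(A), the dimension of the null space

  # Independent check: n minus the number of numerically nonzero singular values.
  s = np.linalg.svd(A, compute_uv=False)
  assert nullity == n - np.sum(s > 1e-12 * s.max())

  print(rank, nullity, rank + nullity == n)   # 2 2 True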

One can refine this statement (via the splitting lemma or the proof below) to be a statement about an isomorphism of spaces, not just dimensions.

More generally, one can consider the image, kernel, coimage, and cokernel, which are related by the fundamental theorem of linear algebra.

Proofs

We give two proofs. The first proof uses notation for linear transformations, but can be easily adapted to matrices by writing T(x) = Ax, where A is m × n. The second proof looks at the homogeneous system Ax = 0 associated with an m × n matrix A of rank r and shows explicitly that there exists a set of n − r linearly independent solutions that span the null space of A. These proofs are also available in the book by Banerjee and Roy (2014).[2]

First proof: Suppose \{\mathbf{u}_1, \ldots, \mathbf{u}_m\} forms a basis of ker T. We can extend this to form a basis of V:  \{\mathbf{u}_1, \ldots, \mathbf{u}_m, \mathbf{w}_1, \ldots, \mathbf{w}_n\}. Since the dimension of ker T is m and the dimension of V is m + n, it suffices to show that the dimension of im T is n.

Let us see that \{T\mathbf{w}_1, \ldots, T\mathbf{w}_n \} is a basis of im T. Let \mathbf{v} be an arbitrary vector in V. There exist unique scalars a_i, b_j such that:

\mathbf{v}=a_1 \mathbf{u}_1 + \cdots + a_m \mathbf{u}_m + b_1 \mathbf{w}_1 +\cdots + b_n \mathbf{w}_n
\Rightarrow  T\mathbf{v} = a_1 T\mathbf{u}_1 + \cdots + a_m T\mathbf{u}_m + b_1 T\mathbf{w}_1 +\cdots + b_n T\mathbf{w}_n
\Rightarrow  T\mathbf{v} = b_1 T\mathbf{w}_1 + \cdots + b_n T\mathbf{w}_n \; \; \because T\mathbf{u}_i = 0

Thus, \{T\mathbf{w}_1, \ldots, T\mathbf{w}_n \} spans im T.

We now only need to show that this list is not redundant; that is, that \{T\mathbf{w}_1, \ldots, T\mathbf{w}_n \} is linearly independent. We can do this by showing that a linear combination of these vectors is zero if and only if the coefficient on each vector is zero. Let:

c_1 T\mathbf{w}_1 + \cdots + c_n T\mathbf{w}_n = 0 \Leftrightarrow T(c_1 \mathbf{w}_1 + \cdots + c_n \mathbf{w}_n)=0
\therefore c_1 \mathbf{w}_1 + \cdots + c_n \mathbf{w}_n \in \operatorname{ker} \; T

Then, since the \mathbf{u}_i span ker T, there exists a set of scalars d_i such that:

c_1 \mathbf{w}_1 + \cdots + c_n \mathbf{w}_n = d_1 \mathbf{u}_1 + \cdots + d_m \mathbf{u}_m

But, since \{\mathbf{u}_1, \ldots, \mathbf{u}_m, \mathbf{w}_1, \ldots, \mathbf{w}_n\} forms a basis of V, all c_i and d_i must be zero. Therefore, \{T\mathbf{w}_1, \ldots, T\mathbf{w}_n \} is linearly independent and indeed a basis of im T. This proves that the dimension of im T is n, as desired.

In more abstract terms, the map T : V → im T splits.
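
The construction used in this proof can also be carried out numerically for a concrete map. The sketch below is our own illustration: T is an arbitrarily chosen 3 × 4 matrix, a basis of ker T is read off from the singular value decomposition, extended to a basis of the domain, and the images of the added vectors are checked to form a basis of im T.

  import numpy as np

  # A concrete T : R^4 -> R^3, chosen arbitrarily for illustration.
  T = np.array([[1., 2., 0., 1.],
                [0., 1., 1., 0.],
                [1., 3., 1., 1.]])
  n = T.shape[1]

  # Basis of ker T: right singular vectors whose singular values are numerically zero.
  _, s, Vt = np.linalg.svd(T)
  r = int(np.sum(s > 1e-12 * s.max()))
  kernel_basis = Vt[r:].T                      # columns span ker T

  # Extend to a basis of R^n by appending standard basis vectors that raise the rank.
  basis = kernel_basis
  for e in np.eye(n):
      candidate = np.column_stack([basis, e])
      if np.linalg.matrix_rank(candidate) > np.linalg.matrix_rank(basis):
          basis = candidate
  w = basis[:, kernel_basis.shape[1]:]         # the added vectors

  # The images of the added vectors are linearly independent and span im T.
  assert np.linalg.matrix_rank(T @ w) == w.shape[1] == r
  print(kernel_basis.shape[1] + w.shape[1] == n)   # dim ker T + dim im T = dim V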

Second proof: Let A be an m × n matrix with r linearly independent columns (i.e. the rank of A is r). We will show that: (i) there exists a set of n − r linearly independent solutions to the homogeneous system Ax = 0, and (ii) that every other solution is a linear combination of these n − r solutions. In other words, we will produce an n × (n − r) matrix X whose columns form a basis of the null space of A.

Without loss of generality, assume that the first r columns of A are linearly independent. So, we can write A = [A1:A2], where A1 is m × r with r linearly independent column vectors and A2 is m × (n − r), each of whose n − r columns is a linear combination of the columns of A1. This means that A2 = A1B for some r × (n − r) matrix B (see rank factorization) and, hence, A = [A1:A1B]. Let

\mathbf{X} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} ,

where \mathbf{I}_{n-r} is the (n − r) × (n − r) identity matrix. We note that X is an n × (n − r) matrix that satisfies


\mathbf{A}\mathbf{X} = [\mathbf{A}_1:\mathbf{A}_1\mathbf{B}]\begin{pmatrix}
-\mathbf{B} \\
 \mathbf{I}_{n-r} 
\end{pmatrix} = -\mathbf{A}_1\mathbf{B} + \mathbf{A}_1\mathbf{B} = \mathbf{O}\; .

Therefore, each of the n − r columns of X is a particular solution of Ax = 0. Furthermore, the n − r columns of X are linearly independent, because Xu = 0 implies u = 0:

 \mathbf{X}\mathbf{u} = \mathbf{0} \Rightarrow \begin{pmatrix}
-\mathbf{B} \\
 \mathbf{I}_{n-r} 
\end{pmatrix}\mathbf{u} = \mathbf{0} \Rightarrow \begin{pmatrix}
-\mathbf{B}\mathbf{u} \\
 \mathbf{u} 
\end{pmatrix} = \begin{pmatrix}
\mathbf{0} \\
 \mathbf{0}
\end{pmatrix} \Rightarrow \mathbf{u} = \mathbf{0}\; .

Therefore, the column vectors of X constitute a set of n − r linearly independent solutions for Ax = 0.

We next prove that any solution of Ax = 0 must be a linear combination of the columns of X. For this, let \mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} be any vector such that Au = 0. Note that since the columns of A1 are linearly independent, A1x = 0 implies x = 0. Therefore,


\mathbf{A}\mathbf{u} = \mathbf{0} \Rightarrow  [\mathbf{A}_1:\mathbf{A}_1\mathbf{B}]\begin{pmatrix}
 \mathbf{u}_1 \\
 \mathbf{u}_2 
\end{pmatrix} = \mathbf{0} \Rightarrow \mathbf{A}_1(\mathbf{u}_1 + \mathbf{B}\mathbf{u}_2) = \mathbf{0}
\Rightarrow \mathbf{u}_1 + \mathbf{B}\mathbf{u}_2 = \mathbf{0} \Rightarrow \mathbf{u}_1 = -\mathbf{B}\mathbf{u}_2
 \Rightarrow \mathbf{u} = \begin{pmatrix}
 \mathbf{u}_1 \\
 \mathbf{u}_2 
\end{pmatrix} = \begin{pmatrix}
 -\mathbf{B} \\
  \mathbf{I}_{n-r} 
\end{pmatrix}\mathbf{u}_2 = \mathbf{X}\mathbf{u}_2.

This proves that any vector u that is a solution of Ax = 0 must be a linear combination of the n − r special solutions given by the columns of X. And we have already seen that the columns of X are linearly independent. Hence, the columns of X constitute a basis for the null space of A. Therefore, the nullity of A is n − r. Since r equals the rank of A, it follows that rk(A) + nul(A) = n. QED.
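
The matrix X from this proof can be constructed explicitly for a small example. In the sketch below (NumPy, with A1 and B chosen arbitrarily, so m = 3, r = 2, n = 4), A is assembled as [A1:A1B] and X = [−B; I_{n−r}] is verified to consist of n − r linearly independent solutions of Ax = 0.

  import numpy as np

  # Assemble A = [A1 : A1 B] as in the proof; A1 and B are arbitrary illustrations.
  A1 = np.array([[1., 0.],
                 [0., 1.],
                 [1., 1.]])              # m x r, with linearly independent columns
  B  = np.array([[1., 2.],
                 [3., 4.]])              # r x (n - r)
  A  = np.hstack([A1, A1 @ B])           # m x n, of rank r

  m, n = A.shape
  r = np.linalg.matrix_rank(A)

  # The null-space basis from the proof: X = [[-B], [I_{n-r}]].
  X = np.vstack([-B, np.eye(n - r)])

  assert np.allclose(A @ X, 0)                 # A X = O
  assert np.linalg.matrix_rank(X) == n - r     # the n - r columns are independent
  print(r, n - r, r + (n - r) == n)            # 2 2 True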

Reformulations and generalizations

This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as follows: if

0 → U → V → R → 0

is a short exact sequence of vector spaces, then

dim(U) + dim(R) = dim(V).

Here R plays the role of im T and U is ker T, i.e.

 0 \rightarrow \ker T~\overset{Id}{\rightarrow}~V~\overset{T}{\rightarrow}~\operatorname{im} T \rightarrow 0

In the finite-dimensional case, this formulation admits a generalization: if

0 → V_1 → V_2 → ... → V_r → 0

is an exact sequence of finite-dimensional vector spaces, then

\sum_{i=1}^r (-1)^i\dim(V_i) = 0.[3]
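
For instance, applying this with r = 3 to the short exact sequence displayed above, so that V_1 = ker T, V_2 = V and V_3 = im T, gives

-\operatorname{dim}(\operatorname{ker} T) + \operatorname{dim}(V) - \operatorname{dim}(\operatorname{im} T) = 0 ,

which is the rank–nullity theorem again.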

The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map T : V → W, where V and W are finite-dimensional, is defined by

index T = dim(ker T) − dim(coker T).

Intuitively, dim(ker T) is the number of independent solutions x of the equation Tx = 0, and dim(coker T) is the number of independent restrictions that have to be put on y to make Tx = y solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement

index T = dim(V) − dim(W).
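
For a map given by a matrix this can be checked directly. The sketch below uses an arbitrary random matrix of our own; it computes dim(ker T) and dim(coker T) from the rank and confirms that their difference equals dim(V) − dim(W).

  import numpy as np

  # T : R^n -> R^m represented by an arbitrary (here random) m x n matrix.
  rng = np.random.default_rng(0)
  n, m = 5, 3
  T = rng.standard_normal((m, n))

  rank = np.linalg.matrix_rank(T)
  dim_ker = n - rank      # rank-nullity: dim(ker T) = dim(V) - rk(T)
  dim_coker = m - rank    # dim(coker T) = dim(W) - dim(im T)

  index = dim_ker - dim_coker
  print(index, n - m)     # both equal 2: the index depends only on dim(V) and dim(W)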

We see that we can easily read off the index of the linear map T from the involved spaces, without any need to analyze T in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.

Notes

  1. Meyer (2000), page 199.
  2. Banerjee and Roy (2014).
  3.

References

  • Banerjee, Sudipto; Roy, Anindya (2014). Linear Algebra and Matrix Analysis for Statistics. Texts in Statistical Science. Chapman and Hall/CRC.
  • Meyer, Carl D. (2000). Matrix Analysis and Applied Linear Algebra. SIAM.