Kernel (linear algebra)
In linear algebra and functional analysis, the kernel (also null space or nullspace) of a linear map L : V → W between two vector spaces or two modules V and W is the set of all elements v of V for which L(v) = 0. That is,

$$\ker(L) = \{ v \in V : L(v) = 0 \},$$

where 0 denotes the zero vector in W. The kernel of L is a linear subspace of the domain V.[1] For a linear map given as a matrix A, the kernel is simply the set of solutions to the equation Ax = 0, where x and 0 are understood to be column vectors. The dimension of the null space of A is called the nullity of A.
## Definition

The kernel of an m × n matrix A with entries in a field K is the set

$$\operatorname{Null}(A) = \{ x \in K^n : Ax = 0 \},$$

where 0 denotes the zero vector with m components. The matrix equation Ax = 0 is equivalent to a homogeneous system of m linear equations in the n unknowns x1, ..., xn. From this viewpoint, the null space of A is the same as the solution set to the homogeneous system.
## Computation

In this section, we use a simple example to show how the null space of a matrix may be computed. However, the method sketched here is not practical for effective computation; a more efficient method is presented below (see Basis).
Consider the matrix

$$A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.$$

The null space of this matrix consists of all vectors (x, y, z) ∈ R3 for which

$$\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

This can be written as a homogeneous system of linear equations involving x, y, and z:

$$\begin{aligned} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{aligned}$$

This can be written in matrix form as:

$$\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right].$$

Using Gauss–Jordan elimination, this reduces to:

$$\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right].$$

Now we can write the null space (the solution set of Ax = 0) in terms of c, where c is a scalar standing for the free variable z:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -c/16 \\ -13c/8 \\ c \end{bmatrix}.$$

Since c is a free variable, this can be simplified to

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix}.$$

The null space of A is precisely the set of solutions to these equations (in this case, a line through the origin in R3).
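The same computation can be checked mechanically. The following is a minimal sketch using the SymPy library (an assumption; the article itself does not prescribe any software), which works in exact rational arithmetic:

```python
from sympy import Matrix

# The matrix from the example above.
A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

# Reduced row echelon form, as produced by Gauss-Jordan elimination.
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, 1/16], [0, 1, 13/8]])
print(pivots)  # (0, 1): the first two columns are pivot columns; z is free

# One basis vector per free variable spans the null space.
for v in A.nullspace():
    print(v.T)                      # Matrix([[-1/16, -13/8, 1]])
    assert A * v == Matrix([0, 0])  # check that Av = 0
```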
## Examples

- If L: Rm → Rn, then the kernel of L is the solution set to a homogeneous system of linear equations. For example, if L is the operator
  L(x1, x2, x3) = (2x1 + 3x2 + 5x3, −4x1 + 2x2 + 3x3),
  then the kernel of L is the set of solutions to the equations
  2x1 + 3x2 + 5x3 = 0 and −4x1 + 2x2 + 3x3 = 0.
- Let C[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define L: C[0,1] → R by the rule
  L(f) = f(0.3).
  Then the kernel of L consists of all functions f ∈ C[0,1] for which f(0.3) = 0.
- Let C∞(R) be the vector space of all infinitely differentiable functions R → R, and let D: C∞(R) → C∞(R) be the differentiation operator
  D(f) = df/dx.
  Then the kernel of D consists of all functions in C∞(R) whose derivatives are zero, i.e. the set of all constant functions.
- Let R∞ be the direct product of infinitely many copies of R, and let s: R∞ → R∞ be the shift operator
  s(x1, x2, x3, ...) = (x2, x3, x4, ...).
  Then the kernel of s is the one-dimensional subspace consisting of all vectors (x1, 0, 0, ...).
- If V is an inner product space and W is a subspace, the kernel of the orthogonal projection V → W is the orthogonal complement to W in V.
## Subspace properties

The null space of an m × n matrix A is a subspace of Rn; that is, the set Null(A) has the following three properties (illustrated numerically in the sketch after this list):

- Null(A) always contains the zero vector, since A0 = 0.
- If x ∈ Null(A) and y ∈ Null(A), then x + y ∈ Null(A). This follows from the distributivity of matrix multiplication over addition: A(x + y) = Ax + Ay = 0 + 0 = 0.
- If x ∈ Null(A) and c is a scalar, then cx ∈ Null(A), since A(cx) = c(Ax) = c0 = 0.
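The following sketch (assuming NumPy, and reusing the example matrix from the Computation section) checks the three properties on a concrete null space element:

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
x = np.array([-1/16, -13/8, 1.0])  # spans Null(A) for this A
y = 2.0 * x                        # another element of Null(A)

assert np.allclose(A @ np.zeros(3), 0)  # the zero vector lies in Null(A)
assert np.allclose(A @ (x + y), 0)      # Null(A) is closed under addition
assert np.allclose(A @ (7.0 * x), 0)    # ... and under scalar multiplication
```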
If L: V → W, then two elements of V have the same image in W if and only if their difference lies in the kernel of L:

$$L(v_1) = L(v_2) \quad\text{if and only if}\quad L(v_1 - v_2) = 0.$$

It follows that the image of L is isomorphic to the quotient of V by the kernel. When V is finite-dimensional, this implies the rank–nullity theorem:

$$\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V).$$
## Basis

A basis of the null space of a matrix may be computed by Gaussian elimination. For this purpose, given an m × n matrix A, we first construct the row-augmented matrix

$$\begin{bmatrix} A \\ \hline I \end{bmatrix},$$

where I is the n × n identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix

$$\begin{bmatrix} B \\ \hline C \end{bmatrix}.$$

A basis of the null space of A consists of the nonzero columns of C such that the corresponding column of B is a zero column.

In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.
For example, suppose that

$$A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

Putting the upper part in column echelon form by column operations on the whole matrix gives

$$\begin{bmatrix} B \\ \hline C \end{bmatrix} = \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right].$$

The last three columns of B are zero columns. Therefore, the three last vectors of C,

$$\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\quad \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix},\quad \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix},$$

are a basis of the null space of A.
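Column operations on [A; I] are row operations on the transpose [Aᵀ | I], so the method can be sketched in a few lines of SymPy (an assumption; any exact linear algebra library would do):

```python
from sympy import Matrix, eye, zeros

A = Matrix([[1, 0, -3, 0, 2, -8],
            [0, 1, 5, 0, -1, 4],
            [0, 0, 0, 1, 7, -9],
            [0, 0, 0, 0, 0, 0]])
m, n = A.shape

# Row-reducing [A^T | I] performs, in transposed form, the column
# operations on the row-augmented matrix [A; I] described above.
R, _ = A.T.row_join(eye(n)).rref()

# Rows whose A^T-part is zero carry, in their identity part,
# a basis vector of the null space of A.
basis = [R[i, m:].T for i in range(n) if R[i, :m] == zeros(1, m)]
for v in basis:
    assert A * v == zeros(m, 1)  # each basis vector satisfies Av = 0
    print(v.T)
```

For the matrix above this prints three vectors spanning the same null space as the basis displayed earlier, up to the choice of basis.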
## Relation to the row space

Let A be an m × n matrix (i.e., A has m rows and n columns). The product of A and the n-dimensional vector x can be written in terms of the dot product of vectors as follows:

$$Ax = \begin{bmatrix} a_1 \cdot x \\ a_2 \cdot x \\ \vdots \\ a_m \cdot x \end{bmatrix}.$$

Here a1, ..., am denote the rows of the matrix A. It follows that x is in the null space of A if and only if x is orthogonal (or perpendicular) to each of the row vectors of A, because the dot product of two vectors is zero exactly when they are orthogonal.
The row space of a matrix A is the span of the row vectors of A. By the above reasoning, the null space of A is the orthogonal complement to the row space. That is, a vector x lies in the null space of A if and only if it is perpendicular to every vector in the row space of A.
The dimension of the row space of A is called the rank of A, and the dimension of the null space of A is called the nullity of A. These quantities are related by the equation

$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$$
The equation above is known as the rank–nullity theorem.
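As a numerical illustration (NumPy assumed; the matrix is again the one from the Computation section):

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
m, n = A.shape
x = np.array([-1/16, -13/8, 1.0])  # spans Null(A) for this A

# x is perpendicular to every row of A ...
for row in A:
    assert abs(np.dot(row, x)) < 1e-12

# ... and rank(A) + nullity(A) = n, here 2 + 1 = 3.
rank = np.linalg.matrix_rank(A)   # dimension of the row space: 2
nullity = n - rank                # rank-nullity theorem gives 1
assert nullity == 1               # matches the single basis vector x
```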
## Nonhomogeneous equations

The null space also plays a role in the solution to a nonhomogeneous system of linear equations:

$$Ax = b.$$

If u and v are two possible solutions to the above equation, then

$$A(u - v) = Au - Av = b - b = 0.$$

Thus, the difference of any two solutions to the equation Ax = b lies in the null space of A.

It follows that any solution to the equation Ax = b can be expressed as the sum of a fixed solution v and an arbitrary element of the null space. That is, the solution set to the equation Ax = b is

$$\{ v + x : Av = b \text{ and } Ax = 0 \},$$
where v is any fixed vector satisfying Av = b. Geometrically, this says that the solution set to Ax = b is the translation of the null space of A by the vector v. See also Fredholm alternative and flat (geometry).
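A short sketch (NumPy assumed; the right-hand side b is an arbitrary illustrative choice) of this "particular solution plus null space" structure:

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
b = np.array([1.0, 2.0])

v, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution of Av = b
x0 = np.array([-1/16, -13/8, 1.0])         # spans Null(A)

# Translating the null space by v sweeps out the whole solution set.
for c in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (v + c * x0), b)
```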
## Left null space

The left null space of a matrix A consists of all vectors x such that xTA = 0T, where T denotes the transpose of a vector. The left null space of A is the same as the null space of AT. It is the orthogonal complement of the column space of A, and is the cokernel of the associated linear transformation. The null space, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A.
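All four subspaces can be computed in a few lines of SymPy (an assumption), with the left null space obtained as the null space of the transpose:

```python
from sympy import Matrix

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

row_space = A.rowspace()           # basis of the row space
column_space = A.columnspace()     # basis of the column space
null_space = A.nullspace()         # basis of Null(A)
left_null_space = A.T.nullspace()  # basis of Null(A^T), the left null space

# The two rank-nullity identities for the four fundamental subspaces:
assert len(row_space) + len(null_space) == A.cols          # 2 + 1 == 3
assert len(column_space) + len(left_null_space) == A.rows  # 2 + 0 == 2
```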
## Numerical computation

The problem of computing the null space on a computer depends on the nature of the coefficients.
If the coefficients of the matrix are exactly given integers or rational numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic, which reduces the problem to a similar one over a finite field.
For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known. They have roughly the same computational complexity, but are faster and behave better with modern computer hardware.
For matrices whose entries are floating-point numbers, the problem of computing the null space makes sense only for matrices whose number of rows equals their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its null space only if it is well conditioned, i.e. if it has a low condition number.
Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for the result to be significant. As the computation of the null space of a matrix is a special instance of solving a homogeneous system of linear equations, the null space may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the Lapack library.
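In floating point, a common approach is to take the null space from a rank-revealing decomposition such as the singular value decomposition. The sketch below assumes NumPy (whose np.linalg.svd calls LAPACK routines internally); the helper name null_space and the tolerance choice are illustrative, not prescribed by the article. Right singular vectors whose singular values fall below the threshold are treated as spanning the numerical null space:

```python
import numpy as np

def null_space(A, rtol=1e-12):
    """Orthonormal basis (as columns) of the numerical null space of A."""
    _, s, vh = np.linalg.svd(A)
    tol = rtol * max(A.shape) * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T  # remaining right singular vectors span Null(A)

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)
print(N.shape)                             # (3, 1): nullity 1
assert np.allclose(A @ N, 0.0, atol=1e-10)
```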
## See also

- System of linear equations
- Row and column spaces
- Row reduction
- Four fundamental subspaces
- Vector space
- Linear subspace
- Linear operator
- Function space
- Fredholm alternative
## Notes

1. Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.
2. This equation uses set-builder notation.
## References

- Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
- Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
- Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8
- Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
- Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
- Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
- Lang, Serge (1987), Linear Algebra, Springer, p. 59, ISBN 9780387964126
- Trefethen, Lloyd N.; Bau, David, III (1997), Numerical Linear Algebra, SIAM, ISBN 978-0-89871-361-9 (online version)
## External links

- Wikibooks has a book on the topic of: Linear Algebra/Null Spaces
- Hazewinkel, Michiel, ed. (2001), "Kernel of a matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces at Google Video, from MIT OpenCourseWare
- Khan Academy, Introduction to the Null Space of a Matrix