
The Singular Value Decomposition

Suppose that $T: V \to W$ is a linear map between inner product spaces. The adjoint $T^{*}$ of $T$ is the linear map $T^{*}: W \to V$ satisfying $\langle w, Tv\rangle = \langle T^{*}w, v\rangle$.

To see that $T^{*}$ exists, fix $w$ and consider the linear map $\phi_w: v \mapsto \langle w, Tv\rangle$. Since $\phi_w$ belongs to $\operatorname{Hom}(V, F)$, there is a unique $v' \in V$ such that $\phi_w(v) = \langle v', v\rangle$. Then $v' = T^{*}w$ by definition. The adjoint has the properties:
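In the standard bases of $\mathbf{R}^n$ and $\mathbf{R}^m$, the adjoint of a matrix map is its transpose, so the defining identity can be checked numerically. (This is an illustrative sketch; the matrices and vectors below are arbitrary, not from the text.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A map T : R^3 -> R^2 given by a matrix A in the standard
# (orthonormal) bases; its adjoint T* is represented by A^T.
A = rng.standard_normal((2, 3))

v = rng.standard_normal(3)
w = rng.standard_normal(2)

# <w, Tv> in R^2 equals <T*w, v> in R^3.
lhs = w @ (A @ v)
rhs = (A.T @ w) @ v
assert np.isclose(lhs, rhs)
```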

  • $(S+T)^{*} = S^{*} + T^{*}$ and $(aS)^{*} = \bar{a}S^{*}$
  • $(ST)^{*} = T^{*}S^{*}$.
  • $(T^{*})^{*} = T$
  • $(T^{-1})^{*} = (T^{*})^{-1}$
  • The identity is its own adjoint.
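For real matrices (where the adjoint is the transpose and $\bar{a} = a$), all of these properties can be verified numerically. A small sketch with arbitrary matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))  # invertible almost surely
a = 2.5

adj = lambda M: M.T  # over R the adjoint is the transpose

assert np.allclose(adj(S + T), adj(S) + adj(T))
assert np.allclose(adj(a * S), a * adj(S))        # over R, a-bar = a
assert np.allclose(adj(S @ T), adj(T) @ adj(S))   # order reverses
assert np.allclose(adj(adj(T)), T)
assert np.allclose(adj(np.linalg.inv(T)), np.linalg.inv(adj(T)))
assert np.allclose(adj(np.eye(4)), np.eye(4))     # identity is self-adjoint
```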

The spectral theorem (over $\mathbf{R}$) applies to self-adjoint operators $T: V \to V$ where $V$ is an inner product space. The singular value decomposition is a generalization to operators $T: V \to W$. So let $V$ and $W$ be real inner product spaces and $T: V \to W$ a linear map.

Let $Q = T^{*}T$. Then $Q$ is a map from $V$ to $V$:

  • $Q$ is self-adjoint.
  • $\langle Qv, v\rangle \geq 0$ for all $v \in V$. A self-adjoint operator with this property is called positive.
  • The null space of $Q$ is the same as the null space of $T$.
  • The range (column space) of $Q$ is the same as the column space of $T^{*}$.
  • The ranks of $T$, $T^{*}$, and $Q$ all coincide.

Proof:

The key point is that $\langle Qv, v\rangle = \langle T^{*}Tv, v\rangle = \langle Tv, Tv\rangle$. So $\langle Qv, v\rangle$ is greater than or equal to zero; and if it equals zero then $Tv = 0$, and conversely. This proves that $\operatorname{null}(Q) = \operatorname{null}(T)$.

In general:

  • The null space of $T$ is the orthogonal complement of the range of $T^{*}$.
  • The range of $T$ is the orthogonal complement of the null space of $T^{*}$.
  • The null space of $T^{*}$ is the orthogonal complement of the range of $T$.
  • The range of $T^{*}$ is the orthogonal complement of the null space of $T$.

So since the null space of $Q$ and the null space of $T$ coincide, and $Q$ is self-adjoint, we have $\operatorname{range}(Q) = (\operatorname{null} Q)^{\perp} = (\operatorname{null} T)^{\perp} = \operatorname{range}(T^{*})$.

Also $\dim \operatorname{range}(T^{*}) = \dim \operatorname{null}(T^{*})^{\perp} = \dim W - \dim \operatorname{null}(T^{*}) = \dim \operatorname{range}(T)$.
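These facts about $Q = T^{*}T$ can be seen concretely for a matrix of known rank. A numerical sketch (the rank-2 matrix here is a hypothetical example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
# A rank-2 map T : R^4 -> R^3 built as a product of thin factors.
T = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))
Q = T.T @ T  # Q = T*T is a 4x4 matrix

r = np.linalg.matrix_rank
assert r(T) == r(T.T) == r(Q) == 2   # the ranks coincide

# null(Q) = null(T): a vector killed by Q is killed by T.
_, s, Vt = np.linalg.svd(Q)
v = Vt[-1]              # direction for the smallest singular value of Q
assert np.allclose(Q @ v, 0)
assert np.allclose(T @ v, 0)
```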

Definition: If $T: V \to W$ is a linear map, then $Q = T^{*}T$ is diagonalizable. Let $\Lambda$ be the diagonal matrix of $Q$ in an orthonormal basis given by the spectral theorem, with eigenvalues listed in decreasing order. The singular values of $T$ are the entries of $\Lambda^{1/2}$ – the square roots of the eigenvalues of $Q$.

Proposition: (Singular value decomposition) Let $T: V \to W$ be a linear map of inner product spaces with singular values $s_1, \dots, s_n$. Then there are orthonormal bases $e_1, \dots, e_n$ and $f_1, \dots, f_n$ for $V$ and $W$ such that $T(v) = \sum_{i=1}^{n} s_i \langle v, e_i\rangle f_i$ for all $v \in V$. In matrix terms this means that $T$ is “diagonal” in the bases given by the $e$’s and $f$’s (although its matrix needn’t be square).
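The coordinate formula can be checked with a numerical SVD: the rows of the returned $V^{T}$ factor are the $e_i$, the columns of the left factor are the $f_i$, and summing $s_i \langle v, e_i\rangle f_i$ recovers $Tv$. (A sketch with an arbitrary matrix.)

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))   # T : R^4 -> R^3

P, s, Qt = np.linalg.svd(A, full_matrices=False)
# Rows of Qt are the e_i in V = R^4; columns of P are the f_i in W = R^3.

v = rng.standard_normal(4)
Tv = sum(s[i] * (v @ Qt[i]) * P[:, i] for i in range(len(s)))
assert np.allclose(Tv, A @ v)     # T(v) = sum_i s_i <v, e_i> f_i
```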

For matrices this is usually written as follows.

Theorem: (SVD) Let $A$ be an $m \times n$ matrix over $\mathbf{R}$. Then there are orthogonal matrices $P$ and $Q$ such that $A = PDQ$, where $P$ is $m \times m$, $Q$ is $n \times n$, and $D$ is $m \times n$. The “diagonal” entries of $D$ are the singular values of $A$.

This is the matrix version of the previous statement.
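The factorization in the theorem can be computed directly with `numpy.linalg.svd`, which returns the two orthogonal factors and the singular values (a sketch with an arbitrary matrix; numpy returns the second factor already transposed, so it plays the role of $Q$ in $A = PDQ$):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 3
A = rng.standard_normal((m, n))

P, s, Q = np.linalg.svd(A, full_matrices=True)
# P is m x m, Q is n x n; build the m x n "diagonal" D from s.
D = np.zeros((m, n))
D[:n, :n] = np.diag(s)

assert np.allclose(P @ P.T, np.eye(m))   # P is orthogonal
assert np.allclose(Q @ Q.T, np.eye(n))   # Q is orthogonal
assert np.allclose(P @ D @ Q, A)         # A = PDQ
```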

Proof:

There’s an orthonormal basis $e_1, \dots, e_n$ of $V$ so that $Qe_i = \lambda_i e_i$, where the $\lambda_i$ are the eigenvalues of the self-adjoint operator $Q$. For the eigenvectors $e_i$ with nonzero eigenvalues, say $i = 1, \dots, r$, the vectors $f_i = Te_i/\sqrt{\lambda_i}$ are orthonormal, since $\langle Te_i, Te_j\rangle = \langle Qe_i, e_j\rangle = \lambda_i \delta_{ij}$. We can complete the set of $f_i$ (if needed) to an orthonormal basis of $W$. Any $v \in V$ can be written $v = \sum_{i=1}^{n} \langle v, e_i\rangle e_i$. The sought-after formula comes from applying $T$ to this expression.
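The construction in the proof can be carried out numerically: diagonalize $Q = T^{*}T$, rescale $Te_i$ by $1/\sqrt{\lambda_i}$, and check that the resulting $f_i$ are orthonormal. (A sketch assuming a full-rank matrix, so no completion step is needed.)

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((4, 3))   # rank 3 almost surely
Q = T.T @ T

# Orthonormal eigenbasis of the self-adjoint Q (eigh sorts ascending).
lam, E = np.linalg.eigh(Q)
lam, E = lam[::-1], E[:, ::-1]    # reorder so eigenvalues decrease

# f_i = T e_i / sqrt(lambda_i), one column per eigenvector.
F = (T @ E) / np.sqrt(lam)
assert np.allclose(F.T @ F, np.eye(3))   # the f_i are orthonormal
```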