
# Use of Newton’s Method to Determine Matrix Eigenvalues

The problem here is to develop a routine that determines one or more eigenvalues of a matrix using Newton’s method, treating the eigenvalue problem as the solution of a system of nonlinear equations.

The simplest way to illustrate the problem and its solution is to use a $4\times 4$ matrix.

Let us consider a square matrix $A$. The definition of an eigenvalue requires that

$Ax=\lambda x$

For our test case, writing out the left- and right-hand sides yields

$\left[\begin{array}{c} a_{{1,1}}x_{{1}}+a_{{1,2}}x_{{2}}+a_{{1,3}}x_{{3}}+a_{{1,4}}x_{{4}}\\ a_{{2,1}}x_{{1}}+a_{{2,2}}x_{{2}}+a_{{2,3}}x_{{3}}+a_{{2,4}}x_{{4}}\\ a_{{3,1}}x_{{1}}+a_{{3,2}}x_{{2}}+a_{{3,3}}x_{{3}}+a_{{3,4}}x_{{4}}\\ a_{{4,1}}x_{{1}}+a_{{4,2}}x_{{2}}+a_{{4,3}}x_{{3}}+a_{{4,4}}x_{{4}} \end{array}\right]=\left[\begin{array}{c} \lambda\, x_{{1}}\\ \lambda\, x_{{2}}\\ \lambda\, x_{{3}}\\ \lambda\, x_{{4}} \end{array}\right]$

The problem here is that, with the intrusion of the eigenvalue, we
have one more unknown than we have equations for. We can remedy this
by insisting that the values of $x$ be normalized, or

${x_{{1}}}^{2}+{x_{{2}}}^{2}+{x_{{3}}}^{2}+{x_{{4}}}^{2}=1$

Let us now construct a vector such that we have a closed system, all on the left hand side:

$\left[\begin{array}{c} a_{{1,1}}x_{{1}}+a_{{1,2}}x_{{2}}+a_{{1,3}}x_{{3}}+a_{{1,4}}x_{{4}}-\lambda\, x_{{1}}\\ a_{{2,1}}x_{{1}}+a_{{2,2}}x_{{2}}+a_{{2,3}}x_{{3}}+a_{{2,4}}x_{{4}}-\lambda\, x_{{2}}\\ a_{{3,1}}x_{{1}}+a_{{3,2}}x_{{2}}+a_{{3,3}}x_{{3}}+a_{{3,4}}x_{{4}}-\lambda\, x_{{3}}\\ a_{{4,1}}x_{{1}}+a_{{4,2}}x_{{2}}+a_{{4,3}}x_{{3}}+a_{{4,4}}x_{{4}}-\lambda\, x_{{4}}\\ {x_{{1}}}^{2}+{x_{{2}}}^{2}+{x_{{3}}}^{2}+{x_{{4}}}^{2}-1 \end{array}\right]=0$

Since the object of Newton’s method is to drive this vector to zero, we need an intermediate vector $F$, or

$F=\left[\begin{array}{c} a_{{1,1}}x_{{1}}+a_{{1,2}}x_{{2}}+a_{{1,3}}x_{{3}}+a_{{1,4}}x_{{4}}-\lambda\, x_{{1}}\\ a_{{2,1}}x_{{1}}+a_{{2,2}}x_{{2}}+a_{{2,3}}x_{{3}}+a_{{2,4}}x_{{4}}-\lambda\, x_{{2}}\\ a_{{3,1}}x_{{1}}+a_{{3,2}}x_{{2}}+a_{{3,3}}x_{{3}}+a_{{3,4}}x_{{4}}-\lambda\, x_{{3}}\\ a_{{4,1}}x_{{1}}+a_{{4,2}}x_{{2}}+a_{{4,3}}x_{{3}}+a_{{4,4}}x_{{4}}-\lambda\, x_{{4}}\\ {x_{{1}}}^{2}+{x_{{2}}}^{2}+{x_{{3}}}^{2}+{x_{{4}}}^{2}-1 \end{array}\right]$
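As a sketch of this definition (in Python rather than the FORTRAN used later; the `f_vec` helper name is ours), the $F$ vector can be assembled directly, and it vanishes at a normalized eigenpair:

```python
# Build the Newton "F" vector for the eigenvalue problem:
# first n entries are (A x - lambda x)_i, last entry is the
# normalization condition sum(x_i**2) - 1.
def f_vec(A, x, lam):
    n = len(A)
    F = [sum(A[i][j] * x[j] for j in range(n)) - lam * x[i]
         for i in range(n)]
    F.append(sum(xi * xi for xi in x) - 1.0)
    return F

# At a true, normalized eigenpair F vanishes: for diag(1,2,3,4),
# x = e2 and lambda = 2 give F = 0 exactly.
A = [[float(i + 1) if i == j else 0.0 for j in range(4)] for i in range(4)]
x = [0.0, 1.0, 0.0, 0.0]
print(f_vec(A, x, 2.0))  # [0.0, 0.0, 0.0, 0.0, 0.0]
```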

Let us make $\lambda$ the last “$x$” or

$\lambda=x_{n+1}$

The Jacobian of this vector with respect to the expanded $x$ vector is

$J\left(F,x\right)=\left[\begin{array}{ccccc} a_{{1,1}}-\lambda & a_{{1,2}} & a_{{1,3}} & a_{{1,4}} & -x_{{1}}\\ a_{{2,1}} & a_{{2,2}}-\lambda & a_{{2,3}} & a_{{2,4}} & -x_{{2}}\\ a_{{3,1}} & a_{{3,2}} & a_{{3,3}}-\lambda & a_{{3,4}} & -x_{{3}}\\ a_{{4,1}} & a_{{4,2}} & a_{{4,3}} & a_{{4,4}}-\lambda & -x_{{4}}\\ 2\, x_{{1}} & 2\, x_{{2}} & 2\, x_{{3}} & 2\, x_{{4}} & 0 \end{array}\right]$
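This structure can be assembled mechanically: the upper-left block is $A-\lambda I$, the last column holds $-x_i$, the last row holds $2x_j$, and the corner entry is zero. A small Python sketch (the `jacobian` helper name is ours):

```python
# Assemble the (n+1) x (n+1) Jacobian of F with respect to the
# expanded vector (x_1..x_n, lambda):
#   upper-left block:   A - lambda*I
#   last column:        -x_i   (derivatives with respect to lambda)
#   last row:           2*x_j  (derivative of the normalization row)
#   bottom-right entry: 0
def jacobian(A, x, lam):
    n = len(A)
    J = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] + [-x[i]]
         for i in range(n)]
    J.append([2.0 * xj for xj in x] + [0.0])
    return J

# 2x2 analogue of the tridiagonal test matrix:
J = jacobian([[2.0, -1.0], [-1.0, 2.0]], [0.5, 0.5], 1.0)
# J == [[ 1.0, -1.0, -0.5],
#       [-1.0,  1.0, -0.5],
#       [ 1.0,  1.0,  0.0]]
```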

Now we can solve the Newton’s method equation iteratively:

$x_{m+1}=x_{m}-J\left(F,x\right)^{-1}F$

It is certainly possible to invert the Jacobian symbolically, but the algebra becomes very complicated, even for a matrix as small as this one. A more sensible approach is to evaluate both the Jacobian and the $F$ vector numerically, invert the former using the Gauss-Jordan technique, and multiply the result by the $F$ vector before performing the Newton update.
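Putting these pieces together, here is a minimal sketch of the whole iteration in Python (not the article’s FORTRAN). It solves $J\,d=F$ by Gaussian elimination instead of explicitly inverting the Jacobian, which is numerically equivalent to the inversion-and-multiply approach described above; the $2\times2$ test matrix and the function names are our own illustration:

```python
def solve(M, b):
    """Solve M y = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] for row in M]   # work on copies
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= fac * M[k][c]
            b[r] -= fac * b[k]
    y = [0.0] * n
    for k in range(n - 1, -1, -1):
        y[k] = (b[k] - sum(M[k][c] * y[c] for c in range(k + 1, n))) / M[k][k]
    return y

def newton_eig(A, start, tol=1.0e-10, maxit=100):
    """Newton's method for one eigenpair of A; every entry of the
    expanded vector (x_1..x_n, lambda) is initialized to `start`."""
    n = len(A)
    v = [float(start)] * (n + 1)
    for _ in range(maxit):
        lam = v[n]
        # F vector: A x - lambda x, plus the normalization row
        F = [sum(A[i][j] * v[j] for j in range(n)) - lam * v[i]
             for i in range(n)]
        F.append(sum(v[j] * v[j] for j in range(n)) - 1.0)
        # Jacobian: [A - lambda*I | -x ; 2 x^T | 0]
        J = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] + [-v[i]]
             for i in range(n)]
        J.append([2.0 * v[j] for j in range(n)] + [0.0])
        d = solve(J, F)                 # Newton step: J d = F
        v = [vi - di for vi, di in zip(v, d)]
        if max(abs(di) for di in d) < tol:
            break
    return v[:n], v[n]

# [[2,1],[1,2]] has eigenvalues 1 and 3; this start converges to 3.
vec, lam = newton_eig([[2.0, 1.0], [1.0, 2.0]], start=2.0)
```

As in the FORTRAN program below, which eigenvalue is found depends on the starting value, and some starts (here, `start=1.0`) make the Jacobian singular and the iteration fail.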

Now let us consider the case of the tridiagonal matrix

$A=\left[\begin{array}{cccc} 2 & -1 & 0 & 0\\ -1 & 2 & -1 & 0\\ 0 & -1 & 2 & -1\\ 0 & 0 & -1 & 2 \end{array}\right]$

The eigenvalues of this matrix are

$\lambda=\frac{5}{2}\pm\frac{\sqrt{5}}{2},\frac{3}{2}\pm\frac{\sqrt{5}}{2}$

or numerically 3.618033989, 1.381966011, 2.618033989, and 0.381966011. The FORTRAN 77 code used for this is given at the end of the problem. Although the matrix is “hard coded” into the program, only one parameter needs to be changed to change the matrix size.
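These numbers agree with the standard closed form for the eigenvalues of the $n\times n$ tridiagonal matrix with 2 on the diagonal and $-1$ on the off-diagonals, $\lambda_k=2-2\cos\left(k\pi/(n+1)\right)$, as a quick check (in Python) confirms:

```python
import math

# Closed-form eigenvalues of the n x n (-1, 2, -1) tridiagonal matrix:
# lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1..n
n = 4
eigs = sorted(2.0 - 2.0 * math.cos(k * math.pi / (n + 1))
              for k in range(1, n + 1))
for lam in eigs:
    print(round(lam, 9))
# prints 0.381966011, 1.381966011, 2.618033989, 3.618033989
```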

The summary results of Newton’s Method are shown below.

There are two different quantities tracked above:

• The residual, i.e., the Euclidean norm of the Newton update to the entire $x$ vector (eigenvector components plus eigenvalue).
• The error, i.e., the change in the eigenvalue from one step to the next.

Both these are tracked for three different starting places. For convenience,
all of the $x$ vector entries were initialized at the same value.

Some comments on the solution are as follows:

• As mentioned earlier, there were four eigenvalues to be determined. The method returns only one eigenvalue for each starting point, and that eigenvalue depends upon the starting point. The eigenvalues determined were 0.381966 (Start = 1 and Start = 2) and 2.6180339 (Start = 3). Like the power method, this approach determines both an eigenvalue and an eigenvector. Unlike the power method, it does not necessarily return the largest or smallest eigenvalue, or even one easily predictable from the choice of starting value. With Newton’s method this is unsurprising: when an equation has multiple roots, which root is found depends very much on the starting point. But when the number of roots equals the size of the matrix, this correspondence can become very complicated, and in some cases certain eigenvalues may be missed altogether.
• As a corollary to the previous point, with some starting values the convergence nearly comes apart, especially with Start = 3. With larger starting values than this, the program generates “nan” values and terminates abnormally. This behavior is inherent in the method; poor initial estimates can lead successive iterations “off the track” and thus fail to converge. It is therefore necessary to know the range of eigenvalues before one actually determines them, which to a great extent defeats the purpose of the method.
• The method is better at finding eigenvalues than eigenvectors. The residuals of the full vector (eigenvector components plus eigenvalue) converge much more slowly than the eigenvalue error. Runs that were not stopped by the eigenvalue termination criterion ($|\Delta\lambda|<1.0\times10^{-7}$) showed this slow convergence continuing in the eigenvector components; for the eigenvalues themselves, the convergence rate is reasonable.

The general impression of this method, therefore, is that under proper circumstances it is capable of producing eigenvalues and (to a lesser extent) eigenvectors, but that it is necessary to a) have some idea of the values of the eigenvalues going in and b) be prepared for the method to collapse with the wrong initial guesses. One possible application (given the convergence limitations discussed above) is to use it in conjunction with, say, the Householder-Givens method, which determines the eigenvalues but not the eigenvectors: the known eigenvalues give excellent starting values from which this method can then recover the eigenvectors.

These results are confirmed when we expand the matrix to $10\times 10$.

We see that all of the errors and residuals go up and down from the initial guesses until convergence is achieved. The eigenvalues determined were 0.6902785 (Start = 1), 1.7153704 (Start = 2), and 1.7153703 (Start = 3). Thus, as before, three starting values yielded only two eigenvalues. Convergence took somewhat longer than with the smaller matrix, but not by much. The method may therefore be somewhat insensitive to matrix size, which would make it useful with larger matrices.

## FORTRAN Code

The code calls several “standard” routines and include files whose significance is as follows:

• matsize.for is an include file containing a parameter statement for the actual matrix size. For the $10\times 10$ matrix it reads `parameter (nn=10, nplus1=nn+1, mcycle=100)`.
• xmtinv is a subroutine to invert a matrix using Gauss-Jordan elimination.
• matvec performs matrix-vector multiplication (in that order).
• veclen computes the Euclidean norm of a vector.
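The listings of these helper routines are not reproduced here. Purely as an illustration of the Gauss-Jordan step (in Python; the real xmtinv’s name, argument order, and in-place behavior may differ), the inversion can be sketched as:

```python
def gauss_jordan_inverse(a):
    """Invert the square matrix `a` by Gauss-Jordan elimination
    with partial pivoting, using an augmented [A | I] tableau."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(aug[r][k]))  # pivot row
        aug[k], aug[p] = aug[p], aug[k]
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]      # normalize pivot row
        for r in range(n):
            if r != k:                          # clear column k elsewhere
                f = aug[r][k]
                aug[r] = [vr - f * vk for vr, vk in zip(aug[r], aug[k])]
    return [row[n:] for row in aug]             # right half is A^-1

inv = gauss_jordan_inverse([[2.0, -1.0], [-1.0, 2.0]])
# the inverse of [[2,-1],[-1,2]] is (1/3)*[[2,1],[1,2]]
```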
```fortran
c     Determination of Eigenvalue(s) of Tridiagonal Matrix using
c     Newton's Method
      include 'matsize.for'
      dimension a(nn,nn),x(nplus1),f(nplus1),fj(nplus1,nplus1),
     &          xt(nplus1),resid(mcycle),eigerr(mcycle)
      call tstamp
c     Define original matrix a
      do 1 i=1,nn,1
      do 1 j=1,nn,1
        a(i,j)=0.0
        if (i.eq.j) a(i,j)=2.0
        if (abs(i-j).eq.1) a(i,j)=-1.0
    1 continue
      open (unit=2,file='m5610p2d.csv')
      write(2,*)'Original Matrix'
      do 2 i=1,nn,1
    2 write(2,3)(a(i,j),j=1,nn,1)
    3 format(100(g10.3,1h,))
c     Make initial guesses of the eigenvalue and the values of x
      do 100 mm=1,3,1
        vstart=float(mm)
        write(2,*)'Eigenvalue Results for vstart =',vstart
        do 4 i=1,nplus1,1
    4   x(i)=vstart
c       Cycle Newton's method
        do 5 m=1,mcycle,1
          eigold=x(nplus1)
c         Compute Jacobian for current step
          do 6 i=1,nplus1,1
          do 6 j=1,nplus1,1
c           Corner entry: d(normalization)/d(lambda) = 0
            if(i.eq.nplus1.and.j.eq.nplus1) then
              fj(nplus1,nplus1)=0.0
              goto 6
            endif
c           Diagonal of the A - lambda*I block
            if(i.eq.j) then
              fj(i,i)=a(i,i)-x(nplus1)
              goto 6
            endif
c           Last column: dF(i)/d(lambda) = -x(i)
            if(j.eq.nplus1) then
              fj(i,nplus1)=-x(i)
              goto 6
            endif
c           Last row: derivative of the normalization condition
            if(i.eq.nplus1) then
              fj(nplus1,j)=2.0*x(j)
              goto 6
            endif
            fj(i,j)=a(i,j)
    6     continue
c         Output Jacobian
          write(2,*)'Jacobian Matrix'
          do 10 i=1,nplus1,1
   10     write(2,3)(fj(i,j),j=1,nplus1,1)
c         Compute Newton "F" vector
          do 20 i=1,nplus1,1
            if(i-nplus1)21,22,22
c           First nn entries: (A*x)(i) - lambda*x(i)
   21       f(i)=-x(nplus1)*x(i)
            do 23 j=1,nn,1
              f(i)=f(i)+a(i,j)*x(j)
   23       continue
            goto 20
c           Last entry: normalization condition sum(x(j)**2) - 1
   22       f(i)=-1.0
            do 24 j=1,nn,1
   24       f(i)=f(i)+x(j)**2
   20     continue
c         Invert Jacobian
          call xmtinv(fj,nplus1,nplus1)
c         Postmultiply the inverse Jacobian by the f vector
          call matvec(fj,f,xt,nplus1,nplus1)
c         Update the x vector and output the result
          write(2,*)'Result Vector for m = ',m
          do 7 k=1,nplus1,1
            x(k)=x(k)-xt(k)
    7     write(2,*)x(k)
c         Compute norm of residual and error
          eigerr(m)=abs(x(nplus1)-eigold)
          call veclen(xt,resid(m),nplus1)
          if(resid(m).lt.1.0e-07.or.eigerr(m).lt.1.0e-07)goto 30
    5   continue
c       Output residual history
   30   write(2,*)'Residuals and Errors'
        write(2,101)mm,mm
  101   format(27hIteration,Residual (Start =,i2,
     &         16h),Error (Start =,i2,1h))
        do 31 i=1,m-1,1
   31   write(2,32)i,resid(i),eigerr(i)
   32   format(i3,2(1h,,g10.3))
  100 continue
      close(unit=2)
      stop
      end
```