Power iteration


In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv.

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.

The method

The power iteration algorithm starts with a vector b_0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the iteration

 b_{k+1} = \frac{Ab_k}{\|Ab_k\|}.

So, at every iteration, the vector b_k is multiplied by the matrix A and normalized.
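
A minimal sketch of the iteration in NumPy (the function name, example matrix, and step count below are illustrative assumptions, not part of the article):

    import numpy as np

    def power_iteration(A, num_steps=100):
        # Start from a random vector; with probability 1 it has a nonzero
        # component in the direction of the dominant eigenvector.
        b = np.random.default_rng(0).standard_normal(A.shape[0])
        for _ in range(num_steps):
            b = A @ b                     # multiply by A ...
            b /= np.linalg.norm(b)        # ... and normalize
        return b

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(power_iteration(A))  # approximately the dominant eigenvector of A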

Under the assumptions:

  • A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues
  • The starting vector b_0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue.

then:

  • A subsequence of \left( b_{k} \right) converges to an eigenvector associated with the dominant eigenvalue

Note that the sequence  \left( b_{k} \right)  does not necessarily converge. It can be shown that

b_{k} = e^{i \phi_{k}} v_{1} + r_{k},

where v_1 is an eigenvector associated with the dominant eigenvalue and  \| r_{k} \| \rightarrow 0 . The presence of the term  e^{i \phi_{k}}  implies that  \left( b_{k} \right)  does not converge unless  e^{i \phi_{k}} = 1 . Under the two assumptions listed above, the sequence  \left( \mu_{k} \right)  defined by

\mu_{k} = \frac{b_{k}^{*} A b_{k}}{b_{k}^{*} b_{k}}

converges to the dominant eigenvalue.
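
In code, μ_k is just the Rayleigh quotient of the current iterate (a sketch continuing the NumPy example above; for a real matrix the conjugate transpose reduces to the plain transpose):

    import numpy as np

    def dominant_eigenvalue(A, num_steps=100):
        b = np.random.default_rng(0).standard_normal(A.shape[0])
        for _ in range(num_steps):
            b = A @ b
            b /= np.linalg.norm(b)
        # mu_k = (b* A b) / (b* b); the denominator is 1 since b is normalized.
        return (b.conj() @ A @ b) / (b.conj() @ b)

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(dominant_eigenvalue(A))  # approximately 3.618 = (5 + sqrt(5))/2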


The method can also be used to calculate the spectral radius of a matrix by computing the Rayleigh quotient

 \frac{b_k^\top A b_k}{b_k^\top b_k}.

Analysis

Let A be decomposed into its Jordan canonical form: A = VJV^{-1}, where the first column of V is an eigenvector of A corresponding to the dominant eigenvalue λ_1. Since the dominant eigenvalue of A is unique, the first Jordan block of J is the 1 \times 1 matrix \begin{bmatrix} \lambda_{1} \end{bmatrix}, where λ_1 is the largest eigenvalue of A in magnitude. The starting vector b_0 can be written as a linear combination of the columns of V:

b_{0} = c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n}.

By assumption, b_0 has a nonzero component in the direction of the dominant eigenvalue, so  c_{1} \ne 0 .
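
For a concrete look at this decomposition (a sketch; the example matrix is an assumption, and sympy may order the Jordan blocks differently than the convention above), sympy can compute V and J with A = VJV^{-1}:

    from sympy import Matrix

    A = Matrix([[5, 1],
                [0, 2]])          # eigenvalues 5 and 2; 5 is dominant

    # jordan_form returns (V, J) with A == V * J * V**-1.
    V, J = A.jordan_form()
    assert A == V * J * V.inv()
    print(J)                      # diagonal here: each Jordan block is 1 x 1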

The computationally useful recurrence relation for b_{k+1} can be rewritten as

b_{k+1} = \frac{Ab_{k}}{\|Ab_{k}\|} = \frac{A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|},

where the expression \frac{A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|} is more amenable to the following analysis.
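
Before continuing, this identity is easy to check numerically (a quick sketch under the assumption of a random test matrix; the normalizing scalars are positive, so the two sides agree up to rounding):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    b = rng.standard_normal(5)

    # Left side: k steps of multiply-and-normalize.
    bk = b.copy()
    for _ in range(10):
        bk = A @ bk
        bk /= np.linalg.norm(bk)

    # Right side: normalize A^k b_0 in one shot.
    direct = np.linalg.matrix_power(A, 10) @ b
    direct /= np.linalg.norm(direct)

    print(np.allclose(bk, direct))  # True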

\begin{matrix}
b_{k} &=& \frac{A^{k}b_{0}}{\| A^{k} b_{0} \|} \\
      &=& \frac{\left( VJV^{-1} \right)^{k} b_{0}}{\|\left( VJV^{-1} \right)^{k}b_{0}\|} \\
      &=& \frac{ VJ^{k}V^{-1} b_{0}}{\| V J^{k} V^{-1} b_{0}\|} \\
      &=& \frac{ VJ^{k}V^{-1} \left( c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n} \right)}
               {\| V J^{k} V^{-1} \left( c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n} \right)\|} \\
      &=& \frac{ VJ^{k}\left( c_{1}e_{1} + c_{2}e_{2} + \cdots + c_{n}e_{n} \right)}
                {\| V J^{k} \left( c_{1}e_{1} + c_{2}e_{2} + \cdots + c_{n}e_{n} \right) \|} \\
      &=& \left( \frac{\lambda_{1}}{|\lambda_{1}|} \right)^{k} \frac{c_{1}}{|c_{1}|}
          \frac{ v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)}
               {\| v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right) \| }
           
\end{matrix}
The expression above simplifies as  k \rightarrow \infty :

\left( \frac{1}{\lambda_{1}} J \right)^{k} = 
\begin{bmatrix}
1 & & & & \\
& \left( \frac{1}{\lambda_{1}} J_{2} \right)^{k}& & & \\
& & \ddots & \\
& & & \left( \frac{1}{\lambda_{1}} J_{m} \right)^{k} \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1 & & & & \\
& 0 & & & \\
& & \ddots & \\
& & & 0 \\
\end{bmatrix}
as  k \rightarrow \infty .
The limit follows from the fact that the eigenvalue of  \frac{1}{\lambda_{1}} J_{i}  is less than 1 in magnitude, so 
\left( \frac{1}{\lambda_{1}} J_{i} \right)^{k} \rightarrow 0
as  k \rightarrow \infty .
It follows that:

\frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
\left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)
\rightarrow 0
as  k \rightarrow \infty .
Using this fact, bk can be written in a form that emphasizes its relationship with v1 when k is large:

\begin{matrix}
b_{k} &=& \left( \frac{\lambda_{1}}{|\lambda_{1}|} \right)^{k} \frac{c_{1}}{|c_{1}|}
          \frac{ v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)}
               {\| v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right) \| } \\
      &=& e^{i \phi k} \frac{c_{1}}{|c_{1}|} v_{1} + r_{k}
\end{matrix}
where  e^{i \phi} = \lambda_{1} / | \lambda_{1} |  and  \| r_{k} \| \rightarrow 0  as  k \rightarrow \infty .
The sequence  \left( b_{k} \right)  is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is unique only up to a scalar, so although the sequence  \left( b_{k} \right)  may not converge, b_k is nearly an eigenvector of A for large k.
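
A concrete instance of this non-convergence (a sketch; the diagonal matrix is an assumed example): when the dominant eigenvalue is negative, e^{iφ} = -1, so b_k flips sign at every step while μ_k still settles down.

    import numpy as np

    A = np.diag([-2.0, 1.0])      # dominant eigenvalue -2, so e^{i phi} = -1
    b = np.array([1.0, 1.0]) / np.sqrt(2.0)

    for k in range(6):
        b = A @ b
        b /= np.linalg.norm(b)
        mu = b @ A @ b            # Rayleigh quotient; b has unit length
        print(k, b, mu)
    # The first component of b alternates in sign, but mu -> -2.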

Alternatively, if A is diagonalizable, then the following proof yields the same result.

Let λ_1, λ_2, …, λ_m be the m eigenvalues (counted with multiplicity) of A and let v_1, v_2, …, v_m be the corresponding eigenvectors. Suppose that λ_1 is the dominant eigenvalue, so that | λ_1 | > | λ_j | for j > 1.

The initial vector b_0 can be written:

b_0 = c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{m}v_{m}.

If b_0 is chosen randomly (with uniform probability), then c_1 ≠ 0 with probability 1. Now,

\begin{matrix}A^{k}b_0 & = & c_{1}A^{k}v_{1} + c_{2}A^{k}v_{2} + \cdots + c_{m}A^{k}v_{m} \\
& = & c_{1}\lambda_{1}^{k}v_{1} + c_{2}\lambda_{2}^{k}v_{2} + \cdots + c_{m}\lambda_{m}^{k}v_{m} \\
& = & c_{1}\lambda_{1}^{k} \left( v_{1} + \frac{c_{2}}{c_{1}}\left(\frac{\lambda_{2}}{\lambda_{1}}\right)^{k}v_{2} + \cdots + \frac{c_{m}}{c_{1}}\left(\frac{\lambda_{m}}{\lambda_{1}}\right)^{k}v_{m}\right). \end{matrix}

The expression within parentheses converges to v_1 because | λ_j / λ_1 | < 1 for j > 1. On the other hand, we have

 b_k = \frac{A^kb_0}{\|A^kb_0\|}.

Therefore, b_k converges to (a multiple of) the eigenvector v_1. The convergence is geometric, with ratio

 \left| \frac{\lambda_2}{\lambda_1} \right|,

where λ_2 denotes the eigenvalue with the second largest magnitude. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
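
The rate is easy to observe numerically (a sketch; the matrix and iteration count are arbitrary assumptions). With eigenvalues 2 and 1.9, the error contracts by roughly |λ_2/λ_1| = 0.95 per step:

    import numpy as np

    A = np.diag([2.0, 1.9])           # |lambda_2 / lambda_1| = 0.95
    v1 = np.array([1.0, 0.0])         # dominant eigenvector
    b = np.array([1.0, 1.0]) / np.sqrt(2.0)

    errors = []
    for _ in range(50):
        b = A @ b
        b /= np.linalg.norm(b)
        errors.append(np.linalg.norm(b - v1))

    print(errors[-1] / errors[-2])    # close to 0.95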

Applications

Power iteration is not used very much because it can find only the dominant eigenvalue. Nevertheless, the algorithm is very useful in some specific situations. For instance, Google uses it to calculate the PageRank of documents in its search engine.[1]

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix A^{-1}. Other algorithms look at the whole subspace generated by the vectors b_k. This subspace is known as the Krylov subspace. It can be computed by the Arnoldi iteration or the Lanczos iteration.
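
As a sketch of the inverse iteration variant (the function name and test matrix are assumptions; a practical implementation would factor A once instead of solving from scratch each step), each power step on A^{-1} becomes a linear solve:

    import numpy as np

    def inverse_iteration(A, num_steps=50):
        # Power iteration on A^{-1} converges to the eigenvector whose
        # eigenvalue of A is smallest in magnitude.
        b = np.random.default_rng(0).standard_normal(A.shape[0])
        for _ in range(num_steps):
            b = np.linalg.solve(A, b)     # b <- A^{-1} b, without forming A^{-1}
            b /= np.linalg.norm(b)
        return b, b @ A @ b               # Rayleigh quotient recovers lambda

    A = np.diag([5.0, 3.0, 0.5])
    v, lam = inverse_iteration(A)
    print(lam)                            # approximately 0.5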

References

  1. Ipsen, Ilse, and Rebecca M. Wills, "Analysis and Computation of Google's PageRank", 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.
