
Sparse matrix

From Wikipedia, the free encyclopedia

A sparse matrix obtained when solving a finite element problem in two dimensions. The non-zero elements are shown in black.

In the mathematical subfield of numerical analysis, a sparse matrix is a matrix populated primarily with zeros.

Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of balls connected by springs from one to the next; this is a sparse system. By contrast, if the same line of balls had springs connecting every ball to every other ball, the system would be represented by a dense matrix. The concept of sparsity is useful in combinatorics and in application areas such as network theory, which have a low density of significant data or connections.

Huge sparse matrices often appear in science or engineering when solving partial differential equations.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Operations using standard matrix structures and algorithms are slow and consume large amounts of memory when applied to large sparse matrices. Sparse data is by nature easily compressed, and this compression almost always results in significantly less memory usage. Indeed, some very large sparse matrices are impossible to manipulate with the standard algorithms.


Storing a sparse matrix

The naive data structure for a matrix is a two-dimensional array. Each entry in the array represents an element ai,j of the matrix and can be accessed by the two indices i and j. For an m×n matrix, at least enough memory to store m×n entries is needed to represent the matrix.

Many, if not most, entries of a sparse matrix are zeros. The basic idea when storing sparse matrices is to store only the non-zero entries, as opposed to storing all entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the naive approach.

One example of such a sparse matrix format is the (old) Yale Sparse Matrix Format[1]. It stores an initial sparse m×n matrix, M, in row form using three one-dimensional arrays. Let NNZ denote the number of nonzero entries of M. The first array is A, which is of length NNZ, and holds all nonzero entries of M in left-to-right top-to-bottom order. The second array is IA, which is of length m + 1 (i.e., one entry per row, plus one). IA(i) contains the index in A of the first nonzero element of row i. Row i of the original matrix extends from A(IA(i)) to A(IA(i+1)-1). The third array, JA, contains the column index of each element of A, so it also is of length NNZ.

For example, the matrix

[ 1 2 0 0 ]
[ 0 3 9 0 ]
[ 0 1 4 0 ]

is a three-by-four matrix with six nonzero elements, so

A  = [ 1 2 3 9 1 4 ]
IA = [ 1 3 5 7 ]
JA = [ 1 2 2 3 2 3 ]
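
The three arrays can be built with a short routine. Below is a minimal sketch in Python (not part of the original format description); it uses 0-based indexing, whereas the worked example above is 1-based, and the helper names to_csr and get are illustrative only.

def to_csr(dense):
    """Build the Yale arrays A, IA, JA from a dense matrix given as a list of rows."""
    A, IA, JA = [], [0], []
    for row in dense:
        for j, value in enumerate(row):
            if value != 0:
                A.append(value)   # nonzero values in left-to-right, top-to-bottom order
                JA.append(j)      # column index of each stored value
        IA.append(len(A))         # IA[i+1] marks where row i ends within A
    return A, IA, JA

def get(A, IA, JA, i, j):
    """Return entry (i, j) by scanning the stored slice of row i."""
    for k in range(IA[i], IA[i + 1]):
        if JA[k] == j:
            return A[k]
    return 0                      # entries that are not stored are zero

M = [[1, 2, 0, 0],
     [0, 3, 9, 0],
     [0, 1, 4, 0]]
A, IA, JA = to_csr(M)
print(A)                          # [1, 2, 3, 9, 1, 4]
print(IA)                         # [0, 2, 4, 6]
print(JA)                         # [0, 1, 1, 2, 1, 2]
print(get(A, IA, JA, 1, 2))       # 9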

Another possibility is to use quadtrees.

Example

A bitmap image having only 2 colors, with one of them dominant (say a file that stores a handwritten signature) can be encoded as a sparse matrix that contains only row and column numbers for pixels with the non-dominant color.
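
For illustration, a minimal sketch of this encoding in Python (the bitmap below is an arbitrary example; 0 is taken as the dominant background color and 1 as the ink):

bitmap = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 0, 1, 0]]

# store only the (row, column) coordinates of the non-dominant pixels
coords = [(i, j) for i, row in enumerate(bitmap)
                 for j, pixel in enumerate(row) if pixel == 1]
print(coords)   # [(1, 1), (1, 2), (2, 2)] -- 3 coordinate pairs instead of 12 entries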

Diagonal matrices

A very efficient structure for a diagonal matrix is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n×n matrix requires only n entries.
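
As a minimal sketch in Python (the values are arbitrary), the matrix-vector product with a diagonal matrix stored this way reduces to an element-wise product:

diag = [2.0, 5.0, 1.0]                    # represents the 3x3 matrix diag(2, 5, 1)
x = [1.0, 2.0, 3.0]
y = [d * xi for d, xi in zip(diag, x)]    # same result as multiplying by the full 3x3 matrix
print(y)                                  # [2.0, 10.0, 3.0]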

Bandwidth

The lower bandwidth of a matrix A is the smallest number p such that the entry aij vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that aij = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1.

Matrices with small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; one can sometimes apply dense matrix algorithms and simply loop over a reduced number of indices.
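
Both bandwidths can be read off directly from the definition: for every nonzero entry, i − j is at most the lower bandwidth and j − i is at most the upper bandwidth. A minimal sketch in Python (the function name bandwidths and the test matrix are illustrative only):

def bandwidths(A):
    """Return the (lower, upper) bandwidth of a matrix given as a list of rows."""
    lower = upper = 0
    for i, row in enumerate(A):
        for j, value in enumerate(row):
            if value != 0:
                lower = max(lower, i - j)   # how far below the diagonal nonzeros reach
                upper = max(upper, j - i)   # how far above the diagonal nonzeros reach
    return lower, upper

tridiagonal = [[1, 2, 0, 0],
               [3, 4, 5, 0],
               [0, 6, 7, 8],
               [0, 0, 9, 1]]
print(bandwidths(tridiagonal))              # (1, 1)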

Reducing bandwidth

The Cuthill-McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however, matrices for which the Reverse Cuthill-McKee algorithm performs better.

The U.S. National Geodetic Survey (NGS) uses Dr. Richard Snay's "Banker's" algorithm because it performs better on the realistic sparse matrices encountered in geodesy work.

There are many other methods in use.
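
As an illustration of the effect of such reorderings, the following sketch uses the reverse Cuthill-McKee implementation available in SciPy (scipy.sparse.csgraph.reverse_cuthill_mckee); the small symmetric matrix is an arbitrary example, and the helper bandwidth is illustrative only:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = csr_matrix(np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0],
                         [0, 1, 1, 1],
                         [1, 0, 1, 1]], dtype=float))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # new row/column ordering
B = A.toarray()[np.ix_(perm, perm)]                    # symmetrically permuted matrix

def bandwidth(M):
    i, j = np.nonzero(M)
    return int(np.max(np.abs(i - j)))

print(bandwidth(A.toarray()), bandwidth(B))            # bandwidth before and after reordering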

Reducing fill-in

"Fill-in" redirects here. For the puzzle, see Fill-In (puzzle).

The fill-in of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations performed, it is useful to minimize the fill-in by permuting the rows and columns of the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before performing the actual Cholesky decomposition.

Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares. While the theoretical fill-in is the same, in practice the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky decomposition to compute worst-case fill-in.
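
The dependence of fill-in on the ordering of rows and columns can be seen on a small "arrowhead" matrix: factoring it as it stands fills in the whole lower triangle, while the reversed ordering produces no fill-in at all. A minimal sketch using NumPy's Cholesky factorization (the matrix and the helper fill_in are illustrative only):

import numpy as np

n = 6
A = np.eye(n) * float(n)
A[0, :] = A[:, 0] = 1.0
A[0, 0] = float(n)                 # symmetric positive definite "arrowhead" matrix

def fill_in(M):
    L = np.linalg.cholesky(M)
    # entries that are nonzero in the factor L but zero in the lower triangle of M
    return int(np.count_nonzero(L) - np.count_nonzero(np.tril(M)))

p = np.arange(n)[::-1]             # reverse the order of the rows and columns
print(fill_in(A))                  # substantial fill-in with the original ordering
print(fill_in(A[np.ix_(p, p)]))    # no fill-in with the reversed ordering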

Solving sparse matrix equations

Both iterative and direct methods exist for solving sparse matrix equations. A popular iterative method is the conjugate gradient method.
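
A minimal sketch of an iterative solve with the conjugate gradient method, here using the implementation in SciPy (scipy.sparse.linalg.cg); the tridiagonal test matrix is an arbitrary symmetric positive definite example:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

x, info = cg(A, b)                      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))  # convergence flag and residual norm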


References

  • Tewarson, Reginald P. (1973). Sparse Matrices (Mathematics in Science & Engineering). Academic Press. (This book, by a professor at the State University of New York at Stony Brook, was the first book exclusively dedicated to sparse matrices; graduate courses using it as a textbook were offered at that university in the early 1980s.)
  • Bank, Randolph E.; Douglas, Craig C. Sparse Matrix Multiplication Package. [1]
  • Pissanetzky, Sergio (1984). Sparse Matrix Technology. Academic Press.
  • Snay, R. A. (1976). "Reducing the profile of sparse symmetric matrices". Bulletin Géodésique, 50:341–352. Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD.
