Path: rdoc/linalg.rdoc
Last Update: Sun Nov 14 22:53:48 +0000 2010
These methods calculate the LU decomposition of the matrix. The returned value is an array of [LU, perm, sign].
Examples:
>> m = Matrix[1..9, 3, 3]
=> GSL::Matrix:
[ 1.000e+00 2.000e+00 3.000e+00
  4.000e+00 5.000e+00 6.000e+00
  7.000e+00 8.000e+00 9.000e+00 ]
>> lu, perm, sign = Linalg::LU.decomp(m)
>> lu, perm, sign = m.LU_decomp
The following is an example of solving a linear system
A x = b, b = [1, 2, 3, 4]
using LU decomposition.
A = Matrix[[0.18, 0.60, 0.57, 0.96],
           [0.41, 0.24, 0.99, 0.58],
           [0.14, 0.30, 0.97, 0.66],
           [0.51, 0.13, 0.19, 0.85]]
lu, perm, sign = A.LU_decomp
b = Vector[1, 2, 3, 4]
x = Linalg::LU.solve(lu, perm, b)
lu, perm, sign = A.LU_decomp   # lu is an instance of the Linalg::LUMatrix class
b = Vector[1, 2, 3, 4]
x = lu.solve(perm, b)
x = Linalg::LU.solve(A, b) # LU decomposition is calculated internally (A is not modified)
These solve the system A x = b. The input vector b is overwritten by the solution x.
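For illustration, a minimal sketch of this in-place form, assuming the wrapper names Linalg::LU.svx and Matrix#LU_svx (modelled on GSL's gsl_linalg_LU_svx):

A = Matrix[[2.0, 1.0], [1.0, 3.0]]
lu, perm, sign = A.LU_decomp
bx = Vector[1, 2]              # holds b on input, the solution x on output
Linalg::LU.svx(lu, perm, bx)   # bx is overwritten with x (assumed method name)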
This method applies an iterative improvement to x, the solution of A x = b, using the LU decomposition of A.
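A hedged sketch of one refinement step; the name Linalg::LU.refine and its return value [x, residual] are assumptions modelled on GSL's gsl_linalg_LU_refine:

A = Matrix[[2.0, 1.0], [1.0, 3.0]]
b = Vector[1, 2]
lu, perm, sign = A.LU_decomp
x = Linalg::LU.solve(lu, perm, b)                    # initial solution
x, residual = Linalg::LU.refine(A, lu, perm, b, x)   # improved solution (assumed name and return value)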
These compute and return the inverse of the matrix.
These methods return the determinant of the matrix.
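A minimal sketch of both, assuming the instance-method forms Matrix#invert and Matrix#det, computed internally via the LU decomposition:

A = Matrix[[2.0, 1.0], [1.0, 3.0]]
inv = A.invert   # inverse of A (assumed method name)
d   = A.det      # determinant of A, 5.0 for this matrix (assumed method name)
p A*inv          # close to the identity matrix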
These compute the QR decomposition of the matrix and return an array [QR, tau].
qr, tau = Linalg::QR_decomp(m)
p qr.class    # GSL::Linalg::QRMatrix, subclass of GSL::Matrix
p tau.class   # GSL::Linalg::TauVector, subclass of GSL::Vector
qr, tau = Linalg::QR.decomp(m)
qr, tau = m.QR_decomp
Solve the system A x = b using the QR decomposition.
m = Matrix.alloc(...)
b = Vector.alloc(...)
x = Linalg::QR.solve(m, b)
x = m.QR_solve(b)
qr, tau = Linalg::QR.decomp(m)   # or m.QR_decomp
x = Linalg::QR.solve(qr, tau, b)
qr, tau = m.QR_decomp
x = qr.solve(tau, b)
Solve the system A x = b. On input the vector x should contain the right-hand side b, which is overwritten by the solution.
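A minimal sketch of this in-place form, assuming the wrapper names Linalg::QR.svx and Matrix#QR_svx (modelled on GSL's gsl_linalg_QR_svx):

m = Matrix[[2.0, 1.0], [1.0, 3.0]]
qr, tau = m.QR_decomp
bx = Vector[1, 2]             # b on input, the solution x on output
Linalg::QR.svx(qr, tau, bx)   # assumed method name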
Unpack the encoded QR decomposition (QR, tau) and return an array [Q, R].
Ex:
>> m = Matrix[1..9, 3, 3]
=> GSL::Matrix:
[ 1.000e+00 2.000e+00 3.000e+00
  4.000e+00 5.000e+00 6.000e+00
  7.000e+00 8.000e+00 9.000e+00 ]
>> qr, tau = m.QR_decomp
>> q, r = qr.unpack(tau)
>> q*r   # Reconstruct the matrix m
=> GSL::Matrix:
[ 1.000e+00 2.000e+00 3.000e+00
  4.000e+00 5.000e+00 6.000e+00
  7.000e+00 8.000e+00 9.000e+00 ]
This method solves the system R x = Q^T b for x. It can be used when the QR decomposition of a matrix is available in unpacked form as Q, R.
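A hedged sketch, assuming the wrapper name Linalg::QR.QRsolve (modelled on GSL's gsl_linalg_QR_QRsolve):

m = Matrix[[2.0, 1.0], [1.0, 3.0]]
b = Vector[1, 2]
qr, tau = m.QR_decomp
q, r = qr.unpack(tau)
x = Linalg::QR.QRsolve(q, r, b)   # solves R x = Q^T b (assumed method name)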
These methods factorize the M-by-N matrix A into the QRP^T decomposition A = Q R P^T, and return an array [QR, tau, perm, signum].
require("gsl") include GSL::Linalg m = Matrix.alloc(...) qr, tau, perm = QRPT.decomp(m) p qr.class # GSL::Linalg::QRPTMatrix, subclass of GSL::Matrix
qr, tau, perm = m.QRPT_decomp
These return an array [Q, R, tau, perm, signum].
q, r, tau, perm = QRPT.decomp2(m)
p q.class   # GSL::Linalg::QMatrix
p r.class   # GSL::Linalg::RMatrix
These methods solve the system A x = b using the QRP^T decomposition of A into QR, tau, perm. The solution x is returned as a Vector.
m = Matrix.alloc(...)
qr, tau, perm = m.QRPT_decomp
b = Vector.alloc([1, 2, 3, 4])
x = Linalg::QRPT.solve(qr, tau, perm, b)
x = Linalg::QRPT.solve(m, b)
x = qr.solve(tau, perm, b)
x = m.QRPT_solve(b)
These methods solve the system A x = b using the QRP^T decomposition of A into QR, tau, perm. The input b is overwritten by the solution x.
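A minimal sketch of this in-place form, assuming the wrapper names Linalg::QRPT.svx and Matrix#QRPT_svx (modelled on GSL's gsl_linalg_QRPT_svx):

m = Matrix[[2.0, 1.0], [1.0, 3.0]]
qr, tau, perm = m.QRPT_decomp
bx = Vector[1, 2]                     # b on input, the solution x on output
Linalg::QRPT.svx(qr, tau, perm, bx)   # assumed method name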
This method solves the system R P^T x = Q^T b for x. It can be used when the QR decomposition of a matrix is available in unpacked form as q, r obtained by the method decomp2.
q, r, tau, perm = Linalg::QRPT.decomp2(m)
x = Linalg::QRPT.QRsolve(q, r, perm, b)
These methods factorize the M-by-N matrix A into the singular value decomposition A = U S V^T using the Golub-Reinsch SVD algorithm, and return an array [U, V, S].
Ex:
>> m = Matrix[[3, 5, 2], [5, 1, 4], [7, 6, 3]]
=> GSL::Matrix:
[ 3.000e+00 5.000e+00 2.000e+00
  5.000e+00 1.000e+00 4.000e+00
  7.000e+00 6.000e+00 3.000e+00 ]
>> u, v, s = m.SV_decomp   # u, v: Matrix, s: Vector (singular values)
>> u*u.trans               # u is orthonormal
=> GSL::Matrix:
[ 1.000e+00 2.452e-17 -4.083e-16
  2.452e-17 1.000e+00 -3.245e-16
  -4.083e-16 -3.245e-16 1.000e+00 ]
>> v*v.trans               # v is also orthonormal
=> GSL::Matrix:
[ 1.000e+00 3.555e-17 -1.867e-16
  3.555e-17 1.000e+00 -1.403e-16
  -1.867e-16 -1.403e-16 1.000e+00 ]
>> u*Matrix.diagonal(s)*v.trans   # Reconstruct the matrix
=> GSL::Matrix:
[ 3.000e+00 5.000e+00 2.000e+00
  5.000e+00 1.000e+00 4.000e+00
  7.000e+00 6.000e+00 3.000e+00 ]
These compute the SVD using the modified Golub-Reinsch algorithm, which is faster for M>>N.
These compute the SVD using one-sided Jacobi orthogonalization. The Jacobi method can compute singular values to higher relative accuracy than Golub-Reinsch algorithms.
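A hedged sketch of the alternative decompositions, assuming the method names SV_decomp_mod and SV_decomp_jacobi (modelled on GSL's gsl_linalg_SV_decomp_mod and gsl_linalg_SV_decomp_jacobi):

m = Matrix[[3, 5, 2], [5, 1, 4], [7, 6, 3]]
u1, v1, s1 = m.SV_decomp_mod      # modified Golub-Reinsch, faster for M >> N (assumed name)
u2, v2, s2 = m.SV_decomp_jacobi   # one-sided Jacobi, higher relative accuracy (assumed name)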
These methods solve the system A x = b using the singular value decomposition U, S, V of A.
m = Matrix.alloc(...)
b = Vector.alloc(...)
u, v, s = GSL::Linalg::SV.decomp(m)
x = GSL::Linalg::SV.solve(u, v, s, b)
x = GSL::Linalg::SV.solve(m, b)
x = m.SV_solve(b)
A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T, as A = L L^T. This is sometimes referred to as taking the square-root of a matrix. The Cholesky decomposition can only be carried out when all the eigenvalues of the matrix are positive. This decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = b, L^T x = y), which can be solved by forward and back-substitution.
This method factorizes the positive-definite square matrix A into the Cholesky decomposition A = L L^T. The upper triangular part of the matrix returned contains L^T, the diagonal terms being identical for both L and L^T. If the matrix is not positive-definite then the decomposition will fail.
Ex:
>> m = Matrix.pascal(3)
=> GSL::Matrix
[ 1.000e+00 1.000e+00 1.000e+00
  1.000e+00 2.000e+00 3.000e+00
  1.000e+00 3.000e+00 6.000e+00 ]
>> c = Linalg::Cholesky.decomp(m)
=> GSL::Linalg::Cholesky::CholeskyMatrix
[ 1.000e+00 1.000e+00 1.000e+00
  1.000e+00 1.000e+00 2.000e+00
  1.000e+00 2.000e+00 1.000e+00 ]
>> l = c.lower
=> GSL::Matrix
[ 1.000e+00 0.000e+00 0.000e+00
  1.000e+00 1.000e+00 0.000e+00
  1.000e+00 2.000e+00 1.000e+00 ]
>> (l*l.trans) == m
=> true
These methods solve the system A x = b using the Cholesky decomposition of A into the matrix cholesky given by GSL::Linalg::Cholesky.decomp.
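A minimal sketch, assuming the wrapper name Linalg::Cholesky.solve (modelled on GSL's gsl_linalg_cholesky_solve):

m = Matrix.pascal(3)
c = Linalg::Cholesky.decomp(m)
b = Vector[1, 2, 3]
x = Linalg::Cholesky.solve(c, b)   # assumed method name; solves L y = b, then L^T x = y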
Factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition Q T Q^T, and returns the results (A', tau). On output the diagonal and subdiagonal part of the matrix A' contain the tridiagonal matrix T. The remaining lower triangular part of the matrix A' contains the Householder vectors which, together with the Householder coefficients tau, encode the orthogonal matrix Q. This storage scheme is the same as used by LAPACK. The upper triangular part of A is not referenced.
Unpacks the encoded symmetric tridiagonal decomposition (A', tau) obtained from GSL::Linalg::Symmtd::decomp into the orthogonal matrix Q, the vector of diagonal elements diag and the vector of subdiagonal elements subdiag.
Unpacks the diagonal and subdiagonal of the encoded symmetric tridiagonal decomposition (A', tau) obtained from GSL::Linalg::Symmtd::decomp into the vectors diag and subdiag.
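A hedged sketch of the reduction and both unpacking forms; the names Symmtd.unpack and Symmtd.unpack_T are assumptions modelled on gsl_linalg_symmtd_unpack and gsl_linalg_symmtd_unpack_T:

m = Matrix[[4, 1, 1], [1, 3, 2], [1, 2, 5]]         # symmetric matrix
td, tau = Linalg::Symmtd.decomp(m)                  # encoded form (A', tau)
q, diag, subdiag = Linalg::Symmtd.unpack(td, tau)   # orthogonal Q plus diag/subdiag of T (assumed name)
diag, subdiag = Linalg::Symmtd.unpack_T(td, tau)    # diagonal and subdiagonal only (assumed name)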
Factorizes the Hermitian matrix A into the symmetric tridiagonal decomposition U T U^T, and returns the result as (A', tau). On output the real parts of the diagonal and subdiagonal part of the matrix A' contain the tridiagonal matrix T. The remaining lower triangular part of the matrix A' contains the Householder vectors which, together with the Householder coefficients tau, encode the unitary matrix U. This storage scheme is the same as used by LAPACK. The upper triangular part of A and the imaginary parts of the diagonal are not referenced.
Unpacks the encoded tridiagonal decomposition (A', tau) obtained from GSL::Linalg::Hermtd::decomp into the unitary matrix U, the real vector of diagonal elements diag and the real vector of subdiagonal elements subdiag.
Unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (A', tau) obtained from GSL::Linalg::Hermtd::decomp into the real vectors diag and subdiag.
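A hedged sketch for the Hermitian case; h stands for a Hermitian GSL::Matrix::Complex prepared elsewhere, and the names Hermtd.unpack and Hermtd.unpack_T are assumptions modelled on gsl_linalg_hermtd_unpack and gsl_linalg_hermtd_unpack_T:

td, tau = Linalg::Hermtd.decomp(h)                  # encoded form (A', tau)
u, diag, subdiag = Linalg::Hermtd.unpack(td, tau)   # unitary U plus real diag/subdiag of T (assumed name)
diag, subdiag = Linalg::Hermtd.unpack_T(td, tau)    # diagonal and subdiagonal only (assumed name)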
Computes the Hessenberg decomposition of the matrix A by applying the similarity transformation H = U^T A U, and returns the result as (A', tau). On output, H is stored in the upper portion of A'. The information required to construct the matrix U is stored in the lower triangular portion of A'. U is a product of N - 2 Householder matrices. The Householder vectors are stored in the lower portion of A' (below the subdiagonal) and the Householder coefficients are stored in the vector tau.
Constructs the orthogonal matrix U from the information stored in the Hessenberg matrix A' along with the vector tau, and returns it. A' and tau are outputs from GSL::Linalg::Hessenberg::decomp.
This method is similar to GSL::Linalg::Hessenberg::unpack, except that it accumulates the matrix U into V, so that V' = V U, and returns V. Setting V to the identity matrix provides the same result as GSL::Linalg::Hessenberg::unpack.
Sets the lower triangular portion of A', below the subdiagonal, to zero. It is useful for clearing out the Householder vectors after calling GSL::Linalg::Hessenberg::decomp.
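A hedged sketch putting these steps together; the name Hessenberg.set_zero is an assumption modelled on gsl_linalg_hessenberg_set_zero:

a = Matrix[[1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 1], [2, 3, 4, 5]]
hess, tau = Linalg::Hessenberg.decomp(a)   # H in the upper portion, Householder data below
u = Linalg::Hessenberg.unpack(hess, tau)   # reconstruct the orthogonal matrix U
Linalg::Hessenberg.set_zero(hess)          # clear the Householder vectors, leaving H (assumed name)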
Compute the Hessenberg-Triangular decomposition of the matrix pair (A, B), and return (H, R). If U and V are provided (they may be null), the similarity transformations are stored in them. work is an additional workspace of length N.
Compute the Hessenberg-Triangular decomposition of the matrix pair (A, B). On output, H is stored in A, and R is stored in B. If U and V are provided (they may be null), the similarity transformations are stored in them. work is an additional workspace of length N.
These methods prepare a Householder transformation P = I - tau v v^T which can be used to zero all the elements of the input vector except the first. On output the transformation is stored in the vector v and the scalar tau is returned.
These methods apply the Householder matrix P defined by the scalar tau and the vector v to the left-hand side of the matrix A. On output the result P A is stored in A.
These methods apply the Householder matrix P defined by the scalar tau and the vector v to the right-hand side of the matrix A. On output the result A P is stored in A.
These methods apply the Householder transformation P defined by the scalar tau and the vector v to the vector w. On output the result P w is stored in w.
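A hedged sketch of these operations; the module name Linalg::Householder and the method names transform, hm, mh and hv are assumptions modelled on gsl_linalg_householder_transform, _hm, _mh and _hv:

v = Vector[3.0, 4.0, 0.0]
tau = Linalg::Householder.transform(v)   # v now stores the Householder vector (assumed name)
a = Matrix[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Linalg::Householder.hm(tau, v, a)        # a <- P a (assumed name)
Linalg::Householder.mh(tau, v, a)        # a <- a P (assumed name)
w = Vector[1.0, 2.0, 3.0]
Linalg::Householder.hv(tau, v, w)        # w <- P w (assumed name)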
These methods solve the system A x = b directly using Householder transformations. The matrix A is not modified.
These methods solve the system A x = b directly using Householder transformations. The matrix A is destroyed by the Householder transformations.
These methods solve the system A x = b in-place directly using Householder transformations. The input vector b is replaced by the solution.
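A minimal sketch of the non-destructive form, assuming the wrapper names Linalg::HH.solve and Matrix#HH_solve (modelled on GSL's gsl_linalg_HH_solve):

a = Matrix[[0.18, 0.60, 0.57, 0.96],
           [0.41, 0.24, 0.99, 0.58],
           [0.14, 0.30, 0.97, 0.66],
           [0.51, 0.13, 0.19, 0.85]]
b = Vector[1, 2, 3, 4]
x = Linalg::HH.solve(a, b)   # a is not modified (assumed method name)
x = a.HH_solve(b)            # equivalent instance-method form (assumed)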
These methods solve the general N-by-N system A x = b where A is tridiagonal ( N >= 2). The super-diagonal and sub-diagonal vectors e and f must be one element shorter than the diagonal vector diag. The form of A for the 4-by-4 case is shown below,
A = ( d_0 e_0  0   0  )
    ( f_0 d_1 e_1  0  )
    (  0  f_1 d_2 e_2 )
    (  0   0  f_2 d_3 )
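A minimal sketch, assuming the module-function name Linalg.solve_tridiag (modelled on GSL's gsl_linalg_solve_tridiag):

diag = Vector[4, 4, 4, 4]
e = Vector[1, 1, 1]       # super-diagonal, one element shorter than diag
f = Vector[2, 2, 2]       # sub-diagonal, one element shorter than diag
b = Vector[1, 2, 3, 4]
x = Linalg.solve_tridiag(diag, e, f, b)   # assumed method name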
These methods solve the general N-by-N system A x = b where A is symmetric tridiagonal ( N >= 2). The off-diagonal vector e must be one element shorter than the diagonal vector diag. The form of A for the 4-by-4 case is shown below,
A = ( d_0 e_0  0   0  )
    ( e_0 d_1 e_1  0  )
    (  0  e_1 d_2 e_2 )
    (  0   0  e_2 d_3 )
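A minimal sketch, assuming the module-function name Linalg.solve_symm_tridiag (modelled on GSL's gsl_linalg_solve_symm_tridiag):

diag = Vector[4, 4, 4, 4]
e = Vector[1, 1, 1]       # off-diagonal, one element shorter than diag
b = Vector[1, 2, 3, 4]
x = Linalg.solve_symm_tridiag(diag, e, b)   # assumed method name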
These methods solve the general N-by-N system A x = b where A is cyclic tridiagonal ( N >= 3). The cyclic super-diagonal and sub-diagonal vectors e and f must have the same number of elements as the diagonal vector diag. The form of A for the 4-by-4 case is shown below,
A = ( d_0 e_0  0  f_3 )
    ( f_0 d_1 e_1  0  )
    (  0  f_1 d_2 e_2 )
    ( e_3  0  f_2 d_3 )
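A minimal sketch, assuming the module-function name Linalg.solve_cyc_tridiag (modelled on GSL's gsl_linalg_solve_cyc_tridiag):

diag = Vector[4, 4, 4, 4]
e = Vector[1, 1, 1, 1]    # cyclic super-diagonal, same length as diag
f = Vector[2, 2, 2, 2]    # cyclic sub-diagonal, same length as diag
b = Vector[1, 2, 3, 4]
x = Linalg.solve_cyc_tridiag(diag, e, f, b)   # assumed method name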
These methods solve the general N-by-N system A x = b where A is symmetric cyclic tridiagonal ( N >= 3). The cyclic off-diagonal vector e must have the same number of elements as the diagonal vector diag. The form of A for the 4-by-4 case is shown below,
A = ( d_0 e_0  0  e_3 )
    ( e_0 d_1 e_1  0  )
    (  0  e_1 d_2 e_2 )
    ( e_3  0  e_2 d_3 )
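A minimal sketch, assuming the module-function name Linalg.solve_symm_cyc_tridiag (modelled on GSL's gsl_linalg_solve_symm_cyc_tridiag):

diag = Vector[4, 4, 4, 4]
e = Vector[1, 1, 1, 1]    # cyclic off-diagonal, same length as diag
b = Vector[1, 2, 3, 4]
x = Linalg.solve_symm_cyc_tridiag(diag, e, b)   # assumed method name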
The process of balancing a matrix applies similarity transformations to make the rows and columns have comparable norms. This is useful, for example, to reduce roundoff errors in the solution of eigenvalue problems. Balancing a matrix A consists of replacing A with a similar matrix A' = D^-1 A D, where D is a diagonal matrix whose entries are powers of the floating point radix.
Calculates the balanced counterpart of A and the diagonal elements of the similarity transformation. The result is returned as (A', D).
Replaces the matrix A with its balanced counterpart and stores the diagonal elements of the similarity transformation into the vector D.
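A hedged sketch of the non-destructive form; the name Linalg.balance_matrix and the returned pair [A', D] are assumptions modelled on GSL's gsl_linalg_balance_matrix:

a = Matrix[[1.0, 1e6], [1e-6, 1.0]]
ab, d = Linalg.balance_matrix(a)   # balanced matrix and diagonal of D (assumed name and return)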
The following Linalg methods can handle NArray objects: