

Linear Algebra

This chapter documents the linear algebra functions of Octave. Reference material for many of these functions may be found in Golub and Van Loan, Matrix Computations, 2nd Ed., Johns Hopkins, 1989, and in LAPACK Users' Guide, SIAM, 1992.

Basic Matrix Functions

@anchor{doc-balance}

Loadable Function: aa = balance (a, opt)
Loadable Function: [dd, aa] = balance (a, opt)
Loadable Function: [cc, dd, aa, bb] = balance (a, b, opt)

[dd, aa] = balance (a) returns aa = dd \ a * dd. aa is a matrix whose row and column norms are roughly equal in magnitude, and dd = p * d, where p is a permutation matrix and d is a diagonal matrix of powers of two. This allows the equilibration to be computed without roundoff. Results of eigenvalue calculation are typically improved by balancing first.

[cc, dd, aa, bb] = balance (a, b) returns aa = cc*a*dd and bb = cc*b*dd, where aa and bb have non-zero elements of approximately the same magnitude, and cc and dd are permuted diagonal matrices, as dd is in the algebraic eigenvalue problem.

The eigenvalue balancing option opt is selected as follows:

"N", "n"
No balancing; arguments copied, transformation(s) set to identity.
"P", "p"
Permute argument(s) to isolate eigenvalues where possible.
"S", "s"
Scale to improve accuracy of computed eigenvalues.
"B", "b"
Permute and scale, in that order. Rows/columns of a (and b) that are isolated by permutation are not scaled. This is the default behavior.

Algebraic eigenvalue balancing uses standard LAPACK routines.

Generalized eigenvalue problem balancing uses Ward's algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).
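
For example, a typical call (no output is shown because the exact powers of two chosen are implementation-dependent):

a = [1, 2^10; 2^(-10), 1];
[dd, aa] = balance (a);   # aa = dd \ a * dd, with the row and column
                          # norms of aa roughly balanced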

@anchor{doc-cond}

Function File: cond (a)
Compute the (two-norm) condition number of a matrix. cond (a) is defined as norm (a) * norm (inv (a)), and is computed via a singular value decomposition.
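
For example, for the 3 x 3 Hilbert matrix (whose singular values are listed under svd below),

cond (hilb (3))
     => 524.06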
@seealso{norm, svd, and rank}

@anchor{doc-det}

Loadable Function: [d, rcond] = det (a)
Compute the determinant of a using LAPACK. Return an estimate of the reciprocal condition number if requested.
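
For example,

det ([1, 2; 3, 4])
     => -2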

@anchor{doc-dmult}

Function File: dmult (a, b)
If a is a vector of length rows (b), return diag (a) * b (but computed much more efficiently).
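
For example,

dmult ([1; 2], ones (2, 3))
     =>  1  1  1
         2  2  2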

@anchor{doc-dot}

Function File: dot (x, y)
Computes the dot product of two vectors.
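
For example,

dot ([1, 2, 3], [4, 5, 6])
     => 32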

@anchor{doc-eig}

Loadable Function: lambda = eig (a)
Loadable Function: [v, lambda] = eig (a)
The eigenvalues (and eigenvectors) of a matrix are computed in a several-step process that begins with a Hessenberg decomposition, followed by a Schur decomposition, from which the eigenvalues are apparent. The eigenvectors, when desired, are computed by further manipulations of the Schur decomposition.
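
For example, for this symmetric matrix the eigenvalues are -1 and 3 (shown in the ascending order Octave typically uses for symmetric input):

eig ([1, 2; 2, 1])
     => -1
         3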

@anchor{doc-givens}

Loadable Function: g = givens (x, y)
Loadable Function: [c, s] = givens (x, y)
Return a 2 by 2 orthogonal matrix g = [c s; -s' c] such that g [x; y] = [*; 0] with x and y scalars.

For example,

givens (1, 1)
     =>   0.70711   0.70711
         -0.70711   0.70711

@anchor{doc-inv}

Loadable Function: [x, rcond] = inv (a)
Loadable Function: [x, rcond] = inverse (a)
Compute the inverse of the square matrix a. Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small.
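
For example,

inv ([1, 2; 3, 4])
     => -2.00000   1.00000
         1.50000  -0.50000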

@anchor{doc-norm}

Function File: norm (a, p)
Compute the p-norm of the matrix a. If the second argument is missing, p = 2 is assumed.

If a is a matrix:

p = 1
1-norm, the largest column sum of the absolute values of a.
p = 2
Largest singular value of a.
p = Inf
Infinity norm, the largest row sum of the absolute values of a.
p = "fro"
Frobenius norm of a, sqrt (sum (diag (a' * a))).

If a is a vector or a scalar:

p = Inf
max (abs (a)).
p = -Inf
min (abs (a)).
other
p-norm of a, (sum (abs (a) .^ p)) ^ (1/p).
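
For example,

norm ([1, 2; 3, 4], 1)
     => 6
norm ([1, 2; 3, 4], Inf)
     => 7
norm ([3, 4])
     => 5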

@seealso{cond and svd}

@anchor{doc-null}

Function File: null (a, tol)
Return an orthonormal basis of the null space of a.

The dimension of the null space is taken as the number of singular values of a not greater than tol. If the argument tol is missing, it is computed as

max (size (a)) * max (svd (a)) * eps
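
For example (the sign of the basis vector may differ):

x = null ([1, 1])   # a single column proportional to [1; -1]
[1, 1] * x          # => 0, up to rounding error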

@anchor{doc-orth}

Function File: orth (a, tol)
Return an orthonormal basis of the range space of a.

The dimension of the range space is taken as the number of singular values of a greater than tol. If the argument tol is missing, it is computed as

max (size (a)) * max (svd (a)) * eps
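
For example (up to sign):

orth ([1, 1; 1, 1])   # a single column proportional to [1; 1],
                      # i.e., approximately [0.70711; 0.70711]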

@anchor{doc-pinv}

Loadable Function: pinv (x, tol)
Return the pseudoinverse of x. Singular values less than tol are ignored.

If the second argument is omitted, it is assumed that

tol = max (size (x)) * sigma_max (x) * eps,

where sigma_max (x) is the maximal singular value of x.
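
A quick check of the defining (Moore-Penrose) property:

x = [1, 2; 3, 4; 5, 6];
y = pinv (x);
x * y * x   # reproduces x, up to rounding error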

@anchor{doc-rank}

Function File: rank (a, tol)
Compute the rank of a, using the singular value decomposition. The rank is taken to be the number of singular values of a that are greater than the specified tolerance tol. If the second argument is omitted, it is taken to be

tol = max (size (a)) * sigma(1) * eps;

where eps is machine precision and sigma(1) is the largest singular value of a.
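
For example, the second row here is twice the first, so

rank ([1, 2; 2, 4])
     => 1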

@anchor{doc-trace}

Function File: trace (a)
Compute the trace of a, sum (diag (a)).
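
For example,

trace ([1, 2; 3, 4])
     => 5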

Matrix Factorizations

@anchor{doc-chol}

Loadable Function: chol (a)
Compute the Cholesky factor, r, of the symmetric positive definite matrix a, where

r' * r = a.
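
For example,

chol ([2, 1; 1, 2])
     => 1.41421  0.70711
        0.00000  1.22474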

@anchor{doc-hess}

Loadable Function: h = hess (a)
Loadable Function: [p, h] = hess (a)
Compute the Hessenberg decomposition of the matrix a.

The Hessenberg decomposition is usually used as the first step in an eigenvalue computation, but has other applications as well (see Golub, Nash, and Van Loan, IEEE Transactions on Automatic Control, 1979). The Hessenberg decomposition is p * h * p' = a, where p is a square unitary matrix (p' * p = I, using complex-conjugate transposition) and h is upper Hessenberg (i > j+1 => h (i, j) = 0).
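
As a quick check of the defining property:

a = [1, 2, 3; 4, 5, 6; 7, 8, 9];
[p, h] = hess (a);
p * h * p'   # reproduces a, up to rounding error; h is upper Hessenberg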

@anchor{doc-lu}

Loadable Function: [l, u, p] = lu (a)
Compute the LU decomposition of a, using subroutines from LAPACK. The result is returned in a permuted form, according to the optional return value p. For example, given the matrix a = [1, 2; 3, 4],

[l, u, p] = lu (a)

returns

l =

  1.00000  0.00000
  0.33333  1.00000

u =

  3.00000  4.00000
  0.00000  0.66667

p =

  0  1
  1  0

The matrix is not required to be square.

@anchor{doc-qr}

Loadable Function: [q, r, p] = qr (a)
Compute the QR factorization of a, using standard LAPACK subroutines. For example, given the matrix a = [1, 2; 3, 4],

[q, r] = qr (a)

returns

q =

  -0.31623  -0.94868
  -0.94868   0.31623

r =

  -3.16228  -4.42719
   0.00000  -0.63246

The QR factorization has applications in the solution of least squares problems,

min norm (a*x - b)

for overdetermined systems of equations (i.e., when a is a tall, thin matrix). The QR factorization is q * r = a, where q is an orthogonal matrix and r is upper triangular.

The permuted QR factorization [q, r, p] = qr (a) forms the QR factorization such that the diagonal entries of r are decreasing in magnitude. For example, given the matrix a = [1, 2; 3, 4],

[q, r, p] = qr(a)

returns

q = 

  -0.44721  -0.89443
  -0.89443   0.44721

r =

  -4.47214  -3.13050
   0.00000   0.44721

p =

   0  1
   1  0

The permuted QR factorization [q, r, p] = qr (a) also allows the construction of an orthonormal basis of span (a).

@anchor{doc-qz}

Loadable Function: lambda = qz (a, b)
QZ decomposition of the generalized eigenvalue problem @math{A x = s B x}. There are three ways to call this function:
  1. lambda = qz (A, B) computes the generalized eigenvalues lambda of @math{(A - sB)}.
  2. [AA, BB, Q, Z, V, W, lambda] = qz (A, B) computes the QZ decomposition, generalized eigenvectors, and generalized eigenvalues of @math{(A - sB)}:
            A V = B V diag (lambda)
            W' A = diag (lambda) W' B
            AA = Q'*A*Z, BB = Q'*B*Z, with Q and Z orthogonal (unitary): Q'*Q = Z'*Z = I

  3. [AA, BB, Z{, lambda}] = qz (A, B, opt) is as in form 2, but allows ordering of the generalized eigenpairs for (e.g.) the solution of discrete-time algebraic Riccati equations. Form 3 is not available for complex matrices, and does not compute the generalized eigenvectors V and W or the orthogonal matrix Q.
    opt
    controls the ordering of the eigenvalues of the GEP pencil. The leading block of the revised pencil contains all eigenvalues that satisfy:
    "N"
    unordered (default)
    "S"
    small: leading block has all |lambda| <= 1
    "B"
    big: leading block has all |lambda| >= 1
    "-"
    negative real part: leading block has all eigenvalues in the open left half-plane
    "+"
    nonnegative real part: leading block has all eigenvalues in the closed right half-plane

Note: qz performs permutation balancing, but not scaling (see balance). The order of the output arguments was selected for compatibility with MATLAB.
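
For example, a small illustration with diagonal matrices (the ordering of the returned eigenvalues may vary):

lambda = qz ([2, 0; 0, 3], eye (2))   # generalized eigenvalues of A - s*B;
                                      # here 2 and 3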

@seealso{balance, dare, eig, and schur}

@anchor{doc-qzhess}

Function File: [aa, bb, q, z] = qzhess (a, b)
Compute the Hessenberg-triangular decomposition of the matrix pencil (a, b), returning aa = q * a * z, bb = q * b * z, with q and z orthogonal. For example,

[aa, bb, q, z] = qzhess ([1, 2; 3, 4], [5, 6; 7, 8])
=> aa = [ -3.02244, -4.41741;  0.92998,  0.69749 ]
=> bb = [ -8.60233, -9.99730;  0.00000, -0.23250 ]
=>  q = [ -0.58124, -0.81373; -0.81373,  0.58124 ]
=>  z = [ 1, 0; 0, 1 ]

The Hessenberg-triangular decomposition is the first step in Moler and Stewart's QZ decomposition algorithm.

Algorithm taken from Golub and Van Loan, Matrix Computations, 2nd edition.

@anchor{doc-schur}

Loadable Function: s = schur (a)
Loadable Function: [u, s] = schur (a, opt)
The Schur decomposition is used to compute eigenvalues of a square matrix, and has applications in the solution of algebraic Riccati equations in control (see are and dare). schur always returns s = u' * a * u, where u is a unitary matrix (u' * u is the identity) and s is upper triangular. The eigenvalues of a (and s) are the diagonal elements of s. If the matrix a is real, then the real Schur decomposition is computed, in which the matrix u is orthogonal and s is block upper triangular with blocks of size at most 2 x 2 along the diagonal. The diagonal elements of s (or the eigenvalues of the 2 x 2 blocks, when appropriate) are the eigenvalues of a and s.

The eigenvalues are optionally ordered along the diagonal according to the value of opt. opt = "a" indicates that all eigenvalues with negative real parts should be moved to the leading block of s (used in are), opt = "d" indicates that all eigenvalues with magnitude less than one should be moved to the leading block of s (used in dare), and opt = "u", the default, indicates that no ordering of eigenvalues should occur. The leading k columns of u always span the a-invariant subspace corresponding to the k leading eigenvalues of s.
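
A quick check of the defining property:

a = [1, 2; 3, 4];
[u, s] = schur (a);
u * s * u'   # reproduces a, up to rounding error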

@anchor{doc-svd}

Loadable Function: s = svd (a)
Loadable Function: [u, s, v] = svd (a)
Compute the singular value decomposition of a

a = u * sigma * v'

The function svd normally returns the vector of singular values. If asked for three return values, it computes u, s, and v. For example,

svd (hilb (3))

returns

ans =

  1.4083189
  0.1223271
  0.0026873

and

[u, s, v] = svd (hilb (3))

returns

u =

  -0.82704   0.54745   0.12766
  -0.45986  -0.52829  -0.71375
  -0.32330  -0.64901   0.68867

s =

  1.40832  0.00000  0.00000
  0.00000  0.12233  0.00000
  0.00000  0.00000  0.00269

v =

  -0.82704   0.54745   0.12766
  -0.45986  -0.52829  -0.71375
  -0.32330  -0.64901   0.68867

If given a second argument, svd returns an economy-sized decomposition, eliminating the unnecessary rows or columns of u or v.

@anchor{doc-housh}

Function File: [housv, beta, zer] = housh (x, j, z)
Compute the Householder reflection vector housv needed to reflect x to be the jth column of the identity, i.e., (I - beta*housv*housv') * x = e(j).

Inputs:

x: vector
j: index into vector
z: threshold for zero (usually should be the number 0)

Outputs (see Golub and Van Loan):

beta: if beta = 0, then no reflection need be applied (zer is set to 0)
housv: Householder vector

@anchor{doc-krylov}

Function File: [u, h, nu] = krylov (a, v, k, eps1, pflg)
Construct an orthogonal basis u of the block Krylov subspace

[v, a*v, a^2*v, ..., a^(k+1)*v]

using Householder reflections to guard against loss of orthogonality.

eps1: threshold for zero (default: 1e-12)
pflg: flag to use row pivoting, which improves numerical behavior. 0 (default): no pivoting; print a warning message if the trivial null space is corrupted. 1: pivoting performed.

Outputs:

u: orthogonal basis of the block Krylov subspace
h: Hessenberg matrix; if v is a vector then a * u = u * h, otherwise h is meaningless
nu: dimension of the span of the Krylov subspace (based on eps1)

If v is a vector and k > m-1, krylov returns h, the Hessenberg decomposition of a.

Reference: Hodel and Misra, "Partial Pivoting in the Computation of Krylov Subspaces", to be submitted to Linear Algebra and its Applications

Functions of a Matrix

@anchor{doc-expm}

Loadable Function: expm (a)
Return the exponential of a matrix, defined as the infinite Taylor series

expm(a) = I + a + a^2/2! + a^3/3! + ...

The Taylor series is not the way to compute the matrix exponential; see Moler and Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Review, 1978. This routine uses Ward's diagonal Padé approximation method with three-step preconditioning (SIAM Journal on Numerical Analysis, 1977). Diagonal Padé approximations are rational polynomials of matrices

Dq (a)^(-1) * Nq (a)

whose Taylor series matches the first 2q+1 terms of the Taylor series above; direct evaluation of the Taylor series (with the same preconditioning steps) may be desirable in lieu of the Padé approximation when Dq (a) is ill-conditioned.
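
For example, for a nilpotent matrix the series terminates exactly after the linear term (here a^2 = 0, so expm (a) = I + a):

expm ([0, 1; 0, 0])
     => 1  1
        0  1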

@anchor{doc-logm}

Function File: logm (a)
Compute the matrix logarithm of the square matrix a. Note that this is currently implemented in terms of an eigenvalue expansion and needs to be improved to be more robust.
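
A quick round-trip check (illustrative; small rounding errors are expected):

a = [1, 2; 3, 4];
logm (expm (a))   # recovers a, up to rounding error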

@anchor{doc-sqrtm}

Loadable Function: [result, error_estimate] = sqrtm (a)
Compute the matrix square root of the square matrix a.
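
For example,

sqrtm ([4, 0; 0, 9])
     => 2  0
        0  3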

Ref: Nicholas J. Higham. A new sqrtm for MATLAB. Numerical Analysis Report No. 336, Manchester Centre for Computational Mathematics, Manchester, England, January 1999.

@seealso{expm, logm, and funm}

@anchor{doc-kron}

Function File: kron (a, b)
Form the Kronecker product of two matrices, defined block by block as

x = [a(i, j) * b]

For example,

kron (1:4, ones (3, 1))
      =>  1  2  3  4
          1  2  3  4
          1  2  3  4

@anchor{doc-syl}

Loadable Function: x = syl (a, b, c)
Solve the Sylvester equation

A X + X B + C = 0

using standard LAPACK subroutines. For example,

syl ([1, 2; 3, 4], [5, 6; 7, 8], [9, 10; 11, 12])
     => [ -0.50000, -0.66667; -0.66667, -0.50000 ]

