Implementing Iterative Methods

Use iterative methods to solve large problems faster, and with lower memory overhead, than direct methods.


In addition to direct methods, Sparse Solvers in the Accelerate framework offers iterative methods for solving systems of equations. Table 1 summarizes the differences between the two approaches.

Table 1

Direct versus iterative methods

Ease of use
  Direct methods: work without problem-specific tuning.
  Iterative methods: require problem-specific knowledge, particularly to choose a suitable preconditioner.

Accuracy
  Direct methods: machine precision.
  Iterative methods: square root of machine precision.

Speed
  Direct methods: fast for small problems, and quite fast for larger problems.
  Iterative methods: fastest for large problems, but only with a suitable problem-specific preconditioner.

Memory requirements
  Direct methods: high, because the computed factors are typically much denser than the original matrix.
  Iterative methods: low, requiring little beyond the matrix (or an equivalent operator) and a few working vectors.

Iterative methods provide a way for expert users to find approximate solutions faster, and using less memory, than direct methods. Iterative methods can also be used when forming the explicit matrix is prohibitively expensive but matrix-vector products can be computed efficiently. However, achieving these gains requires problem-specific knowledge to select an appropriate preconditioner (an operator that approximates the inverse of A). It's best to try a direct method before resorting to iterative methods.

As the name suggests, an iterative method generates a sequence of solutions that converges to the true solution, and the iteration can be stopped as soon as the current estimate is sufficiently accurate. In most cases, the iteration stagnates before reaching the exact solution because of the limits of numerical accuracy inherent in floating-point arithmetic. In fact, some algorithms may fail to converge in finite time even in exact arithmetic.

Table 2 lists the methods provided by the Accelerate framework for solving different classes of problems.

Table 2

Iterative methods provided by the Accelerate framework

Ax = b, with A positive definite
  Conjugate gradient (CG)

Ax = b, with A symmetric indefinite, or A square unsymmetric
  GMRES

minₓ ‖ Ax − b ‖₂, with A square or rectangular
  LSMR

Implement an Iterative Method

In general, the solution call to solve AX = B has the following form:

SparseSolve(SparseMETHOD(), A, B, X);

where SparseMETHOD is one of the methods from Table 2.

SparseMETHOD() optionally takes an options structure as an argument. See the method-specific documentation for the available options.

Add an Optional Preconditioner

Optionally, a preconditioner may be supplied:

SparseSolve(SparseMETHOD(), A, B, X, P);

P can be either an enumerated preconditioner type (SparsePreconditioner_t), from which a preconditioner is derived from the matrix A, or a general preconditioner object containing a function pointer.

If the matrix A is not available explicitly, you can pass a block that performs the operation y = op(A)x + beta·y, where op(A) is either A or Aᵀ (indicated by the trans parameter), and beta is 0.0 or 1.0, signaled by the Boolean parameter accumulate. For example, passing the sparse matrix A is equivalent to the following call:

SparseSolve(SparseMETHOD(), B, X,
    ^void(bool accumulate, enum CBLAS_TRANSPOSE trans,
          DenseMatrix_Double X, DenseMatrix_Double Y) {
        if (accumulate) {
            if (trans == CblasNoTrans) SparseMultiplyAdd(A, X, Y);
            else                       SparseMultiplyAdd(SparseGetTranspose(A), X, Y);
        } else {
            if (trans == CblasNoTrans) SparseMultiply(A, X, Y);
            else                       SparseMultiply(SparseGetTranspose(A), X, Y);
        }
    });

The Accelerate framework offers the following preconditioners that may be effective for some problems:

Ax = b
  SparsePreconditionerDiagonal (diagonal, also known as Jacobi)

minₓ ‖ Ax − b ‖₂
  SparsePreconditionerDiagScaling (diagonal scaling)

Because preconditioning is often essential to convergence, accuracy, and performance, be sure to refer to the documentation for SparsePreconditioner_t to find a suitable preconditioner for the particular problem.

A preconditioner P is provided as a struct:

typedef struct {
  SparsePreconditioner_t type;
  void *mem;
  void (*apply) (void *mem, enum CBLAS_TRANSPOSE trans, DenseMatrix_Double X, DenseMatrix_Double Y);
} SparseOpaquePreconditioner_Double;

For user-provided preconditioners, the type must be SparsePreconditionerUser. The member mem is passed without modification as the first argument of apply(). The argument trans is either CblasNoTrans or CblasTrans, indicating whether to perform the operation Y = PX or Y = PᵀX, respectively.

You can get a structure of this form for a preconditioner provided by the Accelerate framework with the following code:

__auto_type P = SparseCreatePreconditioner(type, A);

In this case, you must release memory through a call to SparseRelease(_:) once the preconditioner is no longer required.

If you need to perform your own convergence tests, or you otherwise wish to single-step the iteration, the routine SparseIterate is provided. Because this routine is intended for expert users only, only the block form, with or without a struct-based preconditioner, is provided. Note that the solution X may not be available on every iteration; explicitly finalize the iteration with a call that passes a negative first argument before attempting to use the solution.

See Also

Sparse Matrices

Creating Sparse Matrices

Create sparse matrices for factorization and solving systems.

Solving Systems Using Direct Methods

Use direct methods to solve systems of equations where the coefficient matrix is sparse.

Solving Systems Using Iterative Methods

Use iterative methods to solve systems of equations where the coefficient matrix is sparse.

Creating a Sparse Matrix from Coordinate Format Arrays

Use separate coordinate format arrays to create sparse matrices.

Sparse Solvers

Solve systems of equations where the coefficient matrix is sparse.