Use iterative methods to solve large problems faster and with a lower memory overhead than with direct methods.
Framework: Accelerate
Overview
In addition to direct methods, Sparse Solvers in the Accelerate framework offers iterative methods for solving systems of equations. Table 1 summarizes the differences between the two approaches.
Table 1. Comparison of direct and iterative methods

|                     | Direct methods                                          | Iterative methods                                                                    |
| ------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------ |
| Ease of use         | Simple                                                  | Complex                                                                              |
| Accuracy            | Machine precision                                       | Square root of machine precision                                                     |
| Speed               | Fast for small problems; quite fast for larger problems | Fastest for large problems, but only with a suitable problem-specific preconditioner |
| Memory requirements | High                                                    | Low                                                                                  |
Iterative methods provide a way for expert users to find approximate solutions faster, and using less memory, than direct methods allow. Iterative methods can also be used when forming the explicit matrix is prohibitively expensive, but matrix-vector multiplications can be performed efficiently. However, to achieve these gains, problem-specific knowledge is required to select an appropriate preconditioner (an operator that approximates the inverse of A). It's best to try a direct method before resorting to iterative methods.
As the name suggests, an iterative method generates a series of solutions that converge to the true solution. The iteration can be stopped as soon as the solution is sufficiently close to the answer. In most cases, the iteration stagnates before reaching the exact solution, due to the limits of numerical accuracy inherent in floatingpoint arithmetic. In fact, some algorithms may fail to converge in finite time even in exact arithmetic.
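To illustrate this convergence behavior, here is a minimal sketch that's independent of the Accelerate API: a plain Jacobi iteration on a tiny system, which stops as soon as successive iterates stop changing meaningfully.

```swift
// Jacobi iteration for the 2×2 system  4x + y = 1,  x + 3y = 2.
// Each sweep solves row i for x[i] using the previous iterate, producing
// a series of approximate solutions that converges to the true solution.
var x = [0.0, 0.0]
for iteration in 1...100 {
    let next = [(1.0 - x[1]) / 4.0,
                (2.0 - x[0]) / 3.0]
    let change = max(abs(next[0] - x[0]), abs(next[1] - x[1]))
    x = next
    if change < 1e-12 {                 // stop once sufficiently converged
        print("Converged after \(iteration) sweeps: \(x)")
        break
    }
}
// The exact solution is x = 1/11 ≈ 0.0909, y = 7/11 ≈ 0.6364.
```

In finite precision the measured change eventually stagnates near machine precision, which is why production solvers use residual-based tolerances rather than demanding an exact answer.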
Table 2 lists the methods provided by the Accelerate framework for solving different classes of problems.
Table 2. Iterative methods by problem class

| Ax = b (A positive definite) | Ax = b (A symmetric indefinite, or A square unsymmetric) | minₓ ‖Ax − b‖₂ (A square or rectangular) |
| ---------------------------- | -------------------------------------------------------- | ----------------------------------------- |
| Conjugate gradient (CG)      | GMRES, DQGMRES, FGMRES                                   | LSMR                                      |
Implement an Iterative Method
In general, the solution call to solve AX = B has the form SparseSolve(method, A, B, X), where method is one of the methods from the table above. Each method optionally takes a structure of options as an argument. See the method-specific documentation to find out what these options are.
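For example, here is a minimal end-to-end sketch of that call shape. The matrix values are illustrative only, and the exact initializer and option names are assumptions based on the Accelerate Sparse Solvers API rather than code from the original article.

```swift
import Accelerate

// A 3×3 symmetric positive-definite matrix in coordinate form
// (lower triangle only, marked symmetric in the attributes).
let rows: [Int32]  = [0, 1, 2, 1, 2]
let cols: [Int32]  = [0, 1, 2, 0, 1]
let vals: [Double] = [4, 4, 4, 1, 1]

var attributes = SparseAttributes_t()
attributes.triangle = SparseLowerTriangle
attributes.kind = SparseSymmetric

let A = SparseConvertFromCoordinate(3, 3, 5, 1, attributes,
                                    rows, cols, vals)
defer { SparseCleanup(A) }

var bValues: [Double] = [1, 1, 1]
var xValues = [Double](repeating: 0, count: 3)

bValues.withUnsafeMutableBufferPointer { bPtr in
    xValues.withUnsafeMutableBufferPointer { xPtr in
        let b = DenseVector_Double(count: 3, data: bPtr.baseAddress!)
        let x = DenseVector_Double(count: 3, data: xPtr.baseAddress!)
        // CG suits the positive-definite case (see Table 2).
        let status = SparseSolve(SparseConjugateGradient(), A, b, x)
        if status != SparseIterativeConverged {
            print("Failed to converge: \(status)")
        }
    }
}
```

To pass options, construct the method with an options structure instead, for example SparseLSMR(options:) with a SparseLSMROptions value; see the options documentation for the available fields.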
Add an Optional Preconditioner
Optionally, a preconditioner P may be supplied as an additional argument to SparseSolve. P can be either a value of the enumerated type SparsePreconditioner_t, selecting a preconditioner that's derived from the matrix A, or a general preconditioner object containing a function pointer.
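As a sketch of the enumerated form, assuming A, b, and x have been constructed as in the earlier example (the trailing-argument position of the preconditioner is an assumption based on the API's overloads):

```swift
import Accelerate

// Assumes A: SparseMatrix_Double and b, x: DenseVector_Double already exist.
// Request the diagonal preconditioner derived from A.
let status = SparseSolve(SparseConjugateGradient(), A, b, x,
                         SparsePreconditionerDiagonal)
```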
If the matrix A is not available explicitly, you can instead pass a block that performs the operation y = op(A)x + βy, where op(A) is A or Aᵀ and β is 0 or 1, signaled by the Boolean parameter accumulate. For example, passing the sparse matrix A is equivalent to supplying a block that performs this multiplication with A directly.
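A sketch of that equivalent block-based call, assuming A, b, and x as in the earlier example; the parameter order of the block (accumulate, trans, X, Y) and the trailing-closure overload shape are assumptions based on the API's description above:

```swift
import Accelerate

// Assumes A: SparseMatrix_Double and b, x: DenseVector_Double already exist.
let status = SparseSolve(SparseLSMR(), b, x) { accumulate, trans, X, Y in
    // Apply op(A): use the transpose when requested.
    let op = (trans == CblasTrans) ? SparseGetTranspose(A) : A
    if accumulate {
        SparseMultiplyAdd(op, X, Y)   // Y += op(A) X   (β = 1)
    } else {
        SparseMultiply(op, X, Y)      // Y  = op(A) X   (β = 0)
    }
}
```

The least-squares methods exercise both A and Aᵀ, so the block must honor the trans parameter rather than always multiplying by A.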
The Accelerate framework offers the following preconditioners that may be effective for some problems:

| Ax = b                       | minₓ ‖Ax − b‖₂                  |
| ---------------------------- | -------------------------------- |
| SparsePreconditionerDiagonal | SparsePreconditionerDiagScaling  |
Because preconditioning is often essential to convergence, accuracy, and performance, be sure to refer to the documentation for SparsePreconditioner_t to find a suitable preconditioner for the particular problem.
A preconditioner P is provided as a struct that pairs an opaque state pointer with the function that applies the preconditioner.
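A rough sketch of that struct's shape as seen from Swift follows. This is an approximation for illustration only, with the function-pointer signature assumed from the member descriptions; see the Accelerate Sparse Solvers headers for the authoritative definition.

```swift
import Accelerate

// Approximate shape only: the real type is declared by Accelerate's
// Sparse Solvers, not defined in user code.
struct OpaquePreconditionerSketch {
    var type: SparsePreconditioner_t        // the preconditioner kind
    var mem: UnsafeMutableRawPointer?       // passed through to `apply` unchanged
    var apply: (UnsafeMutableRawPointer?,   // mem
                CBLAS_TRANSPOSE,            // CblasNoTrans → Y = PX; CblasTrans → Y = PᵀX
                DenseVector_Double,         // X
                DenseVector_Double)         // Y
               -> Void
}
```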
For user-provided preconditioners, the type member must be SparsePreconditionerUser. The member mem is passed without modification as the first argument of apply(). The argument trans is either CblasNoTrans or CblasTrans, indicating whether to perform the operation Y = PX or Y = PᵀX, respectively.
You can get a structure of this form for a preconditioner provided by the Accelerate framework by calling SparseCreatePreconditioner with one of the enumeration values above and the matrix A.
In this case, you must release memory through a call to SparseRelease(_:) once the preconditioner is no longer required.
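A sketch of that call, assuming A, b, and x as in the earlier examples (the argument order of SparseCreatePreconditioner here is an assumption based on the description above):

```swift
import Accelerate

// Assumes A: SparseMatrix_Double and b, x: DenseVector_Double already exist.
let P = SparseCreatePreconditioner(SparsePreconditionerDiagonal, A)
let status = SparseSolve(SparseConjugateGradient(), A, b, x, P)
// Release the preconditioner's memory once it's no longer required,
// as described above.
```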
If you need to perform your own convergence tests, or you otherwise wish to single-step the iteration, the routine SparseIterate is provided. Because this routine is intended for expert users only, only the block form, with or without a struct-based preconditioner, is provided. Note that the solution X may not be available on all iterations, and you should explicitly finalize the iteration with a call using a negative first argument before attempting to use the solution.