Perform large-scale mathematical computations and image calculations, optimized for high performance.


Accelerate provides high-performance, energy-efficient computation on the CPU by leveraging its vector-processing capability. The following Accelerate libraries abstract that capability so that code written for them executes appropriate instructions for the processor available at runtime:

  • vImage. A wide range of image-processing functions, including Core Graphics and Core Video interoperation, format conversion, and image manipulation.

  • vDSP. Digital signal processing functions, including 1D and 2D fast Fourier transforms, biquadratic filtering, vector and matrix arithmetic, convolution, and type conversion.

  • vForce. Arithmetic and transcendental functions that operate on vectors.

  • Sparse Solvers, BLAS, and LAPACK. Libraries for performing linear algebra on sparse and dense matrices.

  • BNNS. Subroutines for constructing and running neural networks.

Although not part of the Accelerate framework, the following libraries are closely related:

  • simd. A module for performing computations on small vectors and matrices.

  • Compression. Lossless data compression, supporting the LZFSE, LZ4, LZMA, and zlib algorithms.
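As a quick taste of the Compression library, the sketch below performs one-shot, in-memory compression with the C buffer API; the input string and the extra output slack are illustrative choices, not requirements.

```swift
import Compression
import Foundation

// A minimal sketch of one-shot buffer compression with the LZFSE algorithm.
let input = Array("The quick brown fox jumps over the lazy dog.".utf8)
// Leave slack: very small inputs can grow slightly when compressed.
var output = [UInt8](repeating: 0, count: input.count + 64)

let compressedSize = compression_encode_buffer(
    &output, output.count,      // destination buffer and its capacity
    input, input.count,         // source buffer and its length
    nil,                        // let the library allocate scratch space
    COMPRESSION_LZFSE)          // algorithm choice

if compressedSize > 0 {
    print("Compressed \(input.count) bytes to \(compressedSize) bytes")
}
```

`compression_encode_buffer` returns 0 on failure (including a too-small destination buffer), so checking the returned size before using the output is essential.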




BNNS

Implement and run neural networks, using previously obtained training data.


Quadrature

Approximate the definite integral of a function over a finite or infinite interval.
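Numerical integration with Quadrature takes an integrator choice, tolerances, and a closure for the integrand. A minimal sketch, with an integrand and tolerances chosen purely for illustration:

```swift
import Accelerate

// A minimal sketch: integrate f(x) = x² over [0, 2] using the
// adaptive QAGS integrator. The exact value is 8/3 ≈ 2.667.
let quadrature = Quadrature(integrator: .qags(maxIntervals: 10),
                            absoluteTolerance: 1.0e-8,
                            relativeTolerance: 1.0e-2)

let result = quadrature.integrate(over: 0.0 ... 2.0) { x in
    x * x
}

switch result {
case .success(let integralResult, let estimatedAbsoluteError):
    print("≈ \(integralResult), estimated error \(estimatedAbsoluteError)")
case .failure(let error):
    print("Integration failed: \(error)")
}
```

Returning a `Result` keeps integration failures (for example, non-convergence) explicit rather than trapping.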


BLAS

Apple’s implementation of the Basic Linear Algebra Subprograms (BLAS).
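The BLAS interface is the standard C one, callable directly from Swift. A minimal sketch of a dense matrix–matrix product with `cblas_dgemm`, using small made-up matrices:

```swift
import Accelerate

// A minimal sketch of C = alpha·A·B + beta·C with 2×2 row-major
// matrices, using the C BLAS interface.
let a: [Double] = [1, 2,
                   3, 4]
let b: [Double] = [5, 6,
                   7, 8]
var c = [Double](repeating: 0, count: 4)

cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            2, 2, 2,        // m, n, k
            1.0,            // alpha
            a, 2,           // A and its leading dimension
            b, 2,           // B and its leading dimension
            0.0,            // beta
            &c, 2)          // C and its leading dimension

// c now holds [19, 22, 43, 50], i.e. [[19, 22], [43, 50]].
```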

Sparse Solvers

Solve systems of equations where the coefficient matrix is sparse.
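A minimal sketch of the direct-solve path: factor a small symmetric positive-definite matrix, stored in compressed sparse column form, then solve in place. The 2×2 matrix and right-hand side are made-up illustrative values.

```swift
import Accelerate

// Solve A·x = b for A = [[2, 1], [1, 2]], storing only the lower
// triangle in compressed sparse column form.
var rowIndices: [Int32] = [0, 1, 1]
var columnStarts = [0, 2, 3]
var values = [2.0, 1.0, 2.0]

var attributes = SparseAttributes_t()
attributes.triangle = SparseLowerTriangle
attributes.kind = SparseSymmetric

let structure = SparseMatrixStructure(rowCount: 2, columnCount: 2,
                                      columnStarts: &columnStarts,
                                      rowIndices: &rowIndices,
                                      attributes: attributes,
                                      blockSize: 1)
let A = SparseMatrix_Double(structure: structure, data: &values)

// Cholesky factorization suits symmetric positive-definite systems.
let factorization = SparseFactor(SparseFactorizationCholesky, A)

// Solve in place: on return, xbValues holds the solution x = [1, 1].
var xbValues = [3.0, 3.0]
xbValues.withUnsafeMutableBufferPointer { pointer in
    let xb = DenseVector_Double(count: 2, data: pointer.baseAddress!)
    SparseSolve(factorization, xb)
}

SparseCleanup(factorization)
```

Separating factorization from solving lets one factorization be reused across many right-hand sides.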


vDSP

Perform basic arithmetic operations and common digital signal processing routines on large vectors.
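vDSP's Swift overlay expresses whole-vector operations as single calls. A minimal sketch with small illustrative arrays:

```swift
import Accelerate

// Elementwise arithmetic and a reduction over entire arrays,
// each in a single vDSP call.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]

let sum = vDSP.add(a, b)            // [11, 22, 33, 44]
let product = vDSP.multiply(a, b)   // [10, 40, 90, 160]
let mean = vDSP.mean(a)             // 2.5
```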


vForce

Perform arithmetic and transcendental computations on large vectors.
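vForce applies a transcendental function to every element of a vector in one call rather than looping over scalars. A minimal sketch with illustrative input values:

```swift
import Accelerate

// Whole-array transcendental functions via vForce.
let x: [Double] = [1.0, 4.0, 9.0, 16.0]

let roots = vForce.sqrt(x)      // [1.0, 2.0, 3.0, 4.0]
let logs = vForce.log(roots)    // natural logarithm of each element
```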


vImage

Manipulate large images using the CPU’s vector processor.
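vImage operations work on `vImage_Buffer` descriptions of pixel data. A minimal sketch that computes the histogram of an 8-bit planar image; the 4×4 pixel values are made up for illustration:

```swift
import Accelerate

// Histogram of a tiny 8-bit planar (single-channel) image.
var pixels: [UInt8] = [  0,   0,  64,  64,
                        64, 128, 128, 128,
                       192, 192, 192, 192,
                       255, 255, 255, 255]

var histogram = [vImagePixelCount](repeating: 0, count: 256)

pixels.withUnsafeMutableBytes { ptr in
    // A vImage_Buffer describes the pixel data, dimensions, and stride.
    var buffer = vImage_Buffer(data: ptr.baseAddress,
                               height: 4, width: 4, rowBytes: 4)
    vImageHistogramCalculation_Planar8(&buffer, &histogram,
                                       vImage_Flags(kvImageNoFlags))
}
// histogram[128] is now 3, histogram[192] is now 4, and so on.
```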


simd

Perform computations on small vectors and matrices.
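Where the other libraries target large arrays, simd covers geometry-sized types (2-, 3-, and 4-element vectors and small matrices) with operators and free functions. A minimal sketch with illustrative values:

```swift
import simd

// Small fixed-size vectors with built-in operations.
let v = SIMD3<Float>(1, 0, 0)
let w = SIMD3<Float>(0, 1, 0)

let dot = simd_dot(v, w)        // 0
let cross = simd_cross(v, w)    // (0, 0, 1)

// A 2×2 matrix (90° rotation) applied to a vector.
let m = simd_float2x2(rows: [SIMD2<Float>(0, -1),
                             SIMD2<Float>(1,  0)])
let rotated = m * SIMD2<Float>(1, 0)   // (0, 1)
```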