Execute code concurrently on multicore hardware by submitting work to dispatch queues managed by the system.


Grand Central Dispatch (GCD) comprises language features, runtime libraries, and system enhancements that provide systemic, comprehensive support for concurrent code execution on multicore hardware in macOS, iOS, watchOS, and tvOS.

The BSD subsystem, Core Foundation, and Cocoa APIs have all been extended to use these enhancements to help both the system and your application run faster, more efficiently, and with improved responsiveness. Consider how difficult it is for a single application to use multiple cores effectively, let alone to do so on computers with different numbers of cores, or in an environment where multiple applications compete for those cores. GCD, operating at the system level, can better accommodate the needs of all running applications, matching them to the available system resources in a balanced fashion.

Dispatch Objects and ARC

When you build your app using the Objective-C compiler, all dispatch objects are Objective-C objects. As such, when automatic reference counting (ARC) is enabled, dispatch objects are retained and released automatically just like any other Objective-C object. When ARC is not enabled, use the dispatch_retain and dispatch_release functions (or Objective-C semantics) to retain and release your dispatch objects. You cannot use the Core Foundation retain/release functions.

If you need to use retain/release semantics in an ARC-enabled app with a later deployment target (to maintain compatibility with existing code), you can disable Objective-C-based dispatch objects by adding -DOS_OBJECT_USE_OBJC=0 to your compiler flags.


Managing Dispatch Queues

GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Work submitted to dispatch queues is executed on a pool of threads fully managed by the system. No guarantee is made as to the thread on which a task executes.

Dispatch Queue Types

Attributes to use when creating new dispatch queues.


A dispatch queue is a lightweight object to which your application submits blocks for subsequent execution.


Returns the serial dispatch queue associated with the application’s main thread.


Returns a system-defined global concurrent queue with the specified quality of service class.


Returns the queue on which the currently executing block is running.


Sets the target queue for the given object.


Submits a block for asynchronous execution on a dispatch queue and returns immediately.


Submits an application-defined function for asynchronous execution on a dispatch queue and returns immediately.


Submits an application-defined function for synchronous execution on a dispatch queue.


Enqueues a block for execution at the specified time.


Enqueues an application-defined function for execution at a specified time.


Submits an application-defined function to a dispatch queue for multiple invocations.


Returns the label specified for the queue when the queue was created.


Returns the value for the key associated with the current dispatch queue.


Sets the key/value data for the specified dispatch queue.


Gets the value for the key associated with the specified dispatch queue.


A predicate for use with the dispatch_once function.


Executes a block object once and only once for the lifetime of an application.


Executes an application-defined function once and only once for the lifetime of an application.


Executes blocks submitted to the main queue.

Managing Units of Work

Dispatch blocks allow you to configure properties of individual units of work on a queue directly. They also allow you to address individual work units for the purposes of waiting for their completion, getting notified about their completion, and/or canceling them.


The prototype of blocks submitted to dispatch queues, which take no arguments and have no return value.


The prototype of functions submitted to dispatch queues.


Creates a new dispatch block on the heap using an existing block and the given flags.


Creates a new dispatch block on the heap from an existing block and the given flags, and assigns it the specified QoS class and relative priority.


Creates, synchronously executes, and releases a dispatch block from the specified block and flags.


Waits synchronously until execution of the specified dispatch block has completed or until the specified timeout has elapsed.


Schedules a notification block to be submitted to a queue when the execution of a specified dispatch block has completed.


Asynchronously cancels the specified dispatch block.


Tests whether the given dispatch block has been canceled.

Prioritizing Work and Specifying Quality of Service

Dispatch Queue Priorities

Used to select the appropriate global concurrent queue.


Returns attributes suitable for creating a dispatch queue with the desired quality-of-service information.

Using Dispatch Groups

Grouping blocks allows for aggregate synchronization. Your application can submit multiple blocks and track when they all complete, even though they might run on different queues. This behavior can be helpful when progress can’t be made until all of the specified tasks are complete.


A group of block objects submitted to a queue for asynchronous invocation.


Submits a block to a dispatch queue and associates the block with the specified dispatch group.


Submits an application-defined function to a dispatch queue and associates it with the specified dispatch group.


Schedules an application-defined function to be submitted to a queue when a group of previously submitted block objects have completed.


Waits synchronously for the previously submitted block objects to complete; returns if the blocks do not complete before the specified timeout period has elapsed.

Using Dispatch Semaphores

A dispatch semaphore is an efficient implementation of a traditional counting semaphore. Dispatch semaphores call down to the kernel only when the calling thread needs to be blocked. If the calling thread does not need to block, no kernel call is made.


Waits for (decrements) a semaphore.

Using Dispatch Barriers

A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.


Submits a barrier block for asynchronous execution and returns immediately.


Submits a barrier function for asynchronous execution and returns immediately.


Submits a barrier block object for execution and waits until that block completes.


Submits a barrier function for execution and waits until that function completes.

Using Dispatch Data

Dispatch Data Object Constants

Constants representing data objects.

Dispatch Data Destructor Constants

Constants representing the destructors to use for data objects.


An immutable object representing a contiguous or sparse region of memory.


A block to invoke for every contiguous memory region in a data object.


Creates a new dispatch data object with the specified memory buffer.


Returns the logical size of the memory managed by a dispatch data object.


Returns a new dispatch data object containing a contiguous representation of the specified object’s memory.


Returns a new dispatch data object consisting of the concatenated data from two other data objects.


Returns a new dispatch data object whose contents consist of a portion of another object’s memory region.


Traverses the memory of a dispatch data object and executes custom code on each region.


Returns a data object containing a portion of the data in another data object.

Using Dispatch Time

Dispatch Time Multiplier Constants

Multipliers for calculating time values.


A somewhat abstract representation of time.


Creates a dispatch_time_t relative to the default clock or modifies an existing dispatch_time_t.


Creates a dispatch_time_t using an absolute time according to the wall clock.

Managing Dispatch Sources


Defines a common set of properties and methods that are shared by all dispatch source types.


An identifier for the type of system object being monitored by a dispatch source.


A file descriptor used for I/O operations.


Creates a new dispatch source to monitor low-level system objects and automatically submit a handler block to a dispatch queue in response to events.


Returns pending data for the dispatch source.


Returns the underlying system handle associated with the specified dispatch source.


Returns the mask of events monitored by the dispatch source.


Merges data into a dispatch source of type DISPATCH_SOURCE_TYPE_DATA_ADD or DISPATCH_SOURCE_TYPE_DATA_OR and submits its event handler block to its target queue.


Sets a start time, interval, and leeway value for a timer source.


Sets the registration handler block for the given dispatch source.


Sets the registration handler function for the given dispatch source.


Sets the event handler block for the given dispatch source.


Sets the event handler function for the given dispatch source.


Sets the cancellation handler block for the given dispatch source.


Sets the cancellation handler function for the given dispatch source.


Asynchronously cancels the dispatch source, preventing any further invocation of its event handler block.


Tests whether the given dispatch source has been canceled.

Managing Dispatch I/O

The dispatch I/O channel API lets you manage file descriptor–based operations. This API supports both stream-based and random-access semantics for accessing the contents of the file descriptor.

Dispatch I/O Channel Types

The types of dispatch I/O channels that may be created.

Dispatch I/O Channel Closing Options

The options to use when closing a dispatch I/O channel.

Dispatch I/O Channel Configuration Options

The options to use when configuring a channel.


A dispatch I/O channel.


The type of a dispatch I/O channel.


A handler block used to process operations on a dispatch I/O channel.


Closes the specified channel to new read and write operations.


The type for flags used to specify closing options for a channel.


Sets the interval (in nanoseconds) at which to invoke the I/O handlers for the channel.


The type for flags used to specify the dispatch interval of a channel.

Working with Dispatch Objects

GCD provides dispatch object interfaces to allow your application to manage aspects of processing such as memory management, pausing and resuming execution, defining object context, and logging task data. Dispatch objects must be manually retained and released and are not garbage collected.