Protocol

MTLComputeCommandEncoder

An object used to encode commands in a compute pass.

Declaration

@protocol MTLComputeCommandEncoder

Overview

Don't implement this protocol yourself; instead, create compute command encoders by calling the computeCommandEncoderWithDispatchType: method of the MTLCommandBuffer object into which you want to encode compute commands. You can encode multiple commands in a single compute pass.

To encode a compute command:

  1. Call the setComputePipelineState: method with the MTLComputePipelineState object that contains the compute function to be executed.

  2. Call one or more of the encoder's other methods to specify arguments for the compute function.

  3. Call the dispatchThreadgroups:threadsPerThreadgroup: method to encode a compute command.

Repeat these steps to encode additional commands, then call endEncoding to finish the compute pass. You must always call endEncoding before releasing the encoder or before creating another encoder.
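The steps above can be sketched as follows. This is a minimal example, assuming `commandBuffer` is an existing MTLCommandBuffer, `pipelineState` is a compiled MTLComputePipelineState, and `inputBuffer` is an MTLBuffer that the kernel reads at buffer index 0; the grid and threadgroup sizes are illustrative.

```objc
// Create the encoder from the command buffer.
id<MTLComputeCommandEncoder> encoder =
    [commandBuffer computeCommandEncoderWithDispatchType:MTLDispatchTypeSerial];

// 1. Set the pipeline state that contains the compute function.
[encoder setComputePipelineState:pipelineState];

// 2. Specify arguments for the compute function.
[encoder setBuffer:inputBuffer offset:0 atIndex:0];

// 3. Encode the dispatch command.
MTLSize threadgroups = MTLSizeMake(8, 8, 1);
MTLSize threadsPerThreadgroup = MTLSizeMake(16, 16, 1);
[encoder dispatchThreadgroups:threadgroups
        threadsPerThreadgroup:threadsPerThreadgroup];

// End the pass before releasing the encoder or creating another one.
[encoder endEncoding];
```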

Topics

Specifying the Compute Pipeline State

- setComputePipelineState:

Sets the current compute pipeline state object.

Required.

Specifying Arguments for a Compute Function

- setBuffer:offset:atIndex:

Sets a buffer for the compute function.

Required.

- setBuffers:offsets:withRange:

Sets an array of buffers for the compute function.

Required.

- setBufferOffset:atIndex:

Sets where the data begins in a buffer already bound to the compute shader.

Required.

- setBytes:length:atIndex:

Sets a block of data for the compute shader.

Required.
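For small, transient data, setBytes:length:atIndex: avoids creating an MTLBuffer. A brief sketch, in which the `params` struct, its values, and the buffer index are hypothetical:

```objc
// setBytes: is intended for small constant data (less than 4 KB);
// Metal copies the bytes, so the source can be a stack variable.
struct { float scale; uint32_t count; } params = { 0.5f, 1024 };
[encoder setBytes:&params length:sizeof(params) atIndex:1];
```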

- setSamplerState:atIndex:

Sets a sampler for the compute function.

Required.

- setSamplerState:lodMinClamp:lodMaxClamp:atIndex:

Sets a sampler for the compute function, specifying clamp values for the level of detail.

Required.

- setSamplerStates:withRange:

Sets multiple samplers for the compute function.

Required.

- setSamplerStates:lodMinClamps:lodMaxClamps:withRange:

Sets multiple samplers for the compute function, specifying clamp values for the level of detail of each sampler.

Required.

- setTexture:atIndex:

Sets a texture for the compute function.

Required.

- setTextures:withRange:

Sets an array of textures for the compute function.

Required.

- setThreadgroupMemoryLength:atIndex:

Sets the size of a block of threadgroup memory.

Required.

Executing a Compute Function Directly

- dispatchThreadgroups:threadsPerThreadgroup:

Encodes a compute command using a grid aligned to threadgroup boundaries.

Required.

- dispatchThreads:threadsPerThreadgroup:

Encodes a compute command using an arbitrarily sized grid.

Required.
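The two direct dispatch methods differ in how they handle grids that are not a multiple of the threadgroup size. A sketch, assuming `encoder` is a compute command encoder and using a hypothetical 1920×1080 workload; in practice you would encode only one of the two dispatches:

```objc
MTLSize groupSize = MTLSizeMake(16, 16, 1);

// dispatchThreads:threadsPerThreadgroup: takes the exact grid size;
// Metal creates nonuniform threadgroups at the edges (requires a GPU
// that supports nonuniform threadgroup sizes).
MTLSize gridSize = MTLSizeMake(1920, 1080, 1);
[encoder dispatchThreads:gridSize threadsPerThreadgroup:groupSize];

// dispatchThreadgroups:threadsPerThreadgroup: takes a whole number of
// threadgroups, so round up and bounds-check inside the kernel.
MTLSize threadgroups = MTLSizeMake((1920 + 15) / 16, (1080 + 15) / 16, 1);
[encoder dispatchThreadgroups:threadgroups threadsPerThreadgroup:groupSize];
```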

Executing a Compute Function Indirectly

Specifying Drawing and Dispatch Arguments Indirectly

Use indirect commands if you don't know your draw or dispatch call arguments when you encode the command.

- dispatchThreadgroupsWithIndirectBuffer:indirectBufferOffset:threadsPerThreadgroup:

Encodes a dispatch call for a compute pass, using an indirect buffer that defines the size of a grid aligned to threadgroup boundaries.

Required.

MTLDispatchThreadgroupsIndirectArguments

The data layout required for the arguments needed to specify the size of threadgroups.
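An indirect dispatch reads its threadgroup counts from a buffer at execution time rather than baking them in at encode time. A sketch, assuming `device` is an MTLDevice and `encoder` is a compute command encoder; here the CPU writes the arguments, but a GPU kernel could write them instead:

```objc
// MTLDispatchThreadgroupsIndirectArguments holds the threadgroup counts
// per grid dimension; the values here are illustrative.
MTLDispatchThreadgroupsIndirectArguments args = { {8, 8, 1} };
id<MTLBuffer> indirectBuffer =
    [device newBufferWithBytes:&args
                        length:sizeof(args)
                       options:MTLResourceStorageModeShared];

// The threadgroup counts come from the buffer; the threadgroup size
// is still specified at encode time.
[encoder dispatchThreadgroupsWithIndirectBuffer:indirectBuffer
                           indirectBufferOffset:0
                          threadsPerThreadgroup:MTLSizeMake(16, 16, 1)];
```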

Specifying Resource Usage for Argument Buffers

- useResource:usage:

Specifies that a resource in an argument buffer can be safely used by a compute pass.

Required.

- useResources:count:usage:

Specifies that an array of resources in an argument buffer can be safely used by a compute pass.

Required.

- useHeap:

Specifies that a heap containing resources in an argument buffer can be safely used by a compute pass.

Required.

- useHeaps:count:

Specifies that an array of heaps containing resources in an argument buffer can be safely used by a compute pass.

Required.

MTLResourceUsage

The options that describe how a resource within an argument buffer will be used in a graphics or compute function.
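When a kernel reaches a resource only through an argument buffer, Metal cannot infer the dependency, so you declare it explicitly. A sketch, assuming `texture` and `argumentBuffer` are existing resources referenced by the kernel's argument buffer:

```objc
// Declare that the compute pass reads `texture` through the argument
// buffer so Metal makes it resident and orders access correctly.
[encoder useResource:texture usage:MTLResourceUsageRead];
[encoder setBuffer:argumentBuffer offset:0 atIndex:0];
```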

Specifying Imageblock Size

- setImageblockWidth:height:

Sets the size, in pixels, of the imageblock.

Required.

Specifying the Stage-In Region

- setStageInRegion:

Sets the region of the stage-in attributes to apply to the compute kernel.

Required.

- setStageInRegionWithIndirectBuffer:indirectBufferOffset:

Sets the region of the stage-in attributes to apply to the compute kernel using an indirect buffer.

Required.

MTLStageInRegionIndirectArguments

The data layout required for the arguments needed to specify the stage-in region.

Executing Commands Concurrently or Serially

Define whether the encoder's commands should execute in parallel or in succession.

dispatchType

The strategy to use when dispatching commands encoded by the compute command encoder.

Required.

MTLDispatchType

Constants indicating how the compute command encoder's commands are dispatched.

- memoryBarrierWithScope:

Encodes a barrier so that changes to a set of resource types made by commands encoded before the barrier are completed before further commands are executed.

Required.

- memoryBarrierWithResources:count:

Encodes a barrier so that changes to a set of resources made by commands encoded before the barrier are completed before further commands are executed.

Required.
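Barriers matter chiefly for concurrent encoders. A sketch, assuming `encoder` was created with dispatch type MTLDispatchTypeConcurrent and `producerGroups`, `consumerGroups`, and `groupSize` are hypothetical dispatch sizes:

```objc
// Without a barrier, concurrently dispatched commands may overlap.
[encoder dispatchThreadgroups:producerGroups threadsPerThreadgroup:groupSize];

// The barrier ensures buffer writes by dispatches encoded before it
// complete before dispatches encoded after it execute.
[encoder memoryBarrierWithScope:MTLBarrierScopeBuffers];

[encoder dispatchThreadgroups:consumerGroups threadsPerThreadgroup:groupSize];
```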

Executing Commands From Indirect Command Buffers

- executeCommandsInBuffer:indirectBuffer:indirectBufferOffset:

Encodes a command to execute commands in an indirect command buffer, specifying the range indirectly.

Required.

- executeCommandsInBuffer:withRange:

Encodes a command to execute commands in an indirect command buffer.

Required.
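A brief sketch of replaying previously encoded commands, assuming `icb` is an MTLIndirectCommandBuffer populated with compute commands and `count` is the number of commands to run:

```objc
// Execute the first `count` commands stored in the indirect command buffer.
[encoder executeCommandsInBuffer:icb withRange:NSMakeRange(0, count)];
```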

Synchronizing Command Execution for Untracked Resources

- updateFence:

Tells the GPU to update the fence after all commands encoded by the compute command encoder have finished executing.

Required.

- waitForFence:

Tells the GPU to wait until the fence is updated before executing any commands encoded by the compute command encoder.

Required.
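Fences order access to untracked resources across passes. A sketch, assuming `fence` is an MTLFence from the same device, and `producerEncoder` and `consumerEncoder` are hypothetical compute command encoders whose passes share an untracked resource:

```objc
// Signal the fence after all of the producer pass's commands finish.
[producerEncoder updateFence:fence];
[producerEncoder endEncoding];

// Make the consumer pass wait for the fence before its commands run.
[consumerEncoder waitForFence:fence];
// ... encode commands that read the shared resource ...
[consumerEncoder endEncoding];
```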

Sampling Compute Execution Data

- sampleCountersInBuffer:atSampleIndex:withBarrier:

Encodes a command to sample hardware counters at this point in the compute pass and store the samples into a counter sample buffer.

Required.

Relationships

Inherits From

MTLCommandEncoder

See Also

Parallel Computation

Hello Compute

Demonstrates how to perform data-parallel computations using the GPU.

Creating Threads and Threadgroups

Learn how Metal organizes compute-processing workloads.

Calculating Threadgroup and Grid Sizes

Calculate the optimum sizes for threadgroups and grids when dispatching compute-processing workloads.

MTLComputePipelineDescriptor

An object used to customize how a new compute pipeline state object is compiled.

MTLComputePipelineState

An object that contains a compiled compute pipeline.