OpenGL ES on iOS
OpenGL ES provides a procedural API for submitting primitives to be rendered by a hardware-accelerated graphics pipeline. Graphics commands are consumed by OpenGL ES to generate images that can be displayed to the user or retrieved for further processing outside of OpenGL ES.
The OpenGL ES specification defines the precise behavior for each function. Most commands in OpenGL ES perform one of the following activities:
Reading detailed information provided by the implementation about its capabilities. See “Determining OpenGL ES Capabilities.”
Reading and writing state variables defined by the specification. OpenGL ES state typically represents the current configuration of the graphics pipeline. For example, OpenGL ES 1.1 uses state variables extensively to configure lights, materials, and the computations performed by the fixed-function pipeline.
Creating, modifying or destroying OpenGL ES objects. OpenGL ES objects are not Objective-C objects; they are OpenGL ES resources manipulated through OpenGL ES’s procedural API. See “OpenGL ES Objects Encapsulate Resources on Behalf of Your Application” for more information.
Drawing primitives. Vertices are submitted to the pipeline where they are processed, assembled into primitives and rasterized to a framebuffer.
Which Version(s) of OpenGL ES Should I Target?
When designing your OpenGL ES application, a critical question you must answer is whether your application should support OpenGL ES 2.0, OpenGL ES 1.1 or both.
OpenGL ES 2.0 is more flexible and more powerful than OpenGL ES 1.1 and is the best choice for new applications. Custom vertex or fragment calculations can be implemented clearly and concisely in shaders, with better performance; performing the same calculations in OpenGL ES 1.1 often requires multiple rendering passes or complex state configurations that obscure the intent of the algorithm. The advantage of shaders grows as your algorithms grow in complexity. However, OpenGL ES 2.0 requires more work up front to build your application, because you must recreate some of the infrastructure that OpenGL ES 1.1 provides by default.
OpenGL ES 1.1 provides a standard fixed-function pipeline with good baseline behavior for a 3D application, from transforming and lighting vertices to blending fragments into the framebuffer. If your needs are simple, OpenGL ES 1.1 requires less coding to add OpenGL ES support to your application. Target OpenGL ES 1.1 if your application needs to support all iOS devices.
If you choose to implement an OpenGL ES 1.1 application primarily for compatibility with older devices, consider adding an OpenGL ES 2.0 rendering option that takes advantage of the greater power of OpenGL ES 2.0-capable graphics processors found on newer iOS devices.
Understanding the OpenGL ES Architecture
OpenGL ES works on a few key principles. To design efficient OpenGL ES applications, you need to understand the underlying architecture.
OpenGL ES uses a client-server model, as shown in Figure 1-1. When your application calls an OpenGL ES function, it talks to an OpenGL ES client. The client processes the function call and, where necessary, converts it into commands to deliver to the OpenGL ES server. A client-server model allows the work to process a function call to be divided between the client and the server. The nature of the client, the server, and the communication path between them is specific to each implementation of OpenGL ES; on an iOS device, the workload is divided between the CPU and a dedicated graphics processor.
OpenGL ES Relies on Platform-Specific Libraries For Critical Functionality
The OpenGL ES specification defines how OpenGL ES works, but does not define functions to manage the interaction between OpenGL ES and the hosting operating system. Instead, the specification expects each implementation to provide functions to allocate rendering contexts and system framebuffers.
A rendering context stores the internal state for the OpenGL ES state machine. Rendering contexts allow each OpenGL ES application to maintain its own copy of the state data without worrying about other applications. See “Configuring OpenGL ES Contexts.” You can also use multiple rendering contexts in a single application.
A system framebuffer is a destination for OpenGL ES drawing commands, and is typically associated with the host operating system’s graphics subsystem. iOS does not provide system framebuffers. Instead, iOS extends the framebuffer object provided by OpenGL ES to allow framebuffers that share data with Core Animation. See “Framebuffer Objects are the Only Rendering Target on iOS” for more information on framebuffer objects and “Drawing With OpenGL ES” for a detailed discussion on creating and using framebuffers in your application.
Commands May Be Executed Asynchronously
A benefit of the OpenGL ES client-server model is that an OpenGL ES function can return control to the application before the requested operation completes. If OpenGL ES required every function to complete before returning control to your application, the CPU and GPU would run in lockstep, eliminating many opportunities for parallelism in your application. On iOS, deferring execution of drawing commands is quite common. By deferring several drawing commands and handling them simultaneously, the graphics hardware can remove hidden surfaces before performing costly fragment calculations.
Many OpenGL ES functions implicitly or explicitly flush commands to the graphics hardware. Other OpenGL ES functions flush commands to the graphics processor and wait until some or all pending commands have completed. Whenever possible, design your application to avoid client-server synchronizations.
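The difference between flushing and synchronizing can be sketched as follows. This is a minimal illustration in C; it assumes a current OpenGL ES 2.0 context, and the function name submit_work is ours:

```c
#include <OpenGLES/ES2/gl.h>

// Assumes a current rendering context (created elsewhere, e.g. with EAGLContext).
void submit_work(void)
{
    // Drawing commands are queued by the client and may execute later.
    glClear(GL_COLOR_BUFFER_BIT);

    // glFlush asks the implementation to begin executing queued commands,
    // but returns immediately without waiting for them to complete.
    glFlush();

    // glFinish blocks until every pending command has completed: a full
    // client-server synchronization. Avoid it in performance-critical code.
    glFinish();
}
```

In practice, prefer letting the implementation decide when to flush; explicit glFinish calls eliminate the parallelism between the CPU and the graphics processor described above.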
Commands Are Executed In Order
OpenGL ES guarantees that the results of function calls made to a rendering context act as if they executed in the same order that the functions were called by the client application. When your application makes a function call to OpenGL ES, your application can assume the results from previous functions are available, even if some of the commands have not finished executing.
Parameters are Copied at Call-Time
To allow commands to be processed asynchronously, when your application calls an OpenGL ES function, any parameter data required to complete the function call must be copied by OpenGL ES before control is returned to your application. If a parameter points at an array of vertex data stored in application memory, OpenGL ES must copy the vertex data before returning. This has a couple of important implications. First, an application is free to change any memory it owns regardless of the calls it makes to OpenGL ES, because OpenGL ES and your application never access the same memory simultaneously. Second, copying parameters and reformatting them so that the graphics hardware can read the data adds overhead to every function call. For best performance, your application should define its data in formats that are optimized for the graphics hardware, and it should use buffer objects to explicitly manage memory copies between your application and OpenGL ES.
Implementations May Extend the Capabilities Defined in the Specification
An OpenGL ES implementation may extend the OpenGL ES specification in one of two ways. First, the specification sets minimum requirements that implementations must meet, such as the size of a texture or the number of texture units that the application may access; an implementation is free to support larger values, such as a larger texture or more texture units. Second, OpenGL ES extensions allow an implementation to provide new OpenGL ES functions and constants, and even entirely new features. Apple implements many extensions to allow applications to take advantage of hardware features and to help you improve the performance of your applications. The actual hardware limits and the list of extensions offered may vary depending on which device your application runs on and the version of iOS running on the device. Your application must test these capabilities at runtime and alter its behavior to match.
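Because OpenGL ES reports its extensions as a single space-separated string (returned by glGetString(GL_EXTENSIONS)), a runtime check needs exact token matching rather than a plain substring search. A minimal sketch in C; the helper name has_extension is ours:

```c
#include <string.h>

/* Check whether a space-separated extension list (as returned by
 * glGetString(GL_EXTENSIONS)) contains the exact extension name.
 * A plain strstr() is not enough: "GL_OES_map" would falsely match
 * "GL_OES_mapbuffer", so token boundaries must be verified. */
int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == ext_list) || (p[-1] == ' ');
        int ends   = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}
```

At runtime you would pass the string returned by glGetString(GL_EXTENSIONS). Integer limits can be queried similarly; for example, glGetIntegerv(GL_MAX_TEXTURE_SIZE, &value) reports the largest texture the implementation supports.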
OpenGL ES Objects Encapsulate Resources on Behalf of Your Application
Objects are opaque containers that your application uses to hold configuration state or data needed by the renderer. Because the only access to objects is through the procedural API, the OpenGL ES implementation can choose different strategies when allocating objects for your application. It can store your data in a format or memory location that is optimal for the graphics processor. Another advantage to objects is that they are reusable, allowing your application to configure the object once and use it multiple times.
The most important OpenGL ES object types include:
A texture is an image that can be sampled by the graphics pipeline. This is typically used to map a color image onto primitives but can also be used to map other data, such as a normal map or pre-calculated lighting information. The chapter “Best Practices for Working with Texture Data” discusses critical topics for using textures on iOS.
A buffer object is a block of memory owned by OpenGL ES used to store data for your application. Buffers are used to precisely control the process of copying data between your application and OpenGL ES. For example, if you provide a vertex array to OpenGL ES, it must copy the data every time you submit a drawing call. In contrast, if your application stores its data in a vertex buffer object, the data is copied only when your application sends commands to modify the contents of the vertex buffer object. Using buffers to manage your vertex data can significantly boost the performance of your application.
A vertex array object holds a configuration for the vertex attributes that are to be read by the graphics pipeline. Many applications require a different pipeline configuration for each entity they intend to render. By storing a configuration in a vertex array, you avoid the cost of reconfiguring the pipeline and may allow the implementation to optimize its handling of that particular vertex configuration.
Shader programs, also known as shaders, are also objects. An OpenGL ES 2.0 application creates vertex and fragment shaders to specify the calculations that are to be performed on each vertex or fragment, respectively.
A renderbuffer is a simple 2D graphics image in a specified format, usually holding color, depth, or stencil data. Renderbuffers are not usually used in isolation, but are instead used as attachments to a framebuffer.
Framebuffer objects are the ultimate destination of the graphics pipeline. A framebuffer object is really just a container that attaches textures and renderbuffers to itself to create a complete configuration needed by the renderer. A later chapter, “Drawing With OpenGL ES,” describes strategies for creating and using framebuffers in iOS applications.
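The buffer-object behavior described above can be sketched as follows. This is a minimal illustration in C, assuming a current OpenGL ES 2.0 context; the function name create_vertex_buffer is ours:

```c
#include <OpenGLES/ES2/gl.h>

// Upload vertex data into a buffer object once. After glBufferData
// returns, OpenGL ES owns its own copy of the data, so later draw calls
// read from that copy instead of recopying the application's array.
GLuint create_vertex_buffer(const GLfloat *vertices, GLsizeiptr size)
{
    GLuint buffer;
    glGenBuffers(1, &buffer);                // generate an identifier
    glBindBuffer(GL_ARRAY_BUFFER, buffer);   // first bind allocates the object
    // Copy the vertex data into memory owned by OpenGL ES;
    // GL_STATIC_DRAW hints that the data will rarely change.
    glBufferData(GL_ARRAY_BUFFER, size, vertices, GL_STATIC_DRAW);
    return buffer;
}
```

Compare this with submitting a plain vertex array, which forces OpenGL ES to copy the data on every draw call.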
Although each object type in OpenGL ES has its own functions to manipulate it, all objects share a similar programming model:
Generate an object identifier.
An identifier is a plain integer used to identify a specific object instance. Whenever you need a new object, call OpenGL ES to create a new identifier. Creating the object identifier does not actually allocate an object; it simply reserves a name that will refer to one.
Bind your object to the OpenGL ES context.
Most OpenGL ES functions act implicitly on an object, rather than requiring you to explicitly identify the object in every function call. You set the object to be configured by binding it to the context. Each object type uses different functions to bind it to the context. The first time you bind an object identifier, OpenGL ES allocates and initializes the object.
Modify the state of the object.
Your application makes one or more function calls to configure the object. For example, after binding a texture object, you typically would configure how the texture is filtered and then load image data into the texture object.
Changing an object can be expensive, because it may require new data to be sent to the graphics hardware. Where reasonable, create and configure your objects once, and avoid changing them for the remainder of your application's lifetime.
Use the object for rendering.
Once you’ve created and configured all the objects needed to render a scene, you bind the objects needed by the pipeline and execute one or more drawing functions. OpenGL ES uses the data stored in the objects to render your primitives. The results are sent to the bound framebuffer object.
Delete the object.
When you are done with an object, your application should delete it. When an object is deleted, its contents are destroyed and the object identifier is recycled.
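The five steps above can be sketched with a texture object. This is a minimal illustration in C, assuming a current OpenGL ES 2.0 context and pixel data already prepared by the application; the function name texture_lifecycle is ours:

```c
#include <OpenGLES/ES2/gl.h>

// Walk a texture object through the standard object life cycle.
void texture_lifecycle(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint tex;

    // 1. Generate an object identifier (no storage is allocated yet).
    glGenTextures(1, &tex);

    // 2. Bind it to the context; the first bind creates the actual object.
    glBindTexture(GL_TEXTURE_2D, tex);

    // 3. Modify its state: configure filtering, then load image data.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // 4. Use it for rendering: bind it before issuing draw calls
    //    (draw calls omitted here).

    // 5. Delete it when done; its contents are destroyed and the
    //    identifier is recycled.
    glDeleteTextures(1, &tex);
}
```

The same pattern (generate, bind, configure, use, delete) applies to buffers, renderbuffers, and framebuffers, each through its own set of functions.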
Framebuffer Objects are the Only Rendering Target on iOS
Framebuffer objects are the destination for rendering commands. OpenGL ES 2.0 provides framebuffer objects as part of the core specification; on OpenGL ES 1.1, they are provided by the OES_framebuffer_object extension. Because framebuffer objects are the only rendering target on iOS, Apple guarantees that the OES_framebuffer_object extension is always provided by every OpenGL ES 1.1 implementation on iOS.
Framebuffer objects provide storage for color, depth and/or stencil data by attaching images to the framebuffer, as shown in Figure 1-2. The most common image attachment is a renderbuffer object. However, an OpenGL ES texture can be attached to the color attachment point of a framebuffer instead, allowing images to be rendered directly into a texture. Later, the texture can act as an input to future rendering commands.
Creating a framebuffer takes the following steps:
Generate and bind a framebuffer object.
Generate, bind, and configure an image.
Attach the image to one of the framebuffer’s attachment points.
Repeat steps 2 and 3 for other images.
Test the framebuffer for completeness. The rules for completeness are defined in the OpenGL ES specification. These rules ensure the framebuffer and its attachments are well-defined.
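The steps above can be sketched as follows. This is a minimal illustration in C of an offscreen framebuffer with color and depth renderbuffers, assuming a current OpenGL ES 2.0 context; the function name create_framebuffer is ours:

```c
#include <OpenGLES/ES2/gl.h>

// Build an offscreen framebuffer; returns 0 if it is incomplete.
GLuint create_framebuffer(GLsizei width, GLsizei height)
{
    GLuint framebuffer, colorRenderbuffer, depthRenderbuffer;

    // 1. Generate and bind a framebuffer object.
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // 2. Generate, bind, and configure an image (a color renderbuffer).
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);

    // 3. Attach the image to one of the framebuffer's attachment points.
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // 4. Repeat steps 2 and 3 for other images (here, a depth renderbuffer).
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRenderbuffer);

    // 5. Test the framebuffer for completeness before drawing into it.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  // incomplete: the framebuffer is not usable

    return framebuffer;
}
```

On iOS, a framebuffer that is presented to the screen instead attaches a renderbuffer whose storage is shared with Core Animation, as discussed in “Drawing With OpenGL ES.”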
© 2013 Apple Inc. All Rights Reserved. (Last updated: 2013-04-23)