The OS X Accessibility Protocol
Apple includes the informal
NSAccessibility protocol in AppKit. OS X accessibility support includes:
The accessibility protocol that the Cocoa framework implements to allow applications to represent themselves to assistive applications and technologies
APIs an assistive application uses to drive the user interface of another application running in OS X
This chapter introduces the accessibility protocol. It describes:
The model that represents accessible applications to assistive technologies
The accessibility object that represents user interface objects
Some of the ways an assistive application interacts with an accessible application
If you’re an application developer, you should read this chapter to learn about the OS X accessibility protocol. Then, if you’re ready to access-enable your application, you should read Accessibility Programming Guidelines for Mac.
If you are developing an assistive application, you should read this chapter to learn how accessible applications represent themselves in OS X. You’ll find out what information your assistive application can expect to get from an accessible application.
The Accessibility Model
An assistive application helps a user interact with the applications on the user’s computer. To do this, an assistive application must be able to access everything in an application’s user interface and perform all the application’s functions. To be accessible, therefore, an application must provide information about its user interface and capabilities in a standard manner that any assistive application or technology can understand.
This is a challenge because each application type has its own native way of representing its user interface. Cocoa applications, for example, use the
NSControl classes to display windows and controls. Other types of applications use other native constructs.
Apple solved this problem by introducing a generic object, called an accessibility object. In an accessible application, every user interface element, such as a window, a control, or even the application itself, is represented by an accessibility object. To learn more about the accessibility object and the information it provides, see “The Accessibility Object.”
Accessibility objects provide a uniform representation of an application’s user interface elements, regardless of the application framework on which the application depends. Figure 3-1 shows how an assistive application communicates with different types of applications using the accessibility objects the applications provide.
The OS X accessibility model represents an application’s user interface as a hierarchy of accessibility objects. For the most part, the hierarchy is defined by the parent-child relationships among accessibility objects. For example, an application’s dialog window might contain several buttons. The accessibility object representing the dialog contains a list of child accessibility objects, each representing a button in the dialog. In turn, each accessibility object representing one of the buttons knows its parent is the accessibility object representing the dialog.
Of course, the accessibility objects representing the menu bar and windows in an application are children of the application-level accessibility object. Even the application-level accessibility object has a parent, which is the system-wide accessibility object. An application never needs to worry about its system-wide parent because it is out of the application’s scope. On the other hand, an assistive application might query the system-wide accessibility object to find out which running application has keyboard focus.
Figure 3-2 shows the hierarchy of accessibility objects in a simple application.
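The parent-child structure described above can be pictured with a small sketch. This is a toy model in Python, not the actual NSAccessibility API; the class and field names are invented for illustration:

```python
# Toy model of an accessibility hierarchy; not the real NSAccessibility API.
class AccessibilityObject:
    def __init__(self, role, parent=None):
        self.role = role          # e.g. "application", "window", "button"
        self.parent = parent      # for the application-level object, the
                                  # parent would be the system-wide object
        self.children = []
        if parent is not None:
            parent.children.append(self)

# A dialog window containing two buttons, as in the example above.
app = AccessibilityObject("application")
dialog = AccessibilityObject("window", parent=app)
ok_button = AccessibilityObject("button", parent=dialog)
cancel_button = AccessibilityObject("button", parent=dialog)

# Each button knows its parent is the dialog; the dialog lists its buttons.
assert ok_button.parent is dialog
assert dialog.children == [ok_button, cancel_button]
```

An assistive application can walk such a hierarchy in either direction: down from the application to every control, or up from any control to its containers.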
A strength of the accessibility object hierarchy is that it can leave out implementation-specific details that are irrelevant to an assistive application and, by extension, to the user. For example, in Cocoa, a button in a window is usually implemented as a button cell within a button control, which is in a content view within a window. A user has no interest in this detailed containment hierarchy; she only needs to know that there’s a button in a window. If the application’s accessibility hierarchy contains an accessibility object for each of these intermediate objects, however, an assistive application has no choice but to present them to the user. This results in a poor user experience because the user is forced to take several steps to get from the window to the button. Figure 3-3 shows how such a containment hierarchy might look.
To exclude this unnecessary information, the accessibility protocol allows an application to specify some accessibility objects as ignored. Continuing the Cocoa button example, the application can designate as ignored the accessibility objects representing the button control and the content view. This allows an application to present to an assistive application only the significant objects in its accessibility hierarchy. Figure 3-4 shows how the same hierarchy shown in Figure 3-3 might be presented to an assistive application.
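The pruning of ignored objects can be sketched as follows. This is a hypothetical Python model (the class, field, and function names are invented); it shows how each ignored node is replaced by its own unignored descendants:

```python
# Hypothetical sketch of pruning ignored objects from the hierarchy an
# assistive application sees; not the actual NSAccessibility API.
class Node:
    def __init__(self, role, ignored=False, children=()):
        self.role = role
        self.ignored = ignored
        self.children = list(children)

def unignored_children(node):
    """Replace each ignored child with its own unignored descendants."""
    result = []
    for child in node.children:
        if child.ignored:
            result.extend(unignored_children(child))
        else:
            result.append(child)
    return result

# The Cocoa button example: window > content view (ignored) >
# button control (ignored) > button cell.
cell = Node("button")
control = Node("buttonControl", ignored=True, children=[cell])
content = Node("contentView", ignored=True, children=[control])
window = Node("window", children=[content])

# The assistive application sees the button directly under the window.
assert [c.role for c in unignored_children(window)] == ["button"]
```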
An assistive application helps a user perform tasks by telling an application’s accessibility objects to perform actions. For the most part, actions correspond to things a user can do with a single mouse click. Each accessibility object contains information about which actions it supports, if any. The accessibility object representing a button, for example, supports the press action. When a user wants to press a button, he communicates this to the assistive application. The assistive application determines that the button’s accessibility object supports the press action and sends a request to the button to perform it.
The OS X accessibility protocol defines only a handful of actions that accessibility objects can support. At first, this might seem restrictive, because applications can perform huge numbers of tasks. It’s essential to remember, however, that an assistive application helps a user drive an application’s user interface; it does not simulate the application’s functions. Therefore, an assistive application has no interest in how your application responds to a button press, for example. Its job is to tell the user that the button exists and to tell the application when the user wants to press it. It’s up to your application to respond appropriately to that button press, just as it would if a user used a mouse to click the button.
The Accessibility Object
An accessibility object provides assistive applications with information about the user interface object it represents. This information includes the object’s position in the accessibility hierarchy, its position on the display, details about what it is, and what actions it can perform. In addition, an accessibility object responds to messages sent by assistive applications and sends out notifications that describe changes in its state.
This section describes the accessibility object. It describes the information an accessibility object provides and the actions it can perform and outlines the communication between accessibility objects and assistive applications.
Attributes
An accessibility object has many attributes associated with it. The number and kind of attributes vary depending on the type of user interface object the accessibility object represents. A few attributes are required for every accessibility object, but most are optional.
Attributes have values that assistive applications use to find out about the user interface object. For example, an assistive application gets the value of an accessibility object’s role attribute to find out what type of user interface object it represents.
Some attribute values are settable by assistive applications. An example is the value attribute in an accessibility object that represents an editable text field. When a user types in the text field, an assistive application sets the value of the value attribute to the text the user enters.
If you use standard, noncustom Cocoa objects, most of the attribute values are already in place. There are a few attribute values, however, that you must provide because they contain application-specific information, such as the description of a user interface object’s function.
The AXAttributeConstants.h file in the HIServices framework defines all the accessibility object attributes in the OS X accessibility protocol. The following sections describe some of the most common required attributes, paying particular attention to the attributes for which you must provide values:
The Role and Role Description Attributes
An accessibility object’s most important attribute is its role attribute. This is because the accessibility object’s role determines which other attributes the object contains and tells an assistive application how to handle it. You can think of the role as the accessibility object’s class—it defines a standard set of behaviors and capabilities to which the object conforms. For more information on the attributes associated with specific roles, see “Roles and Associated Attributes.”
The value of the role attribute is a nonlocalized string. An assistive application can programmatically test the value of the role attribute to find out what type of user interface object the accessibility object represents.
In AXRoleConstants.h (in the HIServices framework), OS X defines a set of standard roles that describe the vast majority of user interface object types. Although it may be tempting to define new roles for custom objects in your application, this is not recommended. An assistive application may not know how to handle an accessibility object with an arbitrary role, and additional roles add unnecessary complexity to your code. Instead, you should examine the behavior of your objects and choose the standard role that best represents them.
The role description attribute contains a human-intelligible, localized string that names the accessibility object’s role. An assistive application presents this string to the user (a screen reader application, for example, speaks the string). OS X defines a role description string for each role in
AXRoleConstants.h, so you do not have to provide strings such as “button” or “window”. In the very unlikely event that your application needs to define a new role, however, you are responsible for providing the value of the role description attribute.
The Description Attribute
The description attribute is almost as important as the role attribute. The value of the description attribute is a human-intelligible, localized string that describes what the object does. Because it describes the application-specific function of a user interface object, the accessibility protocol cannot supply the description value. Therefore, it is essential to provide a description for all accessibility objects in your application that do not already include a title attribute (described in “Title Attributes”).
To see why the description attribute is so important, suppose your application window contains a button that left-justifies text and displays a left-pointing arrow. An assistive application can accurately identify this control as a button because it is represented by an accessibility object whose role attribute is “button”. However, unless you provide an appropriate description, the assistive application has no way to know that this button left-justifies text, and therefore no way to communicate this to the user.
Title Attributes
The title attribute is required for user interface objects that include a text title in their display. For example, the title of a button is the text that appears on the button, such as the text “OK” on an OK button, and the title of a window is the text that appears in its title bar.
A user cannot change the title of such an object directly, but the title might change programmatically if the state of the object changes. For example, a Connect button’s title might change to Disconnect after a connection is made, but not because a user chose to change the button’s title. An accessibility object that represents such an object must include the title attribute and the attribute’s value must be the title string.
Many applications display static text that serves as the title for a user interface object, but that is not contained in that object. An example is the word “Search” displayed below a search field or “Address:” displayed next to a set of editable text fields. To a sighted user, the proximity of the string to the object (or objects) it describes is usually enough to imply the relationship between them. To an assistive application, however, these strings are unrelated to the objects they describe (if they are visible to the assistive application at all).
OS X version 10.4 introduced two related attributes that give assistive applications information about such titles. The TitleUIElement and ServesAsTitleForUIElements attributes allow you to define the relationship between a piece of static text and the object (or objects) it describes.
The TitleUIElement attribute belongs in the accessibility object representing the object being described. The value of this attribute is the accessibility object you create to represent the static text. The ServesAsTitleForUIElements attribute belongs in the static text accessibility object you’ve created and its value is an array containing an arbitrary number of accessibility objects. This allows you to link the static text title with any number of user interface objects. Although these attributes are not required, you should provide them if your user interface includes such static text titles.
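The two-way linkage can be sketched like this. The attribute names TitleUIElement and ServesAsTitleForUIElements come from AXAttributeConstants.h; the dictionary-based objects below are invented for illustration and are not the real API:

```python
# Hypothetical sketch of linking a static text title to the objects it
# describes; the dict-based representation is invented for illustration.
static_text = {"role": "staticText", "value": "Address:"}
field1 = {"role": "textField"}
field2 = {"role": "textField"}

# Each described object points back at its static-text title...
field1["TitleUIElement"] = static_text
field2["TitleUIElement"] = static_text

# ...and the static text lists every object it serves as a title for,
# allowing one title to describe any number of user interface objects.
static_text["ServesAsTitleForUIElements"] = [field1, field2]
```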
Parent and Children Attributes
To participate in the accessibility hierarchy, an accessibility object must include links to its immediate ancestor and descendants (if any). This helps an assistive application traverse the hierarchy. In addition, an accessibility object can express other relationships, such as between views that affect each other, but that are not linked by containment.
All accessibility objects, with the exceptions of the application-level and system-wide accessibility objects, include a parent attribute. The value of this attribute is usually the accessibility object representing the closest accessible container of the user interface object.
If a user interface object contains other accessible user interface objects, the UIElement representing it must include the children attribute. The value of this attribute is an array containing the UIElements of the accessible descendants.
Some relationships are conceptual rather than containment-based. For example, it’s not unusual for an application to display two separate views of the same or related content. One example is the OS X Mail application. The Mail application’s upper view can display the subject, sender, and receive date of each message in the selected mail box. In the lower view, Mail can display the body of a message selected in the upper view. To a sighted user, the relationship between the selected message in the upper view and the message content in the lower view is apparent. To an assistive application, on the other hand, this direct relationship doesn’t exist. If an assistive application can’t express such a relationship to a blind user, for example, the user can’t jump back and forth between the related elements the way a sighted user can. Instead, such a user might have to step through all intervening controls and views to move between the message description and its content.
OS X version 10.4 introduced the LinkedUIElements attribute to allow you to define such relationships. As you would expect, the UIElement of each related object should contain this attribute. The value of the attribute is an array of UIElements so you can specify one-to-one and one-to-many relationships.
The Value Attribute
The optional value attribute describes the accessibility object’s state. The value might be the state of a check box or the contents of an editable text field.
The value attribute is often settable. For example, an accessibility object that represents a user-modifiable object, such as an editable text field, has a settable value attribute. This allows an assistive application to set the value of the accessibility object’s value attribute to contain the user’s input. Optionally, accessibility objects may also include attributes that define a range or set of values the object can accept, such as minimum and maximum values for a slider.
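A settable value attribute constrained by a range can be sketched as follows. This is a toy Python model, not the real API; here the sketch clamps out-of-range input, although a real object might instead reject it:

```python
# Toy accessibility object for a slider: a settable value attribute plus
# optional minimum/maximum attributes that constrain it. Not the real API.
class SliderAccessibility:
    def __init__(self, value=0.0, min_value=0.0, max_value=100.0):
        self.min_value = min_value
        self.max_value = max_value
        self.value = value

    def set_value(self, new_value):
        # Keep the value inside the range the object advertises
        # (a design choice; an object could also refuse the request).
        self.value = max(self.min_value, min(self.max_value, new_value))

s = SliderAccessibility()
s.set_value(150)          # clamped to the advertised maximum
assert s.value == 100.0
```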
Actions
Technically, an action is an attribute of an accessibility object. However, the accessibility protocol supports actions differently from the way it supports attributes, so this chapter describes them separately.
An accessibility object can support one or more actions. An action describes how a user interacts with the user interface object the accessibility object represents. It does not describe the application-defined function of the object. This is because an assistive application is concerned with driving the user interface and the results of an action are irrelevant to it. If your application displays a print button, for example, the button’s accessibility object supports a press action, not a print action.
Because actions are generic and refer to the capabilities of user interface objects, there are only a few of them. This means that the set of actions an assistive application has to understand is small and well-defined.
In AXActionConstants.h (located in the HIServices framework), OS X defines seven actions an accessibility object can support:
Press (a button)
Increment (the value of a scroller or slider indicator)
Decrement (the value of a scroller or slider indicator)
Confirm (the selection of an object)
Cancel (a selection or action)
Raise (a window)
ShowMenu (display a contextual menu associated with an object)
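A sketch of how an accessibility object might advertise and perform its supported actions follows. This is a hypothetical Python model (the class and method names are invented); in the real protocol, perform requests arrive as messages:

```python
# Hypothetical sketch: a button's accessibility object advertises Press
# and routes a perform request into the same code a mouse click would run.
STANDARD_ACTIONS = {"Press", "Increment", "Decrement", "Confirm",
                    "Cancel", "Raise", "ShowMenu"}

class ButtonAccessibility:
    def __init__(self, on_press):
        self.on_press = on_press      # the app's normal button handler

    def actions(self):
        return ["Press"]              # a button supports only Press

    def perform(self, action):
        if action not in self.actions():
            raise ValueError(f"unsupported action: {action}")
        self.on_press()               # same code path as a mouse click

clicks = []
button = ButtonAccessibility(on_press=lambda: clicks.append("pressed"))
button.perform("Press")
assert clicks == ["pressed"]
```

Note that the handler does whatever the application normally does on a click; the accessibility object neither knows nor cares what that is.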
When a user performs an action, an assistive application sends a message to the accessibility object, requesting it to perform the action. Your application should invoke the same code to carry out this action as it does when the request comes directly from your user interface.
Each action attribute has a description property. An assistive application may speak this description to tell the user what action is available for a specific object. The value of this description property is similar to the value of the role description attribute in that it is a human-intelligible, localized word or short phrase. Unlike the role description, however, the action description is not automatically supplied by the accessibility protocol. If your application creates its own accessibility objects that support actions, you must supply the appropriate action descriptions.
Communication With Accessibility Objects
At the heart of accessibility is the communication between an assistive application and the accessibility objects that represent your application’s user interface. This communication can be divided into two categories:
Messages sent by an assistive application to get information about an accessibility object and to request the performance of actions. An accessibility object responds to these messages by returning attribute values, performing actions, or changing attribute values.
Notifications triggered by accessibility objects that assistive applications can listen for. These notifications tell an interested assistive application about changes in the state of an accessibility object.
If you use only standard, noncustom Cocoa objects in your application, most of this communication is transparent to you. In some cases, you might have to create a custom response to a message, but this is unlikely. It is even less likely that you will have to handle notifications if you use only standard objects.
Messages
An assistive application communicates with your application by sending messages to accessibility objects. In Cocoa, these messages result in calls to methods of the
NSAccessibility protocol. For more details about the framework-specific implementation of messages, see Accessibility Programming Guidelines for Mac.
In HIObject.h, OS X defines a handful of messages. The following lists the types of messages an assistive application can send to an accessibility object:
Get a list of the accessibility object’s attributes
Get the value of a specific attribute
Check to see if the value of a specific attribute can be set
Set the value of a specific attribute
Get a list of the accessibility object’s actions
Get the description of an action
Tell the accessibility object to perform a specific action
Determine which accessibility object is under the mouse pointer (hit testing)
Register or unregister for notifications
Like the set of actions, the set of messages to which an accessibility object can respond is small. These messages give the assistive application a great deal of control, however. For example, by getting and setting attributes, an assistive application can do things like the following:
Read the value of a slider control
Traverse the object hierarchy to find all the accessibility objects (such as controls, embedded controls, and table cells) within a window
Check to see if a control is enabled
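The attribute-related messages above map naturally onto a small dispatcher. This is a hypothetical Python model with invented names; in Cocoa, these messages arrive as NSAccessibility method calls:

```python
# Hypothetical model of the attribute-related messages an accessibility
# object answers; the class and method names are invented for illustration.
class AccessibilityObject:
    def __init__(self, attributes, settable=()):
        self._attributes = dict(attributes)
        self._settable = set(settable)

    def attribute_names(self):                   # "list your attributes"
        return list(self._attributes)

    def attribute_value(self, name):             # "get this attribute's value"
        return self._attributes[name]

    def is_attribute_settable(self, name):       # "can this value be set?"
        return name in self._settable

    def set_attribute_value(self, name, value):  # "set this value"
        if not self.is_attribute_settable(name):
            raise PermissionError(f"{name} is not settable")
        self._attributes[name] = value

# Reading a slider's value and enabled state, as in the examples above.
slider = AccessibilityObject({"role": "slider", "value": 42,
                              "enabled": True}, settable=["value"])
assert slider.attribute_value("value") == 42
assert slider.attribute_value("enabled")      # check the control is enabled
```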
Notifications
In addition to responding to messages from assistive applications, accessibility objects also broadcast any significant changes that occur in the user interface objects they represent. For example, if the keyboard focus changes to a new text field, a new window becomes active, or a control’s title changes, the accessibility objects for these objects send out notifications.
An assistive application chooses to register for the notifications it is interested in.
An accessibility object can send notifications to indicate any of the following status changes:
The object’s value changed
The object was destroyed
The keyboard focus changed
Unless you are creating accessibility objects to represent custom user interface objects, it is unlikely you will have to write code to send notifications. Cocoa automatically broadcasts the appropriate notifications for standard objects.
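The register-and-broadcast pattern can be sketched with a toy observer model. This is plain Python with invented names, not the real notification machinery:

```python
# Toy observer model of accessibility notifications; the class and
# notification names here are invented for illustration.
class NotificationCenter:
    def __init__(self):
        self._observers = {}     # notification name -> callbacks

    def register(self, name, callback):
        self._observers.setdefault(name, []).append(callback)

    def post(self, name, sender):
        # Broadcast only to assistive applications that registered.
        for callback in self._observers.get(name, []):
            callback(sender)

center = NotificationCenter()
heard = []
center.register("ValueChanged", lambda sender: heard.append(sender))

center.post("ValueChanged", "checkbox")   # observed
center.post("WindowCreated", "window")    # nobody registered; dropped
assert heard == ["checkbox"]
```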
Hit-Testing and Keyboard Focus
To a sighted user, the location of the cursor is easy to discern. Similarly, a sighted user can usually tell which object in the user interface has keyboard focus. An assistive application, on the other hand, must query an application to determine which object has keyboard focus or is under the mouse pointer. An accessibility object provides answers to these queries by returning the values of various attributes.
The basic procedure for implementing hit-testing is for an assistive application to ask the application to return the accessibility object under the cursor. The request is recursively passed down the application’s accessibility hierarchy until it reaches the deepest, unignored accessibility object that contains the mouse pointer.
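The recursive descent can be sketched as follows. This is a hypothetical Python model (frames as x, y, width, height tuples; names invented), and it assumes the hierarchy has already been pruned of ignored objects:

```python
# Hypothetical hit-testing sketch over a toy accessibility hierarchy,
# assumed already pruned of ignored objects. Not the real API.
class Node:
    def __init__(self, role, frame, children=()):
        self.role = role
        self.frame = frame            # (x, y, width, height)
        self.children = list(children)

def contains(frame, point):
    x, y, w, h = frame
    px, py = point
    return x <= px < x + w and y <= py < y + h

def deepest_hit(node, point):
    """Pass the query down the hierarchy to the deepest object under point."""
    for child in node.children:
        if contains(child.frame, point):
            return deepest_hit(child, point)
    return node

button = Node("button", (10, 10, 80, 20))
window = Node("window", (0, 0, 400, 300), [button])
assert deepest_hit(window, (15, 15)).role == "button"
```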
Accessibility objects also must support queries regarding keyboard focus. An accessibility object stores focus information in its focused attribute. The initial query from the assistive application is for the focused attribute of the application-level accessibility object. This query, too, is passed down the application’s accessibility hierarchy until it reaches the deepest, unignored accessibility object whose focused attribute is true.
An Example of Accessibility
This example gives a detailed description of how a fictitious screen reader with speech recognition and speech synthesis capability might communicate with your application:
The user says, “Open Preferences window.”
The screen reader sends a message to the application accessibility object, asking for its menu bar attribute, which is a reference to the menu bar accessibility object. It then queries the menu bar for a list of its children, and queries each child for its title attribute until it finds the one whose title is the application’s name (that is, the application menu). A second iteration lets it find the Preferences menu item within the application menu. Finally, the screen reader tells the Preferences menu item to perform the press action.
The application opens the Preferences window and then the window sends a notification broadcasting that a new window is now visible and active.
The screen reader, assuming that it registered to be notified when a new window opens, queries the window for a list of its attributes. Assuming that the window accessibility object contains a children attribute, it then queries the accessibility object for the value of its children attribute.
To each child of the window accessibility object, the screen reader sends a query asking for a list of its attributes. It then queries the child for the values of its role, role description, and (if it exists) children attributes.
Among the responses, the screen reader learns that the pane contains several children (for example, three checkboxes).
The screen reader queries each checkbox, asking for the values of the following attributes:
role (checkBox in this case)
role description (“checkbox”)
value (checked or unchecked)
children (none in this case)
The screen reader, having learned what objects (controls in this case) are accessible in the window, reports this information to the user using speech synthesis.
The user might then ask for more information about one of the checkboxes.
The screen reader queries the specified checkbox, asking for the value of its help attribute (assuming it exists). It reports this string to the user using speech synthesis.
The user then tells the screen reader to check the checkbox.
The screen reader sends a message requesting that the checkbox’s value attribute be set to the checked state.
The checkbox accessibility object broadcasts that the value of its value attribute has changed.