Automating UI Testing

When you automate tests of UI interactions, you free critical staff and resources for other work. In this way you maximize productivity, minimize procedural errors, and shorten the amount of time needed to develop product updates.

You can use the Automation instrument to automate user interface tests in your iOS app through test scripts that you write. These scripts run outside of your app and simulate user interaction by calling the UI Automation API, a JavaScript programming interface that specifies actions to be performed in your app as it runs in the simulator or on a connected device. Your test scripts return log information to the host computer about the actions performed. You can even integrate the Automation instrument with other instruments to perform sophisticated tests such as tracking down memory leaks and isolating causes of performance problems.

This chapter describes how to use the Automation template in Instruments to execute test scripts. The Automation trace template, which consists of the Automation instrument only, executes a script that simulates UI interaction in an iOS app launched from Instruments.

This chapter also explains how to integrate your scripts with the UI Automation programming interface in order to verify that your app can do the following:

  • Respond correctly to taps, gestures, and text input on its user interface elements

  • Handle alerts, device orientation changes, and multitasking as expected

The Automation instrument provides powerful features, including:

  • A built-in script editor, plus the ability to record manual user interface actions into scripts

  • Logging of test results, messages, and screenshots for analysis

As you work through this chapter, look for more detailed information about each class in UI Automation JavaScript Reference for iOS. For an overview of UI Automation with JavaScript, see JavaScript for Automation Release Notes. For some sample automation projects, see JavaScript for Automation WWDC 2014 Demos.

Writing, Exporting, and Importing Automation Test Scripts

It’s easy to write your own scripts inside Instruments. The built-in script editor in the Automation instrument allows you to create and edit new test scripts in your trace document, as well as import existing ones.

To create a new script
  1. Create a new trace document in Instruments using the Automation trace template.

  2. With the Automation instrument selected in the Instruments pane, click the Display Settings button (middle) in the inspector sidebar.

  3. Click Add > Create.

  4. Double-click New Script to change the name of the script.

  5. In the Detail pane Navigation bar, select Script to enter the code for your script.

  6. Choose a target for your script.

  7. Click the Play button at the bottom of the Automation > Script Detail pane.

After you create a script, you will want to use it throughout the development of your app. You do this by saving your configured trace document (which includes your script) and opening it again whenever you want to test your app. Or, you can export your test script and import it into a new trace document when you need it.

To export a script to a file on a disk
  1. Create a script in a trace document.

  2. Control-click in the content area to display the contextual menu.

  3. Choose Export.

  4. Choose a location for your script in the file system and click Save.

To import a previously saved script
  1. Select the Automation trace template.

  2. Click Add > Import in the Scripts area of the Display Settings inspector.

  3. Navigate to your saved script file and click Open.

Loading Saved Automation Test Scripts

You write your Automation tests in JavaScript, using the UI Automation JavaScript library to specify actions that should be performed in your app as it runs. You can create as many scripts as you like and include them in your trace document, but you can run only one script at a time. The API does, however, offer a #import directive that allows you to write smaller, discrete, reusable test scripts. For example, if you define commonly used functions in a file named TestUtilities.js, you can make those functions available to your test script by including the following line in that script:

#import "<path-to-library-folder>/TestUtilities.js"
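
As a hedged illustration, the imported file might define a small helper that your main script can then call directly. The helper name logAppElementTree below is hypothetical; only the #import mechanism itself comes from the API.

// In <path-to-library-folder>/TestUtilities.js (hypothetical helper file)
function logAppElementTree(message) {
    UIALogger.logMessage(message);
    UIATarget.localTarget().frontMostApp().logElementTree();
}

// In your main test script
#import "<path-to-library-folder>/TestUtilities.js"
logAppElementTree("Element tree at test start");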

Changes you make with the script editor are saved when you save your trace document. For scripts created in the editor, changes are saved as part of the trace document itself. To save those changes in a file you can access on disk, you have to export the script. See To export a script to a file on a disk above.

Recording Manual User Interface Actions into Automation Scripts

A capture feature simplifies script development by allowing you to record actions that you perform on a target iOS device or in iOS Simulator. To use this feature, create an Automation trace document and then capture actions that you perform on the device. These captured actions are incorporated into your script as expressions that you can edit.

To record manual user interface actions
  1. Create or open a trace document containing the Automation instrument.

  2. Click the Display Settings button in the inspector sidebar, if necessary, to display the Scripts display settings.

  3. Select your script from the list.

  4. Click in the script editor pane to position the cursor where you want the captured actions to appear in the script.

  5. Click the Record button under the text editor.

    The target application launches, and the script status is updated to indicate that capturing is in progress.

  6. Perform the desired actions on the device or in the simulator.

  7. Click the Stop button under the text editor to stop capturing actions.

The Automation instrument generates expressions in your script for the actions you perform. Some of these expressions include tokens that contain alternative syntax for the expression. To see the alternative syntax, select the arrow at the right of the token. To select the currently displayed syntax for a token and flatten the expression, double-click the token.

To configure the Automation instrument to automatically start and stop your script under control of the Instruments Record button in the toolbar, select the “Run on Record” checkbox.

If your app crashes or goes to the background, your script is blocked until the app is frontmost again, at which time the script continues to run.

Accessing and Manipulating UI Elements

The Accessibility-based mechanism underlying the UI Automation feature represents every control in your app as a uniquely identifiable element. To perform an action on an element in your app, you explicitly identify that element in terms of the app’s element hierarchy. To fully understand this section, you should be familiar with the information in iOS Human Interface Guidelines.

To illustrate the element hierarchy, this section refers to the Recipes iOS app shown in Figure 11-1, which is available as the code sample iPhoneCoreDataRecipes from the iOS Dev Center.

Figure 11-1  The Recipes app (Recipes screen)

UI Element Accessibility

Each accessible element inherits from the base class, UIAElement. Every element can contain zero or more other elements.

As detailed below, your script can access individual elements by their position within the element hierarchy. However, you can assign a unique name to each element by setting the label attribute and making sure Accessibility is selected in Interface Builder for the control represented by that element, as shown in Figure 11-2.

Figure 11-2  Setting the accessibility label in Interface Builder

UI Automation uses the accessibility label (if it’s set) to derive a name property for each element. Aside from the obvious benefits, using such names can greatly simplify development and maintenance of your test scripts.

The name property is one of four element properties that are especially useful in your test scripts; a short usage sketch follows the list below.

  • name. Derived from the accessibility label

  • value. The current value of the control, for example, the text in a text field

  • elements. Any child elements contained within the current element, for example, the cells in a table view

  • parent. The element that contains the current element
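
The following brief sketch logs each of these properties for the first recipe cell, using the element-path syntax explained in the next section. It assumes the Recipes app is frontmost; the exact output will vary with your data, and some properties may be null.

// Log the four commonly used properties for the first recipe cell
var cell = UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0];
UIALogger.logMessage("name: " + cell.name());                 // derived from the accessibility label
UIALogger.logMessage("value: " + cell.value());               // current value of the control
UIALogger.logMessage("children: " + cell.elements().length);  // child elements contained in the cell
UIALogger.logMessage("parent: " + cell.parent().name());      // the containing table view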

Understanding the Element Hierarchy

At the top of the element hierarchy is the UIATarget class, which represents the high-level user interface elements of the system under test (SUT)—that is, the device (or simulator) as well as the iOS and your app running on that device. For the purposes of your test, your app is the frontmost app (or target app), identified as follows:

UIATarget.localTarget().frontMostApp();

To reach the app window, the main window of your app, you would specify

UIATarget.localTarget().frontMostApp().mainWindow();

At startup, the Recipes app window appears as shown in Figure 11-1.

Inside the window, the recipe list is presented in its own view, in this case a table view (see Figure 11-3).

Figure 11-3  Recipes table view

This is the first table view in the app’s array of table views, so you specify it as such using the zero index ([0]), as follows:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0];

Inside the table view, each recipe is represented by a distinct individual cell. You can specify individual cells in similar fashion. For example, using the zero index ([0]), you can specify the first cell as follows:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0];

Each of these individual cell elements is designed to contain a recipe record as a custom child element. In this first cell is the record for the Chocolate Cake recipe, which you can access by name with this line of code:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0].elements()["Chocolate Cake"];

Displaying the Element Hierarchy

You can use the logElementTree method for any element to list all of its child elements. The following code illustrates listing the elements for the main (Recipes) screen (or mode) of the Recipes app.

// List element hierarchy for the Recipes screen
UIALogger.logStart("Logging element tree ...");
UIATarget.localTarget().logElementTree();
UIALogger.logPass();

The output of the command is captured in the log displayed by the Automation instrument, as in Figure 11-4.

Figure 11-4  Output from the logElementTree method

Note the indentation of each element line item, indicating that element’s level in the hierarchy. These levels may be viewed conceptually, as in Figure 11-5.

Figure 11-5  Element hierarchy (Recipes screen)

Although a screen is not technically an iOS programmatic construct and doesn’t explicitly appear in the hierarchy, it is a helpful concept in understanding that hierarchy. Tapping the Unit Conversion tab in the tab bar displays the Unit Conversion screen (or mode), shown in Figure 11-6.

Figure 11-6  Recipes app (Unit Conversion screen)

The following code taps the Unit Conversion tab in the tab bar to display the associated screen and then logs the element hierarchy associated with it:

// List element hierarchy for the Unit Conversion screen
var target = UIATarget.localTarget();
var appWindow = target.frontMostApp().mainWindow();
var element = target;
appWindow.tabBar().buttons()["Unit Conversion"].tap();
UIALogger.logStart("Logging element tree …");
element.logElementTree();
UIALogger.logPass();

The resulting log reveals the hierarchy to be as illustrated in Figure 11-7. Just as with the previous example, logElementTree is called for the target, but the results are for the current screen—in this case, the Unit Conversion screen.

Figure 11-7  Element hierarchy (Unit Conversion screen)

Simplifying Element Hierarchy Navigation

The previous code sample introduces the use of variables to represent parts of the element hierarchy. This technique allows for shorter, simpler commands in your scripts.

Using variables in this way also allows for some abstraction, yielding flexibility in code use and reuse. The following example uses a variable (destinationScreen) to control changing between the two main screens (Recipes and Unit Conversion) of the Recipes app:

// Switch screen (mode) based on value of variable
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var tabBar = app.mainWindow().tabBar();
var destinationScreen = "Recipes";
if (tabBar.selectedButton().name() != destinationScreen) {
    tabBar.buttons()[destinationScreen].tap();
}

With minor variations, this code could work, for example, for a tab bar with more tabs or with tabs of different names.
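
One way to make such reuse concrete is to wrap the pattern in a small helper function. In the following sketch the function name switchToScreen is hypothetical; the tab names come from the Recipes app.

// Hypothetical helper: tap a named tab only if it isn't already selected
function switchToScreen(destinationScreen) {
    var tabBar = UIATarget.localTarget().frontMostApp().mainWindow().tabBar();
    if (tabBar.selectedButton().name() != destinationScreen) {
        tabBar.buttons()[destinationScreen].tap();
    }
}

switchToScreen("Unit Conversion");
switchToScreen("Recipes");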

Performing User Interface Gestures

Once you understand how to access the desired element, manipulating it is relatively straightforward.

The UI Automation API provides methods to perform most UIKit user actions, including multi-touch gestures. For comprehensive detailed information about these methods, see UI Automation JavaScript Reference for iOS.

Tapping. Perhaps the most common touch gesture is a simple tap. Implementing a one-finger single tap on a known UI element is very simple. For example, tapping the right button, labeled with a plus sign (+), in the navigation bar of the Recipes app, displays a new screen used to add a new recipe.

This command is all that’s required to tap that button:

UIATarget.localTarget().frontMostApp().navigationBar().buttons()["Add"].tap();

Note that it uses the name Add to identify the button, presuming that the accessibility label has been set appropriately, as described above.

Of course, more complicated tap gestures are required to thoroughly test any sophisticated app. You can specify any standard tap gestures. For example, to tap once at an arbitrary location on the screen, you just need to provide the screen coordinates:

UIATarget.localTarget().tap({x:100, y:200});

This command taps at the x and y coordinates specified, regardless of what's at that location on the screen.

More complex taps are also available. To double-tap the same location, you could use this code:

UIATarget.localTarget().doubleTap({x:100, y:200});

And to perform a two-finger tap to test zooming in and out, for example, you could use this code:

UIATarget.localTarget().twoFingerTap({x:100, y:200});

Pinching. A pinch open gesture is typically used to zoom in or expand an object on the screen, and a pinch close gesture is used for the opposite effect—to zoom out or shrink an object on the screen. You specify the coordinates to define the start of the pinch close gesture or end of the pinch open gesture, followed by a number of seconds for the duration of the gesture. The duration parameter allows you some flexibility in specifying the speed of the pinch action.

UIATarget.localTarget().pinchOpenFromToForDuration({x:20, y:200}, {x:300, y:200}, 2);
UIATarget.localTarget().pinchCloseFromToForDuration({x:20, y:200}, {x:300, y:200}, 2);

Dragging and flicking. If you need to scroll through a table or move an element on screen, you can use the dragFromToForDuration method. You provide coordinates for the starting location and ending location, as well as a duration, in seconds. The following example specifies a drag gesture from location 160, 200 to location 160, 400, over a period of 1 second:

UIATarget.localTarget().dragFromToForDuration({x:160, y:200}, {x:160, y:400}, 1);

A flick gesture is similar, but it is presumed to be a fast action, so it doesn’t require a duration parameter.

UIATarget.localTarget().flickFromTo({x:160, y:200}, {x:160, y:400});

Entering text. Your script will likely need to test that your app handles text input correctly. To do so, it can enter text into a text field by simply specifying the target text field and setting its value with the setValue method. The following example uses a local variable to provide a long string as a test case for the first text field (index [0]) in the current screen:

var recipeName = "Unusually Long Name for a Recipe";
UIATarget.localTarget().frontMostApp().mainWindow().textFields()[0].setValue(recipeName);

Navigating in your app with tabs. To test navigating between screens in your app, you’ll very likely need to tap a tab in a tab bar. Tapping a tab is much like tapping a button; you access the appropriate tab bar, specify the desired button, and tap that button, as shown in the following example:

var tabBar = UIATarget.localTarget().frontMostApp().mainWindow().tabBar();
var selectedTabName = tabBar.selectedButton().name();
if (selectedTabName != "Unit Conversion")  {
    tabBar.buttons()["Unit Conversion"].tap();
}

First, a local variable is declared to represent the tab bar. Using that variable, the script determines which tab is selected and gets its name. Finally, if the name of the selected tab does not match the name of the desired tab (in this case, “Unit Conversion”), the script taps the desired tab.

Scrolling to an element. Scrolling is a large part of a user’s interaction with many apps. UI Automation provides a variety of methods for scrolling. The basic methods allow for scrolling to the next element left, right, up, or down. More sophisticated methods support greater flexibility and specificity in scrolling actions. One such method is scrollToElementWithPredicate, which allows you to scroll to an element that meets certain criteria that you specify. This example accesses the appropriate table view through the element hierarchy and scrolls to a recipe in that table view whose name starts with “Turtle Pie.”

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0] \
    .scrollToElementWithPredicate("name beginswith 'Turtle Pie'");

Using the scrollToElementWithPredicate method allows scrolling to an element whose exact name may not be known.

Using predicate functionality can significantly expand the capability and applicability of your scripts. For more information on using predicates, see Predicate Programming Guide.

Other useful methods for flexibility in scrolling include scrollToElementWithName and scrollToElementWithValueForKey. See UIAScrollView Class Reference for more information.
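
For example, a minimal sketch that scrolls to a recipe whose exact name is known (here, the Chocolate Cake cell used earlier) might look like the following; scrolling by name assumes the accessibility label is set as described in the next section.

// Scroll the recipe table until the named cell is visible
UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].scrollToElementWithName("Chocolate Cake");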

Accessibility Label and Identifier Attributes

The label attribute and identifier attribute figure prominently in your script’s ability to access UI elements, so it’s a good idea to understand how they are used.

Setting a meaningful value for the label attribute is optional but recommended. You can set and view the label string in the Label text field in the Accessibility section of the Identity inspector in Interface Builder. This label is expected to be descriptive but short, partly because assistive technologies such as Apple’s VoiceOver use it as the name of the associated UI element. In UI Automation, this label is returned by the label method. It is also returned by the name method as a default if the identifier attribute is not set. For an overview of accessibility labels, see Tic Tac Toe: Creating Accessible Apps with Custom UI and Accessibility Programming Guide for iOS. For reference details, see UIAccessibilityElement Class Reference.

The identifier attribute allows you to use more descriptive names for elements. It is optional, but it must be set for the script to perform either of these two operations:

In UI Automation, the name method returns the value of this identifier attribute, if one is set. If it is not set, the name method returns the value of the label attribute.

Currently, you can set a value for the identifier attribute only programmatically, through the accessibilityIdentifier property. For details, see UIAccessibilityIdentification Protocol Reference.
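
To check which value the name method returns for a given control, one simple approach is to log the label and the name side by side. The following sketch assumes the Add button from the earlier Recipes examples.

// Compare the accessibility label with the name UI Automation derives from it
var addButton = UIATarget.localTarget().frontMostApp().navigationBar().buttons()["Add"];
UIALogger.logMessage("label: " + addButton.label() + ", name: " + addButton.name());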

Adding Timing Flexibility with Timeout Periods

While executing a test script, an attempt to access an element can fail for a variety of reasons. For example, an action could fail if the target element exists but the screen containing it has not finished drawing, or if the app is not yet ready to accept user interaction.

In situations like these, your script may need to wait for some action to complete before proceeding. In the Recipes app, for example, the user taps the Recipes tab to return from the Unit Conversion screen to the Recipes screen. However, UI Automation may detect the existence of the Add button, enabling the test script to attempt to tap it—before the button is actually drawn and the app is actually ready to accept that tap. An accurate test must ensure that the Recipes screen is completely drawn and that the app is ready to accept user interaction with the controls within that screen before proceeding.

To provide some flexibility in such cases and to give you finer control over timing, UI Automation provides for a timeout period, a period during which it repeatedly attempts to perform the specified action before failing. If the action completes during the timeout period, that line of code returns, and your script can proceed. If the action doesn’t complete during the timeout period, an exception is thrown and UI Automation returns a UIAElementNil object. A UIAElementNil object is always considered invalid.

The default timeout period is five seconds, but your script can change that at any time. For example, you might decrease the timeout period if you want to test whether an element exists but don’t need to wait if it isn’t. On the other hand, you might increase the timeout period when the script must access an element but the user interface is slow to update. The UIATarget class provides methods for manipulating the timeout period, including pushTimeout and popTimeout, which are shown below.

To make this feature as easy as possible to use, UI Automation uses a stack model. You push a custom timeout period to the top of the stack, as with the following code that shortens the timeout period to two seconds.

UIATarget.localTarget().pushTimeout(2);

You then run the code to perform the action and pop the custom timeout off the stack.

UIATarget.localTarget().popTimeout();

Using this approach, you end up with a robust script that waits a reasonable amount of time for each action to complete.
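
Putting the two calls together, a typical pattern brackets the element access with a push and a pop. This is a minimal sketch; the Add button name is taken from the earlier Recipes examples.

// Wait up to 2 seconds for the Add button before giving up
var target = UIATarget.localTarget();
target.pushTimeout(2);
var addButton = target.frontMostApp().navigationBar().buttons()["Add"];
if (addButton.isValid()) {
    addButton.tap();
}
target.popTimeout();  // restore the previous timeout period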

For more details, see UIATarget Class Reference.

Logging Test Results and Data

Your script reports log information to the Automation instrument, which gathers it and reports it back for analysis.

When writing your tests, you should log as much information as you can, if only to help you diagnose any failures that occur. At a bare minimum, you should log when each test begins and ends, identifying the test performed and recording pass/fail status. This kind of minimal logging is almost automatic in UI Automation. You simply call logStart with the name of your test, run your test, and then call logPass or logFail as appropriate, as shown in the following example:

var testName = "Module 001 Test";
UIALogger.logStart(testName);
//some test code
UIALogger.logPass(testName);

But it’s a good practice to log what transpires whenever your script interacts with a control. Whether you’re validating that parts of your app perform properly or you’re still tracking down bugs, it’s hard to imagine having too much log information to analyze. To this end, you can log just about any occurrence using logMessage, and you can even supplement the textual data with screenshots.

The following code example expands the logging of the previous example to include a free-form log message and a screenshot:

var testName = "Module 001 Test";
UIALogger.logStart(testName);
//some test code
UIALogger.logMessage("Starting Module 001 branch 2, validating input.");
//capture a screenshot with a specified name
UIATarget.localTarget().captureScreenWithName("SS001-2_AddedIngredient");
//more test code
UIALogger.logPass(testName);

The screenshot requested in the example would be saved back to Instruments and appear in the Editor Log in the detail pane with the specified filename (SS001-2_AddedIngredient.png, in this case).

Using Screenshots

Your script can capture screenshots using the captureScreenWithName and captureRectWithName methods in the UIATarget class. To ensure easy access to those screenshots, open the Logging section at the left of the template, select the Continuously Log Results option, and use the Choose Location pop-up menu to specify a folder for the log results. Each captured screenshot is stored in the results folder with the name specified by your script.
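
In addition to full-screen captures, the captureRectWithName method takes a rectangle so your script can capture only part of the screen. The following is a hedged sketch; the coordinates and image names are arbitrary.

var target = UIATarget.localTarget();
// Capture the entire screen
target.captureScreenWithName("SS002_FullScreen");
// Capture a 300 x 100 region at the top-left corner of the screen
target.captureRectWithName({origin:{x:0, y:0}, size:{width:300, height:100}}, "SS002_TopRegion");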

Verifying Test Results

The crux of testing is being able to verify that each test has been performed and that it has either passed or failed. This code example runs the test testName to determine whether a recipe cell whose name begins with “Tarte” exists in the recipe table view. First, a local variable is used to specify the cell criteria:

var cell = UIATarget.localTarget().frontMostApp().mainWindow() \
    .tableViews()[0].cells().firstWithPredicate("name beginswith 'Tarte'");

Next, the script uses the isValid method to test whether a valid element matching those criteria exists in the recipe table view.

if (cell.isValid()) {
    UIALogger.logPass(testName);
}
else {
    UIALogger.logFail(testName);
}

If a valid cell is found, the code logs a pass message for the testName test; if not, it logs a failure message.

Notice that this test specifies firstWithPredicate and "name beginswith 'Tarte'". These criteria yield a reference to the cell for “Tarte aux Fraises,” which works for the default data in the Recipes sample app. If, however, a user adds a recipe for “Tarte aux Framboises,” this example may or may not give the desired results.

Handling Alerts

In addition to verifying that your app’s alerts perform properly, your test should accommodate alerts that appear unexpectedly from outside your app. For example, it’s not unusual to get a text message while checking the weather or playing a game.

Handling Externally Generated Alerts

Although it may seem somewhat paradoxical, your app and your tests should expect that unexpected alerts can occur whenever your app is running. Fortunately, UI Automation includes a default alert handler that makes external alerts easy for your script to cope with. Your script provides an alert handler function called onAlert, which is called when the alert occurs; the handler can take any appropriate action and then simply return the alert to the default handler for dismissal.

The following code example illustrates a very simple alert case:

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    // return false to use the default handler
    return false;
}

All this handler does is log a message that this type of alert was received and then return false. Returning false directs the UI Automation default alert handler to just dismiss the alert. In the case of an alert for a received text message, for example, UI Automation clicks the Close button.

Handling Internally Generated Alerts

As part of your app, you will have alerts that need to be handled. In those instances, your alert handler needs to perform the appropriate response and return true to the default handler, indicating that the alert has been handled.

The following code example expands slightly on the basic alert handler. After logging the alert type, it tests whether the alert is the specific one that’s anticipated. If so, it taps the Continue button, which is known to exist, and returns true to skip the default dismissal action.

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    if (title == "The Alert We Expected") {
        alert.buttons()["Continue"].tap();
        return true;  //alert handled, so bypass the default handler
    }
    // return false to use the default handler
    return false;
}

This basic alert handler can be generalized to respond to just about any alert received, while allowing your script to continue running.
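
As a hedged sketch of that generalization, the handler below checks the alert title against a list of expected titles (the titles and the Continue button shown here are hypothetical) and falls back to the default dismissal for anything else.

// Hypothetical list of alert titles the test expects to handle itself
var expectedAlerts = ["The Alert We Expected", "Another Expected Alert"];

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    if (expectedAlerts.indexOf(title) != -1) {
        alert.buttons()["Continue"].tap();
        return true;   // handled here, so bypass the default handler
    }
    return false;      // let the default handler dismiss the alert
}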

Detecting and Specifying Device Orientation

A well-behaved iOS app is expected to handle changes in device orientation gracefully, so your script should anticipate and test for such changes.

UI Automation provides setDeviceOrientation to simulate a change in the device orientation. This method uses the constants listed in Table 11-1.

Table 11-1  Device orientation constants

  • UIA_DEVICE_ORIENTATION_UNKNOWN. The orientation of the device cannot be determined.

  • UIA_DEVICE_ORIENTATION_PORTRAIT. The device is in portrait mode, with the device upright and the home button at the bottom.

  • UIA_DEVICE_ORIENTATION_PORTRAIT_UPSIDEDOWN. The device is in portrait mode but upside down, with the device upright and the home button at the top.

  • UIA_DEVICE_ORIENTATION_LANDSCAPELEFT. The device is in landscape mode, with the device upright and the home button on the right side.

  • UIA_DEVICE_ORIENTATION_LANDSCAPERIGHT. The device is in landscape mode, with the device upright and the home button on the left side.

  • UIA_DEVICE_ORIENTATION_FACEUP. The device is parallel to the ground with the screen facing upward.

  • UIA_DEVICE_ORIENTATION_FACEDOWN. The device is parallel to the ground with the screen facing downward.

In contrast to device orientation is interface orientation, which represents the rotation required to keep your app's interface oriented properly upon device rotation. Note that in landscape mode, device orientation and interface orientation are opposite because rotating the device requires rotating the content in the opposite direction.

UI Automation provides the interfaceOrientation method to get the current interface orientation. This method uses the constants listed in Table 11-2.

Table 11-2  Interface orientation constants

  • UIA_INTERFACE_ORIENTATION_PORTRAIT. The interface is in portrait mode, with the bottom closest to the home button.

  • UIA_INTERFACE_ORIENTATION_PORTRAIT_UPSIDEDOWN. The interface is in portrait mode but upside down, with the top closest to the home button.

  • UIA_INTERFACE_ORIENTATION_LANDSCAPELEFT. The interface is in landscape mode, with the left side closest to the home button.

  • UIA_INTERFACE_ORIENTATION_LANDSCAPERIGHT. The interface is in landscape mode, with the right side closest to the home button.

The following example changes the device orientation (in this case, to landscape left), then changes it back (to portrait):

var target = UIATarget.localTarget();
var app = target.frontMostApp();
//set orientation to landscape left
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_LANDSCAPELEFT);
UIALogger.logMessage("Current orientation now " + app.interfaceOrientation());
//reset orientation to portrait
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_PORTRAIT);
UIALogger.logMessage("Current orientation now " + app.interfaceOrientation());

Of course, once you've rotated, you do need to rotate back again.

When performing a test that involves changing the orientation of the device, it is a good practice to set the rotation at the beginning of the test, then set it back to the original rotation at the end of your test. This practice ensures that your script is always back in a known state.

You may have noticed the orientation logging in the example. Such logging provides additional assurance that your tests—and your testers—don’t become disoriented.

Testing for Multitasking

When a user exits your app by tapping the Home button or causing some other app to come to the foreground, your app is suspended. To simulate this occurrence, UI Automation provides the deactivateAppForDuration method. You just call this method, specifying a duration, in seconds, for which your app is to be suspended, as illustrated by the following example:

UIATarget.localTarget().deactivateAppForDuration(10);

This single line of code causes the app to be deactivated for 10 seconds, just as though a user had exited the app and returned to it 10 seconds later.
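
A minimal sketch of a multitasking check follows: it deactivates the app and then verifies that the recipe table view is still valid once the app returns to the foreground. The test name here is arbitrary.

var target = UIATarget.localTarget();
var testName = "Multitasking Test";
UIALogger.logStart(testName);
target.deactivateAppForDuration(10);  // suspend the app for 10 seconds
// After reactivation, confirm the recipe table view is still present
if (target.frontMostApp().mainWindow().tableViews()[0].isValid()) {
    UIALogger.logPass(testName);
}
else {
    UIALogger.logFail(testName);
}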

Running a Test Script from an Xcode Project

You can easily automate running your test script by creating a custom Automation instrument template.

Creating a Custom Automation Instrument Template

To create a custom Automation instrument template:

  1. Launch the Instruments app.

  2. Choose the Automation template to create a trace document.

  3. Choose View > Detail, if necessary, to display the detail view.

  4. Select your script from the list.

  5. Edit your script as needed in the Script area of the detail pane.

  6. Choose File > Save as Template, name the template, and save it to the default Instruments template location:

    ~/Library/Application Support/Instruments/Templates/

Executing an Automation Instrument Script in Xcode

After you have created your customized Automation template, you can execute your test script from Xcode by following these steps:

  1. Open your project in Xcode.

  2. From the Scheme pop-up menu (in the workspace window toolbar), select Edit Scheme for a scheme with which you would like to use your script.

  3. Select Profile from the left column of the scheme editing dialog.

  4. Choose your application from the Executable pop-up menu.

  5. Choose your customized Automation Instrument template from the Instrument pop-up menu.

  6. Click OK to approve your changes and dismiss the scheme editor dialog.

  7. Choose Product > Profile.

    Instruments launches and executes your test script.

Executing an Automation Instrument Script from the Command Line

You can also execute your test script from the command line. If you have created a customized Automation template as described in Creating a Custom Automation Instrument Template, you can use the following simple command:

instruments -w deviceID -t templateFilePath targetAppName

deviceID

The 40-character device identifier, available in the Xcode Devices organizer and in iTunes.

Note: Omit the device identifier option (-w deviceID in this example) to target the Simulator instead of a device.

templateFilePath

The full pathname of your customized Automation template. By default, this is ~/Library/Application Support/Instruments/Templates/templateName, where templateName is the name you gave the template when you saved it.

targetAppName

The local name of the application. When targeting a device, omit the pathname and .app extension. When targeting a simulator, use the full pathname.

You can use the default trace template if you don’t want to create a custom one. To do so, you use the environment variables UIASCRIPT and UIARESULTSPATH to identify the script and the results directory.

instruments -w deviceID -t defaultTemplateFilePath targetAppName \
   -e UIASCRIPT scriptFilePath -e UIARESULTSPATH resultsFolderPath
defaultTemplateFilePath

The full pathname of the default template:

/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate
scriptFilePath

The file-system location of your test script.

resultsFolderPath

The file-system location of the directory to hold the results of your test script.
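
For example, a complete invocation using the default template might look like the following. The device identifier, script path, and results folder shown here are purely hypothetical placeholders; substitute your own values.

instruments -w 0123456789abcdef0123456789abcdef01234567 \
   -t /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate \
   Recipes -e UIASCRIPT ~/Documents/RecipesTests.js -e UIARESULTSPATH ~/Documents/RecipesTestResults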