Put your test failures to work: Learn how to triage and diagnose uncaught issues in your app using the latest testing APIs in Xcode. We'll show you how to streamline your testing workflow and put failures into context to help you deliver the best quality product.
For more information on designing your tests to improve triaging, see “Write tests to fail.”
And check out the latest improvements to Xcode's testing workflow by watching “Get your test results faster”, “Handle interruptions and alerts in UI tests”, and “XCTSkip your tests.”
Hi, my name is Wil, and I work on testing and automation in Xcode. In this session, we're going to learn about new APIs and other improvements to how you investigate test failures in your projects. Investigation of test failures is the single most critical piece of maintaining any active test suite. Difficult-to-diagnose failures cost you time, often too much time at the wrong point in your release schedule. They can even lead to bugs shipping in products. With any growing project, code changes will sometimes cause tests to fail, either locally or in continuous integration, and when they do fail you'll be considering the following questions: What failed?
How did it fail? Why? And, perhaps most of all, where in my source code did the failure happen? In Xcode 12, we've added new APIs and enhanced the UI for test failure reporting to make answering these questions more efficient.
It's worth noting the answers to these questions are so important that there's a whole session dedicated to coding patterns you can use to further improve this process. For more about that, check out "Write Tests to Fail." We've organized the content for this session into four sections: Swift errors in tests, rich failure objects, call stacks for test failures, and advanced workflows.
We'll explore each of these topics in turn. But first, let's take a look at how failures are presented in Xcode 12. I have here a little project called PlayGarden that I've been working on with my 3-year-old daughter. PlayGarden helps us keep track of all the plants, toys, and furniture in our backyard. Now, even at age 3, my daughter has fully embraced test-driven development, so it's not surprising we have tests for all the view classes representing elements in our garden. We noticed recently there was a lot of duplicated code in these tests, so we refactored it into some utilities.
I'm going to run one of these tests now. I've introduced an artificial failure so we can see how that's presented in Xcode 12. Right away, you might have noticed that the test is marked as failing, but the only annotation we can see is gray. This tells us the failure happened in a call underneath the annotated line, but not at that line itself. We can explore this further by switching to the Issue navigator. The Issue navigator shows the test failure here, but it shows more than that: underneath the failure is the call stack in your test code. If I click on a frame, the source editor takes me to that location. Here the annotation is red, because this is the actual point of failure. Now, if I move through the rest of the frames, the Issue navigator and the source editor take me on a tour of my code, working back from the failure to the point in the test where it was triggered.
This helps me quickly understand all the context around the test failure, cutting down on the time needed to fix the issue. There's another way we can explore this data. Let's switch to the test report. This is a great way to investigate test failures, particularly if you're working with the result bundle from a continuous integration system. In the report for our most recent test run, we have the failing test here in red. If we expand that, we see the failure message along with a file and line where it was recorded.
But let's drill down a little further. Now we can see the same call stack we saw in the Issue navigator, giving us another way to explore our code.
You'll notice as I hover over a frame in the call stack, two buttons appear to the right of the frame. The first of these is the jump button, which navigates to the source code location. I'll go back to the report so that we can explore the second button. New in Xcode 12, the assistant button opens a secondary editor next to the test report which shows the referenced source location. This lets us view the test report and the source code side by side, and we can explore the failure call stack in the same way that we did with the Issue navigator. So that's our look at how test failures are presented in Xcode 12. Now I'd like to talk about using Swift errors in your tests. One of the ways XCTest supports idiomatic coding patterns in Swift is by making it possible for test functions to throw.
When a test does throw, the error is used to formulate the failure message.
This means that instead of having boilerplate for handling errors like this, your tests can be written like this: much cleaner. But until recently, these failures could not provide the source code location (file and line) that was traditionally part of test failures recorded by XCTAssert. Because of this limitation, some developers still use error handling boilerplate.
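The contrast between the two styles looks something like this; `Document` is a hypothetical type standing in for whatever your test exercises:

```swift
// Boilerplate style: catch the error and fail manually.
func testOpensDocument_boilerplate() {
    do {
        let document = try Document(named: "Garden")
        XCTAssertEqual(document.pages.count, 3)
    } catch {
        XCTFail("Unexpected error: \(error)")
    }
}

// Throwing style: the thrown error becomes the failure message, and on
// recent OS versions its source location is reported automatically.
func testOpensDocument() throws {
    let document = try Document(named: "Garden")
    XCTAssertEqual(document.pages.count, 3)
}
```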
Happily, improvements to the Swift runtime in iOS and tvOS 13.4 and macOS 10.15.4 made it possible for XCTest to begin reporting the source code locations for thrown errors in tests.
This means that you get great context for these errors without any extra handling code. Along with the source code location improvements, we added APIs so the same level of convenience is available in your test's setUp and tearDown. These new APIs are throwing variants of the original setUp and tearDown methods shown here. The new setUpWithError will run before the original setUp method, and the new tearDownWithError will be called after the original tearDown method. You'll see both of these methods in the templates provided for new test files. It's possible to use both variants of these APIs in the same test case, but generally we encourage you to switch your tests over to the new methods, unless you need to preserve the old methods because of inheritance. Now I'd like to switch gears and talk about rich failure objects.
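In a test class, the throwing variants look something like this; `PlantStore` is a hypothetical fixture type used only for illustration:

```swift
class PlayGardenTests: XCTestCase {
    var store: PlantStore!

    // Runs before each test (and before the legacy setUp()); a thrown
    // error fails the test with full source-location reporting.
    override func setUpWithError() throws {
        store = try PlantStore()
    }

    // Runs after each test (and after the legacy tearDown()).
    override func tearDownWithError() throws {
        try store.flush()
        store = nil
    }
}
```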
XCTest has always recorded failures as four discrete values: the failure message, the file path and line number where the failure was recorded, and a flag indicating whether the failure was "expected." Expected failures are those recorded by XCTAssert; "unexpected" failures generally indicate that XCTest has caught an unhandled exception thrown by the test code.
These values were passed by XCTAssert into the recordFailure API, which ensures that failures are logged and routed to Xcode for display.
In Xcode 12, these disparate values have been encapsulated in a new object, XCTIssue. In addition, there are new kinds of failure data: an explicit type enumeration, a detailed description, an underlying error, and attachments.
XCTAttachment is an API for capturing arbitrary data with test runs.
Attachments can either be added to the test itself or to an activity created by XCTContext. Attachments can also be added to XCTIssue, making it possible to associate custom diagnostics with your test failures. XCTestCase has new API for recording test failures. This API, record(issue:), is used by all XCTAsserts and can be invoked directly or even overridden.
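For instance, an attachment can be added to the running test itself or inside a named activity; a rough sketch:

```swift
func testGardenSnapshot() throws {
    // Attach to the test itself.
    let note = XCTAttachment(string: "renderer v2")
    note.lifetime = .keepAlways        // keep the attachment even if the test passes
    add(note)

    // Or attach within a named activity via XCTContext.
    XCTContext.runActivity(named: "Render garden") { activity in
        activity.add(XCTAttachment(string: "render log"))
    }
}
```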
The recordFailure API that we showed a few slides back has been deprecated.
If you're calling recordFailure directly, or overriding it to customize failure recording, we encourage you to update to record(issue:) at your earliest convenience. When using record(issue:), you may need to know how to modify XCTIssues.
In Swift, issues declared with 'let' are immutable, while declaring with 'var' allows you to modify them. In Objective-C, XCTIssue has a mutable subclass and also conforms to NSMutableCopying. XCTIssue enhances failure triage in many ways, but its call stacks may be the most transformative.
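A minimal sketch of the Swift mutability rule, since XCTIssue is a value type in Swift:

```swift
let fixed = XCTIssue(type: .assertionFailure, compactDescription: "nope")
// fixed.add(...) would not compile: 'let' makes the issue immutable.

var issue = fixed                            // a 'var' copy can be modified
issue.add(XCTAttachment(string: "details"))  // attach extra diagnostics
```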
At the beginning of this session, I suggested that one of the most important questions to answer about a test failure is "where?" This is why the core test failure data has always included a file path and line number, captured at build time using compiler tokens like #filePath. A single source code location is great for simple tests, but isn't as useful when test code is factored into functions shared by more than one test. Consider this example.
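To make the situation concrete, here's a minimal sketch of the pattern under discussion, with the location-forwarding mitigation alongside it; `GardenElement` and `makePlant()` are hypothetical:

```swift
class GardenElementTests: XCTestCase {
    // Shared helper: when an assertion here fails, the failure's file
    // and line point at the helper, not at the calling test.
    func assertValid(_ element: GardenElement) {
        XCTAssertFalse(element.name.isEmpty)
    }

    // Mitigation: capture the caller's location via default arguments
    // and forward it to the assertion.
    func assertValidForwarding(_ element: GardenElement,
                               file: StaticString = #filePath,
                               line: UInt = #line) {
        XCTAssertFalse(element.name.isEmpty, "Element has no name",
                       file: file, line: line)
    }

    func testPlantIsValid() {
        assertValid(makePlant())  // gray annotation here; red one in the helper
    }
}
```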
Here are two tests, both calling out to the same shared function. When there's a failure, the annotation appears next to the assertion. The test method itself becomes confusing because it's marked as failing, but has no further information to help the developer answer the "where" question. This can be mitigated if the helper function captures the source location where it was invoked and explicitly uses that in its XCTAssert calls. That improves the presentation in the test method. But if the helper has more than one assertion, then the ambiguity has simply been shifted. Fortunately, XCTIssue captures and symbolicates call stacks. This means that there's considerably more context for failures in complex test code. Here's how that same failure looks when we capture the call stack, very much like what we saw in the demo earlier.
Answering the "where" question is a totally different experience. A gray annotation in the test method indicates a line under which the failure occurred, and a red annotation in the helper method highlights the failure itself.
No extra code was required to pass down a location, and you don't have to choose which location is annotated. You get the best of both worlds with no extra effort. Finally, I'd like to show you some more advanced workflows made possible by these new APIs. First, you can implement custom assertions by creating XCTIssue instances directly and calling record(issue:).
Here's an example of an assertion that validates some data and includes it as an attachment to the issue that it records. In the initial creation of the issue, I'm using 'var' because the rest of the code makes some modifications to the struct. We could also pass all the information up front to a longer initializer, but I think it's a bit easier to read this way. Next, I'm adding the data as an attachment to the issue. This means that the data will appear with the failure in the Xcode test report, enabling me to inspect it during triage and determine why it failed validation. Here I'm capturing the location where my custom assertion was called. This isn't required, but can result in clearer presentation, since the code you see here isn't likely to be useful in understanding the failure itself. Finally, we call record(issue:), which logs the issue and sends it to Xcode.

The other advanced workflow I want to show you is how you can override record(issue:) to observe, suppress, or modify failures recorded in your test class. This method is the funnel point through which every failure passes, so overrides have total control over the output of your test class. Our first example overrides record(issue:) for observation. It's important to call super to ensure the issue continues along the recording chain. You can also observe issues using XCTestObservationCenter, but the approach here is useful if you only want to observe failures in one class. If your override does not call super, you will have suppressed the issue: it will not continue along the recording chain, and nothing will be logged or reported to Xcode. Modification is the most common reason for overriding record(issue:). This pattern makes it possible to add attachments, which can be great diagnostic aids. The example here shows adding a simple string attachment, but the API can handle a broad range of types. And that wraps up our session.
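Sketches of the two workflows just described, assuming a hypothetical isValid(_:) check standing in for the actual validation:

```swift
class ValidationTests: XCTestCase {
    // Custom assertion: build an XCTIssue directly, attach the data that
    // failed validation, capture the caller's location, and record.
    func assertValid(_ data: Data,
                     file: StaticString = #filePath,
                     line: UInt = #line) {
        if isValid(data) { return }           // isValid(_:) is hypothetical

        var issue = XCTIssue(type: .assertionFailure,
                             compactDescription: "Data failed validation")
        issue.add(XCTAttachment(data: data))  // inspect later in the test report

        let location = XCTSourceCodeLocation(filePath: "\(file)",
                                             lineNumber: Int(line))
        issue.sourceCodeContext = XCTSourceCodeContext(location: location)
        record(issue)                         // logs and routes to Xcode
    }

    // Override: modify every issue recorded in this class by adding a
    // diagnostic attachment, then call super so the issue continues along
    // the recording chain. Skipping super would suppress the issue.
    override func record(_ issue: XCTIssue) {
        var issue = issue
        issue.add(XCTAttachment(string: "extra diagnostics"))
        super.record(issue)
    }
}
```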
A few key points I hope you take away: triaging test failures is one of the most important parts of caring for your test suites. Having call stacks makes it easy to track down the locations in your code that are most relevant to a failure. This, in turn, supports more natural patterns in your test code, freeing you up to focus on code reuse and other good practices. XCTIssue also supports attachments, which lets you add custom diagnostic data, helping you answer the "why" question.