Learn how to create a simple app that incorporates predicted touches into its drawing code.
The sample app Speed Sketch (see Leveraging Touch Input for Drawing Apps) uses predicted touches to minimize latency when drawing with either Apple Pencil or a finger. The key class for gathering touches is the app's custom StrokeGestureRecognizer class. Each new sequence of touch events results in the addition of a new Stroke object to the app's drawing canvas. Stroke objects store the touch data needed to do stylized line drawing, and they can render that data using a calligraphy pen, a regular pen, or a special debug mode that draws line segments for each distinct touch event.
Collect the Touch Input
The StrokeGestureRecognizer class collects drawing-related touch input and uses it to create a
Stroke object representing the path to render. In addition to the touches that actually occurred, the class also gathers any predicted touches. Listing 1 shows the portion of the gesture recognizer’s
append method that is responsible for gathering the predicted touches. The
collector block called by this code processes each touch event. The parameters to that block indicate whether the touch is an actual touch or a predicted touch.
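The sample's exact implementation appears in Listing 1. The sketch below shows the same approach in simplified form, using UIKit's coalescedTouches(for:) and predictedTouches(for:) methods on UIEvent. The SketchGestureRecognizer class name and the collector closure's signature are illustrative stand-ins, not the sample's actual declarations.

```swift
import UIKit
import UIKit.UIGestureRecognizerSubclass

class SketchGestureRecognizer: UIGestureRecognizer {
    // Called once per touch sample; `isPredicted` is true for samples
    // that UIKit predicted rather than actually received.
    var collector: ((_ touch: UITouch, _ isPredicted: Bool) -> Void)?

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        for touch in touches {
            // Coalesced touches are the real samples delivered since the
            // previous event; fall back to the touch itself if none exist.
            for actual in event.coalescedTouches(for: touch) ?? [touch] {
                collector?(actual, false)
            }
            // Predicted touches are UIKit's estimate of where the touch is
            // heading; collect them separately so they can be replaced when
            // the next set of real touches arrives.
            for predicted in event.predictedTouches(for: touch) ?? [] {
                collector?(predicted, true)
            }
        }
    }
}
```

Passing predicted touches through the same collector as real touches keeps the collection code uniform; only the flag tells the caller which kind of sample it received.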
The collection of touch input results in the creation of StrokeSample objects, which are then added to the current Stroke object. Stroke objects store predicted touches separately from other touches. Keeping them separate makes it easier to remove them later and prevents them from being confused with the real touch input. Each time the app adds a new set of actual touches, it discards the preceding set of predicted samples.
Listing 2 shows a portion of the Stroke class, which represents the touches associated with a single drawn line. For each new set of touches, the class adds the actual touches to its primary list of samples. Any predicted touches are stored in the predicted property. Each time the app calls the Stroke object's add method, that method moves the last set of predicted touches to the previous property, where they are ultimately discarded. Thus, Stroke maintains only the most recent set of predicted touches.
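Listing 2 contains the sample's actual code. The following is a rough sketch of the storage behavior described above; the StrokeSample declaration and the add method's signature are illustrative (a full implementation would store richer per-sample data, such as force and azimuth, for stylized rendering).

```swift
import UIKit

/// One recorded touch sample (simplified for illustration).
struct StrokeSample {
    let location: CGPoint
    let timestamp: TimeInterval
}

/// Holds the samples for a single drawn line, keeping predicted
/// samples separate from the actual ones.
class Stroke {
    private(set) var samples: [StrokeSample] = []
    private(set) var predicted: [StrokeSample] = []
    private(set) var previous: [StrokeSample] = []

    /// Appends a batch of actual samples and replaces the predicted set.
    /// The superseded predictions move to `previous` (useful when
    /// invalidating the area they were drawn in) before being discarded.
    func add(samples newSamples: [StrokeSample], predicted newPredicted: [StrokeSample]) {
        samples.append(contentsOf: newSamples)
        previous = predicted
        predicted = newPredicted
    }
}
```

Because each call to add replaces the predicted set wholesale, stale predictions never accumulate in the stroke's data.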
Render the Predicted Touches
During rendering, the app treats predicted touches like actual touches. It breaks down the contents of each
Stroke object into one or more
StrokeSegment objects, which the drawing code fetches using a StrokeSegmentIterator object. Listing 3 shows the implementation of this class. As the drawing code iterates over the stroke samples, the iterator returns the samples for the actual touches first. Only after the iterator returns all of the actual touch samples does it return the samples for any predicted touches. Thus, the predicted touches are always located at the end of the stroked line.
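The sample's iterator appears in Listing 3. The simplified sketch below, built on the Stroke and StrokeSample types from the previous sketch, yields raw samples rather than segment objects, but it demonstrates the same ordering: actual samples first, predicted samples last.

```swift
/// Iterates a stroke's samples, delivering all actual samples before any
/// predicted ones, so predictions always appear at the end of the line.
struct StrokeSampleIterator: IteratorProtocol {
    private var remaining: [StrokeSample]

    init(stroke: Stroke) {
        // Actual samples first, then the latest predicted samples.
        remaining = stroke.samples + stroke.predicted
    }

    mutating func next() -> StrokeSample? {
        return remaining.isEmpty ? nil : remaining.removeFirst()
    }
}

extension Stroke: Sequence {
    func makeIterator() -> StrokeSampleIterator {
        return StrokeSampleIterator(stroke: self)
    }
}
```

With this conformance, drawing code can write `for sample in stroke` and rely on the predictions landing at the end of the line, where they are redrawn or replaced once the corresponding real touches arrive.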