Gesture Recognizers

Gesture recognizers convert low-level event handling code into higher-level actions. They are objects that you attach to a view, which allows the view to respond to actions the way a control does. Gesture recognizers interpret touches to determine whether they correspond to a specific gesture, such as a swipe, pinch, or rotation. If they recognize their assigned gesture, they send an action message to a target object. The target object is typically the view’s view controller, which responds to the gesture as shown in Figure 1-1. This design pattern is both powerful and simple; you can dynamically determine what actions a view responds to, and you can add gesture recognizers to a view without having to subclass the view.

Figure 1-1  A gesture recognizer attached to a view

Use Gesture Recognizers to Simplify Event Handling

The UIKit framework provides predefined gesture recognizers that detect common gestures. It’s best to use a predefined gesture recognizer when possible because their simplicity reduces the amount of code you have to write. In addition, using a standard gesture recognizer instead of writing your own ensures that your app behaves the way users expect.

If you want your app to recognize a unique gesture, such as a checkmark or a swirly motion, you can create your own custom gesture recognizer. To learn how to design and implement your own gesture recognizer, see Creating a Custom Gesture Recognizer.

Built-in Gesture Recognizers Recognize Common Gestures

When designing your app, consider what gestures you want to enable. Then, for each gesture, determine whether one of the predefined gesture recognizers listed in Table 1-1 is sufficient.

Table 1-1  Gesture recognizer classes of the UIKit framework

Gesture                                              UIKit class
Tapping (any number of taps)                         UITapGestureRecognizer
Pinching in and out (for zooming a view)             UIPinchGestureRecognizer
Panning or dragging                                  UIPanGestureRecognizer
Swiping (in any direction)                           UISwipeGestureRecognizer
Rotating (fingers moving in opposite directions)     UIRotationGestureRecognizer
Long press (also known as “touch and hold”)          UILongPressGestureRecognizer


Your app should respond to gestures only in ways that users expect. For example, a pinch should zoom in and out whereas a tap should select something. For guidelines about how to properly use gestures, see Apps Respond to Gestures, Not Clicks.

Gesture Recognizers Are Attached to a View

Every gesture recognizer is associated with one view. By contrast, a view can have multiple gesture recognizers, because a single view might respond to many different gestures. For a gesture recognizer to recognize touches that occur in a particular view, you must attach the gesture recognizer to that view. When a user touches that view, the gesture recognizer receives a message that a touch occurred before the view object does. As a result, the gesture recognizer can respond to touches on behalf of the view.

Gestures Trigger Action Messages

When a gesture recognizer recognizes its specified gesture, it sends an action message to its target. To create a gesture recognizer, you initialize it with a target and an action.

Discrete and Continuous Gestures

Gestures are either discrete or continuous. A discrete gesture, such as a tap, occurs once. A continuous gesture, such as pinching, takes place over a period of time. For discrete gestures, a gesture recognizer sends its target a single action message. A gesture recognizer for continuous gestures keeps sending action messages to its target until the multitouch sequence ends, as shown in Figure 1-2.

Figure 1-2  Discrete and continuous gestures

Responding to Events with Gesture Recognizers

There are three things you do to add a built-in gesture recognizer to your app:

  1. Create and configure a gesture recognizer instance.

    This step includes assigning a target, action, and sometimes assigning gesture-specific attributes (such as the numberOfTapsRequired).

  2. Attach the gesture recognizer to a view.

  3. Implement the action method that handles the gesture.

Using Interface Builder to Add a Gesture Recognizer to Your App

Within Interface Builder in Xcode, add a gesture recognizer to your app the same way you add any object to your user interface—drag the gesture recognizer from the object library to a view. When you do this, the gesture recognizer automatically becomes attached to that view. You can check which view your gesture recognizer is attached to, and if necessary, change the connection in the nib file.

After you create the gesture recognizer object, you need to create and connect an action method. This method is called whenever the connected gesture recognizer recognizes its gesture. If you need to reference the gesture recognizer outside of this action method, you should also create and connect an outlet for the gesture recognizer. Your code should look similar to Listing 1-1.

Listing 1-1  Adding a gesture recognizer to your app with Interface Builder

@interface APLGestureRecognizerViewController ()
@property (nonatomic, strong) IBOutlet UITapGestureRecognizer *tapRecognizer;
@end

@implementation APLGestureRecognizerViewController

- (IBAction)displayGestureForTapRecognizer:(UITapGestureRecognizer *)recognizer {
     // Will implement method later...
}

@end

Adding a Gesture Recognizer Programmatically

You can create a gesture recognizer programmatically by allocating and initializing an instance of a concrete UIGestureRecognizer subclass, such as UIPinchGestureRecognizer. When you initialize the gesture recognizer, specify a target object and an action selector, as in Listing 1-2. Often, the target object is the view’s view controller.

If you create a gesture recognizer programmatically, you need to attach it to a view using the addGestureRecognizer: method. Listing 1-2 creates a single tap gesture recognizer, specifies that one tap is required for the gesture to be recognized, and then attaches the gesture recognizer object to a view. Typically, you create a gesture recognizer in your view controller’s viewDidLoad method, as shown in Listing 1-2.

Listing 1-2  Creating a single tap gesture recognizer programmatically

- (void)viewDidLoad {
     [super viewDidLoad];
     // Create and initialize a tap gesture
     UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc]
          initWithTarget:self action:@selector(respondToTapGesture:)];
     // Specify that the gesture must be a single tap
     tapRecognizer.numberOfTapsRequired = 1;
     // Add the tap gesture recognizer to the view
     [self.view addGestureRecognizer:tapRecognizer];
     // Do any additional setup after loading the view, typically from a nib
}

Responding to Discrete Gestures

When you create a gesture recognizer, you connect the recognizer to an action method. Use this action method to respond to your gesture recognizer’s gesture. Listing 1-3 provides an example of responding to a discrete gesture. When the user taps the view that the gesture recognizer is attached to, the view controller displays an image view that says “Tap.” The showGestureForTapRecognizer: method determines the location of the gesture in the view by calling the recognizer’s locationInView: method and then displays the image at that location.

Listing 1-3  Handling a double tap gesture

- (IBAction)showGestureForTapRecognizer:(UITapGestureRecognizer *)recognizer {
     // Get the location of the gesture
     CGPoint location = [recognizer locationInView:self.view];
     // Display an image view at that location
     [self drawImageForGestureRecognizer:recognizer atPoint:location];
     // Animate the image view so that it fades out
     [UIView animateWithDuration:0.5 animations:^{
          self.imageView.alpha = 0.0;
     }];
}

Each gesture recognizer has its own set of properties. For example, in Listing 1-4, the showGestureForSwipeRecognizer: method uses the swipe gesture recognizer’s direction property to determine if the user swiped to the left or to the right. Then, it uses that value to make an image fade out in the same direction as the swipe.

Listing 1-4  Responding to a left or right swipe gesture

// Respond to a swipe gesture
- (IBAction)showGestureForSwipeRecognizer:(UISwipeGestureRecognizer *)recognizer {
     // Get the location of the gesture
     CGPoint location = [recognizer locationInView:self.view];
     // Display an image view at that location
     [self drawImageForGestureRecognizer:recognizer atPoint:location];
     // If gesture is a left swipe, specify an end location
     // to the left of the current location
     if (recognizer.direction == UISwipeGestureRecognizerDirectionLeft) {
          location.x -= 220.0;
     } else {
          location.x += 220.0;
     }
     // Animate the image view in the direction of the swipe as it fades out
     [UIView animateWithDuration:0.5 animations:^{
          self.imageView.alpha = 0.0;
          self.imageView.center = location;
     }];
}

Responding to Continuous Gestures

Continuous gestures allow your app to respond to a gesture as it is happening. For example, your app can zoom while a user is pinching or allow a user to drag an object around the screen.

Listing 1-5 displays a “Rotate” image at the same rotation angle as the gesture, and when the user stops rotating, animates the image so it fades out in place while rotating back to horizontal. As the user rotates their fingers, the showGestureForRotationRecognizer: method is called continually until both fingers are lifted.

Listing 1-5  Responding to a rotation gesture

// Respond to a rotation gesture
- (IBAction)showGestureForRotationRecognizer:(UIRotationGestureRecognizer *)recognizer {
     // Get the location of the gesture
     CGPoint location = [recognizer locationInView:self.view];
     // Set the rotation angle of the image view to
     // match the rotation of the gesture
     CGAffineTransform transform = CGAffineTransformMakeRotation([recognizer rotation]);
     self.imageView.transform = transform;
     // Display an image view at that location
     [self drawImageForGestureRecognizer:recognizer atPoint:location];
     // If the gesture has ended or is canceled, begin the animation
     // back to horizontal and fade out
     if (([recognizer state] == UIGestureRecognizerStateEnded) || ([recognizer state] == UIGestureRecognizerStateCancelled)) {
          [UIView animateWithDuration:0.5 animations:^{
               self.imageView.alpha = 0.0;
               self.imageView.transform = CGAffineTransformIdentity;
          }];
     }
}

Each time the method is called, the image is set to be opaque in the drawImageForGestureRecognizer: method. When the gesture is complete, the image is set to be transparent in the animateWithDuration: method. The showGestureForRotationRecognizer: method determines whether a gesture is complete by checking the gesture recognizer’s state. These states are explained in more detail in Gesture Recognizers Operate in a Finite State Machine.

Defining How Gesture Recognizers Interact

Oftentimes, as you add gesture recognizers to your app, you need to be specific about how you want the recognizers to interact with each other or any other touch-event handling code in your app. To do this, you first need to understand a little more about how gesture recognizers work.

Gesture Recognizers Operate in a Finite State Machine

Gesture recognizers transition from one state to another in a predefined way. From each state, they can move to one of several possible next states based on whether they meet certain conditions. The exact state machine varies depending on whether the gesture recognizer is discrete or continuous, as illustrated in Figure 1-3. All gesture recognizers start in the Possible state (UIGestureRecognizerStatePossible). They analyze any multitouch sequences that they receive, and during analysis they either recognize or fail to recognize a gesture. Failing to recognize a gesture means the gesture recognizer transitions to the Failed state (UIGestureRecognizerStateFailed).

Figure 1-3  State machines for gesture recognizers

When a discrete gesture recognizer recognizes its gesture, the gesture recognizer transitions from Possible to Recognized (UIGestureRecognizerStateRecognized) and the recognition is complete.

For continuous gestures, the gesture recognizer transitions from Possible to Began (UIGestureRecognizerStateBegan) when the gesture is first recognized. Then, it transitions from Began to Changed (UIGestureRecognizerStateChanged), and continues to move from Changed to Changed as the gesture occurs. When the user’s last finger is lifted from the view, the gesture recognizer transitions to the Ended state (UIGestureRecognizerStateEnded) and the recognition is complete. Note that the Ended state is an alias for the Recognized state.

A recognizer for a continuous gesture can also transition from Changed to Canceled (UIGestureRecognizerStateCancelled) if it decides that the gesture no longer fits the expected pattern.

Every time a gesture recognizer changes state, the gesture recognizer sends an action message to its target, unless it transitions to Failed or Canceled. Thus, a discrete gesture recognizer sends only a single action message when it transitions from Possible to Recognized. A continuous gesture recognizer sends many action messages as it changes states.

When a gesture recognizer reaches the Recognized (or Ended) state, it resets its state back to Possible. The transition back to Possible does not trigger an action message.
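
This state machine is why an action method for a continuous gesture typically branches on the recognizer’s state. A minimal sketch (the handlePan: selector and the pan recognizer it handles are assumptions for illustration, not part of UIKit):

// Sketch of a continuous-gesture handler that branches on state.
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    switch (recognizer.state) {
        case UIGestureRecognizerStateBegan:
            // Sent once, when the gesture is first recognized
            break;
        case UIGestureRecognizerStateChanged:
            // Sent repeatedly while the gesture continues
            break;
        case UIGestureRecognizerStateEnded:
            // Sent once, when the last finger is lifted
            break;
        default:
            break;
    }
}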

Interacting with Other Gesture Recognizers

A view can have more than one gesture recognizer attached to it. Use the view’s gestureRecognizers property to determine what gesture recognizers are attached to a view. You can also dynamically change how a view handles gestures by adding or removing a gesture recognizer from a view with the addGestureRecognizer: and removeGestureRecognizer: methods, respectively.
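
For example, you might check whether a recognizer is attached before removing it (a sketch; the pinchRecognizer property is an assumption for illustration):

// Dynamically stop a view from handling pinches.
// self.pinchRecognizer is a hypothetical property holding a
// recognizer that was previously attached to the view.
if ([self.view.gestureRecognizers containsObject:self.pinchRecognizer]) {
    [self.view removeGestureRecognizer:self.pinchRecognizer];
}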

When a view has multiple gesture recognizers attached to it, you may want to alter how the competing gesture recognizers receive and analyze touch events. By default, there is no set order for which gesture recognizers receive a touch first, and for this reason touches can be passed to gesture recognizers in a different order each time. You can override this default behavior to:

  • Specify that one gesture recognizer should analyze a touch before another gesture recognizer.

  • Allow two gesture recognizers to operate simultaneously.

  • Prevent a gesture recognizer from analyzing a touch.

Use the UIGestureRecognizer class methods, delegate methods, and methods overridden by subclasses to effect these behaviors.

Declaring a Specific Order for Two Gesture Recognizers

Imagine that you want to recognize a swipe and a pan gesture, and you want these two gestures to trigger distinct actions. By default, when the user attempts to swipe, the gesture is interpreted as a pan. This is because a swiping gesture meets the necessary conditions to be interpreted as a pan (a continuous gesture) before it meets the necessary conditions to be interpreted as a swipe (a discrete gesture).

For your view to recognize both swipes and pans, you want the swipe gesture recognizer to analyze the touch event before the pan gesture recognizer does. If the swipe gesture recognizer determines that a touch is a swipe, the pan gesture recognizer never needs to analyze the touch. If the swipe gesture recognizer determines that the touch is not a swipe, it moves to the Failed state and the pan gesture recognizer should begin analyzing the touch event.

Prior to iOS 7, if one gesture recognizer had to wait for another to fail, you used the requireGestureRecognizerToFail: method, which sets up a permanent relationship between the two objects at creation time. This works fine when gesture recognizers aren’t created elsewhere in the app or in a framework, and when the set of gesture recognizers remains the same.
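
For the swipe-and-pan scenario above, that permanent relationship is a single call (a sketch; swipeRecognizer and panRecognizer are assumed to be recognizers you created and attached to the same view):

// The pan recognizer cannot leave the Possible state until
// the swipe recognizer fails.
[panRecognizer requireGestureRecognizerToFail:swipeRecognizer];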

In iOS 7, UIGestureRecognizerDelegate introduces two methods that allow failure requirements to be specified at runtime by a gesture recognizer delegate object:

  • gestureRecognizer:shouldRequireFailureOfGestureRecognizer:

  • gestureRecognizer:shouldBeRequiredToFailByGestureRecognizer:

For both methods, the gesture recognizer delegate is called once per recognition attempt, which means that failure requirements can be determined lazily. It also means that you can set up failure requirements between recognizers in different view hierarchies.

An example of a situation where dynamic failure requirements are useful is in an app that attaches a screen-edge pan gesture recognizer to a view. In this case, you might want all other relevant gesture recognizers associated with that view's subtree to require the screen-edge gesture recognizer to fail so you can prevent any graphical glitches that might occur when the other recognizers get canceled after starting the recognition process. To do this, you could use code similar to the code shown in Listing 1-6.

Listing 1-6  Setting up failure requirements

UIScreenEdgePanGestureRecognizer *myScreenEdgePanGestureRecognizer;
myScreenEdgePanGestureRecognizer = [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(handleScreenEdgePan:)];
myScreenEdgePanGestureRecognizer.delegate = self;
// Configure the gesture recognizer and attach it to the view.

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldBeRequiredToFailByGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    BOOL result = NO;
    if ((gestureRecognizer == myScreenEdgePanGestureRecognizer) && [[otherGestureRecognizer view] isDescendantOfView:[gestureRecognizer view]]) {
        result = YES;
    }
    return result;
}

Preventing Gesture Recognizers from Analyzing Touches

You can alter the behavior of a gesture recognizer by adding a delegate object to your gesture recognizer. The UIGestureRecognizerDelegate protocol provides a couple of ways that you can prevent a gesture recognizer from analyzing touches. You use either the gestureRecognizer:shouldReceiveTouch: method or the gestureRecognizerShouldBegin: method—both are optional methods of the UIGestureRecognizerDelegate protocol.

When a touch begins, if you can immediately determine whether or not your gesture recognizer should consider that touch, use the gestureRecognizer:shouldReceiveTouch: method. This method is called every time there is a new touch. Returning NO prevents the gesture recognizer from being notified that a touch occurred. The default value is YES. This method does not alter the state of the gesture recognizer.

Listing 1-7 uses the gestureRecognizer:shouldReceiveTouch: delegate method to prevent a tap gesture recognizer from receiving touches that are within a custom subview. When a touch occurs, the gestureRecognizer:shouldReceiveTouch: method is called. It determines whether the user touched the custom view, and if so, prevents the tap gesture recognizer from receiving the touch event.

Listing 1-7  Preventing a gesture recognizer from receiving a touch

- (void)viewDidLoad {
    [super viewDidLoad];
    // Add the delegate to the tap gesture recognizer
    self.tapGestureRecognizer.delegate = self;
}

// Implement the UIGestureRecognizerDelegate method
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    // Determine if the touch is inside the custom subview
    if ([touch view] == self.customSubview) {
        // If it is, prevent all of the delegate's gesture recognizers
        // from receiving the touch
        return NO;
    }
    return YES;
}

If you need to wait as long as possible before deciding whether or not a gesture recognizer should analyze a touch, use the gestureRecognizerShouldBegin: delegate method. Generally, you use this method if you have a UIView or UIControl subclass with custom touch-event handling that competes with a gesture recognizer. Returning NO causes the gesture recognizer to immediately fail, which allows the other touch handling to proceed. This method is called when a gesture recognizer attempts to transition out of the Possible state, if the gesture recognition would prevent a view or control from receiving a touch.
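
As a sketch, a delegate might fail a recognizer whenever a custom control is already tracking a touch (self.customControl is a hypothetical outlet; tracking is a real UIControl property):

// Fail the gesture recognizer immediately so the control's
// own touch handling proceeds.
- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer {
    return !self.customControl.tracking;
}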

You can use the UIView method gestureRecognizerShouldBegin: if your view or view controller cannot be the gesture recognizer’s delegate. The method signature and implementation are the same.

Permitting Simultaneous Gesture Recognition

By default, two gesture recognizers cannot recognize their respective gestures at the same time. But suppose, for example, that you want the user to be able to pinch and rotate a view at the same time. You need to change the default behavior by implementing the gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: method, an optional method of the UIGestureRecognizerDelegate protocol. This method is called when one gesture recognizer’s analysis of a gesture would block another gesture recognizer from recognizing its gesture, or vice versa. This method returns NO by default. Return YES when you want two gesture recognizers to analyze their gestures simultaneously.
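
A minimal implementation for the pinch-and-rotate case might look like this (a sketch; the pinchRecognizer and rotationRecognizer properties are assumptions for illustration):

// Allow only the pinch/rotation pair to recognize together;
// every other combination keeps the default exclusive behavior.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return (gestureRecognizer == self.pinchRecognizer && otherGestureRecognizer == self.rotationRecognizer) ||
           (gestureRecognizer == self.rotationRecognizer && otherGestureRecognizer == self.pinchRecognizer);
}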

Specifying a One-Way Relationship Between Two Gesture Recognizers

If you want to control how two recognizers interact with each other but you need to specify a one-way relationship, you can override either the canPreventGestureRecognizer: or the canBePreventedByGestureRecognizer: subclass method to return NO (the default is YES). For example, if you want a rotation gesture to prevent a pinch gesture but you don’t want a pinch gesture to prevent a rotation gesture, override the pinch gesture recognizer’s canPreventGestureRecognizer: method to return NO when the recognizer it is asked about is the rotation gesture recognizer. (Equivalently, you can override the rotation gesture recognizer’s canBePreventedByGestureRecognizer: method to return NO for the pinch gesture recognizer.) For more information about how to subclass UIGestureRecognizer, see Creating a Custom Gesture Recognizer.

If neither gesture should prevent the other, use the gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: method, as described in Permitting Simultaneous Gesture Recognition. By default, a pinch gesture prevents a rotation and vice versa because two gestures cannot be recognized at the same time.

Interacting with Other User Interface Controls

In iOS 6.0 and later, default control actions prevent overlapping gesture recognizer behavior. For example, the default action for a button is a single tap. If you have a single tap gesture recognizer attached to a button’s parent view, and the user taps the button, then the button’s action method receives the touch event instead of the gesture recognizer. This applies only to gesture recognition that overlaps the default action for a control, which includes:

  • A single finger single tap on a UIButton, UISwitch, UIStepper, UISegmentedControl, and UIPageControl.

  • A single finger swipe on the knob of a UISlider, in a direction parallel to the slider.

  • A single finger pan gesture on the knob of a UISwitch, in a direction parallel to the switch.

If you have a custom subclass of one of these controls and you want to change the default action, attach a gesture recognizer directly to the control instead of to the parent view. Then, the gesture recognizer receives the touch event first. As always, be sure to read the iOS Human Interface Guidelines to ensure that your app offers an intuitive user experience, especially when overriding the default behavior of a standard control.
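
In code, the difference is only the view you attach the recognizer to (a sketch; myCustomButton and the handleTap: action method are assumptions for illustration):

// Attached directly to the control, the recognizer receives
// the touch before the control's default action does.
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[myCustomButton addGestureRecognizer:tapRecognizer];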

Gesture Recognizers Interpret Raw Touch Events

So far, you’ve learned about gestures and how your app can recognize and respond to them. However, to create a custom gesture recognizer or to control how gesture recognizers interact with a view’s touch-event handling, you need to think more specifically in terms of touches and events.

An Event Contains All the Touches for the Current Multitouch Sequence

In iOS, a touch is the presence or movement of a finger on the screen. A gesture has one or more touches, which are represented by UITouch objects. For example, a pinch-close gesture has two touches—two fingers on the screen moving toward each other from opposite directions.

An event encompasses all touches that occur during a multitouch sequence. A multitouch sequence begins when a finger touches the screen and ends when the last finger is lifted. As a finger moves, iOS sends touch objects to the event. A multitouch event is represented by a UIEvent object of type UIEventTypeTouches.

Each touch object tracks only one finger and lasts only as long as the multitouch sequence. During the sequence, UIKit tracks the finger and updates the attributes of the touch object. These attributes include the phase of the touch, its location in a view, its previous location, and its timestamp.

The touch phase indicates when a touch begins, whether it is moving or stationary, and when it ends—that is, when the finger is no longer touching the screen. As depicted in Figure 1-4, an app receives event objects during each phase of any touch.

Figure 1-4  A multitouch sequence and touch phases

An App Receives Touches in the Touch-Handling Methods

During a multitouch sequence, the app sends these messages when there are new or changed touches for a given touch phase:

  • The touchesBegan:withEvent: method when one or more fingers touch down on the screen.

  • The touchesMoved:withEvent: method when one or more fingers move.

  • The touchesEnded:withEvent: method when one or more fingers lift up from the screen.

  • The touchesCancelled:withEvent: method when the touch sequence is canceled by a system event, such as an incoming phone call.

Each of these methods is associated with a touch phase; for example, the touchesBegan:withEvent: method is associated with UITouchPhaseBegan. The phase of a touch object is stored in its phase property.

Regulating the Delivery of Touches to Views

There may be times when you want a view to receive a touch before a gesture recognizer does. But before you can alter the delivery path of touches to views, you need to understand the default behavior. In the simple case, when a touch occurs, the touch object is passed from the UIApplication object to the UIWindow object. Then, the window first sends touches to any gesture recognizers attached to the view where the touches occurred (or to that view’s superviews), before it passes the touch to the view object itself.

Figure 1-5  Default delivery path for touch events

Gesture Recognizers Get the First Opportunity to Recognize a Touch

A window delays the delivery of touch objects to the view so that the gesture recognizer can analyze the touch first. During the delay, if the gesture recognizer recognizes a touch gesture, then the window never delivers the touch object to the view, and also cancels any touch objects it previously sent to the view that were part of that recognized sequence.

For example, if you have a gesture recognizer for a discrete gesture that requires a two-fingered touch, this translates to two separate touch objects. As the touches occur, the touch objects are passed from the app object to the window object for the view where the touches occurred, and the following sequence occurs, as depicted in Figure 1-6.

Figure 1-6  Sequence of messages for touches
  1. The window sends two touch objects in the Began phase—through the touchesBegan:withEvent: method—to the gesture recognizer. The gesture recognizer doesn’t recognize the gesture yet, so its state is Possible. The window sends these same touches to the view that the gesture recognizer is attached to.

  2. The window sends two touch objects in the Moved phase—through the touchesMoved:withEvent: method—to the gesture recognizer. The recognizer still doesn’t detect the gesture, and is still in state Possible. The window then sends these touches to the attached view.

  3. The window sends one touch object in the Ended phase—through the touchesEnded:withEvent: method—to the gesture recognizer. This touch object doesn’t yield enough information for the gesture, but the window withholds the object from the attached view.

  4. The window sends the other touch object in the Ended phase. The gesture recognizer now recognizes its gesture, so it sets its state to Recognized. Just before the first action message is sent, the view receives a touchesCancelled:withEvent: message to invalidate the touch objects previously sent in the Began and Moved phases. The touches in the Ended phase are canceled.

Now assume that the gesture recognizer in the last step decides that this multitouch sequence it’s been analyzing is not its gesture. It sets its state to UIGestureRecognizerStateFailed. Then the window sends the two touch objects in the Ended phase to the attached view in a touchesEnded:withEvent: message.

A gesture recognizer for a continuous gesture follows a similar sequence, except that it is more likely to recognize its gesture before touch objects reach the Ended phase. Upon recognizing its gesture, it sets its state to UIGestureRecognizerStateBegan (not Recognized). The window sends all subsequent touch objects in the multitouch sequence to the gesture recognizer but not to the attached view.

Affecting the Delivery of Touches to Views

You can change the values of several UIGestureRecognizer properties to alter the default delivery path in certain ways. If you change the default values of these properties, you get the following differences in behavior:

  • delaysTouchesBegan (default of NO)—Normally, the window sends touch objects in the Began and Moved phases to the view and the gesture recognizer. Setting delaysTouchesBegan to YES prevents the window from delivering touch objects in the Began phase to the view. This ensures that when a gesture recognizer recognizes its gesture, no part of the touch event was delivered to the attached view. Be cautious when setting this property because it can make your interface feel unresponsive.

    This setting provides a similar behavior to the delaysContentTouches property on UIScrollView; in this case, when scrolling begins soon after the touch begins, subviews of the scroll-view object never receive the touch, so there is no flash of visual feedback.

  • delaysTouchesEnded (default of YES)—When this property is set to YES, it ensures that a view does not complete an action that the gesture might want to cancel later. When a gesture recognizer is analyzing a touch event, the window does not deliver touch objects in the Ended phase to the attached view. If a gesture recognizer recognizes its gesture, the touch objects are canceled. If the gesture recognizer does not recognize its gesture, the window delivers these objects to the view through a touchesEnded:withEvent: message. Setting this property to NO allows the view to analyze touch objects in the Ended phase at the same time as the gesture recognizer.

    Consider, for example, that a view has a tap gesture recognizer that requires two taps, and the user double taps the view. With the property set to YES, the view gets touchesBegan:withEvent:, touchesBegan:withEvent:, touchesCancelled:withEvent:, and touchesCancelled:withEvent:. If this property is set to NO, the view gets the following sequence of messages: touchesBegan:withEvent:, touchesEnded:withEvent:, touchesBegan:withEvent:, and touchesCancelled:withEvent:, which means that in touchesBegan:withEvent:, the view can recognize a double tap.
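
Both properties are set directly on a gesture recognizer; for example (a sketch, using an assumed tapRecognizer):

// Withhold Began-phase touches from the view until the gesture
// is recognized or fails, and deliver Ended-phase touches
// immediately instead of delaying them.
tapRecognizer.delaysTouchesBegan = YES;   // default is NO
tapRecognizer.delaysTouchesEnded = NO;    // default is YES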

If a gesture recognizer detects a touch that it determines is not part of its gesture, it can pass the touch directly to its view. To do this, the gesture recognizer calls ignoreTouch:forEvent: on itself, passing in the touch object.
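
For example, a custom recognizer’s touchesBegan:withEvent: override might hand back touches that fall outside its view’s bounds (a sketch; the bounds test is an assumption for illustration):

// Pass touches the recognizer doesn't care about directly
// to the view instead of analyzing them.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    for (UITouch *touch in touches) {
        CGPoint point = [touch locationInView:self.view];
        if (!CGRectContainsPoint(self.view.bounds, point)) {
            [self ignoreTouch:touch forEvent:event];
        }
    }
}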

Creating a Custom Gesture Recognizer

To implement a custom gesture recognizer, first create a subclass of UIGestureRecognizer in Xcode. Then, add the following import directive in your subclass’s header file:

#import <UIKit/UIGestureRecognizerSubclass.h>

Next, copy the following method declarations from UIGestureRecognizerSubclass.h to your header file; these are the methods you override in your subclass:

- (void)reset;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;

These methods have the same exact signature and behavior as the corresponding touch-event handling methods described earlier in An App Receives Touches in the Touch-Handling Methods. In all of the methods you override, you must call the superclass implementation, even if the method has a null implementation.

Notice that the state property in UIGestureRecognizerSubclass.h is now readwrite instead of readonly, as it is in UIGestureRecognizer.h. Your subclass changes its state by assigning UIGestureRecognizerState constants to that property.

Implementing the Touch-Event Handling Methods for a Custom Gesture Recognizer

The heart of the implementation for a custom gesture recognizer is the four methods: touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:. Within these methods, you translate low-level touch events into gesture recognition by setting the gesture recognizer’s state. Listing 1-8 creates a gesture recognizer for a discrete single-touch checkmark gesture. It records the midpoint of the gesture—the point at which the upstroke begins—so that clients can obtain this value.

This example has only a single view, but most apps have many views. In general, you should convert touch locations to the screen’s coordinate system so that you can correctly recognize gestures that span multiple views.
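A minimal sketch of such a conversion, assuming it runs inside one of the touch-handling methods where a touches set is available:

```objc
// Sketch: convert a touch location to screen coordinates.
UITouch *touch = [touches anyObject];
// Passing nil returns the location in the touch's window
CGPoint windowPoint = [touch locationInView:nil];
// Convert from the window's coordinate system to screen coordinates
CGPoint screenPoint = [touch.window convertPoint:windowPoint toWindow:nil];
```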

Listing 1-8  Implementation of a checkmark gesture recognizer

#import <UIKit/UIGestureRecognizerSubclass.h>

// Implemented in your custom subclass
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    if ([touches count] != 1) {
        self.state = UIGestureRecognizerStateFailed;
        return;
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    if (self.state == UIGestureRecognizerStateFailed) return;
    CGPoint nowPoint = [[touches anyObject] locationInView:self.view];
    CGPoint prevPoint = [[touches anyObject] previousLocationInView:self.view];

    // strokeUp is a property. The default value is NO.
    if (!self.strokeUp) {
        // On downstroke, both x and y increase in positive direction
        if (nowPoint.x >= prevPoint.x && nowPoint.y >= prevPoint.y) {
            self.midPoint = nowPoint;
        // Upstroke has increasing x value but decreasing y value
        } else if (nowPoint.x >= prevPoint.x && nowPoint.y <= prevPoint.y) {
            self.strokeUp = YES;
        } else {
            self.state = UIGestureRecognizerStateFailed;
        }
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    if ((self.state == UIGestureRecognizerStatePossible) && self.strokeUp) {
        self.state = UIGestureRecognizerStateRecognized;
    }
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesCancelled:touches withEvent:event];
    self.midPoint = CGPointZero;
    self.strokeUp = NO;
    self.state = UIGestureRecognizerStateFailed;
}

State transitions for discrete and continuous gestures are different, as described in Gesture Recognizers Operate in a Finite State Machine. When you create a custom gesture recognizer, you indicate whether it is discrete or continuous by assigning it the relevant states. As an example, the checkmark gesture recognizer in Listing 1-8 never sets the state to Began or Changed, because it’s discrete.
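By contrast, a continuous recognizer sets Began on the first qualifying movement and Changed as the gesture progresses. The following sketch illustrates the state transitions only; it is not a complete recognizer, and the condition for entering the gesture is assumed:

```objc
// In a continuous UIGestureRecognizer subclass (illustrative sketch)
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    if (self.state == UIGestureRecognizerStatePossible) {
        // First qualifying movement: the continuous gesture has started
        self.state = UIGestureRecognizerStateBegan;
    } else {
        // Subsequent movement: action messages are sent again for each change
        self.state = UIGestureRecognizerStateChanged;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    self.state = UIGestureRecognizerStateEnded;
}
```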

The most important thing you need to do when subclassing a gesture recognizer is to set the gesture recognizer’s state accurately. iOS needs to know the state of a gesture recognizer in order for gesture recognizers to interact as expected. For example, if you want to permit simultaneous recognition or require a gesture recognizer to fail, iOS needs to understand the current state of your recognizer.
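A failure requirement, for instance, depends entirely on your subclass entering the Failed state at the right time. Setting up such a dependency looks like this; both recognizer variables are placeholders assumed to exist in scope:

```objc
// singleTap fires only after the custom checkmark recognizer
// transitions to the Failed state
[singleTap requireGestureRecognizerToFail:checkmarkRecognizer];
```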

For more about creating custom gesture recognizers, see WWDC 2012: Building Advanced Gesture Recognizers.

Resetting a Gesture Recognizer’s State

If your gesture recognizer transitions to Recognized/Ended, Canceled, or Failed, the UIGestureRecognizer class calls the reset method just before the gesture recognizer transitions back to Possible.

Implement the reset method to reset any internal state so that your recognizer is ready for a new attempt at recognizing a gesture, as in Listing 1-9. After a gesture recognizer returns from this method, it receives no further updates for touches that are in progress.

Listing 1-9  Resetting a gesture recognizer

- (void)reset {
    [super reset];
    self.midPoint = CGPointZero;
    self.strokeUp = NO;
}