AVSpeechSynthesizer Class Reference

Inherits from
NSObject
Conforms to
NSObject
Framework
/System/Library/Frameworks/AVFoundation.framework
Availability
Available in iOS 7.0 and later.
Declared in
AVSpeechSynthesis.h

Overview

The AVSpeechSynthesizer class produces synthesized speech from text on an iOS device, and provides methods for controlling or monitoring the progress of ongoing speech.

To speak some amount of text, you must first create an AVSpeechUtterance instance containing the text. (Optionally, you may also use the utterance object to control parameters affecting its speech, such as voice, pitch, and rate.) Then, pass the utterance to the speakUtterance: method of a speech synthesizer instance to speak it.
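
A minimal sketch of this flow appears below. The text, language code, and pitch value are illustrative, and the synthesizer is kept outside the function so that it is not deallocated before speech finishes.

#import <AVFoundation/AVFoundation.h>

// Keep the synthesizer alive for as long as speech should continue;
// in an app this would typically be a property on a controller object.
static AVSpeechSynthesizer *synthesizer = nil;

void SpeakGreeting(void) {
    if (synthesizer == nil) {
        synthesizer = [[AVSpeechSynthesizer alloc] init];
    }

    // Create an utterance for the text and optionally adjust its delivery.
    AVSpeechUtterance *utterance =
        [AVSpeechUtterance speechUtteranceWithString:@"Hello, world."];
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate;  // default speaking rate
    utterance.pitchMultiplier = 1.1f;                     // illustrative value

    // Begins speaking immediately if the synthesizer's queue is empty.
    [synthesizer speakUtterance:utterance];
}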

The speech synthesizer maintains a queue of utterances to be spoken. If the synthesizer is not currently speaking, calling speakUtterance: begins speaking that utterance immediately (or begins waiting out its preUtteranceDelay, if one is set). If the synthesizer is already speaking, utterances are added to the queue and spoken in the order they are received.

After speech has begun, you can use the synthesizer object to pause or stop speech. After speech is paused, it may be continued from the point at which it left off; stopping ends speech entirely, removing any utterances yet to be spoken from the synthesizer’s queue.

You may monitor the speech synthesizer by examining its speaking and paused properties, or by setting a delegate. Messages in the AVSpeechSynthesizerDelegate protocol are sent as significant events occur during speech synthesis.
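
As a sketch, the following hypothetical class adopts the AVSpeechSynthesizerDelegate protocol and logs the start and end of each utterance. Assign an instance of it to the synthesizer’s delegate property, and keep a strong reference to it, because the delegate is not retained.

#import <AVFoundation/AVFoundation.h>

// Hypothetical observer that logs significant speech-synthesis events.
@interface SpeechProgressLogger : NSObject <AVSpeechSynthesizerDelegate>
@end

@implementation SpeechProgressLogger

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
  didStartSpeechUtterance:(AVSpeechUtterance *)utterance {
    NSLog(@"Started speaking: %@", utterance.speechString);
}

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
 didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    NSLog(@"Finished speaking: %@", utterance.speechString);
}

@end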

Tasks

Synthesizing Speech

  • speakUtterance:
  • speaking (property)
  • paused (property)

Controlling Speech Synthesis

  • continueSpeaking
  • pauseSpeakingAtBoundary:
  • stopSpeakingAtBoundary:

Managing the Delegate

  • delegate (property)

Properties

delegate

The delegate object for the speech synthesizer.

@property(nonatomic, assign) id<AVSpeechSynthesizerDelegate> delegate
Discussion

Messages in the AVSpeechSynthesizerDelegate protocol are sent to the delegate as significant events occur during speech synthesis.

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

paused

A Boolean value that indicates whether speech has been paused. (read-only)

@property(nonatomic, readonly, getter=isPaused) BOOL paused
Discussion

Returns YES if the synthesizer has begun speaking an utterance and was paused using pauseSpeakingAtBoundary:; NO otherwise.

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

speaking

A Boolean value that indicates whether the synthesizer is speaking. (read-only)

@property(nonatomic, readonly, getter=isSpeaking) BOOL speaking
Discussion

Returns YES if the synthesizer is speaking or has utterances enqueued to speak, even if it is currently paused. Returns NO if the synthesizer has finished speaking all utterances in its queue or if it has not yet been given an utterance to speak.

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

Instance Methods

continueSpeaking

Continues speech from the point at which it left off.

- (BOOL)continueSpeaking
Return Value

YES if speech has continued, or NO otherwise.

Discussion

This method has an effect only if the synthesizer is paused.

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

pauseSpeakingAtBoundary:

Pauses speech at the specified boundary constraint.

- (BOOL)pauseSpeakingAtBoundary:(AVSpeechBoundary)boundary
Parameters
boundary

A constant describing whether speech should pause immediately or only after finishing the word currently being spoken.

Return Value

YES if speech has paused, or NO otherwise.

Discussion

The boundary parameter also affects the manner in which the synthesizer, once paused, continues speech upon a call to continueSpeaking. If paused with boundary constraint AVSpeechBoundaryImmediate, speech continues from exactly the point at which it was paused, even if that point occurred in the middle of pronouncing a word. If paused with AVSpeechBoundaryWord, speech continues from the word following the word on which it was paused.
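
For example, a brief sketch of pausing at a word boundary and later resuming, assuming synthesizer is an existing AVSpeechSynthesizer instance that is currently speaking:

// Pause after the word being spoken finishes; returns NO if nothing is speaking.
if ([synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryWord]) {
    // ...sometime later, resume from the word after the one that was paused.
    [synthesizer continueSpeaking];
}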

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

speakUtterance:

Enqueues an utterance to be spoken.

- (void)speakUtterance:(AVSpeechUtterance *)utterance
Parameters
utterance

An AVSpeechUtterance object containing text to be spoken.

Discussion

The AVSpeechUtterance object not only contains the text to be spoken, but also parameters controlling speech synthesis such as voice, pitch, and delays between utterances.

Calling this method adds the utterance to the synthesizer’s queue; utterances are spoken in the order in which they are added. If the synthesizer is not currently speaking, the utterance is spoken immediately. Attempting to enqueue the same AVSpeechUtterance instance multiple times throws an exception.
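
An illustrative sketch of enqueuing two utterances follows; the strings and the half-second delay are assumptions for the example, and synthesizer is an existing AVSpeechSynthesizer instance.

AVSpeechUtterance *first =
    [AVSpeechUtterance speechUtteranceWithString:@"First sentence."];
AVSpeechUtterance *second =
    [AVSpeechUtterance speechUtteranceWithString:@"Second sentence."];
second.preUtteranceDelay = 0.5;  // wait half a second before the second utterance

// Spoken in order: the first begins immediately if the queue is empty,
// and the second follows once the first has finished.
[synthesizer speakUtterance:first];
[synthesizer speakUtterance:second];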

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

stopSpeakingAtBoundary:

Stops all speech at the specified boundary constraint.

- (BOOL)stopSpeakingAtBoundary:(AVSpeechBoundary)boundary
Parameters
boundary

A constant describing whether speech should stop immediately or only after finishing the word currently being spoken.

Return Value

YES if speech has stopped, or NO otherwise.

Discussion

Stopping the synthesizer cancels any further speech; in contrast with pausing, stopped speech cannot be resumed from the point at which it left off. Any utterances yet to be spoken are removed from the synthesizer’s queue.
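
For example, a sketch that immediately cancels all current and pending speech, assuming synthesizer is an existing AVSpeechSynthesizer instance:

// Stops mid-word and empties the queue; the speech cannot be resumed.
if (synthesizer.isSpeaking) {
    [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}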

Availability
  • Available in iOS 7.0 and later.
Declared In
AVSpeechSynthesis.h

Constants

AVSpeechBoundary

Constraints describing when speech may be paused or stopped.

typedef enum : NSInteger {
   AVSpeechBoundaryImmediate,
   AVSpeechBoundaryWord
} AVSpeechBoundary;
Constants
AVSpeechBoundaryImmediate

Indicates that speech should pause or stop immediately.

Available in iOS 7.0 and later.

Declared in AVSpeechSynthesis.h.

AVSpeechBoundaryWord

Indicates that speech should pause or stop after the word currently being spoken.

Available in iOS 7.0 and later.

Declared in AVSpeechSynthesis.h.