Class

NSSpeechSynthesizer

The Cocoa interface to speech synthesis in macOS.

Declaration

@interface NSSpeechSynthesizer : NSObject

Overview

Speech synthesis, also called text-to-speech (TTS), parses text and converts it into audible speech. It offers a concurrent feedback mode that can be used in concert with or in place of traditional visual and aural notifications. For example, your application can use a speech synthesizer object to “pronounce” the text of important alert dialogs. Synthesized speech has several advantages. It can provide urgent information to users without forcing them to shift attention from their current task. And because speech doesn’t rely on visual elements for meaning, it is a crucial technology for users with vision or attention disabilities.

In addition, synthesized speech can help save system resources. Because sound samples can take up large amounts of room on disk, using text in place of sampled sound is extremely efficient, and so a multimedia application might use an NSSpeechSynthesizer object to provide a narration of a QuickTime movie instead of including sampled-sound data on a movie track.

When you create an NSSpeechSynthesizer instance using the default initializer (init), the class uses the default voice selected in System Preferences > Speech. Alternatively, you can select a specific voice for an NSSpeechSynthesizer instance by initializing it with initWithVoice:. To begin synthesis, send either startSpeakingString: or startSpeakingString:toURL: to the instance. The former generates speech through the system’s default sound output device; the latter saves the generated speech to a file. If you wish to be notified when the current speech concludes, set the delegate property and implement the delegate method speechSynthesizer:didFinishSpeaking:.
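
For example, a minimal sketch (in Objective-C, assuming ARC and an AppKit target; the AlertSpeaker class and speakAlert: method are hypothetical names used for illustration) might create a synthesizer with the default voice, speak a string through the default output device, and receive the delegate callback when speech concludes:

#import <AppKit/AppKit.h>

@interface AlertSpeaker : NSObject <NSSpeechSynthesizerDelegate>
@property (strong) NSSpeechSynthesizer *synthesizer;
@end

@implementation AlertSpeaker

- (instancetype)init {
    self = [super init];
    if (self) {
        // Passing nil selects the default voice chosen in System Preferences > Speech.
        _synthesizer = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
        _synthesizer.delegate = self;
    }
    return self;
}

- (void)speakAlert:(NSString *)message {
    // Generates speech through the system’s default sound output device.
    [self.synthesizer startSpeakingString:message];
}

- (void)speechSynthesizer:(NSSpeechSynthesizer *)sender
        didFinishSpeaking:(BOOL)finishedSpeaking {
    // finishedSpeaking is YES when the text was spoken to completion,
    // NO when speech was stopped before reaching the end.
    NSLog(@"Finished speaking: %d", finishedSpeaking);
}

@end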

Speech synthesis is just one of the macOS speech technologies. The speech recognizer technology allows applications to “listen to” text spoken in U.S. English; the NSSpeechRecognizer class is the Cocoa interface to this technology. Both technologies provide benefits for all users, and are particularly useful to those users who have difficulties seeing the screen or using the mouse and keyboard.

Speech Feedback Window

The speech feedback window (Figure 1) displays the text recognized from the user’s speech and the text from which an NSSpeechSynthesizer object synthesizes speech. Using the feedback window makes the spoken exchange feel more natural and helps the user understand the synthesized speech.

Figure 1: Speech feedback window

For example, your application may use an NSSpeechRecognizer object to listen for the command “Play some music.” When it recognizes this command, your application might then respond by speaking “Which artist?” using a speech synthesizer.

When usesFeedbackWindow is YES, the speech synthesizer uses the feedback window if it is visible; the user controls the window’s visibility in System Preferences > Speech.
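
A minimal sketch of this exchange, in Objective-C (the MusicListener class name is hypothetical, and the command and reply strings are purely illustrative), might pair an NSSpeechRecognizer with an NSSpeechSynthesizer and enable the feedback window:

#import <AppKit/AppKit.h>

@interface MusicListener : NSObject <NSSpeechRecognizerDelegate>
@property (strong) NSSpeechRecognizer *recognizer;
@property (strong) NSSpeechSynthesizer *synthesizer;
@end

@implementation MusicListener

- (instancetype)init {
    self = [super init];
    if (self) {
        _recognizer = [[NSSpeechRecognizer alloc] init];
        _recognizer.commands = @[ @"Play some music" ];
        _recognizer.delegate = self;

        _synthesizer = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
        // Let the synthesizer use the feedback window when it is visible.
        _synthesizer.usesFeedbackWindow = YES;
    }
    return self;
}

- (void)startListening {
    [self.recognizer startListening];
}

- (void)speechRecognizer:(NSSpeechRecognizer *)sender
     didRecognizeCommand:(NSString *)command {
    if ([command isEqualToString:@"Play some music"]) {
        // Respond to the recognized command with synthesized speech.
        [self.synthesizer startSpeakingString:@"Which artist?"];
    }
}

@end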

Topics

Creating a Speech Synthesizer

- initWithVoice:

Initializes the receiver with a voice.

Customizing the Speech Synthesizer Behavior

delegate

The synthesizer’s delegate.

NSSpeechSynthesizerDelegate

A set of optional methods implemented by delegates of NSSpeechSynthesizer objects.

Configuring Speech Synthesizers

usesFeedbackWindow

Indicates whether the receiver uses the speech feedback window.

- voice

Returns the identifier of the receiver’s current voice.

- setVoice:

Sets the receiver’s current voice.

rate

The synthesizer’s speaking rate (words per minute).

volume

The synthesizer’s speaking volume.

Configuring Speech Attributes

- addSpeechDictionary:

Registers the given speech dictionary with the receiver.

NSSpeechDictionaryKey

These constants identify key-value pairs used to add vocabulary to the dictionary using addSpeechDictionary:.

- objectForProperty:error:

Provides the value of a receiver’s property.

- setObject:forProperty:error:

Specifies the value of a receiver’s property.

NSSpeechPropertyKey

These constants are used with setObject:forProperty:error: and objectForProperty:error: to get or set the characteristics of a synthesizer.

NSSpeechCommandDelimiterKey

Keys for the command delimiters.

NSSpeechErrorKey

Keys that identify errors that may occur during speech synthesis.

NSSpeechMode

Keys for the speaking mode.

NSSpeechPhonemeInfoKey

Keys for the speech phoneme information.

NSSpeechStatusKey

Keys for the speech synthesizer status.

NSSpeechSynthesizerInfoKey

Keys for the speech synthesizer information.

NSVoiceGenderName

The following constants define voice gender attributes, which are the allowable values of the NSVoiceGender key returned by attributesForVoice:.

Getting Speech Synthesizer Information

availableVoices

Provides the identifiers of the voices available on the system.

+ attributesForVoice:

Provides the attribute dictionary of a voice.

defaultVoice

Provides the identifier of the default voice.

NSVoiceAttributeKey

The following constants are keys for the dictionary returned by attributesForVoice:.

Getting Speech State

anyApplicationSpeaking

A Boolean value indicating whether any application is currently speaking through the sound output device.

Synthesizing Speech

speaking

Indicates whether the receiver is currently generating synthesized speech.

- startSpeakingString:

Begins speaking synthesized text through the system’s default sound output device.

- startSpeakingString:toURL:

Begins synthesizing text into a sound (AIFF) file.

- pauseSpeakingAtBoundary:

Pauses synthesis in progress at a given boundary.

- continueSpeaking

Resumes synthesis.

- stopSpeaking

Stops synthesis in progress.

- stopSpeakingAtBoundary:

Stops synthesis in progress at a given boundary.

NSSpeechBoundary

These constants are used to indicate where speech should be stopped and paused. See pauseSpeakingAtBoundary: and stopSpeakingAtBoundary:.

Getting Phonemes

- phonemesFromText:

Provides the phoneme symbols generated by the given text.

Relationships

Inherits From

NSObject

See Also

Speech

NSSpeechRecognizer

The Cocoa interface to speech recognition in macOS.