The Cocoa interface to speech recognition in macOS.
- macOS 10.3+
NSSpeechRecognizer provides a “command and control” style of voice recognition system, where the command phrases must be defined prior to listening, in contrast to a dictation system where the recognized text is unconstrained. Through an NSSpeechRecognizer instance, Cocoa apps can use the speech recognition engine built into macOS to recognize spoken commands. With speech recognition, users can accomplish complex tasks with spoken commands such as “Move pawn B2 to B4” and “Take back move.”
The NSSpeechRecognizer class has a property that lets you specify which spoken words should be recognized as commands (commands) and methods that let you start and stop listening (startListening() and stopListening()). When the speech recognition facility recognizes one of the designated commands, NSSpeechRecognizer invokes the delegate method speechRecognizer(_:didRecognizeCommand:), allowing the delegate to perform the command.
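The command-and-control flow can be sketched as follows. The NSSpeechRecognizer calls are the ones described above; the CommandListener class name and the command phrases are illustrative, not part of the API:

```swift
import AppKit  // NSSpeechRecognizer is part of AppKit

// A minimal sketch of a command-and-control listener.
// `CommandListener` and the command phrases are hypothetical.
final class CommandListener: NSObject, NSSpeechRecognizerDelegate {
    // The initializer is failable; it returns nil if speech
    // recognition is unavailable on this system.
    let recognizer = NSSpeechRecognizer()

    override init() {
        super.init()
        // Command phrases must be defined before listening begins.
        recognizer?.commands = ["Move pawn B2 to B4", "Take back move"]
        recognizer?.delegate = self
    }

    func start() { recognizer?.startListening() }
    func stop() { recognizer?.stopListening() }

    // Invoked when one of the designated commands is recognized.
    func speechRecognizer(_ sender: NSSpeechRecognizer,
                          didRecognizeCommand command: String) {
        print("Recognized: \(command)")  // perform the command here
    }
}
```

Note that recognition is asynchronous: after startListening(), the delegate callback arrives on the main thread whenever the engine matches one of the designated phrases.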
Speech recognition is just one of the macOS speech technologies. The speech synthesis technology allows applications to “pronounce” written text in U.S. English and over 25 other languages, with a number of different voices and dialects for each language (NSSpeechSynthesizer is the Cocoa interface to this technology). Both speech technologies provide benefits for all users, and are particularly useful to users who have difficulty seeing the screen or using the mouse and keyboard. By incorporating speech into your application, you can provide a concurrent mode of interaction for your users: in macOS, your software can accept input and provide output without requiring users to change their working context.
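As a counterpart sketch, the synthesis side can be exercised through NSSpeechSynthesizer; the spoken phrase here is illustrative:

```swift
import AppKit  // NSSpeechSynthesizer is part of AppKit

// Speak a phrase using the default system voice.
// startSpeaking(_:) returns true if synthesis began successfully.
let synthesizer = NSSpeechSynthesizer()
synthesizer.startSpeaking("Move pawn B2 to B4")
```

Because synthesis is asynchronous, a real application would typically adopt NSSpeechSynthesizerDelegate to be notified when speaking finishes.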