Challenge: Build an app using built-in Sound Classification


With Sound Classification, you can create experiences for camera, video, productivity, and game apps on all Apple platforms — and for this challenge, we’re inviting you to explore a sample project and build your own.

When you use the built-in sound classifier in the Sound Analysis framework, you have access to over 300 sound classes trained on a massive amount of data to ensure great model performance. The model doesn't predict just a single sound at a time: It returns multiple labels, each with its own confidence score, so you can understand all the sounds being heard at a given moment.
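You can see those per-label confidences by attaching a results observer to an analyzer running the built-in classifier. The sketch below, which assumes such an analyzer is already set up, prints the top labels from each result rather than a single winner:

```swift
import SoundAnalysis

// A minimal results observer. Each SNClassificationResult carries
// every label the model recognizes, each with its own confidence.
class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult else { return }
        // Inspect the top few labels instead of just the single best one.
        for classification in result.classifications.prefix(5) {
            print("\(classification.identifier): \(classification.confidence)")
        }
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("Analysis failed: \(error.localizedDescription)")
    }
}
```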

The Sound Analysis API provides a full list of all trained classes; you decide which ones you care about and ignore the rest. You could use the speech detector, for example, to identify when someone has finished speaking. Going further, you have control over the sampling window for each prediction and can apply sound-specific confidence thresholds to greatly improve the real-world accuracy of the features you create.
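Those pieces come together on the request itself. As a sketch, the built-in classifier's labels are exposed through `knownClassifications`, the window through `windowDuration`, and thresholding is something you apply yourself in the observer callback (the label names and the 0.8 cutoff below are illustrative, not values from the sample project):

```swift
import CoreMedia
import SoundAnalysis

// Create a request for the built-in (version 1) classifier.
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)

// The request exposes every label the model was trained on;
// pick out only the ones your feature cares about.
let allLabels = request.knownClassifications
let labelsOfInterest: Set<String> = ["speech", "laughter"]

// Widen the analysis window for sounds that need more context.
// (Supported durations are described by windowDurationConstraint.)
request.windowDuration = CMTimeMakeWithSeconds(1.5, preferredTimescale: 48_000)

// Later, in your SNResultsObserving callback, keep only confident
// matches for the labels you chose. Tune the threshold per sound.
func relevantSounds(in result: SNClassificationResult) -> [SNClassification] {
    result.classifications.filter {
        labelsOfInterest.contains($0.identifier) && $0.confidence > 0.8
    }
}
```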

Begin the challenge

For this challenge, we invite you to brainstorm how you could incorporate Sound Classification into an existing app or a brand-new app idea. You can use sound classifiers on all Apple platforms — Mac, iPhone, iPad, Apple Watch, or Apple TV — allowing you to explore a variety of ideas and situations. For example, a camera app could enable people to quickly locate the precise moments in personal videos where things like laughter occurred, or listen for specific sounds during video capture to trigger special effects and overlays in the camera frame. A video editing or productivity app could leverage sound classification to help someone quickly organize media assets based on the sounds in them. Or an interactive game could leverage recognized sounds in the environment as triggers for unlocking special modes where characters mimic what they're hearing.

We’ve provided the “Classifying live audio input with a built-in sound classifier” project to help you get started. From here, we invite you to come up with an app of your own that uses the microphone or another audio source to listen to and identify sounds. What will you make? Show off the creative ways you can apply this built-in capability.
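As a starting point for listening through the microphone, here's a rough sketch of wiring live audio into the built-in classifier, modeled loosely on the sample project. The queue label and the `ResultsObserver` type are placeholders for your own `SNResultsObserving` implementation:

```swift
import AVFoundation
import SoundAnalysis

// Feed microphone audio from AVAudioEngine into a stream analyzer.
let audioEngine = AVAudioEngine()
let inputFormat = audioEngine.inputNode.outputFormat(forBus: 0)
let analyzer = SNAudioStreamAnalyzer(format: inputFormat)

// Attach the built-in classifier with your observer.
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
let observer = ResultsObserver() // your SNResultsObserving implementation
try analyzer.add(request, withObserver: observer)

// Analyze off the audio thread to keep the tap callback light.
let analysisQueue = DispatchQueue(label: "com.example.SoundAnalysis")
audioEngine.inputNode.installTap(onBus: 0,
                                 bufferSize: 8192,
                                 format: inputFormat) { buffer, time in
    analysisQueue.async {
        analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
}
try audioEngine.start()
```

On iOS, remember that microphone access also requires an `NSMicrophoneUsageDescription` entry and an active audio session.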

  • WWDC21

Discover built-in sound classification in SoundAnalysis

Explore how you can use the Sound Analysis framework in your app to detect and classify discrete sounds from any audio source — including live sounds from a microphone or from a video or audio file — and identify precisely the moment when that sound occurs. Learn how the built-in sound...

Classifying Live Audio Input with a Built-in Sound Classifier

Need support, or want help from the community as you explore Sound Classification? You can share your progress in the Developer Forums.

Visit the Apple Developer Forums


Sound Analysis

Read the WWDC21 Challenges Terms and Conditions