Sound Analysis


Analyze streamed and file-based audio to classify it as a particular type using Sound Analysis.

Posts under the Sound Analysis tag

12 Posts

Urgent Issue with SoundAnalysis in iOS 18 - Critical Background Permissions Error
We are experiencing a major issue with the native .version1 classifier of the SoundAnalysis framework in iOS 18, which has left our users without recordings. Our core feature relies heavily on sound analysis in the background, and it worked flawlessly in prior iOS versions. In iOS 18, however, sound analysis stops working in the background and triggers a critical warning.

Details of the issue:
- We use SoundAnalysis to analyze background sounds and have enabled the necessary background permissions.
- We are using the latest Xcode.
- A warning now appears, and sound analysis fails in the background.

Warning message:

```
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background)
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
```

We urgently need guidance or a fix, as our application’s main functionality is severely impacted by this background permission error. Please let us know the next steps, or whether this is a known issue in iOS 18.
Replies: 10 · Boosts: 10 · Views: 840 · Activity: 5d
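For reference, a minimal sketch of the kind of pipeline this post describes: microphone buffers fed into an SNAudioStreamAnalyzer running the built-in .version1 classifier. The observer class name is hypothetical, and the sketch does not work around the iOS 18 background-GPU restriction; it only shows where such a failure would surface.

```swift
import AVFoundation
import SoundAnalysis

// Hypothetical observer type; prints the top classification per window.
final class ClassifierObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Detected \(top.identifier) (confidence \(top.confidence))")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        // In the scenario described above, the background GPU-permission
        // failure would surface here.
        print("Analysis failed: \(error)")
    }
}

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Built-in classifier, the same .version1 identifier the post refers to.
let analyzer = SNAudioStreamAnalyzer(format: format)
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
let observer = ClassifierObserver()
try analyzer.add(request, withObserver: observer)

// Feed microphone buffers into the analyzer.
input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
    analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
}
try engine.start()
```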
Coordination of Video Capture and Audio Engine Start in iOS Development
Question: When implementing simultaneous video capture and audio processing in an iOS app, does the order of starting these components matter, or can they be initiated in any sequence?

I have an actor responsible for initiating video capture using the setCaptureMode function. In this actor, I also call startAudioEngine to begin the audio engine and register a resultObserver. While the audio engine starts successfully, I notice that the resultObserver is not invoked when startAudioEngine is called synchronously. However, it works correctly when I wrap the call in a Task.

Could you please explain why the synchronous call to startAudioEngine might be blocking the invocation of the resultObserver? What would be the best practice for ensuring both components work effectively together? Additionally, if I were to avoid using Task, what approach would be required? Lastly, is startAudioEngine effective from the start time of the video capture (00:00)?

Platform: Xcode 16, Swift 6, iOS 18

References:
- Classifying Sounds in an Audio Stream – in my case, the analyzeAudio() method is not invoked.
- Setting Up a Capture Session – here, the focus is on video capture.
- Classifying Sounds in an Audio File

Code snippet (for further detail; setVideoCaptureMode() surfaces the problem):

```swift
// Ensures all operations happen off of the `@MainActor`.
actor CaptureService {
    ...
    nonisolated private let resultsObserver1 = ResultsObserver1()
    ...
    private func setUpSession() throws { ... }
    ...
    private func setVideoCaptureMode() throws {
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        /* Works fine (analyzeAudio is printed):
        Task { self.resultsObserver1.startAudioEngine() }
        */
        self.resultsObserver1.startAudioEngine() // Does not work - analyzeAudio not printed

        captureSession.sessionPreset = .high
        try addOutput(movieCapture.output)
        if isHDRVideoEnabled { setHDRVideoEnabled(true) }
        updateCaptureCapabilities()
    }
}
```
Replies: 5 · Boosts: 0 · Views: 364 · Activity: Oct ’24
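A hedged aside on the Task question: inside an actor method, an unstructured Task { } inherits the actor's isolation, so its body runs only after the current method finishes, rather than in the middle of session configuration. A self-contained toy (hypothetical names, no AVFoundation) illustrating that ordering:

```swift
// Toy model of the ordering in the post: Task {} inherits the actor's
// isolation, so its body runs only after the enclosing method returns.
actor CaptureService {
    func setVideoCaptureMode() {
        print("begin configuration")
        Task { self.startAudioEngine() } // scheduled, runs later
        print("commit configuration")
    }

    func startAudioEngine() {
        print("audio engine started")
    }
}

@main
struct Demo {
    static func main() async {
        await CaptureService().setVideoCaptureMode()
        // Give the unstructured task time to run before the process exits.
        try? await Task.sleep(nanoseconds: 100_000_000)
        // Prints: begin configuration, commit configuration, audio engine started
    }
}
```

Whether this fully explains the resultObserver behavior is a judgment call; the audio session's interaction with the capture session could also be a factor.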
iOS sound recognition: to what extent can developers access Apple's built-in Sound Recognition?
Hi, I am currently developing an app whose core functionality relies on detecting user laughter in the background. In our early stages we noticed Apple's built-in Sound Recognition feature. At its core, I am guessing that Sound Recognition requires permission from the user to access the microphone 24/7.

Currently, using the conventional avenue of background audio recording, a yellow indicator appears at the top of the iPhone screen to show that recording is in progress; this is not the case for Sound Recognition. If all sound processing/recognition is kept on-device, is there any way to avoid the yellow dot and achieve laughter detection in a way similar to how Apple's Sound Recognition does it?

From the Sound Recognition settings interface accessible to the user in the Settings app, the only detectable "people" sounds are baby crying, coughing, and shouting. Is it also possible to add laughter to this list somehow? Thank you in advance.
Replies: 2 · Boosts: 0 · Views: 450 · Activity: Aug ’24
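One concrete thing an app can check (a sketch, not a way to extend the system feature): the built-in classifier behind SNClassifySoundRequest publishes its label list, so you can test at runtime whether a laughter-related class is available. The Settings list for system Sound Recognition is not extensible by third-party apps, and as far as we know the microphone indicator cannot be suppressed for third-party recording.

```swift
import SoundAnalysis

// Query the built-in classifier's label set for laughter-related classes.
do {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let laughterLabels = request.knownClassifications.filter {
        $0.lowercased().contains("laugh")
    }
    print("Laughter-related labels: \(laughterLabels)")
} catch {
    print("Could not create classify request: \(error)")
}
```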
AccessibilityUIServer has microphone locked
Just installed iOS 18 Beta 3. I am seeing AccessibilityUIServer using the microphone, and this is causing no notification sounds, the inability to use Siri by voice, and volume controls being grayed out. If I start to play anything with sound, AccessibilityUIServer releases the microphone and I am able to use the app. Calls still work, since AccessibilityUIServer will release the microphone and the phone will ring. Feedback ID is FB14241838.
Replies: 12 · Boosts: 9 · Views: 4.6k · Activity: Sep ’24
In iOS 18 beta, the SoundAnalysis framework reports an error when the iPhone is locked
I use SoundAnalysis to analyze background sounds and have enabled background permissions. It worked well on previous iOS versions, but in the new iOS 18 beta a warning appears and sound analysis is stopped.

Warning list:

```
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background)
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
```
Replies: 14 · Boosts: 7 · Views: 1.1k · Activity: 1w
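A hedged workaround sketch, not a confirmed fix: since the failure reportedly occurs when the classifier submits GPU work while the device is locked, one mitigation is to stop feeding the analyzer around lock events and resume on unlock. The engine wiring and resume hook below are hypothetical stand-ins for the app's existing pipeline.

```swift
import UIKit
import AVFoundation

// Stops analysis while the device is locked by removing the input tap, and
// resumes it (via a caller-supplied hook) once protected data is available.
final class LockStateGate {
    private let engine: AVAudioEngine
    private let resume: () -> Void
    private var tokens: [NSObjectProtocol] = []

    init(engine: AVAudioEngine, resume: @escaping () -> Void) {
        self.engine = engine
        self.resume = resume

        tokens.append(NotificationCenter.default.addObserver(
            forName: UIApplication.protectedDataWillBecomeUnavailableNotification,
            object: nil, queue: .main
        ) { [weak self] _ in
            // Device is about to lock: stop buffers reaching the analyzer
            // before Core ML attempts GPU work that is not permitted.
            self?.engine.inputNode.removeTap(onBus: 0)
        })

        tokens.append(NotificationCenter.default.addObserver(
            forName: UIApplication.protectedDataDidBecomeAvailableNotification,
            object: nil, queue: .main
        ) { [weak self] _ in
            // Device unlocked: reinstall the tap and resume analysis.
            self?.resume()
        })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}
```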
Crash due to SNMLModelFactory "try!" expression
Hello there,

We currently have a crash in prod when executing the following line:

```swift
let classificationRequest = try SNClassifySoundRequest(classifierIdentifier: .version1)
```

It appears to happen only on iOS 17+, and only when regaining audio focus after an interruption while in a background state. We are aware this call probably fails because it happens from a background state; however, I would then expect SNClassifySoundRequest to throw some kind of error, since its initializer already throws. If it is possible for the call to fail under certain circumstances, could SNMLModelFactory throw an error instead of using try!?

Full trace below:

```
SoundAnalysis/SNMLModelFactory.swift:112: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/System/Library/Frameworks/SoundAnalysis.framework/SNSoundClassifierVersion1Model.mlmodelc/model1/model.espresso.net' with error code: -1." UserInfo={NSLocalizedDescription=Failed to build the model execution plan using a model architecture file '/System/Library/Frameworks/SoundAnalysis.framework/SNSoundClassifierVersion1Model.mlmodelc/model1/model.espresso.net' with error code: -1.}
```
Replies: 1 · Boosts: 3 · Views: 707 · Activity: May ’24
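A hedged mitigation sketch while the try! remains in the framework: a caller cannot catch a fatalError raised inside SoundAnalysis, so one option is to avoid the crash path by deferring request creation until the app is active. The helper below is illustrative, not a confirmed fix.

```swift
import UIKit
import SoundAnalysis

// Defers SNClassifySoundRequest creation until the app is active, since the
// reported crash happens when the model loads from a background state.
// Call from the main thread.
func makeClassifyRequestWhenActive(
    completion: @escaping (SNClassifySoundRequest?) -> Void
) {
    let create = {
        completion(try? SNClassifySoundRequest(classifierIdentifier: .version1))
    }
    if UIApplication.shared.applicationState == .active {
        create()
    } else {
        // Wait for the next foreground activation before touching the model.
        var token: NSObjectProtocol?
        token = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main
        ) { _ in
            if let token { NotificationCenter.default.removeObserver(token) }
            create()
        }
    }
}
```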
Better Results with Separate Sound Classifier Models?
I'm working with MLSoundClassifier to listen for two different sounds in a live audio stream. The team has been debating whether it is better to train two separate models, one for each sound, or one model on both sounds. Has anyone had experience with this? Some of us believe we have gotten better results with the separate models, and some with a single model trained on both sounds. Thank you!
Replies: 0 · Boosts: 0 · Views: 650 · Activity: Feb ’24
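Whichever way the training question goes, one structural point is worth a sketch: a single SNAudioStreamAnalyzer can run multiple classify requests over the same stream, so two separate Create ML models do not force two audio pipelines. The model and observer parameters below are stand-ins for your own trained models.

```swift
import CoreML
import SoundAnalysis

// Attach two custom-model classify requests to one analyzer; every buffer
// passed to analyzer.analyze(_:atAudioFramePosition:) is then evaluated by
// both models independently.
func addRequests(
    to analyzer: SNAudioStreamAnalyzer,
    modelA: MLModel,
    modelB: MLModel,
    observerA: SNResultsObserving,
    observerB: SNResultsObserving
) throws {
    let requestA = try SNClassifySoundRequest(mlModel: modelA)
    let requestB = try SNClassifySoundRequest(mlModel: modelB)
    try analyzer.add(requestA, withObserver: observerA)
    try analyzer.add(requestB, withObserver: observerB)
}
```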
Poor Quality 2021 MBP Speakers
I've only been using this late 2021 MBP 16 for nearly 2 years, and now the speaker is producing a crackling sound. Upon inquiring about repairs, customer service informed me that it would cost $728 to replace the speaker, which is a third of the price of the laptop itself. It's absolutely absurd that a $2200 laptop's speaker would fail within such a short period without any external damage. The repair cost being a third of the laptop's price is outrageous. I intend to initiate a petition in the US, hoping to connect with others experiencing the same problem. This is indicative of a subpar product, and customers shouldn't bear the burden of Apple's shortcomings. I plan to share my grievances on various social media platforms and if the issue persists, I will escalate it to the media for further exposure.
Replies: 2 · Boosts: 0 · Views: 619 · Activity: Feb ’24
ASIO Driver
Hi, I have been trying for days to connect my microphone in Reason Studio without any outcome. I was asked to download an ASIO driver on my Mac, but I found that I have an IAC driver. I need help downloading the ASIO driver and wish to know whether there will be a problem running it alongside the IAC driver. I also, without realizing it, just upgraded from Ventura to Sonoma. I am using an audio interface (Focusrite Scarlett Solo) to connect to the Reason application, and my mic is a Rode NT1-A; I can get sound but cannot record. I would be very grateful for any help from here. Thanks
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Dec ’23
Access to sound classification for app running in background
Can access to SoundAnalysis (the sound classifier built into the next versions of macOS, iOS, and watchOS) be provided to my app running in the background on iPhone or Apple Watch? I want to monitor local sounds from Apple Watch and iPhone and take remote action on out-of-band data (i.e., send an alert to a caregiver if the coughing rate is too high, or if someone has been knocking on the door for more than a minute, etc.).
Replies: 2 · Boosts: 0 · Views: 571 · Activity: 2w
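On the monitoring idea itself, a minimal sketch: a results observer that counts confident cough detections in a sliding window and fires a callback when the rate crosses a threshold. The label string, confidence cutoff, window length, and alert hook are illustrative assumptions, not Sound Analysis guarantees.

```swift
import Foundation
import SoundAnalysis

// Tracks timestamps of confident cough detections and fires a callback when
// too many land inside the sliding window.
final class CoughRateObserver: NSObject, SNResultsObserving {
    private var detections: [Date] = []
    private let window: TimeInterval = 60   // look back one minute (assumed)
    private let threshold = 5               // coughs per window (assumed)
    var onHighCoughRate: (() -> Void)?

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let cough = result.classification(forIdentifier: "coughing"),
              cough.confidence > 0.7 else { return }

        let now = Date()
        detections.append(now)
        detections.removeAll { now.timeIntervalSince($0) > window }
        if detections.count >= threshold {
            onHighCoughRate?()   // e.g. notify a caregiver
            detections.removeAll()
        }
    }
}
```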