Integrate music and other audio content into your apps.

Posts under the Audio tag

84 Posts
Post not yet marked as solved
2 Replies
429 Views
I use OpenAL to play music. It works on iOS 15.x phones, but on iOS 16.x there is no sound and no error is reported. Can someone help me?
Post not yet marked as solved
0 Replies
899 Views
With the numerous audio channels available in Dolby Atmos-encoded audio files, is it possible to address a specific channel and increase its volume when the listener makes a particular head movement? For example, toggling the right surround bass channel on and off when sweeping your nose quickly from center to your right shoulder.
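Per-channel gain control over a Dolby Atmos mix is not something I can confirm a public API for, but the head-movement half of the idea can be sketched. A minimal sketch, assuming AirPods that support head tracking and an NSMotionUsageDescription entry in Info.plist; the class name, the rad/s threshold, and the onSweepRight callback are hypothetical illustration choices:

import CoreMotion

// Sketch: detect a quick head sweep from center toward the right shoulder
// using headphone head tracking. The threshold and sign convention are
// assumptions that would need tuning against real motion data.
final class HeadSweepDetector {
    private let motionManager = CMHeadphoneMotionManager()
    var onSweepRight: (() -> Void)?

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let motion = motion else { return }
            // A fast yaw rotation toward the right shoulder shows up as a
            // large rotation rate about the vertical axis (sign assumed).
            if motion.rotationRate.z < -3.0 {
                self?.onSweepRight?()
            }
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}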
Post not yet marked as solved
1 Reply
553 Views
I'm trying to match microphone audio against a custom catalog that I created with ShazamKit. What code do I need to extract and display the "matchedMediaItem.artist" information on my iPhone screen after finding a match against an entry of my custom-built catalog? I am a beginner.
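A minimal sketch of the streaming-match flow, assuming an SHCustomCatalog is already loaded; the class name and the way the artist string is published to the UI are hypothetical choices:

import AVFAudio
import Combine
import ShazamKit

// Sketch: match microphone audio against a custom catalog and publish the
// matched artist for display. Feed buffers from e.g. an AVAudioEngine input tap.
final class CatalogMatcher: NSObject, ObservableObject, SHSessionDelegate {
    @Published var artistName: String = ""
    private let session: SHSession

    init(catalog: SHCustomCatalog) {
        session = SHSession(catalog: catalog)
        super.init()
        session.delegate = self
    }

    // Call this from the audio tap with each microphone buffer.
    func match(buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        DispatchQueue.main.async {
            // The artist stored with the reference signature in the catalog.
            self.artistName = match.mediaItems.first?.artist ?? "Unknown artist"
        }
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        DispatchQueue.main.async { self.artistName = "No match" }
    }
}

From there, a SwiftUI Text(matcher.artistName), or a UILabel bound to the published property, puts the artist on screen.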
Post not yet marked as solved
0 Replies
342 Views
Does anyone know if there will be a problem with publishing an app that contains music under a CC (royalty-free) license? Will crediting the author be sufficient for the review team?
Post not yet marked as solved
6 Replies
2k Views
Turn on Address Sanitizer in Xcode, use a real device, and put a Test.mp3 file in the Xcode project. The app will then crash when you initialize an AVAudioPlayer with the MP3 file (with a WAV file it works fine). I have filed this in Feedback Assistant: FB12425453.

var player: AVAudioPlayer?

func playSound() {
    if let url = Bundle.main.url(forResource: "Test", withExtension: "mp3") {
        // Crashes under ASan: deallocation of non-allocated memory (a WAV file works)
        self.player = try? AVAudioPlayer(contentsOf: url)
    }
}
Post not yet marked as solved
0 Replies
466 Views
Using the Objective-C framework AVFAudio, I obtain 32-bit PCM data from the device's microphone, and the values are all between -1 and 1 (the actual level of the fixed noise source exceeds 100 decibels). According to the conversion 20 * log10(PCM / 0.00002), the result can never exceed 100 decibels. Does anyone know the reason, or what the problem is? Output: -0.82569194 -0.82774025 -0.83398014 -0.87197787 -0.90468484 -0.9037836 0.9085202
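One hedged observation that may explain the numbers: float PCM from AVFAudio is normalized to ±1.0 full scale, not expressed in pascals, so 20 * log10(PCM / 0.00002) mixes two reference units. A minimal sketch of the conventional dBFS conversion; the calibrationOffset that maps dBFS to an approximate dB SPL is a hypothetical, device-specific value that would need measurement:

import Foundation

// Sketch: convert normalized float PCM samples (±1.0 full scale) to an
// RMS level in dBFS, then shift by a measured calibration offset to
// approximate SPL. The default offset of 120 is purely illustrative.
func rmsLevel(samples: [Float], calibrationOffset: Float = 120.0) -> (dBFS: Float, approxSPL: Float) {
    guard !samples.isEmpty else { return (-.infinity, -.infinity) }
    let meanSquare = samples.reduce(0) { $0 + $1 * $1 } / Float(samples.count)
    let rms = meanSquare.squareRoot()
    // dBFS is relative to full scale (1.0), so it is <= 0 for valid input.
    let dBFS = 20 * log10(max(rms, .leastNormalMagnitude))
    return (dBFS, dBFS + calibrationOffset)
}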
Post not yet marked as solved
1 Reply
394 Views
I'm experiencing a lot of crackling when watching a video or listening to music on my MacBook. I see many reports of this issue but no solution for it. So my question to Apple is: how do I fix this?
Post not yet marked as solved
0 Replies
406 Views
I'm planning to create a music game app similar to Guitar Hero, and I plan to use songs from beginner producers (they don't have any publishers or distributors). If I get their legal consent through a contract to use their songs, does Apple allow music that is not from iTunes, provided I have a legal license from the producer to use the songs in my app? Will Apple allow songs that don't have publishers or distributors, or will the App Review team flag it and cause an issue for me?
Post not yet marked as solved
1 Reply
700 Views
On my iPhone 14 Pro, the mic icon remains active in the Dynamic Island. Every time, a restart is required to make it disappear. After a restart and a few phone calls, it remains stuck in the Dynamic Island again.
Post not yet marked as solved
0 Replies
638 Views
I have a Flutter mobile app, and I'm using the record Flutter package for audio recording. I'm facing an issue when recording audio while the phone is locked. App behavior: first we start the app and connect it to a Bluetooth device. Then the app waits for a trigger of 1 from the connected device. On receiving the trigger, it starts recording while the phone is locked and the app is running in the background. At that point I get this error when AVAudioSession sets the category:

AVAudioSession_iOS.mm:2367 Failed to set category, error: '!int'
Failed to set up audio session: Error Domain=NSOSStatusErrorDomain Code=560557684 "(null)"

My app is for user security purposes, so it needs to record in the background. Let me know how I can achieve this functionality.
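For reference, a minimal native-side sketch of the audio-session setup typically needed for recording that survives the screen locking. It assumes UIBackgroundModes contains "audio", that the session is configured before recording starts, and that the plugin applies no conflicting configuration; the option set is illustrative, not a confirmed fix for the '!int' error:

import AVFAudio

// Sketch: configure the shared audio session for background-capable recording.
func configureRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .mixWithOthers])
    try session.setActive(true)
}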
Post not yet marked as solved
1 Reply
717 Views
I'm developing a WebView app using JavaScript with Ionic. Adding a worklet module appears to work, but when I then try to connect the AudioWorkletProcessor to the AudioContext, iOS gives me errors like the following (the same [assertion] / [ProcessSuspension] pair is logged three times, for PIDs 27071, 27066, and 27072):

2023-07-24 11:35:57.436444+0900 CHeKT[27066:10627891] [assertion] Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}>
2023-07-24 11:35:57.436491+0900 CHeKT[27066:10627891] [ProcessSuspension] 0x1060089f0 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'WebKit Media Playback' for process with PID=27071, error: Error Domain=RBSServiceErrorDomain Code=1 (same entitlement failure as above)

Even with that error, the AudioWorkletProcessor runs, but when I try to access the microphone with getUserMedia(), the AudioWorkletProcessor's sound breaks up like a robot voice. Does AudioWorklet not work on iOS? I need to build two-way audio using AudioWorklet, but it doesn't seem possible on iOS (Android works well). Please let me know if you have any feedback or solutions. Thanks, Bobby.
Post not yet marked as solved
0 Replies
820 Views
Hello developers, we have an issue opening an Apple MPEG-4 audio file that apparently has a correct header but no actual audio data. The file is 594 bytes; it completely freezes the app's main thread and never returns from any of these calls:

NSURL *fileURL = [NSURL fileURLWithPath:filePath];
NSError *error;
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error]; // freezes (call stack below)
AVAudioFile *audioFile = [[AVAudioFile alloc] initForReading:fileURL error:&error]; // freezes
AudioFileID audioFileID;
OSStatus result = AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &audioFileID); // freezes

Pausing in the debugger reveals where it is stuck:

#0 0x00007ff81b7683f9 in MP4BoxParser_Track::GetSampleTableBox() ()
#1 0x00007ff81b76785a in MP4BoxParser_Track::GetInfoFromTrackSubBoxes() ()
#2 0x00007ff81b93fde5 in MP4AudioFile::UseAudioTrack(void*, unsigned int, unsigned int) ()
#3 0x00007ff81b93ab2c in MP4AudioFile::OpenFromDataSource() ()
#4 0x00007ff81b72ee85 in AudioFileObject::Open(__CFURL const*, signed char, int) ()
#5 0x00007ff81b72ed9d in AudioFileObject::DoOpen(__CFURL const*, signed char, int) ()
#6 0x00007ff81b72e1f0 in AudioFileOpenURL ()
#7 0x00007ffa382e8183 in -[AVAudioPlayer initWithContentsOfURL:fileTypeHint:error:] ()

With each of the three calls the call stack is slightly different, but all of them end up stuck forever in MP4BoxParser_Track::GetSampleTableBox(). I'm attaching the offending audio file to the post (just rename it back to .m4a): Audio_21072023_10462282.crash

How can we avoid this and verify that an audio file is openable and playable? Previously, we checked that a file we believe to be audio contains data; if so, we create an AVAudioPlayer with it and check that it returns no errors and that the duration is > 0. This bug breaks that fundamental logic, and for now we have added a hotfix hack that checks whether the data is at least 600 bytes long. How do we solve this correctly when none of the methods above return an error but instead all hang?
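Not a confirmed fix for the parser hang, but one defensive pattern is to probe the file on a background queue with a timeout so the main thread can never be blocked. A sketch under those assumptions; the 2-second timeout and the function name are arbitrary:

import AudioToolbox
import Foundation

// Sketch: probe an audio file off the main thread and give up after a
// timeout, so a parser hang like the one above cannot freeze the app.
func probeAudioFile(at url: URL, timeout: TimeInterval = 2.0) -> Bool {
    let semaphore = DispatchSemaphore(value: 0)
    var openable = false
    DispatchQueue.global(qos: .utility).async {
        var fileID: AudioFileID?
        let status = AudioFileOpenURL(url as CFURL, .readPermission, 0, &fileID)
        if status == noErr, let fileID = fileID {
            AudioFileClose(fileID)
            openable = true
        }
        semaphore.signal()
    }
    // If the open call hangs, we time out; the stuck worker thread is
    // leaked, but the main thread stays responsive.
    return semaphore.wait(timeout: .now() + timeout) == .success && openable
}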
Post not yet marked as solved
1 Reply
528 Views
Whenever I try to make a playlist with a lot of songs all at once, it gets to a point where it almost freezes. After I tap "add to playlist" for each song, a notice normally appears saying "song added", but it stops doing that and the song doesn't show up in the playlist. Then maybe 2 or 3 minutes later the notice appears saying the song has been added. Is anyone else dealing with this? It's frustrating when I'm building big playlists and have to come back the next day to add the rest.
Post not yet marked as solved
0 Replies
439 Views
Could you provide guidance on how to add chapter marks to an M4A? From what I've read, it requires AVMetadataKey.quickTimeUserDataKeyChapter, track.addTrackAssociation(to: ... type: .chapterList), or both. I've looked into AVTimedMetadataGroup, but I haven't found a way to get it added based on the documentation. I also haven't found anyone who has added chapter marks in native Swift; they've always given in and used ffmpeg or some other external solution. In the code below, inputURL is the file being read in, outputURL is the final file, and chapters is an array of (time, title) tuples, where time is the start of each chapter and title is its name in the list. The target is macOS.

import AVFoundation

class AudioChapterCreator {
    // Function to create an audio file with chapters and a chapter list
    func createAudioFileWithChapters(inputURL: URL, outputURL: URL, chapters: [(time: CMTime, title: String)]) {
        let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]
        let asset = AVURLAsset(url: inputURL, options: options)
        let durationInSeconds = CMTimeGetSeconds(asset.duration)
        print("asset durationInSeconds: \(durationInSeconds)")

        // The audio track would be needed for track associations (not yet used).
        guard let audioTrack = asset.tracks(withMediaType: .audio).first else {
            print("Error: Unable to find audio track in the asset.")
            return
        }

        // Create metadata items for chapters
        let chapterMetadataItems = chapters.map { chapter -> AVMetadataItem in
            let item = AVMutableMetadataItem()
            // This duration is just for testing
            let tempDur = CMTime(seconds: 100, preferredTimescale: 1)
            item.keySpace = AVMetadataKeySpace.quickTimeUserData
            item.key = AVMetadataKey.quickTimeUserDataKeyChapter as NSString
            item.value = chapter.title as NSString
            item.time = chapter.time
            item.duration = tempDur
            return item
        }

        // Create an AVAssetExportSession for writing the output file
        guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
            print("Error: Unable to create AVAssetExportSession.")
            return
        }

        // Configure the AVAssetExportSession
        exportSession.outputFileType = .m4a
        exportSession.outputURL = outputURL
        exportSession.metadata = asset.metadata + chapterMetadataItems
        exportSession.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: asset.duration)

        // Export the audio file
        exportSession.exportAsynchronously {
            switch exportSession.status {
            case .completed:
                print("Audio file with chapters and chapter list created successfully.")
            case .failed:
                print("Error: Failed to create the audio file.")
            case .cancelled:
                print("Export cancelled.")
            default:
                print("Export failed with unknown status.")
            }
        }
    }
}
Post not yet marked as solved
1 Reply
697 Views
Problem description: this HLS video https://lf3-vod-cdn-tos.douyinstatic.com/obj/vodsass/hls/main.m3u8 starts producing noise at 22 seconds when played directly in Safari on macOS 12.6.6, and the noise also appears in Safari on iOS 16.5.1. But there is no noise when playing via MSE on the Mac with a third-party open-source web player such as hls.js in Safari. Test tool, hls.js demo: https://hlsjs.video-dev.org/demo/
Post not yet marked as solved
1 Reply
535 Views
Hello, I have been struggling to resolve the issue in the question above. I can speak an utterance while my iPhone is awake, but when my iPhone goes into the background (screen locked), it doesn't speak any more. I think it should be possible to play audio or speak an utterance in the background, because YouTube can play music while in the background. Any help, please?
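A minimal sketch of the session setup commonly paired with background speech, assuming the "audio" background mode is enabled under Signing & Capabilities and speech starts while the session is active:

import AVFAudio

// Sketch: activate a playback session so AVSpeechSynthesizer output can
// continue when the screen locks (assumes the audio background mode).
let synthesizer = AVSpeechSynthesizer()

func speakInBackground(_ text: String) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .spokenAudio)
    try session.setActive(true)
    synthesizer.speak(AVSpeechUtterance(string: text))
}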
Post not yet marked as solved
0 Replies
484 Views
I am writing a watchOS app in which I have some audio files that I want to play at various points, using AVAudioPlayer. It all works in the simulator, and it also works if I have AirPods connected to my watch via Bluetooth. However, I get no sound if no earphones are paired. In that case I would like the sounds to play from the physical watch speaker, but I can't find any documentation on how to make that happen. Any hints or tips are appreciated.
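A minimal sketch, assuming short-form playback where the built-in speaker is an allowed route; the long-form policy (.longFormAudio) requires a Bluetooth route, so this deliberately sticks with the default policy. Whether the speaker is actually chosen can depend on watchOS version and content, so treat this as an assumption to verify:

import AVFAudio

// Sketch for watchOS: opt into plain playback rather than .longFormAudio
// so short sounds may use the built-in speaker.
func playThroughWatchSpeaker(player: AVAudioPlayer) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, policy: .default, options: [])
    try session.setActive(true)
    player.play()
}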
Post not yet marked as solved
0 Replies
529 Views
How can I record audio when the app is in the background? I tried on Android and it's working, but on iOS it only records in the foreground. It's an Expo app and I'm using the Expo AV library. I have already granted the permissions and set UIBackgroundModes with audio:

"infoPlist": {
  ...
  "UIBackgroundModes": [
    "audio"
  ]
}

And in the code as well:

await Audio.setAudioModeAsync({
  allowsRecordingIOS: true,
  playsInSilentModeIOS: true,
  staysActiveInBackground: true
});

But once the app is in the background, it fails to start recording. Can anyone help me fix this?
Post not yet marked as solved
0 Replies
517 Views
I'm developing a macOS app, and I'm trying to access the microphone without directly triggering the default permission dialog. Instead, I've managed to programmatically open System Settings, specifically the Privacy & Security > Microphone section, allowing users to grant permission manually. However, there's an issue: even after the user manually toggles on the microphone permission for my app in System Settings, AVCaptureDevice.authorizationStatus(for: .audio) still returns .notDetermined. To clarify, I'm avoiding AVCaptureDevice.requestAccess(for: .audio) because it prompts the default permission dialog. But when I do use it, the app correctly recognizes changes in permission status. The problem arises only when trying to detect permission changes made directly from System Settings. Here is my code:

struct SystemSettingsHandler {
    static func openSystemSetting(for type: String) {
        guard type == "microphone" || type == "screen" else { return }
        let microphoneURL = "x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone"
        let screenURL = "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture"
        let urlString = type == "microphone" ? microphoneURL : screenURL
        if let url = URL(string: urlString) {
            NSWorkspace.shared.open(url)
        }
    }
}

private func requestMicrophonePermission(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        print("authorized")
        completion(true)
    case .notDetermined:
        print("notDetermined")
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            completion(granted)
        }
    case .denied, .restricted:
        print("denied")
        SystemSettingsHandler.openSystemSetting(for: "microphone")
        completion(false)
    @unknown default:
        print("unknown")
        completion(false)
    }
}

Thank you for reading this post!
Post not yet marked as solved
1 Reply
493 Views
I've been using AVAssetExportSession to trim audio files for the past two years, and suddenly it stopped working properly. It still works fine when I run my app on a phone running iOS 16, but on my iOS 17 phone it exports an incorrect duration (e.g., I provide a file with a 2-second duration, ask it to trim to 0 - 1.7 s, and it returns the file over-trimmed to about 1.58 s). The AVURLAsset returns the correct duration, and I've already tried AVURLAssetPreferPreciseDurationAndTimingKey; it doesn't help, as the error happens somewhere during the export.

guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
    completion(false, nil)
    return
}

let startTime = CMTimeMakeWithSeconds(floor(startPoint * 100) / 100.0, preferredTimescale: 44100)
let stopTime = CMTimeMakeWithSeconds(ceil(endPoint * 100) / 100.0, preferredTimescale: 44100)
let exportTimeRange = CMTimeRange(start: startTime, end: stopTime)

exportSession.timeRange = exportTimeRange
exportSession.outputFileType = .m4a
exportSession.outputURL = targetURL
AudioHelper.deleteFile(at: exportSession.outputURL)
exportSession.exportAsynchronously { ... }

I've managed to somewhat mitigate the damage by appending silence to the file and repeatedly trimming until I get close to the required duration, but it's an extremely ugly hack, and it's breaking down the whole functionality of my app.