Integrate music and other audio content into your apps.

Posts under the Audio tag

179 results found
Post not yet marked as solved
37 Views

In iOS 15, AVPlayer plays the same audio twice after pause and play from the notification mini bar

Something broke in iOS 15 in my app; the same code works fine on iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio, and play it again from the notification bar itself, the same audio plays twice. One stream resumes from where I paused it, and the other starts the same audio from the beginning. When the issue happens, these are the logs I get:

```
Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 "Could not find the specified now playing player" UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds
```

I have been stuck on this issue for two days and have tried everything, but I cannot work out why it only happens on iOS 15. Any help will be greatly appreciated.
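Not a confirmed fix for this iOS 15 regression, but one pattern worth checking is that the remote play command resumes the existing AVPlayer rather than re-creating or re-loading the item, and that each handler returns a status. A minimal sketch; the PlaybackController and setupRemoteCommands names are illustrative:

```swift
import AVFoundation
import MediaPlayer

final class PlaybackController {
    let player = AVPlayer() // assumed single shared player instance

    func setupRemoteCommands() {
        let center = MPRemoteCommandCenter.shared()

        center.playCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            // Resume the existing item; do not create or re-load a new player
            // item here, or the old stream can keep playing alongside the new one.
            self.player.play()
            return .success
        }

        center.pauseCommand.addTarget { [weak self] _ in
            self?.player.pause()
            return .success
        }
    }
}
```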
Asked by KLucky.
Post not yet marked as solved
53 Views

Submit for review - AudioKit branch?

I have my first app ready and crash-free (I think!) using AudioKit. While coding it I used the develop branch. I assume I should submit it with the main branch packages? Trouble is, I updated my iPad to iOS 15 yesterday, so I had to move to Xcode 13 and ended up with a lot of broken AudioKit code against the main branch, as well as a couple of issues with the develop branch, which I managed to fix. This is my first app submission, so I'd like to get it right; excuse my newbie idiocy. It seems moving to iOS 15 and Xcode 13 right now may have been a bad idea. Should I go back to Xcode 12? The main question, though: which branch of a third-party framework should be used in a final app release?
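On the branch question: for a release build, the usual practice is to pin a tagged release rather than track a moving branch such as develop. A minimal Package.swift sketch, assuming AudioKit's public repository URL; the version "5.2.0" is a placeholder for whatever stable tag your code builds against:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyAudioApp",
    dependencies: [
        // Pin to a tagged release instead of the develop branch;
        // "5.2.0" is a placeholder version, not a recommendation.
        .package(url: "https://github.com/AudioKit/AudioKit.git", from: "5.2.0")
    ],
    targets: [
        .target(name: "MyAudioApp", dependencies: [
            .product(name: "AudioKit", package: "AudioKit")
        ])
    ]
)
```

Pinning a tag means a later push to the branch cannot break your shipped build, and App Review sees the same code you tested.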
Asked by Waterboy.
Post not yet marked as solved
411 Views

YouTube Data API: 5.2.2 Third-Party Sites/Services and 5.2.3 Audio/Video Downloading

I have been developing an app that uses YouTube content fetched from the YouTube Data API, which is publicly provided by YouTube itself. Basically, my app shows a list of YouTube videos and playlists fetched from the API, and the user can play the videos. The app does not enable users to "save, convert, or download" any videos, directly or indirectly. The App Store Review Guidelines mention two points:
1. 5.2.2 states that "Authorization must be provided upon request."
2. 5.2.3 states that "Documentation must be provided upon request."
So my question is: is there any chance my app may face App Store rejection? If yes, what can I do to pass the review process? And if my app is rejected, how can I get "Authorization" or "Documentation" from YouTube? As far as I can tell from the YouTube API documentation, YouTube provides neither authorization nor documentation; it only lets you register your app on its console and gives you an API key with which you can fetch data.
Asked by Dhrumil_7.
Post not yet marked as solved
69 Views

Is it possible to play an audio file right after getting the user's camera (getUserMedia) on Safari?

(Original question on Stack Overflow.) Safari requires that a user gesture occur before any audio is played. However, the user's response to getUserMedia does not appear to count as a user gesture. Or perhaps I have that wrong; maybe there is some way to trigger the playback? The question "Why can't JavaScript .play() audio files on iPhone Safari?" details the many attempts to work around the need for a user gesture, but it seems Apple has closed most of the loopholes. For whatever reason, Safari does not consider accepting the iOS camera/mic permission dialog to be a user gesture, and there is no way to make camera capture count as one. Is there something I'm missing? Is it impossible to play an audio file after capturing the camera, or is there some way to respond to the camera being captured by playing an audio file?
Asked by m4bwav2.
Post not yet marked as solved
259 Views

CarPlay Audio entitlement

Hi, my account was approved for the com.apple.developer.playable-content entitlement, but that entitlement is now deprecated and I want to switch to the new one, com.apple.developer.carplay-audio. I am having some problems making the transition: do I need to submit a new request to Apple for the new entitlement? Thanks.
Asked by angarov.
Post not yet marked as solved
2k Views

Simulator causing Mac audio distortion

I am experiencing an issue where my Mac's speakers crackle and pop when running an app in the Simulator, or even when previewing SwiftUI with Live Preview. I am using a 16" MacBook Pro (i9) and running Xcode 12.2 on Big Sur (11.0.1). Killing coreaudiod (for example, with sudo killall coreaudiod in Terminal) temporarily fixes the problem, but that is not much of a solution. Is anyone else having this problem?
Asked by joltguy.
Post not yet marked as solved
2.1k Views

Big Sur DisplayPort audio?

Hello all. I'm running Big Sur 20A4299v on my 16-inch MacBook Pro, and after the beta install the DisplayPort audio has ceased to work. Has anyone else come across this bug yet? Thanks, y'all.
Asked by EDMFPEREZ.
Post not yet marked as solved
73 Views

How can I show currently playing items, like the Now Playing menu bar item?

Hey, I am trying to figure out how I can display the currently playing audio sources in my Xcode project. In the new Big Sur update I believe this was made possible via Mac Catalyst. How can I do this on the Mac? I'm new to this; can someone guide me, please?
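As far as I know, there is no public API for reading what other apps are playing; the Now Playing menu bar item is driven by the private MediaRemote framework. What you can do with public API is publish your own app's playback info so it appears in that menu. A minimal sketch; the title, artist, and timing values are placeholders:

```swift
import MediaPlayer

// Publish this app's playback info so it shows up in the system Now Playing UI.
func publishNowPlaying(title: String, artist: String,
                       duration: TimeInterval, position: TimeInterval) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```

On the Mac, my understanding is that the entry typically only appears once the app also registers MPRemoteCommandCenter handlers and sets MPNowPlayingInfoCenter's playbackState (a macOS-only property).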
Asked by avi__99.
Post not yet marked as solved
269 Views

App crashes on display activation after background audio

How can I find out what the problem is? Every time I start audio, listen while the iPad/iPhone display is turned off, and then wake the display after 10-15 minutes, the app crashes. Here are the first lines of the crash report:

```
Hardware Model:   iPad8,12
Process:          VOH-App [16336]
Path:             /private/var/containers/Bundle/Application/5B2CF582-D108-4AA2-B30A-81BA510B7FB6/VOH-App.app/VOH-App
Identifier:       com.voiceofhope.VOH
Version:          7 (1.0)
Code Type:        ARM-64 (Native)
Role:             Non UI
Parent Process:   launchd [1]
Coalition:        com.voiceofhope.VOH [740]

Date/Time:        2021-08-18 22:51:24.0770 +0200
Launch Time:      2021-08-18 22:36:50.4081 +0200
OS Version:       iPhone OS 14.7.1 (18G82)
Release Type:     User
Baseband Version: 2.05.01
Report Version:   104

Exception Type:    EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_PROTECTION_FAILURE at 0x000000016d2dffb0
VM Region Info: 0x16d2dffb0 is in 0x16d2dc000-0x16d2e0000; bytes after start: 16304 bytes before end: 79
      REGION TYPE     START - END          [ VSIZE]  PRT/MAX  SHRMOD  REGION DETAIL
      CG raster data  11cad0000-11d814000  [ 13.3M]  r--/r--  SM=COW
      GAP OF 0x4fac8000 BYTES
--->  STACK GUARD     16d2dc000-16d2e0000  [   16K]  ---/rwx  SM=NUL  ... for thread 0
      Stack           16d2e0000-16d3dc000  [ 1008K]  rw-/rwx  SM=PRV  thread 0

Termination Signal:  Segmentation fault: 11
Termination Reason:  Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [16336]
Triggered by Thread: 0

Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0  libswiftCore.dylib  0x00000001a8028360 swift::MetadataCacheKey::operator==+ 3773280 (swift::MetadataCacheKey) const + 4
1  libswiftCore.dylib  0x00000001a801ab8c _swift_getGenericMetadata+ 3718028 (swift::MetadataRequest, void const* const*, swift::TargetTypeContextDescriptor<swift::InProcess> const*) + 304
2  libswiftCore.dylib  0x00000001a7ffbd00 __swift_instantiateCanonicalPrespecializedGenericMetadata + 36
```

Here is the full crash report: VOH-App 16.08.21, 20-22.crash
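Not a fix, but a hedged diagnostic sketch: a fault inside the STACK GUARD region usually means the main thread's stack overflowed, often from deep or infinite recursion on the code path that runs when the app wakes. Logging audio-session interruptions and foreground transitions can help narrow down which path that is; the class name below is illustrative.

```swift
import AVFoundation
import UIKit

// Log audio-session interruptions and foreground transitions to narrow down
// which code path runs when the display is woken after background playback.
final class AudioDiagnostics {
    func start() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { note in
            let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
            let type = raw.flatMap { AVAudioSession.InterruptionType(rawValue: $0) }
            print("Audio interruption: \(String(describing: type))")
        }

        NotificationCenter.default.addObserver(
            forName: UIApplication.willEnterForegroundNotification,
            object: nil,
            queue: .main
        ) { _ in
            print("Will enter foreground")
        }
    }
}
```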
Post not yet marked as solved
83 Views

iOS AVAudioSession Notify Thread crash.

Good day, community. For more than half a year we have been facing a crash with the following call stack:

```
Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0   libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
1   libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
2   AudioToolbox                  AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3   AudioToolbox                  HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4   AudioToolbox                  ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5   AudioToolbox                  ASCallbackReceiver_AudioSessionPingMessage + 632
6   AudioToolbox                  _XAudioSessionPingMessage + 44
7   libAudioToolboxUtility.dylib  mshMIGPerform + 264
8   CoreFoundation                __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9   CoreFoundation                __CFRunLoopDoSource1 + 444
10  CoreFoundation                __CFRunLoopRun + 1888
11  CoreFoundation                CFRunLoopRunSpecific + 424
12  AVFAudio                      GenericRunLoopThread::Entry(void*) + 156
13  AVFAudio                      CAPThread::Entry(CAPThread*) + 204
14  libsystem_pthread.dylib       _pthread_start + 156
15  libsystem_pthread.dylib       thread_start + 8
```

We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems the problem is not there; we used the FMOD sound engine earlier and had the same issue. At this point we see around 100 crash events every day, which makes us upset. It looks like it started with iOS 13. My main problem is that I don't communicate with the AudioToolbox or AVFAudio APIs directly; I use third-party sound engines instead. I believe I am not the only one who has faced this problem. There is also a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/. The last message there deserves special attention: https://zhuanlan.zhihu.com/p/370791950, where Jeffrey Zhuang did some research. This might be helpful for Apple's support team. Any help is highly appreciated. Best regards, Sergey.
Post not yet marked as solved
137 Views

How to start building an app

Where and how would I start to build a DJ app?
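Very broadly, the core of a DJ-style app on Apple platforms is an AVAudioEngine graph: one player node per deck feeding a mixer, with per-deck volume acting as a crossfader. A minimal sketch to start from; the class name and file URLs are placeholders:

```swift
import AVFoundation

// Two "decks" feeding one mixer: the usual skeleton of a DJ-style app.
final class DJEngine {
    private let engine = AVAudioEngine()
    private let deckA = AVAudioPlayerNode()
    private let deckB = AVAudioPlayerNode()

    func start() throws {
        engine.attach(deckA)
        engine.attach(deckB)
        // Both decks go into the main mixer; per-deck volume gives a crossfader.
        engine.connect(deckA, to: engine.mainMixerNode, format: nil)
        engine.connect(deckB, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }

    func play(url: URL, on deck: AVAudioPlayerNode) throws {
        let file = try AVAudioFile(forReading: url)
        deck.scheduleFile(file, at: nil, completionHandler: nil)
        deck.play()
    }

    // Simple linear crossfade between the decks (0 = all A, 1 = all B).
    func crossfade(_ t: Float) {
        deckA.volume = 1 - t
        deckB.volume = t
    }
}
```

From there you would layer on tempo/pitch control (AVAudioUnitVarispeed or AVAudioUnitTimePitch), waveform display, and cueing.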
Post not yet marked as solved
312 Views

ShazamKit during AVCaptureSession - Recognize audio while using camera

Hi, I want to implement ShazamKit in my project, but I have some problems. I use AVCaptureSession to take photos in my app, and I'm unable to use ShazamKit alongside it. I tried three different approaches:
1. Use an AVAudioEngine during my AVCaptureSession, but I didn't obtain any result from Shazam.
2. Use ShazamKit after stopping my AVCaptureSession, but this causes some problems and some crashes.
3. Use the buffer of my AVCaptureSession to capture audio directly, without AVAudioEngine.

This is the code that I use with AVAudioEngine:

```swift
try! audioSession.setActive(true, options: .notifyOthersOnDeactivation)

let inputNode = self.audioEngine.inputNode
let recordingFormat = inputNode.outputFormat(forBus: 0)
let audioFormat = recordingFormat
// AVAudioFormat(standardFormatWithSampleRate: self.audioEngine.inputNode.outputFormat(forBus: 0).sampleRate, channels: 1)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    try! self.signatureGenerator.append(buffer, at: nil)
    self.session.matchStreamingBuffer(buffer, at: nil)
}

self.audioEngine.prepare()
try! self.audioEngine.start()
```

So I can go one of two ways: pass the AVCaptureSession output to ShazamKit, or use an AVAudioSession after stopping the AVCaptureSession. I have two questions:
1. Can I use a CMSampleBufferRef from the AVCaptureSession buffer in an SHSession? If the answer is yes, how?
2. How can I prevent this error if I want to use an AVAudioSession after I have stopped my AVCaptureSession?

```
[aurioc] AURemoteIO.cpp:1117 failed: -10851 (enable 1, outf< 2 ch, 0 Hz, Float32, deinterleaved> inf< 2 ch, 0 Hz, Float32, deinterleaved>)
[avae]   AVAEInternal.h:76   required condition is false: [AVAEGraphNode.mm:834:CreateRecordingTap: (IsFormatSampleRateAndChannelCountValid(format))]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'
```

Thanks
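On question 1: SHSession.matchStreamingBuffer(_:at:) takes an AVAudioPCMBuffer, so the CMSampleBuffer delivered by an AVCaptureAudioDataOutput has to be converted first. A minimal sketch of one way to do that, assuming the sample buffer contains PCM audio; error handling is elided:

```swift
import AVFoundation
import ShazamKit

// Convert a PCM audio CMSampleBuffer (e.g. from AVCaptureAudioDataOutput)
// into an AVAudioPCMBuffer that SHSession can consume.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }

    let frames = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else { return nil }
    pcm.frameLength = frames

    // Copy the sample data into the PCM buffer's AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer, at: 0, frameCount: Int32(frames), into: pcm.mutableAudioBufferList)
    return status == noErr ? pcm : nil
}

// Usage inside captureOutput(_:didOutput:from:), with `session` an SHSession (assumed):
// if let buffer = pcmBuffer(from: sampleBuffer) {
//     session.matchStreamingBuffer(buffer, at: nil)
// }
```

This keeps the capture session as the single owner of the microphone, avoiding the AURemoteIO format errors that come from running AVAudioEngine's input alongside it.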
Asked by RedSun.
Post not yet marked as solved
152 Views

How to play Dolby Atmos on Mac OS application

I'm using a MacBook Pro (2020 series). https://developer.dolby.com/platforms/apple/macos/overview/ The above page says that Dolby Atmos is supported on the built-in speakers, but I don't know how to play it. I can't find any settings for Dolby Atmos playback on the MacBook Pro, so how can I play Dolby Atmos in a macOS application written by myself? Execution environment: MacBook Pro (2020), Big Sur 11.5.2, CPU: Apple M1, Xcode: 12.5.1.
Asked by Dev21MP.
Post not yet marked as solved
175 Views

AVSpeechSynthesizer - how to run callback onError

I use AVSpeechSynthesizer to pronounce some text in German. Sometimes it works just fine, and sometimes it doesn't, for reasons unknown to me: there is no error, because the speak() method doesn't throw, and the only thing I can observe is the following message logged in the console:

```
_BeginSpeaking: couldn't begin playback
```

I tried to find an API in AVSpeechSynthesizerDelegate to register a callback for when an error occurs, but I found none. The closest match was this (but it appears to be available only for macOS, not iOS): https://developer.apple.com/documentation/appkit/nsspeechsynthesizerdelegate/1448407-speechsynthesizer?changes=_10

Below is how I initialize and use the speech synthesizer in my app:

```swift
class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    class func sharedInstance() -> Speaker {
        struct Singleton {
            static var sharedInstance = Speaker()
        }
        return Singleton.sharedInstance
    }

    let audioSession = AVAudioSession.sharedInstance()
    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func initializeAudioSession() {
        do {
            try audioSession.setCategory(.playback, mode: .spokenAudio, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
        }
    }

    func speak(text: String, language: String = "de-DE") {
        guard !self.synth.isSpeaking else { return }
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }.first!
        utterance.voice = voice
        self.synth.speak(utterance)
    }
}
```

The audio session initialization is run just once, during app startup. Afterwards, speech is synthesized by running:

```swift
Speaker.sharedInstance().speak(text: "Lederhosen")
```

The problem is that I have no way of knowing whether the speech synthesis succeeded: the UI shows the "speaking" state, but nothing is actually spoken.
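There is no public error callback on iOS, but the delegate does report when playback actually starts. A hedged workaround sketch: treat a missing speechSynthesizer(_:didStart:) within a short timeout as a failure. The two-second timeout and the onFailure handler are illustrative assumptions:

```swift
import AVFoundation

final class WatchedSpeaker: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer()
    private var watchdog: Timer?

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(text: String, onFailure: @escaping () -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
        // If didStart hasn't fired after 2 seconds (arbitrary), assume playback failed.
        watchdog = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false) { [weak self] _ in
            self?.synth.stopSpeaking(at: .immediate)
            onFailure()
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didStart utterance: AVSpeechUtterance) {
        // Playback actually began; cancel the failure watchdog.
        watchdog?.invalidate()
        watchdog = nil
    }
}
```

The onFailure closure could retry after re-activating the audio session, or surface the problem in the UI instead of leaving it stuck in the "speaking" state.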