Integrate music and other audio content into your apps.

Posts under Audio tag

92 Posts
Post not yet marked as solved
0 Replies
213 Views
Hi, I'm able to play songs from the Apple Music catalog (using the catalogId) by adding an item to the QueueProviderBuilder as shown below:

```java
queueProviderBuilder.items(MediaItemType.SONG, "1590036021");
playerController.prepare(queueProviderBuilder.build(), true);
```

However, I'm unable to play a user-uploaded song using MediaItemType.UPLOADED_AUDIO. I'm unsure what to pass in as the "itemId", as there's no catalogId in the Apple Music API response for a user's own song uploaded to iCloud. I have managed to obtain the URL of the uploaded song, but that doesn't work either. Any ideas?
Posted
by
Post not yet marked as solved
0 Replies
254 Views
When I use PHASE on my iPad, the test app is in landscape mode (left or right only), but it seems like PHASE is in portrait mode; that is, 90 degrees off, with its normal pointed at me. Is this a case where I need to use the world transform and detect view orientation notifications, or does PHASE handle it automatically? I notice in the Simulator that when I rotate the app, the speakers on the Mac Pro stay fixed, which is what I was expecting. Or maybe it's my imagination... it sounds like portrait on my device, but I'm in landscape. I do have supported interface orientations set in my plist. Actually, it's kind of annoying having the speakers on one side of the iPad.
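For context, the workaround I have in mind is rotating the listener transform myself when the interface orientation changes; a rough, untested sketch (the angle handling is illustrative, not a confirmed fix):

```swift
import PHASE
import simd

// Hypothetical workaround: rotate the PHASE listener about the z-axis
// to compensate for the interface orientation (angle in radians).
func updateListener(_ listener: PHASEListener, interfaceRotation angle: Float) {
    let rotation = simd_quatf(angle: angle, axis: SIMD3<Float>(0, 0, 1))
    listener.transform = simd_float4x4(rotation)
}
```

For landscape-right that would be something like `updateListener(listener, interfaceRotation: -.pi / 2)`, assuming PHASE isn't already compensating internally.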
Posted
by
Post not yet marked as solved
0 Replies
545 Views
This issue has a blocking impact on our ability to serve our product on any iOS device (since Web Audio APIs are not supported in any browser other than Safari on iOS) and in the Safari browser on desktop and iOS in general, and one of our customers is currently heavily impacted by this limitation in Safari.

Currently Safari, built on WebKit, has a limitation: it cannot provide access to raw audio data via AudioContext for HLS playback, although this works for mp4 files. This is supported by EVERY OTHER MAJOR BROWSER EXCEPT SAFARI, which is concerning because we will need to force users not to use our application in Safari on desktop, and we simply CANNOT SERVE ANY IPHONE AND IPAD USERS, which is a BLOCKER for us given that more than half of our users are on iOS-based devices. This is clearly a feature that should already be in place in Safari, which is currently lagging behind other browsers: the W3C specification supports it, and all other major browsers have implemented HLS streams used with AudioContext.

We'd like to reiterate the importance and urgency of this (https://bugs.webkit.org/show_bug.cgi?id=231656) for us. It has been raised multiple times by other developers as well, so fixing it would help thousands of other Web developers bring HLS-based applications to life in Safari and the iOS ecosystem. Can we please get visibility into the plan and timelines for HLS support with AudioContext in Safari? A critical part of our business and of our customers' products depends on this support in Safari.

We're using new webkitAudioContext() in Safari 15.0 on MacBook and in iOS Safari on iPhone and iPad to create the AudioContext instance, and we're creating a ScriptProcessorNode and attaching it to the HLS/m3u8 source created using audioContext.createMediaElementSource(). The onaudioprocess callback gets called, but no data is processed and instead we get 0's.
If you also connect an AnalyserNode to the same audio source created using audioContext.createMediaElementSource(), analyser.getByteTimeDomainData(dataArray) populates no data either, and neither does onaudioprocess on the ScriptProcessorNode on the same source.

What has been tried:

- We confirmed that the stream being used is the only stream in the tab and that createMediaElementSource() was called only once to get the stream.
- We confirmed that if the stream source is MP4/MP3 it works with no issues and data is received in onaudioprocess, but when changing the source to HLS/m3u8 it does not work.
- We also tried using MediaRecorder with HLS/m3u8 as the stream source, but didn't get any events or data.
- We also tried creating two AudioContexts, so the first AudioContext would be the source, passing createMediaElementSource as the destination to the other AudioContext and then on to the ScriptProcessorNode, but Safari does not allow more than one output.

Currently none of the scenarios we tried works, and this is a major blocker for us and for our customers. Code sample used to create the ScriptProcessorNode:

```javascript
const AudioContext = window.AudioContext || window.webkitAudioContext;
audioContext = new AudioContext();

// Create a MediaElementAudioSourceNode
// Feed the HTML Video Element 'VideoElement' into it
const audioSource = audioContext.createMediaElementSource(VideoElement);

const processor = audioContext.createScriptProcessor(2048, 1, 1);
processor.connect(audioContext.destination);
processor.onaudioprocess = (e) => {
  console.log('print audio buffer', e);
};
```

The exact same behavior is also observed in iOS Safari on iPhone and iPad. We are asking for your help on this matter ASAP. Thank you!
Posted
by
Post marked as solved
1 Reply
389 Views
I'm trying to construct what I would consider a tracklist of an Album from MusicKit for Swift, but the steps I've had to take don't feel right to me, so I feel like I am missing something. The thing is that I don't just want a bag of tracks; I want their order (trackNumber) AND what disc they're on (discNumber), if there is more than one disc. At the moment, I can get to the tracks for the album with album.with([.tracks]) to get a MusicItemCollection<Track>, but each Track DOES NOT have a discNumber, which means that in multi-disc Albums all the tracks are munged together. To get to the discNumbers, I've had to fetch an equivalent Song using the same ID, which DOES contain the discNumber, as well as the trackNumber. But this collection is not strongly connected to the Album, so I've had to create a containing struct to hold both the Album (which contains Tracks) and some separate Songs, making my data model slightly more complex. (Note that you don't appear to be able to do album.with([.songs]) to skip getting to a Song via a Track.) I can't get my head around this structure: what is the difference between a Track and a Song? I would think that a Track is a specific representation on a specific Album, and a Song is a more general representation that might be shared amongst multiple Albums (like compilations). But in that case, surely discNumber should actually be on the Track, as it is specific to the Album, and arguably Song shouldn't even have a trackNumber of its own? In any case, can anyone help with the quickest way to get the trackNumbers and discNumbers of each actual track on an Album (starting with a MusicItemID), please?
```swift
var songs = [Song]()
let detailedAlbum = try await album.with([.tracks])
if let tracks = detailedAlbum.tracks {
    for track in tracks {
        let resourceRequest = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: track.id)
        async let resourceResponse = resourceRequest.response()
        guard let song = try await resourceResponse.items.first else { break }
        songs.append(song)
    }
}
```
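For what it's worth, once the Songs are fetched, grouping them into per-disc tracklists is simple enough; a minimal sketch assuming a `songs: [Song]` array built as above (discNumber and trackNumber are optional on Song, hence the defaults):

```swift
import MusicKit

// Group the fetched Songs by disc, then order each disc by track number.
let byDisc = Dictionary(grouping: songs) { $0.discNumber ?? 1 }
let tracklist = byDisc
    .sorted { $0.key < $1.key }
    .map { disc, discSongs in
        (disc: disc,
         tracks: discSongs.sorted { ($0.trackNumber ?? 0) < ($1.trackNumber ?? 0) })
    }
```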
Posted
by
Post not yet marked as solved
0 Replies
435 Views
I'm trying to play audio content built from NSData inside a library (.a). It works properly when my code is inside an app, but it is not working when in a library: I get no error and no sound playing.

```objc
NSError *errorAudio = nil;
NSError *errorFile;

// Clear all cache
NSArray *tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
for (NSString *file in tmpDirectory) {
    [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
}

// Set temporary directory and temporary file
NSURL *tmpDirURL = [NSURL fileURLWithPath:NSTemporaryDirectory() isDirectory:YES];
NSURL *soundFileURL = [[tmpDirURL URLByAppendingPathComponent:@"temp"] URLByAppendingPathExtension:@"wav"];
[[NSFileManager defaultManager] createDirectoryAtURL:tmpDirURL withIntermediateDirectories:NO attributes:nil error:&errorFile];

// Write NSData to temporary file
NSString *path = [soundFileURL path];
[audioToPlay writeToFile:path options:NSDataWritingAtomic error:&errorFile];
if (errorFile) {
    // Error while writing NSData
} else {
    // Init audio player
    self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&errorAudio];
    if (errorAudio) {
        // Audio player could not be initialized
    } else {
        // Audio player was initialized correctly
        [audioPlayer prepareToPlay];
        [audioPlayer stop];
        [audioPlayer setCurrentTime:0];
        [audioPlayer play];
    }
}
```

I don't check errorFile in this piece of code, but when debugging I can see that its value is nil.

My header file:

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@property(nonatomic, strong) AVAudioPlayer *audioPlayer;
```

My .m file:

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@synthesize audioPlayer;
```

I've been checking dozens of posts but cannot find any solution; it always works properly in an app, but not in a library. Any help would be greatly appreciated.
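As an aside, AVAudioPlayer can also be initialized straight from the in-memory data, which skips the temp-file dance entirely; a minimal Swift sketch (the same idea exists in Objective-C as initWithData:error:). Note the strong reference to the player: if the AVAudioPlayer is deallocated right after play(), you get exactly the symptom described here, no error and no sound.

```swift
import AVFoundation

final class DataPlayer {
    // Keep a strong reference; a released AVAudioPlayer stops silently.
    private var player: AVAudioPlayer?

    func play(_ audioData: Data) throws {
        let p = try AVAudioPlayer(data: audioData)
        p.prepareToPlay()
        p.play()
        player = p
    }
}
```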
Posted
by
Post not yet marked as solved
0 Replies
361 Views
Hello! There are a few similar open-source projects that implement support for creating a macOS M1 guest virtual machine on a macOS M1 host, for example: https://github.com/jspahrsummers/Microverse https://github.com/KhaosT/MacVM But both of them share a common problem: any audio from the guest VM (system sounds, or YouTube videos in Safari or Firefox) is played on the host approximately 0.3 seconds late. Is there any way to remove this latency, so that it is possible to hear real-time audio from the guest VM? Regards, Eugene.
Posted
by
Post not yet marked as solved
0 Replies
369 Views
Hi, I'm a composer who has used Logic since it ran on an Atari, and I was delighted by the update of Logic that includes Dolby mixing, particularly as I have been working with sound spatialisation for many years - mostly live, in concerts of music I did at the Royal College of Music in London and around the world - classics of electronic music like Stockhausen and Jonathan Harvey, generally using Max/Ircam software, which doesn't always actually work(!). Anyway, I want to create ambient-type music which rotates the sounds around the listener, and decoding to binaural and listening on AirPods sounds very good. But. If the head tracking could be employed to keep the image of this 3D soundscape fixed, I think (know) the illusion would be so much more pronounced. So is there any way that this can be incorporated into the listening experience (from the Music app on your phone) - presumably extra data in the file? I'm really not an engineer in that way. Now, even more fiddly, and probably a pipe dream: I'm using the Leslie Cabinet plugin (native to Logic) to create a very lovely sound by spinning the sound of the Tanpura (the Indian classical stringed drone instrument). What could potentially be mind-bending is if the output of the Leslie wasn't just stereo, but actually surround, and if this sound could similarly whirl around the head of the listener, MAINTAINING a fixed reference position (is that what I mean?) as the listener moves their head - so the surround Leslie output could be directly "inserted" into the space, and the Atmos system would, so to speak, know where the rotating drum of the Leslie is at all times. Sometimes, though, I'm using quite fast rotations - it's hard to know what the speed is, as the knob on the Leslie seems to use arbitrary units (ahem, something more scientific would help!), but max speed sounds like about 30Hz - almost audio itself, which creates extraordinary effects.
At the moment I'm doing a rough fake by automating a surround panner, and I even tried mapping in a rotary controller, but actually that creates circles within circles, and it can't match the actual experience you might get in a performance of something like Harvey's 4th String Quartet, where you're surrounded by actual speakers and the quartet is whizzing around you, almost sounding like a cloud of bees (last movement - amazing). Thing is, we're close, and the listening quality of your products here (the Leslie, the binaural render and the AirPods themselves) really is exceptional - so much better than stuff I (or other people) have made in, say, Max or Ableton Live. I'm very impressed so far, but I feel we could push it to "the next level", as you Americans say. I hope that explains the sort of thing I would like to work with? I'm sure it's very niche, and feel free to tell me to piss off, lol, but your ads always claim that you're really into making cool stuff. This is cool (says the 55-year-old balding guy). I really think this way of listening with AirPods is going to become massive - I'm no gamer, but the implications for immersive sound in games (and for watching films) are huge too (I'm sure you have someone working on it). I'm just a composer writing profoundly uncommercial, rather poetic, classical electronic music ;). I've attached a rough binaural mix of this thing I'm working on with the Tanpuras, temple bells and some improvising musicians I know - once you hear the sound I think it will become much clearer what I'm talking about - and hopefully you like the music and not just the effect! Very unfinished, and very hot off the press, so to speak. I've also added a screen grab of my current workaround in Logic and a cool picture of Stockhausen making it work in 1959 (!), before we ever dreamed of having tiny computers in our ears.
If you can't help, I may have to actually build a version of that spinning table, which would be a pain and expensive - I've spent more than enough on your products over the years. ;) https://www.dropbox.com/s/uh9a4nx7psiv9qb/Tanpura%20Extended%20with%20Hannah%20Dolby%20Atmos%20Version%20%28A%29.mp3?dl=0 Ok, it won't let me submit the pic, ugh... Thanks for taking the time, let me know what you think, cheers, Michael Oliva
Posted
by
Post not yet marked as solved
0 Replies
484 Views
Hi all, I have an app that plays music from HTTP Live Streams using AVPlayer. I have been trying to figure out how to use ShazamKit to recognise the music playing, but I just can't figure out how to do it :-( It works well with local files and microphone recordings, but how do I get the data from a stream that is currently playing??? Feels like I've tried everything... I have tried to install an MTAudioProcessingTap, but it doesn't seem to work on streaming assets, even though I can get hold of the proper AVAssetTrack containing the audio. No callbacks with data are received; bug? I can open the streaming URL and just save the bytes to disk, and that's fine, but then I'm not in sync with what is playing in the AVPlayerItem, so the recognition isn't working with the same audio data the user is currently hearing. Hmmm. Any suggestions and ideas are welcome. It would be such a nice feature for my app, so I'm really looking forward to solving this. Thx in advance / Jörgen, Bitfield
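For reference, the streaming-recognition pattern that works for me with the microphone looks like the sketch below; the open question is how to feed equivalent buffers from an AVPlayer HLS stream instead (the SHSession names are from ShazamKit, the engine tap is the standard mic setup):

```swift
import AVFoundation
import ShazamKit

final class StreamMatcher: NSObject, SHSessionDelegate {
    private let session = SHSession()
    private let engine = AVAudioEngine()

    func start() throws {
        session.delegate = self
        // Tap the input node and forward PCM buffers to ShazamKit.
        // (Works for the mic; getting the same buffers out of an
        // AVPlayer HLS stream is the unsolved part.)
        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, time in
            self?.session.matchStreamingBuffer(buffer, at: time)
        }
        try engine.start()
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched:", match.mediaItems.first?.title ?? "unknown")
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        // No match for this chunk; keep streaming.
    }
}
```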
Posted
by
Post not yet marked as solved
0 Replies
212 Views
We have a syndicated radio show going on 21 years here in Reno, NV. Wondering if we could make an app to simulcast the show online? The show is on an FM radio station here. What documentation would we have to provide to prove we are legit? Thanks so much in advance; people always ask about hearing the show outside of the FM radio range, and an app would be very cool.
Posted
by
Post not yet marked as solved
1 Reply
397 Views
Facing a strange issue on two devices, iPhone 6s and 12 mini: not getting audio from media playback when the ringer is off. Is this a device-specific issue, a settings issue, an OS issue, or ultimately an app issue? Playback works fine in YouTube and other media apps.
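For context, silent-switch behaviour is governed by the app's AVAudioSession category: the default .soloAmbient is muted by the ring/silent switch, while .playback (what YouTube-style apps use) is not. A minimal sketch of the .playback setup:

```swift
import AVFoundation

// .playback keeps audio running even when the ringer switch is off;
// the default .soloAmbient category is silenced by it.
func configurePlaybackSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}
```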
Posted
by
Post marked as solved
2 Replies
463 Views
I was wondering if there are any example download projects for the PHASE audio framework? I was watching the WWDC 2021 video, but there was no example code to download, and the examples within the video were pretty verbose -- I do not want to freeze a frame and type all that by hand. I am attempting to replace some old OpenAL code from a few years ago with an alternate solution; all the OpenAL code shows deprecation messages when I build in Xcode. The generated PHASE header documentation is kind of sparse and somewhat boilerplate, with no examples. Thanks in advance.
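In the absence of a sample project, the smallest bootstrap I've been able to piece together from the session video looks roughly like this (untested sketch; names are from the PHASE headers):

```swift
import PHASE

// Minimal PHASE setup: create the engine, attach a listener at the
// origin, and start rendering.
let engine = PHASEEngine(updateMode: .automatic)
let listener = PHASEListener(engine: engine)
listener.transform = matrix_identity_float4x4
try engine.rootObject.addChild(listener)
try engine.start()
```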
Posted
by
Post marked as solved
1 Reply
765 Views
Hoping for guidance on how to prevent my app from stopping/pausing music playing from the Apple Music app. Would prefer if users can choose to listen to their own music while playing the game.
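For anyone with the same question: the usual lever is the AVAudioSession category, since .ambient with the .mixWithOthers option lets an app's sounds coexist with Apple Music instead of interrupting it. A minimal sketch:

```swift
import AVFoundation

// .ambient + .mixWithOthers plays game audio alongside, rather than
// instead of, whatever the Music app is playing.
try AVAudioSession.sharedInstance()
    .setCategory(.ambient, options: [.mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
```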
Posted
by
Post not yet marked as solved
0 Replies
940 Views
Something broke in iOS 15 in my app; the same code works fine on iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio, and then play the audio from the notification bar itself, the same audio plays twice. One instance resumes from where I paused it, and the other plays the same audio from the beginning. When the issue happens, these are the logs I get:

```
Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 “Could not find the specified now playing player” UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds
```

I've been stuck on this issue for two days and have tried all the ways I know, but I can't work out why it only happens on iOS 15. Any help will be greatly appreciated.
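For reference, my remote-command setup follows the usual MPRemoteCommandCenter pattern; a trimmed, illustrative sketch (not my exact code), in case something here interacts badly with iOS 15:

```swift
import AVFoundation
import MediaPlayer

// Standard wiring for the notification-bar / lock-screen controls.
// `player` stands in for the app's actual audio player.
func setupRemoteCommands(for player: AVAudioPlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in
        guard !player.isPlaying else { return .commandFailed }
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}
```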
Posted
by
Post not yet marked as solved
0 Replies
276 Views
I have my first app ready and crash-free (I think!) using AudioKit. While coding it I used the develop branch. I assume I should submit it with the main-branch packages? Trouble is, I updated my iPad to iOS 15 (yesterday), so I then had to move to Xcode 13 and ended up with a lot of broken AudioKit code on the main branch of AudioKit, as well as a couple of issues with the develop branch, which I managed to fix. This is my first app submission, so I'd like to get it right - excuse my newbie idiocy. It seems like it may have been a bad idea moving to iOS 15 and Xcode 13 right now; should I go back to 12? The main question, though, is: which 3rd-party framework branches should be used in a final app release?
Posted
by
Post not yet marked as solved
0 Replies
380 Views
(original question on Stack Overflow) Safari requires that a user gesture occur before any audio is played. However, the user's response to getUserMedia does not appear to count as a user gesture. Or perhaps I have that wrong; maybe there is some way to trigger the playing? This question ("Why can't JavaScript .play() audio files on iPhone Safari?") details the many attempts to work around the need for a user gesture, but it seems like Apple has closed most of the loopholes. For whatever reason, Safari does not consider iOS acceptance of the camera/mic usage dialog a user gesture, and there's no way to make camera capture count as one. Is there something I'm missing; is it impossible to play an audio file after capturing the camera? Or is there some way to respond to the camera being captured with an audio file?
Posted
by
Post not yet marked as solved
0 Replies
607 Views
Good day, community. For more than half a year we have faced a crash with the following call stack:

```
Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0.  libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
1.  libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
2.  AudioToolbox                  AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3.  AudioToolbox                  HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4.  AudioToolbox                  ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5.  AudioToolbox                  ASCallbackReceiver_AudioSessionPingMessage + 632
6.  AudioToolbox                  _XAudioSessionPingMessage + 44
7.  libAudioToolboxUtility.dylib  mshMIGPerform + 264
8.  CoreFoundation                __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9.  CoreFoundation                __CFRunLoopDoSource1 + 444
10. CoreFoundation                __CFRunLoopRun + 1888
11. CoreFoundation                CFRunLoopRunSpecific + 424
12. AVFAudio                      GenericRunLoopThread::Entry(void*) + 156
13. AVFAudio                      CAPThread::Entry(CAPThread*) + 204
14. libsystem_pthread.dylib       _pthread_start + 156
15. libsystem_pthread.dylib       thread_start + 8
```

We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems the problem is not there; we used the FMOD sound engine earlier and had the same issue. At this time we have around 100 crash events every day, which makes us upset. It looks like it started with iOS 13. My main problem is that I don't communicate with the AudioToolbox or AVFAudio APIs directly, but use third-party sound engines instead. I believe I am not the only one who has faced this problem.
There is also a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/ The last message deserves special attention: https://zhuanlan.zhihu.com/p/370791950, where Jeffrey Zhuang did some research. This might be helpful for Apple's support team. Any help is highly appreciated. Best regards, Sergey.
Posted
by
Post not yet marked as solved
0 Replies
459 Views
I'm using a MacBook Pro (2020 series). https://developer.dolby.com/platforms/apple/macos/overview/ The above page says that Dolby Atmos is supported on the built-in speakers, but I don't know how to play it. I can't find the settings for playing with Dolby Atmos on the MacBook Pro, so how can I play with Dolby Atmos in a macOS application written by myself? Execution environment: MacBook Pro (2020), Big Sur 11.5.2, CPU: Apple M1, Xcode: 12.5.1
Posted
by
Post not yet marked as solved
0 Replies
441 Views
I use AVSpeechSynthesizer to pronounce some text in German. Sometimes it works just fine and sometimes it doesn't, for some reason unknown to me (there is no error, because the speak() method doesn't throw, and the only thing I am able to observe is the following message logged in the console): _BeginSpeaking: couldn't begin playback I tried to find some API in AVSpeechSynthesizerDelegate to register a callback for when an error occurs, but I have found none. The closest match was this (but it appears to be available only for macOS, not iOS): https://developer.apple.com/documentation/appkit/nsspeechsynthesizerdelegate/1448407-speechsynthesizer?changes=_10 Below you can find how I initialize and use the speech synthesizer in my app:

```swift
class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    class func sharedInstance() -> Speaker {
        struct Singleton {
            static var sharedInstance = Speaker()
        }
        return Singleton.sharedInstance
    }

    let audioSession = AVAudioSession.sharedInstance()
    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func initializeAudioSession() {
        do {
            try audioSession.setCategory(.playback, mode: .spokenAudio, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
        }
    }

    func speak(text: String, language: String = "de-DE") {
        guard !self.synth.isSpeaking else { return }
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }.first!
        utterance.voice = voice
        self.synth.speak(utterance)
    }
}
```

The audio-session initialization is run just once, during app startup.
Afterwards, speech is synthesized by running the following code: Speaker.sharedInstance().speak(text: "Lederhosen") The problem is that I have no way of knowing whether the speech synthesis succeeded: the UI shows a "speaking" state, but nothing is actually being spoken.
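One partial workaround I can think of is detecting the silent failure via the delegate: AVSpeechSynthesizerDelegate does report didStart for successful playback, so the absence of that callback shortly after speak() suggests the "couldn't begin playback" case occurred. A sketch (the one-second window is arbitrary):

```swift
import AVFoundation

// Detect when an utterance never actually starts: didStart fires on
// successful playback, so its absence within a short window flags the
// silent "_BeginSpeaking: couldn't begin playback" failure.
final class SpeechMonitor: NSObject, AVSpeechSynthesizerDelegate {
    private var started = false

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        started = true
    }

    func speak(_ text: String, with synth: AVSpeechSynthesizer) {
        started = false
        synth.delegate = self
        synth.speak(AVSpeechUtterance(string: text))
        // Arbitrary 1-second check: if didStart never fired, flag it.
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
            if self?.started == false {
                print("Speech never started - possible playback failure")
            }
        }
    }
}
```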
Posted
by