Post not yet marked as solved
This issue blocks us from serving our product on any iOS device (since no browser other than Safari supports the Web Audio APIs on iOS) and in Safari on desktop, and one of our customers is currently heavily impacted by this limitation. Safari, built on WebKit, currently cannot provide access to raw audio data via AudioContext for HLS playback, even though this works for MP4 files. Every other major browser supports this except Safari, which is concerning: we would have to force users not to use our application in Safari on desktop, and we simply cannot serve any iPhone or iPad users, which is a blocker for us given that more than half of our users are on iOS-based devices. This is clearly a feature that should already be in place in Safari, which is currently lagging behind the other browsers here: the W3C specification already supports this, and all other major browsers have already implemented support for using HLS streams with AudioContext.
We'd like to reiterate the importance and urgency of this issue (https://bugs.webkit.org/show_bug.cgi?id=231656) for us. It has been raised multiple times by other developers as well, so fixing it would help thousands of other web developers bring HLS-based applications to life on Safari and the iOS ecosystem.
Can we please get visibility into the plan and timeline for HLS support with AudioContext in Safari? A critical part of our business and of our customers' products depends on this support in Safari.
We're using new webkitAudioContext() in Safari 15.0 on MacBook and in iOS Safari on iPhone and iPad to create an AudioContext instance, and we're creating a ScriptProcessorNode and attaching it to the HLS/m3u8 source created using audioContext.createMediaElementSource(). The onaudioprocess callback gets called with audio buffers, but they contain no data, only zeros.
If we also connect an AnalyserNode to the same audio source created using audioContext.createMediaElementSource(), analyser.getByteTimeDomainData(dataArray) populates no data either, matching the behavior of onaudioprocess on the ScriptProcessorNode attached to the same source.
What has been tried:
We confirmed that the stream in question is the only stream in the tab and that createMediaElementSource() was called only once to get it.
We confirmed that if the stream source is MP4/MP3 it works with no issues and data is received in onaudioprocess, but when changing the source to HLS/m3u8 it does not work.
We also tried using MediaRecorder with the HLS/m3u8 stream as the source, but didn't get any events or data.
We also tried creating two AudioContexts, using the first as the source and passing its createMediaElementSource() output toward the second AudioContext, which then feeds the ScriptProcessorNode, but Safari does not allow more than one output.
None of the scenarios we tried works, and this is a major blocker for us and for our customers.
Code sample used to create the ScriptProcessorNode:
const AudioContext = window.AudioContext || window.webkitAudioContext;
audioContext = new AudioContext();
// Create a MediaElementAudioSourceNode
// Feed the HTML video element 'VideoElement' into it
const audioSource = audioContext.createMediaElementSource(VideoElement);
const processor = audioContext.createScriptProcessor(2048, 1, 1);
// Route the media element through the processor to the output
audioSource.connect(processor);
processor.connect(audioContext.destination);
processor.onaudioprocess = (e) => {
  // Buffers arrive with data for MP4/MP3 sources,
  // but contain only zeros for HLS/m3u8 sources
  console.log('print audio buffer', e);
};
The exact same behavior is also observed on iOS Safari on iPhone and iPad.
We are asking for your help on this matter ASAP.
Thank you!
Post not yet marked as solved
I'm trying to play audio content built from NSData inside a static library (.a). It works properly when my code is inside an app, but not when it is in a library: I get no error, and no sound plays.
NSError *errorAudio = nil;
NSError *errorFile = nil;

// Clear all cached files from the temporary directory
NSArray *tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
for (NSString *file in tmpDirectory) {
    [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
}

// Set up the temporary directory and temporary file
NSURL *tmpDirURL = [NSURL fileURLWithPath:NSTemporaryDirectory() isDirectory:YES];
NSURL *soundFileURL = [[tmpDirURL URLByAppendingPathComponent:@"temp"] URLByAppendingPathExtension:@"wav"];
[[NSFileManager defaultManager] createDirectoryAtURL:tmpDirURL withIntermediateDirectories:NO attributes:nil error:&errorFile];

// Write the NSData to the temporary file
NSString *path = [soundFileURL path];
[audioToPlay writeToFile:path options:NSDataWritingAtomic error:&errorFile];
if (errorFile) {
    // Error while writing NSData
} else {
    // Init audio player
    self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&errorAudio];
    if (errorAudio) {
        // Audio player could not be initialized
    } else {
        // Audio player was initialized correctly
        [audioPlayer prepareToPlay];
        [audioPlayer stop];
        [audioPlayer setCurrentTime:0];
        [audioPlayer play];
    }
}
I don't check errorFile in this piece of code, but when debugging I can see that its value is nil.
My header file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@property(nonatomic, strong) AVAudioPlayer * audioPlayer;
My m file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@synthesize audioPlayer;
I've been checking for dozens of posts but cannot find any solution, it always works properly in an app, but not in a library. Any help would be greatly appreciated.
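One common cause worth checking (not stated in the post above, just an assumption): when playback code ships in a library, the host app may never configure or activate the shared AVAudioSession, something an app target typically does elsewhere. A minimal sketch of what the library could do before playing, shown in Swift for brevity; the .playback category choice is an assumption:

```swift
import AVFoundation

// Configure and activate the shared audio session before playback.
// If the host app never does this, AVAudioPlayer can silently produce no sound.
func activatePlaybackSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback) // assumed; pick whatever fits the app
    try session.setActive(true)
}
```

It's also worth double-checking that the AVAudioPlayer instance is retained for the duration of playback; a locally scoped player is deallocated before any audio is heard. (The strong property above should already cover that.)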
Post not yet marked as solved
Hello!
There are a few similar open-source projects that implement support for creating macOS M1 guest Virtual Machine on macOS M1 Host, for example:
https://github.com/jspahrsummers/Microverse
https://github.com/KhaosT/MacVM
But both of them share a common problem: any audio from the guest VM (system sounds, or YouTube videos in Safari or Firefox) plays on the host approximately 0.3 seconds late.
Is there any way to remove this latency, so that it is possible to hear real-time audio from the guest VM?
Regards, Eugene.
Post not yet marked as solved
Hi,
I'm a composer who has used Logic since it ran on an Atari, and I was delighted by the update of Logic that includes Dolby mixing, particularly as I have been working with sound spatialisation for many years, mostly live, in concerts of music I did at the Royal College of Music in London and around the world: classics of electronic music like Stockhausen and Jonathan Harvey, generally using Max/IRCAM software, which doesn't always actually work(!).
Anyway, I want to create ambient-type music which rotates the sounds around the listener, and decoding to binaural and listening on AirPods sounds very good. But. If the head tracking could be employed to keep the image of this 3D soundscape fixed, I think (know) the illusion would be so much more pronounced. So is there any way that this can be incorporated into the listening experience (from the Music app on your phone)? Presumably extra data in the file? I'm really not an engineer in that way.
Now even more fiddly, and probably a pipe dream: I'm using the Leslie Cabinet plugin (native to Logic) to create a very lovely sound by spinning the sound of the tanpura (the Indian classical stringed drone instrument).
What could potentially be mind-bending is if the output of the Leslie weren't just stereo but actually surround, and this sound could similarly whirl around the head of the listener, maintaining a fixed reference position (is that what I mean?) as the listener moves their head, so the surround Leslie output could be directly "inserted" into the space and the Atmos system would, so to speak, know where the rotating drum of the Leslie is at all times.
Sometimes, though, I'm using quite fast rotations. It's hard to know what the speed is, as the knob on the Leslie seems to use arbitrary units (ahem, something more scientific would help!), but max speed sounds like about 30Hz, almost audio-rate itself, which creates extraordinary effects.
At the moment I'm doing a rough fake by automating a surround panner, and I even tried mapping in a rotary controller, but actually that creates circles within circles, and it can't match the actual experience you might get in a performance of something like Harvey's 4th String Quartet, where you're surrounded by actual speakers and the quartet is whizzing around you, almost sounding like a cloud of bees (last movement, amazing).
Thing is we're close, and the listening quality of your products here (the Leslie, the Binaural render and the Airpods themselves) really is exceptional - so much better than stuff I (or other people) have made in say Max or Ableton Live. I'm very impressed so far but I feel we could push it to "the next level" as you Americans say.
I hope that explains the sort of thing I would like to work with? I'm sure it's very niche, and feel free to tell me to piss off, lol, but your ads always claim that you're really into making cool stuff. This is cool (says the 55yo balding guy).
I really think this way of listening with AirPods is going to become massive. I'm no gamer, but the implications for immersive sound in games (and for watching films) are huge too (I'm sure you have someone working on it); I'm just a composer writing profoundly uncommercial, rather poetic, classical electronic music ;).
I've attached a rough binaural mix of this thing I'm working on with the tanpuras, temple bells and some improvising musicians I know. Once you hear the sound I think it will become much clearer what I'm talking about, and hopefully you like the music and not just the effect! Very unfinished, and very hot off the press, so to speak.
I've also added a screen grab of my current workaround in Logic and a cool picture of Stockhausen making it work in 1959 (!) before we ever dreamed of having tiny computers in our ears. If you can't help I may have to actually build a version of that spinning table, which would be a pain and expensive; I've spent more than enough on your products over the years. ;)
https://www.dropbox.com/s/uh9a4nx7psiv9qb/Tanpura%20Extended%20with%20Hannah%20Dolby%20Atmos%20Version%20%28A%29.mp3?dl=0
Ok it won't let me submit the pic, ugh...
Thanks for taking the time, let me know what you think, cheers,
Michael Oliva
Post not yet marked as solved
Hi,
we are currently integrating the new CarPlay experience in iOS 14 (com.apple.developer.carplay-audio) into our radio app and already use MPNowPlayingInfoCenter and MPRemoteCommandCenter. In this context we came across some weird behavior: when playing a live stream (MPNowPlayingInfoPropertyIsLiveStream = true), the NowPlayingTemplate on CarPlay always displays a stop button, even though we only configure the play and pause buttons. The stop button then doesn't do anything (because we didn't assign any action to it). Other commands like skipBackward and nextTrack are also disabled.
When declaring the audio as not live (MPNowPlayingInfoPropertyIsLiveStream = false), everything works fine.
Is there any way we can have a live stream and also make the template display a pause button and enable the other actions?
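For context, the command setup described above looks roughly like this (a simplified sketch; the player calls are placeholders, not the app's actual code):

```swift
import MediaPlayer

// Enable only play and pause; explicitly disable the rest.
func configureRemoteCommands() {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.isEnabled = true
    center.playCommand.addTarget { _ in
        // player.play()  // placeholder for the app's player
        return .success
    }

    center.pauseCommand.isEnabled = true
    center.pauseCommand.addTarget { _ in
        // player.pause()  // placeholder for the app's player
        return .success
    }

    // Disabled here, yet CarPlay still shows a stop button
    // when MPNowPlayingInfoPropertyIsLiveStream is true.
    center.stopCommand.isEnabled = false
    center.skipBackwardCommand.isEnabled = false
    center.nextTrackCommand.isEnabled = false
}
```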
Post not yet marked as solved
We have a syndicated radio show going on 21 years here in Reno, NV. We're wondering if we could make an app to simulcast the show online? The show airs on an FM radio station here. What documentation would we have to provide to prove we are legit? Thanks so much in advance; people always ask about hearing the show outside of the FM radio range, and an app would be very cool.
Post not yet marked as solved
When I use AVFoundation to record video in my app, the saved file only contains both video and audio when the recording is shorter than 15 seconds; if the video is longer than 15 seconds, the audio is missing.
My iOS is 15 and my Xcode is 13.0
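If the recording uses AVCaptureMovieFileOutput, one possible culprit (an assumption, since the post doesn't show the capture code) is the movieFragmentInterval property: recordings that run past the fragment interval have been reported to end up without an audio track. A sketch of the commonly suggested workaround:

```swift
import AVFoundation

// Hypothetical capture setup; the fragment-interval line is the point here.
let movieOutput = AVCaptureMovieFileOutput()

// Disable movie fragments before starting to record; with fragments enabled,
// recordings longer than the interval can lose their audio.
movieOutput.movieFragmentInterval = .invalid
```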
I was wondering if there are any downloadable example projects for the PHASE audio framework? I was watching the WWDC 2021 video, but there was no example code to download, and the examples within the video were pretty verbose; I do not want to freeze a frame and type all that in by hand.
I am attempting to replace some old OpenAL code from a few years ago with an alternative; all the OpenAL code shows deprecation messages when I build in Xcode.
The generated PHASE header documentation is kind of sparse and somewhat boilerplate, with no examples.
Thanks in advance.
Post not yet marked as solved
I'm facing a strange issue on two devices, an iPhone 6s and a 12 mini: no audio comes from the media being played when the ringer switch is off. Is this a device-specific issue, a settings issue, an OS issue, or ultimately an app issue? Playback works fine in YouTube and other media apps.
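For what it's worth, whether the silent switch mutes playback is governed by the app's AVAudioSession category: the default (.soloAmbient) is silenced by the ringer switch, while .playback is not. A minimal sketch, assuming media playback is the app's primary function:

```swift
import AVFoundation

// Media apps like YouTube use the .playback category, which keeps
// playing even when the ringer/silent switch is off.
try AVAudioSession.sharedInstance().setCategory(.playback)
try AVAudioSession.sharedInstance().setActive(true)
```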
I'm hoping for guidance on how to prevent my app from stopping/pausing music playing from the Apple Music app. I would prefer that users can choose to listen to their own music while playing the game.
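One approach worth trying (an assumption based on the description, since the post doesn't show the audio setup): use a non-interrupting audio session category so game sounds mix with, rather than interrupt, other audio. A sketch:

```swift
import AVFoundation

// .ambient with .mixWithOthers plays game audio alongside other audio
// (e.g. Apple Music) instead of interrupting it.
try AVAudioSession.sharedInstance().setCategory(.ambient, options: [.mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
```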
Post not yet marked as solved
Where can I find a comprehensive list of all the classes that the built-in Sound Classifier model supports?
Post not yet marked as solved
Something broke in iOS 15 in my app; the same code works fine on iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio, and play it again from the notification bar itself, the same audio plays twice. One instance resumes from where I paused it, and the other plays the same audio from the beginning.
When the issue happens, these are the logs I get:
Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 “Could not find the specified now playing player” UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds
I've been stuck on this issue for two days; I've tried everything but can't work out why it only happens on iOS 15.
Any help will be greatly appreciated.
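One common cause of doubled playback from the lock screen or notification controls (an assumption here, since the post doesn't show the setup code) is registering a remote-command handler more than once, e.g. every time a player screen appears; each registered target then fires on a single tap. A sketch of guarding against that:

```swift
import MediaPlayer

var playTarget: Any?

func registerPlayCommand() {
    let command = MPRemoteCommandCenter.shared().playCommand

    // Remove any previously registered handler so a single tap on the
    // lock screen / notification controls cannot start playback twice.
    if let target = playTarget {
        command.removeTarget(target)
    }
    playTarget = command.addTarget { _ in
        // player.play()  // placeholder for the app's player
        return .success
    }
}
```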
Post not yet marked as solved
I would like to play with the example project shown in session wwdc21-10187.
Post not yet marked as solved
I have my first app ready and crash-free (I think!) using AudioKit. While coding it I used the develop branch. I assume I should submit it with the main-branch packages?
The trouble is I updated my iPad to iOS 15 (yesterday), so I then had to move to Xcode 13 and ended up having a lot of broken AudioKit code with the main branch of AudioKit, as well as a couple of issues with the develop branch, which I managed to fix.
This is my first app submission, so I'd like to get it right; excuse my newbie idiocy.
It seems like it may have been a bad idea to move to iOS 15 and Xcode 13 right now. Should I go back to 12?
Main question though is what 3rd party framework branches should be used in a final App release?
Post not yet marked as solved
I have been developing an app that uses YouTube content fetched from the YouTube Data API, which is publicly provided by YouTube itself.
Basically, my app shows a list of YouTube videos and playlists fetched from the YouTube API in the user interface, and the user can play a video.
The app I am developing does not enable users to "save, convert, or download" any videos, directly or indirectly.
The App Store Review Guidelines mention two points:
1. 5.2.2 states that "Authorization must be provided upon request"
2. 5.2.3 states that "Documentation must be provided upon request"
So my question is: is there any chance my app may face App Store rejection? If yes, what can I do to pass the App Store review process?
If my app is rejected, how can I get "Authorization" or "Documentation" from YouTube? As far as I can tell from the YouTube API documentation, YouTube provides neither; it only lets you register your app on its console and gives you an API key with which you can fetch data.
Post not yet marked as solved
(original question on stack overflow)
Safari requires that a user gesture occurs before the playing of any audio.
However, the user's response to getUserMedia does not appear to count as a user gesture. Or perhaps I have that wrong; maybe there is some way to trigger the playing?
This question ("Why can't JavaScript .play() audio files on iPhone Safari?") details the many attempts to work around the need for a user gesture, but it seems like Apple has closed most of the loopholes. For whatever reason, Safari does not consider the iOS acceptance of the camera/mic usage dialog to be a user gesture, and there's no way to make camera capture count as one.
Is there something I'm missing? Is it impossible to play an audio file after capturing the camera? Or is there some way to respond to the camera being captured with an audio file?
Post not yet marked as solved
Hi,
My account was approved for the com.apple.developer.playable-content entitlement, but that entitlement is now deprecated and I want to switch to the new one, com.apple.developer.carplay-audio.
I'm having some problems making the transition. Do I need to submit a new request to Apple for the new entitlement?
Thanks.
Post not yet marked as solved
Hey,
I am trying to figure out how I can display the currently playing audio sources in my Xcode project.
I believe this became possible in the new Big Sur update thanks to Mac Catalyst.
How can I do this on the Mac?
I'm new to this; can someone guide me, please?
How can I find out what the problem is?
Every time I start audio playback and then turn the iPad/iPhone display off, the app crashes when I reactivate the display after 10-15 minutes.
Here are the first lines of the crash report:
Hardware Model: iPad8,12
Process: VOH-App [16336]
Path: /private/var/containers/Bundle/Application/5B2CF582-D108-4AA2-B30A-81BA510B7FB6/VOH-App.app/VOH-App
Identifier: com.voiceofhope.VOH
Version: 7 (1.0)
Code Type: ARM-64 (Native)
Role: Non UI
Parent Process: launchd [1]
Coalition: com.voiceofhope.VOH [740]
Date/Time: 2021-08-18 22:51:24.0770 +0200
Launch Time: 2021-08-18 22:36:50.4081 +0200
OS Version: iPhone OS 14.7.1 (18G82)
Release Type: User
Baseband Version: 2.05.01
Report Version: 104
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_PROTECTION_FAILURE at 0x000000016d2dffb0
VM Region Info: 0x16d2dffb0 is in 0x16d2dc000-0x16d2e0000; bytes after start: 16304 bytes before end: 79
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
CG raster data 11cad0000-11d814000 [ 13.3M] r--/r-- SM=COW
GAP OF 0x4fac8000 BYTES
---> STACK GUARD 16d2dc000-16d2e0000 [ 16K] ---/rwx SM=NUL ... for thread 0
Stack 16d2e0000-16d3dc000 [ 1008K] rw-/rwx SM=PRV thread 0
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [16336]
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libswiftCore.dylib 0x00000001a8028360 swift::MetadataCacheKey::operator==+ 3773280 (swift::MetadataCacheKey) const + 4
1 libswiftCore.dylib 0x00000001a801ab8c _swift_getGenericMetadata+ 3718028 (swift::MetadataRequest, void const* const*, swift::TargetTypeContextDescriptor<swift::InProcess> const*) + 304
2 libswiftCore.dylib 0x00000001a7ffbd00 __swift_instantiateCanonicalPrespecializedGenericMetadata + 36
Here is a full crash Report:
VOH-App 16.08.21, 20-22.crash
Post not yet marked as solved
Good day, community,
For more than half a year we have been facing a crash with the following call stack:
Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0. libEmbeddedSystemAUs.dylib
InterruptionListener(void*, unsigned int, unsigned int, void const*)
1. libEmbeddedSystemAUs.dylib
InterruptionListener(void*, unsigned int, unsigned int, void const*)
2. AudioToolbox
AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3. AudioToolbox
HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4. AudioToolbox
ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5. AudioToolbox
ASCallbackReceiver_AudioSessionPingMessage + 632
6. AudioToolbox
_XAudioSessionPingMessage + 44
7. libAudioToolboxUtility.dylib
mshMIGPerform + 264
8. CoreFoundation
__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9. CoreFoundation
__CFRunLoopDoSource1 + 444
10. CoreFoundation
__CFRunLoopRun + 1888
11. CoreFoundation
CFRunLoopRunSpecific + 424
12. AVFAudio
GenericRunLoopThread::Entry(void*) + 156
13. AVFAudio
CAPThread::Entry(CAPThread*) + 204
14. libsystem_pthread.dylib
_pthread_start + 156
15. libsystem_pthread.dylib
thread_start + 8
We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems the problem is not there.
We also used the FMOD sound engine earlier and had the same issue.
At this time we see around 100 crash events every day, which makes us upset. It looks like it started with iOS 13.
My main problem is that I don't communicate with the AudioToolbox or AVFAudio APIs directly, but use third-party sound engines instead.
I believe I am not the only one who has faced this problem.
Also there is a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/
The last message deserves special attention:
https://zhuanlan.zhihu.com/p/370791950
where Jeffrey Zhuang did some research. This might be helpful for Apple's support team.
Any help is highly appreciated.
Best regards,
Sergey.