Post not yet marked as solved
Is there a way to programmatically query whether my AVAudioSession is able to play even when the app is minimized or the screen is locked? I need this to debug background audio permissions: my AVAudioSession keeps getting paused when the app goes into the background, and it resumes once the app returns to the foreground. Moreover, when I try to call setActive on the AVAudioSession in didEnterBackground, it fails with error code 561015905, which suggests it is permission related.
My Info.plist already has
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
</array>
added to it.
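For reference, a minimal sketch of how background playback is typically set up, assuming the .playback category is what's wanted here (the function name is hypothetical):

```swift
import AVFoundation

// A sketch: background audio generally requires the .playback category
// (in addition to the "audio" UIBackgroundModes entry) and an active
// session before the app is backgrounded.
func configureBackgroundAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    } catch {
        // Error 561015905 ('!pla') typically means the session cannot
        // start playing in its current state, e.g. when activated from
        // the background without audio already running.
        print("Audio session error: \(error)")
    }
}
```

Activating the session in the foreground, before playback begins, avoids hitting the background-activation restriction in didEnterBackground.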
Hello all. I'm running Big Sur 20A4299v on my 16-inch MacBook Pro, and after the beta install, DisplayPort audio has ceased to work. Has anyone else come across this bug yet? Thanks, y'all.
Hi,
I've been asked to develop software for macOS that monitors some streams 24/7 and logs the aired music.
I'd like to use ShazamKit, but I don't know whether I have to do anything in particular to use it in a commercial app, and I don't know whether I might hit some threshold with so many requests (it could be 10 simultaneous streams, it could be 100; I don't know at the moment).
Any info about that?
TL;DR: a C function can't open an MP3 file in the app's documents directory; am I missing some sort of permission?
I am trying to create an app that plays music through the BASS audio library for C/C++, and while I have it playing music, I cannot get it to open local files.
To create a stream from a file in this library, you call the BASS_StreamCreateFile() function and pass it the URL of the file to use. But even though I can verify that the URL I'm passing is correct and the file is visible in the Files app, it throws error code 2, "Cannot open file".
However, when I use BASS_StreamCreateURL() and pass in a URL from the internet, it works perfectly, so I have to assume the problem has something to do with file permissions.
Here is the C function in which I am creating these streams
int createStream(const char* url) {
    //HSTREAM stream = BASS_StreamCreateURL("https://vgmsite.com/soundtracks/legend-of-zelda-ocarina-of-time-original-sound-track/fticxozs/68%20-%20Gerudo%20Valley.mp3", 0, 0, NULL, 0);
    HSTREAM stream = BASS_StreamCreateFile(false, url, 0, 0, 0);
    if (stream == 0) {
        printf("Error at createStream, error code: %i\n", BASS_ErrorGetCode());
        return 0;
    } else {
        return stream;
    }
}
The commented-out line is the working stream created from an internet URL.
And here is the URL I am passing in:
guard let documentsURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first else { return }
gerudoValleyURL = documentsURL.appendingPathComponent("GerudoValley.mp3")
stream = createStream(gerudoValleyURL.absoluteString)
I can confirm that the MP3 "GerudoValley.mp3" is in the app's documents directory in the files app.
Is there anything I can do to allow this C function to open MP3s from the app's documents directory? The exact MP3 from that link is already there.
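One likely cause worth ruling out (an assumption, not confirmed): URL.absoluteString yields a "file://..." string with a scheme prefix, while BASS_StreamCreateFile() expects a plain filesystem path. A sketch of the change on the Swift side:

```swift
import Foundation

// Sketch: pass .path, not .absoluteString, to a C file-opening API.
// fopen()-style calls cannot resolve a "file:///..." scheme string.
let fileManager = FileManager.default
if let documentsURL = fileManager.urls(for: .documentDirectory,
                                       in: .userDomainMask).first {
    let gerudoValleyURL = documentsURL.appendingPathComponent("GerudoValley.mp3")
    // .path is e.g. "/var/mobile/.../Documents/GerudoValley.mp3"
    // .absoluteString is e.g. "file:///var/mobile/.../GerudoValley.mp3"
    let stream = createStream(gerudoValleyURL.path)
}
```

If the path form still fails, then it would genuinely point at a sandbox or file-protection issue rather than the URL string.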
Hello,
I am creating a VoIP app and would like to build a sound visualizer for the voice of the other party.
example:
https://medium.com/swlh/swiftui-create-a-sound-visualizer-cadee0b6ad37
The call functionality was implemented using CallKit
(based on Twilio's quickstart):
https://jp.twilio.com/docs/voice/sdks/ios/get-started
https://github.com/twilio/voice-quickstart-ios
After importing AVFoundation, I need to route the audio through an AVAudioEngine mixer, but I have no idea how to feed the voice into the mixer.
Any guidance is appreciated!
Thank you!
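Assuming the remote voice can be made to reach an AVAudioEngine (for example, via an AVAudioPlayerNode that the VoIP SDK's audio device feeds), a common pattern for a visualizer is to install a tap on a mixer and compute a level per buffer. A sketch, with hypothetical names:

```swift
import AVFoundation

// Sketch: tap the engine's main mixer and report an RMS level that a
// SwiftUI visualizer can consume. Assumes the remote audio is already
// routed into this engine.
func installLevelTap(on engine: AVAudioEngine,
                     onLevel: @escaping (Float) -> Void) {
    let mixer = engine.mainMixerNode
    let format = mixer.outputFormat(forBus: 0)
    mixer.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        let count = Int(buffer.frameLength)
        // Root-mean-square of the buffer as a rough loudness value.
        var sum: Float = 0
        for i in 0..<count { sum += samples[i] * samples[i] }
        onLevel(sqrt(sum / Float(max(count, 1))))
    }
}
```

The harder part is the routing itself: whether Twilio exposes the remote PCM depends on its audio-device API, so that piece is SDK specific.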
I have been using AVAudioEngine to take audio from the mic and send it out over a WebRTC connection. When I use the iPhone device mic, this seems to work as expected. But if I run the app with Bluetooth headphones connected, the engine reports this error when trying to start:
[avae] AVAudioEngine.mm:160 Engine@0x2833e1790: could not initialize, error = -10868
[avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1397:Initialize: (err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())): error -10868
Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error -10868.)
I see that error code -10868 is:
@constant kAudioUnitErr_FormatNotSupported
Returned if an input or output format is not supported
...
kAudioUnitErr_FormatNotSupported = -10868
but that doesn't seem like it can be quite correct. I know that the output format is supported because the same format works correctly when my headphones are not attached. And I am pretty sure that the input format is supported because I am able to simply hook up Headphones InputNode -> Mixer -> Headphones OutputNode and correctly hear the audio from the mic.
So I can only assume that this means the format conversion is not supported.
My Questions:
Is this a bug?
Is there any way I can work around this?
Notes:
My full audio graph looks like this, where all the "mixers" are just AVAudioMixerNodes:
// InputNode (Mic) -> Mic Mixer ---\
//                                  >-> WebRTC Mixer -> Tap -> WebRTC Framework
// AudioPlayer 1 -> Player Mixer --/
//
// AudioPlayer 2 -> Player Mixer -----> LocalOutputMixer -> OutputNode (Device Speakers/Headphones)
but the issue still happens even if I simplify down to this:
InputNode (Mic) -> Mixer -> Tap -> WebRTC Framework
Specifically, it happens when a single mixer node is connected with the following input and output formats. The input format is:
(lldb) po audioEngine.inputNode.inputFormat(forBus: 0).streamDescription.pointee
▿ AudioStreamBasicDescription
	- mSampleRate : 16000.0
	- mFormatID : 1819304813
	- mFormatFlags : 41
	- mBytesPerPacket : 4
	- mFramesPerPacket : 1
	- mBytesPerFrame : 4
	- mChannelsPerFrame : 1
	- mBitsPerChannel : 32
	- mReserved : 0
The output format WebRTC expects is:
▿ AudioStreamBasicDescription
	- mSampleRate : 48000.0
	- mFormatID : 1819304813
	- mFormatFlags : 12
	- mBytesPerPacket : 2
	- mFramesPerPacket : 1
	- mBytesPerFrame : 2
	- mChannelsPerFrame : 1
	- mBitsPerChannel : 16
	- mReserved : 0
My headphones are Jaybird Freedom 2.
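A possible workaround (an assumption, not a confirmed fix): rather than asking the engine to convert inside the graph, tap the mixer in its native format and resample to the 48 kHz Int16 format WebRTC expects with an explicit AVAudioConverter. A sketch, with hypothetical names:

```swift
import AVFoundation

// Sketch: tap in the node's own format, convert outside the graph.
func makeConvertedTap(engine: AVAudioEngine,
                      onBuffer: @escaping (AVAudioPCMBuffer) -> Void) {
    let mixer = engine.mainMixerNode
    let inFormat = mixer.outputFormat(forBus: 0)
    guard let outFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: 48000,
                                        channels: 1,
                                        interleaved: true),
          let converter = AVAudioConverter(from: inFormat, to: outFormat)
    else { return }

    mixer.installTap(onBus: 0, bufferSize: 1024, format: inFormat) { buffer, _ in
        let ratio = outFormat.sampleRate / inFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio)
        guard let out = AVAudioPCMBuffer(pcmFormat: outFormat,
                                         frameCapacity: capacity) else { return }
        var err: NSError?
        var consumed = false
        // Feed the tap buffer to the converter exactly once per callback.
        converter.convert(to: out, error: &err) { _, outStatus in
            if consumed {
                outStatus.pointee = .noDataNow
                return nil
            }
            consumed = true
            outStatus.pointee = .haveData
            return buffer
        }
        if err == nil { onBuffer(out) }
    }
}
```

This keeps the graph itself running entirely in the hardware format (16 kHz mono with the Bluetooth mic active), so the engine never has to perform the 16 kHz Float32 to 48 kHz Int16 conversion that appears to trigger -10868.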
tvOS 14 has no option to choose an output source, so no AirPlay to HomePods. Has the option moved, or been removed in this version? I see nothing about it in the release notes.
Hello!
I have an iPhone XS Max.
Until the iOS 15 update I could listen to music over Bluetooth in my car without a problem, but ever since the update, whenever I turn off the car and then turn it on again, the music stutters/skips and is very choppy, making it impossible to listen.
If I then forget the device and re-pair it, everything works fine until I turn off the car, and then it's the same process again.
I don't want to have to pair my phone to my car every time I enter the car, and this only started happening with the iOS 15 update.
Any solution?
Thanks!
Is it possible to use SNAudioFileAnalyzer with a live HLS (m3u8) stream? Maybe we need to somehow extract the audio from it first?
And can we use SNAudioFileAnalyzer with a real remote URL, or only with files in the local file system?
Specifically, I am trying to set .constrainsSeekingForwardInPrimaryContent when creating an AVPlayerInterstitialEvent, but it has no effect on iOS. I don't have access to tvOS at the moment to try it there, but according to the docs it should work on iOS 15 and later.
let event = AVPlayerInterstitialEvent(
    primaryItem: App.player!.currentItem!,
    identifier: ad.podId,
    time: CMTime(seconds: ad.timeOffsetSec!, preferredTimescale: 1),
    templateItems: adPodTemplates,
    // sadly on iOS these restrictions seem to simply not work
    restrictions: [
        .constrainsSeekingForwardInPrimaryContent,
        .requiresPlaybackAtPreferredRateForAdvancement
    ],
    resumptionOffset: .zero,
    playoutLimit: .invalid)
Thoughts?
I have several hundred or thousand files locally on my machine. When I add them to Music and then play them, they skip at some point. I have no clue what causes this, but I have read there might be a corrupt index somewhere that needs to be rebuilt. The files themselves are definitely not corrupt, because I can listen to them just fine in Finder or any other player/app.
MacOS 12.2 Beta (21D5025f)
Mac mini (M1, 2020)
Hello.
When trying to add more than 8 nodes to the graph, the AUGraphInitialize function returns an error code -10877 (kAudioUnitErr_InvalidElement).
What is wrong? How can I debug this InvalidElement?
var acd = AudioComponentDescription()
acd.componentManufacturer = kAudioUnitManufacturer_Apple
acd.componentType = kAudioUnitType_MusicDevice
acd.componentSubType = kAudioUnitSubType_MIDISynth

var units = [AudioUnit]()
for i in 0..<number {
    var instrumentNode = AUNode()
    AUGraphAddNode(graph, &acd, &instrumentNode)
    var instrumentUnit: AudioUnit?
    AUGraphNodeInfo(graph, instrumentNode, nil, &instrumentUnit)
    AUGraphConnectNodeInput(graph, instrumentNode, 0, mixerNode, UInt32(i))
    if let instrumentUnit = instrumentUnit {
        units.append(instrumentUnit)
    }
}
AUGraphConnectNodeInput(graph, mixerNode, 0, outNode, 0)
AUGraphInitialize(graph)
AUGraphStart(graph)
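One possible cause (an assumption based on the failure starting at exactly the 9th node): multichannel mixer units default to 8 input buses, so connecting to bus index 8 or higher is an invalid element. Raising the mixer's input bus count before AUGraphInitialize may help:

```swift
import AudioToolbox

// Sketch: raise the mixer's input-bus count so bus indices >= 8 are
// valid elements. Must happen before AUGraphInitialize. Uses the
// graph, mixerNode, and number values from the code above.
var mixerUnit: AudioUnit?
AUGraphNodeInfo(graph, mixerNode, nil, &mixerUnit)
var busCount = UInt32(number)
if let mixerUnit = mixerUnit {
    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input,
                         0,
                         &busCount,
                         UInt32(MemoryLayout<UInt32>.size))
}
```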
I have a game built as a PWA, and the minute I add sounds that overlap, everything goes to pot on mobile. The sounds work perfectly on desktop, but on an iPhone X the drawing and the sounds get randomly delayed and serialized. So even though on desktop the visual of a block dropping is accompanied by the sound of a block dropping, on mobile I get one or the other first, and if I do 16 in a row, I'll get some set of each before or after one another (3 sounds, 8 visual animations, 6 sounds, 5 visual animations, 2 sounds, 3 visual animations).
I'd rather not use a dependency like Howler.js, but I have decided to do so.
I've tried both DOM audio elements and new Audio() objects in JavaScript. Nothing seems to make it better. Thoughts?
To see this in action - https://staging.likeme.games
Hi guys,
We implement AVPlayer in our app, but it seems to have this odd problem where the audio does not play from the speakers on iOS 14.2.1. It seems to work fine on 14.2.
Is there some setting or configuration that we can use to get around this?
Many thanks
A Web Audio panner node processing a WebRTC stream has no spatial effect on iOS Safari 15.1.1.
Moving the positions of the panner node or the listener has no effect at all.
Stream:
webRTC p2p stream , audio only
Browser:
iOS safari 15.1.1
DEMO
This is my demo code: https://github.com/random-vincent/webRTCP2Pdemo-spatial-audio
This is a demo based on the WebRTC team's GitHub samples.
webRTC Stream -> MediaStream source node -> panner node -> destination
Usage:
Clone this repo
Run an https service in the root directory of this project
Open the url on browser
Click call to establish a webrtc p2p connection.
Click init to initialize the spatial audio.
Change positions and forwards of panner node and listener
Hey all,
It seems like there isn't support for MPMusicPlayerController on macOS. I was trying to leverage MusicKit for a personal macOS app with Apple Music playback.
I was wondering:
Is there an alternative approach to MPMusicPlayerController on macOS that lets me play Apple Music entities in a similar style (e.g., player.setQueue(with: appleSongId))?
Are there known plans to bring MPMusicPlayerController to macOS, or a known reason why it's not supported?
Best!
I've got volume in my implementation. Too much, hurricane-force volume. The consistent problem is that the volume is blasting when I create an ambient or channel mixer (not a point or volumetric source; e.g., a calm breeze sound). I set the level on the mixer and nothing seems to happen. I'd like to set the volume lower.
On the spatial mixer, though, if I set the gain, rolloff, and direct path level on the source node (a point or volumetric source), then the spatial-mixer case appears to work, with no blasting audio.
I've been following the WWDC examples (I've watched the video about 4 times now). It appears I should not use the source node with the ambient and channel mixers? That seems to be an option only when adding the parameter to the spatial mixer. The ambient mixer seems to want only the listener and a quaternion direction (I normalized to 1).
If I set the calibration to relative SPL on the sampler node, that always seems to cause blasting audio.
I added the sound assets as dynamic, using WAV format at 32 bits and 44.1 kHz.
Also, are there any examples of the meta parameters? Is that how I could dynamically adjust the level? I think there was a passing reference to it in the WWDC video.
Any pointers would be appreciated. I wonder if I'm making incorrect assumptions about how PHASE works. I try to set up as much as possible before I start the engine (especially adding child nodes).
I'm trying to construct what I would consider a tracklist of an Album from MusicKit for Swift, but the steps I've had to take don't feel right to me, so I feel like I am missing something. The thing is that I don't just want a bag of tracks; I want their order (trackNumber) AND what disc they're on (discNumber), if there is more than one disc.
At the moment, I can get to the tracks for the album with album.with([.tracks]), which gives a MusicItemCollection<Track>, but each Track DOES NOT have a discNumber. This means that in multi-disc albums all the tracks are munged together.
To get to the discNumbers, I've had to fetch an equivalent Song using the same ID, which DOES contain the discNumber as well as the trackNumber. But this collection is not strongly connected to the Album, so I've had to create a containing struct to hold both the Album (which contains Tracks) and some separate Songs, making my data model slightly more complex.
(Note that you don't appear to be able to do album.with([.songs]) to skip getting to a Song via a Track.)
I can't get my head around this structure: what is the difference between a Track and a Song?
I would think that the difference is that a Track is a specific representation on a specific Album, and a Song is a more general representation that might be shared among more Albums (like compilations). But then surely discNumber should actually be on the Track, as it is specific to the Album, and arguably Song shouldn't even have a trackNumber of its own?
In any case, can anyone help with the quickest way to get the trackNumbers and discNumbers of each actual track on an Album (starting with a MusicItemID)?
var songs = [Song]()
let detailedAlbum = try await album.with([.tracks])
if let tracks = detailedAlbum.tracks {
    for track in tracks {
        let resourceRequest = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: track.id)
        async let resourceResponse = resourceRequest.response()
        guard let song = try await resourceResponse.items.first else { break }
        songs.append(song)
    }
}
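One possible simplification of the per-track loop above (a sketch, assuming Song carries the discNumber and trackNumber as described): fetch all the songs in a single catalog request by matching the whole set of track IDs, then sort to rebuild the album order.

```swift
import MusicKit

// Sketch: one batched catalog request instead of one request per track.
func orderedSongs(for album: Album) async throws -> [Song] {
    let detailedAlbum = try await album.with([.tracks])
    guard let tracks = detailedAlbum.tracks else { return [] }
    let ids = tracks.map(\.id)
    let request = MusicCatalogResourceRequest<Song>(matching: \.id, memberOf: ids)
    let response = try await request.response()
    // Sort by disc number, then track number, to restore album order.
    return response.items.sorted {
        ($0.discNumber ?? 0, $0.trackNumber ?? 0) <
        ($1.discNumber ?? 0, $1.trackNumber ?? 0)
    }
}
```

This avoids the one-request-per-track round trips, though it doesn't answer the underlying modeling question of why discNumber lives on Song rather than Track.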
Hi,
I'm able to play songs from the Apple Music catalog (using the catalogId) by adding an item to the QueueProviderBuilder as shown below
queueProviderBuilder.items(MediaItemType.SONG, "1590036021");
playerController.prepare(queueProviderBuilder.build(), true);
However, I'm unable to play a user-uploaded song using MediaItemType.UPLOADED_AUDIO. I'm unsure what to pass in as the "itemId", as there's no catalogId in the Apple Music API response for a user's own song uploaded to iCloud. I have managed to obtain the URL of the uploaded song, but that doesn't work either. Any ideas?
When I use PHASE on my iPad, the test app is in landscape mode (left or right) only, but it seems like PHASE is in portrait mode; that is, 90 degrees off, with the normal pointed at me. Is this a case where I need to use the world transform and listen for view-orientation notifications? Or does PHASE handle it automatically? I notice in the Simulator that when I rotate the app, the speakers on my Mac Pro stay fixed, which is what I was expecting. Or maybe it's my imagination: it sounds like portrait on my device while I'm in landscape. I do have the supported interface orientations set in my plist. Actually, it's kind of annoying having the speakers on one side of the iPad.
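A sketch of the orientation-notification approach, under the assumption that PHASE does not track interface orientation by itself and the listener transform must be rotated manually (the function name and angle mapping are my own):

```swift
import PHASE
import simd
import UIKit

// Sketch: rotate the PHASE listener to match the current interface
// orientation, e.g. called from an orientation-change notification.
func updateListener(_ listener: PHASEListener,
                    for orientation: UIInterfaceOrientation) {
    let angle: Float
    switch orientation {
    case .landscapeLeft:      angle = -.pi / 2
    case .landscapeRight:     angle =  .pi / 2
    case .portraitUpsideDown: angle =  .pi
    default:                  angle = 0
    }
    // Rotate about the z axis (the axis pointing out of the screen).
    let rotation = simd_quatf(angle: angle, axis: simd_float3(0, 0, 1))
    listener.transform = simd_float4x4(rotation)
}
```

If PHASE turns out to compensate for orientation automatically on some OS versions, this would double-rotate, so it's worth verifying against a device before keeping it.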