I was wondering if there are any downloadable sample projects for the PHASE audio framework? I watched the WWDC 2021 video, but there was no sample code to download. The examples in the video were pretty verbose, and I don't want to freeze-frame the video and retype all of that by hand.
I am attempting to replace some old OpenAL code from a few years ago with an alternative solution. All of the OpenAL code shows deprecation warnings when I build in Xcode.
The PHASE documentation generated from the headers is sparse and somewhat boilerplate, with no examples.
Thanks in advance.
Post not yet marked as solved
I would like to know which multichannel formats can be used in a PHASEAmbientMixerDefinition. What channel layouts are supported? Is first-order Ambisonics supported?
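For anyone experimenting with this, PHASEAmbientMixerDefinition takes an AVAudioChannelLayout and an orientation quaternion at construction. A minimal sketch using a standard 5.1 layout; whether any given layout tag is actually accepted is exactly the open question here:

```swift
import PHASE
import AVFoundation

let engine = PHASEEngine(updateMode: .automatic)

// Build a standard 5.1 channel layout; other layout tags can be
// substituted here to test what the ambient mixer will accept.
let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_AudioUnit_5_1)!

// Orientation of the ambient bed relative to the listener
// (identity quaternion = no rotation).
let orientation = simd_quatf(angle: 0, axis: simd_float3(0, 1, 0))

let ambientMixer = PHASEAmbientMixerDefinition(channelLayout: layout,
                                               orientation: orientation)
```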
Post not yet marked as solved
I want to create a sort of soundscape in surround sound. Imagine something along the lines of: the user can place the sound of a waterfall to their front right, the sound of frogs croaking to their left, and so on.
I have an AVAudioEngine playing a number of AVAudioPlayerNodes. I'm using AVAudioEnvironmentNode to simulate the positioning of these. The position seems to work correctly. However, I'd like these to work with head tracking so if the user moves their head the sounds from the players move accordingly.
I can't figure out how to do it, or find any docs on the subject. Is it possible to make AVAudioEngine output surround sound, and if so, would the tracking just work automagically, the same as it does when playing surround-sound content using AVPlayerItem? If not, is the only way to achieve this effect to use CMHeadphoneMotionManager and manually move the AVAudioEnvironmentNode's listener around?
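For the manual route, a minimal sketch of the CMHeadphoneMotionManager approach, assuming AirPods-class headphones and an already configured AVAudioEnvironmentNode (the axis conventions and signs below are assumptions and may need adjusting per app):

```swift
import AVFoundation
import CoreMotion

// Feed headphone attitude into the environment node's listener
// orientation on every motion update.
final class HeadTracker {
    private let motionManager = CMHeadphoneMotionManager()

    func start(environmentNode: AVAudioEnvironmentNode) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // CMAttitude angles are in radians; AVAudio3DAngularOrientation
            // expects degrees.
            environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(
                yaw: -Float(attitude.yaw) * 180 / .pi,
                pitch: Float(attitude.pitch) * 180 / .pi,
                roll: Float(attitude.roll) * 180 / .pi)
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}
```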
Post not yet marked as solved
I'm attempting to create an app that uses the spatial audio API but doesn't play audio the whole time.
I want the user to be able to listen to other media outside my app, but gather X, Y, and Z axis information most of the time while the app is running in the background.
Does anyone know if this is possible?
Post not yet marked as solved
When I use PHASE on my iPad, the test app is landscape-only (left or right). But PHASE seems to behave as if it's in portrait mode; that is, 90 degrees off from a normal pointed at me. Is this a case where I need to use the world transform and listen for view-orientation notifications, or does PHASE handle it automatically? I notice in the Simulator that when I rotate the app, the speakers on the Mac Pro stay fixed, which is what I was expecting. Or maybe it's my imagination, but it sounds like portrait on my device while I'm in landscape. I do have the supported interface orientations set in my Info.plist. It's actually kind of annoying having the speakers on one side of the iPad.
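In case PHASE does not compensate for interface orientation, a hypothetical sketch (not confirmed behavior) is to rotate the listener transform yourself whenever the orientation changes; the rotation axis here is an assumption that depends on your coordinate conventions:

```swift
import PHASE
import UIKit
import simd

// Map the current interface orientation to a listener rotation.
func listenerTransform(for orientation: UIInterfaceOrientation) -> matrix_float4x4 {
    let angle: Float
    switch orientation {
    case .landscapeLeft:      angle = -.pi / 2
    case .landscapeRight:     angle =  .pi / 2
    case .portraitUpsideDown: angle =  .pi
    default:                  angle =  0
    }
    // Rotate about the axis pointing out of the screen (z in this sketch).
    let rotation = simd_quatf(angle: angle, axis: simd_float3(0, 0, 1))
    return matrix_float4x4(rotation)
}

// Usage: listener.transform = listenerTransform(for: currentOrientation)
```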
Post not yet marked as solved
I've got volume in my implementation. Too much, hurricane-force volume. The consistent problem is that the volume blasts whenever I create an ambient or channel mixer (not a point or volumetric source; e.g. a calm-breeze sound). I set the level on the mixer and nothing seems to happen. I'd like to set the volume lower.
On the spatial mixer, though, if I set the gain, rolloff, and direct-path level on the source node (a point or volumetric source), the spatial-mixer case appears to work and there's no blasting audio.
I've been following the WWDC examples (watched the video about four times now). It appears I should not use a source node with the ambient and channel mixers? Adding that parameter seems to be an option only for the spatial mixer; the ambient mixer seems to want only the listener and an orientation quaternion (which I normalized to 1).
If I set the calibration mode to relative SPL on the sampler node, that also always seems to cause blasting audio.
I added the sound assets with dynamic normalization, using WAV files at 32 bits and 44.1 kHz.
Also, are there any examples of the meta parameters? Is that how I could dynamically adjust the level? I think there was a passing reference to them in the WWDC video.
Any pointers would be appreciated. I wonder if I'm making consistently wrong assumptions about how PHASE works. I try to set up as much as possible before I start the engine (especially adding child nodes).
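A hedged sketch of the two level controls mentioned above, calibration and a gain meta parameter; the asset identifier and values are placeholders, and whether these tame the ambient/channel-mixer blasting is an assumption:

```swift
import PHASE
import AVFoundation

let engine = PHASEEngine(updateMode: .automatic)
let mixerDefinition = PHASEChannelMixerDefinition(
    channelLayout: AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Stereo)!)

let samplerDefinition = PHASESamplerNodeDefinition(
    soundAssetIdentifier: "breeze",          // placeholder asset id
    mixerDefinition: mixerDefinition)

// Option 1: calibrated output level. With .relativeSpl, `level` is in dB,
// so positive values are loud; a negative value attenuates.
samplerDefinition.setCalibrationMode(calibrationMode: .relativeSpl, level: -20)

// Option 2: a gain meta parameter (0...1) that can be changed at runtime.
samplerDefinition.gainMetaParameterDefinition =
    PHASENumberMetaParameterDefinition(value: 0.1, identifier: "breezeGain")

// Later, on a running sound event:
// soundEvent.metaParameters["breezeGain"]?.value = 0.5
```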
Post not yet marked as solved
I tried to run multiple demos utilising spatial audio, but no matter what I do, I only get 2-channel output, which is also confirmed by calling:
let numHardwareOutputChannels = gameView.audioEngine.outputNode.outputFormat(forBus: 0).channelCount
My Apple TV is connected to a Dolby Atmos-capable audio system, which works just fine.
So my question is, more or less: how do I convince a tvOS app that my Apple TV has multichannel output?!
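One thing worth checking is the audio session: apps reportedly need to opt in to multichannel output via AVAudioSession before the engine's output format reflects it. A hedged sketch; whether this is sufficient for a given receiver is an assumption:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .moviePlayback)
    // Ask for as many channels as the current route can deliver.
    let maxChannels = session.maximumOutputNumberOfChannels
    try session.setPreferredOutputNumberOfChannels(maxChannels)
    try session.setActive(true)
    print("Output channels:", session.outputNumberOfChannels)
} catch {
    print("Audio session configuration failed:", error)
}
```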
Post not yet marked as solved
I am using the PHASE framework for 3D audio in space, with sound echoes and so on.
I am using the following code to play the audio.
final class PhaseManager {
    private var engine: PHASEEngine!
    private var listener: PHASEListener!

    init() {
        engine = PHASEEngine(updateMode: .automatic)
        engine.outputSpatializationMode = .alwaysUseChannelBased
        engine.defaultReverbPreset = .mediumRoom
        listener = PHASEListener(engine: engine)
        listener.transform = matrix_identity_float4x4
        try? engine.rootObject.addChild(listener)
        try? engine.start()
    }

    func play() {
        let spatialPipelineFlags: PHASESpatialPipeline.Flags = [.directPathTransmission, .lateReverb]
        let spatialPipeline = PHASESpatialPipeline(flags: spatialPipelineFlags)!
        let spatialMixerDefinition = PHASESpatialMixerDefinition(spatialPipeline: spatialPipeline)
        let joinSoundId = "\(Int.random(in: 0..<Int.max))"
        let audioFileUrl = Bundle.main.url(forResource: "sound", withExtension: "mp3")!
        let soundAsset = try! engine.assetRegistry.registerSoundAsset(
            url: audioFileUrl,
            identifier: joinSoundId,
            assetType: .resident,
            channelLayout: nil,
            normalizationMode: .dynamic)
        let samplerNodeDefinition = PHASESamplerNodeDefinition(
            soundAssetIdentifier: soundAsset.identifier,
            mixerDefinition: spatialMixerDefinition
        )
        samplerNodeDefinition.playbackMode = .oneShot
        let soundEventAsset = try! engine.assetRegistry.registerSoundEventAsset(rootNode: samplerNodeDefinition, identifier: nil)
        let source = PHASESource(engine: engine)
        source.gain = 0.02
        source.transform = matrix_identity_float4x4
        try! engine.rootObject.addChild(source)
        let mixerParameters = PHASEMixerParameters()
        mixerParameters.addSpatialMixerParameters(identifier: spatialMixerDefinition.identifier, source: source, listener: listener)
        let soundEvent = try! PHASESoundEvent(engine: engine, assetIdentifier: soundEventAsset.identifier, mixerParameters: mixerParameters)
        soundEvent.start { [weak self] _ in
            self?.engine.assetRegistry.unregisterAsset(identifier: joinSoundId) { _ in }
        }
    }
}
let manager = PhaseManager()
manager.play()
After playback is complete, the memory is not released even though unregisterAsset is executed.
When manager.play() is called repeatedly, memory usage increases proportionally.
Is there any way to free the memory?
I have already confirmed that removing the .lateReverb flag or setting source.gain = 1 will free memory, but that does not do what I want to achieve.
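One thing that stands out in the code above: each play() call registers a sound event asset (with an auto-generated identifier) and adds a new PHASESource to the scene graph, but only the sound asset is ever unregistered. A hedged sketch of cleaning these up in the completion handler as well; whether this resolves the .lateReverb case specifically is an assumption:

```swift
// Inside play(), keep the generated sound event identifier around:
let soundEventAsset = try! engine.assetRegistry.registerSoundEventAsset(
    rootNode: samplerNodeDefinition, identifier: nil)
let soundEventId = soundEventAsset.identifier

soundEvent.start { [weak self] _ in
    guard let self else { return }
    // Unregister both the sound asset and the sound event asset.
    self.engine.assetRegistry.unregisterAsset(identifier: joinSoundId) { _ in }
    self.engine.assetRegistry.unregisterAsset(identifier: soundEventId) { _ in }
    // Drop the sources added to the root object so they can deallocate.
    // Note: removeChildren() removes every child, including the listener,
    // which would then need to be re-added.
    self.engine.rootObject.removeChildren()
}
```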