Provide a consistent spatial audio experience across all supported devices with geometry-aware audio.

PHASE Documentation

Posts under PHASE tag

4 Posts
Post not yet marked as solved
0 Replies
198 Views
Hello, we are trying to use audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We set up the AudioSession as follows:

```objc
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth |
                              AVAudioSessionCategoryOptionAllowBluetoothA2DP |
                              AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];
```

We create our audio unit as follows:

```objc
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;

AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),
                 "couldn't create a new instance of Apple Voice Processing IO.");

// Enable input and output on the voice-processing IO unit.
UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input, audioUnitElementIOInput,
                                      &one_, sizeof(one_)),
                 "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output, audioUnitElementIOOutput,
                                      &one_, sizeof(one_)),
                 "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);

// Set the client stream format on both scopes.
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel,
                                         CAStreamBasicDescription::kPCMFormatInt16,
                                         isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input, 0,
                                      &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output, 1,
                                      &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the output client format on Apple Voice Processing IO");

// Set, then read back, the maximum frames per slice.
UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0,
                                      &maxFramesPerSlice_, sizeof(UInt32)),
                 "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0,
                                      &maxFramesPerSlice_, &propSize_),
                 "couldn't get max frames per slice on Apple Voice Processing IO");

// Install the render (playback) and input (recording) callbacks.
AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Output, 0,
                                      &renderCallbackStruct_, sizeof(renderCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Input, 0,
                                      &inputCallbackStruct_, sizeof(inputCallbackStruct_)),
                 "couldn't set input callback on Apple Voice Processing IO");
```

As soon as we try to start the AudioUnit, we get the following error:

```
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]:
Error Domain=com.apple.coreaudio.phase Code=1346924646
"failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6"
UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
```

We do not use the PHASE framework on our side, and the error is neither clear to us nor documented anywhere. We also tried an AudioUnit that only does speaker output, which works perfectly; but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suspect that somewhere inside PHASE a VoIP IO audio unit is running that we cannot stop or kill when we try to create our own, and that this breaks the whole flow. It used to work on visionOS 1.0.1. Regards, Summit-tech
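One avenue worth trying, since the hand-built VoiceProcessingIO unit appears to collide with PHASE-managed IO on visionOS: let AVAudioEngine own the IO unit and enable its built-in voice processing instead. A minimal Swift sketch of that setup follows (untested on visionOS; whether it sidesteps the conflict is an assumption, and the function name is illustrative):

```swift
import AVFoundation

// Minimal sketch: AVAudioEngine with voice processing enabled, instead of a
// hand-built kAudioUnitSubType_VoiceProcessingIO unit. Whether this avoids
// the PHASE-owned IO conflict on visionOS is an untested assumption.
func startVoiceChatEngine() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth, .allowBluetoothA2DP, .mixWithOthers])
    try session.setActive(true)

    let engine = AVAudioEngine()
    // Enables Apple's echo cancellation / voice processing on the IO unit
    // that AVAudioEngine manages internally (applies to input and output).
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Tap the microphone; a real app would feed these buffers to its VoIP stack.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Send `buffer` upstream...
    }

    try engine.start()
    return engine
}
```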
Posted by kjijijiji. Last updated.
Post not yet marked as solved
0 Replies
149 Views
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-centered friend-making and dating app. I have gone as far as purchasing the book "Exploring MusicKit" by Rudrank Riyam, but to no avail.
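Since no replies point to a sample project, here is a minimal Swift sketch of spatialized playback using AVFoundation's AVAudioEnvironmentNode as a starting point. Note this is plain AVFoundation spatial mixing rather than MusicKit or PHASE, and the function name, mono source file, and source position are illustrative assumptions:

```swift
import AVFoundation

// A minimal sketch of spatialized playback with AVAudioEnvironmentNode.
// Assumes a mono source file; the environment node only spatializes mono inputs.
func playSpatialized(monoFileURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let environment = AVAudioEnvironmentNode()

    engine.attach(player)
    engine.attach(environment)

    let file = try AVAudioFile(forReading: monoFileURL)
    engine.connect(player, to: environment, format: file.processingFormat)
    engine.connect(environment, to: engine.mainMixerNode,
                   format: engine.mainMixerNode.outputFormat(forBus: 0))

    // Place the source ahead of and to the right of the listener, and render
    // with head-related transfer functions for headphone playback.
    player.position = AVAudio3DPoint(x: 2, y: 0, z: -2)
    player.renderingAlgorithm = .HRTFHQ

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
    return engine
}
```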
Posted by Amani_L. Last updated.
Post not yet marked as solved
0 Replies
395 Views
Hi there, I'm running into issues when creating occluders from arbitrary MDLMeshes (i.e. 3D models I create in-app and then turn into MDLMeshes, rather than MDLMesh.newBox() etc.). Some meshes work fine; others hit an assert deep in the PHASE C++ code. I've tested on the simulator and on an iPhone 12 (iOS 16 and now iOS 17 beta 3). I've tried to figure out whether there's some kind of pattern to which meshes work and which don't, but haven't been able to find one. Note that using the built-in MDLMesh primitives works fine.

The assert is:

```
Assertion failed: (voxelIndex < level.mVoxels.Count()), function AddBuilderVoxelToSubtree, file GeoVoxelTree.cpp, line 188.
```

The code is:

```swift
func createOccluder(from meshes: [MDLMesh],
                    at transform: Transform,
                    preset: PHASEMaterialPreset) throws -> PHASEOccluder {
    // (Assumes String conforms to Error elsewhere in the project.)
    guard let engine else { throw "No engine" }

    print("audio meshes: \(meshes)")

    // Build one shape per mesh, applying the same material to every element.
    let material = PHASEMaterial(engine: engine, preset: preset)
    var shapes: [PHASEShape] = []
    for mesh in meshes {
        let meshShape = PHASEShape(engine: engine, mesh: mesh)
        for element in meshShape.elements {
            element.material = material
        }
        shapes.append(meshShape)
    }

    let occluder = PHASEOccluder(engine: engine, shapes: shapes)
    occluder.worldTransform = transform.matrix
    try engine.rootObject.addChild(occluder)
    return occluder
}
```

The assert happens at:

```swift
let occluder = PHASEOccluder(engine: engine, shapes: shapes)
```

Any ideas on what could be going on here? Cheers, Mike (screenshots of the call stack were attached to the original post)
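One way to narrow this down, consistent with the observation that built-in primitives work: keep a known-good baseline occluder and compare it against the failing custom meshes. A minimal sketch follows; the .wood preset, box dimensions, and function name are arbitrary choices for illustration, and the suggestion that PHASE's voxelizer chokes on certain custom vertex/index layouts is an assumption, not a confirmed cause:

```swift
import ModelIO
import PHASE

// Known-good baseline (per the post, built-in primitives work): a 1 m box
// occluder. If this succeeds while a custom mesh asserts, the difference is
// in the custom mesh's vertex/index data, not in the occluder setup itself.
func makeBoxOccluder(engine: PHASEEngine) throws -> PHASEOccluder {
    let box = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(1, 1, 1),
                             geometryType: .triangles,
                             inwardNormals: false,
                             allocator: nil)

    let shape = PHASEShape(engine: engine, mesh: box)
    let material = PHASEMaterial(engine: engine, preset: .wood)
    for element in shape.elements { element.material = material }

    let occluder = PHASEOccluder(engine: engine, shapes: [shape])
    try engine.rootObject.addChild(occluder)
    return occluder
}
```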
Posted by ziggmike. Last updated.
Post not yet marked as solved
1 Reply
639 Views
I am currently building an app using PHASE, and I am using a PHASEPushStreamNode as an asset, since the audio data I am using comes from a stream. From this stream I produce 1-channel, 48 kHz sample rate PCM buffers. I can create the node and the event, but when I start the engine (without even providing any data to the streams), I get the following error:

```
Fatal Action tree data error: push stream data has invalid audio format, layoutTag = 0x0
```

I tried changing the format provided to PHASEPushStreamNodeDefinition, but it didn't help. Is there a specific AVAudioFormat that I need to use? My code to create the push stream node:

```swift
let source = PHASESource(engine: self.engine)
do {
    try self.engine.rootObject.addChild(source)
} catch {
    print("Failed to add a source to the scene")
}

let mixer = PHASESpatialMixerDefinition(
    spatialPipeline: PHASESpatialPipeline(flags: .directPathTransmission)!
)
let model = PHASEGeometricSpreadingDistanceModelParameters()
model.rolloffFactor = 1.0
mixer.distanceModelParameters = model

let pushStreamNode = PHASEPushStreamNodeDefinition(
    mixerDefinition: mixer,
    format: AVAudioFormat(standardFormatWithSampleRate: 48000.0, channels: 1)!
)

var soundEventAsset: PHASESoundEventNodeAsset!
do {
    soundEventAsset = try self.engine.assetRegistry.registerSoundEventAsset(
        rootNode: pushStreamNode,
        identifier: "mic_stream"
    )
} catch {
    print("Failed to register the sound event asset")
    return nil
}

let mixerParameters = PHASEMixerParameters()
mixerParameters.addSpatialMixerParameters(
    identifier: mixer.identifier,
    source: source,
    listener: self.listener!
)

var event: PHASESoundEvent!
do {
    event = try PHASESoundEvent(
        engine: self.engine,
        assetIdentifier: soundEventAsset.identifier,
        mixerParameters: mixerParameters
    )
} catch {
    print("Failed to create the sound event \(error)")
    return nil
}
```
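A possible direction, given that the error reports layoutTag = 0x0: AVAudioFormat(standardFormatWithSampleRate:channels:) does not attach an explicit channel layout, and the push stream node appears to require one. A sketch of building the format from an AVAudioChannelLayout instead (a plausible fix, not confirmed in the thread):

```swift
import AVFoundation
import PHASE

// Sketch of a 48 kHz mono format with an explicit channel layout. PHASE
// appears to reject formats whose layoutTag is 0; constructing the
// AVAudioFormat from an AVAudioChannelLayout guarantees a nonzero tag.
// (Plausible fix, not confirmed in the thread.)
let monoLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Mono)!
let streamFormat = AVAudioFormat(standardFormatWithSampleRate: 48000.0,
                                 channelLayout: monoLayout)

// Hypothetical reuse of the post's mixer definition:
// let pushStreamNode = PHASEPushStreamNodeDefinition(mixerDefinition: mixer,
//                                                    format: streamFormat)
```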
Posted. Last updated.