Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post · Replies · Boosts · Views · Created

AVAudioEngine - How to archive configured nodes to file?
I’m looking to add DAW-like capabilities to my macOS music app, and AVAudioEngine seems like the right tool for the job. However, I haven’t been able to find any documentation on how to save the user’s AVAudioEngine configuration—specifically the connections between nodes and the internal states of each node—to a file. Does AVAudioEngine provide any API for saving and restoring this state, or does it need to be handled manually? If it’s manual, are there any sample "DAW" apps or resources that demonstrate how this can be implemented? Any guidance would be greatly appreciated. Thanks, BD
Replies: 1 · Boosts: 0 · Views: 479 · Created: Nov ’24
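There is no built-in archive/unarchive call for an engine's node graph mentioned in the post above, so any solution is manual. A rough, hypothetical sketch of that manual route follows; the GraphDescription type and the string node identifiers are invented for illustration and are not AVAudioEngine API, while per-node state could come from each AUAudioUnit's fullState dictionary:

```swift
import AVFoundation

// Hypothetical serializable description of an engine graph: node identifiers,
// connections, and whatever per-node state the app knows how to restore.
struct GraphDescription: Codable {
    struct Connection: Codable {
        let fromNode: String      // app-defined identifiers, not AVAudioEngine API
        let toNode: String
        let toBus: Int
    }
    var nodes: [String]               // e.g. ["sampler.lead", "reverb.main"]
    var connections: [Connection]
    var nodeState: [String: Data]     // e.g. archived AUAudioUnit fullState dictionaries

    func save(to url: URL) throws {
        let data = try PropertyListEncoder().encode(self)
        try data.write(to: url)
    }

    static func load(from url: URL) throws -> GraphDescription {
        let data = try Data(contentsOf: url)
        return try PropertyListDecoder().decode(GraphDescription.self, from: data)
    }
}
```

On load, the app would walk the description, re-create and attach the nodes each identifier maps to, restore their state, and call connect(_:to:format:) for each recorded connection.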
AVMIDIPlayer not working for all instruments
Hi, I'm testing AVMIDIPlayer in order to replace classes built on AVAudioEngine with callback functions sending MIDI events. For the test, I use an NSMutableData filled with: the MIDI header, a track for the time signature, and a track containing a few MIDI events. I then create an instance of AVMIDIPlayer using that data. Everything works fine for some instruments (00 … 20, or 90) but not for others (60, 70, …). The MIDI header and the time signature track are based on the MIDI.org sample, https://midi.org/standard-midi-files-specification RP-001_v1-0_Standard_MIDI_Files_Specification_96-1-4.pdf. The MIDI events are:

UInt8 trkEvents[] = {
    0x00, 0xC0, instrument,   // Tubular bell
    0x00, 0x90, 0x4C, 0xA0,   // Note 4C
    0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
    0x00, 0xFF, 0x2F, 0x00};  // End
for (UInt8 i = 0; i < 3; i++) {
    printf("0x%X ", trkEvents[i]);
}
printf("\n");
[_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];

A template application is used to change the instrument in an NSTextField. I was wondering if specifics are required for some instruments?

The interface header:

#import <AVFoundation/AVFoundation.h>

NS_ASSUME_NONNULL_BEGIN

@interface TestMIDIPlayer : NSObject

@property (retain) NSMutableData *midiTempData;
@property (retain) NSURL *midiTempURL;
@property (retain) AVMIDIPlayer *midiPlayer;

- (void)createTest:(UInt8)instrument;

@end

NS_ASSUME_NONNULL_END

The implementation:

#pragma mark -

typedef struct _MThd {
    char magic[4];        // = "MThd"
    UInt8 headerSize[4];  // 4 bytes, MSB first. Always = 00 00 00 06
    UInt8 format[2];      // 16 bit, MSB first. 0; 1; 2. Use 1
    UInt8 trackCount[2];  // 16 bit, MSB first.
    UInt8 division[2];    //
} MThd;

MThd MThdMake(void);
void MThdPrint(MThd *mthd);

typedef struct _MIDITrackHeader {
    char magic[4];        // = "MTrk"
    UInt8 trackLength[4]; // Ignore, because it is occasionally wrong.
} Track;

Track TrackMake(void);
void TrackPrint(Track *track);

#pragma mark - C Functions

MThd MThdMake(void) {
    MThd mthd = {
        "MThd",
        {0, 0, 0, 6},
        {0, 1},
        {0, 0},
        {0, 0}
    };
    MThdPrint(&mthd);
    return mthd;
}

void MThdPrint(MThd *mthd) {
    char *ptr = (char *)mthd;
    for (int i = 0; i < sizeof(MThd); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}

Track TrackMake(void) {
    Track track = {
        "MTrk",
        {0, 0, 0, 0}
    };
    TrackPrint(&track);
    return track;
}

void TrackPrint(Track *track) {
    char *ptr = (char *)track;
    for (int i = 0; i < sizeof(Track); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}

@implementation TestMIDIPlayer

- (id)init {
    self = [super init];
    printf("%s %p\n", __FUNCTION__, self);
    if (self) {
        _midiTempData = nil;
        _midiTempURL = [[NSURL alloc] initFileURLWithPath:@"midiTempUrl.mid"];
        _midiPlayer = nil;
        [self createTest:0x0E];
        NSLog(@"_midiTempData:%@", _midiTempData);
    }
    return self;
}

- (void)dealloc {
    [_midiTempData release];
    [_midiTempURL release];
    [_midiPlayer release];
    [super dealloc];
}

- (void)createTest:(UInt8)instrument {
    /* MIDI Header */
    [_midiTempData release];
    _midiTempData = nil;
    _midiTempData = [[NSMutableData alloc] initWithCapacity:1024];
    MThd mthd = MThdMake();
    MThd *ptrMthd = &mthd;
    ptrMthd->trackCount[1] = 2;
    ptrMthd->division[1] = 0x60;
    MThdPrint(ptrMthd);
    [_midiTempData appendBytes:ptrMthd length:sizeof(MThd)];

    /* Track header + time signature track */
    Track track = TrackMake();
    Track *ptrTrack = &track;
    ptrTrack->trackLength[3] = 0x14;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEventsTS[] = {
        0x00, 0xFF, 0x58, 0x04, 0x04, 0x04, 0x18, 0x08, // Time signature 4/4; 18; 08
        0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20,       // Tempo 0x7A120 = 500000
        0x83, 0x00, 0xFF, 0x2F, 0x00 };                 // End
    [_midiTempData appendBytes:trkEventsTS length:sizeof(trkEventsTS)];

    /* Track header + track events */
    ptrTrack->trackLength[3] = 0x0F;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEvents[] = {
        0x00, 0xC0, instrument,   // Tubular bell
        0x00, 0x90, 0x4C, 0xA0,   // Note 4C
        0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
        0x00, 0xFF, 0x2F, 0x00};  // End
    for (UInt8 i = 0; i < 3; i++) {
        printf("0x%X ", trkEvents[i]);
    }
    printf("\n");
    [_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
    [_midiTempData writeToURL:_midiTempURL atomically:YES];

    dispatch_async(dispatch_get_main_queue(), ^{
        if (!_midiPlayer.isPlaying)
            [self midiPlay];
    });
}

- (void)midiPlay {
    NSError *error = nil;
    _midiPlayer = [[AVMIDIPlayer alloc] initWithData:_midiTempData soundBankURL:nil error:&error];
    if (_midiPlayer) {
        [_midiPlayer prepareToPlay];
        [_midiPlayer play:^{
            printf("Midi Player ended\n");
            [_midiPlayer stop];
            [_midiPlayer release];
            _midiPlayer = nil;
        }];
    }
}

@end

Call from the AppDelegate:

- (IBAction)actionInstrument:(NSTextField*)sender {
    [_testMidiplayer createTest:(UInt8)sender.intValue];
}
Replies: 1 · Boosts: 0 · Views: 422 · Created: Nov ’24
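One detail worth checking in byte streams like the one above: in a Standard MIDI File, data bytes (note numbers and velocities) must stay in the 0x00–0x7F range, and the velocities 0xA0/0xB0 above have the high bit set. This is only an observation about the bytes shown, not a confirmed explanation for the instrument-dependent failures; a sketch of the same two-note track with in-range velocities (values are illustrative):

```swift
// Same track layout as in the post, but with velocities kept below 0x80 so they
// cannot be misread as status bytes by a strict SMF parser. 0x60 is an arbitrary velocity.
let instrument: UInt8 = 0x0E
let trkEvents: [UInt8] = [
    0x00, 0xC0, instrument,   // delta 0, program change
    0x00, 0x90, 0x4C, 0x60,   // delta 0, note on 0x4C, velocity 0x60
    0x81, 0x40, 0x48, 0x60,   // delta 0xC0 (two-byte VLQ), running status: note 0x48, velocity 0x60
    0x00, 0xFF, 0x2F, 0x00    // delta 0, end of track
]
```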
Microphone access from control center
Title: Unable to Access Microphone in Control Center Widget – Is It Possible?

Hello everyone, I'm attempting to create a widget in the Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself. Here's the code I'm using in the widget:

func startRecording() async {
    logger.info("Starting recording...")
    print("Starting recording...")
    recognizedText = ""
    isFinishingRecognition = false

    // First, check speech recognition authorization
    let speechAuthStatus = await withCheckedContinuation { continuation in
        SFSpeechRecognizer.requestAuthorization { status in
            continuation.resume(returning: status)
        }
    }
    guard speechAuthStatus == .authorized else {
        logger.error("Speech recognition not authorized")
        return
    }

    // Then, request microphone permission using our manager
    let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
    guard micPermission else {
        logger.error("Microphone permission denied")
        print("Microphone permission denied")
        return
    }

    // Continue with recording...
}

Issues:
- The code consistently prints "Microphone permission denied" when run from the widget.
- Microphone access works without issues when the same code is executed from within the app.

Questions:
- Is it possible for a Control Center widget to access the microphone?
- If yes, what might be causing the "Microphone permission denied" error in the widget?
- Are there additional permissions or configurations required to enable microphone access in a widget?

Any insights or suggestions would be greatly appreciated! Thank you.
Replies: 0 · Boosts: 0 · Views: 499 · Created: Nov ’24
API to switch the mode of Airpods Pro 2
Hi, May I ask if there is any API or similar way inside an iOS app to set up or switch the transparency and ANC modes of the AirPods Pro 2? One way is to create a shortcut and activate that shortcut from the app, but that requires manually setting up the shortcut, which is not convenient. Thanks for any advice on that!
Replies: 0 · Boosts: 1 · Views: 214 · Created: Nov ’24
Any API for AirPods Pro 2
Hi, May I ask if there is any iOS API or similar way to switch between the transparency and ANC modes of AirPods Pro 2? I know there is one way to configure and activate a shortcut from the app, but it requires an inconvenient manual setup. May I ask for any other advice? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 248 · Created: Nov ’24
Failure of AudioUnitSetProperty when using MacCatalyst (works on macOS)
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using

let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))

kAudioOutputUnitProperty_CurrentDevice is reported as invalid, and status = -10879, indicating an error.

STEPS TO REPRODUCE
1. Set the Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
2. Set the Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
Replies: 4 · Boosts: 1 · Views: 629 · Created: Nov ’24
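For comparison, this is how the call in the post above is typically written when targeting plain macOS; outputUnit and outputDeviceID are assumed to come from the app's own setup, and the report is that the same property fails with -10879 (kAudioUnitErr_InvalidProperty) under Mac Catalyst:

```swift
import AudioToolbox
import CoreAudio

// Sketch of pointing an output AudioUnit at a specific device on macOS.
// `outputUnit` and `outputDeviceID` are assumed to be obtained elsewhere in the app.
func setOutputDevice(_ outputDeviceID: AudioDeviceID, on outputUnit: AudioUnit) -> OSStatus {
    var deviceID = outputDeviceID
    return AudioUnitSetProperty(outputUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &deviceID,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}
```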
Lightning headphone adapter modes
I'm developing an app that plays a WAV file through the Lightning headphone adapter. When I connect the adapter, a prompt appears asking whether to select "Headphones" or "Other Device". What does this setting actually do? I've noticed that it affects the maximum amplitude (volume) of the WAV output. Could you explain the precise difference between these two modes?
Replies: 0 · Boosts: 0 · Views: 232 · Created: Nov ’24
mp3 audio plays on my device and simulator but some users have issue
Hi, I just released an app which is live. I have a strange issue: the audio files in the app play fine on my device, but some users are unable to hear them. One friend said it played yesterday but not today. Any idea why? The files are MP3s; I see them in Build Phases and in the project, obviously. Here's the audio view code, thank you!

import SwiftUI      // needed for the View types below
import AVFoundation

struct MeditationView: View {
    @State private var player: AVAudioPlayer?
    @State private var isPlaying = false
    @State private var selectedMeditation: String?
    var isiPad = UIDevice.current.userInterfaceIdiom == .pad

    let columns = [GridItem(.flexible()), GridItem(.flexible())]
    let tracks = ["Intro": "intro.mp3",
                  "Peace": "mysoundbath1.mp3",
                  "Serenity": "mysoundbath2.mp3",
                  "Relax": "mysoundbath3.mp3"]

    var body: some View {
        VStack {
            VStack {
                VStack {
                    Image("dhvani").resizable().aspectRatio(contentMode: .fit)
                        .frame(width: 120)
                    Text("Enter the world of Dhvani soundbath sessions, click lotus icon to play.")
                        .font(.custom("Times New Roman", size: 20))
                        .lineLimit(nil)
                        .multilineTextAlignment(.leading)
                        .fixedSize(horizontal: false, vertical: true)
                        .italic()
                        .foregroundStyle(Color.ashramGreen)
                        .padding()
                }
                LazyVGrid(columns: columns, spacing: 10) {
                    ForEach(tracks.keys.sorted(), id: \.self) { track in
                        Button {
                            self.playMeditation(named: tracks[track]!)
                        } label: {
                            Image("lotus")
                                .resizable()
                                .frame(width: 40, height: 40)
                                .background(Color.ashramGreen)
                                .cornerRadius(10)
                        }
                        Text(track)
                            .font(.custom("Times New Roman", size: 22))
                            .foregroundStyle(Color.ashramGreen)
                            .italic()
                    }
                }
                HStack(spacing: 20) {
                    Button(action: {
                        self.togglePlayPause()
                    }) {
                        Image(systemName: isPlaying ? "playpause.fill" : "play.fill")
                            .resizable()
                            .frame(width: 20, height: 20)
                            .foregroundColor(Color.ashramGreen)
                    }
                    Button(action: {
                        self.stopMeditation()
                    }) {
                        Image(systemName: "stop.fill")
                            .resizable()
                            .frame(width: 20, height: 20)
                            .foregroundColor(Color.ashramGreen)
                    }
                }
            }.padding()
                .background(Color.ashramBeige)
                .cornerRadius(20)
            Spacer()
            // video play
            VStack {
                Text("Chant")
                    .font(.custom("Times New Roman", size: 24))
                    .foregroundStyle(Color.ashramGreen)
                    .padding(5)
                WebView(urlString: "https://www.youtube.com/embed/ny3TqP9BxzE")
                    .frame(height: isiPad ? 400 : 200)
                    .cornerRadius(10)
                    .padding()
                Text("Courtesy Sri Ramanasramam").font(.footnote).italic()
            }
        }.background(Color.ashramBeige)
    } // View

    func playMeditation(named name: String) {
        if let url = Bundle.main.url(forResource: name, withExtension: nil) {
            do {
                player = try AVAudioPlayer(contentsOf: url)
                player?.play()
                isPlaying = true
            } catch {
                print("Error playing meditation")
            }
        }
    }

    func togglePlayPause() {
        if let player = player {
            if player.isPlaying {
                player.pause()
                isPlaying = false
            } else {
                player.play()
                isPlaying = true
            }
        }
    }

    func stopMeditation() {
        player?.stop()
        isPlaying = false
    }
}

#Preview {
    MeditationView()
}
Replies: 1 · Boosts: 0 · Views: 335 · Created: Nov ’24
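A common cause of "plays on my device but not for some users" with AVAudioPlayer is the ring/silent switch: with the default .soloAmbient session category, playback is muted when the switch is on. This is only a guess about the report above, but a minimal sketch of opting into .playback before playing looks like this:

```swift
import AVFoundation

// Assumption: the affected users simply have the silent switch on.
// The .playback category keeps output audible regardless of the switch.
func configurePlaybackSession() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}
```

Calling this once before creating the AVAudioPlayer (for example at the start of playMeditation) would rule that explanation in or out.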
AVAudioEngineConfigurationChange Clearing AVPlayerNode
Hi all, I am working on an app where I have live prompts playing, in addition to a voice channel that sometimes becomes active. Right now I am using two different AVAudioSession configurations so that we only switch to a mic-enabled mode when we actually need input from the mic. These are defined below. When just using the device hardware, everything works as expected: the modes change and playback continues as needed. However, when using Bluetooth devices such as AirPods, where the switch from A2DP to HFP is needed, I am getting an AVAudioEngineConfigurationChange notification. In response I am tearing down the engine and creating a new one with the same two player nodes. This does work fine and there are no crashes, except that all the audio I had scheduled on a player node has now been cleared. All the completion blocks marked with ".dataPlayedBack" return the second this event happens, which leaves me in a state where I have a valid engine setup again but no idea what actually played, or was errantly marked as played. Is this the expected behavior when getting a configuration change notification?

Adding some information about my audio graph for context. These are all the parts of the graph; I disconnect them when getting this event and do the same on the new engine:

private var inputEngine: AVAudioEngine
private var audioEngine: AVAudioEngine
private let voicePlayerNode: AVAudioPlayerNode
private let promptPlayerNode: AVAudioPlayerNode

audioEngine.attach(voicePlayerNode)
audioEngine.attach(promptPlayerNode)

audioEngine.connect(
    voicePlayerNode,
    to: audioEngine.mainMixerNode,
    format: voiceNodeFormat
)
audioEngine.connect(
    promptPlayerNode,
    to: audioEngine.mainMixerNode,
    format: nil
)

An example of how I am scheduling playback, and where that completion fires even if the buffer didn't actually play:

private func scheduleVoicePlayback(_ id: AudioPlaybackSample.Id, buffer: AVAudioPCMBuffer) async throws {
    guard !voicePlayerQueue.samples.contains(where: { $0 == id }) else {
        return
    }
    seprateQueue.append(buffer)
    if !isVoicePlaying {
        activateAudioSession()
    }
    voicePlayerQueue.samples.append(id)
    if !voicePlayerNode.isPlaying {
        voicePlayerNode.play()
    }
    if let convertedBuffer = buffer.convert(to: voiceNodeFormat) {
        await voicePlayerNode.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack)
    } else {
        throw AudioPlaybackError.failedToConvert
    }
    voiceSampleHasBeenPlayed(id)
}

And lastly my audio session configuration, if it's useful:

extension AVAudioSession {
    static func setDefaultCategory() {
        do {
            try sharedInstance().setCategory(
                .playback,
                options: [
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set default category? \(error.localizedDescription)")
        }
    }

    static func setVoiceChatCategory() {
        do {
            try sharedInstance().setCategory(
                .playAndRecord,
                options: [
                    .defaultToSpeaker,
                    .allowBluetooth,
                    .allowBluetoothA2DP,
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set category? \(error.localizedDescription)")
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 657 · Created: Nov ’24
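One way to cope with the behavior described above is to keep the app's own record of scheduled-but-unconfirmed buffers and re-schedule them on the rebuilt engine. Everything below (VoiceQueue, the isRebuilding flag, the rebuild flow) is a hypothetical sketch, not something taken from the post:

```swift
import AVFoundation

// Sketch: bookkeeping so buffers whose completions fire during an engine rebuild
// can be re-scheduled instead of being counted as played.
final class VoiceQueue {
    struct PendingBuffer {
        let id: UUID
        let buffer: AVAudioPCMBuffer
    }

    private var pending: [PendingBuffer] = []
    var isRebuilding = false   // set to true just before tearing the engine down

    func schedule(_ item: PendingBuffer, on node: AVAudioPlayerNode) {
        pending.append(item)
        node.scheduleBuffer(item.buffer,
                            at: nil,
                            options: [],
                            completionCallbackType: .dataPlayedBack) { [weak self] _ in
            // Ignore callbacks that are only caused by the teardown.
            guard let self = self, !self.isRebuilding else { return }
            self.pending.removeAll { $0.id == item.id }   // genuinely played back
        }
    }

    // Call after attaching/connecting the new engine's player node.
    func reschedulePending(on node: AVAudioPlayerNode) {
        let items = pending
        pending.removeAll()
        isRebuilding = false
        items.forEach { schedule($0, on: node) }
    }
}
```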
aumi AUv3 with AvAudioEngine ConnectMIDI multiple
Hi! I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process in the AUv3. I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all the audio nodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
Replies: 3 · Boosts: 0 · Views: 585 · Created: Nov ’24
No charts genres for some storefronts
Hello, This is about the Get Catalog Top Charts Genres endpoint: GET https://api.music.apple.com/v1/catalog/{storefront}/genres. I noticed that for some storefronts, no genre is returned. You can try with the following storefront values: France (fr), Poland (pl), Kyrgyzstan (kg), Uzbekistan (uz), Turkmenistan (tm). Is that a bug, or is it on purpose? Thank you.
Replies: 0 · Boosts: 0 · Views: 423 · Created: Dec ’24
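For anyone reproducing the report above, a minimal sketch of the request; the developer token and the error handling are placeholders, not part of the original report:

```swift
import Foundation

// Calls the endpoint quoted in the post. Decode the returned JSON and inspect its
// `data` array; the poster sees it empty for fr, pl, kg, uz, and tm.
func fetchTopChartsGenres(storefront: String, developerToken: String) async throws -> Data {
    let url = URL(string: "https://api.music.apple.com/v1/catalog/\(storefront)/genres")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (data, response) = try await URLSession.shared.data(for: request)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return data
}
```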
AVAudioSession's "availableInputs" not update in time
// Here addObserver for routeChangeNotification
func testAudioRoute() {
    // My app is a VoIP app, so I need to set "playAndRecord" and "allowBluetooth"
    try? AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                     options: [.duckOthers, .allowBluetooth, .allowBluetoothA2DP])
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(currentRouteChanged(noti:)),
                                           name: AVAudioSession.routeChangeNotification,
                                           object: nil)
}

// Print the "availableInputs" once a notification arrives
@objc func currentRouteChanged(noti: Notification) {
    let availableInputs = AVAudioSession.sharedInstance().availableInputs?.compactMap({ $0.portType }) ?? []
    let currentRouteInputs = AVAudioSession.sharedInstance().currentRoute.inputs.compactMap({ $0.portType })
    let currentRouteOutputs = AVAudioSession.sharedInstance().currentRoute.outputs.compactMap({ $0.portType })
    print("willtest: \navailableInputs=\(availableInputs), \ncurrentRouteInputs=\(currentRouteInputs), \ncurrentRouteOutputs=\(currentRouteOutputs)")
    /*
     When BT (AirPods Pro 2) is CONNECTED, it prints the following when the notification comes; this is correct.
     ----------------------------------------------------------
     willtest:
     availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
     currentRouteInputs=[],
     currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: BluetoothA2DPOutput)]
     ----------------------------------------------------------
     When BT (AirPods Pro 2) is DISCONNECTED, it prints the following when the notification comes; this is wrong.
     ----------------------------------------------------------
     availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
     currentRouteInputs=[],
     currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: Speaker)]
     */
}

So my question here is: why does "availableInputs" still contain the "__C.AVAudioSessionPort(_rawValue: BluetoothHFP)" item even though I have already disconnected the BT device (put the AirPods in the case)? BTW, if I tap the "Manual" button after disconnecting the BT device, it also prints the "wrong" value for "availableInputs", and it becomes normal after about 3-4 seconds.
Replies: 4 · Boosts: 0 · Views: 494 · Created: Dec ’24
Swift iOS CallKit audio resource contention
I noticed the following behavior with CallKit when receiving a VoIP push notification:
1. When the app is in the foreground and a CallKit incoming call banner appears, pressing the answer button directly causes the speaker indicator in the CallKit interface to turn on. However, the audio is not actually activated (the iPhone's orange microphone indicator does not light up).
2. In the same foreground scenario, if I expand the CallKit banner before answering the call, the speaker indicator does not turn on, but the orange microphone indicator does light up, and audio works as expected.
3. When the app is in the background or not running, the incoming call banner works as expected: I can answer the call directly without expanding the banner, and the speaker does not turn on automatically. The orange microphone indicator lights up as it should.

Why is there a difference in behavior between answering directly from the banner versus expanding it first when the app is in the foreground? Is there a way to ensure consistent audio activation behavior across these scenarios?
Replies: 0 · Boosts: 0 · Views: 378 · Created: Dec ’24
Swift iOS CallKit audio resource contention
I noticed the following behavior with CallKit when receiving a VoIP push notification:
1. When the app is in the foreground and a CallKit incoming call banner appears, pressing the answer button directly causes the speaker indicator in the CallKit interface to turn on. However, the audio is not actually activated (the iPhone's orange microphone indicator does not light up).
2. In the same foreground scenario, if I expand the CallKit banner before answering the call, the speaker indicator does not turn on, but the orange microphone indicator does light up, and audio works as expected.
3. When the app is in the background or not running, the incoming call banner works as expected: I can answer the call directly without expanding the banner, and the speaker does not turn on automatically. The orange microphone indicator lights up as it should.

Why is there a difference in behavior between answering directly from the banner versus expanding it first when the app is in the foreground? Is there a way to ensure consistent audio activation behavior across these scenarios?

I tried reconfiguring the audio session when answering a call, but an error occurred during setActive, preventing the configuration from succeeding:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setActive(false)
    try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try audioSession.setActive(true, options: [])
} catch {
    print("Failed to activate audio session: \(error)")
}
action.fulfill()

Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Replies: 1 · Boosts: 0 · Views: 482 · Created: Dec ’24
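A plausible reading of the setActive error above, offered as an assumption rather than a confirmed answer: CallKit manages audio session activation itself, so the answer-call handler should only configure the session and the app should start its audio in provider(_:didActivate:). A minimal sketch:

```swift
import CallKit
import AVFoundation

final class CallProviderDelegate: NSObject, CXProviderDelegate {
    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Configure, but do NOT call setActive(true); CallKit activates the session itself.
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Safe point to start the audio engine / voice pipeline.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio here.
    }

    func providerDidReset(_ provider: CXProvider) {}
}
```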
AudioServicesPlaySystemSound not playing through BluetoothA2DP device
Hello, We have an application that plays some sounds via the system sound APIs from the AudioToolbox framework:

AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
AudioServicesPlaySystemSoundWithCompletion(soundID)

We make sure that an active audio session is available before playing the system sound. But when the device is connected to a Bluetooth A2DP device, the sounds are played through the device speaker and not through the Bluetooth A2DP device. Our audio session is configured with the following options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP]. Sounds played via AVAudioPlayer, with otherwise similar code, are played on the Bluetooth A2DP device. Is this a bug in the AudioToolbox framework?
Replies: 2 · Boosts: 0 · Views: 536 · Created: Dec ’24
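A minimal sketch of the AVAudioPlayer path the post above says does route to the A2DP device; the file URL and the player's lifetime handling are placeholders, not taken from the report:

```swift
import AVFoundation

// Fallback the poster mentions: AVAudioPlayer honors the current (A2DP) output route.
// `alertURL` is a placeholder for the short sound file used with the system sound API.
var alertPlayer: AVAudioPlayer?

func playAlert(at alertURL: URL) {
    do {
        alertPlayer = try AVAudioPlayer(contentsOf: alertURL)
        alertPlayer?.prepareToPlay()
        alertPlayer?.play()
    } catch {
        print("Failed to play alert: \(error)")
    }
}
```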
Increased delay when AUGraph's output device is system output
I'm using an AUGraph to mix audio from different sources for a real-time streaming application. Whenever the audio device used as the graph's output device is also the Mac's default output device, the measured latency increases by about 35 milliseconds for wired devices. Any idea why this is? Is there a way around this besides nagging the user not to use the system output in our app?
Replies: 0 · Boosts: 0 · Views: 288 · Created: Dec ’24
Delay w/ new AudioTap API when system device is a BL device
I'm capturing audio from other applications on macOS to mix it with other sources in a real-time streaming application. I noticed that audio data captured via the new tapping mechanism introduced in macOS 14.2 arrives delayed in my app when the macOS system device is a Bluetooth headphone, e.g. Apple AirPods. Sometimes this delay is about 300-400 milliseconds, which makes it unusable for live streaming, because the audio is out of sync with the video and also with audio captured from other devices. What is confusing to me is that this also happens when my app does not even use that output device. Is this a known issue? Is there a way around this?
Replies: 1 · Boosts: 0 · Views: 379 · Created: Dec ’24
AirPods Audio Sample Rate Issue on macOS Sequoia
I’m experiencing an unusual audio issue with AirPods on macOS Sequoia while developing VoIP applications like Zoom and FaceTime. When AirPods are connected, the other party’s voice sometimes sounds unnaturally stretched (approximately twice as long). This problem can be temporarily fixed by switching the sound output settings from AirPods to speakers and then back to AirPods. From our analysis, the issue appears to be related to the sample rate provided by AudioObjectGetPropertyData. Here’s what we’ve observed:
- When the issue occurs, the AudioStreamBasicDescription.sampleRate for AirPods is reported as 48000. Under normal conditions, it’s reported as 24000.
- It seems like the system is mistakenly returning a sample rate that doesn’t match the AirPods’ actual settings, perhaps defaulting to a system speaker value.
- Once the output setting is toggled, the correct sampleRate (24000) is retrieved.

This discrepancy causes our application to transmit the audio stream at 48000, leading to the distorted playback. Has anyone encountered a similar issue, or does anyone know how to resolve it?
Replies: 2 · Boosts: 0 · Views: 581 · Created: Dec ’24
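For context on the sample-rate readings mentioned above, here is a sketch of querying a device's nominal sample rate with AudioObjectGetPropertyData. The post refers to an AudioStreamBasicDescription, so this may not be the exact property being read; deviceID is assumed to be the AirPods' AudioObjectID obtained elsewhere:

```swift
import CoreAudio

// Reads the nominal sample rate (e.g. 24000 vs. 48000) reported for a device.
func nominalSampleRate(of deviceID: AudioObjectID) -> Float64? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var sampleRate: Float64 = 0
    var size = UInt32(MemoryLayout<Float64>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &sampleRate)
    return status == noErr ? sampleRate : nil
}
```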
PTTFramework w/ AVAudioSession
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call. I am looking to use the PTTFramework to allow a user to trigger this push-to-talk behavior from the lock screen and the various places in the system UI it provides. However, I am unsure what the correct behavior is regarding handling of the audio session. Right now I am using .playback when there is no active voice transmission, so that devices such as AirPods can be in A2DP mode where applicable, and then transitioning to the .playAndRecord category only when the mic input should become active. Following this change, in my AVAudioEngine manager I am activating and deactivating the audio session manually when the engine is either playing/recording or idle. The documentation states that you should not attempt to activate or deactivate your audio session directly, but should allow the framework to handle it. Does that mean that I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using fullDuplex mode currently.) I noticed that that delegate method will only trigger if the audio session wasn't active before doing one of the above (setting an active participant, requesting transmit). Lastly, the PTTFramework documentation also mentions that we get support for PTT devices, and I notice that the didBeginTransmittingFrom property has a handsfreeButton case. Is there any documentation or are there resources for what is actually supported out of the box for this? I am currently handling a lot of the push-to-talk through Bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides. Thank you!
Replies: 2 · Boosts: 0 · Views: 563 · Created: Dec ’24
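Not PTT-specific guidance, just a sketch of the two session configurations the post describes, with activation deliberately omitted since the post quotes the documentation as saying the framework should handle activation itself. The modes and options chosen here are illustrative assumptions:

```swift
import AVFoundation

// The two configurations described in the post: playback-only (so AirPods can stay
// in A2DP) and mic-enabled. Activation/deactivation is intentionally left to the
// PushToTalk framework, per the documentation the poster quotes.
enum SessionConfig {
    static func playbackOnly() throws {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    }

    static func playAndRecord() throws {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                        mode: .voiceChat,
                                                        options: [.allowBluetooth, .allowBluetoothA2DP])
    }
}
```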