Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

CarPlay HID transport buttons remap to call-control during continuous mic capture (no opt-out API)
Hello, I am developing Uniq Intercom, a voice-only group communication app for motorcyclists (always-on intercom over WebRTC, used continuously for multi-hour rides). I am seeking guidance on an iOS audio session and CarPlay HID interaction I have not been able to resolve through documented APIs. Problem: As soon as my app activates the microphone (yellow recording indicator visible), iOS appears to classify the app as a real-time communication participant and CarPlay re-routes the steering-wheel / handlebar HID transport buttons (left / right / ok) from the media-control role to the call-control role (answer/decline). Because I do not register a CallKit / LiveCommunicationKit call (the session is a continuous group voice channel, not a discrete telephony call), there is no call object for those buttons to act upon — they effectively become inert. Why this matters: Motorcyclists rely on the intercom for 4–6 hour rides. CarPlay is now built into a growing number of modern motorcycles and with aftermarket display units virtually any bike, and any rider who uses any voice communication platform alongside it — Uniq Intercom, WhatsApp Call currently runs into this same handlebar button remap. With the buttons inert, the rider's only remaining option is to reach for the motorcycle's touchscreen to skip a track or change navigation — this is unsafe. The exact same remap behavior occurs during a real WhatsApp or Phone call — but for those the remap is correct (answer/decline maps to a real call). For continuous voice apps without a CallKit-style discrete call, no equivalent path exists today. As both an iOS developer and a motorcyclist, I would very much like to see this resolved — solving it would meaningfully improve safety for every rider using an iPhone with CarPlay. Configurations I have tested (all reproduce the symptom on iOS 18+ / 26 with wireless CarPlay): AVAudioSession.Category.playAndRecord + .voiceChat mode + various option combinations (duckOthers, mixWithOthers, allowBluetoothHFP, allowBluetoothA2DP, defaultToSpeaker) Same category with .videoChat mode (which @livekit/react-native defaults to) Same category with .default mode (re-applied after setAudioModeAsync to defeat any framework override) — confirmed Mode = Default for ~2 s window in audiomxd log before WebRTC's setActive cycle returned mode to .voiceChat. Buttons remained remapped during the .default window. Disabling MPRemoteCommandCenter and clearing MPNowPlayingInfoCenter.default().nowPlayingInfo JS-side override of WebRTC's global RTCAudioSessionConfiguration via @livekit/react-native's AudioSession.setAppleAudioConfiguration({audioMode: 'default'}) bridge, applied both before connect and after setAudioModeAsync to defeat library overrides In every case the audiomxd system log confirms our session goes active (Mode = VoiceChat or Default, Recording = YES), and CarPlay HID buttons are immediately remapped to call-control. The middle "OK" button remains functional because it is not part of the call-control mapping — confirming the buttons are not blocked, only re-purposed. The remap occurs the instant the iOS recording indicator appears, regardless of audio session mode. This led me to conclude the trigger is not audio session mode but the combination of microphone permission + active session + (likely) the AUVoiceIO unit instantiated by WebRTC. I cannot find a public API path to suppress this classification while maintaining the always-on continuous voice channel. My questions: Is there an entitlement or API that allows an app with active microphone capture to declare itself as a non-call media participant, keeping CarPlay HID transport buttons in the media role? Is AVAudioSession.setPrefersEchoCancelledInput(_:) (iOS 18+) the intended path for retaining platform AEC under .default mode without the focus-engine "communication priority" marking? Documentation is sparse on its CarPlay arbitration implications. Does the PushToTalk framework affect HID arbitration differently from playAndRecord + voiceChat? Our continuous-channel UX does not fit the PTT transmit-on-press model, but understanding the contrast would help. If no current API exists, is this something the iOS Audio team would consider for future SDKs? Solving this would meaningfully improve safety for motorcycle and adventure-sport users on iOS. Thank you for your time and any guidance you can offer. — Emre Erkaya / Uniq Intercom
1
0
141
1w
AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4 channel USB Audio interface, with 4 microphones, and want to process them through 4 independent effect chains. However the output from AVAudioInputNode is a single 4 channel bus. How can I split this into 4 mono busses? The following code splits the input into 4 copies, and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node but that had no effect. I'm currently using this code primarily on iOS but it should be portable between iOS and MacOS. It would be possible to do this through a Matrix Mixer Node, but that seems completely overkill, for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine. AVAudioInputNode *inputNode=[engine inputNode]; [inputNode setVoiceProcessingEnabled:NO error:nil]; NSMutableArray *micDestinations=[NSMutableArray arrayWithCapacity:trackCount]; for(i=0;i<trackCount;i++) { fixMicFormat[i]=[AVAudioMixerNode new]; [engine attachNode:fixMicFormat[i]]; // And create reverb/compressor and eq the same way... [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil]; [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil]; [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil]; [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil]; [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0] ]; } AVAudioFormat *inputFormat = [inputNode outputFormatForBus: 1]; [engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
2
0
357
Oct ’25
Keeping PiP alive during third-party video recording (camera capture)
I’m building a teleprompter-style app that relies on Picture in Picture. PiP starts correctly on device. Everything works — until another app (e.g. TikTok / Instagram) starts active video recording. When camera capture begins in the foreground app, iOS terminates my PiP session. Some teleprompter apps appear to keep PiP active while recording in other apps, so I’m trying to understand the recommended architectural pattern for this scenario. Is there a documented approach or best practice to keep PiP stable during third-party camera capture? Looking specifically for guidance on the correct AVKit / AVAudioSession configuration for this use case.
0
0
416
Feb ’26
Entitlement "com.apple.developer.carplay-driving-task" not allowing audio playback for voice controlled interaction
According to https://developer.apple.com/download/files/CarPlay-Developer-Guide.pdf , apps with entitlement com.apple.developer.carplay-driving-task are allowed to use voice control. In my current implementation the voice recording working fine but the voice response (AVPlayer with category "playback set") does not output any audio. I suspect that it is a entitlement limitation because if I quickly tap to play a music while the voice assistant AVPlayer is "playing", then I can hear the response, but without this trick it stays playing but mute. In parallel I have now requested com.apple.developer.carplay-voice-based-conversation entitlement , but I don't even know if when approved I will be able to use 2 entitlement for the same CarPlay app. Long story short: 1 - Should an app be able to play audio responses when it's CarPlay entitlement is com.apple.developer.carplay-driving-task? 2 - If not, can I combine entitlements com.apple.developer.carplay-driving-task and com.apple.developer.carplay-voice-based-conversation?
1
0
486
1w
iOS - record audio fails to record
Hi, I try to record audio on the iPhone with the AVAudioRecorder and Xcode 26.0.1. Maybe the problem is that I can not record audio with the simulator. But there's a menu for audio. In the plist I added 'Privacy - Microphone Usage Description' and I ask for permission before recording. if await AVAudioApplication.requestRecordPermission() { print("permission granted") recordPermission = true } else { print("permission denied") } Permission is granted. let settings: [String : Any] = [ AVFormatIDKey: kAudioFormatMPEG4AAC, AVSampleRateKey: 12000, AVNumberOfChannelsKey: 1, AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue ] recorder = try AVAudioRecorder(url: filename, settings: settings) let prepared = recorder.prepareToRecord() print("prepared started: \(prepared)") let started = recorder.record() print("recording started: \(started)") started is always false and I tried many settings. Error messages AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46 AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50 AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame prepared started: true AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5 recording started: false All examples I find are the same, but apparently there must be something different.
1
0
403
Oct ’25
Random EXC_BAD_ACCESS using AVFoundation
My app uses the AVFoundation to pronounce some words. Running the app from Xcode, either to a simulator or device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20%-30% of the time I launch the app. When it does not crash, using audio works as expected. When launched from the device, it never crashes (so far, at least). Here's the code that outputs speech: Declared at the top level of the View struct: @State var synth = AVSpeechSynthesizer() In the View, as part of a Button's action closure: let utterance = AVSpeechUtterance(string: answer) utterance.voice = AVSpeechSynthesisVoice(language: "en_US") synth.speak(utterance) Any idea on how to stop this? It's annoying having to launch the app multiple times to test on a simulator or device.
1
0
602
Mar ’26
No mic capture on iOS 18.5
Hello! We stumbled upon a problem with our karaoke app where user on iPhone 16e/iOS 18.5 has problem with mic capture, other users cannot hear him. The mic capture is working fine on 17.5, 16.8. Maybe there is something else we need when configuring AVAudioSession for iOS 18.5? Currently it's set up like this: override func viewDidLoad() { super.viewDidLoad() UIApplication.shared.isIdleTimerDisabled = true mRoomId = appDelegate.getRoomId() let audioSession = AVAudioSession.sharedInstance() try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker]) try! audioSession.setPreferredSampleRate(48000) try! audioSession.setActive(true, options: []) }
1
0
303
Nov ’25
AudioHardwareCreateProcessTap delivers all-zero buffers while system audio is audible
Summary Using AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice for system audio capture. During long sessions, the AudioDeviceIOProc callback continues firing normally but every PCM sample is exactly 0.0f — while the system is producing audible output. Environment Field Value macOS 26.5 Beta Hardware MacBook Air (M2) API AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice Tap CATapDescription, processes = [], .unmuted, private Format 48,000 Hz, Float32, interleaved stereo Aggregate anchor kAudioAggregateDeviceMainSubDeviceKey = current default output UID Observed behavior After running normally for several minutes, the stream transitions into an all-zero state: AudioDeviceIOProc continues to fire at expected cadence Frame count, timestamps (mHostTime, mSampleTime), and mDataByteSize all look normal AudioBufferList pointers are valid Every sample in every buffer is exactly 0.0f Other apps are still producing audible output through the same output device The condition may self-recover or persist until the session is stopped Confirmed via RMS logging both inside the IOProc and after the ring buffer consumer — data is zero on delivery, not introduced downstream. Example: 51-minute session on MacBook Air M2 Segment 1 (~7 min): Three all-zero periods: 60 s, 53 s, 141 s. Real PCM briefly returned between them. Segment 2 (~44 min): Two all-zero periods: 16 min 3 s, 3 min 8 s. IOProc cadence, timestamp deltas, default output UID, and kAudioDevicePropertyDeviceIsRunningSomewhere all remained normal throughout. What I have ruled out Actual silence: User was in an active video call and could hear participants through the output device. Default output device change: Monitored kAudioHardwarePropertyDefaultOutputDevice — no change during affected periods. IOProc stall: Heartbeat and kAudioDevicePropertyDeviceIsRunningSomewhere remained normal. Aggregate device destroyed: AudioObjectGetPropertyData on the aggregate UID continued returning the expected device. Tap descriptor misconfiguration: The same tap produces valid PCM earlier in the same session, so this is not a startup-time issue. Why detection is hard All-zero buffers from a broken tap are indistinguishable from legitimate silence (muted participant, waiting room, paused media). kAudioProcessPropertyIsRunningOutput reports whether a process has active output IO, not whether it is contributing non-zero samples — a muted Zoom call still reports true. Possible correlations Sample-rate renegotiation on the output device (44.1 kHz ↔ 48 kHz) when another app changes output Bluetooth device state changes (AirPods sleep/wake) where UID stays the same MacBook Air more frequently affected than MacBook Pro Always occurs after extended uptime — first few minutes are consistently clean Current workaround Full teardown and rebuild restores real PCM. Restarting the IOProc alone or recreating only the aggregate device is not reliable — both the Process Tap and Aggregate Device must be destroyed and recreated. 1. AudioDeviceStop 2. AudioDeviceDestroyIOProcID 3. AudioHardwareDestroyAggregateDevice 4. AudioHardwareDestroyProcessTap 5. AudioHardwareCreateProcessTap 6. AudioHardwareCreateAggregateDevice 7. Create + start new IOProc Applying this automatically is risky because it cannot be reliably distinguished from legitimate silence. Questions Expected failure mode? Can a Process Tap continue delivering zero-filled buffers while the system output is audible? Is this expected under certain device or routing conditions? Detection signal? Is there any HAL property, notification, or diagnostic counter that distinguishes "sources are genuinely silent" from "the tap data path has stopped receiving the real mix"? Targeted recovery? Is there a supported way to re-anchor or reset the tap data path without destroying and recreating both objects? Full rebuild as intended workaround? If so, it would help to confirm this so developers can converge on a consistent approach. Mixer activity signal? kAudioProcessPropertyIsRunningOutput reflects IO registration, not sample contribution. Is there any AudioProcess property that indicates a process is currently delivering non-zero audio to the system mixer?
0
0
227
1w
Android MusicKit canSetRadioLikeState and setRadioLikeState
The Android MusicKit documentation documents two functions that are not actually exposed/added to the SDK. https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#canSetRadioLikeState-- https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#setRadioLikeState-int- Is the documentation stale or is the SDK out of date?
0
0
160
Apr ’26
Playing periodic audio in background using AVFoundation - facing audio session startup failure
Hello everyone, I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes. Following AVFoundation documentation, I’m configuring my audio session like this: let session = AVAudioSession.sharedInstance() try session.setCategory( .playback, mode: .default, options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers] ) self.engine.attach(self.player) self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat) try? session.setActive(true) When it’s time to play cues, I schedule playback on a DispatchQueue: // scheduleAudio uses DispatchQueue self.scheduleAudio(at: interval.start) { do { try audio.engine.start() audio.node.play() for sample in interval.samples { audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime)) } } catch { print("Audio activation failed: \(error)") } } This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905. Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected. I have added the required background audio mode to my Info plist file by including the key UIBackgroundModes with the value audio. Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background? Any advice or pointers would be greatly appreciated!
0
0
237
Jul ’25
ScaleTimeRange will cause noise in sound
I'm using AVFoundation to make a multi-track editor app, which can insert multiple track and clip, including scale some clip to change the speed of the clip, (also I'm not sure whether AVFoundation the best choice for me) but after making the scale with scaleTimeRange API, there is some short noise sound in play back. Also, sometimes it's fine when play AVMutableCompostion using AVPlayer with AVPlayerItem, but after exporting with AVAssetReader, will catch some short noise sounds in result file.... Not sure why. Here is the example project, which can build and run directly. https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
0
0
144
Jul ’25
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio-Midi Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing? func setDefaultChannelsOutput() { guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return } let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem if selectedIndex < 0 || selectedIndex >= 24 { return } let channel1 = UInt32(selectedIndex * 2 + 1) let channel2 = UInt32(selectedIndex * 2 + 2) var channels: [UInt32] = [channel1, channel2] var propertyAddress = AudioObjectPropertyAddress( mSelector: kAudioDevicePropertyPreferredChannelsForStereo, mScope: kAudioDevicePropertyScopeOutput, mElement: kAudioObjectPropertyElementWildcard ) let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count) let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels) if status != noErr { print("Error setting default output channels: \(status)") } }
0
0
307
Dec ’25
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods be sent to my speech to text service and at the same time have the built in mic audio input be sent to recording a video? I ask because I want my users to be able to say "CAPTURE" and I start recording a video (with audio from the built in mic) and then when the user says "STOP" I stop the recording.
2
0
1.3k
Mar ’26
iOS 26 Beta Personal Voice bug affecting AVSpeechSynthesizer
I have sent in a feedback report (FB18222398) but I have no idea if anyone has looked at it. I know from past experiences that Apple devs do look at these forums. This applies to each of the betas, 1, 2 and 3. I have created a new Personal Voice with each beta. I create a personal voice in English. When it's done processing, I tap Preview and it says in English what is expected. But after some time, an hour or a day, the language of the voice file changes languages and no longer works properly. If I press Preview it is no longer intelligible. I have a text to speech app and initially the created voice works but then when the language of the file changes, it no longer works. I have run an app on my iphone through Xcode that prints to the console the voices installed on the device with the language. Currently this is the voice file: Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D Language: es-MX and on a second device the same personal voice is in a different language: Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D Language: zh-CN Although, a previous personal voice file that listed as Spanish-Mexican played in English with a Spanish accent or when playing Spanish text, it sounded almost perfect. This current personal voice doesn't do that, and is unintelligible. Previous attempts have converted to Chinese. I hope someone can look into this.
2
0
596
Dec ’25
SpeechTranscriber not supported
I've tried SpeechTranscriber with a lot of my devices (from iPhone 12 series ~ iPhone 17 series) without issues. However, SpeechTranscriber.isAvailable value is false for my iPhone 11 Pro. https://developer.apple.com/documentation/speech/speechtranscriber/isavailable I'am curious why the iPhone 11 Pro device is not supported. Are all iPhone 11 series not supported intentionally? Or is there any problem with my specific device? I've also checked the supportedLocales, and the value is an empty array. https://developer.apple.com/documentation/speech/speechtranscriber/supportedlocales
5
0
934
Mar ’26
Live Translations on VOIP on iOS26
Hi team, With regards to Call (Live) Translations on VOIP: Is it possible to invoke live translations within the app? (without going into the Call System UI) Is it possible to navigate users from app to Call System UI via an API? (and also invoking the new live translations directly) Will Apple support more languages apart from the current ones? (Currently I see 4 supported languages)
1
0
180
Aug ’25
Correct way for an Audio Unit v3 to return fewer than requested number of samples given a buffer
I have an AUv3 plugin which uses an FFT - which requires n samples before it can produce any output - so, depending on the relation between the host's buffer size and the FFT window size, it may receive a several buffers of samples, producing no output, and then dumping out what it has once a sufficient number of samples have been received. This means that output is produced in fits and starts, in batches that match the FFT size (modulo oversampling) - e.g. if being fed buffers of 256 samples with an fft size of 1024, the output buffer sizes will be 0 for the first 3 buffers, and upon the fourth, the first 256 processed samples are returned and the remaining 768 cached; the next three buffers will return the remaining cached samples while processing and buffering subsequent ones, and so forth. The internal mechanics of that I have solved, caching output if the current output buffer is too small, and so forth - so it all works as advertised, and the plugin reports its latency correctly. And when run as an app in demo-mode, playback works as expected. In the plugin's render block, it captures the number of frames written, and if it is less than the number of frames passed in, adjusts the mDataByteSize of the output buffers to match the actual quantity of data being returned: unsigned int framesWritten = (unsigned int) processHelper->processWithEvents(inAudioBufferList, outAudioBufferList, timestamp, frameCount, realtimeEventListHead); if (framesWritten < frameCount) { for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) { outAudioBufferList->mBuffers[i].mDataByteSize = framesWritten * 4; // assume 4 byte floats } } However, there are a couple of serious issues: auval -v fails it with - Render Test at 64 frames, sample rate: 22050 Hz ERROR: Output Buffer Size does not match requested When connected to Logic Pro, it appears that mDataByteSize is ignored, and the entire allocated buffer is read - audio has sections of silence snipped into it which corresponds the number of empty buffers being returned If I set Logic's buffer size to 1024 and use a 1024 sample FFT window, the plugin works correctly - but of course a plugin cannot dictate buffer size, and `1024 is too small a window size to be useful for anything but filtering very high frequencies This seems like it has to be a solvable problem, and most likely the issue is in how my code reports the number of usable samples in the returned buffer. So, what is the correct way for a plugin to report that it has no samples to return, but will, uh, real soon now? I know I could convert this plugin to be one that does offline rendering of the entire input, but this is real-time processing, just with a fixed amount of latency, so that should not be necessary.
0
0
516
Nov ’25
SpeechTranscriber supported Devices
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine if the device I am running on supports SpeechTranscriber. I was able to create code that tests if the device supports transcription but it takes a bit of time to run and thus the results are not available when the app launches. What I am looking for is a list of what iOS 26 devices it doesn't run on. I think its safe to assume any new devices will support it so if we can just have a list of what devices that can run iOS 26 and not able to do transcription it would be much faster for the app. I have determined it doesn't work on a SE 2nd Gen, it works on iPhone 12, SE 3rd Gen, iPhone 14 Pro, 15 Pro. As the SpeechTranscriber doesn't work in the simulator I can't determine that way. I have checked the docs and it doesn't list the devices it doesn't work on.
1
0
573
Nov ’25
CarPlay HID transport buttons remap to call-control during continuous mic capture (no opt-out API)
Hello, I am developing Uniq Intercom, a voice-only group communication app for motorcyclists (always-on intercom over WebRTC, used continuously for multi-hour rides). I am seeking guidance on an iOS audio session and CarPlay HID interaction I have not been able to resolve through documented APIs. Problem: As soon as my app activates the microphone (yellow recording indicator visible), iOS appears to classify the app as a real-time communication participant and CarPlay re-routes the steering-wheel / handlebar HID transport buttons (left / right / ok) from the media-control role to the call-control role (answer/decline). Because I do not register a CallKit / LiveCommunicationKit call (the session is a continuous group voice channel, not a discrete telephony call), there is no call object for those buttons to act upon — they effectively become inert. Why this matters: Motorcyclists rely on the intercom for 4–6 hour rides. CarPlay is now built into a growing number of modern motorcycles and with aftermarket display units virtually any bike, and any rider who uses any voice communication platform alongside it — Uniq Intercom, WhatsApp Call currently runs into this same handlebar button remap. With the buttons inert, the rider's only remaining option is to reach for the motorcycle's touchscreen to skip a track or change navigation — this is unsafe. The exact same remap behavior occurs during a real WhatsApp or Phone call — but for those the remap is correct (answer/decline maps to a real call). For continuous voice apps without a CallKit-style discrete call, no equivalent path exists today. As both an iOS developer and a motorcyclist, I would very much like to see this resolved — solving it would meaningfully improve safety for every rider using an iPhone with CarPlay. Configurations I have tested (all reproduce the symptom on iOS 18+ / 26 with wireless CarPlay): AVAudioSession.Category.playAndRecord + .voiceChat mode + various option combinations (duckOthers, mixWithOthers, allowBluetoothHFP, allowBluetoothA2DP, defaultToSpeaker) Same category with .videoChat mode (which @livekit/react-native defaults to) Same category with .default mode (re-applied after setAudioModeAsync to defeat any framework override) — confirmed Mode = Default for ~2 s window in audiomxd log before WebRTC's setActive cycle returned mode to .voiceChat. Buttons remained remapped during the .default window. Disabling MPRemoteCommandCenter and clearing MPNowPlayingInfoCenter.default().nowPlayingInfo JS-side override of WebRTC's global RTCAudioSessionConfiguration via @livekit/react-native's AudioSession.setAppleAudioConfiguration({audioMode: 'default'}) bridge, applied both before connect and after setAudioModeAsync to defeat library overrides In every case the audiomxd system log confirms our session goes active (Mode = VoiceChat or Default, Recording = YES), and CarPlay HID buttons are immediately remapped to call-control. The middle "OK" button remains functional because it is not part of the call-control mapping — confirming the buttons are not blocked, only re-purposed. The remap occurs the instant the iOS recording indicator appears, regardless of audio session mode. This led me to conclude the trigger is not audio session mode but the combination of microphone permission + active session + (likely) the AUVoiceIO unit instantiated by WebRTC. I cannot find a public API path to suppress this classification while maintaining the always-on continuous voice channel. My questions: Is there an entitlement or API that allows an app with active microphone capture to declare itself as a non-call media participant, keeping CarPlay HID transport buttons in the media role? Is AVAudioSession.setPrefersEchoCancelledInput(_:) (iOS 18+) the intended path for retaining platform AEC under .default mode without the focus-engine "communication priority" marking? Documentation is sparse on its CarPlay arbitration implications. Does the PushToTalk framework affect HID arbitration differently from playAndRecord + voiceChat? Our continuous-channel UX does not fit the PTT transmit-on-press model, but understanding the contrast would help. If no current API exists, is this something the iOS Audio team would consider for future SDKs? Solving this would meaningfully improve safety for motorcycle and adventure-sport users on iOS. Thank you for your time and any guidance you can offer. — Emre Erkaya / Uniq Intercom
Replies
1
Boosts
0
Views
141
Activity
1w
AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4 channel USB Audio interface, with 4 microphones, and want to process them through 4 independent effect chains. However the output from AVAudioInputNode is a single 4 channel bus. How can I split this into 4 mono busses? The following code splits the input into 4 copies, and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node but that had no effect. I'm currently using this code primarily on iOS but it should be portable between iOS and MacOS. It would be possible to do this through a Matrix Mixer Node, but that seems completely overkill, for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine. AVAudioInputNode *inputNode=[engine inputNode]; [inputNode setVoiceProcessingEnabled:NO error:nil]; NSMutableArray *micDestinations=[NSMutableArray arrayWithCapacity:trackCount]; for(i=0;i<trackCount;i++) { fixMicFormat[i]=[AVAudioMixerNode new]; [engine attachNode:fixMicFormat[i]]; // And create reverb/compressor and eq the same way... [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil]; [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil]; [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil]; [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil]; [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0] ]; } AVAudioFormat *inputFormat = [inputNode outputFormatForBus: 1]; [engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
Replies
2
Boosts
0
Views
357
Activity
Oct ’25
Keeping PiP alive during third-party video recording (camera capture)
I’m building a teleprompter-style app that relies on Picture in Picture. PiP starts correctly on device. Everything works — until another app (e.g. TikTok / Instagram) starts active video recording. When camera capture begins in the foreground app, iOS terminates my PiP session. Some teleprompter apps appear to keep PiP active while recording in other apps, so I’m trying to understand the recommended architectural pattern for this scenario. Is there a documented approach or best practice to keep PiP stable during third-party camera capture? Looking specifically for guidance on the correct AVKit / AVAudioSession configuration for this use case.
Replies
0
Boosts
0
Views
416
Activity
Feb ’26
Audio of AirPods won’t work
Since the last update to IOS 26.0 (23A5276f) the AirPods connect to my IPhone and the Audio is still running through the phone. They are shown in the Bluetooth Icon that they’re paired.
Replies
1
Boosts
0
Views
101
Activity
Jun ’25
Entitlement "com.apple.developer.carplay-driving-task" not allowing audio playback for voice controlled interaction
According to https://developer.apple.com/download/files/CarPlay-Developer-Guide.pdf , apps with entitlement com.apple.developer.carplay-driving-task are allowed to use voice control. In my current implementation the voice recording working fine but the voice response (AVPlayer with category "playback set") does not output any audio. I suspect that it is a entitlement limitation because if I quickly tap to play a music while the voice assistant AVPlayer is "playing", then I can hear the response, but without this trick it stays playing but mute. In parallel I have now requested com.apple.developer.carplay-voice-based-conversation entitlement , but I don't even know if when approved I will be able to use 2 entitlement for the same CarPlay app. Long story short: 1 - Should an app be able to play audio responses when it's CarPlay entitlement is com.apple.developer.carplay-driving-task? 2 - If not, can I combine entitlements com.apple.developer.carplay-driving-task and com.apple.developer.carplay-voice-based-conversation?
Replies
1
Boosts
0
Views
486
Activity
1w
iOS - record audio fails to record
Hi, I try to record audio on the iPhone with the AVAudioRecorder and Xcode 26.0.1. Maybe the problem is that I can not record audio with the simulator. But there's a menu for audio. In the plist I added 'Privacy - Microphone Usage Description' and I ask for permission before recording. if await AVAudioApplication.requestRecordPermission() { print("permission granted") recordPermission = true } else { print("permission denied") } Permission is granted. let settings: [String : Any] = [ AVFormatIDKey: kAudioFormatMPEG4AAC, AVSampleRateKey: 12000, AVNumberOfChannelsKey: 1, AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue ] recorder = try AVAudioRecorder(url: filename, settings: settings) let prepared = recorder.prepareToRecord() print("prepared started: \(prepared)") let started = recorder.record() print("recording started: \(started)") started is always false and I tried many settings. Error messages AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46 AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50 AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame prepared started: true AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5 recording started: false All examples I find are the same, but apparently there must be something different.
Replies
1
Boosts
0
Views
403
Activity
Oct ’25
Random EXC_BAD_ACCESS using AVFoundation
My app uses the AVFoundation to pronounce some words. Running the app from Xcode, either to a simulator or device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20%-30% of the time I launch the app. When it does not crash, using audio works as expected. When launched from the device, it never crashes (so far, at least). Here's the code that outputs speech: Declared at the top level of the View struct: @State var synth = AVSpeechSynthesizer() In the View, as part of a Button's action closure: let utterance = AVSpeechUtterance(string: answer) utterance.voice = AVSpeechSynthesisVoice(language: "en_US") synth.speak(utterance) Any idea on how to stop this? It's annoying having to launch the app multiple times to test on a simulator or device.
Replies
1
Boosts
0
Views
602
Activity
Mar ’26
No mic capture on iOS 18.5
Hello! We stumbled upon a problem with our karaoke app where user on iPhone 16e/iOS 18.5 has problem with mic capture, other users cannot hear him. The mic capture is working fine on 17.5, 16.8. Maybe there is something else we need when configuring AVAudioSession for iOS 18.5? Currently it's set up like this: override func viewDidLoad() { super.viewDidLoad() UIApplication.shared.isIdleTimerDisabled = true mRoomId = appDelegate.getRoomId() let audioSession = AVAudioSession.sharedInstance() try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker]) try! audioSession.setPreferredSampleRate(48000) try! audioSession.setActive(true, options: []) }
Replies
1
Boosts
0
Views
303
Activity
Nov ’25
AudioHardwareCreateProcessTap delivers all-zero buffers while system audio is audible
Summary Using AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice for system audio capture. During long sessions, the AudioDeviceIOProc callback continues firing normally but every PCM sample is exactly 0.0f — while the system is producing audible output. Environment Field Value macOS 26.5 Beta Hardware MacBook Air (M2) API AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice Tap CATapDescription, processes = [], .unmuted, private Format 48,000 Hz, Float32, interleaved stereo Aggregate anchor kAudioAggregateDeviceMainSubDeviceKey = current default output UID Observed behavior After running normally for several minutes, the stream transitions into an all-zero state: AudioDeviceIOProc continues to fire at expected cadence Frame count, timestamps (mHostTime, mSampleTime), and mDataByteSize all look normal AudioBufferList pointers are valid Every sample in every buffer is exactly 0.0f Other apps are still producing audible output through the same output device The condition may self-recover or persist until the session is stopped Confirmed via RMS logging both inside the IOProc and after the ring buffer consumer — data is zero on delivery, not introduced downstream. Example: 51-minute session on MacBook Air M2 Segment 1 (~7 min): Three all-zero periods: 60 s, 53 s, 141 s. Real PCM briefly returned between them. Segment 2 (~44 min): Two all-zero periods: 16 min 3 s, 3 min 8 s. IOProc cadence, timestamp deltas, default output UID, and kAudioDevicePropertyDeviceIsRunningSomewhere all remained normal throughout. What I have ruled out Actual silence: User was in an active video call and could hear participants through the output device. Default output device change: Monitored kAudioHardwarePropertyDefaultOutputDevice — no change during affected periods. IOProc stall: Heartbeat and kAudioDevicePropertyDeviceIsRunningSomewhere remained normal. Aggregate device destroyed: AudioObjectGetPropertyData on the aggregate UID continued returning the expected device. Tap descriptor misconfiguration: The same tap produces valid PCM earlier in the same session, so this is not a startup-time issue. Why detection is hard All-zero buffers from a broken tap are indistinguishable from legitimate silence (muted participant, waiting room, paused media). kAudioProcessPropertyIsRunningOutput reports whether a process has active output IO, not whether it is contributing non-zero samples — a muted Zoom call still reports true. Possible correlations Sample-rate renegotiation on the output device (44.1 kHz ↔ 48 kHz) when another app changes output Bluetooth device state changes (AirPods sleep/wake) where UID stays the same MacBook Air more frequently affected than MacBook Pro Always occurs after extended uptime — first few minutes are consistently clean Current workaround Full teardown and rebuild restores real PCM. Restarting the IOProc alone or recreating only the aggregate device is not reliable — both the Process Tap and Aggregate Device must be destroyed and recreated. 1. AudioDeviceStop 2. AudioDeviceDestroyIOProcID 3. AudioHardwareDestroyAggregateDevice 4. AudioHardwareDestroyProcessTap 5. AudioHardwareCreateProcessTap 6. AudioHardwareCreateAggregateDevice 7. Create + start new IOProc Applying this automatically is risky because it cannot be reliably distinguished from legitimate silence. Questions Expected failure mode? Can a Process Tap continue delivering zero-filled buffers while the system output is audible? Is this expected under certain device or routing conditions? Detection signal? Is there any HAL property, notification, or diagnostic counter that distinguishes "sources are genuinely silent" from "the tap data path has stopped receiving the real mix"? Targeted recovery? Is there a supported way to re-anchor or reset the tap data path without destroying and recreating both objects? Full rebuild as intended workaround? If so, it would help to confirm this so developers can converge on a consistent approach. Mixer activity signal? kAudioProcessPropertyIsRunningOutput reflects IO registration, not sample contribution. Is there any AudioProcess property that indicates a process is currently delivering non-zero audio to the system mixer?
Replies
0
Boosts
0
Views
227
Activity
1w
Android MusicKit canSetRadioLikeState and setRadioLikeState
The Android MusicKit documentation documents two functions that are not actually exposed/added to the SDK. https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#canSetRadioLikeState-- https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#setRadioLikeState-int- Is the documentation stale or is the SDK out of date?
Replies
0
Boosts
0
Views
160
Activity
Apr ’26
Playing periodic audio in background using AVFoundation - facing audio session startup failure
Hello everyone, I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes. Following AVFoundation documentation, I’m configuring my audio session like this: let session = AVAudioSession.sharedInstance() try session.setCategory( .playback, mode: .default, options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers] ) self.engine.attach(self.player) self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat) try? session.setActive(true) When it’s time to play cues, I schedule playback on a DispatchQueue: // scheduleAudio uses DispatchQueue self.scheduleAudio(at: interval.start) { do { try audio.engine.start() audio.node.play() for sample in interval.samples { audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime)) } } catch { print("Audio activation failed: \(error)") } } This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905. Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected. I have added the required background audio mode to my Info plist file by including the key UIBackgroundModes with the value audio. Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background? Any advice or pointers would be greatly appreciated!
Replies
0
Boosts
0
Views
237
Activity
Jul ’25
ScaleTimeRange will cause noise in sound
I'm using AVFoundation to make a multi-track editor app, which can insert multiple track and clip, including scale some clip to change the speed of the clip, (also I'm not sure whether AVFoundation the best choice for me) but after making the scale with scaleTimeRange API, there is some short noise sound in play back. Also, sometimes it's fine when play AVMutableCompostion using AVPlayer with AVPlayerItem, but after exporting with AVAssetReader, will catch some short noise sounds in result file.... Not sure why. Here is the example project, which can build and run directly. https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
Replies
0
Boosts
0
Views
144
Activity
Jul ’25
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio-Midi Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing? func setDefaultChannelsOutput() { guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return } let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem if selectedIndex < 0 || selectedIndex >= 24 { return } let channel1 = UInt32(selectedIndex * 2 + 1) let channel2 = UInt32(selectedIndex * 2 + 2) var channels: [UInt32] = [channel1, channel2] var propertyAddress = AudioObjectPropertyAddress( mSelector: kAudioDevicePropertyPreferredChannelsForStereo, mScope: kAudioDevicePropertyScopeOutput, mElement: kAudioObjectPropertyElementWildcard ) let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count) let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels) if status != noErr { print("Error setting default output channels: \(status)") } }
Replies
0
Boosts
0
Views
307
Activity
Dec ’25
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods be sent to my speech to text service and at the same time have the built in mic audio input be sent to recording a video? I ask because I want my users to be able to say "CAPTURE" and I start recording a video (with audio from the built in mic) and then when the user says "STOP" I stop the recording.
Replies
2
Boosts
0
Views
1.3k
Activity
Mar ’26
iOS 26 Beta Personal Voice bug affecting AVSpeechSynthesizer
I have sent in a feedback report (FB18222398) but I have no idea if anyone has looked at it. I know from past experiences that Apple devs do look at these forums. This applies to each of the betas, 1, 2 and 3. I have created a new Personal Voice with each beta. I create a personal voice in English. When it's done processing, I tap Preview and it says in English what is expected. But after some time, an hour or a day, the language of the voice file changes languages and no longer works properly. If I press Preview it is no longer intelligible. I have a text to speech app and initially the created voice works but then when the language of the file changes, it no longer works. I have run an app on my iphone through Xcode that prints to the console the voices installed on the device with the language. Currently this is the voice file: Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D Language: es-MX and on a second device the same personal voice is in a different language: Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D Language: zh-CN Although, a previous personal voice file that listed as Spanish-Mexican played in English with a Spanish accent or when playing Spanish text, it sounded almost perfect. This current personal voice doesn't do that, and is unintelligible. Previous attempts have converted to Chinese. I hope someone can look into this.
Replies
2
Boosts
0
Views
596
Activity
Dec ’25
SpeechTranscriber not supported
I've tried SpeechTranscriber with a lot of my devices (from iPhone 12 series ~ iPhone 17 series) without issues. However, SpeechTranscriber.isAvailable value is false for my iPhone 11 Pro. https://developer.apple.com/documentation/speech/speechtranscriber/isavailable I'am curious why the iPhone 11 Pro device is not supported. Are all iPhone 11 series not supported intentionally? Or is there any problem with my specific device? I've also checked the supportedLocales, and the value is an empty array. https://developer.apple.com/documentation/speech/speechtranscriber/supportedlocales
Replies
5
Boosts
0
Views
934
Activity
Mar ’26
Live Translations on VOIP on iOS26
Hi team, With regards to Call (Live) Translations on VOIP: Is it possible to invoke live translations within the app? (without going into the Call System UI) Is it possible to navigate users from app to Call System UI via an API? (and also invoking the new live translations directly) Will Apple support more languages apart from the current ones? (Currently I see 4 supported languages)
Replies
1
Boosts
0
Views
180
Activity
Aug ’25
Correct way for an Audio Unit v3 to return fewer than requested number of samples given a buffer
I have an AUv3 plugin which uses an FFT - which requires n samples before it can produce any output - so, depending on the relation between the host's buffer size and the FFT window size, it may receive a several buffers of samples, producing no output, and then dumping out what it has once a sufficient number of samples have been received. This means that output is produced in fits and starts, in batches that match the FFT size (modulo oversampling) - e.g. if being fed buffers of 256 samples with an fft size of 1024, the output buffer sizes will be 0 for the first 3 buffers, and upon the fourth, the first 256 processed samples are returned and the remaining 768 cached; the next three buffers will return the remaining cached samples while processing and buffering subsequent ones, and so forth. The internal mechanics of that I have solved, caching output if the current output buffer is too small, and so forth - so it all works as advertised, and the plugin reports its latency correctly. And when run as an app in demo-mode, playback works as expected. In the plugin's render block, it captures the number of frames written, and if it is less than the number of frames passed in, adjusts the mDataByteSize of the output buffers to match the actual quantity of data being returned: unsigned int framesWritten = (unsigned int) processHelper->processWithEvents(inAudioBufferList, outAudioBufferList, timestamp, frameCount, realtimeEventListHead); if (framesWritten < frameCount) { for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) { outAudioBufferList->mBuffers[i].mDataByteSize = framesWritten * 4; // assume 4 byte floats } } However, there are a couple of serious issues: auval -v fails it with - Render Test at 64 frames, sample rate: 22050 Hz ERROR: Output Buffer Size does not match requested When connected to Logic Pro, it appears that mDataByteSize is ignored, and the entire allocated buffer is read - audio has sections of silence snipped into it which corresponds the number of empty buffers being returned If I set Logic's buffer size to 1024 and use a 1024 sample FFT window, the plugin works correctly - but of course a plugin cannot dictate buffer size, and `1024 is too small a window size to be useful for anything but filtering very high frequencies This seems like it has to be a solvable problem, and most likely the issue is in how my code reports the number of usable samples in the returned buffer. So, what is the correct way for a plugin to report that it has no samples to return, but will, uh, real soon now? I know I could convert this plugin to be one that does offline rendering of the entire input, but this is real-time processing, just with a fixed amount of latency, so that should not be necessary.
Replies
0
Boosts
0
Views
516
Activity
Nov ’25
Background audio player issue
I am work an app development on an app which request an audio function in background as an alert sound. during debug testing , the function work fine, but once I testing standalone without debugging , The function not work , it will play out the sound when I back to app. does any way to trace the issues ?
Replies
0
Boosts
0
Views
182
Activity
May ’25
SpeechTranscriber supported Devices
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine if the device I am running on supports SpeechTranscriber. I was able to create code that tests if the device supports transcription but it takes a bit of time to run and thus the results are not available when the app launches. What I am looking for is a list of what iOS 26 devices it doesn't run on. I think its safe to assume any new devices will support it so if we can just have a list of what devices that can run iOS 26 and not able to do transcription it would be much faster for the app. I have determined it doesn't work on a SE 2nd Gen, it works on iPhone 12, SE 3rd Gen, iPhone 14 Pro, 15 Pro. As the SpeechTranscriber doesn't work in the simulator I can't determine that way. I have checked the docs and it doesn't list the devices it doesn't work on.
Replies
1
Boosts
0
Views
573
Activity
Nov ’25