Hi all,
I have been quite stumped on this behavior for a little while now, so I thought it best to share it here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I use for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine.
Once an AVAudioPlayerNode gets stopped this way (or any other way), all samples that were scheduled to be played are purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they are sent to the AVAudioPlayerNode for playback, and in each completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave the sample in the queue and assume that an observer of the route change or configuration change will notice there are samples in the queue and reschedule them.
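A rough sketch of the pattern I'm describing, in case it helps (names like pendingBuffers are placeholders, and thread safety is omitted):

import AVFoundation

final class OutputEngine {
    private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    private var pendingBuffers: [AVAudioPCMBuffer] = []   // buffers not yet confirmed as played
    private var observer: NSObjectProtocol?

    init() {
        engine.attach(playerNode)
        engine.connect(playerNode, to: engine.mainMixerNode, format: nil)
        observer = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            self?.restartAndReschedule()
        }
    }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        pendingBuffers.append(buffer)
        playerNode.scheduleBuffer(buffer) { [weak self] in
            // The completion handler fires even when the node was stopped,
            // so only drop the buffer if the player is still running.
            DispatchQueue.main.async {
                guard let self, self.playerNode.isPlaying, !self.pendingBuffers.isEmpty else { return }
                self.pendingBuffers.removeFirst()
            }
        }
        if !engine.isRunning { try? engine.start() }
        if !playerNode.isPlaying { playerNode.play() }
    }

    private func restartAndReschedule() {
        do {
            try engine.start()
            playerNode.play()
            // Re-submit everything that was purged when the engine reset itself.
            let purged = pendingBuffers
            pendingBuffers.removeAll()
            purged.forEach { schedule($0) }
        } catch {
            print("Failed to restart engine: \(error)")
        }
    }
}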
Thanks for any feedback!
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:
"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"
(related thread https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)
If I convert the buffer to AVAudioPCMFormatFloat32, playback works.
My questions are:
Does AVAudioEngine / AVAudioPlayerNode require the AVAudioPCMBuffer to be in the Float32 format? Is there a way I can configure it to accept another format instead for my application?
If the answer to 1 is yes, is this documented anywhere?
If the answer to 1 is yes, is this required format subject to change at any point?
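For reference, here's a minimal sketch of the kind of conversion that works for me before handing the buffer to the player node (assuming the sample rate is unchanged, so the simple convert(to:from:) API is enough):

import AVFoundation

func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let srcFormat = int16Buffer.format
    guard let dstFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: srcFormat.sampleRate,
                                        channels: srcFormat.channelCount,
                                        interleaved: false),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat),
          let outBuffer = AVAudioPCMBuffer(pcmFormat: dstFormat,
                                           frameCapacity: int16Buffer.frameCapacity) else {
        return nil
    }
    do {
        // PCM-to-PCM conversion at the same sample rate; no input block needed.
        try converter.convert(to: outBuffer, from: int16Buffer)
        return outBuffer
    } catch {
        print("Conversion failed: \(error)")
        return nil
    }
}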
Thanks!
I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014 but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
I have a SwiftUI Mac Catalyst app that shows a video player using an AVPlayerViewController wrapped in a UIViewControllerRepresentable.
When I tap the full screen button on the native playback control, the app crashes.
The app crashes only when built with Xcode 26. When I build with Xcode 16, this does not cause a crash.
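For context, this is roughly the wrapper I'm using (a trimmed sketch; the URL is a placeholder):

import SwiftUI
import AVKit

struct PlayerView: UIViewControllerRepresentable {
    let url: URL   // placeholder; the real app supplies its own asset

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer(url: url)
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {}
}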
Here is some of the crash log:
0 CoreFoundation 0x000000019a5cc770 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x000000019a0aa418 objc_exception_throw + 88
2 CoreFoundation 0x000000019a69b7fc -[NSException initWithCoder:] + 0
3 AppKit 0x000000019eeee1d0 -[NSBezierPath(NSBezierPathDevicePrimitives) _deviceMoveToPoint:] + 104
4 AppKit 0x000000019eeec930 -[NSBezierPath appendBezierPathWithRoundedRect:xRadius:yRadius:] + 200
5 AppKit 0x000000019eeea238 +[NSBezierPath bezierPathWithRoundedRect:xRadius:yRadius:] + 88
6 AVKitMacHelper 0x0000000247d73cc4 -[AVScrubberSliderCell drawBarInside:flipped:] + 1264
7 AppKit 0x000000019f35cf7c -[NSSliderCell drawInteriorWithFrame:inView:] + 680
8 AppKit 0x000000019f35ccbc -[NSSliderCell drawWithFrame:inView:] + 104
Any ideas!?
AlarmKit custom sounds are universally broken in iOS 26.0 stable: instead of playing your custom sound, it plays a system error/timeout beep.
I've spent days investigating why custom sounds result in what sounds like an error beep (like when you cancel an operation or hit a timeout) instead of the actual audio file. I can now prove this is an Apple bug, not an implementation error.
Evidence:
Test 1: My Implementation
Followed Apple's documentation exactly
Tried both bundle and Library/Sounds (as documented)
Result: System error beep (not my audio)
Test 2: Professional Apps
Tested ADHDAlarms (popular AlarmKit example by jacobsapps) https://github.com/jacobsapps/ADHDAlarms
Their airhorn.mp3 custom sound: same error beep (not an airhorn)
Their default sound: works perfectly
Test 3: Device Testing
Physical iPhone (iOS 26.0 - 23A341): broken
iOS Simulator: broken
Not device-specific
Files are found correctly, but the actual audio file is never played. Instead, you hear what sounds like a system error/cancellation tone.
What I've Eliminated
Not a Library/Sounds vs Bundle issue (both broken)
Not a file format issue (.mp3, .caf, .m4a all broken)
Not an implementation issue (professional apps broken too)
Not a device issue (simulator and device both broken)
Not a file size issue (5KB to 2MB all broken)
The Documentation Lie:
Apple's docs for AlertConfiguration.AlertSound.named(_:) state:
"Choose a file that's in your app's main bundle or the Library/Sounds folder"
https://developer.apple.com/documentation/activitykit/alertconfiguration/alertsound/named(_:)
Both locations are broken.
Tested on: iOS 26.0 (23A341), Xcode 26.0.1, Swift 6.2
Impact:
This affects any app trying to:
Provide personalized wake-up sounds
Use custom alarm tones
Create meditation/sleep apps
Differentiate from default iOS alarms
Current Status:
Multiple bug reports filed: FB19900024, FB18237648, FB19779004
Apple engineer claimed "fixed in latest beta" in August
Still broken in iOS 26.0 stable (September)
Workaround:
None that I know of. You must use the .default sound.
For apps needing custom audio, play it with AVAudioPlayer after the alarm fires and the user opens the app.
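A rough sketch of that fallback (the file name and class name are placeholders):

import AVFoundation

final class WorkaroundSoundPlayer {
    private var player: AVAudioPlayer?

    // Play a bundled custom sound once the user opens the app after the alarm fires.
    func playCustomSound() {
        guard let url = Bundle.main.url(forResource: "wakeup", withExtension: "m4a") else { return }
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
            try AVAudioSession.sharedInstance().setActive(true)
            player = try AVAudioPlayer(contentsOf: url)
            player?.play()
        } catch {
            print("Playback failed: \(error)")
        }
    }
}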
Question:
Has ANYONE gotten custom AlarmKit sounds working in iOS 26.0 stable? If so, please help; I'd be so grateful.
Hi there,
We're working on offline playback of DRM tracks. The persistent keys (also known as track licenses) for offline playback are stored locally on the device and are served from cache when a user initiates playback of a downloaded track.
Our persistent keys have a limited validity time and need to be refreshed when they expire. To prevent a situation where a persistent key expires while the user is offline, we've decided to eagerly refresh these keys one week before their expiration date. To make that happen we need to be able to obtain the expiration date of the given track license.
We've been attempting to use the makeSecureTokenForExpirationDateOfPersistableContentKey API to facilitate this process. The documentation states that this API returns a secret token representing the persistent key, which we can then exchange with our license server for the expiration date: https://developer.apple.com/documentation/avfoundation/avcontentkeysession/makesecuretokenforexpirationdate(ofpersistablecontentkey:completionhandler:)?language=objc
However, every time we call makeSecureTokenForExpirationDateOfPersistableContentKey, we receive an error with code -46250. We haven't been able to find any public references or documentation for this specific error code, which is preventing us from troubleshooting the issue. We are conducting our tests on a physical device, as the simulator does not support FairPlay playback. We don't use dual expiry approach.
Is our understanding of how to obtain the expiration timestamp correct? Are we using the makeSecureTokenForExpirationDateOfPersistableContentKey API as it was intended? What does the -46250 error code mean, and what steps should we take to fix our FairPlay implementation to make this work?
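For reference, this is roughly how we call it (a sketch; contentKeySession and persistentKeyData come from our existing persistable-key flow):

import AVFoundation

func fetchExpirationToken(for persistentKeyData: Data,
                          using contentKeySession: AVContentKeySession) {
    contentKeySession.makeSecureToken(
        forExpirationDateOfPersistableContentKey: persistentKeyData
    ) { token, error in
        if let error = error as NSError? {
            // This is where we consistently see code -46250.
            print("Token request failed: \(error.domain) \(error.code)")
            return
        }
        // The plan: exchange `token` with our license server for the expiration date.
        if let token {
            print("Received \(token.count)-byte secure token")
        }
    }
}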
Thanks in advance for your assistance.
I am able to capture 48MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12MP?
Is there a way to capture 48MP photos using .builtInTripleCamera? .builtInTripleCamera provides a smooth transition between cameras during zooming, and I'd like to keep this behavior.
The new iPhone 17 Pro has all of its cameras at 48MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48MP? Or is this an API limitation?
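For anyone who wants to check on their own device, here is a small diagnostic sketch that dumps what the triple camera's formats report:

import AVFoundation

func logTripleCameraPhotoDimensions() {
    guard let device = AVCaptureDevice.default(.builtInTripleCamera,
                                               for: .video,
                                               position: .back) else { return }
    for format in device.formats {
        // supportedMaxPhotoDimensions lists the photo sizes each format can deliver.
        let dims = format.supportedMaxPhotoDimensions
            .map { "\($0.width)x\($0.height)" }
            .joined(separator: ", ")
        print(format.formatDescription, "->", dims)
    }
}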
Hi,
I’m trying to configure camera feed in ARKit to be in Apple Log color space.
I can change Capture Device’s format to one that has Apple Log and I see one frame being in proper log-gray colors but then all AR tracking stops and tracking state hangs at “initializing”. In other combinations I see error “sensor failed to initialize” and session restarts with default format.
I suspect that this is because normal AR capture formats are 420f, whereas ones that have Apple Log are 422.
Could someone confirm if it’s even possible to run ARKit session with camera feed in a different pixel format?
I’m trying this on an iPhone 15 Pro.
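For context, this is roughly the configuration step I'm attempting (a sketch; error handling trimmed):

import ARKit
import AVFoundation

func tryEnableAppleLog() throws {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if let logFormat = device.formats.first(where: { $0.supportedColorSpaces.contains(.appleLog) }) {
        device.activeFormat = logFormat       // typically a 422 format on this device
        device.activeColorSpace = .appleLog   // this is where tracking stalls for me
    }
}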
I am working on capturing 48MP images using the iPhone 16 Pro Max with the Ultra-wide camera. I’ve updated the code to capture the maximum supported dimensions with the following snippet:
if #available(iOS 16.0, *) {
    photoOutput.maxPhotoDimensions = device.activeFormat.supportedMaxPhotoDimensions.last!
    photoSettings.maxPhotoDimensions = .init(width: 5712, height: 4284)
}
However, I’m still not getting the expected results. My goal is to capture 48MP images, and I want to confirm if the Ultra-wide camera supports this resolution or if I’m missing any other configuration.
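One thing I'm double-checking, sketched below: deriving the dimensions from what the format actually reports instead of hard-coding them, since 5712 x 4284 is roughly 24MP rather than 48MP (48MP would be 8064 x 6048):

import AVFoundation

func configureForMaxResolution(device: AVCaptureDevice,
                               photoOutput: AVCapturePhotoOutput,
                               photoSettings: AVCapturePhotoSettings) {
    // Take the largest dimensions the active format advertises.
    guard let maxDims = device.activeFormat.supportedMaxPhotoDimensions.last else { return }
    photoOutput.maxPhotoDimensions = maxDims
    photoSettings.maxPhotoDimensions = maxDims
    print("Requesting \(maxDims.width)x\(maxDims.height)")
}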
Any guidance would be appreciated!
On iPhone 16 and earlier, orientation is applied to the UIImage or CIImage, but not on iPhone 17.
The camera is front-facing, and it uses Vision to capture facial images.
Thanks for your help.
Our app plays TS files on an iPhone.
The app fragments the TS files, creates an M3U8 playlist, converts them to HLS (HTTP Live Streaming), and then uses AVPlayer to play the video content.
On a device running iOS 26, after starting playback and seeking, restarting playback causes the video and audio to be out of sync (by about 2-3 seconds depending on the situation).
This also occurs on iPadOS/macOS 26.
This issue was not observed prior to iOS 18.
We are trying to fix this issue on the app side, but we have the following questions:
The behavior of AVPlayer seems to differ between iOS 26 and previous versions. Has there been a change that could explain this, or is it a bug?
We tried pausing before seeking, but it didn’t seem to have any effect (see the sketch after this list). Are there any APIs or workarounds that can improve this?
We would appreciate it if you could tell us any other helpful documents or URLs.
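This is the pause-then-seek variant we tried (a sketch; player is our existing AVPlayer):

import AVFoundation

func seekPrecisely(_ player: AVPlayer, to seconds: Double) {
    let target = CMTime(seconds: seconds, preferredTimescale: 600)
    player.pause()
    // Zero tolerance, and only resume once the seek has actually completed.
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero) { finished in
        if finished {
            player.play()
        }
    }
}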
I want to use both the front ultra-wide and TrueDepth cameras on an iPad that has a front ultra-wide camera.
First, I used only the front builtInDualCamera via AVFoundation and tried every format available for builtInDualCamera, but no format could capture ultra-wide.
Second, I tried using both the front builtInDualCamera and builtInUltraWideCamera, but there was no combination that allowed builtInUltraWideCamera and builtInDualCamera to be used together.
Is there any way to do this?
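For reference, a small diagnostic sketch to see which front-facing device types the iPad exposes at all:

import AVFoundation

func logFrontDevices() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera,
                      .builtInDualCamera, .builtInTrueDepthCamera],
        mediaType: .video,
        position: .front)
    for device in discovery.devices {
        print(device.localizedName, "-", device.deviceType.rawValue)
    }
}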
In iOS 26, when we download any DRM content for the first time it downloads again; and when we edit the audio and video quality and start downloading, the complete app freezes. It neither crashes nor gives any error.
Hello everyone,
I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store the photos in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy.
The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled"
/**
@property maxPhotoDimensions
@abstract
Indicates the maximum resolution of the requested photo.
@discussion
Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].
Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
*/
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions
(btw. this note is not present in the docs https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)
Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data.
According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.
To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way.
https://developer.apple.com/videos/play/wwdc2023/10105/?time=799
This seems to create a hard dependency on the Photo Library for accessing 24MP images.
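For clarity, this is the flow as we understand it from the documentation and the session (a sketch; the class name is ours):

import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }
        // Per the WWDC session, the proxy has to go into the photo library,
        // which then performs the final 24MP processing in the background.
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: proxyData, options: nil)
        }, completionHandler: nil)
    }
}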
My question is:
Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary?
For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library?
Our goal is to follow Apple's privacy-first principles by avoiding requesting a PHPhotoLibrary authorization when our app's core function doesn't require access to the user's photo collection.
Thank you for your time and any clarification you can provide.
I'm receiving output from an AVCaptureSession and capturing an image using Vision, but the image comes out in landscape orientation instead of portrait.
Even when I set the orientation to up on the CIImage, CGImage, and UIImage, the image is still in landscape orientation.
On iPhone 16 and earlier, the image comes out in portrait orientation.
But on iPhone 17 and later, the image comes out in landscape orientation.
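One thing I'm experimenting with, shown below as a sketch (not a confirmed fix): setting the connection's rotation explicitly on iOS 17+ instead of relying on image orientation:

import AVFoundation

func applyPortraitRotation(to output: AVCaptureVideoDataOutput) {
    if let connection = output.connection(with: .video),
       connection.isVideoRotationAngleSupported(90) {
        // 90 degrees corresponds to portrait for the front camera stream.
        connection.videoRotationAngle = 90
    }
}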
Please help.
Hi, as other threads have already discussed, I'd like to record audio from a keyboard extension.
The keyboard has been granted both full access and microphone access. Nonetheless, whenever I attempt to start a recording from my keyboard, it fails to start with the following error:
Recording failed to start: Error Domain=com.apple.coreaudio.avfaudio Code=561145187 "(null)" UserInfo={failed call=err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)}
This is the code I am using:
import Foundation
import AVFoundation

protocol AudioRecordingServiceDelegate: AnyObject {
    func audioRecordingDidStart()
    func audioRecordingDidStop(withAudioData: Data?)
    func audioRecordingPermissionDenied()
}

class AudioRecordingService {
    weak var delegate: AudioRecordingServiceDelegate?
    private var audioEngine: AVAudioEngine?
    private var audioSession: AVAudioSession?
    private var isRecording = false
    private var audioData = Data()
    private let targetFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                             sampleRate: 16000,
                                             channels: 1,
                                             interleaved: false)!

    private func setupAudioSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .spokenAudio,
                                options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
        try session.setPreferredIOBufferDuration(0.005)
        try session.setActive(true, options: .notifyOthersOnDeactivation)
        audioSession = session
    }

    func checkMicrophonePermission(completion: @escaping (Bool) -> Void) {
        switch AVAudioApplication.shared.recordPermission {
        case .granted:
            completion(true)
        case .denied:
            delegate?.audioRecordingPermissionDenied()
            completion(false)
        case .undetermined:
            AVAudioApplication.requestRecordPermission { [weak self] granted in
                if !granted {
                    self?.delegate?.audioRecordingPermissionDenied()
                }
                completion(granted)
            }
        @unknown default:
            delegate?.audioRecordingPermissionDenied()
            completion(false)
        }
    }

    func toggleRecording() {
        if isRecording {
            stopRecording()
        } else {
            checkMicrophonePermission { [weak self] granted in
                if granted {
                    self?.startRecording()
                }
            }
        }
    }

    private func startRecording() {
        guard !isRecording else { return }
        do {
            try setupAudioSession()
            audioEngine = AVAudioEngine()
            guard let engine = audioEngine else { return }
            let inputNode = engine.inputNode
            let inputFormat = inputNode.inputFormat(forBus: 0)
            audioData.removeAll()

            guard let converter = AVAudioConverter(from: inputFormat, to: targetFormat) else {
                print("Failed to create audio converter")
                return
            }

            inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { [weak self] buffer, _ in
                guard let self = self else { return }
                let frameCount = AVAudioFrameCount(Double(buffer.frameLength) * 16000.0 / buffer.format.sampleRate)
                guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: self.targetFormat,
                                                          frameCapacity: frameCount) else { return }
                outputBuffer.frameLength = frameCount

                var error: NSError?
                converter.convert(to: outputBuffer, error: &error) { _, outStatus in
                    outStatus.pointee = .haveData
                    return buffer
                }

                if error == nil, let channelData = outputBuffer.int16ChannelData {
                    let dataLength = Int(outputBuffer.frameLength) * 2
                    let data = Data(bytes: channelData.pointee, count: dataLength)
                    self.audioData.append(data)
                }
            }

            engine.prepare()
            try engine.start()
            isRecording = true
            delegate?.audioRecordingDidStart()
        } catch {
            print("Recording failed to start: \(error)")
            stopRecording()
        }
    }

    private func stopRecording() {
        audioEngine?.inputNode.removeTap(onBus: 0)
        audioEngine?.stop()
        isRecording = false
        let finalData = audioData
        audioData.removeAll()
        delegate?.audioRecordingDidStop(withAudioData: finalData)
        try? audioSession?.setActive(false, options: .notifyOthersOnDeactivation)
    }

    deinit {
        if isRecording {
            stopRecording()
        }
    }
}
Granting the deprecated "Inter-App Audio" capability did not solve the problem either.
Is recording audio from a keyboard extension even possible in general? If so, how do I fix it?
Related threads:
https://developer.apple.com/forums/thread/108055
https://developer.apple.com/forums/thread/742601
(This only started happening as of Xcode 26.)
I know macOS and watchOS don't support this property, but all other platforms do (did?) up until I upgraded Xcode. Now when I compile I get this:
Value of type 'AVPlayerItem' has no member 'externalMetadata'
Hi, I downloaded and ran https://developer.apple.com/documentation/realitykit/rendering-stereoscopic-video-with-realitykit
and noticed that memory usage grows linearly.
I replaced the sample video with a different 8K side-by-side video, and the app crashed almost immediately due to a memory leak.
It looks like the culprit is the makeMutablePixelBuffer() function: the allocated pixel buffers are not recycled after being used.
The screenshot is from a physical device.
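In case it helps, the general pattern I'd expect here is to allocate frames from a CVPixelBufferPool so they get recycled (a sketch of the pattern, not the sample's actual code):

import CoreVideo

func makePixelBufferPool(width: Int, height: Int) -> CVPixelBufferPool? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey: width,
        kCVPixelBufferHeightKey: height,
        kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
    ]
    var pool: CVPixelBufferPool?
    CVPixelBufferPoolCreate(nil, nil, attrs as CFDictionary, &pool)
    return pool
}

func dequeueBuffer(from pool: CVPixelBufferPool) -> CVPixelBuffer? {
    // Buffers created from the pool are returned to it when released.
    var buffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
    return buffer
}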
I tried to modify the AVCam sample code by copying the code from the smart framing monitor section here: https://developer.apple.com/documentation/avfoundation/adopting-smart-framing-in-your-camera-app#Configure-the-smart-framing-monitor
I can confirm that the activeFormat supports smart framing, but the supported frames in the monitor are always nil.
In another project of mine the monitor does have a supported value, but the observation is never triggered; I then tried printing the recommended frame repeatedly, and it is always nil.
Could the engineers embed this code into the AVCam sample rather than posting a few code snippets?
Hello,
Starting in iOS 17, our application started having issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session.
It also fails silently, without reporting any problems to the app. We only notice that no frames are being rendered or sent to the remote devices.
Here's what shows up in the console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below.
Thanks
Hello everyone,
I'm looking for a definitive clarification on how to completely disable all video stabilization, including the hardware OIS, using AVFoundation. The goal is to achieve a completely raw, unstabilized video feed, which is crucial when using external equipment like gimbals to avoid conflicting stabilization motions.
My research points to using the AVCaptureConnection property preferredVideoStabilizationMode and setting it to AVCaptureVideoStabilizationMode.off.
The documentation for the .off case states:
A mode that doesn’t stabilize video capture.
This description is slightly ambiguous. It's unclear whether this only affects software-level stabilization (EIS, EIS+OIS, etc.) or whether it guarantees the complete deactivation of the physical OIS module. For professional video applications, this is a critical distinction.
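For reference, this is the kind of configuration we're applying (a sketch; the connection comes from our existing movie file output):

import AVFoundation

func disableStabilization(on connection: AVCaptureConnection, device: AVCaptureDevice) {
    if connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
    // What we can observe afterwards is only the reported stabilization mode;
    // whether the physical OIS module is also disengaged is exactly what's unclear.
    print("Active mode:", connection.activeVideoStabilizationMode.rawValue)
    print("Format supports .off:",
          device.activeFormat.isVideoStabilizationModeSupported(.off))
}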
So, I'd like to ask the community:
Has anyone been able to definitively confirm that setting preferredVideoStabilizationMode to .off also disables the hardware OIS? Are there any known tests or documentation that prove this behavior?
Is there an alternative or more direct method to ensure the OIS module is physically inactive during video capture?
What is the community's best practice for ensuring absolutely no stabilization is applied to the video pipeline?
Any insights or shared experiences on this topic would be greatly appreciated.
Thank you!