Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic

Handling AVAudioEngine Configuration Change
Hi all, I have been quite stumped on this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in. Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine. Once an AVAudioPlayerNode gets stopped this way (or any other way), all samples that were scheduled on it are purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played. Is there a reliable way to know whether a sample needs to be rescheduled after a player has been reset? I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output; all input goes through a separate engine for simplicity. Currently I store a queue of samples as they are sent to the AVAudioPlayerNode for playback, and in each completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them. Thanks for any feedback!
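For context, a minimal sketch of one recovery pattern along the lines the post describes: restart the engine on AVAudioEngineConfigurationChange and reschedule whatever the queue still holds. The PlaybackEngine type and its pendingBuffers queue are illustrative names, not from the post:

import AVFoundation

final class PlaybackEngine {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private var pendingBuffers: [AVAudioPCMBuffer] = []   // scheduled but not yet confirmed played
    private var observer: NSObjectProtocol?

    init(format: AVAudioFormat) throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        // Posted when a route change forces the engine to reconfigure;
        // attached nodes stop and their scheduled buffers are purged.
        observer = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            self?.recoverFromConfigurationChange()
        }
        try engine.start()
        player.play()
    }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        pendingBuffers.append(buffer)
        player.scheduleBuffer(buffer) { [weak self] in
            DispatchQueue.main.async {
                guard let self else { return }
                // The completion fires even for purged buffers, so only count
                // the buffer as consumed if the player is still playing.
                if self.player.isPlaying, !self.pendingBuffers.isEmpty {
                    self.pendingBuffers.removeFirst()
                }
            }
        }
    }

    private func recoverFromConfigurationChange() {
        try? engine.start()             // the engine stops itself on reconfiguration
        let stranded = pendingBuffers   // everything not confirmed played
        pendingBuffers.removeAll()
        stranded.forEach(schedule)      // reschedule in original order
        player.play()
    }
}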
3 replies · 0 boosts · 951 views · Oct ’25
Telephoto Lens Keeps Switching to Other Lenses on iPhone 16 Pro Max During PPG (Finger on Camera)
Hi, I’m building a PPG-based heart rate feature where the user places their finger over the rear telephoto camera. On iPhone 16 Pro Max, I'm explicitly selecting the telephoto lens like this:

videoDevice = AVCaptureDevice.default(.builtInTelephotoCamera, for: .video, position: .back)

And trying to lock it:

if #available(iOS 15.0, *), device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
    try? device.lockForConfiguration()
    device.setPrimaryConstituentDeviceSwitchingBehavior(.locked, restrictedSwitchingBehaviorConditions: [])
    device.unlockForConfiguration()
}

I also lock everything else to prevent dynamic changes:

try device.lockForConfiguration()
device.focusMode = .locked
device.exposureMode = .locked
device.whiteBalanceMode = .locked
device.videoZoomFactor = 1.0
device.automaticallyEnablesLowLightBoostWhenAvailable = false
device.automaticallyAdjustsVideoHDREnabled = false
device.unlockForConfiguration()

Despite this, the camera still switches to another lens, especially under different lighting, even though the user’s finger fully covers the lens. Questions:

1. How can I completely prevent lens switching in this scenario?
2. Would using videoZoomFactor = 3.0 or 5.0 better enforce use of the telephoto lens?

Thanks! Gal
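One alternative worth testing, on the assumption that constituent switching is governed by the virtual device rather than the physical one (a sketch, not a confirmed fix): select .builtInTripleCamera, zoom to the telephoto's switch-over factor, and lock the primary constituent there:

import AVFoundation

func lockOntoTelephotoViaVirtualDevice() throws -> AVCaptureDevice? {
    guard let device = AVCaptureDevice.default(.builtInTripleCamera,
                                               for: .video, position: .back) else { return nil }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // The last switch-over factor is where the virtual device hands off
    // to the telephoto constituent (e.g. 5.0 on recent Pro models).
    if let telephotoFactor = device.virtualDeviceSwitchOverVideoZoomFactors.last {
        device.videoZoomFactor = CGFloat(telephotoFactor.doubleValue)
    }
    if device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
        device.setPrimaryConstituentDeviceSwitchingBehavior(
            .locked, restrictedSwitchingBehaviorConditions: [])
    }
    if device.isFocusModeSupported(.locked) { device.focusMode = .locked }
    if device.isExposureModeSupported(.locked) { device.exposureMode = .locked }
    return device
}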
3 replies · 0 boosts · 196 views · Jul ’25
Mac OS Tahoe 26.0 (25A354) Sound Glitches When opening the simulator app
Hey there, I just upgraded to macOS Tahoe 26.0 (25A354) on an Apple MacBook Pro 2019 16-inch. I am using IntelliJ IDEA and Flutter to develop a mobile app, which I test on the Simulator app running iOS 18.4. The issue: when I start the Simulator app (both during the loading phase and while it is running), the audio from an already-open YouTube tab in Safari glitches and becomes noise (this happens in Chrome as well). A fix I found online is to kill the audio daemon on macOS with the command "sudo killall coreaudiod". This kills the audio process (while the Simulator is running); macOS then restarts the audio daemon, and the audio works fine alongside the open Simulator. I just want to ask: is there a permanent fix for this? Is Apple working on a fix in an upcoming update?
3 replies · 5 boosts · 1.3k views · Oct ’25
Recurring FigXPCUtilities / FigCaptureSourceRemote err=-17281 logs when using AVCaptureVideoDataOutput on iOS 26.x
Hi everyone, I’m seeing recurring internal AVFoundation camera logs on iOS 26.2 and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline. These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.

<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

To rule out issues caused by my own code, I had GPT create a minimal SwiftUI example from scratch. Even in this clean, minimal setup, the same logs appear on iOS 26.2; the exact same logic did not produce these logs on iOS 18.x. My primary interest is real-time processing of the video frames delivered by the camera (via AVCaptureVideoDataOutput) for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview. Thanks in advance for any insight. Example Code
3 replies · 2 boosts · 794 views · 4d
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15?
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15? If so, are there any specific hardware or software requirements to enable this functionality?
3 replies · 1 boost · 189 views · Apr ’25
Capturing 48mp photos with .builtInTripleCamera
I am able to capture 48 MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12 MP. Is there a way to capture 48 MP photos using .builtInTripleCamera? .builtInTripleCamera provides a smooth transition between cameras while zooming, and I'd like to keep that behavior. The new iPhone 17 Pro has all of its cameras at 48 MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48 MP, or is this an API limitation?
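As a diagnostic, a sketch that simply asks the active format what it supports and opts into the largest dimensions; whether the triple camera's formats ever report 48 MP is exactly the open question here:

import AVFoundation

func maxResolutionSettings(device: AVCaptureDevice,
                           photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // Largest photo dimensions the current active format claims to support.
    let supported = device.activeFormat.supportedMaxPhotoDimensions
    if let largest = supported.max(by: { $0.width * $0.height < $1.width * $1.height }) {
        photoOutput.maxPhotoDimensions = largest   // opt the output in first
    }
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = photoOutput.maxPhotoDimensions
    return settings
}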
3 replies · 0 boosts · 448 views · Sep ’25
AVSpeechSynthesizer reads Mandarin as Cantonese (iOS 26 beta 3)
In iOS 26, AVSpeechSynthesizer reads Mandarin with Cantonese pronunciation. No matter how I set the language, or change my phone's system settings, it doesn't work.

let utterance = AVSpeechUtterance(string: "你好啊")
//let voice = AVSpeechSynthesisVoice(language: "zh-CN") // doesn't work
let voice = AVSpeechSynthesisVoice(language: "zh-Hans") // doesn't work either
utterance.voice = voice
let synth = AVSpeechSynthesizer()
synth.speak(utterance)
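A possible workaround to test (an assumption, not a confirmed fix): pick a Mandarin voice explicitly from the installed voices rather than by language tag alone:

import AVFoundation

let synthesizer = AVSpeechSynthesizer()   // keep a strong reference for the duration of speech

func speakMandarin(_ text: String) {
    // Enumerate installed voices and pick a Mandarin one explicitly
    // (Cantonese voices report "zh-HK"; Mandarin ones report "zh-CN" or "zh-TW").
    let mandarin = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language == "zh-CN" }
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = mandarin ?? AVSpeechSynthesisVoice(language: "zh-CN")
    synthesizer.speak(utterance)
}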
3 replies · 0 boosts · 592 views · 2w
AVPlayer unpredictable range requests on iOS when streaming *.mov file
Hi all, I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor. On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback. One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
3 replies · 2 boosts · 558 views · Apr ’25
AVAudioSession.outputVolume not reporting correctly in iOS 18+ devices
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume. However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer. What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations. Disabling and re-enabling the audio session is essential to how my application functions. I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
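A minimal sketch of the pattern being described, for anyone trying to reproduce it:

import AVFoundation

final class VolumeWatcher {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?

    func activate() throws {
        try session.setActive(true)
        // On iOS 18.x this reportedly returns the value cached before the
        // last deactivation, not the device's current volume.
        print("outputVolume after setActive(true):", session.outputVolume)
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            // Fires only once the user moves the volume via the hardware
            // buttons or Control Center, which then corrects the value.
            print("observed outputVolume:", change.newValue ?? -1)
        }
    }

    func deactivate() throws {
        observation = nil
        try session.setActive(false)
    }
}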
3 replies · 1 boost · 332 views · Feb ’26
Configuring capture pipeline with ProResRAW codec
I am unable to find any clear-cut documentation on configuring an AVCaptureSession pipeline to capture video with the ProRes RAW codec type, which is a 16-bit format. Is it supported only with AVCaptureMovieFileOutput, or can AVCaptureVideoDataOutput emit 16-bit sample buffers that can be vended to AVAssetWriter?
3 replies · 0 boosts · 1.2k views · Feb ’26

CoreMIDI: neither syslog nor unified logging works
Hi, macOS (the latest macOS on the latest hardware, but it doesn't matter) seems to prevent CoreMIDI driver logging with standard logging procedures (syslog, unified logging). The only way to log anything is writing to a file at one of the rare write-accessible locations for CoreMIDI. How is this supposed to work? Any hint is highly appreciated. Thanks!
3 replies · 0 boosts · 339 views · Oct ’25
Is Photo Library access mandatory for 24MP Deferred Photo Capture?
Hello everyone, I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24 MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store them in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy. The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled":

/**
 @property maxPhotoDimensions
 @abstract Indicates the maximum resolution of the requested photo.
 @discussion
    Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].
    Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
 */
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions

(By the way, this note is not present in the online docs: https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)

Enabling autoDeferredPhotoDeliveryEnabled means that for a 24 MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data. According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.

"To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way." (https://developer.apple.com/videos/play/wwdc2023/10105/?time=799)

This seems to create a hard dependency on the Photo Library for accessing 24 MP images. My question is: Is there any way to receive the final, processed 24 MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary? For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library? Our goal is to follow Apple's privacy-first principles by avoiding a PHPhotoLibrary authorization request when our app's core function doesn't require access to the user's photo collection. Thank you for your time and any clarification you can provide.
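For reference, a sketch of the library-mediated flow the post describes; as far as the cited documentation and WWDC session go, the proxy delegate is the only handoff point, and it is designed to feed PHPhotoLibrary:

import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil, let proxy = deferredPhotoProxy,
              let data = proxy.fileDataRepresentation() else { return }
        // The proxy's data is an intermediate, not the final 24 MP image;
        // Photos performs the deferred processing after this request.
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: data, options: nil)
        }, completionHandler: nil)
    }
}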
3 replies · 3 boosts · 610 views · Sep ’25
Background Upload Extension
Hello, we are trying to use the new Background Upload Extension to improve uploads of assets (photos, Live Photos, videos) in the background in our application.

1 - We implemented the code to upload still images. We would like to do the same for Live Photos, but we are not sure how to proceed, since a Live Photo is composed of two resources that we would like to upload in the same job so that we keep the association between them. Steps: a Live Photo is captured on the device; our application is notified of new content in the Photo Library and queues the asset for upload; the system calls the background upload extension and the Live Photo is prepared for upload, but we can schedule a job for only one resource (still photo or paired video), so we cannot upload both resources in a single job.

2 - Is there a way to synchronise the application and the extension so that the application does not process the same data or execute the same requests as the extension? For example, our application works with tokens, and we would like to prevent those tokens from being consumed by the application and the extension at the same time.

3 - Is there a command in Xcode or Terminal to start or stop the extension, similar to what exists for processing tasks (https://developer.apple.com/documentation/backgroundtasks/starting-and-terminating-tasks-during-development)?
3 replies · 1 boost · 635 views · Feb ’26
iOS 17 camera capture assertions and issues
Hello, starting in iOS 17, our application started having issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session. It also fails silently, without reporting any problems to the app; we only notice that no frames are being rendered or sent to the remote devices. Here's what shows up in the console:

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)

Is anyone else seeing this? Any idea what the cause could be? Our sessions work perfectly on iOS 16 and below. Thanks
3 replies · 1 boost · 1.4k views · Oct ’25
What changes were made to the VideoToolbox HEVC encoder in iOS 26?
Because I want to control the grid size and number of HEIC tiles myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously, I used VTCompressionSession to accomplish this, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it generated correct HEVC encodings, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have had decoding issues. However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator — it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct. From my troubleshooting, I suspect that the encoding behavior of VTCompressionSession changed in iOS 26, which makes the final hvc1 encoding I pass in incorrect. I create the VTCompressionSession with the following configuration:

var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)

let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) { VTCompressionSessionInvalidate(newSession) }

Then I use the following code to encode each grid tile of the image:

let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)

If I try to display the abnormal image in the app, my console prints the following errors, so the failure presumably occurs during decoding:

createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray

To emphasize again: this code used to work, and the issue only occurs on iOS 26 physical devices. I noticed that iOS 26 introduced many new properties, but I’m not sure whether any of them must be set on the new system, and there’s no information about this in the official documentation.
3 replies · 0 boosts · 609 views · Sep ’25
AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture: behavior is different with iPhone 17
The front-facing camera on iPhone 16 (and every previous model) gives the following values for AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture:

90 degrees — portrait
180 degrees — landscape left
270 degrees — upside-down
0 degrees — landscape right

Using these values a transform is calculated:

var transform: CGAffineTransform {
    let degrees = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
    let radians = degrees * .pi / 180.0
    return CGAffineTransform(rotationAngle: radians)
}

It is then applied to the AVAssetWriterInput:

videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings, sourceFormatHint: videoFormatDescription)
videoInput.transform = transform

This ensures the correct transform is added to the metadata so that the recorded video plays back in the correct orientation. However, on the iPhone 17 Pro and iPhone 17 Pro Max front-facing cameras, videoRotationAngleForHorizonLevelCapture returns different values:

0 degrees — portrait
90 degrees — landscape left
180 degrees — upside-down
270 degrees — landscape right

So this approach breaks down, and the video orientation is incorrect. How is this intended to be handled?
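One device-agnostic alternative worth testing (a sketch, not confirmed guidance): apply the coordinator's angle to the capture connection so buffers arrive already rotated, instead of baking an absolute angle into the writer's transform:

import AVFoundation

func applyRotation(from coordinator: AVCaptureDevice.RotationCoordinator,
                   to connection: AVCaptureConnection) {
    // Whatever convention the sensor uses, the coordinator's angle is defined
    // relative to it, so handing the angle back to the connection should be
    // self-consistent across models.
    let angle = coordinator.videoRotationAngleForHorizonLevelCapture
    if connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
}

The coordinator's angle is KVO-observable, so this would need to be re-applied whenever it changes.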
2 replies · 0 boosts · 707 views · Jan ’26
Any way to trigger cameraLensSmudgeDetectionStatus to change?
I'm looking to implement UI that tells the user to clean their lens in our app. I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having trouble reliably triggering it, both in our app and in the main Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this (or the LiDAR sensor) might be smart enough to detect that something is close to the lens. Is there any way to trigger this change, similar to how we can trigger thermal-state changes in debug? Thanks.
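For completeness, a sketch of the observation itself, using only the property named in the post (this doesn't answer the triggering question; I'm not aware of a debug override analogous to forcing thermal state):

import AVFoundation

final class SmudgeObserver {
    private var observation: NSKeyValueObservation?

    func start(on device: AVCaptureDevice) {
        observation = device.observe(\.cameraLensSmudgeDetectionStatus,
                                     options: [.initial, .new]) { device, _ in
            // Surface the status to the UI layer; a real app would map the
            // enum cases to a "clean your lens" banner.
            print("smudge detection status:", device.cameraLensSmudgeDetectionStatus.rawValue)
        }
    }
}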
2 replies · 0 boosts · 445 views · Jan ’26
How to disable/hide Audio Controls on lock screen from WKWebView
Hi, I am trying to remove the audio controls for my app on the lock screen. I use WKWebView, with three audio tags in my HTML that I play and pause via JS. If I have not played any sound since app launch, there are no audio controls on the lock screen. But as soon as I play one of those three files (they are short sound effects of less than 3 seconds, e.g. for buttons), the audio controls appear on the lock screen. Even when the sounds are paused or not playing, they remain listed on the lock screen. What I have tried so far, without success:

MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]

and

try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(false, options: .notifyOthersOnDeactivation)

and

UIApplication.shared.endReceivingRemoteControlEvents()

Another problem is that the app scales with the iOS system setting "Display Zoom". Is there a way to opt out of that? This is on the latest Xcode (16.3) and iOS 18. I have no background modes in my capabilities. Nothing has worked so far. Does anyone have an idea? Greetings
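One more avenue worth trying — an assumption on my part, not a verified fix: short UI sound effects played under the .ambient category normally don't claim playback focus, so the app shouldn't appear in Now Playing. Whether WKWebView's internal players respect the session category is worth testing:

import AVFoundation

func configureSessionForUISoundsOnly() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .ambient marks the audio as non-primary: it mixes with other apps,
        // is silenced by the ring switch, and doesn't register as "now playing".
        try session.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        print("audio session configuration failed:", error)
    }
}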
2 replies · 0 boosts · 135 views · May ’25
AVPlayerView: internal constraint conflicts
I’m getting Auto Layout constraint conflict warnings related to AVPlayerView in my project. I’ve reproduced the issue on macOS Tahoe 26.2. The conflict appears to originate inside AVPlayerView itself, between its internal subviews, rather than in my own layout code. The issue can be reproduced in an empty project by simply adding an AVPlayerView as a subview:

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        view.addSubview(playerView)
    }
}

After presenting that view controller, the following Auto Layout warnings appear in the console:

Conflicting constraints detected: <decode: bad range for [%@] got [offs:346 len:1057 within:0]>. Will attempt to recover by breaking <decode: bad range for [%@] got [offs:1403 len:81 within:0]>.
Unable to simultaneously satisfy constraints:
(
    "<NSLayoutConstraint:0xb33c29950 H:|-(0)-[AVDesktopPlayerViewContentView:0x10164dce0](LTR) (active, names: '|':AVPlayerView:0xb32ecc000 )>",
    "<NSLayoutConstraint:0xb33c299a0 AVDesktopPlayerViewContentView:0x10164dce0.right == AVPlayerView:0xb32ecc000.right (active)>",
    "<NSAutoresizingMaskLayoutConstraint:0xb33c62850 h=--& v=--& AVPlayerView:0xb32ecc000.width == 0 (active)>",
    "<NSLayoutConstraint:0xb33d46df0 H:|-(0)-[AVEventPassthroughView:0xb33cfb480] (active, names: '|':AVDesktopPlayerViewContentView:0x10164dce0 )>",
    "<NSLayoutConstraint:0xb33d46e40 AVEventPassthroughView:0xb33cfb480.trailing == AVDesktopPlayerViewContentView:0x10164dce0.trailing (active)>",
    "<NSLayoutConstraint:0xb33ef8320 NSGlassView:0xb33ed8c00.trailing == AVEventPassthroughView:0xb33cfb480.trailing - 6 (active)>",
    "<NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>",
    "<NSLayoutConstraint:0xb33ef84b0 NSGlassView:0xb33ed8c00.leading >= AVEventPassthroughView:0xb33cfb480.leading + 6 (active)>"
)
Will attempt to recover by breaking constraint <NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>
Set the NSUserDefault NSConstraintBasedLayoutVisualizeMutuallyExclusiveConstraints to YES to have -[NSWindow visualizeConstraints:] automatically called when this happens. And/or, set a symbolic breakpoint on LAYOUT_CONSTRAINTS_NOT_SATISFIABLE to catch this in the debugger.

Is this a system bug, or does someone know how to fix it? Thank you.
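Since the unsatisfiable set includes an autoresizing-mask constraint (AVPlayerView.width == 0), a workaround worth trying is giving the view explicit constraints instead of relying on the translated autoresizing mask — a sketch:

import AppKit
import AVKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        // Avoid the zero-size autoresizing constraints the log complains about.
        playerView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(playerView)
        NSLayoutConstraint.activate([
            playerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerView.topAnchor.constraint(equalTo: view.topAnchor),
            playerView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
        ])
    }
}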
2 replies · 0 boosts · 808 views · Jan ’26
PHImageManager.requestImageDataAndOrientation callback is never called
I occasionally receive reports from users that photo import from the Photos library gets stuck and the progress appears to stop indefinitely. I’m using the following APIs:

func fetchAsset(_ asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .exact
    options.isSynchronous = false
    options.isNetworkAccessAllowed = true
    options.progressHandler = { (progress, error, stop, info) in
        // 🚨 never called
    }
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset,
        options: options
    ) { data, _, _, info in
        // 🚨 never called
    }
}

Due to repeated reports, I added detailed logs inside the callback closures. Based on the logs, the request keeps waiting without any callbacks being invoked — neither the progressHandler nor the completion block of requestImageDataAndOrientation is called. This happens not only with the PHImageManager approach, but also when using PHAsset with PHContentEditingInputRequestOptions — that completion callback is not invoked either.

func fetchAssetByContentEditingInput(_ asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true
    asset.requestContentEditingInput(with: nil) { contentEditingInput, info in
        // 🚨 never called
    }
}

I suspect this is related to iCloud Photos. Here is what I confirmed with affected users:

1. Using the native picker, the iCloud download proceeds normally and the photo can be attached. Using the PHImageManager-based approach in my app, the same photo cannot be attached.
2. Even after verifying that the photo has been fully downloaded from iCloud (e.g., by trying “Export Unmodified Originals” in the Photos app as described at https://support.apple.com/en-us/111762 and confirming the iCloud download completed), the callback is still not invoked for that asset.

Detailed flow for (1):
• I asked the user to attach the problematic photo (the one where callbacks never fire) using the native photo picker (UIImagePickerController).
• The UI showed “Downloading from iCloud” progress.
• The progress advanced and the photo was attached successfully.
• Then I asked the user to attach the same photo again using my custom photo picker (which uses the PHImageManager APIs above).
• The progress did not advance, no callbacks were invoked, and the operation waited indefinitely.

Workaround / current behavior:
• If I ask users to force-quit and relaunch the app and try again, about 6 out of 10 users can attach successfully afterward.
• The remaining ~4 out of 10 still cannot attach even after relaunching.
• For users who are not fixed immediately after relaunching, the problem seems to resolve on its own after some time.

I’ve seen similar reports elsewhere, so I’m wondering whether Apple is already aware of an internal issue related to this. If there is any known information, guidance, or recommended workaround, I would appreciate it.

I also logged the properties (metadata) of affected PHAssets when the issue occurs; sharing them here in case it helps troubleshooting:

[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
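A client-side guard some apps use while the underlying cause is unknown — a sketch, with the 30-second window an arbitrary assumption: cancel the request if no callback fires in time, so the UI can fail visibly instead of hanging:

import Photos

func fetchImageData(for asset: PHAsset,
                    timeout: TimeInterval = 30,
                    completion: @escaping (Data?) -> Void) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.isNetworkAccessAllowed = true

    var finished = false
    let requestID = PHImageManager.default().requestImageDataAndOrientation(
        for: asset, options: options
    ) { data, _, _, _ in
        DispatchQueue.main.async {
            guard !finished else { return }
            finished = true
            completion(data)
        }
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        guard !finished else { return }
        finished = true
        PHImageManager.default().cancelImageRequest(requestID)
        completion(nil)   // surface the stall to the caller instead of waiting forever
    }
}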
2 replies · 0 boosts · 381 views · Jan ’26