Fundamentally, my question is: is there a known transform I can apply to a given (pixel) position (passed into a Metal fragment function) to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?
My goal is to highlight people in a Vision Pro app using Compositor Services.
To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:
minFrameDuration: 0.03
maxFrameDuration: 0.033333335
frameSize: (1920.0, 1080.0)
pixelFormat: 875704422
cameraType: main
cameraPositions: [left, right]
cameraRectification: mono
From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafeBuffer ==> CVPixelBuffer).
I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:
// NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake.
var request = VNGeneratePersonSegmentationRequest()
request.qualityLevel = .balanced
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
...
let lHandler = VNSequenceRequestHandler()
let rHandler = VNSequenceRequestHandler()
...
func processBuffers() async {
try lHandler.perform([request], on: lBuffer)
guard let lMask = request.results?.first?.pixelBuffer else {...}
try rHandler.perform([request], on: rBuffer)
guard let rMask = request.results?.first?.pixelBuffer else {...}
appModel.latestPersonMasks = (lMask, rMask)
}
I store the two resulting CVPixelBuffers in my appModel. For both of these buffers, i.e. the grayscale masks:
width (in pixels) = 512
height (in pixels) = 384
bytes per row = 512
plane count = 0
pixel format type = 1278226488
I am using Compositor Services to render my content in an Immersive Space. My implementation of Compositor Services is based on the sample code from Interacting with virtual content blended with passthrough.
Within Shaders.metal, the tint's fragment shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline).
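For reference, that conversion is the standard CVMetalTextureCache route. Here is a simplified sketch of it (the helper type and names are only for illustration; pairing kCVPixelFormatType_OneComponent8 with .r8Uint is my choice to match the texture2d<uint> parameters below, although .r8Unorm with texture2d<float> would be the filter-friendly pairing, since integer texture formats can't be linearly filtered):
// Sketch: wrap a CVMetalTextureCache and turn each 512x384 mask buffer into an MTLTexture.
import CoreVideo
import Metal
final class MaskTextureConverter {
    private let cache: CVMetalTextureCache
    init?(device: MTLDevice) {
        var cache: CVMetalTextureCache?
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
              let cache else { return nil }
        self.cache = cache
    }
    func texture(from mask: CVPixelBuffer) -> MTLTexture? {
        let width = CVPixelBufferGetWidth(mask)   // 512 for these masks
        let height = CVPixelBufferGetHeight(mask) // 384 for these masks
        var cvTexture: CVMetalTexture?
        guard CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, mask, nil,
                                                        .r8Uint, width, height, 0,
                                                        &cvTexture) == kCVReturnSuccess,
              let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
}
The tint fragment shader that receives those two mask textures: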
fragment float4 tintFragmentShader(
    TintInOut in [[stage_in]],
    ushort amp_id [[amplification_id]],
    texture2d<uint> leftMask [[texture(0)]],
    texture2d<uint> rightMask [[texture(1)]]
)
{
    if (in.color.a <= 0.0) {
        discard_fragment();
    }
    float2 uv;
    if (amp_id == 0) { // LEFT
        uv = ??????????????????????;
    } else { // RIGHT
        uv = ??????????????????????;
    }
    constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear);
    // Sample the PersonSegmentation grayscale mask
    float maskValue = 0.0;
    if (amp_id == 0) { // LEFT
        if (leftMask.get_width() > 0) {
            maskValue = leftMask.sample(linearSampler, uv).r;
        }
    } else { // RIGHT
        if (rightMask.get_width() > 0) {
            maskValue = rightMask.sample(linearSampler, uv).r;
        }
    }
    if (maskValue > 250) {
        return float4(1.0, 1.0, 1.0, 0.5);
    }
    return in.color;
}
I need to correctly sample the masks for a given fragment.
The LayerRenderer.Layout is set to .layered, which the developer documentation describes as "a layout that specifies each view’s content as a slice of a single texture."
Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56.
The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9).
The grayscale CVPixelBuffer returned by the VNPersonSegmentationRequest is 512x384 (4:3).
All of these aspect ratios are different.
My questions come down to: is there a known transform I can apply to a given (pixel) position to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?
Within the tint's vertex shader, after applying the modelViewProjectionMatrix, I have tried every variant I could find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct clip-space position (maybe = pixel space position.xy / (viewport size * 0.5)???) for sampling the grayscale masks, but nothing has worked. The "highlight" from the person segmentation is off: scaled a little too big, and offset a little too far up and off to the side.
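For reference, here is the bare math those attempts boil down to, written as a small Swift helper rather than shader code. It is purely a sketch under two assumptions: that the Vision mask's normalized coordinates span the full 16:9 camera frame, and that the camera image is mapped aspect-fill onto the per-eye render target. It ignores the cameras' intrinsics/extrinsics and rectification entirely:
import simd
// Pixel position in the per-eye render target (1888 x 1792 here) -> UV in [0, 1].
func targetUV(pixel: SIMD2<Float>, viewport: SIMD2<Float> = SIMD2(1888, 1792)) -> SIMD2<Float> {
    pixel / viewport
}
// Remap a render-target UV to a camera/mask UV, assuming the camera frame is
// center-cropped ("aspect fill") over the render target. cameraAspect is 16:9 because
// the 512x384 Vision mask is assumed to cover the full 1920x1080 frame.
func maskUV(from uv: SIMD2<Float>,
            targetAspect: Float = 1888.0 / 1792.0,
            cameraAspect: Float = 1920.0 / 1080.0) -> SIMD2<Float> {
    var result = uv
    if (cameraAspect > targetAspect) {
        // Camera frame is wider than the target: only a central horizontal band is visible.
        result.x = 0.5 + (uv.x - 0.5) * (targetAspect / cameraAspect)
    } else {
        // Camera frame is taller than the target: only a central vertical band is visible.
        result.y = 0.5 + (uv.y - 0.5) * (cameraAspect / targetAspect)
    }
    return result
}
Even if that math is right, my suspicion is that the leftover scale and offset come from the difference between the drawable's view projection and the physical cameras' projection, which no flat 2D remap like this can capture.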
In my app, I use the API provided by the Photos framework to delete specified photos.
But after upgrading to iOS 26, the delete function no longer works on some iOS devices.
The API never triggers the system confirmation dialog, and the completionHandler is never called.
In the iOS Photos app, deletion works correctly on the same assets, but calling the API from my app does not work.
Steps to Reproduce
Make sure the app has Full Photo Library Access.
Execute the following code:
PHPhotoLibrary.shared().performChanges({
let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
}, completionHandler: completionHandler)
Expected Behavior
The system should present a confirmation dialog asking the user to delete the selected photos.
After the user confirms, the deletion should occur, and the completionHandler should be called with success or error.
Actual Behavior
The system delete confirmation dialog does not appear.
The completionHandler is never called.
Environment
iOS Versions: 26.1 / 26.0.1
It looks like an API bug. I want to check whether this is a known issue and whether it will be fixed. Thanks.
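For reference, here is the minimal repro I'd attach to a bug report; it also logs the authorization state and the completion result, to rule out a silent permission problem (identifiers are placeholders):
import Photos
func deleteAssets(withLocalIdentifiers identifiers: [String]) {
    // Deletion requires read/write access; log the current state to rule that out.
    let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    print("Photo library authorization: \(status.rawValue)")
    PHPhotoLibrary.shared().performChanges({
        let assets = PHAsset.fetchAssets(withLocalIdentifiers: identifiers, options: nil)
        PHAssetChangeRequest.deleteAssets(assets)
    }, completionHandler: { success, error in
        // On the affected iOS 26 devices this never prints.
        print("deleteAssets finished, success: \(success), error: \(String(describing: error))")
    })
}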
I have the main app that saves preferences to UserDefaults.standard. So I have this one preference that the user is able to toggle - isRawOn
UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn")
Now, I have a LockedCameraCaptureExtension that needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on the next launch.
The main app and the extension run in separate containers, and the preferences are not shared for privacy reasons.
Apple mentions using the appContext of CameraCaptureIntent, but I'm not sure how the above scenario is possible through that, unless I am missing something.
Apple Reference
What I have for CameraCaptureIntent:
@available(iOS 18, *)
struct LaunchMyAppControlIntent: CameraCaptureIntent {
typealias AppContext = MyAppContext
static let title: LocalizedStringResource = "LaunchMyAppControlIntent"
static let description = IntentDescription("Capture photos with MyApp.")
@MainActor
func perform() async throws -> some IntentResult {
.result()
}
}
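For what it's worth, this is the direction I'm experimenting with based on my reading of the CameraCaptureIntent docs; treat the appContext / updateAppContext(_:) usage below as my assumption of how it's meant to work, not a confirmed answer. The idea is that the AppContext carries the shared values, and both the app and the extension read and write it through the intent type rather than through UserDefaults:
import AppIntents
struct MyAppContext: Codable {
    var isRawOn: Bool = false
}
// In the main app, whenever the user toggles the setting:
func publishRawSetting(_ isRawOn: Bool) async {
    UserDefaults.standard.set(isRawOn, forKey: "isRawOn") // app-local cache
    try? await LaunchMyAppControlIntent.updateAppContext(MyAppContext(isRawOn: isRawOn))
}
// In the extension (and in the app on its next launch), read it back:
func readRawSetting() async -> Bool {
    (try? await LaunchMyAppControlIntent.appContext)?.isRawOn ?? false
}
If that is the intended pattern, UserDefaults.standard stays as the app-local cache and the intent's app context becomes the cross-container source of truth.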
Topic: Media Technologies · SubTopic: Photos & Camera
Tags: iOS, Photos and Imaging, PhotoKit, AVFoundation
iPhone7 : iOS 14.0 Beta 5
Xcode-beta
Mac OS : 10.15.5 (19F101)
crash info :
** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHPhotoLibrary presentLimitedLibraryPickerFromViewController:]: unrecognized selector sent to instance xxxxxx'
terminating with uncaught exception of type NSException
my code:
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    if (@available(iOS 14, *)) {
        [[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self];
    }
}
Hi,
I'm a fan of the gallery on Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can go that next step from Spatial photos to Immersive photos. I have a device that can give me 12k x 6k fisheye images in HDR, but it can't do so at a framerate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead. Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC and infers it in a similar way to what the "ProjectedMediaConversion" demo does for video? It would be great to have 180VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
Topic: Media Technologies · SubTopic: Photos & Camera
In what scenario will an app receive the limitExceeded PHPhotosError code? This case was added in iOS 26.1 and is not currently documented. What PhotoKit APIs can encounter this error and how should it be handled?
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
The iPhone 17’s front camera with the new 18MP square sensor and iOS 26’s Center Stage feature can auto-rotate between portrait and landscape like the native iOS Camera app.
Is there a Swift or AVFoundation API that allows developers to manually control front camera orientation in the same way the native Camera app does? Or is this auto-rotation strictly handled by the system without public API access?
Hi everyone,
We're encountering an unexpected issue with our iPhone-only camera app:
👉 TimeMark - Photo Proof
https://apps.apple.com/us/app/timemark-photo-proof/id6446071834
Problem Description:
Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
According to the Apple documentation https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc , this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture — all of which are iPad-only features.
However, this issue is being reported on iPhones, and our app does not support iPad at all.
Also noted in the documentation:
"Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."
Additional Context:
The issue occurs immediately on app launch, before the user can interact with the camera.
We don’t enable multitaskingCameraAccessEnabled.
We are 100% sure this is happening on iPhone, not iPad.
It’s hard to reproduce; users report it happening sporadically.
Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.
Questions:
Why is this interruption reason occurring on iPhone, which doesn’t officially support Slide Over or Split View?
Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip.
Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?
Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
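In the meantime, we're adding this small diagnostic so affected sessions at least report which interruption reason fired and when it ended (standard AVFoundation notifications, logging only):
import AVFoundation
// Diagnostic only: log why the capture session was interrupted and when it recovered.
final class SessionInterruptionLogger {
    private var tokens: [NSObjectProtocol] = []
    init(session: AVCaptureSession) {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                                         object: session, queue: .main) { note in
            // The reason arrives as an NSNumber under AVCaptureSessionInterruptionReasonKey.
            if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
                print("Capture session interrupted, reason rawValue: \(reason.rawValue)")
            }
        })
        tokens.append(center.addObserver(forName: AVCaptureSession.interruptionEndedNotification,
                                         object: session, queue: .main) { _ in
            print("Capture session interruption ended")
        })
    }
    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}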
Hi, we have created an app that allows recording 4K 60 fps videos. We have noticed that sometimes the recording switches to 20 fps (the value 20 is constant) even though the resolution setting is 4K 60 fps. We are using AVCaptureDevice to invoke the camera.
Has anyone experienced this problem before? What is special about 20 fps? Why does it drop from 60 fps to 20 fps, and not to some other number?
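One thing we plan to verify, hedged as a guess rather than a confirmed cause: if activeVideoMinFrameDuration / activeVideoMaxFrameDuration are left at the format defaults, auto exposure can lengthen the frame duration in low light and lower the effective frame rate on its own. Pinning the frame duration after selecting the 4K/60 format rules that out:
import AVFoundation
func lock60fps(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // Only pin 60 fps if the active format actually supports it.
    guard device.activeFormat.videoSupportedFrameRateRanges.contains(where: { $0.maxFrameRate >= 60 }) else { return }
    let duration = CMTime(value: 1, timescale: 60)
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
}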
Topic: Media Technologies · SubTopic: Photos & Camera
Tags: VideoToolbox, Professional Video Applications, Media
I'm writing a program to control a PTZ camera connected via USB.
I can get the target camera's unique_id and other info provided by AVFoundation, but I don't know how to locate my target USB device to send a UVC control request.
There are many cameras with the same VendorID and ProductID connected at the same time, so I need a more exact way to find out which device is my target.
It looks like the unique_id provided is (locationID<<32 | VendorID<<16 | ProductID) as a hex string, but I'm not sure I can always assume this behavior won't change.
Is there a document that declares how AVFoundation generates the unique_id for a USB camera, so I can assume this conversion will always work? Or is there a way to send a PTZ control request to an AVCaptureDevice?
https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice
I have seen this similar question, but I'm worried that extracting LocationID+VendorID+ProductID from the unique_id is programming to the implementation instead of the interface. Is there a better way to control my camera?
Here's my example code for getting the unique_id:
//
// camera_unique_id_test.mm
//
// Test code: fetch the AVCaptureDevice unique_id of every camera on the system from C++.
//
// Build command:
// clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
// camera_unique_id_test.mm -o camera_unique_id_test
//
#include <iostream>
#include <string>
#include <vector>

#import <AVFoundation/AVFoundation.h>
#import <Foundation/Foundation.h>

struct CameraInfo {
    std::string uniqueId;
};

std::vector<CameraInfo> getAllCameraDevices() {
    std::vector<CameraInfo> cameras;
    @autoreleasepool {
        NSArray<AVCaptureDevice*>* devices =
            [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        AVCaptureDevice* defaultDevice =
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        // Iterate over all devices
        for (AVCaptureDevice* device in devices) {
            CameraInfo info;
            // Fetch the unique_id
            info.uniqueId = std::string([device.uniqueID UTF8String]);
            cameras.push_back(info);
        }
    }
    return cameras;
}

int main(int argc, char* argv[]) {
    std::vector<CameraInfo> cameras = getAllCameraDevices();
    for (size_t i = 0; i < cameras.size(); i++) {
        const CameraInfo& camera = cameras[i];
        std::cout << "  Device " << (i + 1) << ":" << std::endl;
        std::cout << "    unique_id: " << camera.uniqueId << std::endl;
    }
    return 0;
}
and here's my code for UVC control:
// clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
#include <iostream>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOMessage.h>
#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USB.h>
CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService,
const char* key) {
CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
kCFStringEncodingUTF8);
if (!keyString)
return nullptr;
CFStringRef result = static_cast<CFStringRef>(
IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
kIORegistryIterateRecursively));
CFRelease(keyString);
return result;
}
std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
if (!cfString)
return "";
char buffer[256];
Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer),
kCFStringEncodingUTF8);
CFRelease(cfString);
return success ? std::string(buffer) : std::string("");
}
uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
kCFStringEncodingUTF8);
if (!keyString)
return 0;
CFNumberRef number = static_cast<CFNumberRef>(
IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
kIORegistryIterateRecursively));
CFRelease(keyString);
if (!number)
return 0;
uint32_t value = 0;
CFNumberGetValue(number, kCFNumberSInt32Type, &value);
CFRelease(number);
return value;
}
int main() {
// Get matching dictionary for USB devices
CFMutableDictionaryRef matchingDict =
IOServiceMatching(kIOUSBDeviceClassName);
// Get iterator for matching services
io_iterator_t serviceIterator;
IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict,
&serviceIterator);
// Iterate through matching devices
io_service_t usbService;
while ((usbService = IOIteratorNext(serviceIterator))) {
uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");
IOCFPlugInInterface** plugInInterface = nullptr;
IOUSBDeviceInterface** deviceInterface = nullptr;
SInt32 score;
// Get device plugin interface
IOCreatePlugInInterfaceForService(usbService, kIOUSBDeviceUserClientTypeID,
kIOCFPlugInInterfaceID, &plugInInterface,
&score);
// Get device interface
(*plugInInterface)
->QueryInterface(plugInInterface,
CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
(LPVOID*)&deviceInterface);
(*plugInInterface)->Release(plugInInterface);
// Try to find UVC control interface using CreateInterfaceIterator
io_iterator_t interfaceIterator;
IOUSBFindInterfaceRequest interfaceRequest;
interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass; // 14
interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass; // 1
interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;
(*deviceInterface)
->CreateInterfaceIterator(deviceInterface, &interfaceRequest,
&interfaceIterator);
(*deviceInterface)->Release(deviceInterface);
io_service_t usbInterface = IOIteratorNext(interfaceIterator);
IOObjectRelease(interfaceIterator);
if (usbInterface) {
std::cout << "Get UVC device with:" << std::endl;
std::cout << "locationId: " << std::hex << locationId << std::endl;
std::cout << "vendorId: " << std::hex << vendorId << std::endl;
std::cout << "productId: " << std::hex << productId << std::endl
<< std::endl;
IOObjectRelease(usbInterface);
}
IOObjectRelease(usbService);
}
IOObjectRelease(serviceIterator);
}
Device: iPhone 16 Pro Max
OS: iOS 26.0.1
AVCaptureDevice.DeviceType: builtInUltraWideCamera
activeFormat: <AVCaptureDeviceFormat: 0x10ffb9ac0 'vide'/'420v' 1440x1080, { 1- 60 fps}, photo dims:{1440x1080,2016x1512}, fov:101.022, gdc fov:103.625, binned, max zoom:94.50 (upscales @1.40), system zoom range:1.0-3.0, AF System:1, ISO:15.0-3600.0, SS:0.000023-1.000000, system exposure bias range:-2.0-2.0, supports multicam, supports CS RoI, supports Smart Style, supports Smudge Detection>
API: device.isFocusModeSupported(.continuousAutoFocus) == true
setting: device.focusMode = .continuousAutoFocus
The setting call succeeds, but continuous autofocus does not actually work.
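For comparison, the full configuration sequence in use, reduced to a minimal sketch (nothing unusual; posted in case something outside the lines above differs):
import AVFoundation
import CoreGraphics
func enableContinuousAutoFocus(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    guard device.isFocusModeSupported(.continuousAutoFocus) else { return }
    if device.isFocusPointOfInterestSupported {
        device.focusPointOfInterest = CGPoint(x: 0.5, y: 0.5) // center of the frame
    }
    device.focusMode = .continuousAutoFocus
}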
Issue: After iOS 18.5 release, our app is experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.
Details:
Our camera-related code has not been updated recently. However, we've observed that the error rate has increased significantly starting from May 2025, rising from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users), a 5x increase.
The frequency has increased noticeably since iOS 18.5
This is affecting our app's camera functionality and user experience
Questions:
Are there any known changes in iOS 18.5 regarding camera access management?
What are the recommended best practices to handle this interruption reason?
Are there any API changes we should be aware of?
Best,
Shay
My implementation of LockedCameraCapture does not launch my app when tapped from the Lock Screen, but when the same widget is in the Control Center, it launches the app successfully.
Standard Xcode target template:
Lock_Screen_Capture.swift
@main
struct Lock_Screen_Capture: LockedCameraCaptureExtension {
var body: some LockedCameraCaptureExtensionScene {
LockedCameraCaptureUIScene { session in
Lock_Screen_CaptureViewFinder(session: session)
}
}
}
Lock_Screen_CaptureViewFinder.swift:
import SwiftUI
import UIKit
import UniformTypeIdentifiers
import LockedCameraCapture
struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
let session: LockedCameraCaptureSession
var sourceType: UIImagePickerController.SourceType = .camera
init(session: LockedCameraCaptureSession) {
self.session = session
}
func makeUIViewController(context: Self.Context) -> UIImagePickerController {
let imagePicker = UIImagePickerController()
imagePicker.sourceType = sourceType
imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
imagePicker.cameraDevice = .rear
return imagePicker
}
func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) {
}
}
Then I have my widget:
struct CameraWidgetControl: ControlWidget {
var body: some ControlWidgetConfiguration {
StaticControlConfiguration(
kind: "com.myCompany.myAppName.lock-screen") {
ControlWidgetButton(action: MyAppCaptureIntent()) {
Label("Capture", systemImage: "camera.shutter.button.fill")
}
}
}
}
My AppIntent:
struct MyAppContext: Codable {}
struct MyAppCaptureIntent: CameraCaptureIntent {
typealias AppContext = MyAppContext
static let title: LocalizedStringResource = "MyAppCaptureIntent"
static let description = IntentDescription("Capture photos and videos with MyApp.")
@MainActor
func perform() async throws -> some IntentResult {
.result()
}
}
The Issue
The LockedCameraCapture widget does not launch my app when tapped from the Lock Screen: you get the Face ID prompt and are then taken back to the Home Screen. But when the same widget is in the Control Center, it launches the app successfully.
Error Message
When tapped on Lock Screen, I get the following error code:
LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Things I tried
Widget image displays correctly
The App ID and provisioning profile seem to be fine, since the same code works when injected into the AVCam sample app with the same App IDs.
The AppIntent file has target membership in both the Lock Screen capture extension and the widget.
The app compiles without errors or warnings.
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
guard let self = self else { return }
let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"
guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
DispatchQueue.main.async {
self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
}
return
}
do {
let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
self.videoCapture.captureSession.beginConfiguration()
if isSwitchingToUltraWide && self.isFlashlightOn {
self.forceEnableTorchThroughWide()
}
if let currentInput = currentInput {
self.videoCapture.captureSession.removeInput(currentInput)
}
let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
self.videoCapture.captureSession.addInput(videoInput)
self.videoCapture.captureSession.commitConfiguration()
self.videoCapture.updateVideoOrientation()
DispatchQueue.main.async {
if let barButton = sender as? UIBarButtonItem {
barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
}
print("Switched to \(cameraName) camera.")
}
self.isUsingFisheyeCamera.toggle()
} catch {
DispatchQueue.main.async {
self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
}
}
}
}
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
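For comparison, the pattern I'd try next (hedged, not a confirmed fix): commit the session reconfiguration first, then toggle the torch on whichever device is actually the new active input, inside its own lockForConfiguration block, instead of forcing it through the wide-angle device while the Ultra-Wide input is being attached:
import AVFoundation
// Sketch: enable/disable the torch on the device currently feeding the session,
// called only after beginConfiguration()/commitConfiguration() has completed.
func setTorch(_ on: Bool, for session: AVCaptureSession) {
    guard let device = (session.inputs.first as? AVCaptureDeviceInput)?.device,
          device.hasTorch, device.isTorchAvailable else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}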
I'm getting an error when writing a CIImage as a HEIF image:
// Create CIImage directly from pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])
// Write HEIC synchronously
do {
try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
The error I'm getting is:
Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}
Both
try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options)
and
try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace)
work. I also verified that the code works with iOS 18.
Is there something new we need to do for HEIF images?
Thanks in advance
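One diagnostic idea rather than a fix: render to in-memory HEIF data first with heifRepresentation(of:format:colorSpace:options:), which separates the encode step from the file write and narrows down where the PhotoCompressionSession failure comes from:
import CoreImage
// If this also fails, the problem is the encode itself (pixel format, color space,
// or image extent) rather than writing to the destination URL.
func encodeHEIF(_ image: CIImage, context: CIContext, colorSpace: CGColorSpace) -> Data? {
    context.heifRepresentation(of: image, format: .RGBA8, colorSpace: colorSpace, options: [:])
}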
Just downloaded iOS 26.1 and my phone keeps ringing after the call has been answered. Any fixes for this?
Topic: Media Technologies · SubTopic: Photos & Camera
Hi everyone,
I’m running into an issue with PHPickerFilter when using PHPickerViewController.
When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn’t always apply — users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections).
Here’s a simplified snippet of my setup:
var configuration = PHPickerConfiguration(photoLibrary: .shared())
configuration.selectionLimit = 1
var filters = [PHPickerFilter]()
filters.append(.videos)
filters.append(.livePhotos)
configuration.filter = PHPickerFilter.any(of: filters)
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true)
Expected behavior:
The picker should consistently respect the filter across both Photos and Collections tabs, only showing assets that match the filter.
Actual behavior:
The filter seems to apply correctly in the Photos tab, but in the Collections tab, other asset types are still visible/selectable.
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration?
Thanks in advance!
Topic: Media Technologies · SubTopic: Photos & Camera
Tags: Files and Storage, Media Library, Photos and Imaging, PhotoKit
I discovered that when editing photos with the PhotoKit API, PHContentEditingOutput's renderedContentURL points to a file in the app container's tmp directory with a filename that seems to follow the format render.<uuid>.JPG, and that file does not get deleted if the edit does not complete successfully (the user cancels the edit request, an error occurs, the app crashes, etc.). I understand the system is supposed to delete tmp files automatically every once in a while, but some users are noticing my app's Documents & Data size inflating, so I'm considering deleting these render files each time the app is launched. But I don't want to delete everything in the tmp directory, as there could be other data in there.
What's the best way to remove those temporary files? Does the filename always start with render. no matter the device language? I thought I'd delete files in NSTemporaryDirectory() with that prefix, but then I discovered that in Mac Catalyst the location is not the tmp directory directly; the files are in tmp/TemporaryItems/<bundleid>.
Thanks!
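For concreteness, this is the cleanup I'm considering. The hard-coded "render." prefix is exactly the assumption I'm asking about, and the recursive walk is meant to also catch the Mac Catalyst tmp/TemporaryItems/<bundleid> case if it lives under the app's temporary directory:
import Foundation
// Sketch: remove leftover PhotoKit render files at launch, touching only files
// whose names match the observed prefix.
func removeLeftoverRenderFiles() {
    let fileManager = FileManager.default
    guard let enumerator = fileManager.enumerator(at: fileManager.temporaryDirectory,
                                                  includingPropertiesForKeys: [.isRegularFileKey]) else { return }
    for case let url as URL in enumerator where url.lastPathComponent.hasPrefix("render.") {
        try? fileManager.removeItem(at: url)
    }
}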
I want to fully support the new iPhone models in my app, and ideally need to know the available lenses. However, I can't find information about this on the web and they're not reported in the simulators. The closest thing I found was this, but it's very out of date. https://developer.apple.com/library/archive/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
My only other options are to buy each device, which isn't really feasible, or to log the data from real users via an analytics tool, which isn't ideal either.
Thanks,
Alex
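The closest runtime alternative I've found (acknowledging the analytics route above isn't ideal) is to enumerate the cameras on-device with AVCaptureDevice.DiscoverySession, which at least avoids hard-coding per-model lens tables:
import AVFoundation
// Lists the physical and virtual back cameras available on the current device.
func availableBackCameras() -> [String] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera, .builtInTelephotoCamera,
                      .builtInDualCamera, .builtInDualWideCamera, .builtInTripleCamera],
        mediaType: .video,
        position: .back)
    return discovery.devices.map { "\($0.localizedName) (\($0.deviceType.rawValue))" }
}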
Topic: Media Technologies · SubTopic: Photos & Camera