Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic

Post · Replies · Boosts · Views · Activity

Images with unusual color spaces not correctly loaded by Core Image
Some users reported that their images were not loading correctly in our app. After a lot of debugging we identified the following:
- This only happens when the app is built for Mac Catalyst, not on iOS, iPadOS, or "real" macOS (AppKit).
- The images in question have unusual color spaces; we observed the issue for uRGB and eciRGB v2.
- Those images are rendered correctly in Photos and Preview on all platforms.
- When displayed inside a UIImageView or a SwiftUI Image, they render correctly. The issue only occurs when loading the image via Core Image.
- Comparing the Core Image render graphs between the AppKit (working) and Catalyst (faulty) builds, they look identical, yet the results differ (render-graph screenshots for the Mac/AppKit and Catalyst cases omitted here).
Something seems to be off when Core Image tries to load an image with a foreign color space in Catalyst. We identified a workaround, sketched below: by using a CGImageDestination to transcode the image with the kCGImageDestinationOptimizeColorForSharing option, Image I/O converts the image to sRGB (or similar) and Core Image is then able to load it correctly. However, one potentially loses fidelity this way. Might there be a better workaround?
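A minimal sketch of that Image I/O transcoding workaround, assuming the image comes from a file URL and that PNG is an acceptable intermediate format; the helper name and those choices are illustrative, not from the original post:

```swift
import CoreImage
import ImageIO
import UniformTypeIdentifiers

// Hypothetical helper illustrating the workaround described above: re-encode the
// image with Image I/O so its color data is normalized for sharing (sRGB or
// similar) before Core Image loads it. Note the post's caveat: this can cost
// some color fidelity.
func makeCIImageNormalizingColorSpace(from url: URL) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }

    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        data as CFMutableData, UTType.png.identifier as CFString, 1, nil) else { return nil }

    // Ask Image I/O to convert the pixel data to a widely supported color space.
    let options = [kCGImageDestinationOptimizeColorForSharing as String: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }

    return CIImage(data: data as Data)
}
```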
Replies: 2 · Boosts: 3 · Views: 118 · Activity: Aug ’25
Does CMIO support "hiding" the built-in camera?
Hi guys, can I use CMIO to achieve the following on macOS when a USB device (camera/mic/speaker) is connected:
1. When a third-party video-conferencing app is not in a meeting, ensure the app defaults to using the USB device (camera/mic/speaker).
2. When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (camera/mic/speaker).
Replies: 2 · Boosts: 0 · Views: 154 · Activity: Jul ’25
WorldCaptureKit
Does this library exist in Xcode 16.4? "import WorldCaptureKit" gives the error "No such module 'WorldCaptureKit'", and I cannot find any information about the library in the Apple documentation, but AI tools keep suggesting that I use it.
Replies: 2 · Boosts: 0 · Views: 121 · Activity: Jun ’25
any app shows a black square instead of the camera picture
Hi! I am making an app for Apple Vision Pro (visionOS 2.5) that scans the surroundings and recognises all the text around you. I tried to use AVCaptureSession, but when I run the app from Xcode on a real AVP device, the camera is not accessible. I enabled camera access in my Info.plist (NSCameraUsageDescription: "Used for live text recognition") and I checked the camera settings on the AVP; there are no restrictions. However, I always get a black square with a crossed-out camera icon displayed instead of the image from the camera. I tried a couple of different apps from GitHub that use AVCaptureSession, and they all display the black square instead of the picture. What can be wrong with the camera?
Replies: 2 · Boosts: 0 · Views: 116 · Activity: Jun ’25
Why does a DriverKit extension need a CMIO extension?
I developed a DriverKit extension based on overriding-the-default-usb-video-class-extension, but the linked article doesn't give implementation details. I asked DTS, who gave two tips:
1. Do you also have a CMIO extension to load in place of the default one described in overriding-the-default-usb-video-class-extension?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.
I want to know why a DriverKit extension needs a CMIO extension, and what the data and control flow between them is.
Replies: 2 · Boosts: 0 · Views: 112 · Activity: Jun ’25
After iOS 18.5, the AVFoundation AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps error has caused a significant increase in camera black screen issues.
Issue: After the iOS 18.5 release, our app is experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.
Details: Our camera-related code has not been updated recently. However, we've observed that the error rate has increased significantly starting from May 2025: it has risen from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users), a 5x increase. The frequency has grown noticeably since iOS 18.5, and this is affecting our app's camera functionality and user experience.
Questions:
- Are there any known changes in iOS 18.5 regarding camera access management?
- What are the recommended best practices to handle this interruption reason (see the sketch below)?
- Are there any API changes we should be aware of?
Best, Shay
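One minimal, hedged sketch of handling this interruption reason: observe the session's interruption notifications, inspect the reason, and react when the interruption ends. The class and its wiring are illustrative, not a confirmed best practice from Apple:

```swift
import AVFoundation

// Observe capture-session interruptions so the app can at least distinguish
// .videoDeviceNotAvailableWithMultipleForegroundApps (rawValue 4) from other
// reasons and resume cleanly when the interruption ends.
final class InterruptionObserver {
    private var tokens: [NSObjectProtocol] = []

    init(session: AVCaptureSession) {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(
            forName: AVCaptureSession.wasInterruptedNotification,
            object: session, queue: .main) { note in
                if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
                   let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
                    // e.g. show a "camera unavailable" placeholder for reason 4.
                    print("Capture session interrupted: \(reason)")
                }
            })
        tokens.append(center.addObserver(
            forName: AVCaptureSession.interruptionEndedNotification,
            object: session, queue: .main) { _ in
                // Resume the preview / restart the session if needed.
                print("Capture session interruption ended")
            })
    }

    deinit { tokens.forEach(NotificationCenter.default.removeObserver) }
}
```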
Replies: 2 · Boosts: 1 · Views: 392 · Activity: Oct ’25
I cannot acquire entitlement named com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning.
AVCaptureVideoDataOutput.preparesCellularRadioForNetworkConnection requires the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement, but I cannot acquire it: I can't find it under 'Certificates, Identifiers & Profiles', and the build reports: Provisioning profile "iOS Team Provisioning Profile: ......" doesn't include the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement. Any solutions?
Replies: 2 · Boosts: 0 · Views: 504 · Activity: Jul ’25
How to reliably detect user-modified photos?
I'm developing a photo backup app. To detect photos that were newly added or edited since the app last launched, I keep a local dictionary in the format [localIdentifier: modificationDate]. However, PHAsset.modificationDate is not reliable: it often changes unexpectedly, possibly due to system operations like iCloud metadata updates. Is there a more reliable way to detect whether a photo has been modified by the user since the last app launch? I'm thinking about using a content hash instead, but I'm not sure how heavy that operation is in terms of performance.
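For changes that happen while the app is running (it does not cover edits made between launches, which is the harder part of the question), the Photos change-observer API can report which assets changed and whether the change touched the image content rather than just metadata. A rough sketch, with illustrative names, assuming photo library access has already been granted:

```swift
import Photos

// Track a fetch result and let Photos tell us which assets changed,
// including whether the pixel data itself ("content") was edited.
final class LibraryChangeTracker: NSObject, PHPhotoLibraryChangeObserver {
    private var tracked: PHFetchResult<PHAsset>

    override init() {
        tracked = PHAsset.fetchAssets(with: .image, options: nil)
        super.init()
        PHPhotoLibrary.shared().register(self)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        // Called on a background queue.
        guard let details = changeInstance.changeDetails(for: tracked) else { return }
        tracked = details.fetchResultAfterChanges
        for asset in details.changedObjects {
            // Per-asset details expose whether the image data itself was edited,
            // as opposed to a metadata-only change.
            if changeInstance.changeDetails(for: asset)?.assetContentChanged == true {
                print("Asset content edited: \(asset.localIdentifier)")
            }
        }
    }

    deinit { PHPhotoLibrary.shared().unregisterChangeObserver(self) }
}
```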
Replies: 2 · Boosts: 0 · Views: 73 · Activity: Aug ’25
Why is my YCbCr to sRGB camera pipeline brighter than the preview layer? (420YpCbCr8BiPlanarFullRange, AVCaptureVideoPreviewLayer)
Hi all, I'm working on a custom Metal-based video pipeline using AVCaptureVideoDataOutput, and I've run into an unexpected issue related to exposure.
Setup: I'm capturing video frames using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. In my Metal shader, I:
- Convert YCbCr (full-range Rec. 709) to linear Rec. 709 RGB.
- Apply Rec. 709 → sRGB gamma encoding.
- Output to .bgra8Unorm_srgb via MTKView.
Everything renders correctly in terms of color-space math, but the image appears significantly brighter (~+3 stops EV) compared to AVCaptureVideoPreviewLayer and the native iOS Camera app under the same camera exposure settings.
What I've verified:
- The color transforms are correct: YCbCr709 to RGB, then linear to sRGB (see the CPU-side sketch below).
- I'm not applying any tone mapping or aggressive look LUTs yet.
- Camera exposure is locked using device.setExposureModeCustom(duration: ..., iso: ...).
- The same EV (e.g., ISO 50, 1/125s, f/5.6) on my iPhone appears visually 3 stops brighter than on my digital cameras (Sony/Canon etc.). To match the look of the preview layer or camera app, I have to simulate a ~–3 EV shift in my custom pipeline.
Questions:
- Is AVCaptureVideoPreviewLayer applying extra tone mapping, digital gain, or contrast shaping (like an OOTF etc.)?
- Does the camera ISP expose "hotter" (i.e., with more light) internally for the preview layer than what we get in the video frame buffers?
- Is there a standard way to compensate for this ISP behavior in custom pipelines using AVCaptureVideoDataOutput?
- Can this be accounted for using metadata (e.g., exposure bias, gain, gamma curve)?
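For reference, a CPU-side sketch (plain Swift rather than the Metal shader itself) of the conversion chain described above. The constants are the textbook BT.709 full-range matrix and the standard BT.709/sRGB transfer functions; they are an assumption about what the shader does, not code taken from the post:

```swift
import Foundation
import simd

// Full-range Y'CbCr (0...1, chroma centered at 0.5) -> non-linear R'G'B', BT.709 coefficients.
func rgbFromFullRange709(y: Float, cb: Float, cr: Float) -> SIMD3<Float> {
    let cbc = cb - 0.5, crc = cr - 0.5
    let r = y + 1.5748 * crc
    let g = y - 0.1873 * cbc - 0.4681 * crc
    let b = y + 1.8556 * cbc
    return SIMD3(r, g, b)
}

// BT.709 inverse OETF (piecewise, approximately gamma 1/0.45): R'G'B' -> linear light.
func linearFrom709(_ v: Float) -> Float {
    v < 0.081 ? v / 4.5 : powf((v + 0.099) / 1.099, 1.0 / 0.45)
}

// sRGB encoding (piecewise): linear light -> sRGB-encoded value.
func sRGBEncode(_ v: Float) -> Float {
    v <= 0.0031308 ? 12.92 * v : 1.055 * powf(v, 1.0 / 2.4) - 0.055
}
```

Comparing a few sample values from this reference path against the shader's output can help separate "matrix/transfer-function bug" from "the preview layer is doing additional tone mapping", which is the distinction the questions above are after.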
Replies: 2 · Boosts: 0 · Views: 245 · Activity: Sep ’25
AVCaptureMetadataOutput .face detection not working on iOS 26 Beta with high sessionPreset
In iOS 26 (Developer Beta), the AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when metadataOutput.metadataObjectTypes = [.face] is set. On earlier iOS versions the issue does not occur. Interestingly, face detection works if I set the sessionPreset to .medium, but not with .high — except on the iPhone 16 Pro Max, where it works regardless.
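A stripped-down configuration sketch matching the setup described above; the function name and wiring are illustrative and simply show the preset/metadata-type combination being discussed:

```swift
import AVFoundation

// Configure face-metadata detection on an existing session (inputs assumed added).
func configureFaceDetection(on session: AVCaptureSession,
                            delegate: AVCaptureMetadataOutputObjectsDelegate) {
    session.beginConfiguration()
    session.sessionPreset = .high   // reportedly fails on the iOS 26 beta; .medium works

    let output = AVCaptureMetadataOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
        output.setMetadataObjectsDelegate(delegate, queue: .main)
        // .face must be set after the output is attached to a session with a
        // camera input, otherwise it is not an available metadata object type.
        output.metadataObjectTypes = [.face]
    }
    session.commitConfiguration()
}
```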
Replies: 2 · Boosts: 1 · Views: 348 · Activity: Sep ’25
Only fetch PHAssets from the current user's iCloud Library
I'm working on a photo app and I want to allow the user to display, edit and delete photos. I can fetch all photos using PHAsset.fetchAssets(with: options), and this works as intended. However, I can't seem to find a way to prevent the user from seeing photos from a Shared Library. PHAssetSourceType only offers typeCloudShared, which restricts results to items from shared albums, not the shared library. How can I filter by iCloud Shared Library?
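A sketch of the closest knob I'm aware of, PHFetchOptions.includeAssetSourceTypes. Whether .typeUserLibrary actually excludes iCloud Shared Library items (as opposed to shared-album items) is exactly the open question here, so treat this as something to try rather than a confirmed answer:

```swift
import Photos

// Fetch assets while restricting the allowed source types to the user's own library.
func fetchPersonalLibraryAssets() -> PHFetchResult<PHAsset> {
    let options = PHFetchOptions()
    options.includeAssetSourceTypes = [.typeUserLibrary]   // excludes .typeCloudShared, .typeiTunesSynced
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    return PHAsset.fetchAssets(with: options)
}
```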
Replies: 2 · Boosts: 1 · Views: 374 · Activity: Sep ’25
PHPickerFilter doesn't always apply in Collections tab when using PHPickerViewController
Hi everyone, I'm running into an issue with PHPickerFilter when using PHPickerViewController. When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn't always apply: users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections). Here's a simplified snippet of my setup:

    var configuration = PHPickerConfiguration(photoLibrary: .shared())
    configuration.selectionLimit = 1
    var filters = [PHPickerFilter]()
    filters.append(.videos)
    filters.append(.livePhotos)
    configuration.filter = PHPickerFilter.any(of: filters)
    configuration.preferredAssetRepresentationMode = .current
    let picker = PHPickerViewController(configuration: configuration)
    picker.delegate = self
    present(picker, animated: true)

Expected behavior: The picker should consistently respect the filter across both the Photos and Collections tabs, only showing assets that match the filter.
Actual behavior: The filter applies correctly in the Photos tab, but in the Collections tab other asset types are still visible/selectable.
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration? Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 345 · Activity: Oct ’25
How can I locate a UVC camera for PTZ control by AVCaptureDevice.unique_id
I'm writing a program to control a PTZ camera connected via USB. I can get the target camera's unique_id and the other info provided by AVFoundation, but I don't know how to locate the corresponding USB device to send a UVC control request. There are many cameras with the same VendorID and ProductID connected at a time, so I need a more exact way to find out which device is my target. It looks like the unique_id provided is (locationID << 32 | VendorID << 16 | ProductID) as a hex string, but I'm not sure I can always assume this behavior won't change. Is there a document describing how AVFoundation generates the unique_id for a USB camera, so I can assume this conversion will always work? Or is there a way to send a PTZ control request directly to an AVCaptureDevice? https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice I have seen this similar question, but I'm worried that extracting LocationID+VendorID+ProductID from unique_id is programming to the implementation instead of the interface. So, is there any other, better way to control my camera? Here's my example code for getting unique_id:

    //
    // camera_unique_id_test.mm
    //
    // Test code: get the AVCaptureDevice unique_id of the cameras on the current system from C++
    //
    // Build command:
    // clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
    //   camera_unique_id_test.mm -o camera_unique_id_test
    //
    #include <iostream>
    #include <string>
    #include <vector>

    #import <AVFoundation/AVFoundation.h>
    #import <Foundation/Foundation.h>

    struct CameraInfo {
        std::string uniqueId;
    };

    std::vector<CameraInfo> getAllCameraDevices() {
        std::vector<CameraInfo> cameras;
        @autoreleasepool {
            NSArray<AVCaptureDevice*>* devices =
                [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
            AVCaptureDevice* defaultDevice =
                [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
            // Iterate over all devices
            for (AVCaptureDevice* device in devices) {
                CameraInfo info;
                // Get unique_id
                info.uniqueId = std::string([device.uniqueID UTF8String]);
                cameras.push_back(info);
            }
        }
        return cameras;
    }

    int main(int argc, char* argv[]) {
        std::vector<CameraInfo> cameras = getAllCameraDevices();
        for (size_t i = 0; i < cameras.size(); i++) {
            const CameraInfo& camera = cameras[i];
            std::cout << "  Device " << (i + 1) << ":" << std::endl;
            std::cout << "    unique_id: " << camera.uniqueId << std::endl;
        }
        return 0;
    }

and here's my code for UVC control:

    // clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
    #include <iostream>

    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/IOCFPlugIn.h>
    #include <IOKit/IOKitLib.h>
    #include <IOKit/IOMessage.h>
    #include <IOKit/usb/IOUSBLib.h>
    #include <IOKit/usb/USB.h>

    CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService, const char* key) {
        CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
        if (!keyString) return nullptr;
        CFStringRef result = static_cast<CFStringRef>(IORegistryEntryCreateCFProperty(
            ioService, keyString, kCFAllocatorDefault, kIORegistryIterateRecursively));
        CFRelease(keyString);
        return result;
    }

    std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
        CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
        if (!cfString) return "";
        char buffer[256];
        Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer), kCFStringEncodingUTF8);
        CFRelease(cfString);
        return success ? std::string(buffer) : std::string("");
    }

    uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
        CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
        if (!keyString) return 0;
        CFNumberRef number = static_cast<CFNumberRef>(IORegistryEntryCreateCFProperty(
            ioService, keyString, kCFAllocatorDefault, kIORegistryIterateRecursively));
        CFRelease(keyString);
        if (!number) return 0;
        uint32_t value = 0;
        CFNumberGetValue(number, kCFNumberSInt32Type, &value);
        CFRelease(number);
        return value;
    }

    int main() {
        // Get matching dictionary for USB devices
        CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);

        // Get iterator for matching services
        io_iterator_t serviceIterator;
        IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, &serviceIterator);

        // Iterate through matching devices
        io_service_t usbService;
        while ((usbService = IOIteratorNext(serviceIterator))) {
            uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
            uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
            uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");

            IOCFPlugInInterface** plugInInterface = nullptr;
            IOUSBDeviceInterface** deviceInterface = nullptr;
            SInt32 score;

            // Get device plugin interface
            IOCreatePlugInInterfaceForService(usbService, kIOUSBDeviceUserClientTypeID,
                                              kIOCFPlugInInterfaceID, &plugInInterface, &score);

            // Get device interface
            (*plugInInterface)->QueryInterface(plugInInterface,
                                               CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                               (LPVOID*)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);

            // Try to find the UVC control interface using CreateInterfaceIterator
            io_iterator_t interfaceIterator;
            IOUSBFindInterfaceRequest interfaceRequest;
            interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass;      // 14
            interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass;  // 1
            interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
            interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;
            (*deviceInterface)->CreateInterfaceIterator(deviceInterface, &interfaceRequest, &interfaceIterator);
            (*deviceInterface)->Release(deviceInterface);

            io_service_t usbInterface = IOIteratorNext(interfaceIterator);
            IOObjectRelease(interfaceIterator);

            if (usbInterface) {
                std::cout << "Got UVC device with:" << std::endl;
                std::cout << "locationId: " << std::hex << locationId << std::endl;
                std::cout << "vendorId: " << std::hex << vendorId << std::endl;
                std::cout << "productId: " << std::hex << productId << std::endl << std::endl;
                IOObjectRelease(usbInterface);
            }
            IOObjectRelease(usbService);
        }
        IOObjectRelease(serviceIterator);
    }
Replies: 2 · Boosts: 0 · Views: 215 · Activity: Oct ’25
APMP & Photography?
Hi, I'm a fan of the gallery on Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can go that next step from Spatial photos to Immersive photos. I have a device that can give me 12k x 6k fisheye images in HDR, but it can't do it at a framerate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead. Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC and infers it in a similar way to what the "ProjectedMediaConversion" demo is doing for video? It would be great to have 180VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
Replies: 2 · Boosts: 0 · Views: 293 · Activity: Oct ’25
PHPhotoLibrary.performChanges completionHandler not called when deleting assets on iOS 26
In my app, I use the API provided by the Photos framework to delete specified photos. But after upgrading to iOS 26, the delete function no longer works on some iOS devices: the API never triggers the system confirmation dialog, and the completionHandler is never called. In the iOS Photos app, deletion works correctly on the same assets, but calling the API from my app does not work.
Steps to Reproduce:
1. Make sure the app has Full Photo Library Access.
2. Execute the following code:

    PHPhotoLibrary.shared().performChanges({
        let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
        PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
    }, completionHandler: completionHandler)

Expected Behavior: The system should present a confirmation dialog asking the user to delete the selected photos. After the user confirms, the deletion should occur, and the completionHandler should be called with success or an error.
Actual Behavior: The system delete confirmation dialog does not appear, and the completionHandler is never called.
Environment: iOS 26.1 / 26.0.1
This looks like an API bug. I want to check whether it is a known issue and whether it will be fixed. Thanks.
Replies: 2 · Boosts: 0 · Views: 208 · Activity: 4w
builtInLiDARDepthCamera doesn't work on the 2020 iPad Pro on iOS 26
On iOS 26.1, this throws on the 2020 iPad Pro (4th gen) but works fine on an M4 iPad Pro or iPhone 15 Pro:

    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }

It's just the standard code from Apple's own sample, so it obviously used to work: https://developer.apple.com/documentation/AVFoundation/capturing-depth-using-the-lidar-camera Does it fail because Apple has silently dropped support for the older LiDAR sensor used prior to the M4 iPad Pro, or is there another reason? What about the 5th and 6th gen iPad Pro, does it still work on those?
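A small diagnostic sketch to help narrow this down: enumerate which depth-capable device types the current hardware/OS actually exposes, to separate "device type not offered on this model/OS" from a configuration problem. The device-type list and function name are illustrative:

```swift
import AVFoundation

// Log every depth-capable capture device the system reports on this hardware/OS.
func logAvailableDepthDevices() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInLiDARDepthCamera, .builtInTrueDepthCamera, .builtInDualCamera],
        mediaType: .video,
        position: .unspecified)
    for device in discovery.devices {
        print(device.deviceType.rawValue, device.localizedName)
    }
}
```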
Replies: 2 · Boosts: 0 · Views: 366 · Activity: 2w
Pictures won't import from iPhone to the Photo app on iMac
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I have to mention that my iMac is pretty old and is running OS High Sierra 10.13.6 and is not allowing me to update the OS to a newer version. Anyway, the main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to upload, the error message I get reads, "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to iMac. This was not a problem before the recent iPhone update. I tried closing the Photo app and reopening it. I tried restarting my iPhone and iMac but nothing seems to work. Any help would be much appreciated.
Replies: 3 · Boosts: 1 · Views: 1.8k · Activity: Dec ’24
Format of 14-bit RAW bayer data from lower bit camera sensor?
I'm working on an application that uses the iPhone camera for scientific purposes - and, as a result would like to receive sensor data in as unprocessed format as possible. I'm using AVCapturePhotoOutput to take Bayer RAW stills and receiving data in kCVPixelFormatType_14Bayer_RGGB format. However, I'm puzzled as to the content of the bits. I simply demosaic the image by taking each 2x2 square: RG GB and use R, (G+G)/2, B to get 16-bit RGB values - and this indeed works. However, I am puzzled as to the values we are getting as they seem to be approximately in the range 2048 - 16383. The top value is understandable - the maximum that you can fit in 14-bits (as implied by the pixel format type). However we don't seem to be able to get lower than ~2048 no matter how black/dark we make the sensor. I'm aware that the sensor is probably not 14-bits (we're using the iPhone 16e camera) and that maybe this is to do with the way the sensor data is packaged. The Advances in iOS Photography video (https://developer.apple.com/videos/play/wwdc2016/501/) describes it as "10-bit sensor RAW packaged in 14 bits per pixel instead of eight." Is there any documentation describing what is going on here? It's vital for our use that we get as close to the raw camera sensor light readings as possible, so any pointers as to the mapping (e.g. decompanding?) being used would be extremely useful. Many thanks in advance for your help.
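For concreteness, a sketch of the naive 2x2 demosaic described above, assuming the 14-bit RGGB samples have already been unpacked into a tightly packed UInt16 buffer; the unpacking itself and any row-stride handling are out of scope and the function name is illustrative:

```swift
import Foundation

// Naive demosaic of an RGGB Bayer buffer: one RGB value per 2x2 cell,
// with the two green samples averaged, exactly as the post describes.
func demosaic2x2(bayer: [UInt16], width: Int, height: Int) -> [(r: UInt16, g: UInt16, b: UInt16)] {
    var out: [(r: UInt16, g: UInt16, b: UInt16)] = []
    out.reserveCapacity((width / 2) * (height / 2))
    for y in stride(from: 0, to: height - 1, by: 2) {
        for x in stride(from: 0, to: width - 1, by: 2) {
            let r  = bayer[y * width + x]            // top-left:     R
            let g1 = bayer[y * width + x + 1]        // top-right:    G
            let g2 = bayer[(y + 1) * width + x]      // bottom-left:  G
            let b  = bayer[(y + 1) * width + x + 1]  // bottom-right: B
            let g  = UInt16((UInt32(g1) + UInt32(g2)) / 2)
            out.append((r: r, g: g, b: b))
        }
    }
    return out
}
```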
Replies: 3 · Boosts: 0 · Views: 150 · Activity: May ’25
Significant Uptick in AVCaptureSessionWasInterrupted (Reason 4) Leading to Camera Black Screen and AVError Code -11803
In the latest production release of our iOS app (deployed via the App Store), we've observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting/reinstalling the app. An employee ran into this and was able to get a recording. This interruption causes the camera preview to remain black, and any attempt to capture an image fails with the following error:
"Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}"
Some questions from our team:
- What common system conditions or foreground-app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and PiP, but neither of these is true in the logs we see.
- Is manual recovery of the session required? Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session (a rough sketch of one approach follows below)?
- Is there an instrument (or instruments) in Xcode you would recommend when trying to evaluate the increase in reason 4?
Best, Ben
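A hedged sketch of one recovery strategy (not confirmed Apple guidance): alongside the interruption notifications, observe runtime errors, restart the session in place where the error indicates that is possible, and otherwise tear the capture stack down and rebuild it. Names and wiring are illustrative:

```swift
import AVFoundation

// Watch for runtime errors; restart in place when media services were reset,
// otherwise hand off to a caller-supplied rebuild of inputs/outputs.
final class SessionRecovery {
    private var token: NSObjectProtocol?

    init(session: AVCaptureSession, rebuild: @escaping () -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: AVCaptureSession.runtimeErrorNotification,
            object: session, queue: .main) { note in
                let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
                if error?.code == .mediaServicesWereReset {
                    // Often recoverable in place (ideally restarted off the main thread).
                    session.startRunning()
                } else {
                    // Treat as unrecoverable: notify the user and reconfigure from scratch.
                    rebuild()
                }
            }
    }

    deinit {
        if let token = token { NotificationCenter.default.removeObserver(token) }
    }
}
```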
Replies: 3 · Boosts: 17 · Views: 498 · Activity: Jun ’25