Some users have reported an error editing portrait photo assets in my app:
The operation couldn’t be completed. (CINonLocalizedDescriptionKey error 3.)
What is that error? Will affected photos always encounter this error (due to data corruption for example) or can it be resolved in a future iOS update?
FB16241301
Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
The documentation for PHAssetChangeRequest.revertAssetContentToOriginal says it will fail if the original asset content is not on the current device, and that you should use PHAssetResourceManager to download it first. This no longer seems to be the case in the latest iOS versions: no error occurs when I take a photo on my iPhone, edit it, open Photos on my iPad and let it sync, then open my app on the iPad and call revertAssetContentToOriginal for that asset. Does the system now take care of downloading the original when needed?
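For context, here is a minimal sketch of the call being discussed, assuming asset is a PHAsset the app is allowed to modify:

import Photos

func revertToOriginal(_ asset: PHAsset) {
    PHPhotoLibrary.shared().performChanges({
        // Request that the asset's content be reverted to the original.
        let request = PHAssetChangeRequest(for: asset)
        request.revertAssetContentToOriginal()
    }) { success, error in
        if let error = error {
            print("Revert failed: \(error)")   // e.g. if the original content is unavailable
        } else {
            print("Reverted: \(success)")
        }
    }
}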
When shooting with an iPhone 15 or later, it’s possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. When transcoding to HEIC, the codec preserves the ISO 21496-1 gain map, but when converting from HEIC to JPEG, the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
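For reference, a minimal sketch of one ImageIO transcoding path where this behavior shows up; heicURL and jpegURL are hypothetical file URLs, and kCGImageDestinationPreserveGainMap only asks that a gain map be carried over, not which format it ends up in:

import Foundation
import ImageIO
import UniformTypeIdentifiers

func transcodeHEICToJPEG(heicURL: URL, jpegURL: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(heicURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(jpegURL as CFURL,
                                                            UTType.jpeg.identifier as CFString,
                                                            1, nil) else { return false }
    // Ask ImageIO to keep the gain map when copying the primary image into the JPEG.
    let options: [CFString: Any] = [kCGImageDestinationPreserveGainMap: true]
    CGImageDestinationAddImageFromSource(destination, source, 0, options as CFDictionary)
    return CGImageDestinationFinalize(destination)
}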
I'm developing a photo backup app.
To detect newly added or edited photos since the app launched, I keep a local dictionary in the format [localIdentifier: modification_date].
However, PHAsset.modificationDate is not reliable.
It often changes unexpectedly, possibly due to system operations like iCloud metadata updates.
Is there a more reliable way to detect whether a photo has been modified by the user since the last app launch?
I'm thinking about using content hash instead, but I'm not sure how heavy this operation is in terms of performance.
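If iOS 16 can be your deployment floor, the persistent change history API is designed for exactly this "what changed since my app last ran" question and avoids both modificationDate polling and hashing every asset. A sketch, assuming lastToken is a PHPersistentChangeToken the app archived on the previous launch (the iteration details here are from memory, so treat it as a starting point):

import Photos

func assetsChanged(since lastToken: PHPersistentChangeToken) throws -> Set<String> {
    var changed = Set<String>()
    let changes = try PHPhotoLibrary.shared().fetchPersistentChanges(since: lastToken)
    for change in changes {
        if let details = try? change.changeDetails(for: .asset) {
            // Local identifiers of assets added or edited since the saved token.
            changed.formUnion(details.insertedLocalIdentifiers)
            changed.formUnion(details.updatedLocalIdentifiers)
        }
    }
    // Archive PHPhotoLibrary.shared().currentChangeToken for the next launch.
    return changed
}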
Hey,
Quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen:
Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app.
I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild it. At least this way the user experience would be improved from what it is now, where users cannot even launch the app this way. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible.
Thanks, Alex
Hello,
Does anyone have a recipe on how to raycast VNFaceLandmarkRegion2D points obtained from a frame's capturedImage?
More specifically, how to construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point?
Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points prior to passing them to raycastQuery?
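A sketch of one way to build the query, assuming the Vision request ran on frame.capturedImage in its native orientation: Vision reports points with a lower-left origin, while ARFrame's raycastQuery expects normalized image coordinates with a top-left origin, hence the vertical flip (if you rotated the pixel buffer before running Vision, the mapping changes accordingly):

import ARKit
import Vision

func raycast(region: VNFaceLandmarkRegion2D, in frame: ARFrame, session: ARSession) -> ARRaycastResult? {
    let width = CVPixelBufferGetWidth(frame.capturedImage)
    let height = CVPixelBufferGetHeight(frame.capturedImage)
    // Convert the landmark points to pixel coordinates of the captured image (lower-left origin).
    guard let pixelPoint = region.pointsInImage(imageSize: CGSize(width: width, height: height)).first
    else { return nil }
    // Normalize and flip vertically for the frame's top-left-origin image space.
    let normalized = CGPoint(x: pixelPoint.x / CGFloat(width),
                             y: 1.0 - pixelPoint.y / CGFloat(height))
    let query = frame.raycastQuery(from: normalized, allowing: .estimatedPlane, alignment: .any)
    return session.raycast(query).first
}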
What options do I have if I don't want to use Blackmagic's Camera ProDock as the external Sync Hardware, but instead I want to create my own USB-C hardware accessory which would show up as an AVExternalSyncDevice on the iPhone 17 Pro?
Which protocol does my USB-C device have to implement to show up as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
I am able to capture 48 MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12 MP.
Is there a way to capture 48 MP photos using .builtInTripleCamera? It provides smooth transitions between cameras during zooming, and I'd like to keep that behavior.
The new iPhone 17 Pro has all of its cameras at 48 MP. Is there a chance that its .builtInTripleCamera can capture 48 MP, or is this an API limitation?
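For checking what a given device type actually offers at runtime, here is a small sketch using AVFoundation's photo-dimension APIs (iOS 16+); if the virtual triple camera's formats only advertise 12 MP dimensions, the cap is coming from the formats themselves rather than from your configuration:

import AVFoundation

func logSupportedPhotoDimensions(for deviceType: AVCaptureDevice.DeviceType) {
    guard let device = AVCaptureDevice.default(deviceType, for: .video, position: .back) else { return }
    for format in device.formats {
        // supportedMaxPhotoDimensions lists the still-photo sizes each format can deliver.
        let dims = format.supportedMaxPhotoDimensions.map { "\($0.width)x\($0.height)" }
        print("\(deviceType.rawValue): \(dims)")
    }
}

func optInToLargestPhotoSize(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) {
    // Opt the photo output in to the largest dimensions the active format advertises.
    if let largest = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width * $0.height < $1.width * $1.height }) {
        photoOutput.maxPhotoDimensions = largest
    }
}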
I want to fully support the new iPhone models in my app, and ideally I need to know the available lenses. However, I can't find this information on the web, and the lenses aren't reported in the simulators. The closest thing I found is this page, but it's very out of date: https://developer.apple.com/library/archive/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
My only other option is to buy each device, which isn't really feasible, or to log the data from real users via an analytics tool, which isn't ideal either.
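If you do go the logging route, the enumeration itself is straightforward; a minimal sketch (the device types listed are just examples):

import AVFoundation

func installedBackCameras() -> [String] {
    // Ask the system which physical camera types exist on this device.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera, .builtInTelephotoCamera],
        mediaType: .video,
        position: .back)
    return discovery.devices.map { "\($0.localizedName) [\($0.deviceType.rawValue)]" }
}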
Thanks,
Alex
Topic:
Media Technologies
SubTopic:
Photos & Camera
I'm writing a program to control a PTZ camera connected via USB.
I can get the target camera's unique_id, along with other info provided by AVFoundation, but I don't know how to locate the corresponding USB device to send a UVC control request.
There are many cameras with the same VendorID and ProductID connected at a time, so I need a more exact way to find out which device is my target.
It looks like the unique_id provided is (locationID<<32 | VendorID<<16 | ProductID) as a hex string, but I'm not sure I can assume this behavior won't change.
Is there a document that declares how AVFoundation generates the unique_id for a USB camera, so I can assume this conversion will always work? Or is there a way to send a PTZ control request through AVCaptureDevice?
https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice
I have seen this similar question, but extracting LocationID+VendorID+ProductID from unique_id feels like programming to an implementation instead of an interface. Is there a better way to control my camera?
Here's my example code for getting unique_id:
//
// camera_unique_id_test.mm
//
// Test code: use Objective-C++ to get the AVCaptureDevice unique_id of the
// cameras currently attached to the system.
//
// Build command:
//   clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
//           camera_unique_id_test.mm -o camera_unique_id_test
//
#include <iostream>
#include <string>
#include <vector>

#import <AVFoundation/AVFoundation.h>
#import <Foundation/Foundation.h>

struct CameraInfo {
  std::string uniqueId;
};

std::vector<CameraInfo> getAllCameraDevices() {
  std::vector<CameraInfo> cameras;
  @autoreleasepool {
    NSArray<AVCaptureDevice*>* devices =
        [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // Iterate over all devices
    for (AVCaptureDevice* device in devices) {
      CameraInfo info;
      // Get the unique_id
      info.uniqueId = std::string([device.uniqueID UTF8String]);
      cameras.push_back(info);
    }
  }
  return cameras;
}

int main(int argc, char* argv[]) {
  std::vector<CameraInfo> cameras = getAllCameraDevices();
  for (size_t i = 0; i < cameras.size(); i++) {
    const CameraInfo& camera = cameras[i];
    std::cout << "  Device " << (i + 1) << ":" << std::endl;
    std::cout << "    unique_id: " << camera.uniqueId << std::endl;
  }
  return 0;
}
and here's my code for UVC control:
// clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
#include <iostream>

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOMessage.h>
#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USB.h>

CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService,
                                            const char* key) {
  CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
                                                    kCFStringEncodingUTF8);
  if (!keyString)
    return nullptr;

  CFStringRef result = static_cast<CFStringRef>(
      IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                      kIORegistryIterateRecursively));
  CFRelease(keyString);
  return result;
}

std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
  CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
  if (!cfString)
    return "";

  char buffer[256];
  Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer),
                                       kCFStringEncodingUTF8);
  CFRelease(cfString);
  return success ? std::string(buffer) : std::string("");
}

uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
  CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
                                                    kCFStringEncodingUTF8);
  if (!keyString)
    return 0;

  CFNumberRef number = static_cast<CFNumberRef>(
      IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                      kIORegistryIterateRecursively));
  CFRelease(keyString);
  if (!number)
    return 0;

  uint32_t value = 0;
  CFNumberGetValue(number, kCFNumberSInt32Type, &value);
  CFRelease(number);
  return value;
}

int main() {
  // Get matching dictionary for USB devices
  CFMutableDictionaryRef matchingDict =
      IOServiceMatching(kIOUSBDeviceClassName);

  // Get iterator for matching services
  io_iterator_t serviceIterator;
  IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict,
                               &serviceIterator);

  // Iterate through matching devices
  io_service_t usbService;
  while ((usbService = IOIteratorNext(serviceIterator))) {
    uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
    uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
    uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");

    IOCFPlugInInterface** plugInInterface = nullptr;
    IOUSBDeviceInterface** deviceInterface = nullptr;
    SInt32 score;

    // Get device plugin interface
    IOCreatePlugInInterfaceForService(usbService, kIOUSBDeviceUserClientTypeID,
                                      kIOCFPlugInInterfaceID, &plugInInterface,
                                      &score);

    // Get device interface
    (*plugInInterface)
        ->QueryInterface(plugInInterface,
                         CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                         (LPVOID*)&deviceInterface);
    (*plugInInterface)->Release(plugInInterface);

    // Try to find UVC control interface using CreateInterfaceIterator
    io_iterator_t interfaceIterator;
    IOUSBFindInterfaceRequest interfaceRequest;
    interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass;      // 14
    interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass;  // 1
    interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
    interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;

    (*deviceInterface)
        ->CreateInterfaceIterator(deviceInterface, &interfaceRequest,
                                  &interfaceIterator);
    (*deviceInterface)->Release(deviceInterface);

    io_service_t usbInterface = IOIteratorNext(interfaceIterator);
    IOObjectRelease(interfaceIterator);

    if (usbInterface) {
      std::cout << "Get UVC device with:" << std::endl;
      std::cout << "  locationId: " << std::hex << locationId << std::endl;
      std::cout << "  vendorId: " << std::hex << vendorId << std::endl;
      std::cout << "  productId: " << std::hex << productId << std::endl
                << std::endl;
      IOObjectRelease(usbInterface);
    }
    IOObjectRelease(usbService);
  }
  IOObjectRelease(serviceIterator);
}
Recently Apple gave us the ability to upload asset resources in the background. We implemented our background upload extension, but when our CI tried to upload the app to TestFlight we got an error saying the extension point identifier - in our case com.apple.photos.backgound-upload - is not an official one. Any idea when it will become official so we can release a working background upload?
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file into a Live Photo. I am using the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when generating the Live Photo and saving it to the gallery.
Here is the relevant code and the errors I am encountering:
Console prints:
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo
I have verified that the app has the necessary permissions to access the Photo Library.
The JPEG and MOV files are successfully downloaded and can be displayed in the app.
The issue seems to occur when generating the Live Photo from the downloaded files.
struct WallpaperDetailView: View {
var wallpaper: Wallpaper
@State private var isLoading = false
@State private var isImageSaved = false
@State private var imageURL: URL?
@State private var livePhotoVideoURL: URL?
@State private var player: AVPlayer?
@State private var playerViewController: AVPlayerViewController?
@State private var isVideoReady = false
@State private var showBuffering = false
var body: some View {
ZStack {
if let imageURL = imageURL {
GeometryReader { geometry in
KFImage(imageURL)
.resizable()
...
}
}
if let playerViewController = playerViewController {
VideoPlayerViewController(playerViewController: playerViewController)
.frame(maxWidth: .infinity, maxHeight: .infinity)
.clipped()
.edgesIgnoringSafeArea(.all)
}
}
.onAppear {
PHPhotoLibrary.requestAuthorization { status in
if status == .authorized {
loadImage()
} else {
print("User denied access to photo library")
}
}
}
private func loadImage() {
isLoading = true
if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
self.imageURL = imageURL
if imageURL.scheme == "file" {
self.isLoading = false
print("Local image URL set: \(imageURL)")
} else {
fetchDownloadURL(from: imageURLString) { url in
self.imageURL = url
self.isLoading = false
print("Image URL fetched and set: \(String(describing: url))")
}
}
}
if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
self.livePhotoVideoURL = livePhotoVideoURL
preloadAndPlayVideo(from: livePhotoVideoURL)
} else {
self.isLoading = false
print("No valid image or video URL")
}
}
private func preloadAndPlayVideo(from url: URL) {
self.player = AVPlayer(url: url)
let playerViewController = AVPlayerViewController()
playerViewController.player = self.player
self.playerViewController = playerViewController
let playerItem = AVPlayerItem(url: url)
playerItem.preferredForwardBufferDuration = 1.0
self.player?.replaceCurrentItem(with: playerItem)
...
print("Live Photo Video URL set: \(url)")
}
private func saveWallpaperToPhotos() {
if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
} else if let imageURL = imageURL {
saveImageToPhotos(url: imageURL)
}
}
private func saveImageToPhotos(url: URL) {
...
}
private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
isLoading = true
downloadVideo(from: videoURL) { localVideoURL in
guard let localVideoURL = localVideoURL else {
print("Failed to download video for Live Photo")
DispatchQueue.main.async {
self.isLoading = false
}
return
}
print("Video downloaded to: \(localVideoURL)")
self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
}
}
private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
print("Progress: \(percent)")
}, completion: { livePhoto, resources in
guard let resources = resources else {
print("Failed to generate Live Photo")
DispatchQueue.main.async {
self.isLoading = false
}
return
}
print("Live Photo generated with resources: \(resources)")
self.saveLivePhotoToLibrary(resources: resources)
})
}
private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
LivePhoto.saveToLibrary(resources) { success in
DispatchQueue.main.async {
if success {
self.isImageSaved = true
print("Live Photo saved successfully")
} else {
print("Failed to save Live Photo")
}
self.isLoading = false
}
}
}
private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
let storageRef = Storage.storage().reference(forURL: gsURL)
storageRef.downloadURL { url, error in
if let error = error {
print("Failed to fetch image URL: \(error)")
completion(nil)
} else {
completion(url)
}
}
}
private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
guard let localURL = localURL, error == nil else {
print("Failed to download video: \(String(describing: error))")
completion(nil)
return
}
completion(localURL)
}
task.resume()
}
}
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Files and Storage
Swift
SwiftUI
Photos and Imaging
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I should mention that my iMac is pretty old, running macOS High Sierra 10.13.6, and it won't let me update to a newer version. The main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to import, the error message reads, "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to my iMac.
This was not a problem before the recent iPhone update. I tried closing the Photos app and reopening it, and I tried restarting my iPhone and iMac, but nothing seems to work. Any help would be much appreciated.
Hello everyone,
I have a SwiftUI app using WKWebView to load a website that includes a form with a file input element. The issue is:
📌 When a user taps “Browse” and selects “Take Photo” (camera option), the app crashes before the camera opens.
Setup Details:
• The app uses SwiftUI with WKWebView
• The crash occurs only when selecting “Take Photo”, but selecting an image from the library works fine.
📌 Full Code (WKWebView in SwiftUI)
import SwiftUI
import WebKit
struct WebViewRepresentable: UIViewRepresentable {
var urlString: String
func makeUIView(context: Context) -> WKWebView {
let webView = WKWebView()
webView.configuration.allowsInlineMediaPlayback = true
webView.configuration.mediaTypesRequiringUserActionForPlayback = []
loadURL(in: webView)
return webView
}
func updateUIView(_ uiView: WKWebView, context: Context) {
loadURL(in: uiView)
}
private func loadURL(in webView: WKWebView) {
if let url = URL(string: urlString) {
webView.load(URLRequest(url: url))
}
}
}
struct ContentView: View {
@State private var currentURL: String = "https://fv-wohlensee.ch"
var body: some View {
VStack(spacing: 0) {
// Oberer Bereich in Grün
Color(red: 0, green: 0.4, blue: 0)
.frame(height: 50)
// WebView with white background
WebViewRepresentable(urlString: currentURL)
.background(Color.white)
Divider()
// Navigation buttons
HStack(spacing: 10) {
Button {
currentURL = "https://fv-wohlensee.ch/vereinshaus-eymatt/"
} label: {
VStack {
Image(systemName: "house")
.font(.system(size: 18))
Text("Klubhaus")
.font(.system(size: 12))
.minimumScaleFactor(0.7)
.lineLimit(1)
}
.padding(8)
}
.foregroundColor(.white)
.frame(maxWidth: .infinity)
Button {
currentURL = "https://fv-wohlensee.ch/vereinsboot/"
} label: {
VStack {
Image(systemName: "ferry.fill")
.font(.system(size: 18))
Text("Boot")
.font(.system(size: 12))
.minimumScaleFactor(0.7)
.lineLimit(1)
}
.padding(8)
}
.foregroundColor(.white)
.frame(maxWidth: .infinity)
Button {
currentURL = "https://fv-wohlensee.ch/aktivitaeten/"
} label: {
VStack {
Image(systemName: "calendar")
.font(.system(size: 18))
Text("Aktivitäten")
.font(.system(size: 12))
.minimumScaleFactor(0.7)
.lineLimit(1)
}
.padding(8)
}
.foregroundColor(.white)
.frame(maxWidth: .infinity)
Button {
currentURL = "https://fv-wohlensee.ch/mitglied-werden/"
} label: {
VStack {
Image(systemName: "person.badge.plus")
.font(.system(size: 18))
Text("Mitglied")
.font(.system(size: 12))
.minimumScaleFactor(0.7)
.lineLimit(1)
}
.padding(8)
}
.foregroundColor(.white)
.frame(maxWidth: .infinity)
}
.padding(.horizontal, 15)
.padding(.vertical, 10)
.background(Color(red: 0, green: 0.4, blue: 0))
}
.frame(maxWidth: .infinity, maxHeight: .infinity)
.background(Color(red: 0, green: 0.4, blue: 0))
.ignoresSafeArea()
}
}
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
What I’ve Tried:
1️⃣ Checked Info.plist: Added permissions for camera and photo library:
<key>NSCameraUsageDescription</key>
<string>This app requires access to the camera to upload photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app requires access to your photo library.</string>
2️⃣ Enabled Media Capture in WKWebView:
webView.configuration.allowsInlineMediaPlayback = true
webView.configuration.mediaTypesRequiringUserActionForPlayback = []
3️⃣ Tested in Safari: The same form works fine when opened in Safari.
Questions:
❓ Does WKWebView need additional permissions to open the camera?
❓ Do I need to implement a delegate to handle file uploads in SwiftUI?
❓ Has anyone faced this issue and found a fix?
Any guidance would be greatly appreciated! 🚀
Thanks in advance! 😊
Topic:
Media Technologies
SubTopic:
Photos & Camera
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import Photos
struct ContentView: View {
let context = CIContext()
let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!
var body: some View {
VStack(spacing: 100) {
Button("Save Photo From CGImage/UIImage") {
savePhotoFromUIImage()
}
Button("Save Photo From CIImage") {
savePhotoDirectFromCIImage()
}
}.padding(60)
}
//convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF
private func savePhotoFromUIImage() {
if let ciImage = processRAW(url: Bundle.main.url(forResource:"test", withExtension: "dng")!) {
guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
let uiImage = UIImage(cgImage: outputCGImage)
if let heicData = uiImage.heicData() {
saveHEIFPhotoToLibrary(imageData: heicData)
} else {
print("Failed to convert UIImage to HEIC")
}
}
}
//convert RAW with CIRAWFilter to CIImage, then to HEIF
private func savePhotoDirectFromCIImage() {
if let ciImage = processRAW(url: Bundle.main.url(forResource:"test", withExtension: "dng")!) {
do {
let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
saveHEIFPhotoToLibrary(imageData: heif)
} catch {
print("Failed to get HEIF representation from CIContext")
}
}
}
private func processRAW(url: URL) -> CIImage? {
guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
coreRawFilter.extendedDynamicRangeAmount = 2.0 //the issue persists whether this is not set, or set to 0, or set to, say, 2.0
guard let ciImage = coreRawFilter.outputImage else { return nil }
return ciImage
}
private func saveHEIFPhotoToLibrary(imageData: Data) {
PHPhotoLibrary.shared().performChanges({
let creationRequest = PHAssetCreationRequest.forAsset()
let options = PHAssetResourceCreationOptions()
creationRequest.addResource(with: .photo, data: imageData, options: options)
}) { success, error in
if let error = error {
print("Error saving photo: \(error.localizedDescription)")
} else {
print("Photo saved.")
}
}
}
}
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Photos and Imaging
Core Graphics
Core Image
EDR
Hi,
I would like to use macro mode in the custom camera I build with AVCaptureDevice in my project. This feature would automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available and there are no public APIs from Apple to achieve macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible.
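I'm not aware of a dedicated public macro-mode API, but as a hedged sketch of one approach: the virtual devices (.builtInDualWideCamera, .builtInTripleCamera) switch constituent cameras automatically, and on supported hardware that automatic switching includes moving to the ultra-wide for very close subjects:

import AVFoundation

func makeAutoSwitchingCameraInput() throws -> AVCaptureDeviceInput? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)
    guard let device = discovery.devices.first else { return nil }
    // Constituent-device switching (zoom- and distance-based) is automatic by default on
    // virtual devices; no extra configuration is required for the common case.
    print("Using \(device.localizedName), virtual: \(device.isVirtualDevice)")
    return try AVCaptureDeviceInput(device: device)
}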
Thank you,
Adil Thamarasseri
Using the iPhone 14 Max camera, I want to implement model recognition and draw a rectangular box around the recognized object, with its width and height calculated using LiDAR and displayed in centimeters on the real-time updated image.
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
guard let self = self else { return }
let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"
guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
DispatchQueue.main.async {
self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
}
return
}
do {
let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
self.videoCapture.captureSession.beginConfiguration()
if isSwitchingToUltraWide && self.isFlashlightOn {
self.forceEnableTorchThroughWide()
}
if let currentInput = currentInput {
self.videoCapture.captureSession.removeInput(currentInput)
}
let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
self.videoCapture.captureSession.addInput(videoInput)
self.videoCapture.captureSession.commitConfiguration()
self.videoCapture.updateVideoOrientation()
DispatchQueue.main.async {
if let barButton = sender as? UIBarButtonItem {
barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
}
print("Switched to \(cameraName) camera.")
}
self.isUsingFisheyeCamera.toggle()
} catch {
DispatchQueue.main.async {
self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
}
}
}
}
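For reference, a minimal torch-toggle sketch (this is not the forceEnableTorchThroughWide() helper referenced above, which isn't shown); if the ultra-wide device itself doesn't report hasTorch, the torch has to be driven through a device that does, such as the wide camera or a virtual device that contains it:

import AVFoundation

func setTorch(_ on: Bool, for device: AVCaptureDevice) {
    // Only touch the torch on a device that actually supports the requested mode.
    guard device.hasTorch, device.isTorchModeSupported(on ? .on : .off) else { return }
    do {
        try device.lockForConfiguration()
        if on {
            try device.setTorchModeOn(level: 1.0)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}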
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
We are encountering a critical, intermittently occurring crash issue when accessing photo data using PHAssetResourceManager.writeDataForAssetResource on iOS 18. The problem does not arise on iOS 17 or earlier versions.
We have been unable to identify a consistent reproduction path. Based on user feedback, the issue seems to involve Live Photo and Raw image files.
Our investigation has revealed that the crash occurs in the +[PISchema identifier] method of the PhotoImaging Framework. When called manually, this method causes a crash on iOS 18 but works without issues on iOS 17.
Reproduction Steps (a Swift sketch of these steps follows the list):
1. Fetch a PHAsset.
2. Get its PHAssetResource via [PHAssetResource assetResourcesForAsset:].
3. Call [PHAssetResourceManager writeDataForAssetResource:toFile:options:completionHandler:].
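Here is that sketch, assuming photo library access has already been granted; destinationURL is a hypothetical file URL in the app's container:

import Photos

// Step 1 (e.g. the most recent image):
// let asset = PHAsset.fetchAssets(with: .image, options: nil).firstObject

func writeFirstResource(of asset: PHAsset, to destinationURL: URL) {
    let resources = PHAssetResource.assetResources(for: asset)           // step 2
    guard let resource = resources.first else { return }
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true
    PHAssetResourceManager.default().writeData(for: resource,            // step 3
                                               toFile: destinationURL,
                                               options: options) { error in
        if let error = error {
            print("writeData failed: \(error)")
        }
    }
}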
Crash Log:
Incident Identifier: CFD60092-FDB1-43B4-BA42-3F507F7B8B96
CrashReporter Key: 260b4780989083a54e0cb451930fe9a3bed64862
Hardware Model: iPhone13,4
AppStoreTools: 16C5031b
AppVariant: 1:iPhone13,4:18
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Date/Time: 2025-02-15 19:07:57.7054 +0800
Launch Time: 2025-02-15 19:07:55.4106 +0800
OS Version: iPhone OS 18.3.1 (22D72)
Release Type: User
Baseband Version: 5.20.03
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: mCloud_iPhone [11109]
Triggered by Thread: 11
Application Specific Information:
abort() called
Thread 11 name: Dispatch queue: com.apple.NSXPCConnection.m-user.com.apple.photos.service
Thread 11 Crashed:
0 libsystem_kernel.dylib 0x1e850b2d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x221b4959c pthread_kill + 268
2 libsystem_c.dylib 0x19ec24b08 abort + 128
3 NeutrinoCore 0x1bdcdbdec -[NUAssertionPolicyAbort notifyAssertion:] + 68
4 NeutrinoCore 0x1bdcdbbf4 -[NUAssertionPolicyComposite notifyAssertion:] + 160
5 NeutrinoCore 0x1bdcdc098 -[NUAssertionPolicyUnique notifyAssertion:] + 176
6 NeutrinoCore 0x1bdcdb524 -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7 NeutrinoCore 0x1bdcdc4bc _NUAssertFailHandler + 176
8 NeutrinoCore 0x1bdc8ea98 -[NUIdentifier initWithNamespace:name:version:] + 2352
9 NeutrinoCore 0x1bdc8eba8 -[NUIdentifier initWithName:version:] + 84
10 NeutrinoCore 0x1bdc8ec10 -[NUIdentifier initWithName:] + 68
11 PhotoImaging 0x1bda54ce4 +[PISchema identifier] + 36
12 PhotoImaging 0x1bda550fc +[PISchema registeredPhotosSchemaIdentifier] + 32
13 PhotoImaging 0x1bd9d7128 +[PIPhotoEditHelper newComposition] + 28
14 PhotoImaging 0x1bd940798 +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15 PhotoImaging 0x1bd9412ec +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16 PhotoLibraryServices 0x1afabf75c -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1856
17 PhotoLibraryServices 0x1afabffe4 +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18 Photos 0x1af4ac360 __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19 Photos 0x1af4ac67c -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20 Photos 0x1af4ac4b0 -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21 Photos 0x1af4abb8c -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 404
22 Photos 0x1af4a911c -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23 Photos 0x1af2c1d10 -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24 Photos 0x1af2c11e8 -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 88
25 Photos 0x1af505184 -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26 Photos 0x1af5050a0 __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27 PhotoLibraryServicesCore 0x1b001be8c __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.86 + 864
28 CoreFoundation 0x196dd8e34 __invoking___ + 148
29 CoreFoundation 0x196dd7e7c -[NSInvocation invoke] + 428
30 Foundation 0x195a64ae0 __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT__ + 16
31 Foundation 0x195a63514 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 532
32 Foundation 0x195a6653c __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33 libxpc.dylib 0x221babb80 _xpc_connection_reply_callout + 116
34 libxpc.dylib 0x221b9e2d0 _xpc_connection_call_reply_async + 80
35 libdispatch.dylib 0x19eb6b028 _dispatch_client_callout3 + 20
36 libdispatch.dylib 0x19eb88b64 _dispatch_mach_msg_async_reply_invoke + 340
37 libdispatch.dylib 0x19eb7242c _dispatch_lane_serial_drain + 352
38 libdispatch.dylib 0x19eb73158 _dispatch_lane_invoke + 432
39 libdispatch.dylib 0x19eb7e38c _dispatch_root_queue_drain_deferred_wlh + 288
40 libdispatch.dylib 0x19eb7dbd8 _dispatch_workloop_worker_thread + 540
41 libsystem_pthread.dylib 0x221b44680 _pthread_wqthread + 288
42 libsystem_pthread.dylib 0x221b42474 start_wqthread + 8
Topic:
Media Technologies
SubTopic:
Photos & Camera
I am writing an iOS app to present a slide show of assets in a Photo album, in a random order, including videos and live photos. I have got it all working quite nicely but for a Live Photo, I need to know what effect is selected (Live, Loop, Bounce, Long Exposure, Live Off) to display the image correctly. I can't find any mention of getting this information in the documentation. Anyone know how to do this? Thanks in advance.
Adrian.
(Xcode 16.1 iOS 18.0)
Topic:
Media Technologies
SubTopic:
Photos & Camera