Photos & Camera

Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic

WWDC25 Camera & Photos group lab summary (Part 1 of 3)
(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3.)

At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.

Introductory kick-off questions

Question 1: Tell us a little about the new AVFoundation capture APIs made available in the new iOS 26 developer preview.
- Cinematic Capture API (strong/weak focus, tracking focus; scene monitoring; simulated aperture; dog/cat heads and group IDs)
- Camera Controls and AirPod stem clicks
- Spatial Audio and studio-quality AirPods mics in Camera
- Lens Smudge Detection
- Exposure and Focus Rect of Interest

Question 2: I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year the cameras get better and better as we push the state of the art in iPhone photography and videography, and this sometimes results in changes to the characteristics of the lenses:
- min focus distance
- newer phones have multiple lenses
- automatic switching behavior
Use a virtual device like the builtInDualWideCamera or builtInTripleCamera, rather than just the builtInWideAngleCamera, set the videoZoomFactor to 2, and you're done. (See the sketch after these kick-off questions.)

Question 3: Last year we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead be as close as possible to the real color of the scene, regardless of the illumination. Constant Color images could be captured in HEIF and JPEG last year; this year we are adding support for the DICOM medical imaging photo format, a format used by the health industry to store images related to medical subjects such as MRIs, skin conditions, X-rays, and so on. It is a writable and readable format on all OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics API; for Core Graphics there is a new DICOM entry in the property dictionary that includes all the available, defined DICOM properties in a file, and Finder will also display those in the Info panel. As for why a developer would want to use it: not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that users may also share with health providers or their doctors.
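To make the QR-code advice concrete, here is a minimal sketch of the suggested configuration, assuming a basic back-camera scanning session (the error type and setup details are illustrative, not from the lab):

```swift
import AVFoundation

enum CameraSetupError: Error { case noCamera }

func makeQRScanningSession() throws -> AVCaptureSession {
    let session = AVCaptureSession()

    // A virtual device lets the system switch lenses automatically when the
    // subject is closer than the main lens's minimum focus distance.
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                               for: .video, position: .back) else {
        throw CameraSetupError.noCamera
    }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    let output = AVCaptureMetadataOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
        // Metadata object types must be set after the output joins the session.
        output.metadataObjectTypes = [.qr]
    }

    // A 2x zoom on the dual-wide virtual device corresponds to the wide
    // camera's field of view while keeping automatic switching to the
    // ultra-wide lens for close-up focus.
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0
    device.unlockForConfiguration()

    return session
}
```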
Main session developer questions

Question 1: LiDAR vs. Dual Camera depth generation: which resolution does the LiDAR sensor natively have (iPhone 16 Pro), and when should you prefer LiDAR over the Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution). LiDAR is best for absolute depth, real-world scale, and computer vision; Dual (and similar) devices are relative and disparity-based, use less power, and suit photo effects. Also see the 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking".

Question 2: Can the TrueDepth and LiDAR cameras run at 60 fps?
LiDAR can do 30 fps; the front TrueDepth camera can do 60 fps.

Question 3: What's the first-class way to use PhotoKit to reimplement a high-performance photo grid? We've been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60 Hz, efficient memory footprint, minimal flashes of placeholder/empty cells).
Use the PHCachingImageManager to get media content delivered before you need to display it, specify the size you need for grid-sized display, and set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat, and PHImageRequestOptionsResizeModeFast. (See the sketch at the end of this post.)

Question 4: For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path. Use a video data output plus AVSampleBufferDisplayLayer if you need to modify the image data. SwiftUI's Image is optimized for static image content.

Question 5: Is there a way to configure the AVFoundation builtInLiDARDepthCamera mode to provide a depth map as accurate as ARKit at close range?
AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values; consider using this for smoother depth maps.

Question 6: Pyramid-based photo editing in Core Image (such as Adobe Camera Raw's highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust. The noise reduction in CIRAWFilter also uses a pyramid-based algorithm. You can also write your own pyramid-based algorithms: take an input image, downsample it by two multiple times using imageByApplyingAffineTransform, apply additional CIKernels to each downsampled image as needed, and use a custom CIKernel to combine the results.

Question 7: Is UIImagePickerController the best way to integrate an in-app camera for a "non-camera" app?
Yes; UIImagePickerController provides system-provided UI for capturing photos and movies.

Question 8: My question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture; how can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point; the photo will then have to be re-inserted into the photo library as an adjustment.

Question 9: For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to it save a lot of space compared with PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha. If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.

(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3.)
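A minimal sketch of the PHCachingImageManager pattern from Question 3 above, assuming a grid of PHAssets (cell size and prefetch policy are illustrative):

```swift
import Photos
import UIKit

final class GridImageSource {
    private let manager = PHCachingImageManager()
    private let targetSize = CGSize(width: 200, height: 200) // grid cell size, illustrative
    private let options: PHImageRequestOptions = {
        let o = PHImageRequestOptions()
        // Favor speed over quality while scrolling, as recommended in the lab.
        o.deliveryMode = .fastFormat
        o.resizeMode = .fast
        return o
    }()

    // Warm the cache for assets that are about to scroll on screen.
    func prefetch(_ assets: [PHAsset]) {
        manager.startCachingImages(for: assets, targetSize: targetSize,
                                   contentMode: .aspectFill, options: options)
    }

    func cancelPrefetch(_ assets: [PHAsset]) {
        manager.stopCachingImages(for: assets, targetSize: targetSize,
                                  contentMode: .aspectFill, options: options)
    }

    func image(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
        manager.requestImage(for: asset, targetSize: targetSize,
                             contentMode: .aspectFill, options: options) { image, _ in
            completion(image)
        }
    }
}
```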
Replies: 0 · Boosts: 0 · Views: 259 · Created: Jul ’25
Camera becomes black for a few production users during photo capture
PLATFORM AND VERSION: iOS 18.5

I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users encounter a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view. This camera blackout issue significantly impacts the user experience, as it prevents users from snapping photos of their receipts, which is a core functionality of the CoinOut app. Any help/suggestions on this issue would be greatly appreciated.

STEPS TO REPRODUCE
1. Open the app and tap the camera icon; it displays the camera to capture a photo.
2. The camera shows black for a few production users.

```swift
import UIKit
import AVFoundation
import AudioToolbox

class ViewController: UIViewController {
    @IBOutlet private weak var captureButton: UIButton!
    private var fillLayer: CAShapeLayer!
    private var previewLayer: AVCaptureVideoPreviewLayer!
    private var output: AVCapturePhotoOutput!
    private var device: AVCaptureDevice!
    private var session: AVCaptureSession!
    private var highResolutionEnabled: Bool = false
    private let sessionQueue = DispatchQueue(label: "session queue")

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCamera()
        customiseUI()
    }

    @IBAction func startCamera(sender: UIButton) {
        didTapTakePhoto()
    }

    private func setupCamera() {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.high
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        output = AVCapturePhotoOutput()
        device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)

        if let device = self.device {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session Input addition failed")
                }
                if session.canAddOutput(output) {
                    output.isHighResolutionCaptureEnabled = self.highResolutionEnabled
                    session.addOutput(output)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session Input high resolution failed")
                }
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                sessionQueue.async {
                    session.startRunning()
                }
                self.session = session
                self.session.accessibilityElementIsFocused()

                try device.lockForConfiguration()
                if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.autoWhiteBalance) {
                    device.whiteBalanceMode = .autoWhiteBalance
                } else {
                    print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported")
                }
                if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.continuousAutoWhiteBalance) {
                    device.whiteBalanceMode = .continuousAutoWhiteBalance
                } else {
                    print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported")
                }
                if device.isFocusModeSupported(.continuousAutoFocus) {
                    device.focusMode = .continuousAutoFocus
                } else if device.isFocusModeSupported(.autoFocus) {
                    device.focusMode = .autoFocus
                }
                device.unlockForConfiguration()
            } catch {
                print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)")
            }
        } else {
            print("\(#fileID):\(#function):\(#line) : Device found as nil")
        }
    }

    private func customiseUI() {
        let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height), cornerRadius: 0)
        let rectangleWidth = view.frame.width - (view.frame.width * 0.16)
        let x = (view.frame.width - rectangleWidth) / 2
        let rectangleHeight = view.frame.height - (view.frame.height * 0.16)
        let y = (view.frame.height - rectangleHeight) / 2
        let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners: .allCorners, cornerRadii: CGSize(width: 0, height: 0))
        roundRect.move(to: CGPoint(x: self.view.center.x, y: self.view.center.y))
        path.append(roundRect)
        path.usesEvenOddFillRule = true

        fillLayer = CAShapeLayer()
        fillLayer.path = path.cgPath
        fillLayer.fillRule = .evenOdd
        fillLayer.opacity = 0.4
        previewLayer.addSublayer(fillLayer)
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)
        view.bringSubviewToFront(captureButton)
    }

    private func didTapTakePhoto() {
        let settings = self.getSettings(camera: self.device)
        if device.isAdjustingFocus {
            do {
                try device.lockForConfiguration()
                device.focusMode = .continuousAutoFocus
                device.unlockForConfiguration()
                device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil)
            } catch {
                print(error)
            }
        } else {
            output.capturePhoto(with: settings, delegate: self)
        }
    }

    func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings {
        var settings = AVCapturePhotoSettings()
        if let rawFormat = output.availableRawPhotoPixelFormatTypes.first {
            settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat))
        }
        settings.isHighResolutionPhotoEnabled = self.highResolutionEnabled
        let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
        let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType] as [String: Any]
        settings.previewPhotoFormat = previewFormat
        return settings
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        AudioServicesDisposeSystemSoundID(1108)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        let image = UIImage(data: data)!
        showImage(cropped: image)
    }

    func showImage(cropped: UIImage) {
        let vc = self.storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController
        vc?.captured = cropped
        self.present(vc!, animated: true)
    }
}
```
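Not from the post: a commonly suggested first diagnostic for a black preview is to confirm camera authorization before configuring the session, since a denied permission produces a black preview layer without any error. A minimal sketch (names are illustrative):

```swift
import AVFoundation

// A denied or restricted camera permission renders the preview layer black
// without any error being thrown during session configuration.
func ensureCameraAccess(then start: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        start()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { DispatchQueue.main.async(execute: start) }
        }
    default:
        print("Camera access denied or restricted; preview will stay black")
    }
}
```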
Replies: 1 · Boosts: 0 · Views: 145 · Created: Jul ’25
After iPadOS 26 Beta and iOS 26 Beta, AVCaptureMetadataOutput no longer detects Face on some devices.
I'm creating an app that uses an AVCaptureSession to pass camera input to an AVCaptureMetadataOutput whose type is set with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] to scan for faces. After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue: iPad (9th gen), iPad Air (4th gen), and iPhone 15. The issue does not occur on any other devices I have. I tried running the AVFoundation sample code from the Apple Developer site (https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces) on the devices above, and the same problem still occurs. Are any additional settings required after the iPadOS 26 and iOS 26 betas, or is there some problem on the OS side?
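For reference, a minimal face-metadata configuration of the kind the post describes (a sketch in Swift rather than the post's Objective-C; illustrative only):

```swift
import AVFoundation

func configureFaceDetection(on session: AVCaptureSession,
                            delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return }
    session.addOutput(output)

    // Metadata object types must be set after the output is added to the
    // session; .face only appears in availableMetadataObjectTypes then.
    if output.availableMetadataObjectTypes.contains(.face) {
        output.metadataObjectTypes = [.face]
    }
    output.setMetadataObjectsDelegate(delegate, queue: .main)
}
```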
Replies: 1 · Boosts: 6 · Views: 431 · Created: Jul ’25
Memory leak when performing a DetectHumanBodyPose3DRequest
Hi, I'm developing an application for macOS and iOS that has to run a DetectHumanBodyPose3DRequest model in real time to retrieve the 3D skeleton from the camera. I'm experiencing a memory leak every time the model is used (when I comment out that line, memory stays constant): after a minute it uses about 1 GB of RAM running with Mac Catalyst. I attached a minimal project that has this problem.

Code

CameraView:
```swift
import SwiftUI
import Combine
import Vision

struct CameraView: View {
    @StateObject private var viewModel = CameraViewModel()

    var body: some View {
        HStack {
            ZStack {
                GeometryReader { geometry in
                    if let image = viewModel.currentFrame {
                        Image(decorative: image, scale: 1)
                            .resizable()
                            .scaledToFill()
                            .frame(width: geometry.size.width, height: geometry.size.height)
                            .clipped()
                    } else {
                        ProgressView()
                    }
                }
            }
        }
    }
}

class CameraViewModel: ObservableObject {
    @Published var currentFrame: CGImage?
    @Published var frameRate: Double = 0
    @Published var currentVisionBodyPose: HumanBodyPose3DObservation? // Store current body pose
    @Published var currentImageSize: CGSize? // Store current image size

    private var cameraManager: CameraManager?
    private var humanBodyPose = HumanBodyPose3DDetector()
    private var lastClassificationTime = Date()
    private var frameCount = 0
    private var lastFrameTime = Date()
    private let classificationThrottleInterval: TimeInterval = 1.0
    private var lastPoseSendTime: Date = .distantPast

    init() {
        cameraManager = CameraManager()
        startPreview()
        startClassification()
    }

    private func startPreview() {
        Task {
            guard let previewStream = cameraManager?.previewStream else { return }
            for await frame in previewStream {
                let size = CGSize(width: frame.width, height: frame.height)
                Task { @MainActor in
                    self.currentFrame = frame
                    self.currentImageSize = size
                    self.updateFrameRate()
                }
            }
        }
    }

    private func startClassification() {
        Task {
            guard let classificationStream = cameraManager?.classificationStream else { return }
            for await pixelBuffer in classificationStream {
                self.classifyFrame(pixelBuffer: pixelBuffer)
            }
        }
    }

    private func classifyFrame(pixelBuffer: CVPixelBuffer) {
        humanBodyPose.runHumanBodyPose3DRequestOnImage(pixelBuffer: pixelBuffer) { [weak self] observation in
            guard let self = self else { return }
            DispatchQueue.main.async {
                if let observation = observation {
                    self.currentVisionBodyPose = observation
                    print(observation)
                } else {
                    self.currentVisionBodyPose = nil
                }
            }
        }
    }

    private func updateFrameRate() {
        frameCount += 1
        let now = Date()
        let elapsed = now.timeIntervalSince(lastFrameTime)
        if elapsed >= 1.0 {
            frameRate = Double(frameCount) / elapsed
            frameCount = 0
            lastFrameTime = now
        }
    }
}
```

HumanBodyPose3DDetector:
```swift
import Foundation
import Vision

class HumanBodyPose3DDetector: NSObject, ObservableObject {
    @Published var humanObservation: HumanBodyPose3DObservation? = nil
    private let queue = DispatchQueue(label: "humanbodypose.queue")
    private let request = DetectHumanBodyPose3DRequest()

    private struct SendablePixelBuffer: @unchecked Sendable {
        let buffer: CVPixelBuffer
    }

    public func runHumanBodyPose3DRequestOnImage(pixelBuffer: CVPixelBuffer,
                                                 completion: @escaping (HumanBodyPose3DObservation?) -> Void) {
        let sendableBuffer = SendablePixelBuffer(buffer: pixelBuffer)
        queue.async { [weak self] in
            Task { [weak self, sendableBuffer] in
                do {
                    guard let self = self else { return }
                    let result = try await self.request.perform(on: sendableBuffer.buffer)
                    // Process result
                    DispatchQueue.main.async {
                        if result.isEmpty {
                            completion(nil)
                        } else {
                            completion(result[0])
                        }
                    }
                } catch {
                    DispatchQueue.main.async {
                        completion(nil)
                    }
                }
            }
        }
    }
}
```
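Not from the post: when per-frame Vision requests grow memory, one mitigation often suggested is to drop incoming frames while a request is still in flight, so pixel buffers do not accumulate in a queue of pending Tasks. A sketch under that assumption (illustrative, not a confirmed fix for this leak):

```swift
import Vision
import CoreVideo

final class ThrottledPoseDetector {
    private let request = DetectHumanBodyPose3DRequest()
    // Not thread-safe; assumes process(_:) is called from a single queue.
    private var isProcessing = false

    func process(_ pixelBuffer: CVPixelBuffer,
                 completion: @escaping (HumanBodyPose3DObservation?) -> Void) {
        // Drop frames while a request is in flight so buffers are not
        // retained by a growing backlog of pending requests.
        guard !isProcessing else { return }
        isProcessing = true
        Task {
            defer { isProcessing = false }
            let result = try? await request.perform(on: pixelBuffer)
            completion(result?.first)
        }
    }
}
```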
Replies: 1 · Boosts: 0 · Views: 118 · Created: Jun ’25
When deleting photos, encountered PHPhotosError.operationInterrupted (3301).
Hi, I’ve developed a photo app that includes a photo deletion feature. Some users have reported encountering PHPhotosError.operationInterrupted (3301) when attempting to delete photos. Initially I suspected that some of the assets might have a sourceType of typeiTunesSynced, since the documentation notes that iTunes-synced assets cannot be edited or deleted. However, after checking the logs, all of the assets involved are of typeUserLibrary. Additionally, a user mentioned that some photos in the iPhone Photos app do not show a delete button; I'm unsure whether the absence of the delete button is related to the 3301 error. I'd like to confirm the following: 1. Under what conditions does PHPhotosError.operationInterrupted (3301) occur, and how should it be handled? 2. Why do some photos in the iPhone Photos app not show a delete button? The code for deleting photos is as follows:

```objc
PHPhotoLibrary *library = [PHPhotoLibrary sharedPhotoLibrary];
[library performChanges:^{
    PHFetchResult *assetsToBeDeleted = [PHAsset fetchAssetsWithLocalIdentifiers:delUrls options:nil];
    if (assetsToBeDeleted) {
        [PHAssetChangeRequest deleteAssets:assetsToBeDeleted];
    }
} completionHandler:^(BOOL success, NSError *error) {
    // (The handler body was truncated in the original post.)
}];
```
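Not from the post: one way to surface undeletable assets up front is to ask each asset whether it supports the delete edit operation before requesting the change. A sketch in Swift (the post's code is Objective-C; function name is illustrative):

```swift
import Photos

func deletableAssets(withIdentifiers ids: [String]) -> [PHAsset] {
    let fetched = PHAsset.fetchAssets(withLocalIdentifiers: ids, options: nil)
    var deletable: [PHAsset] = []
    fetched.enumerateObjects { asset, _, _ in
        // Assets that cannot be deleted (possibly the same ones missing a
        // delete button in Photos) are filtered out before performChanges.
        if asset.canPerform(.delete) {
            deletable.append(asset)
        }
    }
    return deletable
}
```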
Replies: 0 · Boosts: 0 · Views: 83 · Created: Jun ’25
Is a Locked Capture Extension allowed to just "open the app" when the device is unlocked?
Hey, quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen: Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app. I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild the UI. At least this way the user experience would be improved from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible. Thanks, Alex
Replies: 0 · Boosts: 0 · Views: 175 · Created: Jun ’25
Capture session interrupted randomly
We are facing a strange issue where a small portion of our large user base cannot start the capture session in our app, as it gets interrupted with the following reason: AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps. Our users are all on iPhones; no one is using an iPad. Just to be sure, we have set session.isMultitaskingCameraAccessEnabled = true, but it does not seem to make any difference. Another weird scenario we are seeing, on an even smaller number of users, is that the following call returns nil: AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back). A quick look at our error reports shows this happening on iPhone XR, 13, and 14 models; they should all support this device type. Any help investigating these issues would be greatly appreciated!
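Not from the post: a sketch of logging the interruption reason when it happens, plus a discovery-session fallback for the nil default-device case (illustrative, not a confirmed fix):

```swift
import AVFoundation

func observeInterruptions(of session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session, queue: .main) { note in
        if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            // .videoDeviceNotAvailableWithMultipleForegroundApps is the
            // case reported in this post.
            print("Capture session interrupted: \(reason)")
        }
    }
}

// Fallback discovery when AVCaptureDevice.default(...) returns nil.
func backWideCamera() -> AVCaptureDevice? {
    AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
        ?? AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                            mediaType: .video,
                                            position: .back).devices.first
}
```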
Replies: 0 · Boosts: 0 · Views: 78 · Created: Jun ’25
Why does a DriverKit extension need a CMIO extension?
I developed a DriverKit extension based on "Overriding the default USB video class extension", but that article didn't give the details of the implementation. I asked DTS, who gave two tips: 1. Do you also have a CMIO extension to load in place of the default one? 2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID. I want to know why a DriverKit extension needs a CMIO extension, and what the data and control flow between them is.
Replies: 2 · Boosts: 0 · Views: 89 · Created: Jun ’25
How to override the default USB video
According to the doc, I did a simple demo to verify.

My env:
ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz quad-core Intel Core i5

Info.plist:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IOKitPersonalities</key>
    <dict>
        <key>UVCamera</key>
        <dict>
            <key>CFBundleIdentifierKernel</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOUserService</string>
            <key>IOMatchCategory</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProviderClass</key>
            <string>IOUserResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClass</key>
            <string>UVCamera</string>
            <key>IOUserServerName</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
        </dict>
    </dict>
    <key>OSBundleUsageDescription</key>
    <string></string>
</dict>
</plist>
```

UVCamera.cpp:
```cpp
//
//  UVCamera.cpp
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"

kern_return_t IMPL(UVCamera, Start) {
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}
```

UVCamera.iig:
```cpp
//
//  UVCamera.iig
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#ifndef UVCamera_h
#define UVCamera_h

#include <Availability.h>
#include <DriverKit/IOService.iig>

class UVCamera: public IOService {
public:
    virtual kern_return_t Start(IOService *provider) override;
};

#endif /* UVCamera_h */
```

Then I built with Xcode and moved it to /Library/DriverExtensions:
```
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
```

However, the dext can't be loaded:
```
kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
```

What's the problem? Can anyone help me?
Replies: 0 · Boosts: 0 · Views: 80 · Created: Jun ’25
How to toggle a USB device
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: IOReturn -536870210. How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera?

My environment:
Model Name: MacBook Pro
ProductVersion: 15.5
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1

```cpp
// Disable/enable a USB device
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
    std::cout << (enable ? "Enabling" : "Disabling")
              << " USB device with VID: 0x" << std::hex << vendorID
              << ", PID: 0x" << productID << std::endl;

    // Create a matching dictionary to find the USB device with the given VID/PID
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        std::cerr << "Failed to create USB device matching dictionary." << std::endl;
        return false;
    }

    // Set the VID/PID matching criteria
    CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
    CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
    CFRelease(vendorIDRef);
    CFRelease(productIDRef);

    // Get an iterator over the matching devices
    io_iterator_t deviceIterator;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
        std::cerr << "Failed to get USB device iterator." << std::endl;
        CFRelease(matchingDict);
        return false;
    }

    io_service_t usbDevice;
    bool result = false;
    int deviceCount = 0;

    // Iterate over all matching devices
    while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
        deviceCount++;

        // Get the device path
        char path[1024];
        if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
            std::cout << "Found device at path: " << path << std::endl;
        }

        // Open the device
        IOCFPlugInInterface **plugInInterface = NULL;
        IOUSBDeviceInterface **deviceInterface = NULL;
        SInt32 score;
        IOReturn ret = IOCreatePlugInInterfaceForService(usbDevice,
                                                         kIOUSBDeviceUserClientTypeID,
                                                         kIOCFPlugInInterfaceID,
                                                         &plugInInterface, &score);
        if (ret == kIOReturnSuccess && plugInInterface) {
            ret = (*plugInInterface)->QueryInterface(plugInInterface,
                                                     CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                                     (LPVOID *)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);
        }
        if (ret != kIOReturnSuccess) {
            std::cerr << "Failed to open USB device interface. Error:" << ret << std::endl;
            IOObjectRelease(usbDevice);
            continue;
        }

        // Disable/enable the device
        if (enable) {
            // Enable the device by re-enumerating it
            ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device enabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to enable device. Error: " << ret << std::endl;
            }
        } else {
            // Disable the device by closing the connection
            ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device disabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to disable device. Error: " << ret << std::endl;
            }
        }

        // Release the device interface
        (*deviceInterface)->Release(deviceInterface);
        IOObjectRelease(usbDevice);
    }
    IOObjectRelease(deviceIterator);

    if (deviceCount == 0) {
        std::cerr << "No device found with specified VID/PID." << std::endl;
        return false;
    }
    return result;
}
```
Replies: 0 · Boosts: 0 · Views: 76 · Created: Jun ’25
PHAssetChangeRequest revertAssetContentToOriginal without original asset content
The documentation for PHAssetChangeRequest.revertAssetContentToOriginal says it will fail if the original asset content is not on the current device, so you should use PHAssetResourceManager to download it first. However, this no longer seems to be the case in the latest iOS versions: no error occurs when I take a photo on my iPhone, edit it, open Photos on my iPad and let it sync, then open my app on the iPad and call revertAssetContentToOriginal for that asset. Does the system now take care of downloading the original when needed?
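For reference, the documented pattern of downloading the original resource before reverting might look like the sketch below (illustrative only; it does not confirm whether newer OS versions download automatically):

```swift
import Photos

func revertAfterDownloadingOriginal(_ asset: PHAsset) {
    guard let original = PHAssetResource.assetResources(for: asset)
        .first(where: { $0.type == .photo }) else { return }

    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true // allow an iCloud download

    PHAssetResourceManager.default().requestData(
        for: original, options: options,
        dataReceivedHandler: { _ in /* original bytes are now cached locally */ },
        completionHandler: { error in
            guard error == nil else { return }
            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest(for: asset).revertAssetContentToOriginal()
            }, completionHandler: { success, error in
                print("Revert finished: \(success), error: \(String(describing: error))")
            })
        })
}
```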
Replies: 1 · Boosts: 0 · Views: 110 · Created: Jun ’25
RealityKit/ARKit Memory Not Fully Released After AR Session Cleanup
Hi, I'm developing a SwiftUI app using RealityKit and ARKit for an AR measuring feature. I've noticed that after navigating away from my AR view and performing extensive cleanup (including removing all anchors/entities, pausing the ARSession, and nil-ing out all references), memory usage remains elevated and sometimes grows with repeated AR sessions. Each time I enter and exit the AR view, memory increases, and it does not return to the baseline after cleanup, even though all custom objects are deallocated. Are there best practices beyond what I've described to ensure all ARKit/RealityKit resources are released after an AR session?
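For reference, a typical teardown of the kind the post describes (a sketch assuming a UIKit-hosted ARView; whether further steps exist is exactly the open question here):

```swift
import RealityKit
import ARKit

func tearDownARView(_ arView: ARView) {
    // Stop frame delivery before releasing scene content.
    arView.session.pause()
    arView.session.delegate = nil

    // Remove all anchors and their entity trees.
    arView.scene.anchors.removeAll()

    // Detach from the view hierarchy so layers and drawables can release.
    arView.removeFromSuperview()
}
```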
Replies: 0 · Boosts: 0 · Views: 69 · Created: Jun ’25
Any app shows a black square instead of the camera picture
Hi! I am making an app for Apple Vision Pro (visionOS 2.5) that scans the surroundings and recognises all the text around you. I tried to use the AVCaptureSession library, but when I run the app from Xcode on the real AVP device, the camera is not accessible. I enabled camera access in my Info.plist (NSCameraUsageDescription: "Used for live text recognition"), and I checked the camera settings on the AVP; there are no restrictions. However, I always see a black square with a crossed-out camera icon displayed instead of the image from the camera. I tried a couple of different apps from GitHub that use AVCaptureSession, and they all display the black square instead of the picture. What could be wrong with the camera?
Replies: 2 · Boosts: 0 · Views: 106 · Created: Jun ’25