Photos & Camera

Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Request for Access to Apple Photos API
Hello, I'm currently investigating the possibility of accessing my photos stored in iCloud via a dedicated API, in order to create a photo portfolio. However, after extensive research, I haven't found any documentation or public API allowing such access. I wonder if there are any future plans to make such an API available to third-party developers. I would be grateful if you could provide me with information regarding the possibility of accessing an API for Apple Photos, or any other solution you might suggest. Thank you for your attention and assistance. Yours sincerely, Owen
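There is no public server-side iCloud Photos API; on Apple platforms the supported route is PhotoKit, which reads the library on device (including iCloud items synced to it). A minimal sketch of that approach, assuming on-device access is acceptable for building the portfolio:

import Photos

// Request read access, then fetch the newest image assets from the library.
// iCloud originals can be downloaded on demand via PHImageManager.
PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
    guard status == .authorized || status == .limited else { return }
    let options = PHFetchOptions()
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    options.fetchLimit = 50
    let assets = PHAsset.fetchAssets(with: .image, options: options)
    assets.enumerateObjects { asset, _, _ in
        print(asset.localIdentifier, asset.creationDate ?? "unknown date")
    }
}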
Replies: 0 · Boosts: 0 · Views: 540 · Nov ’23
Get cameraCalibrationData with .builtInDualWideCamera when isGeometricDistortionCorrectionEnabled = True
Hi there, I am building a camera application to capture an image with the wide and ultra wide cameras simultaneously (or as close as possible), with the intrinsics and extrinsics for each camera also delivered. We are able to achieve this with an AVCaptureMultiCamSession and AVCaptureVideoDataOutput, setting up the .builtInWideAngleCamera and .builtInUltraWideCamera manually. Doing this, we are able to enable the delivery of the intrinsics via the AVCaptureConnection of the cameras. Also, geometric distortion correction is enabled for the ultra wide camera (by default).

However, we are exploring whether it is possible to move the application over to the .builtInDualWideCamera with AVCapturePhotoOutput and AVCaptureSession, to simplify our application and get access to depth data. We are using the isVirtualDeviceConstituentPhotoDeliveryEnabled = true property to allow for simultaneous capture. Functionally, everything works fine, except that when isGeometricDistortionCorrectionEnabled is not set to false, photoOutput.isCameraCalibrationDataDeliverySupported returns false. From this thread and the docs, it appears that we cannot get the intrinsics when isGeometricDistortionCorrectionEnabled = true (only applicable to the ultra wide), unless we use an AVCaptureVideoDataOutput.

Is there any way to get access to the intrinsics for the wide and ultra wide while enabling geometric distortion correction for the ultra wide?

guard let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
    throw InitError.error("Could not find builtInDualWideCamera")
}
self.captureDevice = captureDevice
self.videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
self.photoOutput = AVCapturePhotoOutput()
self.captureSession = AVCaptureSession()
self.captureSession.beginConfiguration()
captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
captureSession.addInput(self.videoDeviceInput)
captureSession.addOutput(self.photoOutput)

try captureDevice.lockForConfiguration()
captureDevice.isGeometricDistortionCorrectionEnabled = false // <- NB line
captureDevice.unlockForConfiguration()

// configure photoOutput
guard self.photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported else {
    throw InitError.error("Dual photo delivery is not supported")
}
self.photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
print("isCameraCalibrationDataDeliverySupported", self.photoOutput.isCameraCalibrationDataDeliverySupported) // false when distortion correction is enabled

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer delegate", attributes: []))
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}

self.videoPreviewLayer.setSessionWithNoConnection(self.captureSession)
self.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let cameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: self.videoDeviceInput.ports.first!, videoPreviewLayer: self.videoPreviewLayer)
self.captureSession.addConnection(cameraVideoPreviewLayerConnection)

self.captureSession.commitConfiguration()
self.captureSession.startRunning()
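For reference, a sketch of the capture-side settings required for calibration delivery with a virtual device, assuming geometric distortion correction has been disabled as in the code above (photoCaptureDelegate is a placeholder for your AVCapturePhotoCaptureDelegate):

let settings = AVCapturePhotoSettings()
// Request a photo from each constituent camera in one capture.
settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
if photoOutput.isCameraCalibrationDataDeliverySupported {
    settings.isCameraCalibrationDataDeliveryEnabled = true
}
photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)
// Each delivered AVCapturePhoto then carries photo.cameraCalibrationData
// (intrinsics and extrinsics) for its constituent camera.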
Replies: 1 · Boosts: 0 · Views: 588 · Nov ’23
AVAssetWriter
As of iOS 17.x, a .mov (H.264/ALAC) recording has a problem where the track length doesn't equal the container length: if the audio/video tracks are 10.00 s long, the fullVideo.mov container could be 10.06 s long. This does not happen on any version previous to iOS 17. Is anyone else experiencing this, or does anyone have advice?
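A quick way to confirm the mismatch is to compare the container duration against each track's duration. A diagnostic sketch, assuming fileURL points at the recorded .mov:

import AVFoundation

// Compare the container duration with each track's duration.
func checkDurations(of fileURL: URL) async throws {
    let asset = AVURLAsset(url: fileURL)
    let containerDuration = try await asset.load(.duration)
    print("container:", containerDuration.seconds)
    for track in try await asset.load(.tracks) {
        let range = try await track.load(.timeRange)
        print(track.mediaType.rawValue, range.duration.seconds)
    }
}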
Replies: 0 · Boosts: 0 · Views: 249 · Nov ’23
iPhone RAW photos are very dark when imported into third-party software
I use the official API to output a RAW file. When I transfer it to my Mac, it is very dark, but if I shoot with the built-in iOS Camera app, it is brighter. This is how I save the ProRAW file to the photo library:

let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photo, data: photo.compressedData, options: nil)

// Save the RAW (DNG) file as an alternate resource for the Photos asset.
let options = PHAssetResourceCreationOptions()
// options.shouldMoveFile = true
creationRequest.addResource(with: .alternatePhoto, fileURL: photo.rawFileURL, options: options)
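The capture side can matter as much as the save side here: third-party renderers that ignore the DNG's embedded rendering metadata are often reported to produce darker results than the Camera app. For comparison, a minimal sketch of requesting Apple ProRAW plus a processed (HEVC) companion in one capture; photoCaptureDelegate and the surrounding session setup are assumed:

// Enable ProRAW on the output, then request RAW + processed in one capture.
if photoOutput.isAppleProRAWSupported {
    photoOutput.isAppleProRAWEnabled = true
}
guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else { return }
let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                      processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc])
photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)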
Replies: 0 · Boosts: 0 · Views: 235 · Nov ’23
Performance issues with `AVAssetWriter`
Hey all! I'm trying to build a camera app that records video and audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter. When creating the recording session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why. This is how I create my AVAssetWriter:

let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)
let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)
assetWriter.startWriting()

There are two slow parts in that code:

1. The createAudioWriter(...) function takes ages! This is how I create the audio AVAssetWriterInput:

// audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
let format = audioInput.device.activeFormat.formatDescription
let audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: settings, sourceFormatHint: format)
audioWriter.expectsMediaDataInRealTime = true

The above code takes up to 3000 ms on an iPhone 11 Pro! When I remove the recommended settings and just pass nil as outputSettings:

audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioWriter.expectsMediaDataInRealTime = true

...it initializes almost instantly - something like 30 to 50 ms.

2. Starting the AVAssetWriter takes ages! Calling assetWriter.startWriting() takes 3000 to 5000 ms on my iPhone 11 Pro!

Does anyone have any ideas why this is so slow? Am I doing something wrong? It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds of initialization time is not acceptable. Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here. I'd appreciate any help, thanks!
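One commonly used mitigation, independent of why the calls are slow: pay the setup cost ahead of time, off the main thread, so it has already finished before the user taps record. A sketch under that assumption, reusing the names from the snippet above:

// Pre-warm the writer on a background queue when the camera screen appears.
let writerQueue = DispatchQueue(label: "recording.writer")
writerQueue.async {
    do {
        let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
        // ... add the video/audio inputs as above ...
        assetWriter.startWriting() // absorb the multi-second cost here, not at tap time
        // On the first buffer after the user taps record:
        // assetWriter.startSession(atSourceTime: firstBufferTimestamp)
    } catch {
        print("writer pre-warm failed:", error)
    }
}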
Replies: 1 · Boosts: 1 · Views: 760 · Nov ’23
Calling `assetWriter.startWriting()` blocks all Threads for 3 seconds in debug (LLDB hang?)
I'm using the AVFoundation Swift APIs to record video (CMSampleBuffers) and audio (CMSampleBuffers) to a file using AVAssetWriter. Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release. Since it blocks all threads and only happens in Debug, I'm led to believe that this is an Xcode/Debugger/LLDB hang issue. Does anyone experience something similar? Here's how I set all of that up: startRecording(...). And here's the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
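A way to confirm whether the stall is debugger-induced is to signpost the call and compare Debug and Release runs in Instruments. A minimal sketch; the subsystem string is illustrative:

import os

let signposter = OSSignposter(subsystem: "com.example.camera", category: "recording")
let state = signposter.beginInterval("startWriting")
assetWriter.startWriting()
signposter.endInterval("startWriting", state)
// Compare the interval length for a Debug run vs. a Release run in Instruments.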
Replies: 2 · Boosts: 0 · Views: 652 · Nov ’23
Request for Technical Specifications of iPhone SE (Model Number MMXN3ZD/A)
Dear Sir/Madam, I am reaching out as a developer working on an academic project. I am currently working on my bachelor's thesis, where I am developing a mobile application for iOS devices. For the success of this project, it is essential to have precise information about the hardware components of specific iPhone models, especially the iPhone SE with the model number MMXN3ZD/A and iOS version 17.1.1. My primary interest lies in the exact technical specifications of the LEDs and the CCD or CMOS image sensor (depending on which type is used) installed in the iPhone SE. For my project, it is crucial to understand the spectral properties of these components:

LED specifications: I require information about the spectra of the LEDs, especially which wavelengths of light they emit. This is relevant for the functionality of my app, which relies on photometric analyses.

CCD/CMOS sensor specifications: Furthermore, it is important for me to know the wavelengths to which the sensor built into the device is sensitive. This information is critical to accurately interpret the interaction between the sensor and the illuminated environment.

The results of my research and development will not only be significant for my academic work but could also provide valuable insights for the further development of iOS applications in my field of study. I would be very grateful if you could provide me with this information or direct me to the appropriate department or resource where I can obtain these specific technical details. Thank you in advance for your support and cooperation. Best regards,
Replies: 1 · Boosts: 0 · Views: 298 · Nov ’23
Apple ProRAW image issue when editing
I am running Sonoma 14.1.1 and having an issue with DNG ProRAW images when choosing to edit. Whether I choose to edit in the Photos app or edit with Photoshop, once editing begins the image becomes less clear and slightly fuzzy, with loss of detail. This also happens when I export the photo from Photos as a DNG file and then edit it with Photoshop. There is a drastically noticeable difference in image quality. This appears to be the way the image is handled in the Photos app itself. Even if I save directly from the iPhone to Files, it does the same thing once on the Mac. Attached are some screenshot examples; the difference is still clear to see.
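To rule out the Photos rendering pipeline, it can help to export the unmodified original resource and open that in Photoshop directly. A sketch, assuming asset is the PHAsset in question and the destination path is illustrative:

import Photos

// Write the original DNG resource of an asset to disk, bypassing
// Photos' adjusted/rendered output.
let resources = PHAssetResource.assetResources(for: asset)
if let raw = resources.first(where: { $0.type == .alternatePhoto || $0.type == .photo }) {
    let destination = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent(raw.originalFilename)
    PHAssetResourceManager.default().writeData(for: raw, toFile: destination, options: nil) { error in
        print(error ?? "exported \(destination.lastPathComponent)")
    }
}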
Replies: 0 · Boosts: 0 · Views: 319 · Nov ’23
Synchronized depth and video data not being received with builtInLiDARDepthCamera
Hello, I am faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I am wondering what I could be doing wrong?

Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen), iOS version 17.0.3 (21A360). I encounter this issue on an iPhone 13 Pro as well, iOS version 17.0 (21A329).

private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }

    // Begin the device configuration.
    try device.lockForConfiguration()

    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration

    // Finish the device configuration.
    device.unlockForConfiguration()

    self.device = device

    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")

    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)

    guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }

    // Configure audio input - always configure audio even if isAudioEnabled is false
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)

    deviceSystemPressureStateObservation = device.observe(\.systemPressureState, options: .new) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}

Here's how I'm setting up the output:

private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)

    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)

    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)

    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)

    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

The top part of my delegate implementation is as follows:

func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}

As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame. Console output is posted as a follow-up comment (because of the character limit); I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in some other runs I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime mirroring of the device screen to see what the app is displaying (so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either). Any help is most appreciated! Thanks.
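When the synchronizer returns nothing, it can help to distinguish "never produced" from "produced but dropped". A sketch of an extended check for the guard's else branch, assuming the same names as in the delegate above:

// Inspect drop flags and reasons when synchronized data stops arriving.
if let synced = synchronizedDataCollection.synchronizedData(for: depthDataOutput)
    as? AVCaptureSynchronizedDepthData, synced.depthDataWasDropped {
    print("depth dropped, reason: \(synced.droppedReason.rawValue)")
}
if let synced = synchronizedDataCollection.synchronizedData(for: videoDataOutput)
    as? AVCaptureSynchronizedSampleBufferData, synced.sampleBufferWasDropped {
    print("video dropped, reason: \(synced.droppedReason.rawValue)")
}
// For diagnosis, videoDataOutput.alwaysDiscardsLateVideoFrames = false can also
// reveal whether frames are being discarded because the delegate falls behind.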
Replies: 2 · Boosts: 0 · Views: 517 · Dec ’23
CMIO CameraExtension with DistributedNotificationCenter and Process.run
I generated the code for a CMIO CameraExtension from the Xcode target, and it is running with FaceTime. I guess this kind of extension has lots of security limitations. I'd like to run a command like "netstat" in the extension. Is it possible to call Process.run()? I keep getting an error like "The file zsh doesn't exist". The same code with Process.run() worked in a macOS app. I'd also like to use DistributedNotificationCenter to send text from the app to the CameraExtension. Is that possible? I do not receive any message in the CameraExtension. If there is any other IPC method between a macOS app and a CameraExtension, please let me know.
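Spawning processes from a sandboxed extension is generally blocked, which would explain the Process.run() failure. For signaling, one approach worth trying is Darwin notifications: they carry no payload but do cross sandbox boundaries, so the app can signal the extension to re-read shared state (e.g. a file in an app-group container). A sketch; the notification name is illustrative:

import Foundation

let center = CFNotificationCenterGetDarwinNotifyCenter()
let name = "com.example.camera.configDidChange" as CFString

// App side: post the notification.
CFNotificationCenterPostNotification(center, CFNotificationName(name), nil, nil, true)

// Extension side: observe it (the callback must not capture context).
CFNotificationCenterAddObserver(center, nil, { _, _, _, _, _ in
    print("config change signaled; re-read shared state here")
}, name, nil, .deliverImmediately)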
Replies: 2 · Boosts: 0 · Views: 587 · Dec ’23
Filtering image types on PHPickerViewController photo lists (71832162)
var config = PHPickerConfiguration()
config.filter = PHPickerFilter.images

I want only 'png' files to be displayed when the PHPickerViewController photo list is opened. I've read this post: https://developer.apple.com/forums/thread/687415. In that post (from two years ago), it is mentioned that filtering by image format with PHPickerConfiguration is not possible. Is it still not possible? Has issue 71832162 not been resolved?
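As far as I know, PHPickerFilter still offers no format-level filtering. A common workaround is to filter after selection, using the item provider's type identifiers. A minimal sketch of the delegate method under that assumption:

import PhotosUI
import UniformTypeIdentifiers

// Accept only PNGs after the user picks, since PHPickerFilter cannot
// restrict the picker's list by file format.
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    for result in results where result.itemProvider.hasItemConformingToTypeIdentifier(UTType.png.identifier) {
        result.itemProvider.loadDataRepresentation(forTypeIdentifier: UTType.png.identifier) { data, error in
            // handle the PNG data (or the error) here
        }
    }
}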
Replies: 1 · Boosts: 0 · Views: 446 · Dec ’23
Custom camera can't take photos on iPhone 15 Pro Max
When developing a custom camera for iOS, when the sessionPreset of AVCaptureSession is set to AVCaptureSessionPresetPhoto, photos cannot be taken on the iPhone 15 Pro Max, but other devices work normally. With other sessionPreset values, photos can be taken normally. Please help Apple developers determine the cause. In addition, I initially thought there was a problem with our code, but when I looked at some demos written by others, the same problem occurs when using the AVCaptureSessionPresetPhoto preset and running on the iPhone 15 Pro Max.
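Until the cause is found, a defensive sketch that guards the preset and falls back may help isolate the failure (the fallback choice is an assumption, not a fix):

captureSession.beginConfiguration()
if captureSession.canSetSessionPreset(.photo) {
    captureSession.sessionPreset = .photo
} else {
    captureSession.sessionPreset = .high   // illustrative fallback
}
captureSession.commitConfiguration()
// On iOS 16+, configuring photoOutput.maxPhotoDimensions is an alternative
// to relying on the .photo session preset for full-resolution stills.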
Replies: 3 · Boosts: 0 · Views: 373 · Dec ’23
App using camera crashes on macOS in "Designed for iPad" mode
I tried running this demo app in "Designed for iPad" mode on my M3 MacBook Pro, and it crashes with the following errors (duplicate lines omitted):

LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
CMIO_DAL_PlugInManagement.cpp:917:CreatePlugIn Could not find plugin with kCMIOHardwarePlugInTypeID
CMIO_DAL_CMIOExtension_Device.mm:355:Device legacy uuid isn't present, using new style uuid instead
[C:1-3] Error received: Invalidated by remote connection.
CMIO_DAL_CMIOExtension_Stream.mm:1429:GetPropertyData wrong data size for kCMIOStreamPropertyCenterStageFramingMode
CMIOHardware.cpp:331:CMIOObjectGetPropertyData Error: 561211770, failed
Fig assert: "err == 0 " at bail (CMIOUtilities.h:133) - (err=561211770)
Using capture device: FaceTime HD Camera
Camera access not determined.
Unknown client: Capturing Photos
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
  Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
  Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
AX Safe category class 'SFUnifiedBarRegistrationAccessibility' was not found!
Photo library access not determined.
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
fopen failed for data file: errno = 2 (No such file or directory)
Errors found! Invalidating cache...
CMIOHardware.cpp:1388:CMIOStreamRegisterAsyncStillCaptureCallback stream doesn't support async still capture
CMIOHardware.cpp:1412:CMIOStreamRegisterAsyncStillCaptureCallback Error: 1970171760, failed
<<<< CMIOFigCaptureStream >>>> Fig assert: "! stream->streaming" at bail (CMIOFigCaptureStream.m:1173) - (err=0)
-[MTLDebugDevice newTextureWithDescriptor:iosurface:plane:]:2641: failed assertion `Texture Descriptor Validation IOSurface textures must use MTLStorageModeShared'

libsystem_kernel.dylib:
    0x188f2e0d4 <+0>:  mov    x16, #0x148
    0x188f2e0d8 <+4>:  svc    #0x80
->  0x188f2e0dc <+8>:  b.lo   0x188f2e0fc   ; <+40>   Thread 19: signal SIGABRT
    0x188f2e0e0 <+12>: pacibsp
    0x188f2e0e4 <+16>: stp    x29, x30, [sp, #-0x10]!
    0x188f2e0e8 <+20>: mov    x29, sp
    0x188f2e0ec <+24>: bl     0x188f26230   ; cerror_nocancel
    0x188f2e0f0 <+28>: mov    sp, x29
    0x188f2e0f4 <+32>: ldp    x29, x30, [sp], #0x10
    0x188f2e0f8 <+36>: retab
    0x188f2e0fc <+40>: ret
Replies: 0 · Boosts: 2 · Views: 883 · Dec ’23
App crashes: [AVCaptureSession startRunning] may not be called between calls to beginConfiguration and commitConfiguration
Greetings everyone, My app crashes when I open and close the camera screen. I have added a subview to the camera that shows the main screen, but the app does not crash every time; it works well 5-6 times, and then it crashes. I'm using the Quickpose.ai library, and the crash appears to come from inside the library, so I don't know where the problem is. I have shown some code and my crash log:

*** Terminating app due to uncaught exception 'NSGenericException', reason: '*** -[AVCaptureSession startRunning] startRunning may not be called between calls to beginConfiguration and commitConfiguration'
*** First throw call stack:
(0x1889f4870 0x180d13c00 0x1a4e30b44 0x10505cff0 0x1047ed7cc 0x1047ed84c 0x105824f50 0x105826b34 0x10582e98c 0x10582f728 0x10583c5f8 0x10583bc2c 0x1f2365964 0x1f2365a04)
libc++abi: terminating due to uncaught exception of type NSException

Screenshots:
https://developer.apple.com/forums/content/attachment/a1eeece3-6529-4c79-8931-963f58818a93
https://developer.apple.com/forums/content/attachment/2184c975-e299-40e4-b466-cafa5165ae03
https://developer.apple.com/forums/content/attachment/d78ac3ac-313a-4df9-960d-0c58c3087bec
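The exception text itself names the contract: every beginConfiguration must be paired with commitConfiguration before startRunning is called. The intermittent crash suggests a race, so serializing all session mutations on one queue usually removes it. A sketch under that assumption:

// Serialize all session work on one queue so startRunning can never
// interleave with an open begin/commitConfiguration pair on another thread.
let sessionQueue = DispatchQueue(label: "camera.session")
sessionQueue.async {
    captureSession.beginConfiguration()
    // ... add/remove inputs and outputs ...
    captureSession.commitConfiguration()   // always commit before starting
    if !captureSession.isRunning {
        captureSession.startRunning()
    }
}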
Replies: 1 · Boosts: 0 · Views: 1.7k · Dec ’23
macOS Sonoma 14.2 camera connection
Starting with Sonoma 14.2, it is no longer possible to connect Canon cameras to an app via USB using Canon's EDSDK framework. This worked fine up to Sonoma 14.1. The app using the EDSDK is not crashing, but the SDK no longer reports any connected cameras. The camera is connected and can be seen in the System Report as well as in e.g. gphoto2 and even in the EOS Utility software. It seems that 14.2 introduced a breaking change to camera access from within apps. I've tried upgrading to the newest EDSDK version and checked with and without App Sandbox. There is no way to find the camera on 14.2 any longer.
Replies: 1 · Boosts: 1 · Views: 1.3k · Dec ’23
Unable to find problem. Error Domain=com.apple.accounts Code=7 "(null)"
Even with sample code from Apple, the same warning appears every time: "Error returned from daemon: Error Domain=com.apple.accounts Code=7 "(null)"". The full log:

Failed to log access with error: access=<PATCCAccess 0x28316b070> accessor:<<PAApplication 0x283166df0 identifierType:auditToken identifier:{pid:1456, version:3962}>> identifier:2EE1A54C-344A-40AB-9328-3F8E8B5E8A85 kind:intervalEvent timestampAdjustment:0 visibilityState:0 assetIdentifierCount:0 tccService:kTCCServicePhotos, error=Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 235 named com.apple.privacyaccountingd" UserInfo={NSDebugDescription=connection to service with pid 235 named com.apple.privacyaccountingd}

The entitlements have been reviewed.
Replies: 2 · Boosts: 0 · Views: 559 · Dec ’23
Photos sample app can't access full-resolution photos on iOS
I'm running this SwiftUI sample app for photos without any modifications except for adding my developer profile, which is necessary to build it. When I tap the thumbnail to see the photo library (after granting access to my photo library), I see that some thumbnails are stuck in a loading state, and when I tap a thumbnail, I only see a low-resolution image (the thumbnail), not the full-resolution image that should load. In the console I can see this error when tapping a thumbnail to load the full-resolution image:

CachedImageManager requestImage error: The operation couldn’t be completed. (PHPhotosErrorDomain error 3164.)

When I make the few modifications necessary to run the app as a native macOS app, all thumbnails load immediately, and clicking on them reveals the full-resolution images.
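PHPhotosErrorDomain error 3164 appears to correspond to PHPhotosError.networkAccessRequired, i.e. the original is in iCloud and the request was not allowed to download it. A sketch of the usual fix on the image request, assuming asset is the tapped PHAsset:

import Photos

// Allow PHImageManager to download the iCloud original over the network.
let options = PHImageRequestOptions()
options.isNetworkAccessAllowed = true
options.deliveryMode = .highQualityFormat
PHImageManager.default().requestImage(for: asset,
                                      targetSize: PHImageManagerMaximumSize,
                                      contentMode: .aspectFit,
                                      options: options) { image, info in
    // image is the full-resolution result once the download completes
}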
Replies: 2 · Boosts: 0 · Views: 971 · Dec ’23
I need a way to permanently disable Reactions from my app, ideally the universe too
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30 fps video feed on a MacBook Pro M2, and everything was great until Sonoma came out, and I find myself consuming 40% CPU for the exact same workload. So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps; the CPU consumption stays at 40%. "Reactions" is nothing but a useless toy to please some WWDC keynote fanboys. I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery. Now, how do I make it go away, forever? Best regards, Jacob
Replies: 2 · Boosts: 0 · Views: 621 · Dec ’23
Rejection for Guideline 2.5.1 - Performance - Software Requirements
Hello, We have a photo vault app. We were hiding users' photos behind a semi-functional calculator, but after a rejection we thought "decoy functionality" meant we needed to remove this fake calculator feature. We removed it and tried many things to resolve the issue, but we couldn't understand what Apple wants us to change. We've been trying to contact Apple for more details, but they keep sending the same message every time. Help is appreciated. Here is the rejection message:

Your app uses public APIs in an unapproved manner, which does not comply with guideline 2.5.1 of the App Store Review Guidelines. Specifically, we found that your app uses a decoy functionality to hide a user's photos, which is not an appropriate use of the Photos API. Since there is no accurate way of predicting how an API may be modified and what effects those modifications may have, Apple does not permit unapproved uses of public APIs in App Store apps.

Next Steps: Please revise your app to ensure that documented APIs are used in the manner prescribed by Apple. It would be appropriate to remove any features in your app that use a decoy functionality to hide a user's photos from your app. If there are no alternatives for providing the functionality your app requires, you can use Feedback Assistant to submit an enhancement request.
Replies: 0 · Boosts: 0 · Views: 473 · Dec ’23
PHPhotoLibrary.cloudIdentifierMappings(forLocalIdentifiers:) crashes
Hello, I am experiencing an unexpected issue related to PhotoKit, where an occasional crash occurs in an area it shouldn't. I've provided the crash log below for reference. The crash can be recreated by calling PHPhotoLibrary.cloudIdentifierMappings(forLocalIdentifiers:) with random identifiers, as demonstrated below:

let cloudIdentifiers = (0...10).map { _ in PHCloudIdentifier(stringValue: UUID().uuidString) }
let localIdsMap = PHPhotoLibrary.shared().localIdentifierMappings(for: cloudIdentifiers)

I've noticed that the method seems to crash consistently when an incorrect cloud identifier is passed in. This is surprising to me, since the return type is [PHCloudIdentifier : Result<String, Error>], so I was anticipating an explicit error. The issue can be reproduced on both Xcode 15.0 & iOS 17.2 and Xcode 14.3.1 & iOS 16.0. While I'm fairly certain that only valid identifiers are used in my app, I would still like to know if there is a potential way to validate a cloud identifier prior to accessing this method?
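For reference, the per-identifier Result is meant to surface bad identifiers without throwing. A sketch of the documented handling, assuming the lookup direction shown in the snippet above (it does not prevent the crash being reported here, but shows the intended error path):

import Photos

// Inspect each identifier's Result; invalid identifiers should surface
// as PHPhotosError.identifierNotFound rather than crashing.
let mappings = PHPhotoLibrary.shared().localIdentifierMappings(for: cloudIdentifiers)
for (cloudID, result) in mappings {
    switch result {
    case .success(let localID):
        print("\(cloudID.stringValue) -> \(localID)")
    case .failure(let error as PHPhotosError) where error.code == .identifierNotFound:
        print("unknown identifier: \(cloudID.stringValue)")
    case .failure(let error):
        print("lookup failed: \(error)")
    }
}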
Replies: 0 · Boosts: 0 · Views: 329 · Jan ’24