Hi,
I am developing an iOS camera app and noticed an issue related to user privacy. When AVCaptureVideoStabilizationModeStandard is set on an AVCaptureConnection whose session preset is 1920x1080, the FOV of a photo taken through the system API is larger than the preview stream and shows more content. This is especially noticeable on the iPhone 15 Pro Max rear camera. I think this inconsistency creates a user privacy issue. Is there a solution that does not require turning the stabilization mode off? I tried other devices and they are fine, but on the iPhone 15 Pro Max the issue is very obvious.
Any suggestions are appreciated.
I feel like I'm missing something really simple. I've got the simplest possible CIKernel, it looks like this:
extern "C" float4 Simple(coreimage::sampler s) {
float2 current = s.coord();
float2 anotherCoord = float2(current.x + 1.0, current.y);
float4 sample = s.sample(anotherCoord); // s.sample(current) works fine
return sample;
}
It's (in my mind) incrementing the x position of the sampler by 1 and sampling the neighboring pixel. What I get in practice is a bunch of banded garbage (pictured below). The sampler seems to be pretty much undocumented, so I have no idea whether I'm incrementing by the right amount to advance one pixel. The weird banding is still present if I clamp anotherCoord to s.extent(), but the kernel behaves normally if I sample s.coord() unchanged. I'm trying to write a box blur that samples and averages neighboring pixels and am completely blocked by this. What am I missing?
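For what it's worth, here is my understanding of the likely cause (an assumption on my part, since the sampler docs are thin): s.coord() returns a position in sampler space, which is normalized, so adding 1.0 offsets by roughly the whole image width rather than one pixel. The usual pattern is to take a coreimage::destination parameter, read the output pixel's position in working space from d.coord(), and convert it to sampler space with s.transform() before sampling. A minimal sketch:

extern "C" float4 Simple(coreimage::sampler s, coreimage::destination d) {
    // d.coord() is the position of the output pixel, in working-space (pixel) coordinates
    float2 current = d.coord();
    // Offset by one pixel to the right in working space
    float2 neighbor = current + float2(1.0, 0.0);
    // Convert the working-space coordinate into s's sampler space, then sample
    return s.sample(s.transform(neighbor));
}

Note that the destination parameter, if used, must be the last parameter of the kernel; Core Image supplies it automatically.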
AVFoundation: Strange error while trying to switch camera formats with the touch of a single button.
I'm getting the following output from my iOS app's debug console, note the error on the last line:
Capture format keys: ["600x600@25", "1200x1200@5", "1200x1200@30", "1600x1200@2", "1600x1200@30", "3200x2400@15", "3200x2400@2", "600x600@30"]
Start capture session for 1600x1200@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetPhoto]>
Stop capture session: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
Start capture session for 600x600@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
<<<< FigSharedMemPool >>>> Fig assert: "blkHdr->useCount > 0" at (FigSharedMemPool.c:591) - (err=0)
This is in response to trying to switch capture formats between the two key modes that must regularly be used by my application. Below you will find the functions that I use to start and stop capturing frames to my preview layer. I have a UI with three buttons.
Off
Mode 1
Mode 2
If I tap the Off button in between tapping Mode 1 or Mode 2 all is well; I can do this all day. However, if I attempt to jump between Mode 1 and Mode 2 directly, I run into issues. I added a layer of software between the UI and the underlying functions so that I could make sure to turn off the camera before turning it back on in the opposite mode, and was surprised to get this output. Can someone at Apple please tell me what is going on here?
For the rest of you, if anyone knows the magic incantation to safely switch camera formats, please paste that code here. Thanks. I've included my code below.
func start(for deviceFormat: String) {
    sessionQueue.async { [unowned self] in
        logger.debug("Start capture session for \(deviceFormat): \(self.captureSession)")
        do {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            captureSession.stopRunning()
            captureSession.beginConfiguration() // May not be necessary.
            try captureDevice.lockForConfiguration() // Without this we get an error.
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration() // Matching function: Necessary.
            captureSession.commitConfiguration() // Matching function: May not be necessary.
            captureSession.startRunning()
        } catch {
            logger.fault("Failed to start camera: \(error.localizedDescription)")
            errorPublisher.send(error)
        }
    }
}

func stop() {
    sessionQueue.async { [unowned self] in
        logger.debug("Stop capture session: \(self.captureSession)")
        captureSession.stopRunning()
    }
}
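Not an authoritative answer, but a sketch of the pattern I would try first (assuming only activeFormat changes between modes): switching a device's activeFormat only requires lockForConfiguration(), so the session should not need to be stopped and restarted at all, which may also avoid tearing down the shared frame pool mid-switch:

func switchFormat(to deviceFormat: String) {
    sessionQueue.async { [unowned self] in
        do {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            // Reconfigure the device in place; no stopRunning()/startRunning() pair.
            try captureDevice.lockForConfiguration()
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()
        } catch {
            errorPublisher.send(error)
        }
    }
}

This reuses the names from the code above (sessionQueue, formatDict, captureDevice, errorPublisher), so it slots into the same class.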
I got a slow-motion video asset from the camera roll, but I can't get the URL from that asset.
Do you know how to get the URL?
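A sketch of one approach, assuming you already have the PHAsset in hand: slow-motion assets are normally delivered as an AVComposition, which has no single URL, but requesting the .original version typically yields an AVURLAsset:

import AVFoundation
import Photos

// Sketch: resolve a file URL for a slow-motion video asset ("asset" is a PHAsset you already fetched)
let options = PHVideoRequestOptions()
options.version = .original // without this, a slo-mo clip comes back as an AVComposition
options.isNetworkAccessAllowed = true
PHImageManager.default().requestAVAsset(forVideo: asset, options: options) { avAsset, _, _ in
    if let urlAsset = avAsset as? AVURLAsset {
        print("Video URL:", urlAsset.url)
    }
}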
Hey all, I have a pretty complicated camera setup so bear with me.
You know how Instagram's Camera supports recording a video and flipping Camera devices while recording?
I built the same thing using AVCaptureVideoDataOutput and it works fine, but it does not support rotation. (neither does Instagram, but I still need it, lol)
So there are two ways to implement rotation (and mirroring) in AVCaptureVideoDataOutput:
1. Set it on the AVCaptureConnection
Rotation and vertical mirror mode can be set directly on the AVCaptureVideoDataOutput's connection to the Camera:
let output = AVCaptureVideoDataOutput(...)
cameraSession.addOutput(output)
for connection in output.connections {
    connection.videoRotationAngle = 90 // iOS 17+; use videoOrientation on earlier systems
    connection.isVideoMirrored = true
}
But according to the documentation this is expensive and comes with a performance overhead. I haven't really benchmarked it yet, but I assume rotating and mirroring 4k buffers isn't cheap.
I'm building a camera library that is used by a lot of people, so all performance decisions have a big impact.
2. Set it on AVAssetWriter
Instead of actually physically rotating large pixel buffers, we can also just set the AVAssetWriterInput's transform property to some affine transformation, which is comparable to how EXIF tags work.
We can set both rotation and mirror modes using CGAffineTransforms.
Obviously this is much more efficient and does not come with a performance overhead on the camera pipeline at all, so I'd prefer to go this route.
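For reference, a minimal sketch of this approach (the helper name and parameters are mine, purely for illustration):

import AVFoundation

// Hypothetical helper: build a writer-input transform from a rotation angle and a mirror flag
func writerTransform(rotationDegrees: CGFloat, mirrored: Bool) -> CGAffineTransform {
    var t = CGAffineTransform(rotationAngle: rotationDegrees * .pi / 180)
    if mirrored {
        t = t.scaledBy(x: -1, y: 1) // mirror about the vertical axis
    }
    return t
}

// Applied once, before recording starts:
// videoWriterInput.transform = writerTransform(rotationDegrees: 90, mirrored: true)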
Problem
The problem is that when I start recording with the front Camera (AVAssetWriterInput.transform has a mirror on the CGAffineTransform), and then flip to the back Camera, the back Camera is also mirrored.
Now I thought I could just avoid rotation on my buffers and only use isVideoMirrored on the AVCaptureConnection when we are using the front camera, which is a fair performance compromise - but this won't work, because isVideoMirrored applies mirroring along the vertical axis - and since the video stream is naturally in landscape orientation, this will flip the image upside down instead of mirroring it along the vertical axis... whoops! 😅
This is pretty obvious as the transform applies to the entire video stream, but now I am not sure if the AVAssetWriter approach will work for my use-case.
I think I will need to eagerly (physically) rotate the pixel buffers by setting the AVCaptureConnection's videoRotationAngle & isVideoMirrored properties, but I wanted to ask here in case someone knows an alternative that avoids the performance and memory overhead of rotating buffers.
Thanks!
I'm using CIDepthBlurEffect to create a portrait mode effect on a rendered image. The effect is working as expected, however I want to create the "bokeh ball" effect seen in the Photos app. I see that the filter has an "inputShape" input of type NSString, but the documentation does not specify what value this should be.
Any pointers or help are greatly appreciated.
I've had a photo app in the Store for 10+ years that just started behaving unexpectedly on the newly released iPad Pro M4 (May 2024). This is the first iPad with the camera on the landscape (longer) edge of the iPad.
The camera preview behaves as expected in my app, but the resulting photos are upside-down. How can I determine when I am dealing with the landscape camera? I'd like to avoid special-casing on a device-by-device basis.
I have been unable to find any mention of a new API call that would allow me to determine which front-facing camera I'm dealing with. Does something like this exist?
Thanks!
I am trying to use the AVCamFilter Apple sample project discussed in this WWDC session to get depth data using the dual camera. The project has built-in features to get depth data from the dual camera.
When the sample project was written, builtInDualWideCamera didn't exist yet, and the project only tries to get builtInDualCamera and builtInWideAngleCamera. When I run the project on my iPad Pro it doesn't show any of the depth-related UI because the device doesn't have a builtInDualCamera device. So I added builtInDualWideCamera into the videoDeviceDiscoverySession, and it seems to get that device properly, but isDepthDataDeliverySupported still returns false.
Is there some reason why isDepthDataDeliverySupported is false even though I seem to be using a dual camera device?
I know the device has a builtInLiDARDepthCamera but I wanted to try out the dual camera depth data to see how it performs for shorter distances. I wouldn't have expected the dual camera depth data delivery to be made unavailable on the device just because the LiDAR sensor is already available.
Using iPadOS 17.5.1, iPad Pro 11-inch 4th generation.
The depth feature of this sample app works fine on an iPhone 15 I tested. Also tried on an iPhone 15 Pro and it worked even though that device also has a LiDAR sensor, so the issue is presumably not related to the fact that the iPad Pro has a LiDAR sensor.
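In case it helps anyone hitting the same wall, a small check worth running (a sketch; my assumption is that depth delivery depends on the active format, so a format with non-empty supportedDepthDataFormats may need to be selected explicitly):

import AVFoundation

// Sketch: list the dual-wide camera's formats that can deliver depth data
if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
    for format in device.formats where !format.supportedDepthDataFormats.isEmpty {
        print(format, format.supportedDepthDataFormats)
    }
}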
When I use iOS 17 to save videos downloaded from the network to my album, it fails with Domain=PHPhotosErrorDomain Code=3302. Searching the official documentation, I found that 3302 means "An error that indicates the asset resource validation failures," but the specific reason is unknown. Is there an article that explains this?
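Not an official answer, but since the error name mentions resource validation, my guess is the file itself is the problem (an extension that doesn't match the actual container, or a truncated download). A minimal save path to sanity-check against, where localURL is assumed to be a fully downloaded .mov/.mp4 on disk:

import Photos

// Sketch: save a downloaded video file to the photo library
PHPhotoLibrary.shared().performChanges({
    let request = PHAssetCreationRequest.forAsset()
    // The file extension must match the actual container, or validation can fail
    request.addResource(with: .video, fileURL: localURL, options: nil)
}) { success, error in
    print("Saved:", success, error ?? "no error")
}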
Hey!
I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max.
The documentation states the following, but that seems quite different from what is happening in the app:
Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow.
So, does it automatically switch like the native camera, or do I need to do something?
Custom Camera vs Native Camera: [comparison screenshots were attached here]
The code was adapted from Apple's AVCamFilter sample project. Just download AVCamFilter and update videoDeviceDiscoverySession:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera],
    mediaType: .video,
    position: .unspecified
)
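One thing worth checking (an assumption about the cause, not a confirmed answer): on virtual devices, automatic lens switching is governed by the primary constituent device switching behavior, and the zoom has to go through the virtual device's videoZoomFactor so it can cross the switch-over thresholds. A sketch, where device is assumed to be the .builtInTripleCamera AVCaptureDevice:

do {
    try device.lockForConfiguration()
    // Let the virtual device choose between its constituent cameras automatically
    device.setPrimaryConstituentDeviceSwitchingBehavior(.auto, restrictedSwitchingBehaviorConditions: [])
    device.unlockForConfiguration()
} catch {
    print("Could not configure device: \(error)")
}

// Zoom factors at which the virtual device switches constituent cameras
print(device.virtualDeviceSwitchOverVideoZoomFactors)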
When I call requestAVAssetForVideo to retrieve a video for upload, the system appends a string of unknown characters to the returned path.
like this:
/var/mobile/Media/DCIM/101APPLE/IMG_1034.MOV#YnBsaXN0MDDRAQJfEBtSZxxxx1vZGUQAAgLKQAAAAAAAAEBAAAAAAAAAAMAAAAAAAAAAAAAAAAAAAAr
PS: I encountered a similar issue before when retrieving spatial videos on systems below iOS 18.
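A guess at a workaround (assuming the appended characters are a URL fragment, which the leading # suggests): strip the fragment before using the path:

import Foundation

// Sketch: drop everything after '#' from the returned URL
func strippingFragment(from url: URL) -> URL? {
    var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
    components?.fragment = nil
    return components?.url
}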
After watching the session video "Build a great Lock Screen camera capture experience", I'm still unclear about the UI.
So do developers need to provide a whole new UI in the extension? Can the main UI not be repurposed?
From what I understood from watching the session on LockedCameraCapture, the extension doesn't have access to App Group User Defaults, so I'm wondering how I can synchronize preferences between the extension and the main app.
From what I can see, the builtin iOS Camera app is able to synchronize its preferences. For example, the "Live Photo" mode toggle state is preserved whether in the main app or in the lock screen.
Is there anything I'm not aware of?
I have an app that uses the ImageCaptureCore's ICDeviceBrowser to find and connect to external digital cameras. Prior to iOS 18 this worked just fine, the device browser would start up and find any cameras connected via USB. However since the update the device browser fails to ever detect any connected device or to trigger any delegate events at all after browser start.
I noticed that the Contents authorization in iOS 18 is undetermined, whereas in previous iOS versions it would default to authorized. I tried to resolve this by requesting authorization; however, this immediately returns denied without ever having prompted the app user for permission. I do have the Camera Privacy Usage description set up, and I am also able to request permission for the iOS camera successfully.
How can I successfully request contents authorization via ICC or otherwise?
Or are there alternative Apple libraries I can use for finding and connecting to external digital cameras on iOS?
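For reference, the contents-authorization flow as I understand the ImageCaptureCore API (treat the exact calls as an assumption on my part; this is the request that reportedly returns denied without prompting on iOS 18):

import ImageCaptureCore

let browser = ICDeviceBrowser()
if browser.contentsAuthorizationStatus != .authorized {
    browser.requestContentsAuthorization { status in
        // Expected: a system prompt, then .authorized or .denied
        print("Contents authorization:", status.rawValue)
    }
}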
IIRC, the new Photos highlight videos pick your best photos using Apple Intelligence.
Is there an API or metadata available to use AI to get the best photo out of, e.g., 5 similar photos?
When importing photos, I have on several occasions selected Delete instead of Import. My hand just hovers above the two options, which sit too close together, and the pictures instantly delete from the card. Can you please separate the two?
This issue occurs very frequently when retrieving a newly taken Live Photo, and it leads to a failure in determining the Live Photo type. How can this issue be resolved?
I generated an asset in the photo library by adding the unedited image, adjustmentData, and edited image with PHAssetCreationRequest.addResource(). The image is saved in the photo library as HEIF (verified in the photo library).
Then, when I save the image generated with PHAssetCreationRequest.addResource() to the Files app, it becomes JPEG.
On the other hand, if I edit an image taken with the camera and save it to the Files app, it remains HEIF.
Do you know why this happens?
Also, how can I maintain the HEIF format even when saving to the Files app?
Thanks.
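A sketch of one way to keep the original bytes (my assumption about the cause: exporting through an image-request API can transcode, while copying the asset resource directly should not; asset and destinationURL are hypothetical placeholders):

import Photos

// Sketch: copy the asset's original (HEIF) resource to a file unmodified
let resources = PHAssetResource.assetResources(for: asset)
if let photoResource = resources.first(where: { $0.type == .fullSizePhoto })
    ?? resources.first(where: { $0.type == .photo }) {
    PHAssetResourceManager.default().writeData(for: photoResource, toFile: destinationURL, options: nil) { error in
        if let error { print("Export failed: \(error)") }
    }
}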
Is it possible to sort the user library assets by date captured? The Photos app in iOS 18 lets you choose between Date Captured and Recently Added and I want to offer that same choice in my app. This seems to always sort them by creation date (which I believe is the same as recently added):
let assetCollection = PHAssetCollection.fetchAssetCollections(with: .smartAlbum, subtype: .smartAlbumUserLibrary, options: nil).firstObject!
let fetchResult = PHAsset.fetchAssets(in: assetCollection, options: PHFetchOptions.imageMediaType())
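For comparison, the explicit form of that fetch (a sketch; as far as I can tell, PhotoKit publicly exposes creationDate and modificationDate as sort keys, and whether creationDate corresponds to "Date Captured" or "Recently Added" is exactly the ambiguity in question):

import Photos

// Sketch: fetch with an explicit sort descriptor instead of the smart album's default order
let options = PHFetchOptions()
options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
let sorted = PHAsset.fetchAssets(with: .image, options: options)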
I've had an app that edits photos in your library since the PhotoKit API was released in iOS 8. I know that if you preserved photo metadata, you were required to change the value of Orientation to 1 (up); otherwise PhotoKit would fail to perform the asset change request. When I remove this code, I see that Orientation is changed to 1 automatically, both at the root and in the TIFF dictionary (tested with iOS 18). I wanted to confirm this is expected behavior; does the system do this for us now? If so, can I remove this code for iOS 15+, or did this start happening in a recent iOS version? Thanks!