I'd like to add a share extension to my app (an Action app extension, I think). The extension would appear when users share a photo in the Photos app (and, ideally, Safari). If you tapped my app icon on the share sheet, iOS would pass the photo to my app and switch the user from Photos or Safari to my full app, with the shared photo(s) available for my app to work with.
I know this is possible, because Instagram (a third-party app) works exactly like this. If you look at an image in the Photos app, tap Share and then tap Instagram, iOS will background the Photos app, activate the Instagram app and let you edit and post your photo in the main Instagram app.
It seems like NSExtensionContext's open(_:completionHandler:) might do this if I add a custom URL scheme to my main app, but the documentation for that method says:
Each extension point determines whether to support this method, or under which conditions to support this method. In iOS, the Today and iMessage app extension points support this method.
That would rule out an Action, Photo Editing or Share extension. But then how does Instagram do this, and how can I achieve the same in my app?
I know that it's possible for an Action, Photo Editing or Share extension to open as a mini-app on top of the app providing the content. But coordinating the IPC for that is much, much more work (for my particular app) than just switching the user over to the app, with full access to all the functionality and data that my main app usually has access to.
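For reference, here is the call in question as it would look from an extension's view controller; a minimal sketch, assuming a hypothetical myapp:// scheme registered by the containing app. Given the documentation quoted above, from an Action or Share extension the completion handler may simply report failure:

import UIKit

class ActionViewController: UIViewController {
    // Hypothetical custom scheme registered by the containing app.
    private let containingAppURL = URL(string: "myapp://import")!

    func openContainingApp() {
        // Documented to be supported only by the Today and iMessage
        // extension points; elsewhere this may do nothing and report false.
        extensionContext?.open(containingAppURL) { success in
            if !success {
                print("open(_:completionHandler:) is not supported here")
            }
        }
    }
}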
Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
Hello,
I am experiencing slow image retrieval when using the requestImageForAsset:targetSize:contentMode:options:resultHandler: method in my application. The delay is significantly impacting the performance of my app.
Here are the details of my implementation:
NSMutableArray<CameraRollCellDto *> *list = [NSMutableArray array];
NSInteger index = 0;

for (PHAsset *asset in assets) {
    @autoreleasepool {
        PHImageManager *imageManager = [PHImageManager defaultManager];
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        // Synchronous delivery blocks this thread until each image is ready.
        options.synchronous = YES;
        options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
        options.resizeMode = PHImageRequestOptionsResizeModeNone;
        [imageManager requestImageForAsset:asset
                                targetSize:CGSizeMake(100, 100)
                               contentMode:PHImageContentModeAspectFill
                                   options:options
                             resultHandler:^(UIImage *thumbnail, NSDictionary *info) {
            // Runs inline because the request is synchronous.
            CameraRollCellDto *cellDto = [[CameraRollCellDto alloc] init];
            cellDto.index = index;
            cellDto.thumbnail = thumbnail;
            cellDto.propertyDate = asset.creationDate;
            cellDto.isSelected = (self.segmentedTorikomi.selectedSegmentIndex == SEG_INDEX_IKKATSU);
            [list addObject:cellDto];
        }];
        index++;
    }
}
Has anyone else encountered this issue? Are there any known solutions or optimizations that can help improve the speed of image retrieval using this method?
Thank you for your assistance.
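Not an authoritative answer, but the synchronous, per-iteration setup above is a common cause of slow batch thumbnail loading. A minimal Swift sketch of the usual mitigation: asynchronous requests with opportunistic delivery plus a shared PHCachingImageManager (the function name and completion signature are mine):

import Photos
import UIKit

let imageManager = PHCachingImageManager()   // one shared manager, not one per loop iteration

func loadThumbnails(for assets: [PHAsset], completion: @escaping (Int, UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isSynchronous = false          // don't block the calling thread
    options.deliveryMode = .opportunistic  // fast degraded image first, better one later
    options.resizeMode = .fast             // let Photos downscale cheaply
    options.isNetworkAccessAllowed = true  // handle iCloud-optimized assets

    let targetSize = CGSize(width: 100, height: 100)
    // Pre-warm the cache for the whole batch.
    imageManager.startCachingImages(for: assets, targetSize: targetSize,
                                    contentMode: .aspectFill, options: options)

    for (index, asset) in assets.enumerated() {
        imageManager.requestImage(for: asset, targetSize: targetSize,
                                  contentMode: .aspectFill, options: options) { image, _ in
            completion(index, image)  // may fire twice per asset in opportunistic mode
        }
    }
}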
Our app filters the photo library to a certain date range for ease of picking photos. However, to do this we currently have to request full access to the photo library. We would like to use PHPickerViewController and have it filter the results by the asset's creation date, which would let us adopt it.
I see other filter options, but not this one. If it isn't there, is this something that is being considered or on a roadmap?
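For context, a sketch of what PHPickerConfiguration exposes today; the available PHPickerFilter options select asset types, not metadata such as creationDate, so a date filter would indeed be new API:

import PhotosUI

var config = PHPickerConfiguration(photoLibrary: .shared())
config.filter = .any(of: [.images, .livePhotos])  // type-based filters only
config.selectionLimit = 0                         // 0 means no limit
let picker = PHPickerViewController(configuration: config)
// picker.delegate = self  // conform to PHPickerViewControllerDelegate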
Hello everyone, I need some help with the following. If you know anything about it, please comment.
Overview
We are planning to develop an app using the “Support external cameras in your iPadOS app” feature introduced in iPadOS 17.
Before implementing this feature, it is necessary for the iPad to recognize external cameras. However, among the iPad models compatible with iPadOS 17, we have found that some of the iPads owned by our development team can recognize external cameras, while others cannot.
If you have any reports regarding compatibility issues or information on how to resolve these problems, please share them with us.
Detailed explanation:
The results of our investigation are as follows:
External camera used: a 360-degree camera
Cameras and firmware:
RICOH Theta X: 2.61.0 (latest, 2024/12/26)
RICOH Theta Z1
Tested iPads:
12.9-inch iPad Pro (3rd generation), iPadOS 17.5.1: OK (external camera recognized)
11-inch iPad Pro (M4), iPadOS 18.2: NG (external camera not recognized)
Verification Method
Step 1: Power on the iPad and the external camera, ensuring both are ready for connection.
Step 2: Connect the iPad and the external camera using a USB-C cable.
Step 3: Launch FaceTime on the iPad and check the displayed camera feed.
If the external camera is recognized, the feed from the external camera will be displayed.
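In case it helps isolate the problem: iPadOS 17 exposes external (UVC) cameras through AVFoundation's .external device type, so a small discovery check shows whether the OS sees the camera at all, independent of FaceTime. A sketch (a diagnostic, not a fix); note the Theta must be in a mode where it presents itself as a UVC webcam:

import AVFoundation

// Lists all video devices AVFoundation can see, including USB cameras
// exposed by the iPadOS 17 external-camera support.
func listCameras() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .external],
        mediaType: .video,
        position: .unspecified)
    for device in discovery.devices {
        print(device.localizedName, device.deviceType.rawValue)
    }
}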
Is reading a PHAssetResource's fileSize via value(forKey:) considered accessing non-public API?
Has your app been rejected at review stage because of this?
let resources = PHAssetResource.assetResources(for: asset)
if let resource = resources.first {
    if let fileSize = resource.value(forKey: "fileSize") as? Int {
        return fileSize
    }
}
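I can't speak for App Review, but "fileSize" is not a documented key, so it is at least fragile. A fully public alternative is to stream the resource's bytes and count them; slower, but it uses only documented API (the function name is mine):

import Photos

func fileSize(of asset: PHAsset, completion: @escaping (Int?) -> Void) {
    guard let resource = PHAssetResource.assetResources(for: asset).first else {
        completion(nil)
        return
    }
    var size = 0
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = false  // don't download iCloud originals just to size them
    PHAssetResourceManager.default().requestData(for: resource, options: options) { data in
        size += data.count                  // accumulate the streamed chunks
    } completionHandler: { error in
        completion(error == nil ? size : nil)
    }
}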
Hey everyone 😊, I am building an app that includes a live camera feed preview. That's all I need to do, alongside identifying the images with Create ML's image classification. I don't need to capture images at all. I've seen some very complicated tutorials; I just want to use a couple of lines of code.
Can I combine the "Support external cameras in your iPadOS app" feature with Swift to read multiple camera feeds?
thanks
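It does fit in relatively little code. A minimal sketch, assuming a Create ML model class named MyClassifier (a placeholder) and camera permission already granted; frames go from AVCaptureVideoDataOutput into Vision:

import AVFoundation
import UIKit
import Vision

final class PreviewViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    // MyClassifier is a placeholder for your generated Create ML model class.
    private lazy var request = VNCoreMLRequest(model: try! VNCoreMLModel(for: MyClassifier().model))

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)

        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        preview.videoGravity = .resizeAspectFill
        view.layer.addSublayer(preview)

        DispatchQueue.global().async { self.session.startRunning() }
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print(top.identifier, top.confidence)  // top classification per frame
        }
    }
}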
Hey, I have a complex CIFilter chain I'm trying to debug to improve processing time. Is there any documentation on what all the colours in the filter-graph visualization mean, and on the naming, e.g. sRGB Linear_to_workspace?
Thanks
Alex
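If the visualization is the one produced by Core Image's CI_PRINT_TREE facility (shown in Apple's WWDC 2020 Core Image session), the Linear_to_workspace labels mark the colour-space conversions Core Image inserts, and the colours distinguish node types. A minimal sketch, assuming the variable takes effect for contexts created after it is set; it is more commonly set in the Xcode scheme's environment variables:

import CoreImage

setenv("CI_PRINT_TREE", "graphviz", 1)  // emit the filter graph in Graphviz format
let context = CIContext()               // trees are printed as this context renders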
I set maxPhotoDimensions = .init(width: 8064, height: 6048) on both the AVCapturePhotoOutput and the AVCapturePhotoSettings, but I still get a 12 MP photo.
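In case it is the missing piece: maxPhotoDimensions is constrained by the device's activeFormat, so a format whose supportedMaxPhotoDimensions actually include 8064x6048 has to be selected first. A sketch (the function name is mine):

import AVFoundation

func enable48MP(device: AVCaptureDevice, output: AVCapturePhotoOutput) throws {
    // Pick a format that supports 48 MP stills.
    if let format = device.formats.first(where: { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }
    output.maxPhotoDimensions = .init(width: 8064, height: 6048)
    // AVCapturePhotoSettings.maxPhotoDimensions must then be set per capture as well.
}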
For a while I had one photo widget (no special app, just the standard Apple one), set to shuffle through an album of pictures of my boyfriend. No problems at all. A few weeks later I added a second widget to shuffle through an album of pictures of my cat. That one worked fine, but it made the first one stop working: it just showed a blank white widget, with no error message or anything. So I removed the cat widget, hoping the other one would go back to working, but it didn't. Otherwise I only have the widgets for Find My and my bank, plus apps, on my home screen.
What is the purpose of AdjustmentsSecondary.data included in the PHAssetResource for a cleaned-up image?
When using creationRequest.addResource, what should be set for the PHAssetResourceType?
If I set the PHAssetResourceType as follows to create an asset, it appears correctly in the camera roll. However, when attempting to edit the image in the Photos app, the app crashes:
IMG_5332.HEIC → .photo
FullSizeRender.HEIC → .fullSizePhoto
Adjustments.plist → .adjustmentData
AdjustmentsSecondary.data → .adjustmentData
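For reference, a sketch of the creation request with the mapping described above; the file URLs are placeholders. Since there is no dedicated PHAssetResourceType for AdjustmentsSecondary.data, passing it as a second .adjustmentData resource is presumably what confuses the Photos editor:

import Photos

PHPhotoLibrary.shared().performChanges {
    let request = PHAssetCreationRequest.forAsset()
    request.addResource(with: .photo, fileURL: originalURL, options: nil)             // IMG_5332.HEIC
    request.addResource(with: .fullSizePhoto, fileURL: renderedURL, options: nil)     // FullSizeRender.HEIC
    request.addResource(with: .adjustmentData, fileURL: adjustmentsURL, options: nil) // Adjustments.plist
} completionHandler: { success, error in
    print(success, error ?? "")
}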
I'm a new app developer and am trying to add a button that adds pictures from both the photo library AND the camera. I added the first function (adding pictures from the photo library) using the new-ish PhotosPicker, but I can't find a way to do the same thing for the camera. Should I just tough it out and use the UIViewControllerRepresentable approach that I've seen in all of the YouTube tutorials I've come across?
I also want the user to be able to crop the picture in the app after they take it.
Thanks in advance
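As far as I know there is still no SwiftUI-native camera counterpart to PhotosPicker, so wrapping UIImagePickerController is the usual route. A minimal sketch; allowsEditing gives you the system's built-in (square-only) crop step, so a custom crop UI is needed for anything fancier:

import SwiftUI
import UIKit

struct CameraPicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.allowsEditing = true  // system square-crop step after capture
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        let parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = (info[.editedImage] ?? info[.originalImage]) as? UIImage
            parent.dismiss()
        }
    }
}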
Hello
Our application backs up the user's photos to a back end.
When retrieving the asset data from the photo library, we set the flag isNetworkAccessAllowed to true, to also get assets that might be optimized in iCloud.
In the application logs, we can see the message below, and it shows as coming from com.apple.photos.backend (PhotoKit)
Missing prefetched properties for PHAssetAdjustmentProperties on <PHAsset: 0x160b1ec00> BCF5688F-F7A7-4196-AFC7-A84E8BD95F3E/L0/001 mediaType=1/0, sourceType=1, (5601x3734), creationDate=2022-01-24 23:36:05 +0000, location=0, hidden=0, favorite=0, adjusted=0 . Fetching on demand on the main queue, which may degrade performance.
In particular, the message says 'Fetching on demand on the main queue', but I'm not sure whether that means PhotoKit will fetch on the main queue, or that our application is requesting the data on the main queue.
Could anyone clarify?
thanks
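One common reading of that log line (not authoritative): some code path is touching PHAsset properties that were not prefetched, from the main thread, so PhotoKit fetches them on demand there. A sketch of keeping the per-asset work off the main queue, with network access enabled as described; the upload call is a placeholder:

import Photos

func backUp(asset: PHAsset) {
    DispatchQueue.global(qos: .utility).async {
        let options = PHImageRequestOptions()
        options.isNetworkAccessAllowed = true  // fetch iCloud-optimized originals if needed
        PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
            guard let data else { return }
            // upload(data)  // placeholder for the back-end call
        }
    }
}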
Hi everyone,
I am working on a 3D reconstruction project.
Recently I have been able to retrieve the intrinsics from the two cameras on the back of my iPhone.
One consideration is that I want this app to run even if there is no LiDAR, as long as there are at least two cameras on the back. If there is a LiDAR, that is something I have considered working with later in the course of the project.
I am using an AVCaptureSession with these two AVCaptureDevice cameras:
builtInWideAngleCamera
builtInUltraWideCamera
The intrinsic matrices seem to be correct. However, when I retrieve the extrinsics, e.g., builtInWideAngleCamera w.r.t. builtInUltraWideCamera, the matrix I get looks like this:
Extrinsic Matrix (Ultra-Wide to Wide):
[ 0.9999968,     0.0008149305, -0.0023960583, 0.0]
[-0.0008256607,  0.9999896,    -0.0044807075, 0.0]
[ 0.002392382,   0.0044826716,  0.99998707,   0.0]
[-14.277955,    -8.135408e-10, -0.3359985,    0.0]
The extrinsic matrix should have the form [R | t]. The rotation part seems correct, but the translation vector reads as all zeros, which would suggest the cameras are physically on top of each other; the last element is also not 1, as I would expect for homogeneous coordinates.
Has anyone encountered this 'issue' before?
Is there a flaw in my reasoning or something I might be missing?
Any comments are very much appreciated.
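One thing worth double-checking: AVCaptureDevice.extrinsicMatrix(from:to:) returns a matrix_float4x3 packed in a Data value, and simd matrices are stored column by column, so printing the raw floats four at a time makes the translation column look like a fourth row. Read that way, the last line above would be the translation (roughly -14.3 mm, 0, -0.34 mm), which is plausible for the physical spacing between the two cameras. A sketch of unpacking it:

import AVFoundation
import simd

func extrinsics(from: AVCaptureDevice, to: AVCaptureDevice) -> (R: simd_float3x3, t: SIMD3<Float>)? {
    guard let data = AVCaptureDevice.extrinsicMatrix(from: from, to: to) else { return nil }
    var matrix = matrix_float4x3()
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    let rotation = simd_float3x3(matrix.columns.0, matrix.columns.1, matrix.columns.2)
    let translation = matrix.columns.3  // per the documentation, in millimeters
    return (rotation, translation)
}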
I explored several methods to trigger a 35mm camera connected via USB:
1- ICCameraDevice: Unable to make it work with Canon cameras (details).
2- Canon's EDSDK: Works but is complex to implement.
3- gPhoto2 (command-line): Simple to use but requires gPhoto2 to be installed.
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
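For completeness, option 3 can be driven from a macOS app by shelling out to the gPhoto2 CLI, which handles most Canon PTP quirks; the Homebrew install path below is an assumption:

import Foundation

let task = Process()
task.executableURL = URL(fileURLWithPath: "/opt/homebrew/bin/gphoto2")  // adjust to your install
task.arguments = ["--capture-image-and-download", "--filename", "shot.jpg"]
try task.run()
task.waitUntilExit()  // shot.jpg is written to the current directory on success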
I’m building a camera app using SwiftUI and UIKit (with UIViewControllerRepsrwsentable). My app already is able to capture photos, but I also want to implement the important feature - apply my custom image filter to the image for live preview in camera and when this image is saving to the photo library (like in the default Apple camera app with Photographic styles).
My image filter must be pretty advanced because I’m a photographer and I trying to achieve the same colours as I have with my custom image preset in Lightroom. I want to control the image parameters such as basic (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green, Blue channels separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), apply colour grading and more.
Currently I’m straggling with implementation of this. I tried to create a custom image filter using Metal (it works with saturation) but I’m not sure if it is the best approach. I need help and recommendations of how developers implement this complex thing in their apps (what technologies should I use and etc.)
I have a photo editing app which uses a simple Metal renderer to display CIFilter output images. It works just fine under Swift 5, but under Swift 6 it crashes on starting the Metal command buffer, with an error in the queue com.Metal.CompletionQueueDispatch (serial).
The crash occurs before I can debug it. I changed the command buffer to report errors via errorOptions = .encoderExecutionStatus, but no luck getting insight into the source of the crash.
Likewise, the error happens before any of the usual Metal debug tools are enabled.
The Metal renderer works just fine in Swift 5, and also works fine with almost all of the Swift compiler's upcoming feature flags set to Yes. (The "Default Internal Imports" flag is still No; the number of compile errors with that setting is absolutely scary, but that's another topic.)
Do you have any suggestions on debugging, or ideas on why the Metal library is crashing in Swift 6?
Everything is current release versions and hardware.
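For reference, a sketch of the error-options setup described above, with a completed-handler that only touches local, Sendable data. Crashes on com.Metal.CompletionQueueDispatch under Swift 6 often turn out to be a completion handler capturing non-Sendable renderer or view state, so that is worth auditing first:

import Metal

func encodeFrame(queue: MTLCommandQueue) {
    let descriptor = MTLCommandBufferDescriptor()
    descriptor.errorOptions = .encoderExecutionStatus
    guard let commandBuffer = queue.makeCommandBuffer(descriptor: descriptor) else { return }

    commandBuffer.addCompletedHandler { buffer in
        // Runs on Metal's completion queue: read only Sendable values here.
        if let error = buffer.error as NSError? {
            print("GPU error:", error, error.userInfo[MTLCommandBufferEncoderInfoErrorKey] ?? "")
        }
    }
    // ... encode render passes here ...
    commandBuffer.commit()
}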
Hello Devs!
Does anyone have an idea whether it is feasible to override the native camera on iOS?
So if a user has an app called "xyz" installed, when they open the native camera and a QR code is detected, we display a popup asking whether they want to continue in "xyz".
Thanks
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I should mention that my iMac is pretty old, running macOS High Sierra 10.13.6, and will not let me update to a newer version. Anyway, the main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to upload, the error message reads: "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to my iMac.
This was not a problem before the recent iPhone update. I tried closing the Photo app and reopening it. I tried restarting my iPhone and iMac but nothing seems to work. Any help would be much appreciated.
Hi all,
we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning tiniest barcodes and small business cards to scanning barcodes or large documents from far distances we preferably rely on the triple camera devices, if available, with automatic constituent device switching.
This approach used to be working perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.
Unfortunately the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow, and sometimes it does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then at a very close object that is maybe 2" away. The iPhone 15 Pro always switches immediately to the ultra-wide camera here, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.
Of course we assumed that our code is responsible for these issues. So we experimented with restricting the devices and so on. Then we stripped more and more configuration code but nothing we tried improved the situation.
So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Run it on various iPhones and aim at a far distance (> 10 feet), then quickly at a very close distance (< 5 inches).
Here is a list of devices and our test results:
iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable
Questions:
Does anyone else have seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed?
Further the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the constituent active device. Not dramatic, but compared to the other phones it looks like a glitch.
Thank you very much!
Kind regards,
Sebastian
import UIKit
import AVFoundation

class ViewController: UIViewController {

    var captureSession: AVCaptureSession!
    var captureDevice: AVCaptureDevice!
    var captureInput: AVCaptureInput!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { authorized in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()

            // Log every switch of the active constituent camera (wide/ultra-wide/tele).
            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new]) { device, _ in
                let type = device.activePrimaryConstituent?.deviceType.rawValue ?? "none"
                print("Device type: \(type)")
            }
            // Log zoom factor changes.
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new]) { device, _ in
                print("Zoom: \(device.videoZoomFactor)")
            }

            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        // The virtual triple camera switches between its constituent devices.
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.bounds  // bounds, not frame, for a full-screen sublayer
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = device.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = device.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
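One speculative thing to try in this sample (this is the documented control over constituent switching, though not a confirmed fix for the iPhone 16 Pro behaviour): explicitly setting the primary constituent switching behaviour, and comparing activePrimaryConstituentDeviceSwitchingBehavior across devices to see whether the 16 Pro defaults differ:

func unlockSwitching(_ device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // .auto lets the virtual device switch cameras freely; an empty
        // condition set clears any restrictions.
        device.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
            restrictedSwitchingBehaviorConditions: [])
        device.unlockForConfiguration()
    } catch {
        print("could not lock device: \(error)")
    }
}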