Hi,
I would like to use macro mode with a custom camera built on AVCaptureDevice in my project. This feature would help automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available and that Apple provides no open APIs for macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible.
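A minimal sketch of the closest workaround, assuming a virtual device is acceptable: configuring the session with .builtInTripleCamera (or .builtInDualWideCamera) lets the system switch constituent lenses automatically for close-up subjects on supported hardware. Nothing here is an official macro toggle; it only shows the virtual-device approach.

import AVFoundation

// A sketch, not an official macro API: a virtual device lets the system choose
// the constituent lens (including the ultra-wide for very close subjects).
func makeCloseUpCapableSession() -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.sessionPreset = .photo

    // Prefer a virtual device; fall back to the plain wide-angle camera.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)
    guard let device = discovery.devices.first,
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return nil }
    session.addInput(input)
    return session
}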
Thank you,
Adil Thamarasseri
Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
I am developing an iOS app with video-call functionality and am implementing Picture in Picture (PiP) mode for those calls. The issue I am facing is that the camera stops capturing video when the app goes to the background, even though the PiP view is still visible.
I have noticed that some apps, like Telegram, manage to keep the camera working in PiP mode while the app is in the background. How can I achieve this in my app?
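For reference, a hedged sketch of the API that seems relevant here (iOS 16 and later): AVCaptureSession exposes multitasking camera access, which is what lets capture continue after the app is backgrounded. Whether it is supported depends on the device and the app, and some cases may require the com.apple.developer.avfoundation.multitasking-camera-access entitlement.

import AVFoundation

// A sketch (iOS 16+): opt in to multitasking camera access when the system reports
// support, so the session can keep running while the app is backgrounded with a
// PiP window on screen.
func enableBackgroundCaptureIfSupported(on session: AVCaptureSession) {
    if session.isMultitaskingCameraAccessSupported {
        session.isMultitaskingCameraAccessEnabled = true
    }
}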
Hi! I'm attempting to add a purpose string for the camera access request, but for some reason it doesn't seem to be working. Below I've included the alert and the plist.
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:
guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else { return }
let gainDict = gainmap as NSDictionary
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice.
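One approach that seems plausible (a sketch, assuming the capture's encoded container carries the gain map): build a CGImageSource from the AVCapturePhoto's fileDataRepresentation() and read the auxiliary dictionary from it, the same way as with a HEIC file on disk.

import AVFoundation
import ImageIO

// A sketch, assuming the captured photo's file data includes the gain map: create an
// image source from the AVCapturePhoto's encoded data and read the HDR gain map
// auxiliary dictionary from it.
func hdrGainMapData(from photo: AVCapturePhoto) -> Data? {
    guard let fileData = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(fileData as CFData, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [String: Any]
    else { return nil }
    return info[kCGImageAuxiliaryDataInfoData as String] as? Data
}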
Thanks!
How can I use my RGB Curve points:
let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]
in the colorCurves filter, which I found in the Apple docs:
func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    // Three R,G,B table entries (9 Float32 values = 36 bytes): shadows, midtones, highlights.
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
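One way to feed the per-channel points into this filter (a sketch, assuming piecewise-linear interpolation between the control points is acceptable): sample each channel's curve at a fixed number of positions across the 0...1 domain and interleave the samples as R, G, B triples into curvesData.

import CoreImage

// A sketch, assuming linear interpolation between control points is good enough:
// build the lookup table that CIColorCurves expects from per-channel (x, y) points.
func curvesData(red: [CIVector], green: [CIVector], blue: [CIVector], samples: Int = 64) -> Data {
    // Piecewise-linear evaluation of a curve given (x, y) control points sorted by x.
    func value(at x: CGFloat, points: [CIVector]) -> Float32 {
        guard let first = points.first, let last = points.last else { return Float32(x) }
        if x <= first.x { return Float32(first.y) }
        if x >= last.x { return Float32(last.y) }
        for i in 1..<points.count where x <= points[i].x {
            let p0 = points[i - 1]
            let p1 = points[i]
            let t = (x - p0.x) / (p1.x - p0.x)
            return Float32(p0.y + t * (p1.y - p0.y))
        }
        return Float32(last.y)
    }

    var table: [Float32] = []
    table.reserveCapacity(samples * 3)
    for i in 0..<samples {
        let x = CGFloat(i) / CGFloat(samples - 1)
        table.append(value(at: x, points: red))
        table.append(value(at: x, points: green))
        table.append(value(at: x, points: blue))
    }
    return table.withUnsafeBufferPointer { Data(buffer: $0) }
}

The result can then be assigned to colorCurvesEffect.curvesData, for example curvesData(red: redCurve, green: greenCurve, blue: blueCurve), with curvesDomain left at CIVector(x: 0, y: 1).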
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import UIKit
import CoreImage
import Photos

struct ContentView: View {
    let context = CIContext()
    let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!

    var body: some View {
        VStack(spacing: 100) {
            Button("Save Photo From CGImage/UIImage") {
                savePhotoFromUIImage()
            }
            Button("Save Photo From CIImage") {
                savePhotoDirectFromCIImage()
            }
        }.padding(60)
    }

    // Convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF.
    private func savePhotoFromUIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
            let uiImage = UIImage(cgImage: outputCGImage)
            if let heicData = uiImage.heicData() {
                saveHEIFPhotoToLibrary(imageData: heicData)
            } else {
                print("Failed to convert UIImage to HEIC")
            }
        }
    }

    // Convert RAW with CIRAWFilter to CIImage, then to HEIF.
    private func savePhotoDirectFromCIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            do {
                let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
                saveHEIFPhotoToLibrary(imageData: heif)
            } catch {
                print("Failed to get HEIF representation from CIContext")
            }
        }
    }

    private func processRAW(url: URL) -> CIImage? {
        guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
        coreRawFilter.extendedDynamicRangeAmount = 2.0 // The issue persists whether this is not set, or set to 0, or set to, say, 2.0.
        guard let ciImage = coreRawFilter.outputImage else { return nil }
        return ciImage
    }

    private func saveHEIFPhotoToLibrary(imageData: Data) {
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            let options = PHAssetResourceCreationOptions()
            creationRequest.addResource(with: .photo, data: imageData, options: options)
        }) { success, error in
            if let error = error {
                print("Error saving photo: \(error.localizedDescription)")
            } else {
                print("Photo saved.")
            }
        }
    }
}
Hello everyone,
I have a SwiftUI app using WKWebView to load a website that includes a form with a file input element. The issue is:
📌 When a user taps “Browse” and selects “Take Photo” (camera option), the app crashes before the camera opens.
Setup Details:
• The app uses SwiftUI with WKWebView
• The crash occurs only when selecting “Take Photo”, but selecting an image from the library works fine.
📌 Full Code (WKWebView in SwiftUI)
import SwiftUI
import WebKit

struct WebViewRepresentable: UIViewRepresentable {
    var urlString: String

    func makeUIView(context: Context) -> WKWebView {
        // Configure before creating the web view; WKWebView copies its configuration at init,
        // so setting configuration properties afterwards has no effect.
        let configuration = WKWebViewConfiguration()
        configuration.allowsInlineMediaPlayback = true
        configuration.mediaTypesRequiringUserActionForPlayback = []
        let webView = WKWebView(frame: .zero, configuration: configuration)
        loadURL(in: webView)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {
        loadURL(in: uiView)
    }

    private func loadURL(in webView: WKWebView) {
        if let url = URL(string: urlString) {
            webView.load(URLRequest(url: url))
        }
    }
}
struct ContentView: View {
    @State private var currentURL: String = "https://fv-wohlensee.ch"

    var body: some View {
        VStack(spacing: 0) {
            // Top area in green
            Color(red: 0, green: 0.4, blue: 0)
                .frame(height: 50)

            // WebView with white background
            WebViewRepresentable(urlString: currentURL)
                .background(Color.white)

            Divider()

            // Navigation buttons
            HStack(spacing: 10) {
                Button {
                    currentURL = "https://fv-wohlensee.ch/vereinshaus-eymatt/"
                } label: {
                    VStack {
                        Image(systemName: "house")
                            .font(.system(size: 18))
                        Text("Klubhaus")
                            .font(.system(size: 12))
                            .minimumScaleFactor(0.7)
                            .lineLimit(1)
                    }
                    .padding(8)
                }
                .foregroundColor(.white)
                .frame(maxWidth: .infinity)

                Button {
                    currentURL = "https://fv-wohlensee.ch/vereinsboot/"
                } label: {
                    VStack {
                        Image(systemName: "ferry.fill")
                            .font(.system(size: 18))
                        Text("Boot")
                            .font(.system(size: 12))
                            .minimumScaleFactor(0.7)
                            .lineLimit(1)
                    }
                    .padding(8)
                }
                .foregroundColor(.white)
                .frame(maxWidth: .infinity)

                Button {
                    currentURL = "https://fv-wohlensee.ch/aktivitaeten/"
                } label: {
                    VStack {
                        Image(systemName: "calendar")
                            .font(.system(size: 18))
                        Text("Aktivitäten")
                            .font(.system(size: 12))
                            .minimumScaleFactor(0.7)
                            .lineLimit(1)
                    }
                    .padding(8)
                }
                .foregroundColor(.white)
                .frame(maxWidth: .infinity)

                Button {
                    currentURL = "https://fv-wohlensee.ch/mitglied-werden/"
                } label: {
                    VStack {
                        Image(systemName: "person.badge.plus")
                            .font(.system(size: 18))
                        Text("Mitglied")
                            .font(.system(size: 12))
                            .minimumScaleFactor(0.7)
                            .lineLimit(1)
                    }
                    .padding(8)
                }
                .foregroundColor(.white)
                .frame(maxWidth: .infinity)
            }
            .padding(.horizontal, 15)
            .padding(.vertical, 10)
            .background(Color(red: 0, green: 0.4, blue: 0))
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .background(Color(red: 0, green: 0.4, blue: 0))
        .ignoresSafeArea()
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
What I’ve Tried:
1️⃣ Checked Info.plist: Added permissions for camera and photo library:
<key>NSCameraUsageDescription</key>
<string>This app requires access to the camera to upload photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app requires access to your photo library.</string>
2️⃣ Enabled Media Capture in WKWebView:
webView.configuration.allowsInlineMediaPlayback = true
webView.configuration.mediaTypesRequiringUserActionForPlayback = []
3️⃣ Tested in Safari: The same form works fine when opened in Safari.
Questions:
❓ Does WKWebView need additional permissions to open the camera?
❓ Do I need to implement a delegate to handle file uploads in SwiftUI?
❓ Has anyone faced this issue and found a fix?
Any guidance would be greatly appreciated! 🚀
Thanks in advance! 😊
I'd like to add a share extension to my app (an Action app extension, I think). The extension would appear when users share a photo in the Photos app (and, ideally, Safari). If you tapped my app icon on the share sheet, iOS would pass the photo to my app and switch the user from Photos or Safari to my full app, with the shared photo(s) available for my app to work with.
I know this is possible, because Instagram (a third-party app) works exactly like this. If you look at an image in the Photos app, tap Share and then tap Instagram, iOS will background the Photos app, activate the Instagram app and let you edit and post your photo in the main Instagram app.
It seems like NSExtensionContext's open(_:completionHandler:) might do this if I add a custom URL scheme to my main app, but the documentation for that method says:
Each extension point determines whether to support this method, or under which conditions to support this method. In iOS, the Today and iMessage app extension points support this method.
That would rule out an Action, Photo Editing or Share extension. But then how does Instagram do this, and how can I achieve the same in my app?
I know that it's possible for an Action, Photo Editing or Share extension to open as a mini-app on top of the app providing the content. But coordinating the IPC for that is much, much more work (for my particular app) than just switching the user over to the app, with full access to all the functionality and data that my main app usually has access to.
Hello,
I am experiencing slow image retrieval when using the requestImageForAsset:targetSize:contentMode:options:resultHandler: method in my application. The delay is significantly impacting the performance of my app.
Here are the details of my implementation:
for (PHAsset *asset in assets) {
    @autoreleasepool {
        PHImageManager *imageManager = [PHImageManager defaultManager];
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        options.synchronous = YES;
        options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
        options.resizeMode = PHImageRequestOptionsResizeModeNone;

        [imageManager requestImageForAsset:asset
                                targetSize:CGSizeMake(100, 100)
                               contentMode:PHImageContentModeAspectFill
                                   options:options
                             resultHandler:^(UIImage *thumbnail, NSDictionary *info) {
            CameraRollCellDto *cellDto = [[CameraRollCellDto alloc] init];
            cellDto.index = index;
            cellDto.thumbnail = thumbnail;
            cellDto.propertyDate = asset.creationDate;
            if (self.segmentedTorikomi.selectedSegmentIndex == SEG_INDEX_IKKATSU) {
                cellDto.isSelected = YES;
            } else {
                cellDto.isSelected = NO;
            }
            [list addObject:cellDto];
        }];
        index++;
    }
}
Has anyone else encountered this issue? Are there any known solutions or optimizations that can help improve the speed of image retrieval using this method?
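One optimization that often helps (a Swift sketch, assuming asynchronous delivery is acceptable for these thumbnails): drop the synchronous option and pre-warm the requested sizes with PHCachingImageManager, so the loop never blocks per asset.

import Photos
import UIKit

// A sketch, assuming asynchronous thumbnails are acceptable: pre-warm the target size
// with PHCachingImageManager and let results arrive via the handler instead of blocking
// each iteration with a synchronous request.
func loadThumbnails(for assets: [PHAsset]) {
    let manager = PHCachingImageManager()
    let targetSize = CGSize(width: 100, height: 100)
    let options = PHImageRequestOptions()
    options.deliveryMode = .opportunistic
    options.resizeMode = .fast

    manager.startCachingImages(for: assets, targetSize: targetSize,
                               contentMode: .aspectFill, options: options)

    for asset in assets {
        manager.requestImage(for: asset, targetSize: targetSize,
                             contentMode: .aspectFill, options: options) { thumbnail, _ in
            // Update the corresponding cell here (dispatch to the main queue if needed).
            _ = thumbnail
        }
    }
}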
Thank you for your assistance.
Our app filters the photo library to a certain date range to make picking photos easier. However, to do this we have to require full access to the photo library. We would like to use PHPickerViewController and have it filter the results by asset creation date, which would let us drop the full-access requirement.
I see other filter options, but not this one. If it isn't there, is this something that is being considered or on a roadmap?
Hello everyone, I need some help with the following.
If you know anything about this, please comment.
Overview
We are planning to develop an app using the “Support external cameras in your iPadOS app” feature introduced in iPadOS 17.
Before implementing this feature, it is necessary for the iPad to recognize external cameras. However, among the iPad models compatible with iPadOS 17, we have found that some of the iPads owned by our development team can recognize external cameras, while others cannot.
If you have any reports regarding compatibility issues or information on how to resolve these problems, please share them with us.
Detailed Explanation:
The results of our investigation are as follows:
External Camera Used: A 360-degree camera
Camera / firmware:
RICOH Theta X: 2.61.0 (2024/12/26, latest)
RICOH Theta Z1
Tested iPads (device / OS version / status):
12.9-inch iPad Pro (3rd generation): iPadOS 17.5.1, OK (external camera recognized)
11-inch iPad Pro (M4): iPadOS 18.2, NG (external camera not recognized)
Verification Method
Step 1: Power on the iPad and the external camera, ensuring both are ready for connection.
Step 2: Connect the iPad and the external camera using a USB-C cable.
Step 3: Launch FaceTime on the iPad and check the displayed camera feed.
If the external camera is recognized, the feed from the external camera will be displayed.
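For reference, a minimal sketch of how an app (as opposed to FaceTime) can check what the system currently recognizes, assuming iPadOS 17's external-camera support: query a discovery session for the .external device type.

import AVFoundation

// A minimal sketch (assuming iPadOS 17 or later): list the external cameras the
// system currently recognizes over USB-C.
func externalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified)
    return discovery.devices
}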
Is reading value(forKey: "fileSize") from a PHAssetResource considered accessing non-public API?
Has your app been rejected at the review stage because of this?
let resources = PHAssetResource.assetResources(for: asset)
if let resource = resources.first {
    if let fileSize = resource.value(forKey: "fileSize") as? Int {
        return fileSize
    }
}
Hey everyone 😊, I am building an app that includes a live camera feed preview. That's all I need to do, alongside identifying the images with Create ML's image classification. I don't need to capture images at all. I've seen some very complicated tutorials, but I just want to use a couple of lines of code.
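For reference, a compact sketch of a bare preview (assuming a UIKit preview layer is acceptable; frames for the Create ML classifier can be added later with an AVCaptureVideoDataOutput):

import AVFoundation
import UIKit

// A sketch: show the back camera's live feed without ever capturing photos.
final class CameraPreviewViewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        // startRunning() blocks, so start the session off the main thread.
        DispatchQueue.global(qos: .userInitiated).async { self.session.startRunning() }
    }
}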
Can I also use the "Support external cameras in your iPadOS app" feature and Swift to read multiple camera feeds?
thanks
Hey, I have a complex CIFilter chain I'm trying to debug to improve processing time. Is there any documentation on what all the colours mean and the naming, e.g. sRGB Linear_to_workspace?
Thanks
Alex
I set both the AVCapturePhotoOutput and the AVCapturePhotoSettings with maxPhotoDimensions = .init(width: 8064, height: 6048), but I still get a 12 MP photo.
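One thing worth checking (a hedged sketch, assuming the active format is the limiting factor): the output's maxPhotoDimensions only takes effect if the device's activeFormat supports those dimensions, so a format that lists 8064x6048 among its supportedMaxPhotoDimensions has to be selected first.

import AVFoundation

// A sketch, assuming an activeFormat supporting 8064x6048 is the missing piece:
// pick such a format, then raise the output's maxPhotoDimensions to match.
func configureFor48MP(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if let format = device.formats.first(where: { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        device.activeFormat = format
        photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    }
}

The per-capture AVCapturePhotoSettings.maxPhotoDimensions can then be set to the same value before calling capturePhoto(with:delegate:).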
For a while I had one photo widget (no special app, just the standard Apple one) set to shuffle through an album of pictures of my boyfriend, and there were no problems at all. A few weeks later I added another widget that shuffles through an album of pictures of my cat. That one worked fine, but it made the first one stop working: it just showed a blank white widget, with no error message or anything. So I removed the cat widget, hoping the other one would go back to working, but it didn't. Otherwise I only have widgets for Find My and my bank, plus apps on my home screen.
What is the purpose of AdjustmentsSecondary.data included in the PHAssetResource for a cleaned-up image?
When using creationRequest.addResource, what should be set for the PHAssetResourceType?
If I set the PHAssetResourceType as follows to create an asset, it appears correctly in the camera roll. However, when attempting to edit the image in the Photos app, the app crashes:
IMG_5332.HEIC → .photo
FullSizeRender.HEIC → .fullSizePhoto
Adjustments.plist → .adjustmentData
AdjustmentsSecondary.data → .adjustmentData
I'm a new app developer and am trying to add a button that adds pictures from the photo library AND the camera. I added the first function (adding pictures from the photo library) using the newish PhotosPicker, but I can't find a way to do the same thing for the camera. Should I just tough it out and use the UIViewControllerRepresentable approach I've seen in all of the YouTube tutorials I've come across?
I also want the user to be able to crop the picture in the app after they take it.
Thanks in advance
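In case it helps, a sketch of that UIViewControllerRepresentable approach (assuming UIImagePickerController is acceptable; its allowsEditing flag provides a basic square crop step right after capture):

import SwiftUI
import UIKit

// A sketch: wrap UIImagePickerController so the camera can be presented from SwiftUI.
struct CameraPicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.allowsEditing = true   // lets the user crop right after taking the photo
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        private let parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = (info[.editedImage] ?? info[.originalImage]) as? UIImage
            parent.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.dismiss()
        }
    }
}

It can be presented with .fullScreenCover from the same button that offers the PhotosPicker, so the library path and the camera path coexist.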