Discuss using the camera on Apple devices.

Posts under Camera tag

171 Posts
Sort by: Post | Replies | Boosts | Views | Activity

Minimum required device capabilities for iPhones that have true depth camera?
Hello, I am trying to submit my app to the App Store and I want to make sure that it can only be installed on iPhones with a TrueDepth camera. I have tried adding "iPhone / iPad Minimum Performance A12" to the minimum required device capabilities in Info.plist, but it does not seem to work: I can still open my app on a phone that does not have a TrueDepth camera. Is there a way to require a TrueDepth camera through Info.plist, or should I hard-code the check in my app? Your assistance is greatly appreciated.
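As far as I know there is no UIRequiredDeviceCapabilities key that maps specifically onto "has a TrueDepth camera", so a common fallback is a runtime check that gates the depth features inside the app. A minimal sketch of such a check (the helper name is illustrative):

```swift
import AVFoundation

// Runtime check for a TrueDepth (front-facing depth) camera.
// Call this before enabling any feature that needs the depth sensor.
func hasTrueDepthCamera() -> Bool {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTrueDepthCamera],
        mediaType: .video,
        position: .front
    )
    return !discovery.devices.isEmpty
}
```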
0
0
7
24m
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I’m building a camera app where I am applying real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur. Is there a more efficient way to do this? The simplified chain is like this:
1. Scale down the viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depthMap CVPixelBuffer to match the viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale up the final image to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination
From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload this to the Metal view? Any pointers greatly appreciated!
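One low-cost experiment, assuming Lanczos quality is not strictly needed for the intermediate resizes: replace CILanczosScaleTransform with a plain affine scale, which samples bilinearly and is usually cheaper on the GPU. A minimal sketch (the helper name is illustrative); whether the quality difference is visible after the variable blur is something only the app can judge:

```swift
import CoreGraphics
import CoreImage

// Bilinear downscale via an affine transform, as a cheaper alternative to
// CIFilter.lanczosScaleTransform() for intermediate passes.
func downscaled(_ image: CIImage, by factor: CGFloat) -> CIImage {
    image.transformed(by: CGAffineTransform(scaleX: factor, y: factor))
}
```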
2
0
90
2h
How does Final Cut Camera synchronize videos
I have an application that enables recording video from multiple iPhones through an iPad. It uses Multipeer Connectivity for all the device communication. When the user presses record on the iPad, it sends a command to each device in parallel and they start capturing video. But since network latency varies, I cannot guarantee that the recording start and stop times are consistent among all the iPhones. I need the frames to be exactly in sync. I tried using the system clock on each device for synchronizing the videos. If all the device system clocks were in sync within 3 ms (at 30 frames per second), then it should be okay. But I tested it and the clocks vary quite a bit, by multiple seconds, so that won't work. I ultimately solved the problem by having a countdown timer on the iPad: the user puts the iPad in view of each phone with the countdown, and later I use a Python script to cut all the videos where the countdown timer reaches 0. But that's more work for the end user and requires manual work on our end. With a little ML text recognition, this could get better. Some people have suggested using a time server and syncing the clocks that way. I still haven't tried this out, and I'm not sure whether it's even possible to run an NTP server on an iPad, or whether the NTP resolution will be below 3 ms. I tried out Final Cut Camera and it has solved the synchronization problem: each frame is in sync. The phones don't start and stop at exactly the same time, and Final Cut Camera compensates by adding black frames to the front and/or back of the videos to account for the differences. I've searched online and other people have the same problem. I'd love to know how Apple was able to solve the synchronization issue when recording video from multiple iPhones from an iPad over what I assume is Multipeer Connectivity.
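Apple hasn't published how Final Cut Camera does this, but one plausible approach (an assumption, not a description of their implementation) is an NTP-style offset estimate over the existing Multipeer Connectivity link: each iPhone answers a timestamped ping, the iPad derives a per-device clock offset, and that offset is later applied to the frame timestamps when trimming. The offset math is small; the transport and the names below are illustrative:

```swift
import Foundation

// NTP-style clock-offset estimate from a single request/response exchange.
// t0: request sent (iPad clock), t1: request received (iPhone clock),
// t2: response sent (iPhone clock), t3: response received (iPad clock).
// Averaging several exchanges and keeping the ones with the shortest
// round trip usually brings the estimate well under one frame duration.
func clockOffset(t0: TimeInterval, t1: TimeInterval,
                 t2: TimeInterval, t3: TimeInterval) -> TimeInterval {
    ((t1 - t0) + (t2 - t3)) / 2
}
```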
1
0
81
4d
iPhone 14 & 15 Pro model camera focusing error / Unity app
Hi, I would like to ask your advice about our iOS app, which has a problem with the iPhone 14 Pro and iPhone 15 Pro camera focusing on close objects. The game is made with Unity and the idea is that kids can scan random QR codes and barcodes to catch monsters. On the non-Pro iPhones the camera focuses and reads the barcodes well, but on the new models with the triple back camera system our app does not focus up close, which makes barcode reading hard. It seems that the iPhone Pro models do not switch to the close-focusing lens in a third-party app the way they do in the built-in Camera app. Do you have any advice on how to solve the focusing problem? I appreciate your help!
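On the Pro models the wide camera has a relatively long minimum focus distance, and the automatic macro switching of the built-in Camera app is not something a third-party capture session gets for free. One approach, loosely following Apple's AVCamBarcode sample and sketched here under assumptions (the helper name and the desired subject distance are illustrative, and in a Unity app this would live in a native camera plugin), is to zoom in so a code held closer than the lens can focus still fills enough of the frame at the minimum focus distance:

```swift
import AVFoundation

// Zoom in so a barcode held at `desiredSubjectDistanceMM` is still readable
// even though the lens can only focus at its minimum focus distance.
func applyCloseUpZoom(to device: AVCaptureDevice,
                      desiredSubjectDistanceMM: Float = 200) {
    // minimumFocusDistance is in millimeters, -1 if unknown (iOS 15+).
    let minFocusMM = Float(device.minimumFocusDistance)
    guard minFocusMM > 0, minFocusMM > desiredSubjectDistanceMM else { return }
    let zoom = CGFloat(minFocusMM / desiredSubjectDistanceMM)
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = min(zoom, device.activeFormat.videoMaxZoomFactor)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```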
0
0
214
2w
Issue with iPhone QR Code Contact Preview Displaying Email Instead of Organization
Hi everyone, I’m encountering an issue with how iPhone displays contact information from a vCard QR code in the contact preview. When I scan the QR code with my iPhone camera, the contact preview shows the email address between the name and the contact image, instead of displaying the organization name. Here’s the structure of the vCard I’m using:
BEGIN:VCARD
VERSION:3.0
FN:Ahmad Rana
N:Rana;Ahmad;;;
ORG:Company 3
TEL;TYPE=voice,msg:+1234567890
EMAIL:a(at the rate)gmail.com
URL:https://example.com
IMPP:facebook:fb
END:VCARD
What I expect: when I scan the code with the camera, the contact preview (before the contact is created) should show the organization name between the name and the image, but I get the email there instead. If only the organization is passed it displays correctly, but as soon as I also pass an email, the email is displayed in between.
Steps I’ve taken:
- Verified the vCard structure to ensure it follows the standard format.
- Reordered the fields in the vCard to prioritize the organization name and job title.
- Tested with a simplified vCard containing only the name, organization, and email.
Despite these efforts, the email address continues to be displayed in the contact preview between the name and the contact image, while the organization name is not shown as expected. Question: how can I ensure that the organization name is displayed correctly in the contact preview on iPhone when scanning a QR code? Are there specific rules or best practices for field prioritization in vCards that I might be missing? I would appreciate any insights or suggestions on how to resolve this issue. Thank you!
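For anyone trying to reproduce this, the simplified test case the post mentions would look something like the following (values illustrative); with the EMAIL line removed, the preview reportedly shows the organization as expected:

```
BEGIN:VCARD
VERSION:3.0
FN:Ahmad Rana
ORG:Company 3
EMAIL:a(at the rate)gmail.com
END:VCARD
```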
1
0
234
2w
Improving object separation with live depth data
Hey, I'm building a portrait mode into my camera app but I'm having trouble matching the quality of Apple's native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream, which works quite well, but the definition of the object in focus looks quite bad in some scenarios. See the comparison below with Apple's UI + depth data. What I don't quite understand is how Apple is able to do such a good cutout around my hand, assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of the background, and this shows in the blur preview, but Apple gets around this. Does anyone have any ideas? Thanks!
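Apple's exact matting pipeline isn't documented, but one guess is that depth alone isn't the whole story and a semantic matte is blended in. A hedged sketch of that idea using Vision's person segmentation (iOS 15+), whose mask could be combined with the depth map to tighten the edges around hands and hair:

```swift
import CoreVideo
import Vision

// Generate a per-frame person mask that can be multiplied or max-combined
// with the depth-based mask before feeding CIFilter.maskedVariableBlur.
func personMask(for pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced          // .accurate is sharper but slower
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])           // errors ignored in this sketch
    return request.results?.first?.pixelBuffer
}
```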
0
0
193
3w
Frame Discontinuity Reasons
I have an app that uses an AVCaptureMultiCamSession whose devices are the builtInUltraWideCamera and builtInLiDARDepthCamera cameras. Occasionally when outside I get some frame drops due to discontinuity that end in the media services being reset:
[06-24 11:27:13][CameraSession] Capture session runtime error: AVError(_nsError: Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.})
This runtime error notification is always preceded by 4-5 frame drops:
[06-24 11:27:10][CaptureSession] Dropped frame because Discontinuity
Logging the system temperature shows:
[06-24 11:27:10][CaptureSession] Temperature is 'Fair'
I have an inclination that the frame discontinuity is being caused by the whiteBalanceMode of the capture session; perhaps the algorithm requires 5 recent frames to work. I had a similar problem with the LiDAR depth camera where, with filtering enabled, exactly 5 frame drops would make the media services reset. When the whiteBalanceMode is locked I do slightly better, with 10 frame drops before the media services are reset. Is there any logging utility to determine the actual reason? All of these sample buffers come with no info attachment, only the not-so-useful "Dropped frame because Discontinuity." Any ideas for solving this would be helpful as well. Maybe tuning the camera to work better with quickly varying lighting conditions?
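There isn't a dedicated logging utility I'm aware of, but for completeness, the per-buffer attachments are where the capture stack reports drop reasons; the sketch below just reads them inside captureOutput(_:didDrop:from:). In this case they may well be empty, as described, but the ...ReasonInfo attachment sometimes carries more detail than the localized summary:

```swift
import CoreMedia

// Read whatever drop-reason attachments CoreMedia put on a dropped
// sample buffer (call from captureOutput(_:didDrop:from:)).
func logDropReason(for sampleBuffer: CMSampleBuffer) {
    let reason = CMGetAttachment(sampleBuffer,
                                 key: kCMSampleBufferAttachmentKey_DroppedFrameReason,
                                 attachmentModeOut: nil)
    let info = CMGetAttachment(sampleBuffer,
                               key: kCMSampleBufferAttachmentKey_DroppedFrameReasonInfo,
                               attachmentModeOut: nil)
    print("Dropped frame reason: \(String(describing: reason)), info: \(String(describing: info))")
}
```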
1
0
237
3w
AVCam modified for spatial video capturing in WWDC24
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?
4
0
381
Jun ’24
Apple: please unlock "enterprise features" for all visionOS devs!
We are developing apps for visionOS and need the following capabilities for a consumer app:
- access to the main camera, to let users shoot photos and videos
- reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features. However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only: https://developer.apple.com/videos/play/wwdc2024/10139/ I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space. IMHO Apple should decide whether it wants to target consumers in the first place, or whether it wants to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
1
3
319
Jun ’24
builtInTripleCamera is not automatic switching from one camera to another
Hey! I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max. The documentation states the following, but that seems quite different from what is happening in the app: "Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow." So, does it automatically switch like the native camera, or do I need to do something?
[Comparison videos: Custom Camera vs Native Camera]
The code was adapted from Apple's AVCamFilter project. Just download AVCamFilter and update videoDeviceDiscoverySession:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera],
    mediaType: .video,
    position: .unspecified
)
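For what it's worth, in my understanding the virtual triple camera picks its constituent mainly from the current zoom factor, with light level and focus position only allowing fall-backs between adjacent cameras, so at the default zoom factor of 1.0 a custom session mostly stays on the wide camera and can look quite different from the native app. A sketch of the relevant knobs, under the assumption that the iOS 15+ constituent-switching APIs apply here (the helper name is illustrative):

```swift
import AVFoundation

// Inspect and configure how a virtual device (e.g. .builtInTripleCamera)
// chooses between its constituent cameras.
func configureAutoLensSwitching(on device: AVCaptureDevice) throws {
    // Non-virtual devices report .unsupported; don't try to configure them.
    guard device.primaryConstituentDeviceSwitchingBehavior != .unsupported else { return }

    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Let the system pick the constituent camera automatically.
    device.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
        restrictedSwitchingBehaviorConditions: [])

    // Zoom factors at which the virtual device hands over between lenses;
    // useful when choosing an initial videoZoomFactor.
    print("Switch-over zoom factors:", device.virtualDeviceSwitchOverVideoZoomFactors)
}
```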
1
0
233
3w
Why is `isDepthDataDeliverySupported` returning false on an iPad Pro using `builtInDualWideCamera`?
I am trying to use the AVCamFilter Apple sample project discussed in this WWDC session to get depth data using the dual camera. The project has built-in features to get depth data from the dual camera. When the sample project was written, builtInDualWideCamera didn't exist yet, and the project only tries to get builtInDualCamera and builtInWideAngleCamera. When I run the project on my iPad Pro it doesn't show any of the depth-related UI because the device doesn't have a builtInDualCamera device. So I added builtInDualWideCamera to the videoDeviceDiscoverySession, and it seems to get that device properly, but isDepthDataDeliverySupported still returns false. Is there some reason why isDepthDataDeliverySupported is false even though I seem to be using a dual-camera device? I know the device has a builtInLiDARDepthCamera, but I wanted to try out the dual-camera depth data to see how it performs at shorter distances. I wouldn't have expected dual-camera depth data delivery to be made unavailable on the device just because the LiDAR sensor is already available. Using iPadOS 17.5.1, iPad Pro 11-inch 4th generation. The depth feature of this sample app works fine on an iPhone 15 I tested. I also tried an iPhone 15 Pro and it worked even though that device also has a LiDAR sensor, so the issue is presumably not related to the fact that the iPad Pro has a LiDAR sensor.
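One way to narrow this down (a diagnostic sketch, not a fix): isDepthDataDeliverySupported can only be true if the device's formats expose depth formats at all, so dumping supportedDepthDataFormats for the builtInDualWideCamera on that iPad shows whether dual-wide depth is offered there in the first place:

```swift
import AVFoundation

// List which formats of a capture device actually offer depth data.
// If no format has a non-empty supportedDepthDataFormats array, depth
// delivery will be unsupported no matter how the session is wired up.
func logDepthCapableFormats(of device: AVCaptureDevice) {
    for format in device.formats where !format.supportedDepthDataFormats.isEmpty {
        print(format, "->", format.supportedDepthDataFormats)
    }
}
```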
4
0
367
Jun ’24
On Sonoma 14.5, after upgrading CMIO CameraExtension, daemon is not running
I made a CameraExtension and installed it with OSSystemExtensionRequest, and I got the success callback. I then uninstalled the old version of my CameraExtension and installed the new version. The "systemextensionsctl list" command shows "[activated enabled]" for my new version, but no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
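For reference, a minimal version of the activation/replacement flow looks like the sketch below (names illustrative, not a claim about the cause). The didFinishWithResult callback is worth checking, since a .willCompleteAfterReboot result would at least explain why the daemon only appears after a restart:

```swift
import SystemExtensions

// Minimal activation/replacement flow for a system extension.
final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate(bundleID: String) {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: bundleID, queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties)
        -> OSSystemExtensionRequest.ReplacementAction {
        .replace // let the new build replace the old one
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {}

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        // .willCompleteAfterReboot would explain the daemon only starting
        // after a restart.
        print("Activation finished:", result == .willCompleteAfterReboot
              ? "will complete after reboot" : "completed")
    }

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("Activation failed:", error)
    }
}
```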
0
0
350
May ’24