Post not yet marked as solved
Hello folks!
How can I get a real-world distance between the device (iPad Pro, 5th gen) and an object measured with the LiDAR?
Let's say I have a reticle in the middle of my CameraView and want to measure precisely from my position to the point I'm aiming at, almost like Apple's Measure app.
sceneDepth doesn't give me anything usable.
I also looked into the sample code "Capturing Depth Using the LiDAR Camera".
Any ideas how to do that? A push in the right direction would also be very helpful.
Thanks in advance!
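Not an authoritative answer, but one common approach on LiDAR devices is to raycast from the screen-center reticle (e.g. ARView.raycast(from:allowing:alignment:) against the scene mesh) and take the Euclidean distance between the camera's world position and the hit's world position. Both positions come from the translation (last column) of a 4x4 transform: frame.camera.transform and raycastResult.worldTransform. The helper below is a hypothetical sketch of just the math, kept framework-free:

```swift
// Pure math behind the measurement: the real-world distance is the Euclidean
// distance between the camera's world position and the raycast hit's world
// position, both expressed as SIMD3<Float> translation vectors.
func measuredDistance(cameraPosition: SIMD3<Float>,
                      hitPosition: SIMD3<Float>) -> Float {
    let d = hitPosition - cameraPosition
    return (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot()
}

// Example: a hit 1.5 m straight ahead of the camera.
let meters = measuredDistance(cameraPosition: SIMD3<Float>(0, 0, 0),
                              hitPosition: SIMD3<Float>(0, 0, -1.5))
print(meters)  // 1.5
```

In an ARKit session you would feed in `SIMD3<Float>` values extracted from `frame.camera.transform.columns.3` and `raycastResult.worldTransform.columns.3` instead of the literal vectors above.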
Post not yet marked as solved
How can I make sure my app on the iOS App Store shows compatibility only for devices that support AVCaptureMultiCamSession? I need to add a key under "Required Device Capabilities" (UIRequiredDeviceCapabilities) in the Info.plist file, but which key? I couldn't find a matching key in the documentation:
https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/iPhoneOSKeys.html#//apple_ref/doc/uid/TP40009252-SW3
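For what it's worth, the linked reference does not appear to document any UIRequiredDeviceCapabilities value for multi-cam support, so restricting App Store availability this way may not be possible. A sketch of the key's structure is below; the "arkit" entry is a documented capability used only to illustrate the format, not a multi-cam filter:

```xml
<!-- Info.plist: UIRequiredDeviceCapabilities limits which devices can
     install the app, but the documented values do not include a
     multi-cam capability. -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>
```

As a fallback, you can still gate the feature at runtime with `AVCaptureMultiCamSession.isMultiCamSupported` and degrade gracefully on unsupported devices.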
Post not yet marked as solved
So, we use ARFaceTrackingConfiguration and ARKit for a magic-mirror-like experience in our apps, augmenting users' faces with digital content.
On the iPad Pro (5th gen), customers are complaining that the camera image is too wide; I'm assuming that is because of the new wide-angle camera needed for Apple's Center Stage FaceTime calls.
I have looked through Tracking and Visualizing Faces and the WWDC 2021 videos, but I couldn't find any API that would let us disable the wide-angle behavior on the new iPads programmatically.
Post not yet marked as solved
Currently I'm using UIImagePickerController to allow our users to take photos within the app like so:
// Requires <MobileCoreServices/MobileCoreServices.h> for kUTTypeImage.
UIImagePickerController *picker = [UIImagePickerController new];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.mediaTypes = @[(NSString *)kUTTypeImage];
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];
And I use the delegate method to get the image out and do what is needed:
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary<NSString *,id> *)info {
    // May be nil in the rare failure case described below.
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    // Do stuff with the image.
}
This seems to work fine for 99.9% of our users, but occasionally we get an odd info dictionary with no image in it.
When I print the info dictionary, it looks like this every time:
{
    UIImagePickerControllerMediaMetadata = {
        "{MakerApple}" = {
            25 = 0;
        };
    };
    UIImagePickerControllerMediaType = "public.image";
}
As you can see there is no UIImagePickerControllerEditedImage or UIImagePickerControllerOriginalImage in that dictionary.
Anyone have any ideas on what this is, and what I might be able to do to 'fix' it?
Post not yet marked as solved
Hi everyone,
I’m trying to use builtInDualWideCamera. However, I’m seeing the “macro camera issue” when shooting close objects: the lens switches automatically. I see users can work around it with the “Macro Control” setting for the system camera. Is there a similar API that developers can use to disable the automatic lens switching? Thanks!
Post not yet marked as solved
Issue
After updating to macOS 12.4, my AVCapture session is using 70-90% CPU.
The code is just a simple capture session in a Mac Catalyst app using the Mac's webcam. The session calls an empty func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection), and nothing more happens in the test app I tried. There is also no rendering of the stream.
My test code ran with 10-15% CPU usage a week ago, before I updated to macOS 12.4 today; with rendering into a Metal view it was maybe 20-25%.
To reproduce:
Take the AVCam project: https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app
Set the checkbox for Mac Catalyst support under General.
Run on the Mac.
The original sample code from Apple is also running with the same high CPU usage.
But even my own AVCaptureSession code ran a week ago at about 15%, which might already be too much.
I have tested this on a 2019 Intel MacBook Pro and also on an iMac running macOS 12.4, with the same high CPU usage in all the test apps.
Updating to macOS 12.5 beta 3 did not help either. I know it ran better on macOS versions before 12.4, and almost 100% CPU (out of 800% available overall) is too much for just camera capture.
Profiling in Instruments
Instruments shows a lot of VN (Vision) calls with object detection in the heaviest stack trace. Is this the camera autofocusing? Do I need to set more options on the capture device? I don't call anything from the Vision framework; is this happening automatically?
It feels like a lot of work for just the Mac webcam.
What to do?
Is this a problem with macOS 12.4? Do you have a better-running capture session on 12.4, and what is needed to achieve that? Could this be a Catalyst problem? Is this a bug that needs a bug report with Apple? I can't really go back to a previous version, and it would be neat if the code also worked on macOS 12.4, haha.
Post not yet marked as solved
Please give some example code for scanning text using a button.
iOS 16 developer issue: the camera shows as a third-party replacement, not as original equipment. The phone is an iPhone 13 Pro Max, bought brand new and never repaired. The issue started as a black screen in photo mode or QR scanning using the back camera, while all other modes had a picture. After rebooting, the camera now works in all modes but shows as an unsupported replacement.
Post not yet marked as solved
_streamSinkIn = [[CMIOExtensionStream alloc] initWithLocalizedName:localizedName
                                                          streamID:streamInputID
                                                         direction:CMIOExtensionStreamDirectionSink
                                                         clockType:CMIOExtensionStreamClockTypeHostTime
                                                            source:self];
I'm attempting to publish a CMIOExtensionStream with the 'sink' direction (i.e. print-to-tape) as alluded to in Brad Ford's presentation. Any attempt to create such a stream yields an invalid-argument exception, and if you examine the header files, all the init methods are described as returning stream instances that source data (i.e. camera publishers).
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Invalid argument'
Post not yet marked as solved
While trying to re-create the CIFilterCam demo shown in the WWDC session, I hit a roadblock when trying to access a hardware camera from inside my extension.
Can I simply use an AVCaptureSession + AVCaptureDeviceInput + AVCaptureVideoDataOutput to get frames from an actual hardware camera and pass them to the extension's stream? If yes, when should I ask for camera access permissions?
It seems the extension code runs as soon as I install the extension, but I never get prompted for camera permission. Do I need to set up the capture session lazily? What's the best practice for this use case?
Post not yet marked as solved
It’d be nice if the extension could inherit the permissions the user granted to the main app, in my case screen capture.
Alternatively, is there a way to send data to the extension from the app, or vice versa?
Especially image or video data.
thanks!
laurent
Post not yet marked as solved
Apologies if this has been asked. I was reviewing the transcript of the iOS 16 camera improvements as they relate to depth and depth maps. It’s my understanding that models with LiDAR scanners play a big role in the depth maps captured when taking an image. I know this is an improvement over models that rely on TrueDepth cameras, but how does this play into the new Lock Screen setup?
Is this effect created solely through software, or does the presence of LiDAR and depth maps influence the results when creating the depth effect that pulls subjects to the front while the background stays behind?
Thanks so much in advance!
Post not yet marked as solved
I’m using Swift Playgrounds on my iPad (9th gen). I’m trying to embed a website that uses the camera inside the Swift app. Can someone help me out by pointing to what I should use to access the live video?
Post not yet marked as solved
At around the 5-minute mark of "Discover advancements in iOS camera capture: Depth, focus, and multitasking", it is stated that TrueDepth delivers relative depth. This appears to contradict the official documentation: Capturing Photos with Depth states explicitly that the TrueDepth camera measures depth directly with absolute accuracy. Why the change?
Post not yet marked as solved
We are using AVCaptureMetadataOutput to detect face and body rects. The face rect is shown in green and the body rect in blue. We would like to get a body rect that encompasses the whole human body, as shown in the red box. This body rect is absolute in the sense that it includes hands, feet, arms, etc.
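I'm not aware of a metadata object type that guarantees a full-body box; AVCaptureMetadataOutput's human-body metadata tends to be tighter than the red box described. Two hedged workarounds: try Vision's VNDetectHumanRectanglesRequest (with upperBodyOnly set to false on recent OS releases), or simply union the face and body rects you already receive into one larger box. The helper below is a hypothetical sketch using a plain struct so it stays framework-free; in real code you would use CGRect.union(_:) on the metadata objects' bounds:

```swift
// Normalized rect (origin + size in [0, 1] metadata-output coordinates).
struct NormRect {
    var x: Double, y: Double, w: Double, h: Double
}

// Smallest rect containing both inputs: unioning the face rect and the
// body rect yields a larger, more inclusive bounding box.
func union(_ a: NormRect, _ b: NormRect) -> NormRect {
    let minX = min(a.x, b.x)
    let minY = min(a.y, b.y)
    let maxX = max(a.x + a.w, b.x + b.w)
    let maxY = max(a.y + a.h, b.y + b.h)
    return NormRect(x: minX, y: minY, w: maxX - minX, h: maxY - minY)
}

let face = NormRect(x: 0.5, y: 0.125, w: 0.25, h: 0.25)
let body = NormRect(x: 0.25, y: 0.25, w: 0.5, h: 0.5)
let full = union(face, body)
print(full.w, full.h)  // 0.5 0.625
```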
Post not yet marked as solved
We are a VoIP app, so we automatically get enrolled in mic modes and video effects like Portrait video. We would like to opt out of these features. Is there any way to do it?
In my understanding, there is a way for non-VoIP apps to opt in, but no way for VoIP apps to opt out.
Post not yet marked as solved
Continuity Camera is a way to stream raw video and metadata from an iPhone to a Mac. Is it possible for an iPhone local-recording app to use Continuity Camera to stream a preview from the iPhone to a Mac?
Can Continuity Camera be made available on iPad, so that one can stream video/metadata to the iPad screen (the use case being a need to use a better camera when the user does not have a MacBook)?
Post not yet marked as solved
I installed the iOS 16 beta to test the new features, and everything works well except the camera: when I access it, the screen is completely black. I initially upgraded from iOS 15; after restarting and turning the phone off and on, I tried to restore the iPhone in different ways (from the Mac and from the iPhone itself) without solving the problem.
I also restored to iOS 15 to rule out a hardware problem, and there the camera worked correctly; then I upgraded back to iOS 16 and it stopped working again.
The camera does not work in any application: Camera, iMessage, Halide, Instagram, WhatsApp, ... The controls and buttons of the Camera app do work, but it doesn't take photos or show anything.
Post not yet marked as solved
I’m getting a temperature warning in Halide after installing the iOS 16 beta, and I have to admit the phone is running warmer than usual. Am I the only one experiencing warmer-than-usual temperatures after installing the iOS 16 beta?