
Post not yet marked as solved
2 Replies
255 Views
Hello! We have a web app that uses WebRTC to get the camera stream, and we display it in a video element. We also have an option to take a photo capture that draws the video element to a canvas and returns the image data. We do something like:

function triggerCapture() {
  // Size an offscreen canvas to the video and draw the current frame
  let canvas = document.createElement('canvas');
  let context = canvas.getContext('2d');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  context.drawImage(video, 0, 0, canvas.width, canvas.height);
  // Read the pixels back, then clear the scratch canvas
  const imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  context.clearRect(0, 0, canvas.width, canvas.height);
  return imageData;
}

On iOS 16 Beta, the imageData always represents a black image. We noticed that on iOS 16 the "GPU Process: DOM Rendering" feature is now enabled by default, and if we disable it, triggerCapture works as expected. However, the issue does not seem to be caused by the DOM rendering feature alone, because enabling it on iOS 15.5 does not reproduce the problem. To reproduce the issue you can use the WebRTC samples ("Use getUserMedia with canvas"): every time you take a snapshot, a black image is displayed. Are you aware of this issue? Thanks!
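For debugging, a quick way to confirm that the returned ImageData really is all black is to scan its RGBA buffer. A minimal sketch (the `isAllBlack` helper is illustrative and not part of the original app):

```javascript
// Returns true when every pixel in an ImageData-like object is black.
// Ignores the alpha channel, since an opaque black frame still has A = 255.
function isAllBlack(imageData) {
  const d = imageData.data; // RGBA bytes, 4 per pixel
  for (let i = 0; i < d.length; i += 4) {
    if (d[i] !== 0 || d[i + 1] !== 0 || d[i + 2] !== 0) {
      return false; // found a non-black pixel
    }
  }
  return true;
}
```

Logging `isAllBlack(triggerCapture())` right after a snapshot makes it easy to compare behavior with the GPU Process feature toggled on and off.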
Posted by Frednic.
Post not yet marked as solved
0 Replies
319 Views
Hello! I need to get the distance from the camera (AVCaptureDevice) to an object without knowing the object's real size, because that's what I'm trying to figure out. Right now the implementation is based on AVDepthData: I find a point that corresponds to the object and look up its depth in the depth map. The problem is that attaching an AVCaptureDepthDataOutput to the capture session causes the dual camera to zoom to 2x (so that the wide focal length matches the telephoto), and I don't want that. Is there another way to achieve this? Thank you!
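The "find a point and read its depth" step above boils down to mapping a normalized point into the depth map's own resolution (which is usually lower than the video's) and reading the sample, keeping in mind that Apple's DisparityFloat pixel formats store 1/meters rather than meters. A minimal sketch of that index math (the helper names and the plain `{ width, height, data }` shape are assumptions for illustration; in the app this would be done in Swift against the CVPixelBuffer):

```javascript
// Maps a point in normalized coordinates (0..1) to the corresponding
// sample in a row-major depth map of shape { width, height, data }.
function depthAtNormalizedPoint(depthMap, nx, ny) {
  const x = Math.min(depthMap.width - 1, Math.floor(nx * depthMap.width));
  const y = Math.min(depthMap.height - 1, Math.floor(ny * depthMap.height));
  return depthMap.data[y * depthMap.width + x];
}

// If the map stores disparity (1/meters, as in Apple's DisparityFloat
// formats) rather than depth, distance is the reciprocal of the sample.
function disparityToMeters(disparity) {
  return 1 / disparity;
}
```

So a disparity sample of 0.5 would correspond to an object roughly 2 meters away.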
Posted by Frednic.