Posts

Post not yet marked as solved
1 Reply
153 Views
I can't figure out how to solve this error: Value of type 'ARFrame' has no member 'viewTrans'. Nothing came up when I tried googling the error. I attached the entire code below; if you scroll down to near the end you will see a comment labeled //ERROR, and right below it is the line of code throwing this error.
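A minimal sketch of the likely fix, assuming the goal was a camera or view transform: ARFrame exposes no viewTrans member in ARKit's public API, and the two methods below are what usually fill that role. The function and variable names here are illustrative, not from the original post.

import ARKit
import UIKit

// Sketch only: ARFrame has no `viewTrans`. Depending on intent, one of
// these two ARKit APIs is usually what was meant.
func transforms(for frame: ARFrame, viewportSize: CGSize) {
    // 4x4 view matrix of the camera for a given interface orientation
    let viewMatrix: simd_float4x4 = frame.camera.viewMatrix(for: .portrait)

    // 2D affine transform mapping normalized image coordinates to the viewport
    let displayTransform: CGAffineTransform =
        frame.displayTransform(for: .portrait, viewportSize: viewportSize)

    print(viewMatrix, displayTransform)
}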
Post marked as solved
1 Reply
206 Views
I have an application that captures an image with a depth map and calibration data and exports it so I can work with it in Python. The depth map and calibration data are converted to Float32 and stored as a JSON file; the image is stored as a JPEG file. The depth map shape is (480, 640) and the image shape is (3024, 4032, 3). My goal is to create a point cloud from this data.

I'm new to working with data provided by Apple's TrueDepth camera and would like some clarity on what preprocessing steps I need to perform before creating the point cloud. Here they are:

1) Since the 640x480 depth map is a scaled version of the 12 MP image, can I scale down the intrinsics as well, i.e. multiply [fx, fy, cx, cy] by the scaling factor 640/4032 = 0.15873?

2) After scaling comes taking care of the distortion: should I use lensDistortionLookupTable to rectify (undistort) both the image and the depth map?

Are the above two steps correct, or am I missing something?
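A minimal sketch of the unprojection step, assuming the two steps above are right: the intrinsics are scaled by 640/4032 and each depth sample is unprojected through a pinhole model. This assumes the map already holds depth in meters (TrueDepth often delivers disparity, which must be converted first) and that distortion has already been handled; the function name and array layout are illustrative.

import simd

// Sketch: scale 12 MP intrinsics down to the 640x480 depth map, then
// unproject each depth sample to a 3D point (pinhole model, rectified data).
func pointCloud(depth: [[Float]],            // shape (480, 640), meters
                fx: Float, fy: Float,        // intrinsics at 4032x3024
                cx: Float, cy: Float) -> [SIMD3<Float>] {
    let scale: Float = 640.0 / 4032.0        // ≈ 0.15873
    let (sfx, sfy, scx, scy) = (fx * scale, fy * scale, cx * scale, cy * scale)

    var points: [SIMD3<Float>] = []
    for v in 0..<depth.count {
        for u in 0..<depth[v].count {
            let z = depth[v][u]
            guard z > 0 else { continue }    // skip invalid samples
            let x = (Float(u) - scx) * z / sfx
            let y = (Float(v) - scy) * z / sfy
            points.append(SIMD3<Float>(x, y, z))
        }
    }
    return points
}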
Post not yet marked as solved
2 Replies
173 Views
Goal: to obtain depth data and calibration data from the TrueDepth camera for a computer vision task.

I am very confused because, for example, Apple says: "To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data." I tried that and got nil. Then, looking through Stack Overflow, I read: "cameraCalibrationData is always nil in photo; you have to get it from photo.depthData. As long as you're requesting depth data, you'll get the calibration data." So I tried print(photo.depthData) to obtain the depth and calibration data, and my output was:

Optional(hdis 640x480 (high/abs) calibration: {intrinsicMatrix: [2735.35 0.00 2017.75 | 0.00 2735.35 1518.51 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00], pixelSize: 0.001 mm, distortionCenter: {2017.75, 1518.51}, ref: {4032x3024}})

But where is the depth data?

Below is my entire code. Note: I'm new to Xcode and used to coding in Python for computer vision tasks, so I apologize in advance for the messy code.

import AVFoundation
import UIKit
import Photos

class ViewController: UIViewController {
    var session: AVCaptureSession?
    let output = AVCapturePhotoOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    // MARK: - Permission check
    private func checkCameraPermissions() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
                guard granted else { return }
                DispatchQueue.main.async { self?.setUpCamera() }
            }
        case .restricted, .denied:
            break
        case .authorized:
            setUpCamera()
        @unknown default:
            break
        }
    }

    // MARK: - Camera setup
    private func setUpCamera() {
        let session = AVCaptureSession()
        if let captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .depthData, position: .unspecified) {
            do {
                let input = try AVCaptureDeviceInput(device: captureDevice)
                session.beginConfiguration()
                session.sessionPreset = .photo
                if session.canAddInput(input) {
                    session.addInput(input)
                }
                if session.canAddOutput(output) {
                    session.addOutput(output)
                }
                session.commitConfiguration()
                // Depth delivery can only be enabled after the output joins the session
                output.isDepthDataDeliveryEnabled = true
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                session.startRunning()
                self.session = session
            } catch {
                print(error)
            }
        }
    }

    // MARK: - UI button
    private let shutterButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        button.layer.cornerRadius = 50
        button.layer.borderWidth = 10
        button.layer.borderColor = UIColor.white.cgColor
        return button
    }()

    // MARK: - Video preview setup
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black
        view.layer.insertSublayer(previewLayer, at: 0)
        view.addSubview(shutterButton)
        checkCameraPermissions()
        shutterButton.addTarget(self, action: #selector(didTapTakePhoto), for: .touchUpInside)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer.frame = view.bounds
        shutterButton.center = CGPoint(x: view.frame.size.width / 2, y: view.frame.size.height - 100)
    }

    // MARK: - Running and stopping the session
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        session?.startRunning()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        session?.stopRunning()
    }

    // MARK: - Taking a photo
    @objc private func didTapTakePhoto() {
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.isDepthDataDeliveryEnabled = true
        photoSettings.isDepthDataFiltered = true
        output.capturePhoto(with: photoSettings, delegate: self)
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        print(photo.depthData as Any)
        let image = UIImage(data: data)
        session?.stopRunning()

        // Show the captured image on top of the UI
        let imageView = UIImageView(image: image)
        imageView.contentMode = .scaleAspectFill
        imageView.frame = view.bounds
        view.addSubview(imageView)

        // Save the photo to the library
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }
            PHPhotoLibrary.shared().performChanges({
                let creationRequest = PHAssetCreationRequest.forAsset()
                creationRequest.addResource(with: .photo, data: data, options: nil)
            }, completionHandler: { _, error in
                if let error = error {
                    print(error)
                }
            })
        }
    }
}
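On the "where is the depth data?" part, a minimal sketch, assuming the capture above succeeds: the printed value is the AVDepthData object itself, whose pixel buffer lives in depthDataMap, and the calibration rides on that same object rather than on AVCapturePhoto. The "hdis" tag in the output means half-float disparity, so converting to Float32 depth is usually wanted. The function name is illustrative.

import AVFoundation

// Sketch: pull the depth map and calibration out of a captured photo.
func extractDepth(from photo: AVCapturePhoto) {
    guard let depthData = photo.depthData else { return }

    // Calibration lives on AVDepthData, not on AVCapturePhoto
    let calibration = depthData.cameraCalibrationData
    print(calibration?.intrinsicMatrix as Any)

    // 'hdis' = 16-bit disparity; convert to 32-bit depth before using it
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthMap: CVPixelBuffer = depth.depthDataMap
    print(CVPixelBufferGetWidth(depthMap), CVPixelBufferGetHeight(depthMap))
}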
Post marked as solved
1 Reply
588 Views
Hi, can anyone help me with how to obtain depth data using the rear-facing camera equipped with LiDAR? Could you please give me some tips or code examples on how to work with/access the LiDAR data? Also, from posts I have previously viewed, you are able to obtain information from the TrueDepth camera API but not from the rear-facing camera. Why is this? Thank you so much!!
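A minimal sketch of the two usual routes, assuming a LiDAR-equipped device: AVFoundation exposes the rear LiDAR as .builtInLiDARDepthCamera (iOS 15.4+), and ARKit can deliver per-frame depth via the .sceneDepth frame semantic. The function names are illustrative.

import ARKit
import AVFoundation

// Route 1 (AVFoundation, iOS 15.4+): select the rear LiDAR depth camera,
// then wire it into a capture session as with the TrueDepth camera.
func lidarCaptureDevice() -> AVCaptureDevice? {
    AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back)
}

// Route 2 (ARKit): request per-frame scene depth from a world-tracking session.
func startSceneDepth(in session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        config.frameSemantics.insert(.sceneDepth)
    }
    session.run(config)
    // Later, each ARFrame exposes frame.sceneDepth?.depthMap (a CVPixelBuffer)
}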