I'm capturing a room via the RoomPlan API and would like to access the depth map (sceneDepth) or smoothed depth map (smoothedSceneDepth) from the ARSession I provide to RoomCaptureSession.
But both depth maps are empty when handling the delegate callbacks, and I have not found a solution yet. Is it even possible? I have not found any documentation of what RoomCaptureSession overrides in the ARSession when I provide my own instance.
Here is an example code snippet of what I'm trying to do (wrapped in a minimal class so it compiles):
import ARKit
import RoomPlan

final class RoomScanner: NSObject, ARSessionDelegate, RoomCaptureSessionDelegate {

    // Custom ARSession handed to RoomCaptureSession
    private let arSession = ARSession()
    private lazy var roomPlanCaptureSession = RoomCaptureSession(arSession: arSession)

    func startScan() {
        let arConfig = ARWorldTrackingConfiguration()

        // Create the frame semantics for the ARConfig used by the ARSession
        var semantics: ARWorldTrackingConfiguration.FrameSemantics = []
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            semantics.insert(.sceneDepth)
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
            semantics.insert(.smoothedSceneDepth)
        }
        arConfig.frameSemantics = semantics

        // Set delegates
        roomPlanCaptureSession.delegate = self
        arSession.delegate = self

        // Run the ARSession only if the device supports scene depth
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            arSession.run(arConfig)
        } else {
            print(".sceneDepth is unsupported.")
        }

        // Run the RoomPlan scan configuration
        let captureConfig = RoomCaptureSession.Configuration()
        roomPlanCaptureSession.run(configuration: captureConfig)
    }

    // Trying to get sceneDepth from each frame
    public func session(_ session: ARSession, didUpdate frame: ARFrame) {
        print("session delegate capture: sceneDepth: \(String(describing: frame.sceneDepth))")
        // prints: session delegate capture: sceneDepth: nil
    }
}
Also, in this video from 2023 it is said that I can pass a custom ARSession to RoomPlan:
Explore enhancements to RoomPlan - Video
Quote at 3:00: "Here is the init and stop function in previous RoomPlan. And here is how you pass over a custom ARSession to init function. Any custom ARSession with ARWorldTrackingConfiguration will be honored inside RoomCaptureSession."
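For reference, here is a minimal diagnostic sketch (standard ARKit API; it replaces the didUpdate delegate above) that logs which configuration and frame semantics the session is actually running once RoomCaptureSession has started:

public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Log the configuration RoomCaptureSession is actually running.
    if let config = session.configuration {
        print("Active configuration:", type(of: config))
        print("Has .sceneDepth:", config.frameSemantics.contains(.sceneDepth))
        print("Has .smoothedSceneDepth:", config.frameSemantics.contains(.smoothedSceneDepth))
    }
    print("sceneDepth:", frame.sceneDepth as Any)
}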
Anyway, I welcome any input. Maybe I'm doing something wrong. :)
My development team admin requested the Enterprise API for camera access on the Vision Pro. We got it granted, received a license file, and got instructions for integrating it with next steps.
We did the following, yet even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera, I am unable to receive camera frames.
I added the capabilities, created a new provisioning profile with this access, added the entitlements to the Info.plist and the .entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but it is still not showing camera access for some reason.
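For reference, this is the kind of minimal runtime check (a sketch; the names follow the main-camera sample code) that could show whether the authorization and license are actually honored before running CameraFrameProvider:

import ARKit

// Sketch: check main-camera authorization, then try to run the provider.
func checkMainCameraAccess() async {
    let session = ARKitSession()

    // Ask visionOS for the camera-access authorization status.
    let status = await session.requestAuthorization(for: [.cameraAccess])
    print("Camera access authorization:", status)

    guard status[.cameraAccess] == .allowed else { return }

    let provider = CameraFrameProvider()
    do {
        // If the license or entitlement is not honored, run is expected to fail here.
        try await session.run([provider])
        print("CameraFrameProvider is running")
    } catch {
        print("ARKitSession.run failed:", error)
    }
}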
"Main Camera Access" shows up in our Signing & Capabilities tab, and we also added the NSMainCameraDescription in the Info.plist and allow access while opening the app. None of this works. Not on my app, and not on the sample app that I just downloaded and tried to run on the Vision Pro after replacing the dummy license file.
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and the joint axis conventions make it a mess.
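For anyone investigating the same thing, a small sketch (visionOS hand tracking, standard ARKit names) that prints the world-space basis axes of a single joint can make the per-joint orientation convention visible:

import ARKit
import simd

// Sketch: print the world-space x/y/z axes of the index fingertip joint.
func logJointAxes(for handAnchor: HandAnchor) {
    guard let joint = handAnchor.handSkeleton?.joint(.indexFingerTip) else { return }

    // Joint transforms are relative to the hand anchor, so compose them.
    let world = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform

    // The rotation columns are the joint frame's axes expressed in world space.
    let xAxis = simd_make_float3(world.columns.0)
    let yAxis = simd_make_float3(world.columns.1)
    let zAxis = simd_make_float3(world.columns.2)
    print("indexFingerTip axes x: \(xAxis) y: \(yAxis) z: \(zAxis)")
}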
Problem Description:
I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the Unity UI ScrollView component, I found that Mask / RectMask2D does not function in the Shared Space.
Scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly.
The same UI works correctly on other platforms such as the Unity Editor, iOS, and macOS; the issue only occurs in the Shared Space on Vision Pro.
Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Scrolled content is not clipped by the mask; the masked area is entirely ineffective.
Expected behavior:
The content of ScrollView should be properly clipped by Mask / RectMask2D and should not render outside the mask boundary.
Actual results:
In the shared space of Vision Pro, the mask is ineffective, causing scrolling content to extend beyond the designated area and resulting in severe UI distortion.
Environmental Information:
Device: Apple Vision Pro
Mode: Shared Space
Unity Version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial Version: 2.0.4
Impact
This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which impacts the UI experience and interaction effects in practical applications.
Expected result: when running a Unity app in the Shared Space on visionOS, the ScrollView's Mask / RectMask2D functions correctly.
Hi there,
I received an enterprise license file to enable the enhanced object tracking configuration for the Vision Pro. My account is part of the team that was granted this capability by Apple. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, like Main Camera access on the Vision Pro, are listed, but not Object Tracking. I am using Xcode 26.1 and visionOS 26.1. What am I missing here?
Thanks in advance,
Matthias
I'd like to have a bit of control over the transparency of the VideoMaterial. Is there any way to prepare a ShaderGraph unlit shader and use it with the VideoMaterial?
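Not a ShaderGraph answer, but one possible workaround sketch (assuming RealityKit on visionOS, where OpacityComponent is available) is to fade the entity that carries the VideoMaterial instead of the material itself:

import RealityKit
import AVFoundation

// Sketch: make a video plane semi-transparent via OpacityComponent.
func makeFadedVideoPlane(player: AVPlayer) -> ModelEntity {
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])

    // 0 is fully transparent, 1 fully opaque; this fades the whole entity.
    plane.components.set(OpacityComponent(opacity: 0.5))
    return plane
}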
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor])
never yields an ARBodyAnchor with valid joint transforms.
All joints return nil when calling:
body.skeleton.modelTransform(for: jointName)
resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:
let session = ARSession()
let config = ARBodyTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
config.isAutoFocusEnabled = true
config.environmentTexturing = .none
session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
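For completeness, this is the kind of minimal diagnostic delegate (standard ARKit names; a sketch, not a fix) that counts valid joints per ARBodyAnchor, which is how I arrive at 0 valid joints per frame:

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let body as ARBodyAnchor in anchors {
        let skeleton = body.skeleton
        // Count joints that return a non-nil model transform.
        let validJoints = skeleton.definition.jointNames.filter { name in
            skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: name)) != nil
        }
        print("ARBodyAnchor: \(validJoints.count)/\(skeleton.definition.jointNames.count) valid joints")
    }
}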
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I also am trying to use AnchorEntity on the controller using RealityKit
However, none of the three options for Accessory.LocationName, which is used to define the AnchorEntity target, seems to match the position on the controller that ARKit reports.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform for AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other options of Accessory.LocationName, using .aim is located at the tip of the controller and .grip is the same position as .gripSurface but with a different orientation.
I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
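To quantify the mismatch, I log the positional offset between the two; a small sketch (generic simd math; accessoryTransform is assumed to be the originFromAnchorTransform from the AccessoryAnchor, anchorEntity the AnchorEntity targeting .gripSurface):

import RealityKit
import simd

// Sketch: measure the offset between the ARKit accessory transform and
// the RealityKit anchor entity that targets the same controller location.
func logOffset(accessoryTransform: simd_float4x4, anchorEntity: Entity) {
    let arkitPosition = simd_make_float3(accessoryTransform.columns.3)
    let realityKitPosition = anchorEntity.position(relativeTo: nil)
    let offset = arkitPosition - realityKitPosition
    print("ARKit vs RealityKit offset: \(offset), distance: \(simd_length(offset)) m")
}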
Error:
RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple Documentation Explanation:
An error that indicates when the scene size grows past the framework’s limitations.
Issue:
This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans have been done. It shows up even if the room is small. It occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (with a world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug or solve it.
Does anyone have any idea how to approach this issue?
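For context, this is roughly how I catch the error in the delegate (a sketch using the standard RoomCaptureSessionDelegate callback), which at least confirms which failure is being reported:

func captureSession(_ session: RoomCaptureSession,
                    didEndWith data: CapturedRoomData,
                    error: Error?) {
    guard let captureError = error as? RoomCaptureSession.CaptureError else { return }
    switch captureError {
    case .exceedSceneSizeLimit:
        // Scene grew past RoomPlan's limit; a fresh session seems to be required.
        print("RoomPlan error: exceedSceneSizeLimit")
    default:
        print("RoomPlan capture ended with error: \(captureError)")
    }
}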
The samples shown in the volumetric window work great, but moving to an immersive experience, the pen's physical buttons don't work when you're focused on an entity with a collision component.
https://developer.apple.com/documentation/arkit/arpointcloud
https://developer.apple.com/documentation/arkit/arframe/rawfeaturepoints
The main intention of the point cloud (the collection of points/features) is as a debug visualization of what the underlying tracking algorithm processes; it is not designed for additional algorithms on top of it. Nevertheless, we are utilizing the information contained in the points/features collected by ARKit.
Currently, the range of rawFeaturePoints is limited to about 10 meters from the device.
We see a great opportunity if that range were unlocked: global localization would become more robust and accurate.
Video reference: ARPointCloud - Apple ARKit - FindSurface (YouTube: SIdQRiLj2jY)
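As a reference point, a minimal sketch (standard ARKit names) that measures the effective range by reporting the farthest rawFeaturePoints entry from the camera each frame:

import ARKit
import simd

// Sketch: report how far the farthest raw feature point is from the camera.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let points = frame.rawFeaturePoints?.points, !points.isEmpty else { return }
    let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
    let maxDistance = points.map { simd_distance($0, cameraPosition) }.max() ?? 0
    print("Feature points: \(points.count), farthest: \(maxDistance) m")
}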
Hardware specifications: Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification:
Point Cloud Density / Resolution: The effective resolution of the depth map.
Sampling Frequency: The sensor's refresh rate.
Accuracy Metrics: Official tolerance levels regarding depth accuracy relative to distance (specifically within 0.5m – 2m range).
Data acquisition methodology: For a scientific thesis requiring high data integrity, does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) for accessing raw depth data? I need to confirm whether third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
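Regarding the methodology question, this is the kind of minimal sketch (standard ARKit calls) I would use to read the unsmoothed scene depth directly, including the delivered depth-map resolution and the per-pixel confidence map, without any third-party post-processing:

import ARKit
import CoreVideo

// Sketch: log the raw (unsmoothed) scene depth resolution and confidence map.
// Requires an ARWorldTrackingConfiguration with frameSemantics = [.sceneDepth].
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }

    let depthMap = sceneDepth.depthMap
    print("Depth map: \(CVPixelBufferGetWidth(depthMap)) x \(CVPixelBufferGetHeight(depthMap))")

    if let confidenceMap = sceneDepth.confidenceMap {
        print("Confidence map: \(CVPixelBufferGetWidth(confidenceMap)) x \(CVPixelBufferGetHeight(confidenceMap))")
    }
}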
ARSession provides a video stream from the wide-angle camera. If ARSession used the ultra-wide camera at the same time, it could provide a video stream from that camera as well; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you on this if possible.
Steps to reproduce: run an AVCaptureSession and then run an ARSession. The AVCaptureSession stops.
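A minimal reproduction sketch for those steps (standard AVFoundation and ARKit calls; the interruption shows up via the session notification):

import AVFoundation
import ARKit

// Sketch: start an ultra-wide AVCaptureSession, then run an ARSession and
// observe the capture session being interrupted.
let captureSession = AVCaptureSession()
let arSession = ARSession()

func reproduce() {
    guard let device = AVCaptureDevice.default(.builtInUltraWideCamera,
                                               for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          captureSession.canAddInput(input) else { return }
    captureSession.addInput(input)

    _ = NotificationCenter.default.addObserver(forName: .AVCaptureSessionWasInterrupted,
                                               object: captureSession,
                                               queue: .main) { note in
        print("AVCaptureSession interrupted:", note.userInfo ?? [:])
    }

    // In production, call startRunning() off the main thread.
    captureSession.startRunning()

    // Running ARKit takes over the camera pipeline and interrupts the capture session.
    arSession.run(ARWorldTrackingConfiguration())
}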
Dear Apple Team,
I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation.
My current understanding (for context):
I understand Apple’s LiDAR uses dToF with SPAD detectors: A VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation.
My specific questions:
How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. iPhone 12 Pro?
Are the dots static or do they shift/move over time?
How many depth measurement points does the system deliver internally (after processing)?
What is the ranging accuracy (cm-level precision) of each measurement point?
Experimental background: Using an IR night vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications?
Photos of my measurements are available if helpful.
Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose.
These specifications would be essential for my research paper. Thank you very much in advance!
Best regards,
Max!
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: “LiDAR Sensor Technology in Smartphones”
I have a ModelEntity with a GroundingShadowComponent:
// Walk the entity hierarchy and enable shadow casting on every child
// (enumerateHierarchy is a helper extension that visits each descendant).
entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}
When I set the entity on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the detected plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable on visionOS. So how can I cast a shadow onto OcclusionMaterial on visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that I cannot see objects beneath the table through it?