Hi!
I attempted to run a sample project for detecting human pose in photos, which can be found here:
https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting a photo, an endless loading screen is presented, and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
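For reference, a minimal sketch of the failing call path (the cgImage parameter stands in for the image loaded from the selected photo):

import Vision

// Minimal sketch of the failing request.
func detect3DPose(in cgImage: CGImage) throws {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])   // fails here on Apple Vision Pro
    if let observation = request.results?.first {
        // Joint transforms are reported relative to the scene origin.
        let head = try observation.recognizedPoint(.centerHead)
        print("Head transform: \(head.position)")
    }
}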
I am looking for a material that functions the same way OcclusionMaterial does, except that it only partially occludes whatever is behind it. One way I thought of doing this was to change the opacity of the entity that was covered in OcclusionMaterial, but this did not change anything. Please let me know if this is possible.
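For reference, a sketch of the opacity attempt described above (the entity setup is hypothetical):

import RealityKit

// Sketch of the attempt: an occluder entity with OcclusionMaterial.
let occluder = ModelEntity(mesh: .generateBox(size: 0.3),
                           materials: [OcclusionMaterial()])
// Fading the occluder entity has no visible effect on the occlusion:
occluder.components.set(OpacityComponent(opacity: 0.5))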
Using the example code posted here:
https://developer.apple.com/documentation/visionOS/tracking-images-in-3d-space
I can register multiple ReferenceImages with an ImageTrackingProvider, but only one updates at a time; to get real-time updates, I can only have one ImageAnchor in my field of view at a time.
Is it possible to track multiple ImageAnchors at the same time in the same field of view? That is, to have several ImageAnchors tracked and entities updated to the anchors' transforms in the same frame/moment on Apple Vision Pro?
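For context, a condensed sketch of my setup, following the linked sample (the resource group name is assumed):

import ARKit

// Condensed version of the setup from the linked sample.
func runImageTracking() async throws {
    let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "ReferenceImages")
    )
    let session = ARKitSession()
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // Each update carries a single ImageAnchor; with several images in
        // view, only one anchor appears to receive live updates at a time.
        print(update.anchor.id, update.anchor.isTracked)
    }
}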
Topic:
Spatial Computing
SubTopic:
ARKit
With iOS 26 unveiled, has anyone noticed or found any changes related to RoomPlan?
I can't find anything myself, which is disappointing.
Has anyone found any improvements or changes?
Hello, I am trying to develop an app that broadcasts what the user sees on Apple Vision Pro. I am a graduate student at a university, and I have two questions:
If I want to use passthrough in screen capture on visionOS, do I have to join the Apple Developer Enterprise Program to get the Enterprise API?
Can I purchase the Apple Developer Enterprise Program (Enterprise API) with my university account?
Have any of you been able to do this?
Thank you
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:
Error Domain=com.apple.arkit.error Code=102 "Required sensor failed."
NSLocalizedFailureReason="A sensor failed to deliver the required input."
NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."
The underlying error seems to point to the CoreMotion framework:
Domain=CMErrorDomain Code=102 "(null)"
Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. The thing is, that switch is already ON when I experience the issue. For context, the ARWorldTrackingConfiguration.worldAlignment is set to .gravity.
I also noticed that this issue happens far more often on the iPhone 16e than on any other device.
Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
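One possible mitigation sketch, assuming that restarting the session is acceptable for the user experience (the error domain and code are taken from the log above):

import ARKit

// Hedged sketch: re-run the session when the sensor failure (code 102) is reported.
func session(_ session: ARSession, didFailWithError error: Error) {
    let nsError = error as NSError
    guard nsError.domain == ARError.errorDomain,
          nsError.code == ARError.Code.sensorFailed.rawValue else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravity   // matches the setup above
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}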
I have a problem with the wall plane detection using visionOS/ARKit:
I am using an ARKitSession with a PlaneDetectionProvider to detect wall planes in a visionOS space. I recorded the position and rotation of the first detected plane, but found that the rotation value depends on the direction the user is facing when the space opens, so it deviates between runs. That is to say, even if the plane is located on the same wall, the rotation quaternion will be different.
I would like to obtain the real direction of the wall no matter from which direction the user starts the scan, so that the virtual content can be accurately aligned with the wall.
I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation.
In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please provide a solution.
Thank you!
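For discussion, a hedged sketch of deriving the wall's facing direction from the anchor transform alone. It assumes the plane anchor's local Y axis is the plane normal, and the helper name is hypothetical; note the result is still expressed in session coordinates, which are set by the user's starting pose:

import ARKit
import simd

// Hypothetical helper: derive a yaw-only orientation for a wall plane,
// independent of the anchor's in-plane rotation.
func wallOrientation(for anchor: PlaneAnchor) -> simd_quatf {
    let m = anchor.originFromAnchorTransform
    // Assumption: the anchor's local Y axis is the plane normal.
    let normal = simd_normalize(simd_make_float3(m.columns.1))
    // Project the normal onto the horizontal plane and build a yaw rotation.
    let yaw = atan2(normal.x, normal.z)
    return simd_quatf(angle: yaw, axis: SIMD3<Float>(0, 1, 0))
}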
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor])
never yields an ARBodyAnchor with valid joint transforms.
All joints return nil when calling:
body.skeleton.modelTransform(for: jointName)
resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:
let config = ARBodyTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
config.isAutoFocusEnabled = true
config.environmentTexturing = .none
session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
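For completeness, a condensed version of the check used to count valid joints (the expected and observed counts are those described above):

import ARKit

// Count joints that return a valid model transform on each update.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let body as ARBodyAnchor in anchors {
        let validJoints = ARSkeletonDefinition.defaultBody3D.jointNames.compactMap {
            body.skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: $0))
        }
        // Expected ~89 on earlier iOS versions; 0 per frame on iOS 26.0/26.1.
        print("Valid joints: \(validJoints.count)")
    }
}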
I am using ARKit's AccessoryTrackingProvider to get the transform of the PSVR2 controller via originFromAnchorTransform on the AccessoryAnchor. I am also trying to attach an AnchorEntity to the controller using RealityKit.
However, none of the three options for Accessory.LocationName, which are used to define the AnchorEntity target, seem to match the position on the controller reported by ARKit.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform from the AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other Accessory.LocationName options, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation.
I am wondering why there is no Accessory.LocationName option that actually matches the transform captured by ARKit.
Hardware Specifications
Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification:
Point Cloud Density / Resolution: the effective resolution of the depth map.
Sampling Frequency: the sensor's refresh rate.
Accuracy Metrics: official tolerance levels for depth accuracy relative to distance (specifically within the 0.5 m – 2 m range).
Data Acquisition Methodology
For a scientific thesis requiring high data integrity: does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) to access raw depth data? I need to confirm whether third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
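For the raw-data question, a minimal sketch of reading the LiDAR depth stream directly through ARKit; .sceneDepth avoids the temporal filtering that .smoothedSceneDepth applies:

import ARKit
import CoreVideo

// Request the raw (unsmoothed) LiDAR depth stream.
func runRawDepthSession(_ session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics = [.sceneDepth]
    session.run(configuration)
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }
    let depthMap = sceneDepth.depthMap   // CVPixelBuffer of Float32 depth in meters
    print(CVPixelBufferGetWidth(depthMap), CVPixelBufferGetHeight(depthMap))
}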
ARSession provides a video stream from the wide-angle camera. If ARSession used the ultra-wide camera at the same time, it could also provide that camera's stream; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run alongside it. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you if possible.
Steps to reproduce: run an AVCaptureSession and then run an ARSession. The AVCaptureSession stops.
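A condensed repro sketch (assuming a back ultra-wide camera is available on the device):

import ARKit
import AVFoundation

// The capture session is interrupted as soon as the AR session starts.
func reproduce() {
    let captureSession = AVCaptureSession()
    if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
       let input = try? AVCaptureDeviceInput(device: device),
       captureSession.canAddInput(input) {
        captureSession.addInput(input)
        captureSession.startRunning()
    }

    let arSession = ARSession()
    arSession.run(ARWorldTrackingConfiguration())  // captureSession stops here
}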
Hi everyone! I am working on an AR app and wanted to implement object occlusion because it pretty much removes drift from the object. This works great with the RealityKit sample, but I am unable to replicate the behaviour with SceneKit, because SceneKit does not offer object occlusion. Can we say SceneKit is getting deprecated, and that we should rewrite the app in RealityKit (which is obviously a big task)?
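For reference, a sketch of the occlusion setup on the RealityKit side, using scene-mesh reconstruction on LiDAR devices:

import ARKit
import RealityKit

// Scene-understanding occlusion in ARView (LiDAR devices only).
func makeOcclusionARView() -> ARView {
    let arView = ARView(frame: .zero)
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }
    arView.session.run(configuration)
    return arView
}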
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and working with these axes is a mess.
I’ve been having some issues removing anchors. I can add anchors with no issue; they will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider.
worldTracking.removeAnchor(forID: uuid)
// Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`
This works when there is more than one anchor in the scene. When I’m down to one remaining anchor, I can remove it, and it seems to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is a single remaining anchor.
do {
    // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}
I posted a video on my website if you want to see it happening.
https://stepinto.vision/labs/lab-051-issues-with-world-tracking/
Here is the full code. Can you see if I’m doing something wrong? Is this a bug?
struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)
        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            continue // a `return` here would silently stop processing all future updates
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}