Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post

Replies

Boosts

Views

Activity

Convert entity to image
Developing for Vision Pro with Xcode: I want to display an image of a model in a window. The model can change its materials, so it is not a single fixed image; it keeps changing. How can I convert this model into an image?
1
0
65
Apr ’25
[VisionPro] Placing virtual entities around my arm
Hi everyone, I'm developing a mixed-reality Vision Pro app where I'd like to anchor virtual objects (such as UI elements) around the user's arm. However, I've noticed that Vision Pro seems to mask out the area where the user's real arm is, hiding virtual content in that region so that you see your real arm. Is there a way to render virtual elements on the user's arm, so that it looks like the object is placed directly on the arm despite the real-world passthrough? I was hoping there might be a way to adjust the depth or behavior of this masked-out region. Any insights or workarounds would be greatly appreciated! Thanks :) (One possible workaround is sketched after this entry.)
1
0
81
Mar ’25
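A minimal sketch of one workaround for the post above, assuming the goal is simply to draw virtual content over the arm region: SwiftUI's upperLimbVisibility(_:) scene modifier hides the system's upper-limb passthrough cutout, so entities anchored to the arm become visible. Per-region depth control of the mask is not exposed as far as I know, so the tradeoff is losing the real-arm passthrough entirely while the space is open. The app and view names are placeholders.

import SwiftUI
import RealityKit

@main
struct ArmUIApp: App {  // hypothetical app name
    var body: some Scene {
        ImmersiveSpace(id: "arm-ui") {
            ArmImmersiveView()
        }
        // Hide the passthrough arms so entities anchored to the arm render on top.
        .upperLimbVisibility(.hidden)
    }
}

struct ArmImmersiveView: View {
    var body: some View {
        RealityView { _ in
            // Add arm-anchored entities here, e.g. via an AnchorEntity targeting the hand.
        }
    }
}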
Using Vision Pro to detect distance to real-life objects
Is it possible to detect the distance from the Vision Pro to real-life objects and people? I tried using scene.raycast to perform a raycast forward from the center of the viewport, but it doesn't seem to react to real-life objects, only entities. I see it mentioned here: https://developer.apple.com/forums/thread/776807?answerId=829576022#829576022 that a raycast combined with scene reconstruction should let me measure that distance, as long as the object is not moving. How could I accomplish that? (A sketch of that approach follows this entry.)
2
0
157
Apr ’25
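A sketch of the scene-reconstruction approach mentioned in the post above, under the assumption that the app runs an immersive space on visionOS: SceneReconstructionProvider streams MeshAnchors for real-world surfaces, and giving each one a collision shape makes real geometry visible to scene.raycast just like any entity.

import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
var meshEntities: [UUID: Entity] = [:]

// Call from a task attached to the immersive view; `root` is the RealityView's content root.
func runSceneReconstruction(root: Entity) async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added, .updated:
            // Build (or rebuild) a static collision shape from the reconstructed mesh.
            guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor) else { continue }
            let entity = meshEntities[anchor.id] ?? {
                let e = Entity()
                meshEntities[anchor.id] = e
                root.addChild(e)
                return e
            }()
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
        case .removed:
            meshEntities[anchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: anchor.id)
        }
    }
}

Once these collision entities exist, a scene.raycast(origin:direction:) from the device position along its forward vector returns hits on real-world geometry, and the hit distance is the measurement the post asks for.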
Apply a mesh to real-world people
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
91
Apr ’25
RealityKit System update and timing
Hi, I'm currently playing with hand tracking. I want to get the hand position inside a System update function. I wasn't sure whether the transform I get from a hand-attached AnchorEntity (with trackingMode: .predicted) would give the same results as handAnchors(at:) from the hand tracking provider, so I started reading both and comparing. For handAnchors(at:) I tried both context.scene.timebase.sourceTimebase!.sourceClock!.time.seconds and CACurrentMediaTime() as the timestamp source. They appear to use exactly the same clock, so that doesn't matter, but: for some reason the update handler is always called twice with the same context.deltaTime; the first time the query finds 0 entities, the second time it finds them all. The query is the standard EntityQuery(where: .has(MyComponent.self)) and in update I iterate context.entities(matching: Self.query, updatingSystemWhen: .rendering). Here's part of the log:

System update called, entity count: 0, dt: 0.01000458374619484, absTime: 4654.222593541
System update called, entity count: 11, dt: 0.01000458374619484, absTime: 4654.22262525
System update called, entity count: 0, dt: 0.009999999776482582, absTime: 4654.249390875
System update called, entity count: 11, dt: 0.009999999776482582, absTime: 4654.249425

Accounting for the double update calls, I started computing the absolute-time delta between calls, and it is usually much bigger or much smaller than the system's context.deltaTime claims; only sometimes do they roughly match. For example:

system: (dt: 0.01000458374619484)
scene : (dt: 0.021419291667371) (absTime: 4654.222628125001)

and the very next call:

system: (dt: 0.010009166784584522)
scene : (dt: 0.0013097083328830195) (absTime: 4654.223937833334)

but sometimes:

system: (dt: 0.009999999776482582)
scene : (dt: 0.009112249999816413) (absTime: 4654.351299166668)

Shouldn't those be more or less equal, or am I missing something? In the end, getting the hand position from the AnchorEntity and from handAnchors(at:) gives roughly the same results, but at different points in time, so I'd love to understand the correct way to use them and why time flows differently :). (A sketch of querying handAnchors(at:) from a System follows this entry.)
2
0
102
May ’25
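For what it's worth, a sketch of querying the hand tracking provider from inside a System, assuming the visionOS 2 handAnchors(at:) API (synchronous, returning an optional pair of hand anchors). The prediction lead time used here is a guess, not documented behavior, and is exactly the open question in the post above.

import ARKit
import RealityKit
import QuartzCore

struct MyComponent: Component {}  // stand-in for the post author's component

struct HandPoseSystem: System {
    static let query = EntityQuery(where: .has(MyComponent.self))
    static var handTracking: HandTrackingProvider?  // assumed injected after session.run

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        guard let provider = Self.handTracking else { return }
        // Ask for the pose slightly ahead of "now" so it lines up with the frame
        // being composed; the right offset is an assumption to experiment with.
        let timestamp = CACurrentMediaTime() + context.deltaTime
        let hands = provider.handAnchors(at: timestamp)
        guard let left = hands.leftHand, left.isTracked else { return }
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            entity.transform = Transform(matrix: left.originFromAnchorTransform)
        }
    }
}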
What is the first reliable position of the Apple Vision Pro device?
In several visionOS apps, we readjust our scenes to the user's eye level (their head height). However, we have encountered issues where the WorldTrackingProvider returns bad/incorrect positions for the first several frames. See the code below, which you can copy-paste into any immersive space: relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.

a. What is the most reliable way to get the device's position?
b. Is this indeed a bug?
c. Are we using worldInfo improperly?
d. As a workaround, in our apps we let 10 frames pass before using worldInfo; should we set our threshold differently?

import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLoged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])
            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }
                // `worldInfo` does not return correct values for the first few frames
                // (the exact number of frames is unknown).
                // Known SO thread: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform
                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLoged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLoged = true // stop logging
                }
            }
        }
    }
}

(A compact state-gated variant of the workaround is sketched after this entry.)
3
0
109
May ’25
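As a compact alternative to counting frames, the threshold workaround from the post can also be gated on the provider's state. This is still a heuristic, not an official readiness signal:

import ARKit
import QuartzCore
import simd

// Returns nil until the provider is running AND the pose passes a plausibility
// check. The 0.5 m floor is an assumption about head height, like the post's 1.6.
func reliableDevicePose(from worldInfo: WorldTrackingProvider) -> simd_float4x4? {
    guard worldInfo.state == .running,
          let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = pose.originFromAnchorTransform
    guard transform.columns.3.y > 0.5 else { return nil }
    return transform
}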
How can I obtain video streams on Vision Pro that include digital-space content after being granted the Enterprise API?
After implementing the method of obtaining video streams discussed at WWDC in our program, I found that the obtained video stream does not include the digital models in the digital space or related imagery such as the app's UI. How can I obtain a video stream or frame that contains more than just the physical world?

let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()

func requestWorldSensingCameraAccess() async {
    let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func queryAuthorizationCameraAccess() async {
    let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func monitorSessionEvents() async {
    for await event in arKitSession.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            switch newState {
            case .initialized, .running, .paused:
                break
            case .stopped:
                if let error {
                    print("An error occurred: \(error)")
                }
            @unknown default:
                break
            }
        case .authorizationChanged(let type, let status):
            print("Authorization type \(type) changed to \(status)")
        default:
            print("An unknown event occurred \(event)")
        }
    }
}

@MainActor
func processWorldAnchorUpdates() async {
    for await anchorUpdate in worldTracking.anchorUpdates {
        switch anchorUpdate.event {
        case .added:
            // Check whether a persisted object is attached to this added anchor;
            // it may be a world anchor from a previous run of this app.
            // ARKit surfaces all world anchors associated with this app
            // when the world tracking provider starts.
            fallthrough
        case .updated:
            // Keep placed objects in sync with their corresponding world anchors,
            // and hide objects whose anchors are not being tracked.
            break
        case .removed:
            // If the corresponding world anchor was removed, remove the placed object.
            break
        }
    }
}

func arkitRun() async {
    do {
        try await arKitSession.run([cameraFrameProvider, worldTracking])
    } catch {
        return
    }
}

@MainActor
func processDeviceAnchorUpdates() async {
    await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}

@MainActor
func cameraFrameUpdatesBuffer() async {
    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
          let cameraFrameUpdates1 = cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
        return
    }
    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
        self.pixelBuffer = mainCameraSample.pixelBuffer
    }
    for await cameraFrame in cameraFrameUpdates1 {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
        if self.pixelBuffer != nil {
            self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!,
                                              frame2: mainCameraSample.pixelBuffer,
                                              outputSize: CGSize(width: 1920, height: 1080))
        }
    }
}
0
0
98
Apr ’25
RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use, especially under heavy load from both the scanning process and other unrelated app operations, the iPhone becomes very hot and the following warning begins to appear more frequently: "ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed." I was able to reproduce this behavior using Apple's RoomPlanExampleApp, with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:

DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
    for _ in 0..<4 {
        var value = 10_000
        DispatchQueue.global().async {
            while true {
                value *= 10_000
                value /= 10_000
                value ^= 10_000
                value = 10_000
            }
        }
    }
}

I suspect this is a RoomPlan API problem, which is why I filed feedback: 17441091. (A sketch of the non-retaining delegate pattern follows this entry.)
0
0
173
May ’25
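Whether or not RoomPlan contributes, the warning itself usually means ARFrames are outliving the delegate callback. A sketch of the pattern that avoids it: copy the needed values out of the frame and let the frame go, rather than storing the frame or capturing it in a closure that runs late under CPU pressure.

import ARKit

final class FrameHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Extract value types; do NOT keep `frame` (or frame.capturedImage)
        // in a property or capture it in the async block below.
        let cameraTransform = frame.camera.transform
        let timestamp = frame.timestamp
        DispatchQueue.global(qos: .userInitiated).async {
            // Heavy work sees only copies, so ARKit can recycle the frame's buffers.
            process(transform: cameraTransform, timestamp: timestamp)
        }
    }
}

func process(transform: simd_float4x4, timestamp: TimeInterval) { /* app-specific */ }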
LiDAR sensor does not work on some iPhone 13 Pro devices
I am experiencing a problem with three iPhone 13 Pro devices. They report the lowest quality for all points in the depth map from the LiDAR sensor, and the readings I get are unusable. If it were just one phone, I would consider it a faulty sensor, but in this case three phones give the same result. I have other iPhone 13 Pro devices that work as expected. Has anyone else experienced similar behavior? I am using iOS 18.4.1. https://developer.apple.com/documentation/avfoundation/avdepthdata/depthdataquality
0
0
51
May ’25
Is it possible to place a 3D model at the exact position of a QR code in AR with ARKit?
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world. Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space. If this is possible, could you point me in the right direction or recommend the best approach? Thank you for your help! (One possible approach is sketched after this entry.)
0
0
98
May ’25
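ARKit has no built-in QR-code anchor, so one common approach (a sketch under that assumption, not the only way) is to detect the code with Vision on the current ARFrame, then raycast through its center to place a RealityKit anchor at the code's real-world position:

import ARKit
import RealityKit
import Vision

func placeModel(_ model: Entity, on arView: ARView) {
    guard let frame = arView.session.currentFrame else { return }
    let request = VNDetectBarcodesRequest { request, _ in
        guard let qr = request.results?.first as? VNBarcodeObservation,
              qr.symbology == .qr else { return }
        // Vision's normalized box has a bottom-left origin; convert to view points.
        let point = CGPoint(x: qr.boundingBox.midX * arView.bounds.width,
                            y: (1 - qr.boundingBox.midY) * arView.bounds.height)
        DispatchQueue.main.async {
            guard let hit = arView.raycast(from: point,
                                           allowing: .estimatedPlane,
                                           alignment: .any).first else { return }
            let anchor = AnchorEntity(world: hit.worldTransform)
            anchor.addChild(model)
            arView.scene.addAnchor(anchor)
        }
    }
    // .right assumes a portrait-oriented device; adjust for your supported orientations.
    try? VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                               orientation: .right).perform([request])
}

Placement accuracy depends on the raycast hitting the code's surface; detecting over several frames and averaging the hit transforms tightens the result.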
RealityRenderer's Perspective Camera's FOV
Hi, I have been using RealityRenderer to render scenes on macOS as spatial videos and view them on Vision Pro, and it is awesome. I understand that it renders with a PerspectiveCamera. I wanted to know what the default FOV for this camera is, and how far we can push it. Ideally, I want to render a scene with a 180-degree field of view. Thanks. (See the sketch after this entry.)
1
0
112
May ’25
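Not an authoritative answer, but the knob in question: PerspectiveCameraComponent documents a default vertical field of view of 60 degrees, settable via fieldOfViewInDegrees. Perspective projections degenerate as they approach 180 degrees, so a true 180-degree render likely needs several camera directions stitched together rather than one very wide camera. A sketch, with the renderer wiring marked as an assumption:

import RealityKit

let camera = PerspectiveCamera()
camera.camera.fieldOfViewInDegrees = 120  // push higher and check for edge distortion
camera.look(at: [0, 1.5, 0], from: [0, 1.5, 2], relativeTo: nil)

// Assumption about the RealityRenderer side (verify against current docs):
// let renderer = try RealityRenderer()
// renderer.entities.append(camera)
// renderer.activeCamera = camera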
Tracking over large distances
I'm developing an AR application for the iPad Pro whose primary purpose is to overlay 3D design data on top of production parts. For alignment, we are using Vuforia model targets, which work really well locally. The further the device moves from the point of original alignment, the more overlay error (drift?) we see. My primary questions are: Are there any best practices for stabilizing frame-to-frame tracking when using model targets? We notice drift as soon as the device starts moving (the drift appears to occur specifically in the direction the device is moving); after about 15 feet of movement we observe about 3-6 inches of overlay error, and these use cases can be over 100 feet long. To reset drift, we understand we'll need multiple alignment points (model targets) along the way. Is there a standard or best practice for this, e.g. a new alignment point every x feet? We are using plane anchors to set our alignment. Typically we attach to the nearest plane; however, the anchor point can be very far away (the origin of the model, which often is not near the virtual content). Could this be the issue, that the anchor is far from the plane we attach it to? Would moving the anchor closer to that plane improve stability? After a few steps, the plane we originally attached to will be out of the FoV anyway. Thanks in advance! (A sketch of anchoring content near its anchor follows this entry.)
0
0
72
May ’25
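On the anchoring question specifically: ARKit's guidance is to keep virtual content close to its anchor, since tracking corrections are applied per anchor, so a distant anchor (such as a CAD model origin) amplifies drift at the content. A sketch of creating the anchor at the content's own location; the names are placeholders:

import ARKit
import RealityKit

func anchorContentNearby(arView: ARView, content: Entity, contentWorldTransform: simd_float4x4) {
    // Anchor at the content's position, not at the distant model origin.
    let anchor = ARAnchor(name: "local-alignment", transform: contentWorldTransform)
    arView.session.add(anchor: anchor)
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(content)
    arView.scene.addAnchor(anchorEntity)
}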
Request: More Fine-Grained Control in Object Capture (PhotogrammetrySession)
Hi Apple Team and Developers, First of all, I'd like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I've been developing a portrait scanning app using Object Capture, and in many tests, especially with human models, I've found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results. However, I've encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or "melted" facial features, likely due to ambiguity in the geometry estimation phase.

Feature suggestion: would it be possible to allow developers to supply two versions of the input images:
• One version (original) for texture generation
• A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only

This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas.

Question: Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps expose more intermediate stages or tunable parameters to developers? Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies?

Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers. Warm regards, KitCheng
0
0
65
May ’25
PhotogrammetrySession Polygon Count Limit – How Is It Determined by Hardware?
Hi Apple Team, I'm working on a human portrait scanning application using PhotogrammetrySession, and I've been very impressed by the results. Thank you for building such a powerful and accessible photogrammetry solution into macOS! I do, however, have a question about mesh-detail limitations on different Mac hardware configurations. When using PhotogrammetrySession.Request.Detail.custom and trying to set maximumPolygonCount = 1000000, I see the following log message: Clamped max poly count: 1000000 to device limit. 250000 is used. This is on an M1 Max with 32 GB RAM. I'm aware that PhotogrammetrySession.limits can report values like maximumInputImageDimension and maximumNumberOfInputImages, but I haven't found documentation on how the maximumPolygonCount limit is determined and which hardware specs influence it. Is it tied more to:
• GPU performance (e.g. neural/graphics cores)?
• CPU architecture?
• Memory size or bandwidth?
• Or is it fixed per SoC generation?

I'd love to understand what kind of hardware upgrades (e.g. moving to an M4 Pro or increasing RAM) would allow me to increase mesh complexity and generate more detailed models. Any insights would be greatly appreciated, and if this is covered in upcoming WWDC sessions or documentation, I'd be happy to tune in. Thanks in advance! KitCheng (A configuration sketch follows this entry.)
0
0
103
May ’25
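For reference, a sketch of where that clamp shows up in code, assuming the macOS 14 custom-detail API surface (customDetailSpecification and Request.Detail.custom; verify the property names against the current Object Capture docs):

import RealityKit

func reconstruct(imagesFolder: URL, output: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    // Requests above the device limit are clamped, producing the log quoted above.
    configuration.customDetailSpecification.maximumPolygonCount = 1_000_000
    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: output, detail: .custom)])
}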
Custom Room Names Saved After Merge?
I am letting users capture different rooms and add a custom label to each room. Is there a way to store this data in the captured room itself so that it persists through the final merge? As it is now, my users mark all their captures with custom labels, but after merging there is no way to tell which room is which, so they have to go through and manually add the labels back. For larger floor plans this is not ideal. (A persistence sketch follows this entry.)
0
0
80
May ’25
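RoomPlan doesn't expose a user-label field that survives merging, as far as I know, so one workaround is persisting your own label table alongside the captured data; CapturedRoom is Codable, which makes this straightforward. A sketch (the pairing scheme is an app-level choice, not a RoomPlan API):

import Foundation
import RoomPlan

struct LabeledRoom: Codable {
    let label: String       // the user's custom room name
    let room: CapturedRoom  // stored next to its label so the pairing survives a merge
}

func saveLabeledRoom(_ room: CapturedRoom, label: String, to url: URL) throws {
    let data = try JSONEncoder().encode(LabeledRoom(label: label, room: room))
    try data.write(to: url)
}

After merging, match each merged room back to its saved label (for example by comparing floor polygons or room centers) and reapply the names in your UI.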
I need to loop my videoMaterial.
I need to loop my videoMaterial and I don't know how to make that happen in my code. I have included an image of my videoMaterial code. Any help making this happen would be greatly appreciated. Thank you, Christopher. (A generic looping sketch follows this entry.)
1
0
106
Jun ’25
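Since the post's code image isn't visible here, a generic sketch of the standard looping setup: VideoMaterial plays whatever its AVPlayer plays, so wrapping the item in AVPlayerLooper (which requires an AVQueuePlayer) loops the texture indefinitely. The asset name is a placeholder.

import AVFoundation
import RealityKit

let url = Bundle.main.url(forResource: "myVideo", withExtension: "mp4")!  // hypothetical asset
let player = AVQueuePlayer()
// Keep a strong reference to the looper; if it deallocates, looping stops.
let looper = AVPlayerLooper(player: player, templateItem: AVPlayerItem(url: url))
let videoMaterial = VideoMaterial(avPlayer: player)
// Assign videoMaterial to your model entity's materials, then start playback.
player.play()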
ARDepthData.confidenceMap only returns low confidence on certain devices
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low-confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is not a viable option. Other LiDAR-based apps have been tested on this device and the results are the same: no points, or noisy point clouds in apps that allow a lower confidence-threshold setting. On devices that exhibit this behavior, Apple's "Displaying a point cloud using scene depth" sample app can be used to visualize the issue. The first reports of this behavior occurred as early as iOS 18.4. I'm looking for recommendations on which team(s) at Apple to reach out to with these findings, since the behavior manifests on only a small sample of devices. (A diagnostic sketch follows this entry.)
1
0
212
Jun ’25
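A diagnostic sketch (not a fix) for confirming the behavior above: tally the per-pixel values in ARDepthData.confidenceMap, which stores one UInt8 per pixel with raw values matching ARConfidenceLevel (0 = low, 1 = medium, 2 = high).

import ARKit

func confidenceHistogram(for depthData: ARDepthData) -> [Int] {
    guard let map = depthData.confidenceMap else { return [] }
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }
    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(map)
    let base = CVPixelBufferGetBaseAddress(map)!.assumingMemoryBound(to: UInt8.self)
    var counts = [0, 0, 0]  // [low, medium, high]
    for y in 0..<height {
        for x in 0..<width {
            let value = Int(base[y * bytesPerRow + x])
            if value < counts.count { counts[value] += 1 }
        }
    }
    return counts
}

On an affected device, counts[1] and counts[2] staying at zero across varied environments would support a hardware or OS issue rather than an app-side one.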