Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

116 Posts

Post

Replies

Boosts

Views

Activity

Attaching procedural audio to an ARKit SCNNode
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages: Stack Overflow Post Working implementation with a static audio file: let audioPlayer = SCNAudioPlayer(source: audioSource) node.addAudioPlayer(audioPlayer) Attempted implementation with procedural audio: // Audio generation code let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode) node.addAudioPlayer(audioPlayer) In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating procedural audio with an AVAudioNode, as documented here: Apple docs Additionally, I explored the WWDC18 AR game project, SwiftShot, which uses SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics work correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating: "Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use." However, that method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
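For reference, here is a minimal sketch of what I'm attempting, including the one extra step I plan to try: attaching the source node to the view's own audio engine before wrapping it in an SCNAudioPlayer. sceneView and node are placeholders, and whether the explicit attach is actually required is an assumption on my part, not something the documentation confirms.

import SceneKit
import AVFoundation

// Sketch only: sceneView (an SCNView) and node (an SCNNode) are assumed to exist.
var phase: Float = 0
let sampleRate: Float = 44_100
let frequency: Float = 440

// A simple sine-wave generator as the procedural source.
let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let sample = sinf(2 * .pi * phase) * 0.25
        phase += frequency / sampleRate
        if phase >= 1 { phase -= 1 }
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

// Assumption: SceneKit may only pull audio from nodes attached to its own engine,
// so attach the source node to the renderer's engine before creating the player.
sceneView.audioEngine.attach(sourceNode)

let audioPlayer = SCNAudioPlayer(avAudioNode: sourceNode)
node.addAudioPlayer(audioPlayer)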
0
0
245
Apr ’25
Unity (IL2CPP) iOS Build: "_placeGeoAnchor" Undefined Symbol for Architecture arm64
Hi all, I’m running into a persistent linker error when building my Unity 6 project (IL2CPP, iOS target) that calls a Swift method via an Objective-C++ wrapper. Despite following all known steps, I keep getting: Undefined symbols for architecture arm64: "_placeGeoAnchor", referenced from: _GeoAnchorTrigger_placeGeoAnchor in libGameAssembly.a ... ld: symbol(s) not found for architecture arm64 I’m trying to place a persistent AR anchor at real-world GPS coordinates (so that the same asset can appear at the same location for a returning user). Since I’m targeting iOS, I can’t use Google’s geospatial anchors (but I sooo wish I could--please apple I beg of you stop being so selfish lol). I've already done these things: The Swift file is added to the Unity-iPhone target. The .mm and .h files are in the Unity-iPhone target under Compile Sources. The bridging header is set to Unity-iPhone-Bridging-Header.h. The generated header name is correct (GeoTest-Swift.h). Build Active Architecture Only is set to No. The function has __attribute__((visibility("default"))). The Unity project uses the IL2CPP scripting backend. Yet I'm still getting the same linker error — it appears Unity (via IL2CPP) references the function, but Xcode doesn't link it. Is it something small that's being missed in how IL2CPP links native symbols? Or maybe I need to explicitly include something in Link Binary With Libraries? I’ve verified symbol visibility and targets repeatedly. I’ve built AR features in Unity before (for Quest), but this is my first time trying to bridge C# → Objective-C++ → Swift in this way for a geolocation-based AR anchor on an iPhone. I'm going crazy, I’ve been stuck on this for 12+ hours now, so any insight or nudge would be deeply appreciated. SPECS: MacBook Pro M4 Pro -- Sequoia 15.4, Unity 6000.0.45f1, iPhone 11 on iOS 18.4, Xcode 15
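In case it helps narrow this down, my understanding is that the Swift function has to be exported under an unmangled C name for the _placeGeoAnchor reference to resolve at link time. Here is a sketch of that using the unofficial @_cdecl attribute; the parameter list is purely illustrative and not my real signature.

import Foundation

// Sketch only: @_cdecl is an underscored, unofficial Swift attribute, but it is the
// mechanism most Unity native plugins rely on to expose a Swift function as a plain
// C symbol. Without something like this, the Swift function's mangled name will never
// match the "_placeGeoAnchor" symbol that the IL2CPP-generated code references.
@_cdecl("placeGeoAnchor")
public func placeGeoAnchor(_ latitude: Double, _ longitude: Double) {
    // Forward to the real implementation here; these parameters are hypothetical.
    print("placeGeoAnchor(\(latitude), \(longitude))")
}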
0
0
176
Apr ’25
RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085 I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
1
0
689
Apr ’25
Getting original LocationNode from hit tested SCNNode
I would like to modify the content of a published LocationNode when it is tapped by the user. Unfortunately, func hitTest(_ point: CGPoint, options: [SCNHitTestOption : Any]? = nil) -> [SCNHitTestResult] returns an array of SCNHitTestResult from which I cannot retrieve the original LocationNode that was inserted, so I cannot modify it. The obvious solutions would be either to wrap the SCNNode corresponding to the inserted LocationNode in a custom class, or conversely to store the identifier of the custom object as a tag on the LocationNode, but both options seem impossible to implement. Can anyone help me?
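One workaround I'm considering, assuming LocationNode is an SCNNode subclass (as in the ARKit-CoreLocation library) and the hit-tested geometry is one of its descendants, is to walk up the parent chain from the hit-test result until a LocationNode is found:

import SceneKit

// Sketch only: returns the nearest LocationNode ancestor of the hit-tested node,
// or nil if the hit geometry does not belong to one.
func locationNode(from hit: SCNHitTestResult) -> LocationNode? {
    var node: SCNNode? = hit.node
    while let current = node {
        if let locationNode = current as? LocationNode {
            return locationNode
        }
        node = current.parent
    }
    return nil
}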
1
0
153
Apr ’25
Apply mesh to real world people.
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
119
Apr ’25
How to obtain video streams from the digital space included in VisionPro after applying for the "Enterprise API"?
After implementing the method of obtaining video streams discussed at WWDC in my app, I found that the obtained video stream does not include the digital models in the digital space or related content such as the app's UI. I would like to ask how to obtain a video stream or frame that also includes the digital content, rather than only the physical world. let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions:[.left]) let cameraFrameProvider = CameraFrameProvider() var arKitSession = ARKitSession() var pixelBuffer: CVPixelBuffer? var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined let worldTracking = WorldTrackingProvider() func requestWorldSensingCameraAccess() async { let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess]) cameraAccessStatus = authorizationResult[.cameraAccess]! } func queryAuthorizationCameraAccess() async{ let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess]) cameraAccessStatus = authorizationResult[.cameraAccess]! } func monitorSessionEvents() async { for await event in arKitSession.events { switch event { case .dataProviderStateChanged(_, let newState, let error): switch newState { case .initialized: break case .running: break case .paused: break case .stopped: if let error { print("An error occurred: \(error)") } @unknown default: break } case .authorizationChanged(let type, let status): print("Authorization type \(type) changed to \(status)") default: print("An unknown event occurred \(event)") } } } @MainActor func processWorldAnchorUpdates() async { for await anchorUpdate in worldTracking.anchorUpdates { switch anchorUpdate.event { case .added: // Check whether a persisted object is attached to this added anchor - it may be a world anchor from a previous run of this app. ARKit surfaces all world anchors associated with this app when the world tracking provider starts. fallthrough case .updated: // Keep the placed object's position in sync with its corresponding world anchor, and hide the object if the anchor is not tracked. break case .removed: // Remove the placed object if its corresponding world anchor was removed. break } } } func arkitRun() async{ do { try await arKitSession.run([cameraFrameProvider,worldTracking]) } catch { return } } @MainActor func processDeviceAnchorUpdates() async { await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90) } @MainActor func cameraFrameUpdatesBuffer() async{ guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]),let cameraFrameUpdates1 = cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else { return } for await cameraFrame in cameraFrameUpdates { guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue } self.pixelBuffer = mainCameraSample.pixelBuffer } for await cameraFrame in cameraFrameUpdates1 { guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue } if self.pixelBuffer != nil { self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!, frame2: mainCameraSample.pixelBuffer, outputSize: CGSize(width: 1920, height: 1080)) } } }
0
0
171
Apr ’25
Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro? Nearly everything is set up in Reality Composer Pro, with my immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this - if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) - and this: .onAppear { appModel.immersiveSpaceState = .open } .onDisappear { appModel.immersiveSpaceState = .closed } I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and whether this is actually the right way to do it. Apologies for my lack of knowledge.
0
0
96
Apr ’25
Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro? I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content. Nearly everything is set up in Reality Composer Pro, with my immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this - if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) - and this: .onAppear { appModel.immersiveSpaceState = .open } .onDisappear { appModel.immersiveSpaceState = .closed } I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or whether this is the right way to go about it. I have also implemented this at the beginning of object tracking: all I had to do was add an onAppear behavior to the object to play a USDZ, and that works. Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
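For context, here is a rough sketch of what I imagine the non-RCP approach would look like with ObjectTrackingProvider: keep the content parented to the scene root instead of the tracked anchor, mirror the anchor's transform while it is tracked, and when tracking is lost leave the content at its last known pose and play the animation. The names and overall structure are assumptions on my part, not a verified solution.

import ARKit
import RealityKit

// Sketch only: assumes an ObjectTrackingProvider is already running in an ARKitSession
// and that `content` is the entity loaded from the Reality Composer Pro scene.
func handleObjectAnchorUpdates(provider: ObjectTrackingProvider, content: Entity) async {
    for await update in provider.anchorUpdates {
        switch update.event {
        case .added, .updated:
            if update.anchor.isTracked {
                // Follow the reference object while it is tracked.
                content.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
            } else if let animation = content.availableAnimations.first {
                // Tracking lost: keep the last known pose and play the USDZ animation.
                content.playAnimation(animation)
            }
        case .removed:
            break
        }
    }
}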
0
0
120
Apr ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment Xcode: 16.2 VisionOS SDK 2.4 Swift 6.1 Targets: Apple Vision Pro (immersive space) Frameworks: ARKit, RealityKit, SwiftUI What I’m Trying to Do I have a view-model class PlacementManager that holds two AR providers: private var worldTracking: WorldTrackingProvider private var planeDetection: PlaneDetectionProvider I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit). What’s Happening If I declare them as: private let worldTracking = WorldTrackingProvider() private let planeDetection = PlaneDetectionProvider() I get compile errors when I later do: self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant If I change them to uninitialized vars: private var worldTracking: WorldTrackingProvider private var planeDetection: PlaneDetectionProvider then in my init() I get: self used in property access 'worldTracking' before all stored properties are initialized Code snippet @Observable final class PlacementManager : ObservableObject { private var worldTracking: WorldTrackingProvider private var planeDetection: PlaneDetectionProvider // … other props … @MainActor init() { // error: self.worldTracking used before init… planeAnchorHandler = PlaneAnchorHandler(rootEntity: root) persistenceManager = PersistenceManager( worldTracking: worldTracking, rootEntity: root ) // … } @MainActor func setEnvironment(env: Environnement) async { let newWorldTracking = WorldTrackingProvider() let newPlaneDetection = PlaneDetectionProvider() try await appState!.arkitSession.run( [ newWorldTracking, newPlaneDetection ] ) self.worldTracking = newWorldTracking self.planeDetection = newPlaneDetection // … } } What I’ve Tried Giving them default values at declaration (= WorldTrackingProvider()) Initializing them at the top of init() before any use Passing the new providers into arkitSession.run(...) My Question What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that: They’re fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads / docs) would be greatly appreciated!
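For reference, here is a minimal sketch of the pattern I'm considering: build the providers into local constants first, assign every stored property, and only then hand them to collaborators. PersistenceManager and my other properties are omitted here, so this is an assumption about the shape of the fix rather than a drop-in replacement.

import ARKit
import Observation

// Sketch only: stored properties stay vars so they can be swapped later, and init()
// assigns every stored property before self is used for anything else.
@Observable
final class PlacementManagerSketch {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider

    @MainActor
    init() {
        let worldTracking = WorldTrackingProvider()
        let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
        self.worldTracking = worldTracking
        self.planeDetection = planeDetection
        // Only after this point is it safe to pass the providers to other objects,
        // e.g. a persistence manager that takes `worldTracking`.
    }

    @MainActor
    func setEnvironment(session: ARKitSession) async throws {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
        try await session.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}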
0
0
146
May ’25
RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use—especially under heavy load from both the scanning process and other unrelated app operations—the iPhone becomes very hot, and the following warning begins to appear more frequently: "ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed." I was able to reproduce this behavior using Apple’s RoomPlanExampleApp, with only one change: I introduced a CPU-intensive workload at the end of the startSession() function: DispatchQueue.global().asyncAfter(deadline: .now() + 5) { for i in 0..<4 { var value = 10_000 DispatchQueue.global().async { while true { value *= 10_000 value /= 10_000 value ^= 10_000 value = 10_000 } } } } I suspect this is a RoomPlan API problem, which is why I filed a feedback report: 17441091
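For reference, the general mitigation I'm aware of for this warning, independent of the suspected RoomPlan issue, is to avoid holding on to ARFrame objects in the delegate and to copy out only the small values that are needed. A minimal sketch of that pattern:

import ARKit

// Sketch only: never store the ARFrame itself (or capture it in work queued on
// another dispatch queue); extract value types from it and let it be released.
final class ScanSessionDelegate: NSObject, ARSessionDelegate {
    private(set) var latestCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Copy the data you need out of the frame...
        let transform = frame.camera.transform
        DispatchQueue.main.async { [weak self] in
            self?.latestCameraTransform = transform
        }
        // ...and do NOT do something like `self.lastFrame = frame`, which retains
        // the frame and its underlying capture buffers.
    }
}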
0
0
280
May ’25
RealityKit System update and timing
Hi, I'm playing with hand tracking. I want to get the position of a hand inside a System update function. I was not sure whether the transform I'm getting from a hand-attached AnchorEntity (with trackingMode: .predicted) would give the same results as handAnchors(at:) from the hand tracking provider, so I started to read them both and compare. For handAnchors I tried using context.scene.timebase.sourceTimebase!.sourceClock!.time.seconds and CACurrentMediaTime() as the timestamp source. They seem to use exactly the same clock, so that doesn't matter, but: for some reason the update handler is always called twice with the same context.deltaTime, but the first time the query finds 0 entities and the second time it finds them all. The query is the standard EntityQuery(where: .has(MyComponent.self)) and in update (matching: Self.query, updatingSystemWhen: .rendering). Here's part of the logs: System update called, entity count: 0, dt: 0.01000458374619484, absTime: 4654.222593541 System update called, entity count: 11, dt: 0.01000458374619484, absTime: 4654.22262525 System update called, entity count: 0, dt: 0.009999999776482582, absTime: 4654.249390875 System update called, entity count: 11, dt: 0.009999999776482582, absTime: 4654.249425 Accounting for the double update calls, I started to calculate the delta of absolute time between calls, and it is most of the time much bigger or much smaller than advertised by the system's context.deltaTime; only sometimes do they roughly match, for example: system: (dt: 0.01000458374619484) scene : (dt: 0.021419291667371) (absTime: 4654.222628125001) and the very next call system: (dt: 0.010009166784584522) scene : (dt: 0.0013097083328830195) (absTime: 4654.223937833334) but sometimes system: (dt: 0.009999999776482582) scene : (dt: 0.009112249999816413) (absTime: 4654.351299166668) Shouldn't those be more or less equal, or am I missing something? In the end it seems that getting the hand position from the AnchorEntity and with handAnchors(at:) gives roughly the same results, but at different time points, so I'd love to understand what the correct way to use them is and why time flows differently :)
2
0
147
May ’25
Can not remove final World Anchor
I’ve been having some issues removing anchors. I can add anchors with no issue. They will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider. worldTracking.removeAnchor(forID: uuid) // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)` This works if there is more than one anchor in the scene. When I’m down to one remaining anchor, I can remove it. It seems to succeed (does not raise an error) but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor. do { // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left. try await worldTracking.removeAnchor(forID: uuid) } catch { // I have never seen this block fire! print("Failed to remove world anchor \(uuid) with error: \(error).") } I posted a video on my website if you want to see it happening. https://stepinto.vision/labs/lab-051-issues-with-world-tracking/ Here is the full code. Can you see if I’m doing something wrong? Is this a bug? struct Lab051: View { @State var session = ARKitSession() @State var worldTracking = WorldTrackingProvider() @State var worldAnchorEntities: [UUID: Entity] = [:] @State var placement = Entity() @State var subject : ModelEntity = { let subject = ModelEntity( mesh: .generateSphere(radius: 0.06), materials: [SimpleMaterial(color: .stepRed, isMetallic: false)]) subject.setPosition([0, 0, 0], relativeTo: nil) let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)]) let input = InputTargetComponent() subject.components.set([collision, input]) return subject }() var body: some View { RealityView { content in guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return } content.add(scene) if let placementEntity = scene.findEntity(named: "PlacementPreview") { placement = placementEntity } } update: { content in for (_, entity) in worldAnchorEntities { if !content.entities.contains(entity) { content.add(entity) } } } .modifier(DragGestureImproved()) .gesture(tapGesture) .task { try! await setupAndRunWorldTracking() } } var tapGesture: some Gesture { TapGesture() .targetedToAnyEntity() .onEnded { value in if value.entity.name == "PlacementPreview" { // If we tapped the placement preview cube, create an anchor Task { let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil)) try await worldTracking.addAnchor(anchor) } } else { Task { // Get the UUID we stored on the entity let uuid = UUID(uuidString: value.entity.name) ?? UUID() do { try await worldTracking.removeAnchor(forID: uuid) } catch { print("Failed to remove world anchor \(uuid) with error: \(error).") } } } } } func setupAndRunWorldTracking() async throws { if WorldTrackingProvider.isSupported { do { try await session.run([worldTracking]) for await update in worldTracking.anchorUpdates { switch update.event { case .added: let subjectClone = subject.clone(recursive: true) subjectClone.isEnabled = true subjectClone.name = update.anchor.id.uuidString subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform) worldAnchorEntities[update.anchor.id] = subjectClone print("🟢 Anchor added \(update.anchor.id)") case .updated: guard let entity = worldAnchorEntities[update.anchor.id] else { print("No entity found to update for anchor \(update.anchor.id)") return } entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform) print("🔵 Anchor updated \(update.anchor.id)") case .removed: worldAnchorEntities[update.anchor.id]?.removeFromParent() worldAnchorEntities.removeValue(forKey: update.anchor.id) print("🔴 Anchor removed \(update.anchor.id)") if let remainingAnchors = await worldTracking.allAnchors { print("Remaining Anchors: \(remainingAnchors.count)") } } } } catch { print("ARKit session error \(error)") } } } }
1
2
228
May ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository, the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here’s what I’m aiming for: Realistic colors and textures that match what the camera sees during the scan. Continuous texture rendering during the scanning process, not just in the final exported model. Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
1
0
159
May ’25
ARAnchor shifts
I encountered some issues while developing a Vision Pro program using Unity. After binding an ARAnchor to a game object, I overlapped the virtual game object with a real-world cup. However, when I moved around with the Vision Pro on, the virtual game object shifted, causing the real-world cup and the virtual object to no longer coincide. Is there a way to solve this?
1
0
96
May ’25
Is it Possible to Place a 3D Model at the Exact Position of a QR Code in AR with ARKit?
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world. Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space. If this is possible, could you point me in the right direction or recommend the best approach to achieve this? Thank you for your help!
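Here is roughly the approach I'm considering, in case it is on the right track: detect the code with Vision's VNDetectBarcodesRequest on the captured camera frame, then raycast from the code's center point to get a world transform to anchor the model at. The coordinate conversion below is deliberately simplified and the asset name is a placeholder.

import ARKit
import RealityKit
import Vision

// Sketch only: assumes `qrCode` came from a VNDetectBarcodesRequest run on the
// current ARFrame's captured image. A real implementation should convert image
// coordinates with the frame's displayTransform(for:viewportSize:) instead of the
// rough flip used here.
func placeModel(for qrCode: VNBarcodeObservation, in arView: ARView) {
    // Center of the detected code in Vision's normalized coordinates (origin bottom-left).
    let center = CGPoint(x: qrCode.boundingBox.midX, y: qrCode.boundingBox.midY)
    let viewPoint = CGPoint(x: center.x * arView.bounds.width,
                            y: (1 - center.y) * arView.bounds.height)

    // Raycast from the code's on-screen position into the scene.
    guard let result = arView.raycast(from: viewPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }

    let anchor = AnchorEntity(world: result.worldTransform)
    if let model = try? Entity.load(named: "MyModel") {   // placeholder asset name
        anchor.addChild(model)
    }
    arView.scene.addAnchor(anchor)
}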
0
0
135
May ’25
Transparent Material Turning into Occlusion Material
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both approaches partially work: they behave as intended in many cases, but seemingly at random the object acts like an occlusion material and blocks any other immersive content behind it, showing the real world instead. Some notes: I am using RealityKit to render the semi-transparent object and an opaque object that sits behind it. I am using visionOS 2.1, and I update the location of the semi-transparent object often. Both objects are ModelEntities. I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
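One thing I plan to try, in case this is a draw-ordering issue between the transparent and opaque entities rather than a material problem, is putting both models into an explicit sort group so the opaque one always draws first. This is only a sketch and assumes the visionOS ModelSortGroup API behaves as I expect:

import RealityKit

// Sketch only: force a fixed draw order between the two entities. Lower `order`
// values are drawn earlier within the same group.
func applyExplicitSortOrder(opaque: ModelEntity, transparent: ModelEntity) {
    transparent.components.set(OpacityComponent(opacity: 0.5))

    let group = ModelSortGroup(depthPass: nil)
    opaque.components.set(ModelSortGroupComponent(group: group, order: 0))
    transparent.components.set(ModelSortGroupComponent(group: group, order: 1))
}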
3
0
177
May ’25
How to make device cross into portal world.
I want to step into a portal world. I know PortalCrossingComponent can make an entity cross a portal, but how can I make the device (the user) cross into the portal world?
1
0
140
Apr ’25
VideoDockingRegion avplayer video plays through all object
Hi, I'm trying to place an object in front of an AVPlayer that is docked in a VideoDockingRegion, but when launched in an immersive space, the video shows through the objects placed in front of it. How do I make sure these objects stay visible? (Image attached for reference.)
0
0
119
Apr ’25
Object Tracking with Reality Composer Pro and ARKit support for Unity
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity via PolySpatial, or is it only available in Swift and Xcode?
1
1
599
Apr ’25
Object Tracking Improvement of moving object
Here is Apple's object tracking sample project: https://developer.apple.com/documentation/visionOS/exploring_object_tracking_with_arkit Can we improve its tracking accuracy, and its tracking when the object is moving a little faster, so that the 3D object being drawn still follows it accurately?
0
0
145
May ’25