Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

140 Posts

Post · Replies · Boosts · Views · Activity

Apply a mesh to real-world people
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2 replies · 0 boosts · 111 views · Apr ’25
Getting the original LocationNode from a hit-tested SCNNode
I would like to modify the content of a published LocationNode when the user taps it. Unfortunately, func hitTest(_ point: CGPoint, options: [SCNHitTestOption : Any]? = nil) -> [SCNHitTestResult] returns an array of SCNNodes from which I cannot recover the original LocationNode that was inserted, so I have no way to modify it. The obvious fixes would be either to wrap the SCNNode corresponding to the inserted LocationNode in a custom class, or conversely to store my custom object's identifier as a tag on the LocationNode, but both options seem impossible to implement. Can anyone help? (A possible approach is sketched after this post.)
1 reply · 0 boosts · 120 views · Apr ’25
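One possible approach to the question above, offered as a hedged sketch rather than a confirmed answer: if the LocationNode in use is an SCNNode subclass (as in the ARCL library), the node returned by a hit test is usually the LocationNode itself or one of its descendants, so you can walk up the parent chain until you reach the original LocationNode. The helper name below is hypothetical.

import SceneKit

// Hypothetical helper: walk up the hierarchy from the hit-test result's node
// until an ancestor that is a LocationNode (an SCNNode subclass) is found.
func locationNode(from hitResult: SCNHitTestResult) -> LocationNode? {
    var current: SCNNode? = hitResult.node
    while let node = current {
        if let locationNode = node as? LocationNode {
            return locationNode
        }
        current = node.parent
    }
    return nil
}

// Usage from a tap-gesture handler on the scene view:
// let results = sceneView.hitTest(tapLocation, options: nil)
// if let tapped = results.compactMap(locationNode(from:)).first {
//     // modify the original LocationNode here
// }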
RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085 I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
1 reply · 0 boosts · 676 views · Apr ’25
Unity (IL2CPP) iOS Build: "_placeGeoAnchor" Undefined Symbol for Architecture arm64
Hi all, I'm running into a persistent linker error when building my Unity 6 project (IL2CPP, iOS target) that calls a Swift method via an Objective-C++ wrapper. Despite following all the known steps, I keep getting:

Undefined symbols for architecture arm64: "_placeGeoAnchor", referenced from: _GeoAnchorTrigger_placeGeoAnchor in libGameAssembly.a ... ld: symbol(s) not found for architecture arm64

I'm trying to place a persistent AR anchor at real-world GPS coordinates, so that the same asset can appear at the same location for a returning user. Since I'm targeting iOS, I can't use Google's geospatial anchors (though I really wish I could). I've already done the following:
- The Swift file is added to the Unity-iPhone target.
- The .mm and .h files are in the Unity-iPhone target under Compile Sources.
- The bridging header is set to Unity-iPhone-Bridging-Header.h.
- The generated header name is correct (GeoTest-Swift.h).
- Build Active Architecture Only is set to No.
- The function has __attribute__((visibility("default"))).
- The Unity project uses the IL2CPP scripting backend.

Yet I still get the same linker error: Unity (via IL2CPP) references the function, but Xcode doesn't link it. Is something small being missed in how IL2CPP links native symbols? Or do I need to explicitly include something in Link Binary With Libraries? I've verified symbol visibility and targets repeatedly. I've built AR features in Unity before (for Quest), but this is my first time bridging C# → Objective-C++ → Swift for a geolocation-based AR anchor on iPhone. I've been stuck on this for 12+ hours, so any insight or nudge would be deeply appreciated. (See the sketch after this post for one thing to try.)

Specs: MacBook Pro M4 Pro, macOS Sequoia 15.4; Unity 6000.0.45f1; iPhone 11, iOS 18.4; Xcode 15
0 replies · 0 boosts · 167 views · Apr ’25
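A hedged idea for the linker error above, not a confirmed fix: IL2CPP references the plain C symbol _placeGeoAnchor, so the function must be exported with C linkage. If the Objective-C++ wrapper is kept, its definition needs to be wrapped in extern "C". Alternatively, the Swift function can export the C symbol directly via the underscored (officially unsupported, but widely used for Unity bridging) @_cdecl attribute, which removes the wrapper from the picture entirely. The parameter list and the session holder below are hypothetical, since the original signature isn't shown.

import ARKit
import CoreLocation

// Sketch: export a C symbol named "placeGeoAnchor" straight from Swift.
// @_cdecl gives the function C linkage, so the linker sees _placeGeoAnchor,
// which is exactly what the IL2CPP-generated call references.
@_cdecl("placeGeoAnchor")
public func placeGeoAnchor(_ latitude: Double, _ longitude: Double, _ altitude: Double) {
    let coordinate = CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
    let anchor = ARGeoAnchor(coordinate: coordinate, altitude: altitude)
    // Hypothetical holder for whatever ARSession the app is running.
    GeoSessionHolder.shared.session?.add(anchor: anchor)
}

On the C# side this would be declared as [DllImport("__Internal")] private static extern void placeGeoAnchor(double lat, double lon, double alt); so that IL2CPP links statically against that symbol.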
Attaching procedural audio to an ARKit SCNNode
I'm developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see also my Stack Overflow post).

Working implementation with a static audio file:
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)

Attempted implementation with procedural audio:
// ... audio generation code ...
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)

In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. In other words, what doesn't work is creating procedural audio with an AVAudioNode, as documented in the Apple docs. Additionally, I explored the WWDC18 AR game project, SwiftShot, which uses SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics work correctly, but the audio does not play. I also noted that the Apple documentation for this initializer states: "Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use." However, that method does not appear to be available in Swift. Any insights or guidance would be greatly appreciated. (A minimal sketch of the procedural approach follows this post.)
0 replies · 0 boosts · 216 views · Apr ’25
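For reference, a minimal sketch of what the attempted setup above might look like in full; this simply spells out the same approach (a sine generator in an AVAudioSourceNode handed to SCNAudioPlayer(avAudioNode:)) and is not a confirmed fix for the silence. The plane's SCNNode is assumed to be called planeNode.

import SceneKit
import AVFoundation

let sampleRate = 44_100.0
var phase = 0.0

// A 440 Hz sine generator; the render block fills each requested buffer.
let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let phaseStep = 2.0 * Double.pi * 440.0 / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += phaseStep
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

// Hand the node to SceneKit's audio engine via SCNAudioPlayer.
let proceduralPlayer = SCNAudioPlayer(avAudioNode: sourceNode)
planeNode.addAudioPlayer(proceduralPlayer)   // planeNode: the SCNNode on the detected plane

One detail that may be worth checking (again, an assumption): the AVAudioSourceNode should not already be attached to a separate AVAudioEngine when it is handed to SceneKit, since SceneKit manages its own engine (exposed as audioEngine on the SCNSceneRenderer).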
A question about running two ARViews together
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the first scan with captureSession.stop(pauseARSession: false), so I believe the ARSession keeps running at that point. Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect). But at this point the second ARView (RoomPlan itself already contains an ARView, I think) always shows a black screen and doesn't work normally. This is the problem I want to resolve: please help me get the second ARView working. (One idea is sketched after this post.)
0 replies · 0 boosts · 136 views · Mar ’25
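A hedged sketch of one idea for the question above, under two assumptions: iOS 17 or later (where RoomCaptureSession exposes its underlying arSession), and an ARSCNView for the second view, whose session property can be assigned. The idea is to reuse the already-running session instead of letting a second view spin up its own session against the camera; whether the same works for a RealityKit ARView is not certain.

import ARKit
import RoomPlan
import SceneKit

// Stop the first room scan but keep the AR session running.
captureSession.stop(pauseARSession: false)

// Reuse that same session in the second view instead of creating a new one.
let secondView = ARSCNView(frame: view.bounds)
secondView.session = captureSession.arSession   // assumes the iOS 17+ RoomPlan API

// Continue with whatever detection the second view needs, without resetting tracking.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
secondView.session.run(configuration)   // no reset options, so world tracking carries over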
GameKit Access Point causes camera background image in ARKit to be black in iOS 18 beta only
I have an AR game using ARKit with SceneKit that works just fine in iOS 17. In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking, and obviously the whole game is useless. I narrowed down the issue to showing the Game Center Access Point. My app has ViewController 1 (VC1) showing the main menu, and that's where I want to show the GC Access Point. From there you open VC2, which shows a list of levels. Selecting any level opens VC3, which has the ARScene. Following is the code I use to start Game Center in VC1:

GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)
    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }
    if error != nil {
        print(error?.localizedDescription ?? "")
    }
    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}

When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point no longer shows in any of the following VCs. I tried running it in VC1, VC2, and again in VC3; it doesn't change anything. Once I reach VC3, the background is black. If in VC1 I don't run GKAccessPoint.shared.isActive = true, so the access point is never activated, the behavior is as follows:
- If I wait until the Game Center login animation completes and closes on its own, and then proceed to VC2 and VC3, the camera image shows correctly.
- If I quickly move to VC2 before the Game Center login animation has completed, so my code closes it by setting active = false, and then continue to VC3, I see the black-background problem.

So it does look like activating the access point and then deactivating it causes the issue. By the way, if I activate the access point and leave it on in all VCs, the same black-background issue persists. Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), the camera suddenly shows the real world correctly when the game returns to the foreground! I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, the session start code doesn't run when the app comes back to the foreground, yet something does seem to reset itself, and I wonder what that is. Maybe I could trigger the same thing manually in my code? I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently beta 5, but it appeared in some of the previous betas as well). So could this be a bug in iOS 18? As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point (hoping the user won't jump to VC2 too quickly) and show my own button that opens Game Center. But I'd rather give users the full experience, with their own avatar and the highlights showing up. Plus, some users will certainly move to VC2 quickly, and that would be an awful experience. Any help would be greatly appreciated. Thanks!
11 replies · 2 boosts · 1.2k views · Mar ’25
A question about adding a grounding shadow on Vision Pro
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems the shadow can only be cast onto another Entity, so I am using plane detection in ARKit and adding a transparent plane to receive the shadow:

let planeEntity = ModelEntity(mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height), materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))

But sometimes a border appears around my Entity on the plane. I don't know why this happens, and I want to remove the border. (See the sketch after this post for an alternative.)
5 replies · 0 boosts · 434 views · Mar ’25
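A possible alternative to the transparent shadow-catcher plane above, sketched under the assumption that the entity is rendered by RealityKit on visionOS: GroundingShadowComponent asks RealityKit to render a grounding shadow onto real-world surfaces itself, which may sidestep the border artifact entirely. myEntity is a placeholder name.

import RealityKit

// Ask RealityKit to render the grounding shadow directly.
func addGroundingShadow(to entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
    // Child model entities may each need the component as well.
    for child in entity.children {
        addGroundingShadow(to: child)
    }
}

addGroundingShadow(to: myEntity)   // myEntity: the entity placed in the RealityView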
Person Occlusion on the Vision Pro
I am currently creating an app where two people share an instance of an immersive space so that they can point at certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people) that describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar that will work on visionOS?
2 replies · 0 boosts · 141 views · Mar ’25
Can I use ARGeoAnchor with simulated locations
I'm trying to evaluate whether we can support AR navigation with MapKit. The feature is supposed to be available for users in the US. I tried to run the sample on my iPhone: https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar?language=objc But I'm in a location where ARGeoTrackingConfiguration.checkAvailabilityWithCompletionHandler: always returns false, so I think ARGeoAnchor isn't supported in my location. I tried to use simulated locations by adding a GPX file when launching the app and by enabling Xcode -> Debug -> Simulate Location -> New York, NY, US, but the availability for ARGeoAnchor is still false. Is it possible for me to develop the ARGeoAnchor feature outside of the covered areas? (See the sketch after this post.)
0 replies · 0 boosts · 122 views · Mar ’25
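For the question above, one detail that may help (a sketch, not a guarantee that geo tracking can be fully developed outside covered areas): ARGeoTrackingConfiguration has two availability checks, one for the device's current location and one for an arbitrary coordinate. The coordinate-based check can at least confirm whether a target area such as New York is covered, even from an unsupported region; actual geo-anchor localization still requires real imagery on site, which Xcode's simulated locations cannot provide.

import ARKit
import CoreLocation

// Availability at the device's current location (what the Apple sample calls;
// Xcode's simulated locations may not change its answer).
ARGeoTrackingConfiguration.checkAvailability { isAvailable, error in
    print("Geo tracking available here: \(isAvailable), error: \(String(describing: error))")
}

// Availability at an explicit coordinate, checkable from anywhere.
let newYork = CLLocationCoordinate2D(latitude: 40.7128, longitude: -74.0060)
ARGeoTrackingConfiguration.checkAvailability(at: newYork) { isAvailable, error in
    print("Geo tracking available in New York: \(isAvailable)")
}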
Adding reference image failed in visionOS
I am trying image tracking with ARKit on Vision Pro, but there seems to be a problem when adding the reference images. Here is my code:

let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
try await appState!.arkitSession.run([imageTracking])

It successfully prints the images, but sometimes it prints an error message like this:

ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.

When this error message is printed, the corresponding image cannot be tracked. I don't understand why this happens, because sometimes the image is added successfully and other times it is not, even for the same image. This makes my app unstable. Besides, there are some other error messages, and I don't know whether they are related:

ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
1 reply · 0 boosts · 362 views · Mar ’25
Human body joint tracking in visionOS
The goal is to achieve precise joint tracking for clinical assessment: the doctor wears the Apple Vision Pro and observes the patient's movement. Do you have any recommended best practices for integrating real-time joint tracking and displaying the joints on the patient within visionOS? We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking, and, if possible, a photo or video of the range-of-motion assessment would be needed for the patient record. Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)? (A display-side sketch follows this post.)
3 replies · 0 boosts · 338 views · Mar ’25
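This does not solve getting camera frames on Vision Pro, but for the "display the detected joints in an Immersive Space" part of the question above, a minimal sketch follows. It assumes the joint positions have already been converted to world-space SIMD3<Float> values (for example, from a VNHumanBodyPose3DObservation produced elsewhere); the view and property names are hypothetical.

import SwiftUI
import RealityKit

// Renders each joint as a small sphere inside a RealityView.
struct JointOverlayView: View {
    // Assumed input: joint name -> world-space position, produced elsewhere.
    let joints: [String: SIMD3<Float>]

    var body: some View {
        RealityView { content in
            for (name, position) in joints {
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.015),
                    materials: [SimpleMaterial(color: .green, isMetallic: false)]
                )
                sphere.name = name
                sphere.position = position
                content.add(sphere)
            }
        } update: { content in
            // Keep the spheres in sync when the joint dictionary changes.
            for entity in content.entities {
                if let position = joints[entity.name] {
                    entity.position = position
                }
            }
        }
    }
}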
Reality Composer Pro Performance on iOS
I need help wrapping my head around this... If I import the Reality Composer Pro package and load it into an ARView, I see about 1.3 GB of memory usage and roughly 180-220% CPU usage. The frame rate starts at around 60 fps and eventually drops to around 30 fps. If I export the USDZ from Reality Composer Pro and load that into the same ARView, I see about 1 GB of memory usage and around 150% CPU usage; the frame rate holds at 60 fps longer but eventually drops. If I load that same USDZ into a QuickLook view, I see about 55 MB of memory usage, 9-11% CPU, and the frame rate stays locked at 116 fps. The only thing I notice is that a button I have is slightly less responsive, but everything still works fine. I don't understand. How can I make the ARView work as efficiently as QuickLook?
0 replies · 0 boosts · 275 views · Mar ’25