Hi all,
I’m running into a persistent linker error when building my Unity 6 project (IL2CPP, iOS target) that calls a Swift method via an Objective-C++ wrapper. Despite following all known steps, I keep getting:
Undefined symbols for architecture arm64:
"_placeGeoAnchor", referenced from:
_GeoAnchorTrigger_placeGeoAnchor in libGameAssembly.a
...
ld: symbol(s) not found for architecture arm64
I’m trying to place a persistent AR anchor at real-world GPS coordinates (so that the same asset can appear at the same location for a returning user). Since I’m targeting iOS, I can’t use Google’s geospatial anchors (though I really wish I could).
I've already done these things:
Swift file is added to Unity-iPhone target.
.mm and .h files are in Unity-iPhone target under Compile Sources.
Bridging header is set to Unity-iPhone-Bridging-Header.h.
Generated header name is correct (GeoTest-Swift.h).
Build Active Architecture Only set to No.
The function is declared with __attribute__((visibility("default"))).
Unity project uses IL2CPP scripting backend.
Yet I'm still getting the same linker error — it appears Unity (via IL2CPP) references the function, but Xcode doesn't link it.
Is it something small in how IL2CPP links native symbols? Or do I need to explicitly include something under Link Binary With Libraries? I’ve verified symbol visibility and target membership repeatedly.
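My understanding is that the linker needs an unmangled C symbol literally named _placeGeoAnchor, which is why a plain .mm wrapper usually has to declare the function inside extern "C". Here is a Swift-side sketch of the alternative I'm considering (this assumes the underscored @_cdecl attribute that Unity iOS plugins commonly use to skip the wrapper; the signature and body are placeholders, not my real code):

import ARKit

// Hypothetical sketch: expose a C symbol named exactly _placeGeoAnchor,
// which is what IL2CPP's [DllImport("__Internal")] extern resolves against.
@_cdecl("placeGeoAnchor")
public func placeGeoAnchor(_ latitude: Double, _ longitude: Double, _ altitude: Double) {
    // Placeholder body: create an ARGeoAnchor at these coordinates and add it
    // to the running ARSession. Real implementation omitted.
}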
I’ve built AR features in Unity before (for Quest), but this is my first time bridging C# → Objective-C++ → Swift in this way for a geolocation-based AR anchor on iPhone.
I'm going crazy; I’ve been stuck on this for 12+ hours now, so any insight or nudge would be deeply appreciated.
SPECS:
MacBook Pro (M4 Pro), macOS Sequoia 15.4
Unity 6000.0.45f1
iPhone 11, iOS 18.4
Xcode 15
ARKit
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages: Stack Overflow Post
Working Implementation with Static Audio File:
let audioSource = SCNAudioSource(fileNamed: "sound.caf")! // hypothetical file name
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    // Audio generation code
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating some procedural audio with an AVAudioNode, as documented here:
Apple docs
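For comparison, this is roughly how I verified the source node on its own (a simplified sketch; the engine here is a standalone AVAudioEngine, not SceneKit's):

let engine = AVAudioEngine()
engine.attach(audioNode)
engine.connect(audioNode, to: engine.mainMixerNode, format: nil)
try engine.start() // audio is audible here, so the render block itself works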
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
I encountered some issues while developing a Vision Pro program using Unity. After binding an ARAnchor to a game object, I overlapped the virtual game object with a real-world cup. However, when I moved around with the Vision Pro on, the virtual game object shifted, causing the real-world cup and the virtual object to no longer coincide. Is there a way to solve this?
I want to step into a portal world. I know that PortalCrossingComponent can make an entity cross a portal, but how can I make the device itself cross into the portal world?
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the capture with captureSession.stop(pauseARSession: false); as I understand it, the ARSession keeps running at that point.
Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect).
But at this point the second ARView (RoomPlan contains an ARView internally, I believe) always shows a black screen and doesn't work. This is the problem I want to resolve. Please help me get the second ARView working.
We enabled ARKit Replay Data and chose a .mov file recorded with Reality Composer. However, the app does not launch after Build and Run in Xcode. It stays stuck saying "Attaching to app on iPad".
Does anyone have this problem?
System Information:
macOS Version 14.6.1 (Build 23G93)
I am currently creating an app where two people share an instance of an immersive space so that they can point to certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people), which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
I'm trying to evaluate if we can support AR navigation with MapKit. The feature is supposed to be available for users in US.
I tried to run the sample on my iPhone: https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar?language=objc
But I'm in a location where ARGeoTrackingConfiguration.checkAvailabilityWithCompletionHandler: always returns false. I think ARGeoAnchor isn't supported in my location.
I tried to use simulated locations by:
Adding a GPX file when launching the app.
Enabling Xcode -> Debug -> Simulate Location -> New York, NY, US.
But the availability for ARGeoAnchor is still false.
Is it possible for me to develop the ARGeoAnchor feature outside of the covered areas?
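For reference, this is roughly how I'm checking availability (a sketch; targetCoordinate is a placeholder for the area I actually need to support):

import ARKit
import CoreLocation

// Placeholder coordinate inside a covered area (e.g., somewhere in New York).
let targetCoordinate = CLLocationCoordinate2D(latitude: 40.7484, longitude: -73.9857)

// Device-location check (this is the one that keeps returning false for me):
ARGeoTrackingConfiguration.checkAvailability { isAvailable, error in
    print("Geo tracking available here: \(isAvailable), error: \(String(describing: error))")
}

// Coordinate-specific check, which asks about a given location rather than where the device is:
ARGeoTrackingConfiguration.checkAvailability(at: targetCoordinate) { isAvailable, error in
    print("Geo tracking available at target: \(isAvailable), error: \(String(describing: error))")
}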
Hello everyone, can you help me? I requested the Main Camera Access API (Enterprise) and have received the license for it. I set up Apple's main camera access demo project with my new license and created an app bundle and identifier for it, but when I tried to deploy it to TestFlight I got errors saying "Profile doesn't support Main Camera Access" and "Profile doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement", even though I have set this up in Certificates, Identifiers & Profiles and added the additional capability Main Camera Access. Can you help me fix this so that I can use the Main Camera Access entitlement?
I have been using ARKit to get hand tracking data on a continuous loop by implementing the AnchorUpdateSequence.
I want to try out .predicted hand tracking, but it seems that using an ARKit session and HandTrackingProvider does not allow me to enable this feature?
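For context, my continuous loop is essentially the standard pattern (simplified):

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()
try await session.run([handTracking])

// AnchorUpdateSequence: an async stream of HandAnchor updates
for await update in handTracking.anchorUpdates {
    let handAnchor = update.anchor
    // use handAnchor.originFromAnchorTransform / handAnchor.handSkeleton here
}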
The goal is to achieve precise joint tracking for clinical assessment. The Doctor is wearing the AVP and observing the Patients movement.
Do you have any recommended best practices for integrating real-time joint tracking and displaying them on the patient within visionOS?
We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking, and, if possible, a photo or video of the range-of-motion assessment would also be needed for the patient record.
Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
I am trying out image tracking with ARKit on the Vision Pro, but there seems to be a problem when adding reference images.
Here is my code:
let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
let imageTracking = ImageTrackingProvider(referenceImages: images) // provider setup, simplified
try await appState!.arkitSession.run([imageTracking])
It successfully prints those images; however, sometimes it will print an error message like this:
ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.
When this error message is printed, the corresponding image can not be tracked.
I do not understand why this happens, because sometimes the image can be added successfully and other times it cannot, even for the same image. It makes my app unstable.
Besides, there are some other error messages, and I do not know whether they are related:
ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
In ARKit for visionOS, I can track the user's head with a HeadAnchor, but it will not give the location. However, I can get the device's transform by calling queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) on a WorldTrackingProvider.
Why the difference? If I know the device's transform, I effectively know the head's transform.
I need help to wrap my head around this...
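For reference, the device-anchor route I'm describing is just this (simplified):

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    let headTransform = device.originFromAnchorTransform // 4x4 pose of the device, i.e. effectively the head
}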
If I import the Reality Composer Pro package and load it into an ARView, I will see 1.3 GB of memory usage and about 180-220% CPU usage. The frames will start at around 60 fps and then eventually drop to around 30 fps.
If I export the USDZ from Reality Composer Pro and load that into the same ARView, I will see about 1 GB of memory usage and around 150% CPU usage; fps holds at 60 longer but eventually drops.
If I load that same USDZ into a QuickLook view, I will see about 55 MB of memory usage, 9-11% CPU, and the frames stay locked at 116 fps. The only thing I notice is that the button I have is slightly less responsive, but it all still works fine.
I don't understand. How can I make the ARView work as efficiently as QuickLook?
When I've made an animated USDZ, at what frame rate will the animation be rendered in QuickLook? Is it the same across all devices (iPhone, Apple Vision Pro, etc.) and viewing environments (QuickLook, inside an ARView, etc.)?
Suppose I export my file at 30fps and the device draws at 60fps, does the device interpolate between frames automatically, animate at a lower frame rate, or play it at twice the speed? What if it were 24fps?
My primary concern with understanding frame rates is a bit of trouble I've had making perfectly looping animations. There always seems to be the slightest stutter between iterations.
Thanks in advance for any insights you're able to provide!
Hello, I'm trying to use Xcode's ARKit Session replay functionality. I have a capture I made using Reality Composer and when trying to use it with Xcode's replay functionality the installation and debugging process seems stalled forever. I've gotten it to work once so I know the capture file is functional but I have never gotten it to work a second time, even though I didn't change any settings.
No amount of restarting Xcode, the Mac, or the iPhone seem to work. I have also tried cleaning build folders, reinstalling the app, and clearing DerivedData.
I can confirm from the Xcode logs that the app installs correctly but the app never launches. If I unselect the checkbox for "ARKit Replay Data", the app launches and debugs nearly instantly.
I have tried letting it "attach" for up to 10 minutes to no avail.
I want to add a grounding shadow to my Entity in a RealityView on visionOS. However, it seems that the shadow can only appear on another Entity, so I'm using plane detection in ARKit and adding a transparent plane to render the shadow on.
let planeEntity = ModelEntity(mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height), materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))
But sometimes there is a border around my Entity on the plane.
I do not know why this happens, and I want to remove the border.
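For context, the grounding shadow I mean is the RealityKit component; set directly on the entity it would just be this (a sketch; myEntity is a placeholder name):

myEntity.components.set(GroundingShadowComponent(castsShadow: true))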
I am using ARKit to detect images on visionOS. However, I ran into a problem when adding the reference images.
Some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) However, sometimes they are all added without any problem. I do not know why this happens, and I want them all to be added reliably.
Hello.
I am currently building an app using ARKit.
As for the UI, I am using SwiftUI and NavigationStack + NavigationLink for navigation and screen transitions!
Here I need to go back and forth between the AR screen and other screens.
If the number of screen transitions is small, this is not a problem.
However, if the number of screen transitions increases to 10 or 20, it crashes somewhere.
We are struggling with this problem. (The nature of the application requires multiple screen transitions.)
The crash log showed the following.
error: read memory from 0x1e387f2d4 failed
AR_Crash_Sample-2025-03-07-115914.txt
Incident Identifier: B23D806E-D578-4A95-8828-2A1E8D6BB7F8
Beta Identifier: 924A85AB-441C-41A7-9BC2-063940BDAF32
Hardware Model: iPhone16,1
Process: AR_Crash_Sample [2375]
Path: /private/var/containers/Bundle/Application/FAC3D662-DB10-434E-A006-79B9515D8B7A/AR_Crash_Sample.app/AR_Crash_Sample
Identifier: ar.crash.sample.AR.Crash.Sample
Version: 1.0 (1)
AppStoreTools: 16C7015
AppVariant: 1:iPhone16,1:18
Beta: YES
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: ar.crash.sample.AR.Crash.Sample [1464]
Date/Time: 2025-03-07 11:59:14.3691 +0900
Launch Time: 2025-03-07 11:57:47.3955 +0900
OS Version: iPhone OS 18.3.1 (22D72)
Release Type: User
Baseband Version: 2.40.05
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: AR_Crash_Sample [2375]
Triggered by Thread: 7
Application Specific Information:
abort() called
Thread 7 name: Dispatch queue: com.apple.arkit.depthtechnique
Thread 7 Crashed:
0 libsystem_kernel.dylib 0x1e387f2d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x21cedd59c pthread_kill + 268
2 libsystem_c.dylib 0x199f98b08 abort + 128
3 libc++abi.dylib 0x21ce035b8 abort_message + 132
4 libc++abi.dylib 0x21cdf1b90 demangling_terminate_handler() + 320
5 libobjc.A.dylib 0x18f6c72d4 _objc_terminate() + 172
6 libc++abi.dylib 0x21ce0287c std::__terminate(void (*)()) + 16
7 libc++abi.dylib 0x21ce02820 std::terminate() + 108
8 libdispatch.dylib 0x199edefbc _dispatch_client_callout + 40
9 libdispatch.dylib 0x199ee65cc _dispatch_lane_serial_drain + 768
10 libdispatch.dylib 0x199ee7158 _dispatch_lane_invoke + 432
11 libdispatch.dylib 0x199ee85c0 _dispatch_workloop_invoke + 1744
12 libdispatch.dylib 0x199ef238c _dispatch_root_queue_drain_deferred_wlh + 288
13 libdispatch.dylib 0x199ef1bd8 _dispatch_workloop_worker_thread + 540
14 libsystem_pthread.dylib 0x21ced8680 _pthread_wqthread + 288
15 libsystem_pthread.dylib 0x21ced6474 start_wqthread + 8
Perhaps I am using too much memory!
How can I address this phenomenon?
For the AR functionality, we are using UIViewRepresentable, which is written with UIKit and can be called from SwiftUI:
import ARKit
import AsyncAlgorithms
import AVFoundation
import SCNLine
import SwiftUI
internal struct MeasureARViewContainer: UIViewRepresentable {
    @Binding var tapCount: Int
    @Binding var distance: Double?
    @Binding var currentIndex: Int
    var focusSquare: FocusSquare = FocusSquare()
    let coachingOverlay: ARCoachingOverlayView = ARCoachingOverlayView()

    func makeUIView(context: Context) -> ARSCNView {
        let arView: ARSCNView = ARSCNView()
        arView.delegate = context.coordinator
        let configuration: ARWorldTrackingConfiguration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics = [.sceneDepth, .smoothedSceneDepth]
        }
        arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        context.coordinator.sceneView = arView
        context.coordinator.scanTarget()
        coachingOverlay.session = arView.session
        coachingOverlay.delegate = context.coordinator
        coachingOverlay.goal = .horizontalPlane
        coachingOverlay.activatesAutomatically = true
        coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        coachingOverlay.translatesAutoresizingMaskIntoConstraints = false
        arView.addSubview(coachingOverlay)
        return arView
    }

    func updateUIView(_ _: ARSCNView, context: Context) {
        context.coordinator.mode = MeasurementMode(rawValue: currentIndex) ?? .width
        if tapCount == 0 {
            context.coordinator.resetMeasurement()
            return
        }
        if distance != nil {
            return
        }
        DispatchQueue.main.async {
            if context.coordinator.distance == nil {
                context.coordinator.handleTap()
            }
        }
    }

    static func dismantleUIView(_ uiView: ARSCNView, coordinator: Coordinator) {
        uiView.session.pause()
        coordinator.stopScanTarget()
        coordinator.stopSpeech()
        DispatchQueue.main.async {
            uiView.removeFromSuperview()
        }
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate, ARCoachingOverlayViewDelegate {
        var parent: MeasureARViewContainer
        var sceneView: ARSCNView?
        var startPosition: SCNVector3?
        var pointedCount: Int = 0
        var distance: Float?
        var mode: MeasurementMode = .width
        let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()
        var scanTargetTask: Task<Void, Never>?
        var currentResult: ARRaycastResult?

        init(_ parent: MeasureARViewContainer) {
            self.parent = parent
        }

        // ... etc
    }
}
Hello,
I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed Happy Beam, but it appears to only cover how to identify the user's specific gestures.
Could you please advise on how to obtain the Transform and rotation angle of the user’s hand?
Thank you.
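In case it clarifies what I'm after, I assume it is something along these lines inside a HandTrackingProvider anchorUpdates loop (a sketch; the wrist joint and the quaternion extraction are just examples):

import ARKit
import simd

let session = ARKitSession()
let handTracking = HandTrackingProvider()
try await session.run([handTracking])

for await update in handTracking.anchorUpdates {
    let hand = update.anchor // HandAnchor
    let handTransform = hand.originFromAnchorTransform // 4x4 pose of the hand anchor in world space
    if let skeleton = hand.handSkeleton {
        let wrist = skeleton.joint(.wrist)
        // Joint pose in world space = anchor pose * joint pose relative to the anchor
        let wristInWorld = handTransform * wrist.anchorFromJointTransform
        let rotation = simd_quatf(wristInWorld) // rotation as a quaternion
        print(wristInWorld.columns.3, rotation)  // translation column plus rotation
    }
}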