After implementing the method for obtaining video streams discussed at WWDC in my app, I found that the captured video stream does not include digital models in the virtual space or related content such as the app UI. I would like to ask: how can I obtain a video stream or frame that contains only the physical world?
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()
func requestWorldSensingCameraAccess() async {
let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
cameraAccessStatus = authorizationResult[.cameraAccess]!
}
func queryAuthorizationCameraAccess() async {
let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
cameraAccessStatus = authorizationResult[.cameraAccess]!
}
func monitorSessionEvents() async {
for await event in arKitSession.events {
switch event {
case .dataProviderStateChanged(_, let newState, let error):
switch newState {
case .initialized:
break
case .running:
break
case .paused:
break
case .stopped:
if let error {
print("An error occurred: \(error)")
}
@unknown default:
break
}
case .authorizationChanged(let type, let status):
print("Authorization type \(type) changed to \(status)")
default:
print("An unknown event occured \(event)")
}
}
}
@MainActor
func processWorldAnchorUpdates() async {
for await anchorUpdate in worldTracking.anchorUpdates {
switch anchorUpdate.event {
case .added:
// Check whether a persisted object is attached to this added anchor.
// It may be a world anchor from a previous run of this app:
// ARKit surfaces all world anchors associated with this app
// when the world tracking provider starts.
fallthrough
case .updated:
// Keep the placed object's position in sync with its corresponding
// world anchor; hide the object if the anchor isn't tracked.
break
case .removed:
// Remove the placed object if its corresponding world anchor was removed.
break
}
}
}
func arkitRun() async {
do {
try await arKitSession.run([cameraFrameProvider, worldTracking])
} catch {
return
}
}
@MainActor
func processDeviceAnchorUpdates() async {
await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}
@MainActor
func cameraFrameUpdatesBuffer() async {
guard let cameraFrameUpdates =
cameraFrameProvider.cameraFrameUpdates(for: formats[0]), let cameraFrameUpdates1 =
cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
return
}
// Consume both streams concurrently: a sequential `for await` over the
// first stream never returns, which would leave the second loop unreachable.
await withTaskGroup(of: Void.self) { group in
group.addTask { @MainActor in
for await cameraFrame in cameraFrameUpdates {
guard let mainCameraSample = cameraFrame.sample(for: .left) else {
continue
}
self.pixelBuffer = mainCameraSample.pixelBuffer
}
}
group.addTask { @MainActor in
for await cameraFrame in cameraFrameUpdates1 {
guard let mainCameraSample = cameraFrame.sample(for: .left) else {
continue
}
if let buffer = self.pixelBuffer {
self.pixelBuffer = self.mergeTwoFrames(frame1: buffer, frame2: mainCameraSample.pixelBuffer, outputSize: CGSize(width: 1920, height: 1080))
}
}
}
}
}
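For reference, the run(function:withFrequency:) helper that processDeviceAnchorUpdates calls is not shown above; a minimal sketch of such a helper, modeled on the one in Apple's main-camera sample code, might look like this:
func run(function: () async -> Void, withFrequency freqHz: UInt64) async {
while true {
if Task.isCancelled {
return
}
// Sleep for one period (1 s / freqHz) before the next call.
let nanoSecondsToSleep: UInt64 = NSEC_PER_SEC / freqHz
do {
try await Task.sleep(nanoseconds: nanoSecondsToSleep)
} catch {
// Task.sleep throws when the task is cancelled; exit the loop.
return
}
await function()
}
}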
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
For visionOS 2.0+, the object tracking feature has been announced. Is there any support for it in Unity PolySpatial, or is it only available via Swift and Xcode?
I would like to modify the content of a published LocationNode when it is clicked by the user. But unfortunately:
func hitTest(_ point: CGPoint, options: [SCNHitTestOption : Any]? = nil) -> [SCNHitTestResult]
returns an array of SCNHitTestResult objects from which it is impossible to retrieve the original LocationNode that was inserted, in order to modify it.
Of course, the solution would be either to wrap the SCNNode corresponding to the inserted LocationNode in a custom class, or conversely to attach the identifier of the custom object as a tag on the LocationNode. But both options seem impossible to implement.
Can anyone help me?
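One approach that might work, assuming LocationNode is an SCNNode subclass (as in ARCL): the node returned in an SCNHitTestResult may be a descendant of the inserted LocationNode, so walking up the parent chain and downcasting can recover the original node. A sketch:
func locationNode(from result: SCNHitTestResult) -> LocationNode? {
// The hit-tested node may be a child of the LocationNode,
// so climb the hierarchy until a LocationNode is found.
var node: SCNNode? = result.node
while let current = node {
if let locationNode = current as? LocationNode {
return locationNode
}
node = current.parent
}
return nil
}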
Hi, I'm trying to place an object in front of an AVPlayer that is docked in a VideoDockingRegion, but when launched in immersive space, the video passes through the objects placed in front of it. How do I make sure these objects stay visible?
image for reference
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
Hi all,
I’m running into a persistent linker error when building my Unity 6 project (IL2CPP, iOS target) that calls a Swift method via an Objective-C++ wrapper. Despite following all known steps, I keep getting:
Undefined symbols for architecture arm64:
"_placeGeoAnchor", referenced from:
_GeoAnchorTrigger_placeGeoAnchor in libGameAssembly.a
...
ld: symbol(s) not found for architecture arm64
I’m trying to place a persistent AR anchor at real-world GPS coordinates (so that the same asset can appear at the same location for a returning user). Since I’m targeting iOS, I can’t use Google’s geospatial anchors (but I sooo wish I could--please apple I beg of you stop being so selfish lol).
I've already done these things:
Swift file is added to Unity-iPhone target.
.mm and .h files are in Unity-iPhone target under Compile Sources.
Bridging header is set to Unity-iPhone-Bridging-Header.h.
Generated header name is correct (GeoTest-Swift.h).
Build Active Architecture Only set to No.
Function has __attribute__((visibility("default"))).
Unity project uses IL2CPP scripting backend.
Yet I'm still getting the same linker error — it appears Unity (via IL2CPP) references the function, but Xcode doesn't link it.
Is it something small that's being missed in how IL2CPP links native symbols? Or maybe I need to explicitly include something in Link Binary With Libraries? I've verified symbol visibility and targets repeatedly.
I’ve built AR features in Unity before (for Quest), but this is my first time trying to bridge C# → Objective-C++ → Swift in this way for a geolocation-based AR anchor for an iphone.
I'm going crazy, I’ve been stuck on this for 12+ hours now, so any insight or nudge would be deeply appreciated.
SPECS:
Macbook Pro M4 Pro--Sequoia 15.4
Unity 6000.0.45f1
iPhone 11, iOS 18.4
Xcode 15
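One thing worth checking, sketched below: IL2CPP declares the function as extern "C", so the Swift side must export a plain C symbol, which @_cdecl provides (an @objc or generated-header export alone is not enough). The parameter list here is hypothetical, since the original signature isn't shown; it must match the C# [DllImport("__Internal")] declaration.
import Foundation

// Hypothetical sketch: exports the C symbol "placeGeoAnchor"
// (the linker displays it as _placeGeoAnchor).
@_cdecl("placeGeoAnchor")
public func placeGeoAnchor(_ latitude: Double, _ longitude: Double) {
// ... create and add the ARGeoAnchor here ...
}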
I want to step into the portal world. I know PortalCrossingComponent can make an entity cross a portal, but how can I make the device itself cross into the portal world?
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages: Stack Overflow Post
Working Implementation with Static Audio File:
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
// Audio generation code
return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating some procedural audio with an AVAudioNode, as documented here:
Apple docs
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
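One workaround that might be worth trying (a sketch, not a confirmed fix): attach the source node to the scene renderer's own AVAudioEngine before wrapping it in an SCNAudioPlayer, so the node lives in the engine SceneKit actually renders with. Note that connecting straight to the main mixer likely bypasses SceneKit's positional audio.
// sceneView is assumed to be your ARSCNView/SCNView.
let engine = sceneView.audioEngine
engine.attach(audioNode)
engine.connect(audioNode, to: engine.mainMixerNode, format: nil)
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)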
I am developing an app on Vision Pro using RealityKit and ARKit. I want my RealityKit entity to look more realistic, so it is important to render its shadow based on the light in the real world.
For example, when I turn on a light in the real world, the shadow of the entity should change. Can this effect be implemented on Vision Pro?
We applied for the visionOS enterprise permission license, which can help us improve object tracking capabilities on Vision Pro. However, we are unsure how to use it in Unity, specifically how to implement object tracking in Unity and increase the tracking speed.
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the first scan with captureSession.stop(pauseARSession: false); I believe the ARSession keeps running at that point.
Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect).
But at this point the second ARView (RoomPlan has its own ARView, I think) always shows a black screen and does not work. This is the problem I want to resolve. Please help me get the second ARView working.
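One thing to try, sketched under the assumption that the black screen comes from the second view running its own session while RoomPlan's ARSession is still active: reuse the capture session's underlying ARSession (exposed as RoomCaptureSession.arSession on iOS 17+) instead of letting the new view create a separate one.
// Hedged sketch: drive an ARSCNView with RoomPlan's existing session.
let sceneView = ARSCNView(frame: view.bounds)
sceneView.session = captureSession.arSession // captureSession: RoomCaptureSession
view.addSubview(sceneView)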
I have an AR game using ARKit with SceneKit that works just fine in iOS 17.
In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking and obviously the whole game is useless.
I narrowed down the issue to showing the Game Center Access Point.
My app has ViewController 1 (VC1) showing the main menu and that's where I want to show the GC Access Point. From there you open VC2 which shows a list of levels. Selecting any level will open VC3 which has the ARScene.
Following is the code I use to start Game Center in VC1:
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
let isGameCenterReady = (gcAuthVC == nil) && (error == nil)
if let viewController = gcAuthVC {
self.present(viewController, animated: true, completion: nil)
}
if error != nil {
print(error?.localizedDescription ?? "")
}
if isGameCenterReady {
GKAccessPoint.shared.location = .topLeading
GKAccessPoint.shared.showHighlights = true
GKAccessPoint.shared.isActive = true
}
}
When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.
If in VC1 I don't run GKAccessPoint.shared.isActive = true, so I don't activate the access point, the behavior is as follows:
If I wait until after the Game Center login animation completes and closes on its own and then I proceed to VC2 and VC3, the camera image will show correctly
If I quickly move to VC2 before the Game Center login animation has completed, so my code will close it by setting active = false, and then I continue to VC3, I will see the black background problem.
So it does look like activating the access point and then de-activating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.
Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground, the camera suddenly shows the real world correctly!
I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it also doesn't run the session start code.
But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same thing manually in my code?
I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well).
So could this be a bug in iOS 18?
As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point, hoping that the user will not jump to VC2 too quickly, and show my own button which will open Game Center. But I'd rather give the users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2, and that will be an awful experience.
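In code, that version gate could look like this minimal sketch (only the #available check is new relative to the access-point calls shown earlier):
if #available(iOS 18, *) {
// Work around the black-background issue: leave the access point off
// and show a custom button that opens Game Center instead.
} else {
GKAccessPoint.shared.location = .topLeading
GKAccessPoint.shared.showHighlights = true
GKAccessPoint.shared.isActive = true
}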
Any help would be greatly appreciated. Thanks!
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems that the shadow can only appear on another entity, so I use plane detection in ARKit and add a transparent plane to render the shadow on.
let planeEntity = ModelEntity(mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height), materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))
But sometimes there is a border around my Entity on the plane.
I do not know why this happens, and I want to remove the border.
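If the goal is just a grounding shadow under the entity, RealityKit on visionOS also provides GroundingShadowComponent, which renders the shadow without a manual occluder plane (and hence without its border); a minimal sketch:
// Render a grounding shadow for the entity itself; no transparent plane needed.
myEntity.components.set(GroundingShadowComponent(castsShadow: true))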
I am currently creating an app where two people share an instance of an immersive space so that they are able to point at certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people) which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar that will work on visionOS?
I'm trying to evaluate whether we can support AR navigation with MapKit. The feature is supposed to be available to users in the US.
I tried to run the sample on my iPhone: https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar?language=objc
But I'm in a location where ARGeoTrackingConfiguration.checkAvailabilityWithCompletionHandler: always returns false; I think ARGeoAnchor isn't supported in my location.
I tried to use simulated locations by:
Adding a GPX file when launching the app.
Enabling Xcode -> Debug -> Simulate Location -> New York, NY, US
But the availability for ARGeoAnchor is still false.
Is it possible for me to develop the ARGeoAnchor feature outside of the covered areas?
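One partial option while outside the coverage area, as a sketch: ARGeoTrackingConfiguration.checkAvailability(at:completionHandler:) checks coverage for an arbitrary coordinate rather than the device's current location, so coverage for the target region can at least be confirmed; full geo-tracking behavior still requires real camera and location data in a supported area.
import ARKit
import CoreLocation

// Example coordinate (San Francisco); substitute the region you plan to support.
let coordinate = CLLocationCoordinate2D(latitude: 37.7749, longitude: -122.4194)
ARGeoTrackingConfiguration.checkAvailability(at: coordinate) { isAvailable, error in
print("Geo tracking available there: \(isAvailable), error: \(String(describing: error))")
}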
I am trying image tracking with ARKit on Vision Pro, but there seems to be a problem when adding reference images.
Here is my code:
let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
try await appState!.arkitSession.run([imageTracking])
It can successfully print those images, however sometimes it will print the error message like this:
ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.
When this error message is printed, the corresponding image can not be tracked.
I do not understand why this happens: sometimes the image can be added successfully, but at other times it cannot, even for the same image. This makes my app unstable.
Besides, there are some other error messages, and I do not know whether they are related:
ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
I have been using ARKit to get hand tracking data on a continuous loop by implementing the AnchorUpdateSequence.
I want to try out .predicted hand tracking, but it seems that using an ARKit session with a HandTrackingProvider does not allow me to enable this feature?
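If the target is visionOS 2, one hedged sketch of the predicted-pose path: rather than consuming anchorUpdates, HandTrackingProvider.handAnchors(at:) returns the latest (extrapolated) hand anchors for a given timestamp, which is how predicted tracking is exposed (assuming visionOS 2's API here).
import ARKit
import QuartzCore

// handTracking is assumed to be a running HandTrackingProvider.
// Ask for a timestamp slightly in the future (about one 90 Hz frame).
let timestamp = CACurrentMediaTime() + (1.0 / 90.0)
let (leftHand, rightHand) = handTracking.handAnchors(at: timestamp)
if let left = leftHand {
print("Predicted left-hand transform: \(left.originFromAnchorTransform)")
}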
The goal is to achieve precise joint tracking for clinical assessment. The doctor is wearing the AVP and observing the patient's movement.
Do you have any recommended best practices for integrating real-time joint tracking and displaying them on the patient within visionOS?
We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking. If possible, a photo or video of the range-of-motion assessment would also be needed for the patient record.
Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
In ARKit for visionOS, I can track the user's head with a HeadAnchor, but it will not give the location. However, I can get the device's transform by calling queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) on a WorldTrackingProvider.
Why the difference? If I know the device's transform, I effectively know the head's transform.
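For reference, the device-anchor query mentioned above looks like this:
import ARKit
import QuartzCore

// worldTracking is assumed to be a running WorldTrackingProvider.
if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
// 4x4 transform of the device (effectively the head) in world space.
let headTransform = deviceAnchor.originFromAnchorTransform
print(headTransform)
}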