I'm building a visionOS 2.0 app where the Apple Vision Pro user can change the position of the end effector of a robot model, which was generated in Reality Composer Pro (RCP) from primitive shapes. I'd like to use an IKComponent to achieve this functionality, following the example code here. I am able to load my entity and access its MeshResource following the IKComponent example code, but on the line
let modelSkeleton = meshResource.contents.skeletons[0]
I get an error since my MeshResource does not include a skeleton.
Is there some way to directly generate the skeleton with my entities in RCP, or is there a way to add a skeleton generated in Xcode to an existing MeshResource that corresponds to my entities generated in RCP? I have tried using MeshSkeletonCollection.insert() with a skeleton I generated in Xcode, but I cannot figure out how to assign this skeleton collection to the MeshResource of the entity.
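In case it clarifies what I'm attempting, below is a minimal sketch of how I imagined assigning the skeleton: copy the mesh contents, insert the skeleton, and rebuild the MeshResource. The rebuild call (MeshResource(from:)) and the need to reassign the ModelComponent are my assumptions, which is exactly what I'd like confirmed.

func attach(_ skeleton: MeshResource.Skeleton, to modelEntity: ModelEntity) async throws {
    guard var modelComponent = modelEntity.components[ModelComponent.self] else { return }

    // `contents` is a value-type copy of the mesh data, so modify it and rebuild.
    var contents = modelComponent.mesh.contents
    contents.skeletons.insert(skeleton)

    // Assumption: a MeshResource can be regenerated from the modified contents this way.
    modelComponent.mesh = try await MeshResource(from: contents)
    modelEntity.components.set(modelComponent)
}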
Hi,
We are currently building an app for immersive experiences of our custom content, which is displayed from a video on custom geometry in the immersive space on the Vision Pro.
I have enabled the AVPlayerViewController system controls that detach when entering the immersive space, as in the sample:
https://developer.apple.com/documentation/visionos/building-an-immersive-media-viewing-experience
In our case, we do not need the 2D screen to stay visible after entering the immersive space, only the environment.
So my question is: how can we remove the screen with the video but keep the controls, like the Apple TV app does for immersive experiences?
Thanks in advance
Hello experts, and question seekers,
I have been trying to get Gaussian splats working with RealityKit; however, it doesn't seem to work out for me.
The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter
My idea was to use the renderer provided by RealityKit (aka RealityRenderer) https://developer.apple.com/documentation/realitykit/realityrenderer and the renderer provided by MetalSplatter (aka SplatRenderer) https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift
Then, with a custom render pipeline, I would be able to compose the outputs of both renderers. This would make it possible, for example, to build immersive scenery from realistic environment scans rendered as Gaussian splats, with RealityKit providing the features to build extra scenery around the splats, e.g. dynamic 3D models placed inside them.
However, as of now I am not able to do that with the current implementation of RealityRenderer.
The first issue is that RealityRenderer appears to be an API that only renders colour information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil data.
The second issue is that, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender due to the following error messages:
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37
I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, which lets me build a custom rendering pipeline by utilising some of the Metal developer workflows (a trimmed-down sketch of that wrapper is included after the draw call below).
Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/
Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:
guard let currentDrawable = view.currentDrawable else {
    return
}

let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    })
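For completeness, this is a trimmed-down version of the UIViewRepresentable/MTKViewDelegate wrapper I am referring to (the view and renderer names are mine; the RealityRenderer/SplatRenderer composition is what I'm trying to put inside draw):

import SwiftUI
import Metal
import MetalKit

struct SplatMetalView: UIViewRepresentable {
    func makeCoordinator() -> Renderer { Renderer() }

    func makeUIView(context: Context) -> MTKView {
        let view = MTKView()
        view.device = MTLCreateSystemDefaultDevice()
        view.colorPixelFormat = .bgra8Unorm
        view.depthStencilPixelFormat = .depth32Float_stencil8
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {}

    final class Renderer: NSObject, MTKViewDelegate {
        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            // The RealityRenderer / SplatRenderer composition from above goes here.
        }
    }
}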
Can you please tell me what I am doing wrong?
Is there any solution that enables me to use RealityKit together with, for example, Gaussian splats?
Any help is greatly appreciated.
All the best,
Ethem Kurt
I have updated the sample code so that the scan starts generating when 15 photos are captured. I hope I can catch this error so the app won't crash... I really need help with this, thank you in advance!
Hardware Model: iPhone14,2
OS Version: iPhone OS 17.6.1 (21G93)
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000023363518c
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [525]
Triggered by Thread: 0
Thread 0 name:
Thread 0 Crashed:
0 RealityKit_SwiftUI 0x000000023363518c CoveragePointCloudMiniView.interfaceOrientation.getter + 508 (CoveragePointCloudMiniView.swift:0)
1 RealityKit_SwiftUI 0x0000000233634cdc closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 124 (CoveragePointCloudMiniView.swift:75)
2 RealityKit_SwiftUI 0x000000023363db9c partial apply for closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 20 (:0)
3 SwiftUI 0x0000000195c4bbac closure #1 in withTransaction(_:_:) + 276 (Transaction.swift:243)
4 SwiftUI 0x0000000195c4ba90 partial apply for closure #1 in withTransaction(_:_:) + 24 (:0)
5 libswiftCore.dylib 0x00000001903f8094 withExtendedLifetime<A, B>(_:_:) + 28 (LifetimeManager.swift:27)
6 SwiftUI 0x0000000195b17d78 withTransaction(_:_:) + 72 (Transaction.swift:228)
7 SwiftUI 0x0000000195b17d04 withAnimation(_:_:) + 116 (Transaction.swift:280)
8 RealityKit_SwiftUI 0x0000000233634bfc closure #2 in CoveragePointCloudMiniView.body.getter + 664 (CoveragePointCloudMiniView.swift:73)
9 SwiftUI 0x0000000195bef134 closure #1 in closure #1 in SubscriptionView.Subscriber.updateValue() + 72 (SubscriptionView.swift:66)
10 SwiftUI 0x0000000195b3f57c thunk for @escaping @callee_guaranteed () -> () + 28 (:0)
11 SwiftUI 0x0000000195b3c864 static Update.dispatchActions() + 1140 (Update.swift:151)
12 SwiftUI 0x0000000195b3bedc static Update.end() + 144 (Update.swift:58)
13 SwiftUI 0x0000000195a691fc closure #1 in SubscriptionView.Subscriber.updateValue() + 700 (SubscriptionView.swift:66)
14 SwiftUI 0x0000000195a68eb0 partial apply for thunk for @escaping @callee_guaranteed (@in_guaranteed A.Publisher.Output) -> () + 28 (:0)
15 SwiftUI 0x0000000195a68e78 closure #1 in ActionDispatcherSubscriber.respond(to:) + 76 (SubscriptionView.swift:98)
16 SwiftUI 0x0000000195a68c80 ActionDispatcherSubscriber.respond(to:) + 816 (SubscriptionView.swift:97)
17 SwiftUI 0x0000000195a68938 ActionDispatcherSubscriber.receive(_:) + 16 (SubscriptionView.swift:110)
18 SwiftUI 0x0000000195a6786c SubscriptionLifetime.Connection.receive(_:) + 100 (SubscriptionLifetime.swift:195)
19 Combine 0x000000019aed29d4 Publishers.Autoconnect.Inner.receive(_:) + 52 (Autoconnect.swift:142)
20 Combine 0x000000019aed2928 Publishers.Multicast.Inner.receive(_:) + 244 (Multicast.swift:211)
21 Combine 0x000000019aed2828 protocol witness for Subscriber.receive(_:) in conformance Publishers.Multicast<A, B>.Inner + 24 (:0)
....
(FBSScene.m:812)
46 FrontBoardServices 0x00000001aa892844 __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke_2 + 152 (FBSWorkspaceScenesClient.m:692)
47 FrontBoardServices 0x00000001aa8926cc -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 168 (FBSWorkspace.m:411)
48 FrontBoardServices 0x00000001aa8977fc __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke + 344 (FBSWorkspaceScenesClient.m:691)
49 libdispatch.dylib 0x00000001999aedd4 _dispatch_client_callout + 20 (object.m:576)
50 libdispatch.dylib 0x00000001999b286c _dispatch_block_invoke_direct + 288 (queue.c:511)
51 FrontBoardServices 0x00000001aa893d58 FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK + 52 (FBSSerialQueue.m:285)
52 FrontBoardServices 0x00000001aa893cd8 -[FBSMainRunLoopSerialQueue _targetQueue_performNextIfPossible] + 240 (FBSSerialQueue.m:309)
53 FrontBoardServices 0x00000001aa893bb0 -[FBSMainRunLoopSerialQueue performNextFromRunLoopSource] + 28 (FBSSerialQueue.m:322)
54 CoreFoundation 0x0000000191adb834 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28 (CFRunLoop.c:1957)
55 CoreFoundation 0x0000000191adb7c8 __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2001)
56 CoreFoundation 0x0000000191ad92f8 __CFRunLoopDoSources0 + 340 (CFRunLoop.c:2046)
57 CoreFoundation 0x0000000191ad8484 __CFRunLoopRun + 828 (CFRunLoop.c:2955)
58 CoreFoundation 0x0000000191ad7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
59 GraphicsServices 0x00000001d65251a8 GSEventRunModal + 164 (GSEvent.c:2196)
60 UIKitCore 0x0000000194111ae8 -[UIApplication run] + 888 (UIApplication.m:3713)
61 UIKitCore 0x00000001941c5d98 UIApplicationMain + 340 (UIApplication.m:5303)
62 SwiftUI 0x0000000195ccc294 closure #1 in KitRendererCommon(_:) + 168 (UIKitApp.swift:51)
63 SwiftUI 0x0000000195c78860 runApp(_:) + 152 (UIKitApp.swift:14)
64 SwiftUI 0x0000000195c8461c static App.main() + 132 (App.swift:114)
65 SoleFit 0x0000000103046cd4 static SoleFitApp.$main() + 24 (SoleFitApp.swift:0)
66 SoleFit 0x0000000103046cd4 main + 36
67 dyld 0x00000001b52af154 start + 2356 (dyldMain.cpp:1298)
Hi everyone, I am having trouble implementing spatial video recording to files by following the WWDC24 video: Build compelling spatial photo and video experiences. Specifically, the isSpatialVideoCaptureSupported flag of AVCaptureMovieFileOutput is FALSE when the code is tested on both my physical iPhone 15 Pro (iOS 18.1) and the simulator (iOS 18.0).
This is the code that I am running:
let movieFileOutput = AVCaptureMovieFileOutput()
print("movieCapture output isSpatialVideoCaptureSupported: \(movieFileOutput.isSpatialVideoCaptureSupported)")
However, one of the formats of the AVCaptureDevice reports TRUE for isSpatialVideoCaptureSupported.
for format in currentDevice.formats {
    if format.isSpatialVideoCaptureSupported {
        print("isSpatialVideoCaptureSupported is true")
        break
    }
}
I am totally confused now: why DOES the camera device support spatial mode while the movie file output DOES NOT? Can someone please help? I'd really appreciate it!
Here is my testing environment:
iPhone 15 Pro iOS 18.1 (US version)
Xcode 16.0 beta 16A5171c
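In case it helps, this is roughly the configuration order I'm testing (a sketch; my assumption, not confirmed, is that the output only reports spatial support once it belongs to a session whose input device supports it, and the dual wide camera choice is also an assumption):

import AVFoundation

func makeSpatialCaptureSession() throws -> (AVCaptureSession, AVCaptureMovieFileOutput)? {
    let session = AVCaptureSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Assumption: spatial video capture uses the dual wide camera on iPhone 15 Pro.
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
        return nil
    }
    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCaptureMovieFileOutput()
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)

    // Only check/enable spatial capture after the output is attached to the session.
    if output.isSpatialVideoCaptureSupported {
        output.isSpatialVideoCaptureEnabled = true
    } else {
        print("Spatial video still not supported with this configuration")
    }
    return (session, output)
}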
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
I'm dynamically creating anywhere from 10-50 attachments in an immersive view by looping through an array.
When there are 10-20 attachments -> no problem, all attachments appear fine.
When there are more than ~40 attachments -> ~25% of them never show up. The ones that don't show up are random and change each time the immersive view is loaded.
Never had a problem in visionOS 1 so wondering if this is a bug or what's going on here. Nothing in the console that would indicate a problem.
Thanks
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
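For reference, the restart path that has no effect looks roughly like this (a sketch; the category and options are simplified compared to the real app):

import AVFoundation

func restartAudio(engine: AVAudioEngine) {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
        if !engine.isRunning {
            try engine.start()
        }
    } catch {
        print("Failed to restart audio: \(error)")
    }
}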
visionOS 2 beta 5: the font shader cannot recognize programs output by Unity.
So I am tracking 2 objects in my scene, and spawning a tiny arrow on each of the objects (this part is working as intended).
Inside my scene I have added Collision Components and Physics Body Components to each of the arrows.
I want to detect when a collision occurs between the 2 arrow entities. I have made the collision boxes big enough so they should definitely be overlapping; however, I am not able to detect when the collision occurs.
This is the code that I use for the scene -
import SwiftUI
import RealityKit
import RealityKitContent

struct DualObjectTrackingTest: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                print("Collision check started")
            }
        } update: { content in
            if let arrow = content.entities.first?.findEntity(named: "WhiteArrow") as? ModelEntity {
                let subscription = content.subscribe(to: CollisionEvents.Began.self, on: arrow) { collisionEvent in
                    print("Collision has occurred")
                }
            }
        }
    }
}
All I see in my console logs is "Collision check started" and then whenever I move the 2 objects really close to each other so as to overlap the collision boxes, I don't see any updates in the logs.
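For reference, a variant I'm considering is to subscribe once in the make closure and keep the EventSubscription in the @State property so it isn't discarded (a sketch, using the same entity names):

RealityView { content in
    if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        if let arrow = immersiveContentEntity.findEntity(named: "WhiteArrow") {
            // Store the subscription so it stays alive for the lifetime of the view.
            subscription = content.subscribe(to: CollisionEvents.Began.self, on: arrow) { _ in
                print("Collision has occurred")
            }
        }
    }
}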
Can anyone give me some further guidance/resources on this?
Thanks again!
Hello all!
I received my Apple Vision Pro today. The device is on ABM, assigned to Jamf Pro with a separate PreStage.
Out of the box, it did not pick up the configuration (visionOS 1.3).
I enabled beta releases, and it installed 2.0 beta 5.
At reboot, it regenerated the Persona, and it is now stuck on "waiting for configuration" (from the MDM, I guess).
I cannot reset it. Even with the Developer Strap, Apple Configurator is not able to restore the IPSW (it was not paired yet).
Any idea? Any secret DFU mode?
I updated my Vision Pro to the visionOS 2.0 beta yesterday, and now everything is very quiet even at max volume. I tested with the built-in speakers, Beats Pro, and AirPods Pro (2nd generation) as well, and the problem is the same with all of them.
If I turn the volume down to 50%, you can't tell what audio is being played anymore.
I tried restarting the headset and it makes no difference.
Anything else I can try to resolve this issue?
Hi, I would like to add a top bar to the panel in visionOS using ToolbarTitleMenu, referring to the documentation: https://developer.apple.com/documentation/swiftui/toolbartitlemenu,
but in the simulator I cannot see the top bar. What's wrong with my code?
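To make it concrete, this is a simplified version of the pattern I am comparing against (the view name, title, and buttons are placeholders); my understanding is that the title menu only shows up when the view sits inside a NavigationStack and has a navigation title:

import SwiftUI

struct PanelView: View {
    var body: some View {
        NavigationStack {
            Text("Content")
                .navigationTitle("Library")
                .toolbarTitleMenu {
                    Button("Rename", systemImage: "pencil") { }
                    Button("Delete", systemImage: "trash") { }
                }
        }
    }
}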
I created some attachments by following the Diorama Apple example. Things have been working fine. I wanted to add BillboardComponent to my attachments. So I added it in this way
guard let attachmentEntity = attachments.entity(for: component.attachmentTag) else { return }
guard attachmentEntity.parent == nil else {return}
var billBoard = BillboardComponent()
billBoard.rotationAxis = [0,1,0]
attachmentEntity.components.set(billBoard)
content.add(attachmentEntity)
attachmentEntity.setPosition([0.0, 0.5, 0.0], relativeTo: entity)
My attachment view is like this
VStack {
    Text(name)
        .matchedGeometryEffect(id: "Name", in: animation)
        .font(titleFont)
    Text(description)
        .font(descriptionFont)
    Button("Done") {
        viewModel.arrows.remove(at: 0)
    }
}
If I remove the BillboardComponent then the button click works fine, but with the BillboardComponent the button click doesn't work (it doesn't even highlight when I look at it) in certain directions. How can I resolve this issue?
A second post on the same topic, as I feel I may have over complicated the earlier one.
I am essentially performing object tracking inside Reality Composer Pro and adding a digital entity to the tracked object. I now want to get the coordinates of this digital entity inside Xcode.
Secondly, can I track more than 1 object inside the same scene? For example, if I want to find a spanner and a screwdriver amongst a bunch of tools laid out on the table, spawn an arrow on top of each of them, and then get the coordinates of the arrows that I spawn, how can I go about this?
Hey, as a follow-up to my earlier posts about object tracking on visionOS 2: I'm doing some experimentation, and my use case requires me to track the coordinates of a digital entity that I attach to my reference object (positioned relative to it).
Can something like this be done?
Right now, all I'm doing is putting my reference object in my scene, and then positioning the 3D content that I want to show at the corresponding locations on the reference object. I am then loading the scene in a RealityView block via my SwiftUI code.
I now want to know if I can also extract and use the coordinates of the digital entity that I have placed (post object tracking), and then make some manipulations via code, for example: if the physical coordinates of the digital entity are in a certain x, y, z range, trigger this function or bring up an alert message in a tile.
Is something like this possible, and if so, can you help me understand the different aspects of this problem with some sample/reference code? So far I've done most of the object-tracking-related tasks via Reality Composer Pro, but this task will require quite a bit of programming as well, and I'm somewhat lost as to how to start (a rough sketch of what I mean is below).
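To make the ask concrete, here's roughly the kind of check I have in mind (a sketch; the scene/entity names and the 1 m threshold are placeholders):

import SwiftUI
import RealityKit
import RealityKitContent

struct ArrowCoordinateCheckView: View {
    @State private var updateSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "TrackedScene", in: realityKitContentBundle) else { return }
            content.add(scene)

            // Check the tracked entity's position on every scene update.
            updateSubscription = content.subscribe(to: SceneEvents.Update.self) { _ in
                guard let arrow = scene.findEntity(named: "ArrowEntity") else { return }
                let position = arrow.position(relativeTo: nil) // world-space coordinates
                if position.y > 1.0 {
                    // Placeholder reaction: this is where I'd trigger a function or show an alert.
                    print("Arrow is above 1 m: \(position)")
                }
            }
        }
    }
}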
Thanks for any help that y'all can give me!
Hi guys,
I'm currently working on a head-tracking application for visionOS and was wondering if there are any properties or ways to access the position of the app window in an immersive space. I was planning to somehow determine whether the window is within the AVP's orientation (through queryDeviceAnchor()) or "visible space". In other words, is there a way to access a property or data that tells me whether the app window is within the user's field of view, e.g. when the user turns around and the window ends up behind their back?
I would be extremely thankful for any helpful input!
import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup { // Basically getting spatial coordinates of this
            ContentView()
        }

        ImmersiveSpace(id: "appSpace") {
        }
    }
}
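For context, the device-pose half would look roughly like this (a sketch; error handling and ARKit authorization are omitted). What I'm missing is the window's position to compare against:

import ARKit
import QuartzCore

final class HeadPoseProvider {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    // Returns the current device (head) transform in world space, if available.
    func currentDeviceTransform() -> simd_float4x4? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        return anchor.originFromAnchorTransform
    }
}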
I am trying out the BOT-anist demo and compiled it for Vision Pro. When you enter the Start Planting module, the app quits with a fatal error in this section in RobotCharacter.swift:
guard var headOffset = headOffset ?? skeleton.pins["head"]?.position,
      var backpackOffset = backpackOffset ?? skeleton.pins["backpack"]?.position else {
    fatalError("Didn't find expected joint for head or backpack.")
}
Thread 1: Fatal error: Didn't find expected joint for head or backpack.
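As a first debugging step, I was going to dump whatever pins the skeleton entity actually exposes right before that guard (a sketch; I don't know how usefully the collection prints):

// List the available pins to see whether "head"/"backpack" exist under other names.
print("Skeleton pins: \(skeleton.pins)")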
How can I fix this? Thanks for any suggestions.
I want to create a ModelEntity that can glow like a lightsaber in Star Wars. Here is the video:
https://x.com/devtom7/status/1819743159213031453/
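My starting point would be an emissive PhysicallyBasedMaterial, roughly like the sketch below (the blade size and colour are placeholders), but I assume that alone won't produce the bloom/halo seen in the video:

import RealityKit
import UIKit

func makeGlowingBlade() -> ModelEntity {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .red)
    material.emissiveColor = .init(color: .red)
    material.emissiveIntensity = 5.0

    // A thin rounded box stands in for the blade geometry.
    let mesh = MeshResource.generateBox(width: 0.04, height: 1.0, depth: 0.04, cornerRadius: 0.02)
    return ModelEntity(mesh: mesh, materials: [material])
}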
Is there a way to avoid anchoring my AR experience to one setting? I need to walk around the real world for this to work.