Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Using LiDAR DepthData with ARKit and SceneKit
Greetings! I've used Apple's ARKit documentation to create a simple ARKit application that uses SceneKit (I tried Metal too). I'm currently unsure how to use smoothedSceneDepth (or sceneDepth) to acquire the depth data from the depth map delivered with each frame. Is there a particular method or way to access this data so I can display the depth? I would be grateful for any input or suggestions. Thanks in advance.
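A minimal sketch of the usual access path, assuming a LiDAR-equipped device and an ARSCNView whose session you configure yourself (the function names are hypothetical): enable the smoothedSceneDepth frame semantic, then read the depth map (a CVPixelBuffer of 32-bit float meters) off each ARFrame in the session delegate.

```swift
import ARKit
import CoreImage

// Enable smoothed scene depth on a LiDAR-capable device.
func runDepthSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
        configuration.frameSemantics.insert(.smoothedSceneDepth)
    }
    sceneView.session.run(configuration)
}

// ARSessionDelegate: pull the depth map off each frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let depthData = frame.smoothedSceneDepth ?? frame.sceneDepth else { return }
    let depthMap: CVPixelBuffer = depthData.depthMap  // kCVPixelFormatType_DepthFloat32
    let depthImage = CIImage(cvPixelBuffer: depthMap) // e.g. for display or further processing
    _ = depthImage
}
```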
Replies: 0 · Boosts: 0 · Views: 597 · Activity: Dec ’23
The relationship between a deleted anchor and the newly created anchor (ARKit, ARPlaneAnchor)
Hello everyone. I'm using plane detection in ARKit and receive ARPlaneAnchor callbacks through ARSCNViewDelegate (func renderer(SCNSceneRenderer, didAdd: SCNNode, for: ARAnchor), func renderer(SCNSceneRenderer, didUpdate: SCNNode, for: ARAnchor), and func renderer(SCNSceneRenderer, didRemove: SCNNode, for: ARAnchor)). Occasionally ARKit clears an ARPlaneAnchor via the didRemove callback. I believe that after deleting an ARPlaneAnchor, ARKit will recreate an ARPlaneAnchor at that location. So is there any relationship between the deleted ARPlaneAnchor and the newly created one? (Does the identifier, name, or other information reflect that relationship?)
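A minimal sketch for observing this, assuming a view controller acting as the ARSCNViewDelegate (PlaneViewController is a hypothetical name): log each plane anchor's identifier in didAdd and didRemove. Since ARAnchor.identifier is unique per anchor instance, a re-created plane anchor should arrive with a new UUID; any continuity would have to be inferred from geometry (center, extent) rather than from identity.

```swift
import ARKit
import SceneKit

extension PlaneViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("added plane \(plane.identifier), center \(plane.center)")
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("removed plane \(plane.identifier)")
    }
}
```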
Replies: 0 · Boosts: 0 · Views: 275 · Activity: Dec ’23
Does iPhone 13 work as well?
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene It says that a fourth-generation iPad Pro running iPadOS 13.4 or later works because of the LiDAR scanner. If the iPhone 13 also has LiDAR, would it work too?
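A minimal sketch of a runtime check, assuming scene reconstruction is the capability in question: ARKit exposes it directly, so you can test support on the device instead of keying off the model name. (For what it's worth, only the iPhone 13 Pro models carry the LiDAR scanner; the non-Pro iPhone 13 does not.)

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Scene reconstruction is only supported on LiDAR-equipped devices.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
} else {
    print("This device has no LiDAR scanner; the reconstructed-scene sample won't run.")
}
```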
Replies: 0 · Boosts: 0 · Views: 348 · Activity: Dec ’23
ARView.debugOptions = .showStatistics Error: 5775
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showStatistics // Error:
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5775: failed assertion `Draw Errors Validation Vertex Function(vsSdfFont): the offset into the buffer viewConstants that is bound at buffer index 4 must be a multiple of 256 but was set to 61840.'
Replies: 1 · Boosts: 0 · Views: 579 · Activity: Jan ’24
Using Vision Pro in multiple rooms
Suppose I want to use the Vision Pro device in multiple rooms in my home. I wore the device when I entered my home, checked some notifications on it, and closed the apps. With the device still on my head, I move to my bedroom. Now I want to open some other application without removing the headset and putting it on again. Is this possible?
Replies: 1 · Boosts: 0 · Views: 617 · Activity: Jan ’24
Correct way to convert AVDepthData.depthDataMap to gray depth image
Hi all. I can get a disparity/depth data map from AVDepthData.depthDataMap and use it directly to generate a depth image. I've found that in some situations objects in the depth image cannot be clearly distinguished: when using disparity data, objects closer than 1 meter can't be clearly distinguished, and when using depth data, objects farther than 1 meter can't be clearly distinguished. Does anyone know why this happens and how to fix it?
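A minimal sketch of one likely fix, assuming the problem is display normalization rather than the data itself: disparity is proportional to 1/distance, so a fixed mapping compresses one end of the range. Rescaling the Float32 map to its own min/max before rendering spreads the values across the full gray range.

```swift
import AVFoundation
import CoreImage

func grayDepthImage(from depthData: AVDepthData) -> CIImage {
    // Work on a 32-bit float depth copy regardless of the source format.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    let base = CVPixelBufferGetBaseAddress(buffer)!

    // Pass 1: find the finite min/max of the map.
    var minV = Float.greatestFiniteMagnitude
    var maxV = -Float.greatestFiniteMagnitude
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float.self)
        for x in 0..<width where row[x].isFinite {
            minV = min(minV, row[x])
            maxV = max(maxV, row[x])
        }
    }

    // Pass 2: rescale in place so the values span 0...1.
    let span = max(maxV - minV, .ulpOfOne)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float.self)
        for x in 0..<width { row[x] = (row[x] - minV) / span }
    }
    return CIImage(cvPixelBuffer: buffer)
}
```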
Replies: 2 · Boosts: 0 · Views: 449 · Activity: Jan ’24
After adding gestures to the EntityModel inside ARView, there seems to be a memory leak when attempting to remove the EntityModel.
After adding gestures to an EntityModel, removing the entity does not free its memory unless uiView.gestureRecognizers?.removeAll() is executed. However, executing this method also removes the gestures of every other EntityModel in the ARView. Does anyone have a better way to achieve this? Example code:

struct ContentView: View {
    @State private var isRemoveEntityModel = false

    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewContainer(isRemoveEntityModel: $isRemoveEntityModel)
                .edgesIgnoringSafeArea(.all)
            Button {
                isRemoveEntityModel = true
            } label: {
                Image(systemName: "trash")
                    .font(.system(size: 35))
                    .foregroundStyle(.orange)
            }
        }
    }
}

ARViewContainer:

struct ARViewContainer: UIViewRepresentable {
    @Binding var isRemoveEntityModel: Bool
    let arView = ARView(frame: .zero)

    func makeUIView(context: Context) -> ARView {
        let model = CustomEntityModel()
        model.transform.translation.y = 0.05
        model.generateCollisionShapes(recursive: true)
        // After this line, the custom EntityModel can be removed from ARView.scene,
        // but its deinit {} is never executed.
        arView.installGestures(.all, for: model)
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        if isRemoveEntityModel {
            let customEntityModel = uiView.scene.findEntity(named: "Box_EntityModel")
            // After this line, ARView.scene deletes the CustomEntityModel correctly and
            // its deinit {} runs — but every other CustomEntityModel in ARView.scene
            // loses its gestures as well.
            uiView.gestureRecognizers?.removeAll()
            customEntityModel?.removeFromParent()
        }
    }
}

CustomEntityModel:

class CustomEntityModel: Entity, HasModel, HasAnchoring, HasCollision {
    required init() {
        super.init()
        let mesh = MeshResource.generateBox(size: 0.1)
        let material = SimpleMaterial(color: .gray, isMetallic: true)
        self.model = ModelComponent(mesh: mesh, materials: [material])
        self.name = "Box_EntityModel"
    }

    deinit {
        print("CustomEntityModel_remove")
    }
}
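A minimal sketch of one way out, assuming you keep the array that installGestures(_:for:) returns: removing exactly those recognizers lets the entity deinitialize without touching gestures installed for other entities. The GestureBookkeeper type and its dictionary are hypothetical bookkeeping, not RealityKit API.

```swift
import RealityKit
import UIKit

final class GestureBookkeeper {
    private var recognizersByEntityName: [String: [UIGestureRecognizer]] = [:]

    func install(for model: Entity & HasCollision, on arView: ARView) {
        // installGestures returns the recognizers it adds; keep them per entity.
        let recognizers = arView.installGestures(.all, for: model)
        recognizersByEntityName[model.name] = recognizers.map { $0 as UIGestureRecognizer }
    }

    func remove(entityNamed name: String, from arView: ARView) {
        // Detach only this entity's recognizers so other entities keep their gestures,
        // and so the recognizers stop retaining the removed entity.
        for recognizer in recognizersByEntityName[name] ?? [] {
            arView.removeGestureRecognizer(recognizer)
        }
        recognizersByEntityName[name] = nil
        arView.scene.findEntity(named: name)?.removeFromParent()
    }
}
```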
Replies: 0 · Boosts: 0 · Views: 485 · Activity: Jan ’24
Place Entity in the Middle of a Table in visionOS
Hello guys, I'm currently stuck on how to place a 3D entity from a USDZ file or a Reality Composer Pro project in the middle of a table in a mixed ImmersiveSpace. When I use AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: SIMD2<Float>(0.2, 0.2))), it just places the entity somewhere on the table, not in the middle and not in the orientation of the table, so the edges are not aligned. Has anybody got a clue how to do this? I would be very thankful for a response. Thanks
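A minimal sketch of one approach in visionOS, assuming an immersive space with world-sensing authorization and a root entity you control (placeOnTableCenter and root are hypothetical names): ARKit's PlaneDetectionProvider reports each plane's extent, so you can position the entity at the extent's center, oriented with the plane.

```swift
import ARKit
import RealityKit

// Run plane detection and center the entity on the first detected table.
func placeOnTableCenter(_ entity: Entity, under root: Entity) async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        let plane = update.anchor
        guard update.event == .added, plane.classification == .table else { continue }
        // world-from-extent-center = world-from-anchor * anchor-from-extent-center;
        // the extent transform also carries the plane's in-plane rotation, so the
        // entity lines up with the table's edges.
        let worldFromCenter = plane.originFromAnchorTransform
            * plane.geometry.extent.anchorFromExtentTransform
        entity.setTransformMatrix(worldFromCenter, relativeTo: nil)
        root.addChild(entity)
        break
    }
}
```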
Replies: 0 · Boosts: 2 · Views: 485 · Activity: Jan ’24
ARKit in high resolution
Hoping to get support for ARKit in high resolution. I'm using this API right now: NSArray<ARVideoFormat *> *supportedVideoFormats = [ARWorldTrackingConfiguration supportedVideoFormats]; Question: if a high-resolution ARVideoFormat is not included in supportedVideoFormats, is it still supported?
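A minimal sketch of the usual approach, assuming current-device capability is the question: supportedVideoFormats is the definitive list for the running device, so a format that isn't in it can't be requested. On iOS 16+ there is also a recommended format for high-resolution frame capture.

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// Pick the highest-resolution streaming format this device actually supports.
if let best = ARWorldTrackingConfiguration.supportedVideoFormats
    .max(by: { $0.imageResolution.width < $1.imageResolution.width }) {
    configuration.videoFormat = best
}

// iOS 16+: a format tuned for captureHighResolutionFrame, if the device offers one.
if #available(iOS 16.0, *),
   let hiRes = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    configuration.videoFormat = hiRes
}
```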
Replies: 1 · Boosts: 0 · Views: 297 · Activity: Jan ’24
displacement map specification
Where can I find a specification document for the displacement file "baked_mesh_disp0.exr" obtained from a Full Quality run of Reality Composer Pro? I ran Reality Composer Pro, selected Full Quality, and ran Create Model, which produced a .usdz file; I renamed it to .zip and unzipped it. Inside I found five maps, including "baked_mesh_disp0.exr", and I want to know its data specification.
Replies: 2 · Boosts: 0 · Views: 566 · Activity: Jan ’24
AR - Draw on a virtual 3D object
Hello everyone, I'm working on an AR app in which I load a 3D model of a human arm and place it on a QR code (ARImageAnchor). The user can move the model and change its texture. Is it possible to draw on this 3D model with my finger? I have seen videos where models react to a touch, but I don't just want to touch the model: I want to create a small sphere exactly at the point where I touch the model, for example, so that I could draw a line on the arm. My model has a CollisionShape.
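A minimal sketch, assuming a RealityKit ARView and that the arm model has collision shapes (handleTap is a hypothetical handler wired to a UITapGestureRecognizer): hit-test the touch point against the scene's collision geometry and parent a small sphere to the hit entity at the contact point.

```swift
import RealityKit
import UIKit

func handleTap(_ recognizer: UITapGestureRecognizer, in arView: ARView) {
    let point = recognizer.location(in: arView)
    // Ray-cast against collision shapes; the nearest hit is the touched surface point.
    guard let hit = arView.hitTest(point, query: .nearest, mask: .all).first else { return }

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.002),
        materials: [SimpleMaterial(color: .red, isMetallic: false)]
    )
    // Convert the world-space hit position into the hit entity's local space,
    // so the sphere sticks to the arm as the model moves.
    sphere.position = hit.entity.convert(position: hit.position, from: nil)
    hit.entity.addChild(sphere)
}
```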
Replies: 0 · Boosts: 0 · Views: 337 · Activity: Jan ’24
RoomPlan exceeded scene size limit error (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit. Apple's documentation describes it as an error that indicates when the scene size grows past the framework's limitations. Issue: this error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done, even when the room is small. It occurs immediately after I start the RoomCaptureSession following relocalization of the previous AR session (in world-tracking configuration). I'm having trouble understanding exactly why this error shows up and how to debug or solve it. Does anyone have any idea how to approach this issue?
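A minimal sketch for catching the error explicitly, assuming a coordinator conforming to RoomCaptureSessionDelegate (ScanCoordinator is a hypothetical name). Logging when exceedSceneSizeLimit arrives relative to relocalization can help test whether the accumulated world map, rather than the current room, is what hits the limit.

```swift
import RoomPlan

extension ScanCoordinator: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: Error?) {
        guard let error else { return }
        if let captureError = error as? RoomCaptureSession.CaptureError,
           case .exceedSceneSizeLimit = captureError {
            // If this fires immediately after relocalizing a previous AR session,
            // one thing worth trying is starting a brand-new RoomCaptureSession
            // instead of reusing the old session's world map.
            print("exceedSceneSizeLimit right after relocalized session start")
        }
    }
}
```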
Replies: 0 · Boosts: 1 · Views: 562 · Activity: Jan ’24
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that uses the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with an algorithm using the device GPS to line up the ARCamera with our objects. Question: does geo-tracking always provide accuracy greater than or equal to that of world tracking for an outdoor GPS-based AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
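A minimal sketch of the fallback decision, assuming you check availability up front and watch geoTrackingStatus each frame (chooseConfiguration is a hypothetical name): the status's state and accuracy tell you when the geo alignment is trustworthy and when to fall back to your own GPS-based alignment.

```swift
import ARKit

// Before starting: check geo-tracking coverage at the user's location.
func chooseConfiguration(for session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        let config: ARConfiguration = available
            ? ARGeoTrackingConfiguration()
            : ARWorldTrackingConfiguration()
        DispatchQueue.main.async { session.run(config) }
    }
}

// ARSessionDelegate: monitor localization state/accuracy while geo tracking runs.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let status = frame.geoTrackingStatus else { return }
    switch (status.state, status.accuracy) {
    case (.localized, .high), (.localized, .medium):
        break  // geo anchors are reliably aligned; trust the geo pose
    default:
        break  // not localized or low accuracy: fall back to GPS-based alignment
    }
}
```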
Replies: 2 · Boosts: 0 · Views: 671 · Activity: Jan ’24
Use CoreImage filters on Vision Pro (visionOS) view
I have an iOS app that uses the camera video feed and applies CoreImage filters to simulate a specific real-world effect (for educational purposes). Now I want to make a similar app for visionOS and apply the same CoreImage filters to the content (live view) users see while wearing the Apple Vision Pro headset. Is there a way to do it with the current APIs, and what would you recommend? I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters somehow using that? I know visionOS is a young platform, but any help would be great! Thank you!
Replies: 1 · Boosts: 0 · Views: 1.1k · Activity: Jan ’24