Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

79 Posts

Synchronizing Multi-User AR Experiences on Apple Vision Pro
Hello Developers,

I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.

I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.

Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.

Any advice or shared experiences would be greatly appreciated!

Best regards, Revin
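One building block often used for this kind of setup is MultipeerConnectivity: the host picks a shared origin and broadcasts its transform to every peer, which can then place content relative to it. The sketch below covers only the data-exchange half; establishing the shared physical reference (e.g. a common image or object anchor) is a separate problem, and the helper names are illustrative rather than an established API.

import Foundation
import MultipeerConnectivity
import simd

// Hypothetical helpers: serialise a shared-origin transform so every peer can
// express AR content relative to the same reference.
func encode(_ transform: simd_float4x4) -> Data {
    withUnsafeBytes(of: transform) { Data($0) }
}

func decodeTransform(_ data: Data) -> simd_float4x4? {
    guard data.count == MemoryLayout<simd_float4x4>.size else { return nil }
    var transform = matrix_identity_float4x4
    _ = withUnsafeMutableBytes(of: &transform) { data.copyBytes(to: $0) }
    return transform
}

// On the host, with an already-connected MCSession (the "room" members):
func broadcast(origin: simd_float4x4, over session: MCSession) throws {
    guard !session.connectedPeers.isEmpty else { return }
    try session.send(encode(origin), toPeers: session.connectedPeers, with: .reliable)
}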
5 replies · 0 boosts · 939 views · Oct ’24
USDZ with Blend Shapes Workflow Recommendations
Hi, since RealityKit 4 now supports Blend Shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ. Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)? So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
3 replies · 0 boosts · 1.3k views · Jan ’25
How to Display Transparent GIF on a 3D Point in AR Environment?
I’m currently working on a project where I need to display a transparent GIF at a specific 3D point within an AR environment. I’ve tried various methods, including using RealityKit with UnlitMaterial and TextureResource, but the GIF still appears with a black background instead of being transparent. Does anyone have experience with displaying animated transparent GIFs on an AR plane or point? Any guidance on how to achieve true transparency for GIFs in ARKit or RealityKit would be appreciated.
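For reference, one route that may help, assuming the frames actually carry an alpha channel (a GIF only supports 1-bit transparency, so a baked-in black background has to be fixed at the asset level): decode the frames with ImageIO, generate a texture per frame, and put it on an UnlitMaterial whose blending is explicitly set to transparent. The plane size and the frame-swapping timer below are placeholders.

import Foundation
import RealityKit
import ImageIO

// Decode the GIF into CGImage frames (ImageIO preserves any alpha it finds).
func loadFrames(from gifURL: URL) -> [CGImage] {
    guard let source = CGImageSourceCreateWithURL(gifURL as CFURL, nil) else { return [] }
    return (0..<CGImageSourceGetCount(source)).compactMap {
        CGImageSourceCreateImageAtIndex(source, $0, nil)
    }
}

// Build a plane whose material respects the texture's alpha channel.
func makeTransparentPlane(frame: CGImage) throws -> ModelEntity {
    let texture = try TextureResource.generate(from: frame, options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    material.blending = .transparent(opacity: .init(scale: 1.0)) // honour the texture's alpha
    return ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.3),
                       materials: [material])
}

// Animate by assigning a new texture (or material) for each decoded frame on a timer.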
2 replies · 0 boosts · 510 views · Oct ’24
How to use custom segmentation occlusion in RealityKit?
I have a neural network model for segmentation; I integrated it successfully and am getting a grayscale mask image. Next, I need to apply the segmentation mask in RealityKit to achieve an occlusion effect (like person segmentation). I tried doing it through post-processing and other methods, but none of them worked. Is there any example of how this can be done in RealityKit?
1 reply · 0 boosts · 634 views · Oct ’24
Change zFar in RealityView
I am working on a RealityView on iOS 18 that needs to render objects farther away than 1,000 meters. My app is used outside in open areas. I am using RealityView with content.camera = .spatialTracking, and I have turned off occlusion, collisions, and plane detection with a simple scene-understanding configuration like this:

let configuration = SpatialTrackingSession.Configuration(
    tracking: [.camera],
    sceneUnderstanding: [], // we don't want occlusions, collisions, etc.
    camera: .back)
let session = SpatialTrackingSession()
if let unavailable = await session.run(configuration) {
    print("unavailable \(unavailable)")
}

Is this possible with spatialTracking in RealityView, or with ARView? I have my RealityView working on visionOS inside an ImmersiveSpace. On visionOS I don't have the camera as a passthrough; it is a virtual scene with world tracking set up via the WorldTrackingProvider, and I can render objects farther away than 1,000 meters. I would like to do the same thing on iOS. I don't need the camera passthrough, but I do need the world tracking. I see that PerspectiveCameraComponent lets me set the near and far clipping planes, but I don't see how I can use that camera with world tracking.
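For context, a minimal sketch of the virtual-camera path mentioned at the end of the post, where PerspectiveCameraComponent controls the clipping planes; whether the same component is honoured under .spatialTracking is exactly the open question, and the values below are placeholders.

import SwiftUI
import RealityKit

struct FarCameraView: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual   // no passthrough, but full camera control

            let camera = Entity()
            camera.components.set(PerspectiveCameraComponent(near: 0.1,
                                                             far: 10_000, // metres
                                                             fieldOfViewInDegrees: 60))
            content.add(camera)
        }
    }
}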
0 replies · 0 boosts · 560 views · Oct ’24
Turn off camera in RealityView for iOS?
I am using RealityView in an iOS program. Is it possible to turn off the camera passthrough, so only my virtual content is showing? I am looking to create a VR experience. I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days, I think I used something like this:

arView.environment.background = .color(.black)

Is there something similar in RealityView for iOS? Here are some snippets of my current workaround inside RealityView. First create the sphere to surround the user:

// Create sphere
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)

Then turn off occlusion:

// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
    tracking: [],
    sceneUnderstanding: [],
    camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
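A possibly simpler route, assuming the iOS 18 RealityView camera supports a purely virtual mode: skip the passthrough entirely instead of masking it with a sphere.

import SwiftUI
import RealityKit

struct VirtualOnlyView: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual   // no camera feed; only virtual content renders
            // add the VR scene entities here
        }
    }
}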
1 reply · 0 boosts · 635 views · Sep ’24
sceneUnderstanding occlusion does not work with blendMode = alpha in iOS 18
If I import a USDZ model with blendMode set to alpha, occlusion does not work on iPhone with iOS 18. How should transparent materials and occlusion be used properly in the new RealityKit? Additionally, new artifacts have appeared when working with transparent objects that overlap each other: the transparent parts do not blend; instead, parts of the model simply do not render.
6 replies · 0 boosts · 1.2k views · Sep ’24
How to perform real-time specular reflection in visionOS
In RealityKit, I know that an HDR image is pre-computed and that, through the settings of the ImageBasedLight component, a specified specular object can reflect the content of that HDR image. But if a mirror-like object is very large, such as a large continuous glass door, then after assigning an IBL image to it, the reflection is obviously deformed as it moves through space, because the IBL is a picture of the surrounding environment captured at a single point, while the glass door is an extended surface. Is there a truly real-time specular-reflection setup in RealityKit that can reflect the model on the opposite side of the glass door?
0 replies · 0 boosts · 610 views · Sep ’24
2.0 System Controls In Immersive Space
How do I enable the system hand controls within an immersive space? I have an ImmersiveSpace and would like to enable the new 2.0 system controls like the home button and volume slider.

ImmersiveSpace(id: appModel.immersiveSpaceID) {
    ImmersiveView()
}
.immersionStyle(selection: .constant(.full), in: .full)
.upperLimbVisibility(.visible)

While I can see my hands and arms in this view, I cannot get the "New Hand Gestures" to show up on visionOS 2.0. When I leave the immersive space, they appear.
1 reply · 0 boosts · 675 views · Sep ’24
A tutorial on Windows VR on ARM Mac
I'll leave this here for anyone who's interested: it is possible to get Windows VR partially working on an ARM Mac. Right now it's just some demos, but I am still working on solutions: https://www.youtube.com/watch?v=qbucnU0dpDo&t=431s&ab_channel=NightSightProductions
0 replies · 0 boosts · 470 views · Sep ’24
Weird Reality Composer Pro Loop Timeline bug
TL;DR: The timeline does not play its animation when Repeat Forever is checked.

Hi! I have created a timeline for my model that runs a built-in emphasize animation. Then I added a behavior to my model and set its OnAddedToScene trigger to run that timeline. It works perfectly well on my device, but I want the timeline to loop. I realized that there's no loop option on the timeline itself, but I noticed that I can loop it if I insert it into another timeline (the loop checkbox then shows up). So I did that and had my model's behavior run that outer timeline, but then the model doesn't play the animation as intended.

Note: I am not making a Vision Pro app, but an iOS app leveraging ARKit and RealityKit.

Environment: iPhone 13 Pro Max with iOS 18.0

Code:

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.run(ARWorldTrackingConfiguration())
        Task {
            do {
                let anchor = AnchorEntity(plane: .horizontal)
                let emojiScene = try await Entity(named: "SunglassesScene", in: bubbleAR)
                anchor.addChild(emojiScene)
                arView.scene.addAnchor(anchor)
            } catch {
                print("Failed to load models: \(error)")
            }
        }
        return arView
    }
}

Thank you!
0 replies · 0 boosts · 572 views · Sep ’24
1 meter size limit on object visual presentation?
I’m encountering a 1-meter size limit on the visual presentation of objects presented in an immersive environment in visionOS, both in the simulator and on the device. For example, if I load a USDZ object that’s 1.0 x 0.5 x 0.05 meters, all of the 1.0 x 0.5 meter side is visible. If I scale it by a factor of 2.0, only a 1.0 x 1.0 viewport onto the object is shown, even though the object size reads out as scaled when queried via usdz.visualBounds(relativeTo: nil).extents, and if the USDZ is animated, the animation reflects the motion of the entire object. I haven’t been able to determine why this is the case, nor any way to adjust or mitigate it. Is this a hard constraint of the system, or is there a workaround? The target environment is visionOS 1.2.
1 reply · 0 boosts · 599 views · Sep ’24
Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1/aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
16 replies · 3 boosts · 1.1k views · May ’25
Augmented Reality app unable to load the image from the camera
I have had an app on the App Store for many years that enables users to post text into clouds in augmented reality. Yet last week, abruptly, upon installing the app on the iPhone, the screen started going totally dark and a list of barely comprehensible logs came up, of this kind (many times):

ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.

then, again several times:

ARWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108

and then:

ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }

and then the pose issues again, several times. I hoped the new beta version would have solved the issue, but that was not the case. Unfortunately, I do not know whether it depends on the beta version or on some other issue, given that the app may not be installable on the Mac simulator.
15 replies · 2 boosts · 2.1k views · Jan ’25
PortalComponent – allow world content to peek out
Hello, I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it. If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this. Thanks for any hints!
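For reference, a rough sketch of the portal-crossing additions that arrived with visionOS 2; the exact initializer labels (clippingMode/crossingMode and the .plane(.positiveZ) value) are recalled from the WWDC24 material and should be treated as assumptions rather than verified API.

import RealityKit

// Sketch: a portal that clips world content as usual, while entities that
// carry PortalCrossingComponent are allowed to cross the plane and peek out.
func makePortal(into world: Entity, peekingEntity: Entity) -> Entity {
    let portal = Entity()
    portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                         materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: world,
                                          clippingMode: .plane(.positiveZ),
                                          crossingMode: .plane(.positiveZ)))
    peekingEntity.components.set(PortalCrossingComponent()) // opt this entity in
    return portal
}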
5 replies · 0 boosts · 1.5k views · Dec ’24
How do we author a "reality file" like the ones on Apple's Gallery?
How do we author a Reality file like the ones under Examples with animations at https://developer.apple.com/augmented-reality/quick-look/? For example, "The Hab": https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality. Tapping various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer, and I don't see any way to export/compile to a "reality file" from within Xcode. How can I use multiple animations within a single glTF file? How can I set up multiple "tap targets" on a single object, where each one triggers a different action? How do we author something similar? What tools do we use? Thanks
6 replies · 2 boosts · 1.9k views · Nov ’24
RealityKit visionOS anchor to POV
Hi, is there a way in visionOS to anchor an entity to the POV via RealityKit? I need an entity that is always fixed to the 'camera'. I'm aware that this is discouraged from a design perspective, as it can be visually distracting; in my case, though, I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.

Edit: ARView on iOS has a lot of very useful helper properties and functions, like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform). How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable. An example use case: I would like to add an entity to the scene at my user's eye level, basically depending on their height.

I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it.

Appreciate any hints, thanks!
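For reference, on visionOS the device (head) pose is exposed by ARKit's WorldTrackingProvider, which can stand in for ARView's cameraTransform. A minimal sketch follows; the per-frame driver (e.g. a RealityKit System or a RealityView update closure) is omitted.

import ARKit
import RealityKit
import QuartzCore

@MainActor
final class POVTracker {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    // Call once per frame to pin an entity to the current head pose.
    func update(_ entity: Entity) {
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return }
        entity.setTransformMatrix(device.originFromAnchorTransform, relativeTo: nil)
    }
}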
9 replies · 6 boosts · 5.1k views · Dec ’24