"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how to build just a simple macOS app that imports a scene from a .reality file?
The documentation suggests that the simple act of adding a .reality file (and what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
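The closest I've gotten is loading the file manually instead of relying on generated code. Here's a minimal sketch of what I mean (untested; it assumes a file named "Scene.reality" in the app bundle and macOS 15 for RealityView):

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is a placeholder; substitute the name of your own .reality file.
            if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
               let scene = try? Entity.load(contentsOf: url) {
                content.add(scene)
            }
        }
    }
}

But that's a workaround; the generated loading methods the documentation promises never appear.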
AR / VR
Discuss augmented reality and virtual reality app capabilities.
Posts under AR / VR tag (108 posts)
Hi,
When I attach a BillboardComponent to my anchor entities, I am no longer able to retrieve the tapped entity, because the collision shapes of the entity are thrown off by the constant re-orientation toward the camera. The collision shapes don't get updated either: if I tap somewhere that is not my model entity, I still get a hit out of nowhere.
I tried updating the collision shapes of the entity every frame:
for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}
However, nothing comes of it, and it is not a smart solution in the first place, because recreating the shapes every frame is too heavy.
I am using the usual ARView view controller setup, which works fine when I comment out the BillboardComponent line:
private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18?
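One workaround I'm considering (an untested sketch; the proxy entity, the box size, and the URLComponent initializer are my own assumptions) is to keep the collision shape on a separate entity that is never billboarded, so the shape used for hit testing stays put:

import RealityKit

// Untested sketch: keep collision on a plain proxy entity at the same position as the
// billboarded label, so BillboardComponent's rotation never affects the hit-test shape.
func addTapProxy(for labelEntity: Entity, url: URL) {
    let proxy = Entity()
    proxy.position = labelEntity.position
    // The box size is a guess; match it roughly to the label's visual bounds.
    proxy.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.1, 0.01])]))
    proxy.components.set(URLComponent(url: url)) // assumes your URLComponent has an init(url:)
    labelEntity.parent?.addChild(proxy)
}

I haven't verified that this behaves any better, though.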
Thanks!
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content.
This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView.
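The closest I've found so far is running the spatial-tracking session myself and leaving .occlusion out of the scene-understanding capabilities. This is an untested sketch, and I'm only assuming that this configuration is what governs object occlusion in RealityView on iOS 18:

import SwiftUI
import RealityKit

struct NoOcclusionARView: View {
    @State private var session = SpatialTrackingSession()

    var body: some View {
        RealityView { content in
            // Request scene understanding without .occlusion (assumption: this is what
            // controls object occlusion when content.camera = .spatialTracking).
            let configuration = SpatialTrackingSession.Configuration(
                tracking: [.plane],
                sceneUnderstanding: [.collision, .physics]
            )
            _ = await session.run(configuration)
            content.camera = .spatialTracking

            // ... add your 3D entities here ...
        }
    }
}

If there is an officially supported switch for this, I'd prefer that instead.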
Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
Hi guys, I have set the size of a cube to 20 cm. Then I want to scale it with the values x: 0.048, y: 1, z: 0.048, but strangely the panel resets everything to x: 1, y: 1, z: 1. I also tried other decimals such as x: 1.2, y: 2, z: 1.2, and the panel automatically resets them to x: 1, y: 1, z: 1. My question is: how do I set a transform scale that accepts decimal numbers?
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro.
Is there anything else I should do other than loading the Entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion from within the .usda file and somehow enable it?
Hello everyone,
Since last night, the Object Capture feature in my app has stopped working. Whenever I try to use it, a blank screen is displayed instead of the expected functionality.
I’ve also tested several other apps that rely on Object Capture, and they are experiencing the same issue. This makes me think it might not be a problem specific to my device or app.
I’ve already tried restarting my device and ensuring all apps are up to date, but the issue persists.
Does anyone have more information about this issue? If so, is there any update on when it might be resolved?
Thank you in advance for your help!
Best regards
Hi, every one!
I'm trying to bind a timeline (animation + audio) and behaviors to an entity in Reality Composer Pro.
In Xcode, I need to clone this entity and use the behaviors, but I found that the behaviors are not cloned (the notification is sent but never received by the Reality Composer Pro content, so the timeline does not execute).
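For reference, this is how I'm firing the behavior's notification trigger (based on the pattern I've seen in sample code; "StartTimeline" is just an example identifier and has to match the trigger's name in Reality Composer Pro):

import Foundation
import RealityKit

// Post the notification that a Reality Composer Pro "Notification" trigger listens for.
func triggerBehavior(on entity: Entity, identifier: String = "StartTimeline") {
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: NSNotification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

This works for the original entity but not for its clone.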
How can I solve this problem? Thanks!
Ever since updating to Xcode 16, my AR app doesn't compile, because Xcode no longer recognizes the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS.
The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
How do I submit a ticket to Apple?
When using RealityKit's ARView, I can write let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any) to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) may provide the same functionality, but I don't know how to set the ontoPlane parameter.
Can someone help me with some code snippets?
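For what it's worth, this is my current (untested) understanding: ontoPlane takes a 4x4 transform, and the plane used for the hit appears to be the x-z plane of that transform. planeEntity and tapLocation below are placeholders for your own plane anchor and gesture location:

import SwiftUI
import RealityKit

// Untested sketch: unproject a 2D view point onto the x-z plane of a given entity's transform.
func planePoint(content: RealityViewCameraContent,
                tapLocation: CGPoint,
                planeEntity: Entity) -> SIMD3<Float>? {
    let planeTransform = planeEntity.transformMatrix(relativeTo: nil)
    return content.unproject(tapLocation,
                             from: .local,
                             to: .scene,
                             ontoPlane: planeTransform)
}

I'd also appreciate confirmation of whether this is actually equivalent to the old raycast against an estimated plane.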
I have this code to make an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new iOS 18 RealityView, but for some reason the left side shows the entity (box) nearer to the camera than the right side, so the two views aren't identical. I wonder, is this a bug that needs to be fixed, or something else? Thanks!
Here is the code:
import SwiftUI
import RealityKit

struct ContentView: View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView: View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis: [0, 1, 1])
        }
    }
}
And here is the view:
I have an issue using RealityView to show two AR views side by side. I did succeed in making it work as a non-AR view, but now my code is not working.
It does work using Storyboard and Swift with SceneKit, so why is it not working in RealityView?
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        HStack(spacing: 0) {
            MainView()
            MainView()
        }
        .background(.black)
    }
}

struct MainView: View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            let item = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.camera = .spatialTracking
            anchor.addChild(item)
            anchor.position = [0.0, 0.0, -1.0]
            anchor.orientation = .init(angle: .pi/4, axis: [0, 1, 1])
            // Add the horizontal plane anchor to the scene
            content.add(anchor)
        }
    }
}
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene, and the labels could be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView is supported on iOS 18.
I tried running the RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:
import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Setup your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}
When I remove the "attachments" keyword and its block, the errors mostly go away. That doesn't help me, though, as I want to attach SwiftUI views to anchor entities in RealityKit.
As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
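The fallback I'm experimenting with (a rough sketch under my own assumptions; makeLabelEntity, the plane size, and the render scale are all illustrative) is to render the SwiftUI view to an image with ImageRenderer and show it on a textured plane:

import SwiftUI
import RealityKit

// Sketch: rasterize a SwiftUI label with ImageRenderer and put it on an unlit, billboarded plane.
@MainActor
func makeLabelEntity(assetID: String) -> ModelEntity? {
    let renderer = ImageRenderer(content:
        Text(assetID)
            .font(.title3)
            .foregroundColor(.white)
            .padding(5)
            .background(Color.black.opacity(0.7))
    )
    renderer.scale = 3 // oversample so the texture stays sharp

    guard let cgImage = renderer.cgImage,
          let texture = try? TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    else { return nil }

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let plane = ModelEntity(mesh: .generatePlane(width: 0.2, height: 0.07), materials: [material])
    plane.components.set(BillboardComponent()) // keep the label facing the camera
    return plane
}

Obviously this loses interactivity and live updates, which is why I'd still prefer real attachments.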
In visionOS, ARKit is meant to integrate virtual content with reality. However, most of its functionality can be easily achieved with RealityKit alone (except for scene reconstruction, room tracking, and the enterprise APIs), so do I still need to use ARKit? What is the difference between them?
Hi,
I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11).
I used Core ML models such as DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for those devices.
Any advice or pointers to resources would be much appreciated.
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022
Is there any update on full support for the WebXR AR module, which should enable immersive-ar mode?
Are the features such as DOM overlays and WebGPU bindings on the roadmap?
Is it possible to capture stereoscopic video either internally or externally or via airplay for debugging purposes?
Thanks
Hi folks:
I've been creating .reality files with Reality Composer for over a year. Some of the files are up to 500 MB and, prior to the last month, they opened fine as AR projected experiences on even basic iPhones and iPads. Now, I think since iOS 18, a 64 MB file will still open as an AR experience, but files from roughly 350 MB up won't open. Files just shows a window displaying the name of the file, that it's a .reality file, and the file size; it no longer opens into either an AR or Object display of the .reality scene. Has a new size limit been put on the .reality files that Files will open, or is something else going on here? I have a client who was about to launch an experience based on a .reality file I can no longer open. Please help.
Hello Apple Team,
Is it possible to change the zoom factor, exposure, white balance and other settings, of an iOS ARKit session?
I know how to do it using an AVCaptureSession; however, I can't figure out how to access the AVCaptureDeviceInput of the current AR session.
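The closest thing I've found is ARKit's configurableCaptureDeviceForPrimaryCamera (iOS 16+), but I'm not sure it covers my case; here's a sketch of what I mean (the zoom factor and exposure bias values are just examples):

import ARKit
import AVFoundation

// Sketch: grab the capture device ARKit is using and adjust it, assuming the
// current configuration supports manual control.
func adjustARCamera() {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        return // the configuration or device doesn't expose the camera
    }
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = 2.0                               // example zoom
        device.setExposureTargetBias(-0.5, completionHandler: nil) // example exposure bias
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}

Is something like this the intended approach, or is there a way to reach the AVCaptureDeviceInput directly?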
Thanks
PS: I'm using ARKit and RealityKit on iOS 17.
Hello Developers,
I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.
I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.
Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.
Any advice or shared experiences would be greatly appreciated!
Best regards,
Revin
Hi everyone,
I’m working on an app for VisionOS that needs to recognize individual rooms in a hallway based on the person the room belongs to (using the name displayed on each office door). Is there any sample code or resource that can guide me in implementing this feature?
Thanks in advance for your help!
Hi,
since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!