Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

114 Posts
Post not yet marked as solved · 2 Replies · 455 Views
Dear Apple Developer Forum Community, I hope this message finds you well. I am writing to seek assistance with an error I encountered while attempting to create a "Hello World" application in Xcode. Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process I encountered an unexpected error that halted my progress. The error message I received was [insert error message here]. I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it. I am reaching out to the community in the hope that someone has encountered a similar issue or has expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated. Thank you very much for your time and assistance. Sincerely, Zipzy Games
Post not yet marked as solved · 0 Replies · 380 Views
Hello everyone, I want to develop an app for Vision Pro that aims to help people with vertigo and dizziness problems. The problem is that I cannot afford a Vision Pro. If I develop and test using a standard VR headset shell with an iPhone inside, would that cause issues on a real Vision Pro?
Posted by Uvrutfus.
Post not yet marked as solved · 1 Reply · 903 Views
In full immersive (VR) mode on visionOS, if I want to use compositor services and a custom Metal renderer, can I still get the user’s hands texture so my hands appear as they are in reality? If so, how? If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons. I’d like to see my own hands, even in immersive mode.
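
One avenue worth checking (a hedged sketch, not a confirmed answer to this post): SwiftUI's upperLimbVisibility(_:) modifier on an ImmersiveSpace asks the system to composite the passthrough upper limbs over rendered content, and it can be combined with a CompositorLayer-based Metal renderer. The scene below assumes that combination works for a fully immersive style:

    import SwiftUI
    import CompositorServices

    struct FullImmersionScene: Scene {
        var body: some Scene {
            ImmersiveSpace(id: "vr") {
                CompositorLayer { layerRenderer in
                    // Start the custom Metal render loop with this LayerRenderer.
                }
            }
            .immersionStyle(selection: .constant(.full), in: .full)
            // Assumption: .visible asks visionOS to draw the real passthrough
            // hands on top of the fully immersive Metal content.
            .upperLimbVisibility(.visible)
        }
    }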
Post not yet marked as solved · 1 Reply · 302 Views
Hi, please forgive me if I am asking a basic question, but after my R&D I still don't see how to build a solution where a user can scan a QR code hanging on a specific wall at a specific fixed position, so that when workers scan the QR code from their iOS device they can see all the wiring, pipelines, etc. It would be really helpful if someone could let me know whether this is possible with ARKit, and how.
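
In case it helps a future reader, here is one hedged sketch of how this is often approached on iOS: ARKit does not decode QR codes itself, but a Vision barcode request can be run on the camera frame, and the code's screen position can then be raycast onto the wall to pin the overlay content there. All names below are illustrative, and production code would need to handle device orientation properly:

    import ARKit
    import RealityKit
    import Vision

    // Hypothetical helper: find a QR code in the current frame and anchor
    // overlay content (wiring, pipelines, etc.) where it sits on the wall.
    func anchorOverlay(for frame: ARFrame, in arView: ARView) {
        let request = VNDetectBarcodesRequest()
        request.symbologies = [.qr]
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
        guard let observation = request.results?.first else { return }

        // Map the QR's normalized bounding-box center into view points
        // (simplified; assumes portrait orientation).
        let mid = CGPoint(x: observation.boundingBox.midX,
                          y: 1 - observation.boundingBox.midY)
        let point = CGPoint(x: mid.x * arView.bounds.width,
                            y: mid.y * arView.bounds.height)

        // Raycast onto the (vertical) wall and pin an anchor there.
        if let hit = arView.raycast(from: point,
                                    allowing: .estimatedPlane,
                                    alignment: .vertical).first {
            let anchor = AnchorEntity(world: hit.worldTransform)
            // anchor.addChild(wiringOverlayEntity) // your content here
            arView.scene.addAnchor(anchor)
        }
    }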
Post not yet marked as solved · 2 Replies · 521 Views
When trying to run my app with .windowStyle(.volumetric) for visionOS, this error is returned: Fatal error: Your app was given a scene with session role UISceneSessionRole(_rawValue: UIWindowSceneSessionRoleApplication) but no scenes declared in your App body match this role.
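
For anyone hitting the same assertion: a common cause (assumed here, since the full App body isn't shown in the post) is that no scene declared in the App body actually matches the window being requested, for example when the volumetric WindowGroup is missing or misdeclared. A minimal sketch of a matching declaration:

    import SwiftUI

    @main
    struct MyVisionApp: App {
        var body: some Scene {
            // A WindowGroup matching the application session role must be
            // declared here; the .volumetric style applies to that scene.
            WindowGroup(id: "volume") {
                ContentView()
            }
            .windowStyle(.volumetric)
        }
    }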
Post marked as solved · 1 Reply · 300 Views
Why are RealityKit's high-level APIs only available on visionOS? RealityView and Model3D, to name a couple. On other platforms, the only way to deploy RealityKit and/or ARKit is currently through UIKit, or UIKit's SwiftUI integration (UIViewRepresentable). Are these newer APIs coming to other platforms as well?
Posted by Treata.
Post not yet marked as solved · 2 Replies · 455 Views
I'm constructing a RealityView where I'd like to display content in front of the user's face. When testing, I found that the deviceAnchor I initially get is wrong, so I implemented the following code to wait until the deviceAnchor I get from worldTrackingProvider has a correct value:

    private let arkitSession = ARKitSession()
    private let worldTrackingProvider = WorldTrackingProvider()

    var body: some View {
        RealityView { content, attachments in
            Task {
                do {
                    // Start the world-tracking provider.
                    try await arkitSession.run([worldTrackingProvider])

                    // Continuously query deviceAnchor until it is valid.
                    var deviceAnchor: DeviceAnchor?
                    while deviceAnchor == nil
                        || !checkDeviceAnchorValid(Transform(matrix: deviceAnchor!.originFromAnchorTransform).translation) {
                        deviceAnchor = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
                    }
                    let cameraTransform = Transform(matrix: deviceAnchor!.originFromAnchorTransform)
                    // ...code that updates my entity's translation
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }

    private func checkDeviceAnchorValid(_ translation: SIMD3<Float>) -> Bool {
        // code that checks whether the deviceAnchor has a valid translation
    }

However, I found that sometimes I can't get out of the while loop defined above: not because the rules inside checkDeviceAnchorValid are too strict, but because the translation I get from deviceAnchor is always invalid (it is [0, 0, 0] and never changes). Why is this happening? Is this a known issue? I also wonder whether I can get a callback when worldTrackingProvider returns a correct deviceAnchor.
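
A hedged suggestion rather than a confirmed diagnosis: queryDeviceAnchor(atTimestamp:) only reports useful poses once the provider has actually reached its running state, and a tight while loop never suspends, so the session task may never get a chance to make progress. A sketch of the waiting portion that checks the provider state and yields between polls:

    // Run the provider, then wait until it is actually running.
    try await arkitSession.run([worldTrackingProvider])
    while worldTrackingProvider.state != .running {
        try await Task.sleep(for: .milliseconds(10))
    }

    // Poll with a short sleep so the loop suspends instead of spinning.
    var deviceAnchor: DeviceAnchor?
    while deviceAnchor == nil || !deviceAnchor!.isTracked {
        deviceAnchor = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        try await Task.sleep(for: .milliseconds(10))
    }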
Post not yet marked as solved · 2 Replies · 328 Views
Hi, I am required to bring my CFD simulation results onto the new Vision Pro glasses. The simulation should be visible as a soft VR/AR object in the room. I am very new to the developer world. Could someone give me a hint as to which IDE, tools, etc. to use for this task? SwiftUI, Swift, visionOS, Xcode, ...? Once I know which IDE/tool/language to use, I will start learning courses for it. Thanks a lot!
Post not yet marked as solved · 4 Replies · 1.1k Views
Hello everyone! I'm completely new to Apple. My situation is the following: at the company where I work, we have a virtual reality educational game made in Unreal Engine 4.27. With the release of the Apple Vision Pro, we plan on making our game available on visionOS, but as I said, we're fairly new to the Apple environment, and I have already encountered some problems building Unreal Engine 4.27 with Xcode 15.2, which is a requirement for visionOS if I understand correctly.

What I wanted to ask is: if anyone already has experience porting an Unreal Engine game to visionOS, what is the best guideline for getting everything to work correctly? Maybe access to a tutorial or guide, etc.

Our progress so far: I'm using a MacBook Pro running Sonoma 14.2.1 with Xcode 15.2, and I followed this guide to set up Unreal Engine on the Mac: https://docs.unrealengine.com/5.3/en-US/downloading-unreal-engine-source-code/ The first problem I encountered was some compiler errors when building the ShaderCompileWorker, such as "variable 'x' set but not used [-Werror,-Wunused-but-set-variable]" and "use of bitwise '&' with boolean operands [-Werror,-Wbitwise-instead-of-logical]". I know why those errors occur, but I didn't want to modify the whole Unreal Engine codebase, so I looked for a general solution; the only thing I found was people reverting to Xcode 13.4.1, which I believe is not possible when targeting visionOS.

So now I'm wondering whether porting an Unreal Engine 4.27 game to visionOS is a reasonable thing to do, or whether it's hardly possible at all. I would like some more insight on the topic before putting a lot of work and resources into the task, only to realize that it may not work in the end. I'd appreciate any kind of advice or help, just to get a better view of the whole issue. Like I said, we're new to Apple :) Thanks a lot in advance!
Posted by Itama22.
Post not yet marked as solved · 0 Replies · 353 Views
Hello guys, I am currently stuck on how to place a 3D entity from a USDZ file or a Reality Composer Pro project in the middle of a table in a mixed ImmersiveSpace. When I use AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: SIMD2<Float>(0.2, 0.2))), it just places the entity somewhere on the table, not in the middle, and not in the orientation of the table, so the edges are not aligned. Has anybody got a clue how to do this? I would be very thankful for a response. Thanks!
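
One possible direction (a sketch under assumptions; the exact geometry API names should be verified against the visionOS SDK): AnchorEntity(.plane(...)) doesn't expose the plane's extent, but running ARKit plane detection directly yields PlaneAnchor updates whose extent transform describes the center and orientation of the detected rectangle:

    import ARKit
    import RealityKit

    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

    // Sketch: place `model` at the center of the first detected table,
    // aligned with the table's extent rectangle.
    func placeOnTableCenter(_ model: Entity, under root: Entity) async throws {
        try await session.run([planeDetection])
        for await update in planeDetection.anchorUpdates {
            let plane = update.anchor
            guard plane.classification == .table else { continue }
            // originFromAnchorTransform places the plane in world space;
            // anchorFromExtentTransform centers/orients its extent rectangle.
            let worldFromExtent = plane.originFromAnchorTransform
                * plane.geometry.extent.anchorFromExtentTransform
            model.setTransformMatrix(worldFromExtent, relativeTo: nil)
            root.addChild(model)
            break
        }
    }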
Posted by Flex05.
Post not yet marked as solved · 2 Replies · 462 Views
All of a sudden (as in, when Xcode 15.2 left beta yesterday?) I can't build attachments into my RealityView:

    var body: some View {
        RealityView { content, attachments in
            // stuff
        } attachments: {
            // stuff
        }
    }

This produces "No exact matches in call to initializer" on the declaration line (RealityView { content, attachments in). As far as I can tell, this is identical to the sample code provided at the WWDC session, but I've been fussing with various syntaxes for an hour now and I can't figure out what it wants.
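
If it's useful to anyone else who lands here: the attachments builder changed between the WWDC-era betas and the release SDK, where each attachment is declared as an Attachment with an explicit id (this is an educated guess at the poster's problem, not a verified fix). A sketch of the shape that compiles against the release-style API:

    import SwiftUI
    import RealityKit

    struct AttachmentsExample: View {
        var body: some View {
            RealityView { content, attachments in
                // Look up the entity built from the attachment declared below.
                if let label = attachments.entity(for: "label") {
                    content.add(label)
                }
            } attachments: {
                // Release SDK style: an Attachment with an explicit id.
                Attachment(id: "label") {
                    Text("Hello")
                }
            }
        }
    }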
Posted by ckempke.
Post not yet marked as solved · 0 Replies · 318 Views
After adding gestures to an entity model, when that model later needs to be removed, its instance cannot be cleared from memory unless uiView.gestureRecognizers?.removeAll() is executed. However, executing that method also removes the gestures of every other entity model in the ARView. Does anyone have a better way to achieve this?

Example code:

    struct ContentView: View {
        @State private var isRemoveEntityModel = false

        var body: some View {
            ZStack(alignment: .bottom) {
                ARViewContainer(isRemoveEntityModel: $isRemoveEntityModel)
                    .edgesIgnoringSafeArea(.all)
                Button {
                    isRemoveEntityModel = true
                } label: {
                    Image(systemName: "trash")
                        .font(.system(size: 35))
                        .foregroundStyle(.orange)
                }
            }
        }
    }

ARViewContainer:

    struct ARViewContainer: UIViewRepresentable {
        @Binding var isRemoveEntityModel: Bool
        let arView = ARView(frame: .zero)

        func makeUIView(context: Context) -> ARView {
            let model = CustomEntityModel()
            model.transform.translation.y = 0.05
            model.generateCollisionShapes(recursive: true)
            // After executing this line, the custom entity model can be deleted
            // from ARView.scene, but its deinit {} is never executed.
            arView.installGestures(.all, for: model)

            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
            anchor.children.append(model)
            arView.scene.anchors.append(anchor)
            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {
            if isRemoveEntityModel {
                let customEntityModel = uiView.scene.findEntity(named: "Box_EntityModel")
                // After this line, ARView.scene deletes the CustomEntityModel
                // correctly and its deinit {} runs, but every other
                // CustomEntityModel in ARView.scene loses its gestures as well.
                uiView.gestureRecognizers?.removeAll()
                customEntityModel?.removeFromParent()
            }
        }
    }

CustomEntityModel:

    class CustomEntityModel: Entity, HasModel, HasAnchoring, HasCollision {
        required init() {
            super.init()
            let mesh = MeshResource.generateBox(size: 0.1)
            let material = SimpleMaterial(color: .gray, isMetallic: true)
            self.model = ModelComponent(mesh: mesh, materials: [material])
            self.name = "Box_EntityModel"
        }

        deinit {
            print("CustomEntityModel_remove")
        }
    }
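
One pattern that may help (a sketch under assumptions, not a verified fix): installGestures(_:for:) returns the recognizers it creates, so they can be remembered per entity, for example in a [String: [UIGestureRecognizer]] dictionary on the UIViewRepresentable's Coordinator (assumed here). Deleting one entity then removes exactly its recognizers and leaves everyone else's intact:

    // In makeUIView: remember the recognizers created for this entity.
    let installed = arView.installGestures(.all, for: model)
    context.coordinator.recognizers[model.name] = installed.compactMap { $0 as? UIGestureRecognizer }

    // In updateUIView: remove only the recognizers that belong to the
    // entity being deleted, then detach the entity itself.
    if let entity = uiView.scene.findEntity(named: "Box_EntityModel") {
        for recognizer in context.coordinator.recognizers[entity.name] ?? [] {
            uiView.removeGestureRecognizer(recognizer)
        }
        context.coordinator.recognizers[entity.name] = nil
        entity.removeFromParent()
    }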
Posted by shaping.
Post not yet marked as solved · 1 Reply · 357 Views
    struct ARViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)
            arView.debugOptions = .showStatistics // Error:
            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {}
    }

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5775: failed assertion `Draw Errors Validation Vertex Function(vsSdfFont): the offset into the buffer viewConstants that is bound at buffer index 4 must be a multiple of 256 but was set to 61840.'
Posted by shaping.
Post not yet marked as solved · 1 Reply · 665 Views
Hey everyone, I'm running into an issue where my USDZ model does not show up in Reality Composer Pro; it was exported from Blender as a USD and converted in Reality Converter. See the attached image. It's strange, because the USDZ model appears fine in previews, but once it is brought into RCP I receive this pop-up and the model does not appear. I'm not sure how to resolve this "multiple root level" issue. If anyone can point me in the right direction or offer any feedback, it would be much appreciated. Thank you!
Post not yet marked as solved · 0 Replies · 478 Views
I have a RealityView and I want to add an entity with an attachment. Assume I have a viewModel managing my entities, and addEntityGesture() adds a new entity under the rootEntity.

    RealityView { content, attachments in
        // Load initial content
        content.add(viewModel.rootEntity)
    } update: { updateContent, updateAttachments in
        //
    } attachments: {
        //
    }
    .gesture(addEntityGesture())

I know that we can create attachments in the attachments closure and add those attachments as entities in the make closure; however, what if I want to add an entity with an attachment on the fly?
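
For future readers, one hedged approach: declare the attachment up front, hold the dynamically added entity in view state, and let the update closure (which re-runs on state changes) bind the attachment to it. The names and the tap gesture below are illustrative stand-ins for the poster's viewModel and addEntityGesture():

    import SwiftUI
    import RealityKit

    struct OnTheFlyAttachment: View {
        @State private var rootEntity = Entity()
        @State private var spawned: Entity?

        var body: some View {
            RealityView { content, _ in
                content.add(rootEntity)
            } update: { _, attachments in
                // Re-runs when `spawned` changes; bind the pre-declared
                // attachment view to the entity that was just added.
                if let spawned, let label = attachments.entity(for: "label") {
                    label.position = [0, 0.1, 0]
                    spawned.addChild(label)
                }
            } attachments: {
                Attachment(id: "label") {
                    Text("New entity")
                }
            }
            .onTapGesture {
                // Stand-in for the poster's addEntityGesture().
                let entity = ModelEntity(mesh: .generateSphere(radius: 0.05))
                rootEntity.addChild(entity)
                spawned = entity
            }
        }
    }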
Posted by GerryGGG.
Post not yet marked as solved · 1 Reply · 506 Views
Hey guys, how can I fit RealityView content inside a volumetric window? I have this simple example:

    WindowGroup(id: "preview") {
        RealityView { content in
            if let entity = try? await Entity(named: "name") {
                content.add(entity)
                entity.setPosition(.zero, relativeTo: entity.parent)
            }
        }
    }
    .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    .windowStyle(.volumetric)

I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model has loaded. Can we achieve the same result using a RealityView? Cheers!
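
There is no direct .resizable() counterpart for RealityView, but one sketch (assuming the 0.6 m edge length declared in defaultSize above; a robust version would read the view's actual bounds, e.g. via GeometryReader3D) is to scale the entity so its visual bounds fit that edge:

    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            // Uniformly scale so the model's largest dimension spans 0.6 m,
            // matching the volume size assumed from defaultSize.
            let bounds = entity.visualBounds(relativeTo: nil)
            let longestSide = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
            if longestSide > 0 {
                entity.scale *= SIMD3<Float>(repeating: 0.6 / longestSide)
            }
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }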
Posted by GerryGGG.
Post not yet marked as solved · 1 Reply · 412 Views
I successfully set a picture as the background of a full-immersion ImmersiveSpace with the following code:

    import RealityKit

    struct MainBackground: View {
        var body: some View {
            RealityView { content in
                guard let resource = try? await TextureResource(named: "Image_Name") else {
                    fatalError("Error.")
                }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(resource))
                let entity = Entity()
                entity.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1000),
                    materials: [material]
                ))
                // Invert the sphere so the texture faces inward.
                entity.scale *= .init(x: -1, y: 1, z: 1)
                content.add(entity)
            }
        }
    }

However, when running it I found that when the user moves, the background is not fixed; it follows the user's movement, which feels unrealistic. How can I fix the background at the place where it first appears, so the user experiences movement like really walking in the real world, instead of the background following the user?
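
One thing that might be worth trying (an assumption about the cause, not a confirmed fix): parent the sphere to an explicit world anchor at a fixed position, so the skybox is pinned to the space's origin rather than to any head-relative placement:

    // Pin the sphere at the world origin of the immersive space
    // (assumption: the drifting placement is what follows the user).
    let worldAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
    worldAnchor.addChild(entity)
    content.add(worldAnchor)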
Posted by lijiaxu.
Post not yet marked as solved · 1 Reply · 467 Views
Hello, I am very new here in the forum (and to iOS dev as well). I am trying to build an app that uses 3D face filters, and I want to use Reality Composer. I knew Xcode 15 does not include it, so I downloaded the beta 8 version (as suggested in another post). That one actually has Reality Composer Pro (Xcode -> Developer Tools -> Reality Composer Pro), but the Experience.rcproject still does not appear. Is there a way to create one? When I use Reality Composer Pro, it only seems able to create standalone projects and does not seem to be bundled with Xcode in any way. Thanks for your time, people!