Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

107 Posts
Where to apply "grounding shadow" in Reality Composer Pro?
There's a "grounding shadow" component we can add to any entity in Reality Composer Pro. My use case is Apple Vision Pro + RealityKit. By default I think I'd want to add this to every entity that's a ModelEntity in my scene... right? Should we add this component once to the root transform, or add it individually to each entity that is a model entity? Or should we not add it at all and let RealityKit do it for us? And does it also depend on whether we use a volume or a full space?
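A minimal sketch of one approach, assuming the component does not propagate from a parent transform to its children (untested): walk the hierarchy and attach a GroundingShadowComponent to every entity that actually carries a mesh.

import RealityKit

// Sketch: add a grounding shadow to every entity that has a ModelComponent.
// Assumption: the component has to be set per entity rather than once on the root.
func addGroundingShadows(to root: Entity) {
    if root.components.has(ModelComponent.self) {
        root.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in root.children {
        addGroundingShadows(to: child)
    }
}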
2 replies · 0 boosts · 727 views · Sep ’23
Augmented Reality (AR) measurement app using ARViewContainer: Value of type 'ARViewContainer' has no member 'parentSession'
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {
    @Binding var distance: Float?

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let configuration = ARWorldTrackingConfiguration()
        arView.session.run(configuration)
        arView.addGestureRecognizer(UITapGestureRecognizer(target: context.coordinator, action: #selector(context.coordinator.handleTap(_:))))
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: ARViewContainer
        init(_ parent: ARViewContainer) {
            self.parent = parent
            super.init()
            parent.parentSession.delegate = self
        }
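One possible direction for the error, sketched under the assumption that the Coordinator only needs ARSession callbacks (this is not the poster's code): ARViewContainer has no parentSession property, so the delegate can be wired where the ARView and its session are created, instead of inside the Coordinator's initializer.

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    let configuration = ARWorldTrackingConfiguration()
    arView.session.delegate = context.coordinator   // wire the delegate here instead
    arView.session.run(configuration)
    arView.addGestureRecognizer(UITapGestureRecognizer(target: context.coordinator,
                                                       action: #selector(Coordinator.handleTap(_:))))
    return arView
}

class Coordinator: NSObject, ARSessionDelegate {
    var parent: ARViewContainer
    init(_ parent: ARViewContainer) {
        self.parent = parent
        super.init()   // no session access needed here
    }
    // Hypothetical handler: would perform the raycast and update parent.distance.
    @objc func handleTap(_ sender: UITapGestureRecognizer) { }
}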
0 replies · 0 boosts · 411 views · Sep ’23
Verifying Image Tracking in Vision Pro Simulator
Hello! I am making a Vision Pro app. It is as simple an AR image-recognition-to-3D app as possible:
Load 10 images as ARReferenceImages
A short piece of code saying "if you see this image, display this 3D model on top of the image"
Open the AR camera
When I build to iPhone it works just as intended. When I build to the Vision Pro Simulator it - not surprisingly - does not work: only a white rectangle in the "Simulator environment". Note that I can build and run, and I get the "Successfully loaded 10 ARReferenceImages." console print. My question: can I somehow verify that this project will work on a Vision Pro? Best regards
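For what it's worth, a hedged sketch: on visionOS, image tracking runs through ARKitSession and ImageTrackingProvider rather than ARView/ARReferenceImage, and the simulator does not support that provider, so a runtime check is one way to tell "code is broken" apart from "this environment can't track images".

import ARKit

// Sketch (visionOS): the simulator reports image tracking as unsupported.
if ImageTrackingProvider.isSupported {
    print("Image tracking is available here")
} else {
    print("Image tracking is not available (e.g. in the visionOS simulator)")
}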
1 reply · 1 boost · 659 views · Sep ’23
Can't center entity on AnchorEntity(.plane)
How can entities be centered on a plane AnchorEntity? Not only is the box offset from the anchor's center, the offset also varies depending on where the user is in the space when the app starts. This is my code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let wall = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 1.5]),
                                    trackingMode: .continuous)
            let mesh = MeshResource.generateBox(size: 0.3)
            let box = ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .green, isMetallic: false)])
            box.setParent(wall)
            content.add(wall)
        }
    }
}

With PlaneDetectionProvider being unavailable in the simulator, I currently don't see another way to place entities at least somewhat consistently at anchors in a full space.
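A sketch of an on-device alternative (untested, names are illustrative): PlaneDetectionProvider reports each detected plane's own transform, so an entity can be placed at the plane's origin explicitly instead of relying on the AnchorEntity(.plane) target.

import ARKit
import RealityKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.vertical])

// Sketch: place a copy of `box` at the origin of every newly detected wall plane.
func placeBoxesOnWalls(in content: RealityViewContent, box: ModelEntity) async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates where update.event == .added {
        guard update.anchor.classification == .wall else { continue }
        let holder = Entity()
        holder.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        holder.addChild(box.clone(recursive: true))
        content.add(holder)
    }
}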
1 reply · 0 boosts · 642 views · Sep ’23
Is ARKit's 3D object detection reliable enough to use?
Hi, I'm doing research on AR using a real-world object as the anchor. For this I'm using ARKit's ability to scan and detect 3D objects. Here's what I found so far after scanning and testing object detection on many objects:
It works best on 8-10" objects (basically objects you can place on your desk).
It works best on objects that have many visual features/details (which makes sense, just like plane detection).
Although things seem to work well, and it's exactly the behavior I need, I noticed detection issues in these cases:
Different lighting setup, meaning just the direction of the light. I always try to maintain bright room light, but testing in the morning versus in the evening will sometimes, if not most of the time, make detection harder or make it fail.
Different environment, meaning that simply moving the object from one place to another makes detection fail or become harder (taking a significant amount of time). This isn't about the scanning process; this is purely anchor detection from the same .arobject file on the same real-world object.
These two difficulties make me wonder if scanning and detecting 3D objects will ever be reliable enough for a real-world case. For example, you want to ship an AR app containing the manual of your product, where the app detects the product and points out the location/explanation of its features. Has anyone tried this before? Does your research show the same behavior as mine? Does using LiDAR help with scanning and detection accuracy? So far there doesn't seem to be any information on what ARKit actually does when scanning and detecting; if anyone has more information, I'd like to learn how to make better scans. Any help or information on this matter that any of you are willing to share would be really appreciated. Thanks.
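For reference, a sketch of the standard detection setup this describes (the group name and arView are assumptions): the .arobject files go into an asset catalog resource group and are handed to the configuration as detectionObjects; beyond that, detection quality depends mostly on the scan and the lighting, as noted above.

import ARKit

// Sketch: load scanned reference objects from an assumed asset catalog group
// named "Scanned Objects" and enable detection on a world-tracking session.
let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "Scanned Objects", bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
arView.session.run(configuration)   // arView is assumed to exist elsewhere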
5 replies · 0 boosts · 1.4k views · Oct ’23
Collision handling between an entity and real-world objects and planes in RealityKit
I'm trying to achieve behaviour similar to the native AR preview app on iOS, where we can place a model and, once we move or rotate it, it automatically detects obstacles, gives haptic feedback, and doesn't go through walls. I'm only targeting devices with LiDAR. Here is what I have so far:

Session setup

private func configureWorldTracking() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.environmentTexturing = .automatic
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        configuration.sceneReconstruction = .meshWithClassification
    }
    let frameSemantics: ARConfiguration.FrameSemantics = [.smoothedSceneDepth, .sceneDepth]
    if ARWorldTrackingConfiguration.supportsFrameSemantics(frameSemantics) {
        configuration.frameSemantics.insert(frameSemantics)
    }
    session.run(configuration)
    session.delegate = self
    arView.debugOptions.insert(.showSceneUnderstanding)
    arView.renderOptions.insert(.disableMotionBlur)
    arView.environment.sceneUnderstanding.options.insert([.collision, .physics, .receivesLighting, .occlusion])
}

Custom entity:

class CustomEntity: Entity, HasModel, HasCollision, HasPhysics {
    var modelName: String = ""
    private var cancellable: AnyCancellable?

    init(modelName: String) {
        super.init()
        self.modelName = modelName
        self.name = modelName
        load()
    }

    required init() {
        fatalError("init() has not been implemented")
    }

    deinit {
        cancellable?.cancel()
    }

    func load() {
        cancellable = Entity.loadModelAsync(named: modelName + ".usdz")
            .sink(receiveCompletion: { result in
                switch result {
                case .finished:
                    break
                case .failure(let failure):
                    debugPrint(failure.localizedDescription)
                }
            }, receiveValue: { modelEntity in
                modelEntity.generateCollisionShapes(recursive: true)
                self.model = modelEntity.model
                self.collision = modelEntity.collision
                self.collision?.filter.mask.formUnion(.sceneUnderstanding)
                self.physicsBody = modelEntity.physicsBody
                self.physicsBody?.mode = .kinematic
            })
    }

Entity loading and placing

let tapLocation = sender.location(in: arView)
guard let raycastResult = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .horizontal).first else { return }
let entity = CustomEntity(modelName: modelName)
let anchor = AnchorEntity(world: raycastResult.worldTransform)
anchor.name = entity.name
anchor.addChild(entity)
arView.scene.addAnchor(anchor)
arView.installGestures([.rotation, .translation], for: entity)

This loads my model properly and lets me move and rotate it, but I cannot figure out how to handle collisions with the real environment (like walls) and interrupt gestures once my model starts going through them.
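A sketch of one missing piece, assuming the goal is haptics plus stopping the gesture (names and wiring are illustrative, untested): with the sceneUnderstanding collision option enabled as above, collision events fire when the kinematic model touches the reconstructed mesh, and that is one place to react.

import RealityKit
import Combine
import UIKit

var collisionSubscription: Cancellable?

// Sketch: react when the placed entity begins colliding with anything,
// including the scene-understanding mesh.
func observeCollisions(for entity: CustomEntity, in arView: ARView) {
    collisionSubscription = arView.scene.subscribe(to: CollisionEvents.Began.self, on: entity) { _ in
        UIImpactFeedbackGenerator(style: .medium).impactOccurred()
        // e.g. temporarily disable the installed gesture recognizers here,
        // or snap the entity back to its last collision-free transform (application-specific).
    }
}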
0 replies · 0 boosts · 433 views · Sep ’23
Different ARKit behavior between iPhone 12 Pro and iPhone 15 Pro Max
I am making an app with a companion watch app whose only purpose is to display QR codes for the main iOS app to image-track using AR. I've been testing this capability with an iPhone 12 Pro and have had no issues. But when I test the same capability with an iPhone 15 Pro Max, the phone won't pick up the image on the watch, although it will pick up the physical printout of the image I have next to it. The iPhone 12 Pro is running iOS 17.0.1 and the iPhone 15 Pro Max is running iOS 17.0.2, so I don't think it is an iOS bug like the camera bug last year for iOS 16. iPhone 12 Pro short video (both images tracked): https://imgur.com/a/iCC8aV5 iPhone 15 Pro Max video (digital image is not tracked, but physical is): https://imgur.com/a/eXE07IM Can anyone help explain why the same build of the same app running on the same iOS has different outcomes on different hardware? It might appear from the video that it's because the image is out of focus for a time, but trust me, I have been trying to get this to work all day, doing various tests in various directions. Thank you for the help!
0 replies · 0 boosts · 506 views · Sep ’23
Finding device heading with ARKit and ARGeoTrackingConfiguration
I am working on an app where I need to orient a custom view depending on the device heading. I am using ARKit and ARSCNView with ARGeoTrackingConfiguration in order to overlay my custom view at real-world geographic coordinates. I've got a lot of it working, but the heading of my custom view is off. Once the ARSession gets an ARGeoTrackingStatus.State of .localized, I need to be able to get the device's heading (0-360) so that I can orient my view. I'm having trouble figuring out this missing piece. Any help is appreciated.
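One hedged option that sidesteps ARKit entirely (a sketch, untested): CLLocationManager reports a 0-360 degree true heading, which can be sampled once geo tracking reports .localized and used to orient the custom view.

import CoreLocation

// Sketch: deliver the device's true heading in degrees (0 = true north).
final class HeadingProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onHeading: ((CLLocationDirection) -> Void)?

    override init() {
        super.init()
        manager.delegate = self
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        guard newHeading.headingAccuracy >= 0 else { return }   // negative accuracy means invalid
        onHeading?(newHeading.trueHeading)
    }
}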
3 replies · 1 boost · 997 views · Feb ’24
Convert Blender to .usdz
I have a Blender project; for simplicity, a black hole. The way it is modeled is a sphere on top of a round plane, with a bunch of effects on top of that. I have tried multiple ways:
convert to USD from the file menu
convert to OBJ and then import
But all of them have resulted in just the body, without any of the effects. Does anybody know how to do this properly? I seem to have no clue, except for going through Reality Converter Pro (which I planned on going through already, but modeling it there).
0 replies · 1 boost · 446 views · Oct ’23
Using ARView's project(_:) method to convert to screen coordinates.
I'm trying to understand how to use the project(_:) function provided by ARView to convert 3D model coordinates to 2D screen coordinates, but am getting unexpected results. Below is the default Augmented Reality App project, modified to have a single button that, when tapped, will place a circle over the center of the provided cube. However, when the button is pressed, the circle's position does not line up with the cube. I've looked at the documentation for project(_:), but it doesn't give any details about how to convert a point from model coordinates to "the 3D world coordinate system of the scene". Is there better documentation somewhere on how to do this conversion?

// ContentView.swift
import SwiftUI
import RealityKit

class Coordinator {
    var arView: ARView?
    var anchor: AnchorEntity?
    var model: Entity?
}

struct ContentView : View {
    @State var coord = Coordinator()
    @State var circlePos = CGPoint(x: -100, y: -100)

    var body: some View {
        ZStack {
            ARViewContainer(coord: coord).edgesIgnoringSafeArea(.all)
            VStack {
                Spacer()
                Circle()
                    .frame(width: 10, height: 10)
                    .foregroundColor(.red)
                    .position(circlePos)
                Button(action: {
                    showMarker()
                }, label: {
                    Text("Place Marker")
                })
            }
        }
    }

    func showMarker() {
        guard let arView = coord.arView else { return }
        guard let model = coord.model else { return }
        guard let anchor = coord.anchor else { return }
        print("Model position is: \(model.position)")
        // convert position into anchor's space
        let modelPos = model.convert(position: model.position, to: anchor)
        print("Converted position is: \(modelPos)")
        // convert model locations to screen coordinates
        circlePos = arView.project(modelPos) ?? CGPoint(x: -1, y: -1)
        print("circle position is now \(circlePos)")
    }
}

struct ARViewContainer: UIViewRepresentable {
    var coord: Coordinator

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        coord.arView = arView

        // Create a cube model
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        coord.model = model

        // Create horizontal plane anchor for the content
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        coord.anchor = anchor

        // Add the horizontal plane anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#Preview {
    ContentView(coord: Coordinator())
}
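A hedged guess at the mismatch (sketch, untested): model.position is already expressed in the anchor's space, so converting it "to: anchor" again double-applies the offset, and project(_:) expects world-space coordinates anyway. Converting the model's own origin into world space first looks like this; note also that the Circle's .position is interpreted inside its enclosing VStack rather than the full screen, which can add a further offset.

func showMarker() {
    guard let arView = coord.arView, let model = coord.model else { return }
    // Sketch: convert the model's origin into world space (to: nil), then project.
    let worldPosition = model.convert(position: .zero, to: nil)
    circlePos = arView.project(worldPosition) ?? CGPoint(x: -1, y: -1)
}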
1 reply · 0 boosts · 561 views · Oct ’23
Quick Look AR shows "Object requires a newer version of iOS" on iOS 17 when using .reality files
Hi there. Hosting on my server a no-doubt-well-formed AR file, namely the "CosmonautSuit_en.reality" from Apple's examples (https://developer.apple.com/augmented-reality/quick-look/), the infamous and annoying "Object requires a newer version of iOS." message appears, even though I'm running iOS 17.1 on my iPad, that is, the very latest available version. Everything works flawlessly on iOS 16 and below. Of course, my markup follows the required format, namely:
<a rel="ar" href="https://artest.myhost.com/CosmonautSuit_en.reality">
<img class="image-model" src="https://artest.myhost.com/cosmonaut.png">
</a>
Accessing this same .reality file from the aforementioned Apple page works fine. Why is it not working from my hosting server? For your information, when I serve a USDZ instead, also from Apple's page of examples, namely the toy_drummer_idle.usdz file, everything works flawlessly. Again, I'm using the same markup schema:
<a rel="ar" href="https://artest.myhost.com/toy_drummer_idle.usdz">
<img class="image-model" src="https://artest.myhost.com/toy_drummerpng">
</a>
Also, when I delete the rel="ar" option, the AR experience is launched, but with an extra step that goes through an ugly poster (generated by AR Quick Look on the fly), which ruins the UX/UI of my web app. This behavior is, by the way, the same one you get when accessing the .reality file directly by typing its URL into Safari's address bar. Any tip on this? Thanks for your time.
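One thing worth checking, offered as an assumption rather than a confirmed diagnosis: when a server has no MIME-type mapping for the .reality extension, AR Quick Look can fall back to the generic "requires a newer version of iOS" message even though the file itself is fine. A sketch for Apache (verify the exact type strings against Apple's current documentation):

# .htaccess sketch: declare MIME types for AR Quick Look assets.
AddType model/vnd.usdz+zip .usdz
AddType model/vnd.reality .reality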
2 replies · 0 boosts · 774 views · Oct ’23