Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

107 Posts

Post | Replies | Boosts | Views | Activity

Quick Look AR Button Greyed Out
Hi, we are having problems with iOS Quick Look not working. Specifically, the AR button is greyed out after the scene / AR model has been opened previously. This is all running off our web app. We have figured out that clearing the device's cache solves the issue: the greyed-out button turns blue and clickable again. However, we are hitting this issue very inconsistently, on iPad as well as iPhone, and on both newer and older iOS versions. We would be very happy for any responses or advice on how to solve this, because the inconsistent behaviour makes Quick Look, great as it is when it works, unviable for production. Best regards
0
0
332
May ’24
visionOS image tracking help
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit, so please bear with me :) I'm just trying to make a simple program that tracks an image and places an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here's my code:

View model code:

@Observable
class ImageTrackingModel {
    var session = ARKitSession()            // ARKit session used to manage AR content
    var imageAnchors = [UUID: Bool]()        // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]()    // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity()                // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

App:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content view:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
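One pattern that may help (a hedged sketch, not a confirmed fix: it assumes the crash comes from creating or mutating RealityKit entities off the main actor, and it reuses the otherwise-unused entityMap so a new sphere isn't created on every anchor update):

// Possible replacement for updateImage in the ImageTrackingModel above.
// The for-await loop would then call `await updateImage(update.anchor)`.
func updateImage(_ anchor: ImageAnchor) async {
    await MainActor.run {
        let entity: ModelEntity
        if let existing = entityMap[anchor.id] {
            entity = existing
        } else {
            // Create the sphere once per anchor and remember it.
            entity = ModelEntity(mesh: .generateSphere(radius: 0.05))
            entityMap[anchor.id] = entity
            rootEntity.addChild(entity)
        }
        if anchor.isTracked {
            entity.isEnabled = true
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        } else {
            entity.isEnabled = false
        }
    }
}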
1
0
439
May ’24
How does Encounter Dinosaurs blend between portal lighting and real-world lighting?
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
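For reference, per-entity IBL assignment looks roughly like the sketch below ("PortalEnvironment" is a made-up resource name; this only switches the image-based light per entity and does not answer the per-fragment blending question):

import RealityKit

// Assign a specific IBL to one entity; lighting changes per entity, not per fragment.
func applyPortalLighting(to dinosaur: Entity) async throws {
    let ibl = try await EnvironmentResource(named: "PortalEnvironment") // placeholder asset name
    dinosaur.components.set(ImageBasedLightComponent(source: .single(ibl)))
    dinosaur.components.set(ImageBasedLightReceiverComponent(imageBasedLight: dinosaur))
}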
0
0
445
May ’24
ARKit ARWorldTrackingConfiguration doesn't recognize faces
Hi all, I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene, using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:

var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}

faceEntity comes back empty every single time. Any help / pointers would be appreciated!
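One thing worth checking (an assumption, not a confirmed cause): a plain ARWorldTrackingConfiguration does not produce face anchors unless user face tracking is explicitly enabled, and that requires a supported TrueDepth front camera. A minimal sketch, where arView is assumed to be your existing ARView:

import ARKit
import RealityKit

// Enable face anchors in a world-tracking session, if the device supports it.
func runWorldTrackingWithFaces(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        configuration.userFaceTrackingEnabled = true // needs a TrueDepth front camera
    }
    arView.session.run(configuration)
}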
1
0
394
Apr ’24
Video player controls aren't showing for the video in immersive view
I have an AVPlayer() that loads the video and places it on the screen ModelEntity in the immersive view using a VideoMaterial. This also makes the video untappable, as it is a VideoMaterial. Here's the code:

let screenModelEntity = model.garageScreenEntity as! ModelEntity
let modelEntityMesh = screenModelEntity.model!.mesh

let url = Bundle.main.url(forResource: "<URL>", withExtension: "mp4")!
let asset = AVURLAsset(url: url)
let playerItem = AVPlayerItem(asset: asset)

let player = AVPlayer()
let material = VideoMaterial(avPlayer: player)

screenModelEntity.components[ModelComponent.self] = .init(mesh: modelEntityMesh, materials: [material])
player.replaceCurrentItem(with: playerItem)

return player

I was able to load and play the video. However, I cannot figure out how to show the player controls (AVPlayerViewController) to the user, similar to the DestinationVideo sample app. How can I add the video player controls in this case?
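One hedged option (an assumption about the goal, and not necessarily how DestinationVideo does it): present a standard AVPlayerViewController in a SwiftUI window and drive it from the same AVPlayer instance that backs the VideoMaterial, so the system controls affect the immersive video:

import SwiftUI
import AVKit

// Minimal wrapper that shows the system playback controls for an existing AVPlayer.
struct PlayerControlsView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player // same instance that backs the VideoMaterial
        return controller
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {}
}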
0
0
472
Apr ’24
Vision Pro - viable for industry applications?
I'm in Europe, where Vision Pro isn't available yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try to sell the idea of investing in a number of Vision Pro devices, as well as in app development for them, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" setting where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy app stuff for a broad user base. Now, the hardware, the OS features, and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and an internet connection (I guess?). That, for example, is a strict nope in my case, for security reasons. I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo. Potentially, this device ticks several boxes with regard to incredibly useful features in general:
- very good indoor tracking
- pass-through with good fidelity
- hands-free operation
The first point especially is kind of a really big deal and, for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.) For the sake of argument, let's say the app I'm building is Cave Mapper. It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though, and we can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. An archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want synchronised access to the same spatial data from the day before. For that:
- How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid; others have said it can drift.
- How well do the persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
- Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
- Other showstoppers? In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional / rotational reference)?
Other, practical stuff: Can you wear Vision Pro with a safety helmet? Does it work with gloves?
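On the WorldAnchor question specifically, the per-device persistence API looks roughly like the sketch below (a hedged sketch under the assumption that on-device persistence is what you're after; as far as I know it does not by itself give you cross-device sharing):

import ARKit // visionOS ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Persist the position of a cave painting on this device.
func persistPaintingAnchor(at transform: simd_float4x4) async throws {
    try await session.run([worldTracking])

    // WorldAnchors are persisted by the system across app launches on this device.
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)

    // On later launches, previously persisted anchors come back through anchorUpdates.
    for await update in worldTracking.anchorUpdates {
        if case .added = update.event {
            print("Anchor available: \(update.anchor.id)")
        }
    }
}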
1
0
463
Apr ’24
visionOS Portal
In the WWDC talk "Enhance your spatial computing app with RealityKit," we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped. How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience? I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as when walking behind the portal), this can break the effect, and I was unable to break the effect in the "Encounter Dinosaurs" experience. If it helps at all: I have noticed that if you look very closely from the edge of the portal, the rocks do not stick out the way the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
5
1
1k
Apr ’24
Having an issue with two AR sessions together
I have the following issue with running two AR services. I am trying to develop an app for my master's thesis. Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen. Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above. There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about it or any sample. I am using the following stack: Xcode (latest), SwiftUI, and the latest OS on a Mac mini and iPhone.
2
1
581
Apr ’24
Entity disappears when changing position
I have some strange behavior in my app. When I set the position to .zero you can see the sphere normally. But when I change it to any other value, no matter which and no matter how small, the sphere isn't visible or in the view.

The RealityView:

import SwiftUI
import RealityKit
import RealityKitContent

struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}

SphereEntity:

import Foundation
import RealityKit
import RealityKitContent

class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] don't work ...
    }
}
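A guess worth testing (an assumption, not a diagnosis): if this RealityView lives in a bounded volume, content positioned outside the volume's bounds is clipped, so even a small offset can push a 0.25 m sphere partly or fully out of view. A minimal debug sketch that logs where the sphere actually ends up:

import SwiftUI
import RealityKit

// Logs the sphere's world position and size to check whether it is being
// clipped by the volume bounds or simply placed out of view.
struct SphereDebugView: View {
    let sphere: Entity

    var body: some View {
        RealityView { content in
            content.add(sphere)
            sphere.position = SIMD3<Float>(0.1, 0, 0) // arbitrary test offset
            print("world position:", sphere.position(relativeTo: nil))
            print("visual bounds:", sphere.visualBounds(relativeTo: nil))
        }
    }
}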
1
1
598
Mar ’24
How to spawn in particles that don't move
I am trying to make an application for the Vision Pro where the particles don't move, but rather stay still, so that there is no lag. For example, I am trying to spawn in 100 particles here. I want the particles to remain static, but spawning in many causes the simulator to lag. Also, is there maybe a way I can get a particle system to follow a specific shape like the one I have in the image? Currently, I have multiple model entities that each take on a particle system component:

for i in 0..<100 {
    let newEntity = ModelEntity()
    var particleSystem = particleSystem(color: newColor)
    newEntity.components.set(particleSystem)
    newEntity.position = position
    newEntity.scale = scale
    stars.append(newEntity)
}

func particleSystem(color: UIColor) -> ParticleEmitterComponent {
    var particles = ParticleEmitterComponent()
    particles.emitterShapeSize = .init(repeating: 0.02) // make burst smaller
    particles.emitterShape = .sphere
    particles.mainEmitter.birthRate = 1
    particles.mainEmitter.lifeSpan = 2
    particles.mainEmitter.size = 0.02
    particles.burstCount = 50
    particles.speed = 0.01
    particles.mainEmitter.isLightingEnabled = false
    particles.mainEmitter.color = .constant(.single(color))
    return particles
}
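One hedged alternative (assuming the goal is static, star-like dots rather than an animated simulation): skip the particle system entirely and instance small unlit spheres, which never move and avoid per-frame emitter work:

import RealityKit
import UIKit

// Build static "stars" as tiny unlit spheres instead of particle emitters.
// The positions array is a placeholder for wherever your shape's points come from.
func makeStaticStars(at positions: [SIMD3<Float>], color: UIColor) -> [ModelEntity] {
    let mesh = MeshResource.generateSphere(radius: 0.005)
    let material = UnlitMaterial(color: color)
    return positions.map { position in
        let star = ModelEntity(mesh: mesh, materials: [material])
        star.position = position
        return star
    }
}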
0
0
487
Mar ’24
Finding device heading with ARKit and ARGeoTrackingConfiguration
I am working on an app where I need to orient a custom view depending on the device heading. I am using ARKit and ARSCNView with ARGeoTrackingConfiguration in order to overlay my custom view in real-world geographic coordinates. I've got a lot of it working, but the heading of my custom view is off. Once the ARSession reaches an ARGeoTrackingStatus.State of .localized, I need to be able to get the device's heading (0-360) so that I can orient my view. I'm having trouble figuring out this missing piece. Any help is appreciated.
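One hedged workaround (an assumption on my part: it sidesteps ARKit and reads the compass via Core Location, which may be less precise than the geo-tracked camera pose, but gives a plain 0-360 heading once tracking is localized):

import CoreLocation

// Publishes the device's true heading in degrees (0-360 from true north).
final class HeadingProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onHeading: ((CLLocationDirection) -> Void)?

    override init() {
        super.init()
        manager.delegate = self
        manager.headingFilter = 1 // degrees of change before a new callback
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // A negative trueHeading means the value is invalid.
        guard newHeading.trueHeading >= 0 else { return }
        onHeading?(newHeading.trueHeading)
    }
}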
3
1
997
Feb ’24
PortalComponent – allow world content to peek out
Hello, I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it. If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this. Thanks for any hints!
4
0
774
Feb ’24
Unable to draw textures on SCNGeometry which is created from ARKit FaceAnchor points.
In the code below I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using SceneKit's SCNGeometry. This lets me stretch face mesh vertices as per my requirement. The problem I am now facing is as follows: I am trying to apply a lipstick texture material, which is of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using my CustomFaceGeometry. When I apply a lipstick texture that looks like the image attached below, the full face gets colored or modified; I want only the part of the face where the texture's transparency is > 0 to be modified, and I don't want the rest of the face to change. Can you give me a detailed solution using code?

// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}

class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .point,
                                         primitiveCount: vertices.count,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }

    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }

    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }

    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])

        // Scale the y of a few selected vertices by a fixed ratio.
        let ll_ratio_y = Float(0.969999)
        for index in [290, 274, 265, 700, 730, 25, 709, 725, 710] {
            vertices[index] = SCNVector3(x: vertices[index].x, y: vertices[index].y * ll_ratio_y, z: vertices[index].z)
        }

        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .triangles,
                                         primitiveCount: indices.count / 3,
                                         bytesPerIndex: MemoryLayout<UInt16>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
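A hedged sketch of what may be the missing piece (an assumption: the custom geometry has no texture coordinates, so SceneKit has no UV mapping and stretches the lip image over the whole mesh; ARFaceGeometry exposes per-vertex textureCoordinates that can be added as a second geometry source):

import ARKit
import SceneKit
import UIKit

// Rebuild the face geometry with ARKit's face UVs so a PNG whose non-lip pixels
// are fully transparent only colors the lip region. Names mirror the question's code.
func makeFaceGeometry(from faceAnchor: ARFaceAnchor, vertices: [SCNVector3]) -> SCNGeometry {
    let faceGeometry = faceAnchor.geometry

    let vertexSource = SCNGeometrySource(vertices: vertices)

    // ARFaceGeometry.textureCoordinates matches the vertex array one-to-one.
    let uvs = faceGeometry.textureCoordinates.map { CGPoint(x: CGFloat($0.x), y: CGFloat($0.y)) }
    let uvSource = SCNGeometrySource(textureCoordinates: uvs)

    let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)

    let geometry = SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "Face.scnassets/lip_arks_y7.png")
    material.isDoubleSided = true
    geometry.firstMaterial = material
    return geometry
}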
2
0
571
Feb ’24
Indoor skybox is displayed large and far in the field of view in visionOS?
The indoor skybox is displayed large and far away in the field of view in visionOS. Why?

func addSkybox(for destination: Destination) {
    let subscription = TextureResource.loadAsync(named: destination.imageName).sink(
        receiveCompletion: {
            switch $0 {
            case .finished:
                break
            case .failure(let error):
                assertionFailure("\(error)")
            }
        },
        receiveValue: { [weak self] texture in
            guard let self = self else { return }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            self.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1E3),
                materials: [material]
            ))
            // We flip the sphere inside out so the texture is shown inside.
            self.scale *= .init(x: -1, y: 1, z: 1)
            self.transform.translation += SIMD3<Float>(0.0, 1.0, 0.0)

            // Rotate the sphere to show the best initial view of the space.
            updateRotation(for: destination)
        }
    )
    components.set(Entity.SubscriptionComponent(subscription: subscription))
}

(from https://developer.apple.com/documentation/visionos/destination-video)
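A quick experiment that may help isolate the cause (a hedged sketch, not a fix): rebuild the same inside-out sphere with a configurable radius to check whether the 1E3 radius, i.e. a 1 km sphere, is what makes the image feel far away. With a true 360° equirectangular panorama, the radius should mainly affect parallax rather than apparent size, so if the image still looks wrong the source image itself may be the issue. "SkyboxTexture" is a placeholder asset name:

import RealityKit

func makeSkybox(radius: Float) throws -> ModelEntity {
    let texture = try TextureResource.load(named: "SkyboxTexture") // placeholder name
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: [material])
    // Flip the sphere inside out so the texture faces the viewer.
    sphere.scale = .init(x: -1, y: 1, z: 1)
    return sphere
}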
3
0
461
Feb ’24
RealityKit: "annotating" an object
Hello, I want to be able to tap on a previously placed ModelEntity box and add a dot or a text at that location on the box (kind of like adding an annotation to the box). I have something like this, but I'm not sure I'm doing it correctly:

class MyARView: ARView {
    // ...
    private func didTap(_ gestureRecognizer: UITapGestureRecognizer) {
        let pos = gestureRecognizer.location(in: self)
        if !didPlaceCube {
            placeCube(pos)
            return
        }
        let hitTestResult = self.hitTest(pos)
        guard let firstResult = hitTestResult.first else { return }
        let entity = firstResult.entity
        let textEntity = ModelEntity(mesh: .generateText("Hello there",
                                                         extrusionDepth: 0.4,
                                                         font: .boldSystemFont(ofSize: 0.05),
                                                         containerFrame: .zero,
                                                         alignment: .center,
                                                         lineBreakMode: .byWordWrapping))
        textEntity.setPosition(entity.position + firstResult.position, relativeTo: entity)
        entity.addChild(textEntity)
    }
    // ...
}
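A hedged sketch of one likely fix (an assumption: firstResult.position is a world-space point, so adding it to entity.position mixes coordinate spaces; converting the hit point into the tapped entity's local space avoids that). Inside didTap, the last two lines would become something like:

// Convert the world-space hit point into the tapped entity's local space
// so the annotation lands exactly where the user tapped on the box.
let localHit = entity.convert(position: firstResult.position, from: nil)
textEntity.position = localHit // relative to `entity`
entity.addChild(textEntity)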
0
0
418
Feb ’24