Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Adding custom Simulated Scenes to Apple Vision Pro Simulator
Greetings, I've been playing around with some of the first documented examples and apps. After a while, I wanted to move to a custom space in the simulator. I scanned through the options in the simulator and in Xcode but couldn't find how to add custom Simulated Scenes (e.g. my own room) to the Simulator. Could someone point out how to get this done? Even if it has to be done programmatically, some pointers would be welcome. Thanks.
Replies: 5 · Boosts: 1 · Views: 1k · Oct ’23
Texture not applying on RoomPlan wall object (capture data)
We are attempting to update the texture on a node. The code below works correctly when we use a color, but it encounters issues when we attempt to use an image. The image is available in the bundle, and it displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please assist us with this issue.

```swift
for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform
    let imageMaterial = SCNMaterial()
    node.geometry?.materials = [imageMaterial]
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown
    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}
```
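A minimal sketch of swapping the brown color for an image, assuming a bundle asset named "floorTexture" (the asset name, the tiling scale, and the helper function are illustrative, not from the post). When a color works but an image does not, the usual suspects are a failed UIImage(named:) lookup or geometry without texture coordinates, so the guard below is worth checking first.

```swift
import SceneKit
import UIKit

/// Applies a tiled image texture to a node's geometry.
func applyImageTexture(to node: SCNNode, imageNamed name: String) {
    // If this guard fails, the "works with a color, not with an image" symptom is explained.
    guard let image = UIImage(named: name) else {
        print("Texture image \(name) not found in bundle")
        return
    }
    let material = SCNMaterial()
    material.diffuse.contents = image
    material.diffuse.wrapS = .repeat        // tile across large floor/wall surfaces
    material.diffuse.wrapT = .repeat
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(4, 4, 1) // repeat count; tune per node size
    material.isDoubleSided = true           // RoomPlan walls are often viewed from both sides
    node.geometry?.materials = [material]
}
```

Calling applyImageTexture(to: node, imageNamed: "floorTexture") in place of the UIColor.brown assignment keeps the rest of the loop unchanged; if the geometry has no texture coordinates, the image may still not show as expected, which is worth ruling out separately.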
Replies: 0 · Boosts: 0 · Views: 454 · Oct ’23
How to trigger scene custom behaviour in Xcode 15
Hi, I'm working on an AR app. With Reality Composer and Xcode 14 I triggered custom behaviours with just:

```swift
let myScene = try! Experience.loadBox()
myScene.notifications.myBox.post()
```

Now in Xcode 15 I don't have a generated Experience; with the .reality file I have to use entities instead:

```swift
let objectAR = try! Entity.load(named: "myProject.reality")
```

How can I trigger the custom behaviour I previously exported from Reality Composer from that?
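One workaround that has circulated for this situation is to post, via NotificationCenter, the notification that Reality Composer-exported behaviours are reported to observe. The notification name and userInfo keys below are undocumented community findings rather than public API, so treat them as assumptions and verify them against your own .reality file; "myBox" stands in for the trigger identifier defined in Reality Composer.

```swift
import Foundation
import RealityKit

/// Posts the notification that a Reality Composer "notification" trigger is reported to observe.
/// The notification name and userInfo keys are undocumented assumptions, not public API.
func triggerBehaviour(named identifier: String, in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,            // scene that contains the loaded entity
            "RealityKit.NotificationTrigger.Identifier": identifier   // trigger name set in Reality Composer
        ]
    )
}

// Usage sketch, after adding objectAR to the scene:
// triggerBehaviour(named: "myBox", in: arView.scene)
```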
Replies: 0 · Boosts: 1 · Views: 409 · Oct ’23
Applying textures to RoomPlan wall group objects
I am writing to seek assistance with a challenge I am facing in a 3D model rendering project. I believe your expertise in this area could be immensely helpful in resolving the issue. The problem involves difficulties displaying textures on both parent and child nodes within the 3D model. Here are the key details: the model contains wall_grp objects (doors, windows and walls). We are using RoomPlan data in an SCNView; this code depends on the SceneKit and RoomPlan APIs. When we comment out the child-node code it works, but in that case we don't have windows and doors on the wall.

```swift
func updateWallObjects() {
    if arch_grp.count > 0 {
        if !arch_grp.isEmpty {
            for obj in arch_grp[0].childNodes {
                let color = UIColor.init(red: 255/255, green: 229/255, blue: 204/255, alpha: 1.0)
                let parentNode = obj.flattenedClone()
                for childObj in obj.childNodes {
                    let childNode = childObj.flattenedClone()
                    let childMaterial = SCNMaterial()
                    childNode.geometry?.materials = [childMaterial]
                    if let name = childObj.name {
                        if (removeNumbers(from: name) != "Wall") {
                            childNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
                        } else {
                            childNode.geometry?.firstMaterial?.diffuse.contents = color
                        }
                    }
                    childObj.removeFromParentNode()
                    parentNode.addChildNode(childObj)
                }
                let material = SCNMaterial()
                parentNode.geometry?.materials = [material]
                parentNode.geometry?.firstMaterial?.diffuse.contents = color
                obj.removeFromParentNode()
                arch_grp[0].addChildNode(parentNode)
            }
        }
    }
}
```

Please advise.
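One thing that stands out in the snippet above: the inner loop clones each child into childNode and assigns the new material to the clone, but then re-parents the original childObj, so the textured clone never reaches the scene; the parent's flattenedClone() also already bakes child geometry into the flattened copy. A hedged alternative is to assign materials in place without cloning or re-parenting, as in the sketch below; the "Wall" prefix check stands in for the removeNumbers(from:) comparison and is an assumption.

```swift
import SceneKit
import UIKit

/// Assigns materials to every geometry-bearing node under the wall group, in place,
/// so doors and windows keep their transforms relative to their parent wall.
func applyWallMaterials(to wallGroup: SCNNode) {
    let wallColor = UIColor(red: 255/255, green: 229/255, blue: 204/255, alpha: 1.0)
    wallGroup.enumerateChildNodes { node, _ in
        guard node.geometry != nil else { return }          // skip pure transform nodes
        let material = SCNMaterial()
        let isWall = (node.name ?? "").hasPrefix("Wall")    // stand-in for removeNumbers(from:) == "Wall"
        material.diffuse.contents = isWall ? wallColor : UIColor.white
        material.isDoubleSided = true
        node.geometry?.materials = [material]
    }
}
```

Keeping doors and windows as children of their wall preserves their relative transforms, which is what cloning and re-adding tends to break.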
Replies: 0 · Boosts: 0 · Views: 478 · Oct ’23
Face Anchor in Reality Composer: Enabling Ball Movement Based on Head Tilts
Using the face anchor feature in Reality Composer, I'm exploring the potential for generating content movement based on facial expressions and head movement. In my current project, I've positioned a horizontal wood plane on the user's face, and I've added some dynamic physics-enabled balls on the wood surface. While I've successfully anchored the wood plane to the user's head movements, I'm facing a challenge with the balls. I'm aiming to have these balls respond to the user's head tilts, effectively rolling in the direction of the head movement. For instance, a tilt to the right should trigger the balls to roll right, and likewise for leftward tilts. However, my attempts thus far have not yielded the expected results, as the balls seem to be unresponsive to the user's head movements. The wood plane, on the other hand, follows the head's motion seamlessly. I'd greatly appreciate any insights, guidance, or possible solutions you may have regarding this matter. Are there specific settings or techniques I should be implementing to enable the balls to respond to the user's head movement as desired? Thank you in advance for your assistance.
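If Reality Composer's built-in physics alone doesn't produce the rolling, one option is to drop down to RealityKit and nudge the balls each frame according to the plane's current tilt: project world gravity onto the plane and push the balls along that downhill direction. The sketch below assumes the balls are ModelEntity instances with dynamic physics bodies and that the face-anchored plane entity is reachable from code; the entity references and force scale are illustrative.

```swift
import RealityKit
import Combine
import simd

var tiltSubscription: Cancellable?

/// Each frame, push the balls "downhill" along the face-anchored plane's current tilt.
func startTiltDrivenRolling(in arView: ARView, plane: Entity, balls: [ModelEntity]) {
    tiltSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
        let worldDown = SIMD3<Float>(0, -1, 0)
        // The plane's up axis in world space; downhill is gravity minus its component along that axis.
        let planeUp = plane.convert(direction: SIMD3<Float>(0, 1, 0), to: nil)
        let downhill = worldDown - planeUp * dot(worldDown, planeUp)
        for ball in balls {
            ball.addForce(downhill * 0.05, relativeTo: nil)   // small nudge; tune the magnitude
        }
    }
}
```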
Replies: 0 · Boosts: 0 · Views: 576 · Oct ’23
ARKit: the entire scene is drifting away
I'm using ARKit with Metal in my iOS app, without ARSCNView. Most of the time it works fine. However, sometimes the whole scene drifts away (especially in the first few seconds), then it sort of comes back, but far from the original place. I don't see this effect in other model viewers, Unity-based or WebAR-based, and surely most of them use ARKit under the hood. Here is a snippet of my code:

```swift
private let session = ARSession()

func initialize() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.maximumNumberOfTrackedImages = 10
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.automaticImageScaleEstimationEnabled = true
    configuration.isLightEstimationEnabled = true
    session.delegate = self
    session.run(configuration)
}

func getCameraViewMat(...) -> simd_float4x4 {
    //...
    return session.currentFrame.camera.viewMatrix(for: .portrait)
}

func createAnchor(...) -> Int {
    // ...
    anchors[id] = ARAnchor(transform: mat)
    session.add(anchor: anchors[id]!)
    return id
}

func getAnchorTransform(...) -> simd_float4x4 {
    //...
    return anchors[id]!.transform
}

func onUpdate(...) {
    // draw session.currentFrame.rawFeaturePoints!.points
    // draw all ARPlaneAnchor
}

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    print("TRACKING STATE CHANGED: \(camera.trackingState)")
}
```

I can see it's not just an anchors problem; everything moves, including the point cloud. The tracking state is normal and changes to limited only after the drift has occurred, which is too late. What can I do to prevent the drift?
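One thing worth checking, offered as a hedged suggestion: ARKit keeps refining anchor (and camera) poses as it corrects its world map, and those refinements arrive through the ARSessionDelegate, so reading a transform cached at creation time can make content appear to drift relative to the corrected world. The sketch below mirrors the anchors dictionary from the snippet above and keeps it in sync with ARKit's updates; the class name and Int keys are assumptions.

```swift
import ARKit

/// Minimal sketch: keep stored anchors in sync with ARKit's refinements so transforms
/// read later reflect drift corrections. Mirrors the dictionary in the post above.
final class AnchorStore: NSObject, ARSessionDelegate {
    private(set) var anchors: [Int: ARAnchor] = [:]
    private var nextID = 0

    func add(_ anchor: ARAnchor, to session: ARSession) -> Int {
        let id = nextID
        nextID += 1
        anchors[id] = anchor
        session.add(anchor: anchor)
        return id
    }

    // ARKit delivers refreshed ARAnchor instances here whenever it refines its world map.
    func session(_ session: ARSession, didUpdate updatedAnchors: [ARAnchor]) {
        for updated in updatedAnchors {
            if let entry = anchors.first(where: { $0.value.identifier == updated.identifier }) {
                anchors[entry.key] = updated
            }
        }
    }
}
```

The early-session drift itself is usually ARKit still establishing tracking; rendering from the latest per-frame anchor and camera transforms at least keeps the virtual content consistent with the corrected map.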
Replies: 0 · Boosts: 0 · Views: 293 · Oct ’23
Vuforia Model Target using ARKit
Hi, Does anyone have any experience using ARKit to emulate something like Vuforia's Model Target, where it will detect a 3D object within an environment corresponding to a 3D model and then overlay the 3D model on top of the real life object? Is it technically feasible or is Vuforia the only option? Thanks!
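ARKit's closest built-in feature is 3D object detection: scan the physical object once with ARObjectScanningConfiguration to produce an ARReferenceObject, ship it in an AR Resource Group, and detect it at runtime via ARWorldTrackingConfiguration.detectionObjects. The sketch below assumes a resource group named "ScannedObjects" and uses a placeholder sphere for the overlay; both are illustrative.

```swift
import ARKit
import RealityKit

/// Sketch: detect a previously scanned real-world object and anchor content on it.
final class ObjectDetectionController: NSObject, ARSessionDelegate {
    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        arView.session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        // Reference objects scanned earlier with ARObjectScanningConfiguration, stored in an
        // AR Resource Group named "ScannedObjects" (the name is an assumption).
        configuration.detectionObjects =
            ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects", bundle: nil) ?? []
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let objectAnchor as ARObjectAnchor in anchors {
            // Pin content to the detected object; replace the sphere with your overlay model.
            let anchorEntity = AnchorEntity(anchor: objectAnchor)
            anchorEntity.addChild(ModelEntity(mesh: .generateSphere(radius: 0.05)))
            arView.scene.addAnchor(anchorEntity)
        }
    }
}
```

Unlike Vuforia's Model Targets, this matches a scan of the real, textured, mostly static object rather than a CAD model, so it is not a one-for-one replacement.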
Replies: 0 · Boosts: 0 · Views: 192 · Oct ’23
Transitioning from SceneKit to RealityKit - shadows and custom shaders
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR) and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application targets iOS and macOS (with AR only on iOS). With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be supported, and isn't supported at all on visionOS. We'd like to use RealityKit for 3D rendering on all platforms (macOS, iOS and visionOS), in non-AR and AR mode where appropriate. So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView. However, now that we've reached shading control, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by the light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive; there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something? Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be about defining a custom material, whereas we want to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint values. On visionOS the lack of support for CustomMaterial is of course a shame, but I would hope something similar can be achieved with Reality Composer? We can live without custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a more limited feature set on visionOS, as long as we can match our existing feature set on our existing platforms. What am I missing?
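For the shadow-catcher part, one commonly suggested approach on iOS/macOS RealityKit, offered here as a hedged sketch rather than a confirmed equivalent of SceneKit's shadow-only plane: a plane with OcclusionMaterial is invisible but is reported to receive shadows from a directional light that has shadows enabled. Sizes, intensity, and shadow settings below are illustrative, and this does not address the custom BRDF question.

```swift
import RealityKit

/// Builds an anchor with an invisible shadow-catcher plane and a shadow-casting light.
func makeShadowCatcherScene() -> AnchorEntity {
    let root = AnchorEntity(world: .zero)

    // Invisible plane that only shows shadows falling on it.
    let ground = ModelEntity(
        mesh: .generatePlane(width: 2, depth: 2),
        materials: [OcclusionMaterial()])
    root.addChild(ground)

    // Directional light that casts shadows onto the occlusion plane.
    let light = DirectionalLight()
    light.light.intensity = 5_000
    light.shadow = DirectionalLightComponent.Shadow()
    light.look(at: .zero, from: [1, 2, 1], relativeTo: nil)
    root.addChild(light)

    return root
}
```

On visionOS, GroundingShadowComponent covers simple grounding shadows for individual entities, though it is not a general-purpose shadow catcher either.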
Replies: 1 · Boosts: 1 · Views: 835 · Oct ’23
Deploying an app to an iPhone running an iOS older than the app's deployment target?
I have an iPhone 8 Plus running iOS 16.7.1. I have made a very simplistic app, only for personal use, that I install from time to time on my iPhone. It doesn't even need the internet. But I get an error like this: iPhone’s iOS 16.7.1 doesn’t match AugmentedRealityApp.app’s iOS 17.0 deployment target. I'd like to hear your opinions, or a comment if anyone has tried something similar. Thanks in advance.
Replies: 0 · Boosts: 0 · Views: 422 · Oct ’23
Using ARView's project(_:) method to convert to screen coordinates.
I'm trying to understand how to use the project(_:) function provided by ARView to convert 3D model coordinates to 2D screen coordinates, but am getting unexpected results. Below is the default Augmented Reality App project, modified to have a single button that, when tapped, should place a circle over the center of the provided cube. However, when the button is pressed, the circle's position does not line up with the cube. I've looked at the documentation for project(_:), but it doesn't give any details about how to convert a point from model coordinates to "the 3D world coordinate system of the scene". Is there better documentation somewhere on how to do this conversion?

```swift
// ContentView.swift
import SwiftUI
import RealityKit

class Coordinator {
    var arView: ARView?
    var anchor: AnchorEntity?
    var model: Entity?
}

struct ContentView : View {
    @State var coord = Coordinator()
    @State var circlePos = CGPoint(x: -100, y: -100)

    var body: some View {
        ZStack {
            ARViewContainer(coord: coord).edgesIgnoringSafeArea(.all)
            VStack {
                Spacer()
                Circle()
                    .frame(width: 10, height: 10)
                    .foregroundColor(.red)
                    .position(circlePos)
                Button(action: {
                    showMarker()
                }, label: {
                    Text("Place Marker")
                })
            }
        }
    }

    func showMarker() {
        guard let arView = coord.arView else { return }
        guard let model = coord.model else { return }
        guard let anchor = coord.anchor else { return }
        print("Model position is: \(model.position)")
        // convert position into anchor's space
        let modelPos = model.convert(position: model.position, to: anchor)
        print("Converted position is: \(modelPos)")
        // convert model locations to screen coordinates
        circlePos = arView.project(modelPos) ?? CGPoint(x: -1, y: -1)
        print("circle position is now \(circlePos)")
    }
}

struct ARViewContainer: UIViewRepresentable {
    var coord: Coordinator

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        coord.arView = arView

        // Create a cube model
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        coord.model = model

        // Create horizontal plane anchor for the content
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        coord.anchor = anchor

        // Add the horizontal plane anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#Preview {
    ContentView(coord: Coordinator())
}
```
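A hedged sketch of the adjustment the post above appears to need: project(_:) expects a world-space point, and model.position is already expressed in the anchor's (parent's) space, so converting it again with model.convert(position:to:) shifts the result. Reading the position relative to nil (the world) before projecting should line the circle up with the cube; this is intended as a drop-in replacement for showMarker() inside the ContentView above.

```swift
func showMarkerWorldSpace() {
    guard let arView = coord.arView, let model = coord.model else { return }

    // World-space position of the cube's origin.
    let worldPos = model.position(relativeTo: nil)

    // Screen-space point in the ARView's coordinate space, if the point is in front of the camera.
    if let screenPoint = arView.project(worldPos) {
        circlePos = screenPoint
    }
}
```

Note also that the Circle is positioned inside the VStack, so its coordinate space may not match the full-screen ARView; placing it in a full-screen overlay, or converting coordinates with a GeometryReader, may be needed as well.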
Replies: 1 · Boosts: 0 · Views: 552 · Oct ’23
Getting Child ModelEntity from Reality Composer Pro
Hi, I have a file in Reality Composer Pro that has a deep hierarchy. I downloaded it from an asset store, so I don't know how it is built. As you can see from the screenshot, I'm trying to access the banana and banana_whole entities as ModelEntity, but I'm not able to load them as ModelEntity in Xcode. I can load them as Entity and show them in the visionOS Simulator, but not as ModelEntity, which I need in order to do some operations. What should I do?
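A hedged sketch of one way to reach the renderable part: the named entity exported from Reality Composer Pro is often just a transform group, and the actual ModelEntity (the node carrying a ModelComponent) sits somewhere below it, so searching the subtree usually finds it. The names in the usage comment ("Scene", "banana", realityKitContentBundle) follow the post and the standard visionOS template and are assumptions.

```swift
import RealityKit

/// Depth-first search for the first descendant that is a ModelEntity with a mesh.
func firstModelEntity(in entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity, model.model != nil {
        return model
    }
    for child in entity.children {
        if let found = firstModelEntity(in: child) {
            return found
        }
    }
    return nil
}

// Usage sketch (assumed names):
// let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
// if let banana = scene.findEntity(named: "banana"),
//    let bananaModel = firstModelEntity(in: banana) {
//     // operate on bananaModel.model, materials, collision, etc.
// }
```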
Replies: 2 · Boosts: 0 · Views: 624 · Oct ’23
Index out of range in CoreRE framework
Hi, I encountered an issue when running the Underwater RealityKit app example (https://developer.apple.com/documentation/realitykit/building_an_immersive_experience_with_realitykit). The target device was an iPhone 14 Pro running iOS 17.1. The same issue occurred in my own project, so the bug is not in the Underwater app. The bug itself:

```
Crashed Thread: Render
Func: re::DataArray<re::MeshInstance>::get(re::DataArrayHandle<re::MeshInstance>) const + 80
Message: Index out of range in operator[]. index = 18446744073709551615, maximum = 112
```
Replies: 0 · Boosts: 1 · Views: 279 · Oct ’23
Exporting scripts to a USDZ file in Reality Composer Pro
Hello. I've started exploring the new features in Reality Composer Pro and noticed that it now supports adding custom scripts as components to any object in the scene. I'm curious about the following: will these scripts still work if I export such a scene to a USDZ file and open it using Apple Quick Look? For instance, I want to add a 3D button and a cube model. When I press (touch) the button, I want to change the material or material color to another one using a script component. Is such functionality possible?
Replies: 0 · Boosts: 0 · Views: 536 · Oct ’23