Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

200 Posts
Post · Replies · Boosts · Views · Activity

visionOS: Issue placing animated objects on detected planes
Hi, I used the template code for plane detection and placing models on detected planes from https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes That source code does not copy the animations from the preview model to the placed model, so I modified it to do a manual copy of animations and textures. A function called materialize() does this, and I was able to modify it so that the placed models now animate. The issue appears when I apply gestures to them, like drag or rotate: for models that go through this logic I'm unable to add gestures, even though I'm making sure that Collision and Input Target components are set on the placed models. Has anyone been able to get this working, or is it even possible?

My materialize function:

func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")

        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }

        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform

        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }

        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")

        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes,
                                                    isStatic: false,
                                                    filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor,
                                        renderContentToClone: modelEntity,
                                        shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor,
                                        renderContentToClone: clonedRenderContent,
                                        shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}

My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize():

class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?
    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content's scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes,
                                                        mass: 1.0,
                                                        material: physicsMaterial,
                                                        mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes,
                                          isStatic: false,
                                          filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object's center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}

Thanks
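A minimal sketch of how the gesture side is usually wired up on visionOS, assuming a RealityView hosting the placedObject returned by materialize() above. The key points are that the CollisionComponent and InputTargetComponent must live on the same entity (or an ancestor of it) that the targeted gesture resolves to, and that the collision shapes must match the cloned geometry rather than the original preview entity:

// Components on the entity the gesture should hit (names from the post).
placedObject.components.set(CollisionComponent(shapes: shapes, isStatic: false))
placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

// On the SwiftUI view hosting the RealityView:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Convert the drag location into scene space and move the placed
            // object. If value.entity is a cloned child, walk up to the
            // PlacedObject root first so visuals and collision stay together.
            let scenePosition = value.convert(value.location3D, from: .local, to: .scene)
            value.entity.setPosition(scenePosition, relativeTo: nil)
        }
)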
4
0
354
2w
Tracking geographic locations in AR - Sample App Code
Hello, I was looking into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar Unfortunately, the Download button links to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project instead. The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth I'm wondering whether those links are deliberately broken because of possible deprecations. Thanks to any Apple engineer willing to look into that.
2
0
355
3w
Place a box on a wall or on the floor
Hi, I wanted to do something quite simple: put a box on a wall or on the floor.

My box:

let myBox = ModelEntity(
    mesh: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    materials: [SimpleMaterial(color: .systemRed, isMetallic: false)],
    collisionShape: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    mass: 0.0)

For that I used plane detection to identify the walls and floor in the room. Then with SpatialTapGesture I was able to retrieve the position where the user is looking and taps:

let position = value.convert(value.location3D, from: .local, to: .scene)

And then positioned my box:

myBox.setPosition(position, relativeTo: nil)

When I tested it, I realized that the box was not parallel to the wall but sat at a slightly inclined angle. I also realized that if I tried to put my box on the wall to my left, the box was placed perpendicular to that wall rather than on it. After various searches and several attempts I ended up playing with transform.matrix to identify whether the plane is a wall or a floor, and whether it is in front of me or to the side, and set up a rotation on the box to "place" it on the wall or the floor:

let surfaceTransform = surface.transform.matrix
let surfaceNormal = normalize(surfaceTransform.columns.2.xyz)

let baseRotation = simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 1, 0))

var finalRotation: simd_quatf
if acos(abs(dot(surfaceNormal, SIMD3<Float>(0, 1, 0)))) < 0.3 {
    logger.info("Surface: ceiling/floor")
    finalRotation = simd_quatf(angle: surfaceNormal.y > 0 ? 0 : .pi, axis: SIMD3<Float>(1, 0, 0))
} else if abs(surfaceNormal.x) > abs(surfaceNormal.z) {
    logger.info("Surface: left/right")
    finalRotation = simd_quatf(angle: surfaceNormal.x > 0 ? .pi/2 : -.pi/2, axis: SIMD3<Float>(0, 1, 0))
} else {
    logger.info("Surface: front/back")
    finalRotation = baseRotation
}

Playing with matrices is not really my thing, so I don't know if I'm doing it right. Could you tell me if my tests for the orientation of the walls are correct? During my tests I don't always correctly identify whether the wall is in front of me or to the side. Is this generally the right way to do it? Is there an easier way to do this?

Regards
Tof
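A hedged sketch of an alternative that avoids classifying walls manually: derive the rotation directly from the plane's normal (assuming, as in the snippet above, that the surface's local Z axis is its normal), so the box's thin axis lines up with the plane whichever way it faces. The slight tilt in the original approach comes from only setting the position and leaving the box at its default identity orientation.

let surfaceTransform = surface.transform.matrix
let col2 = surfaceTransform.columns.2
let surfaceNormal = normalize(SIMD3<Float>(col2.x, col2.y, col2.z))

// Rotation that takes the box's local +Z (the 0.01 m thin side) onto the plane normal.
let alignRotation = simd_quatf(from: SIMD3<Float>(0, 0, 1), to: surfaceNormal)

// Offset by half the box depth so it rests on the surface instead of straddling it.
myBox.setPosition(position + surfaceNormal * 0.005, relativeTo: nil)
myBox.setOrientation(alignRotation, relativeTo: nil)

If the normal turns out to live on a different column of the transform for the API in use, the same idea applies with that axis instead.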
0
0
317
Feb ’25
SceneKit Transparent Material Self-Overlapping Issue (Front Face Overlapping)
Description: I'm developing an AR effect using SceneKit and applying a transparent material to a face mesh. However, I'm facing an issue where the front faces of the mesh overlap each other, causing incorrect rendering.

Problem: The front faces of the mesh overlap with each other when transparency is applied. This causes areas like the cheeks to be visible through the nose, even though they should be occluded.

Expected Behavior: The material should behave as if it were opaque to itself: overlapping front faces should be occluded properly, while still allowing transparency for background elements.

Actual Behavior: The mesh renders its own front faces incorrectly, making parts of the face visible through others when they should be blocked.

What I Have Tried:

testMaterial.writesToDepthBuffer = true
testMaterial.readsFromDepthBuffer = true

Question:
👉 How can I prevent SceneKit's transparent material from rendering overlapping front faces?
👉 Is there a way to force SceneKit to treat its own mesh as opaque for itself while still being transparent to the background?
👉 Does SceneKit support a proper depth pre-pass or an equivalent to Unity's ZWrite shaders to solve this issue?

Attached screenshots demonstrate the problem visually. Any help would be greatly appreciated! 🚀
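One setting worth trying, sketched below: SceneKit's single-layer transparency mode performs an internal depth pass so only the frontmost surface of the mesh is blended, which is essentially the "opaque to itself, transparent to the background" behavior described above (the transparency value here is just a placeholder):

// Blend only the closest surface of the mesh, so overlapping front faces
// (nose over cheek) occlude each other while the background still shows through.
testMaterial.transparencyMode = .singleLayer
testMaterial.writesToDepthBuffer = true
testMaterial.transparency = 0.5 // placeholder value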
0
0
309
Feb ’25
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly, and nonlinearly, from the real distances. For distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values; for example, the generated depthmap.tiff reads 1.9 m where the real distance is 3 meters. Below is my code for generating a TIFF file to record the depth map data:

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured with traditional AVFoundation on the same hardware device is much clearer, although the values don't seem to be in meters.
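For what it's worth, the LiDAR scene-depth map is low resolution (256×192) and gets interpolated up, which likely accounts for much of the blur. A hedged sketch for sampling a depth value together with its confidence, since far or oblique surfaces are often reported with low confidence and may be better filtered than trusted (helper name is made up):

import ARKit

// Read the depth and confidence for the frame's center pixel.
func centerDepth(of frame: ARFrame) -> (meters: Float, confidence: ARConfidenceLevel)? {
    guard let sceneDepth = frame.sceneDepth,            // or frame.smoothedSceneDepth
          let confidenceMap = sceneDepth.confidenceMap else { return nil }

    let depthMap = sceneDepth.depthMap
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer {
        CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
        CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly)
    }

    let w = CVPixelBufferGetWidth(depthMap)
    let h = CVPixelBufferGetHeight(depthMap)
    let depthRow = CVPixelBufferGetBaseAddress(depthMap)!
        .advanced(by: (h / 2) * CVPixelBufferGetBytesPerRow(depthMap))
        .assumingMemoryBound(to: Float32.self)
    let confRow = CVPixelBufferGetBaseAddress(confidenceMap)!
        .advanced(by: (h / 2) * CVPixelBufferGetBytesPerRow(confidenceMap))
        .assumingMemoryBound(to: UInt8.self)

    let conf = ARConfidenceLevel(rawValue: Int(confRow[w / 2])) ?? .low
    return (depthRow[w / 2], conf)
}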
1
0
395
Feb ’25
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.

Bug Description: The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash occurs with the following pattern: the user successfully scans the first room with multiple hotspots (working correctly), stops scanning, and moves to a new room. In the new room, the first 1-2 hotspots work correctly, then the application crashes when attempting to scan additional hotspots.

Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors

Steps to Reproduce:
1. Start room scan
2. Complete multiple hotspot captures in first room
3. Stop scanning
4. Start new room scan
5. Capture 1-2 hotspots successfully
6. Attempt additional hotspot captures -> crashes

Attempted Solutions:
- Implemented anchor cleanup between sessions
- Added position validation before anchor placement
- Implemented ARSession error handling
- Added proper thread management for AR operations

Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight

Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor

Question: Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.

Code and full crash logs can be provided if needed.
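Not a confirmed fix, but a sketch of the teardown pattern worth ruling out first: stop the capture, remove the custom guidance anchors added to the shared ARSession for the previous room, and start a fresh RoomCaptureSession per room rather than reusing one instance. The "hotspot" anchor-name prefix and the helper are hypothetical.

import RoomPlan
import ARKit

func startNewRoomScan(arView: ARSCNView, currentSession: inout RoomCaptureSession) {
    currentSession.stop()

    // Remove the custom guidance anchors added for the previous room, so no
    // stale anchors are alive when the next capture session starts.
    if let frame = arView.session.currentFrame {
        for anchor in frame.anchors where anchor.name?.hasPrefix("hotspot") == true {
            arView.session.remove(anchor: anchor)
        }
    }

    // Use a fresh RoomCaptureSession per room instead of reusing one instance.
    let newSession = RoomCaptureSession()
    newSession.run(configuration: RoomCaptureSession.Configuration())
    currentSession = newSession
}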
2
1
402
Feb ’25
Automatic Plane Measurements like in the Apple Measure App
I'm working on an iOS app that needs to measure the area of planes or surfaces, such as the length and width of objects, just like the Apple Measure app does. I've been exploring ARKit, but I'm curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes. Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing this. Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I'd love to hear from anyone who's tackled something similar. Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
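A small sketch of the building block ARKit does provide, assuming an ARWorldTrackingConfiguration with planeDetection enabled and an ARSCNViewDelegate: each detected ARPlaneAnchor carries an estimated rectangular extent that can be read directly, which already gives a width/length estimate for a surface like a box top.

import ARKit

// ARSCNViewDelegate callback; fires as ARKit refines each detected plane.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let plane = anchor as? ARPlaneAnchor else { return }

    // iOS 16+: fitted rectangle of the detected surface, in meters.
    let width = plane.planeExtent.width
    let length = plane.planeExtent.height
    print("Plane \(plane.identifier): \(width) m x \(length) m, classification: \(plane.classification)")
}

For non-rectangular outlines, ARPlaneAnchor.geometry exposes the boundary vertices, and on LiDAR devices RoomPlan reports box-style dimensions for objects it recognizes.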
5
0
525
Jan ’25
LockedCameraCapture with ARKit based App
Hello, I'm building a camera app around ARKit. I've created a Lock Screen Capture Extension and added a control to launch my camera app, but when I launch the extension I see just a black screen with no hint of any errors; attaching the debugger to the running process shows no logs either. I'm wondering: is LockedCameraCapture supported with ARView and ARSession? ARKit was featured in a WWDC video with a camera-app use case, and the introduction of captureHighResolutionFrame(completion:) made me pick it up as an interesting camera app backbone, but if lock screen capture is not possible with it, I'll have to refactor my codebase.
1
0
366
Jan ’25
SwiftUI Manual Orientation Control for Views - Solution and a Bug!
I am working on a project that contains a QuickLook view and some ARViews. I want to restrict the entire app to portrait orientation, but I want to allow the ARView to support both portrait and landscape. If I restrict the app to portrait in the Deployment Info settings, we can still turn the device to landscape in the ARView; however, there is an issue with some spatial audio files within the digital experience: some spatial audio items are placed appropriately, and others are panned oddly to the left. If we allow Landscape Left and Right in the Deployment Info settings, all spatial audio behaves appropriately. So we need to lock every other view to portrait and only allow portrait and landscape on the ARView. I'm not smart enough to know how to do that, but I found this excellent package on GitHub, and it works as expected: https://github.com/wvteijlingen/swiftui-interface-orientation However, when we wrap SwiftUI with UIKit, it appears that every single view that contains an ARView is initialized at launch even though it is not visible, so when the app launches it is running multiple ARViews at once. It appears we need some kind of lazy loading so this doesn't occur, but again, I am not knowledgeable enough for this yet. I tried to wrap it all in a LazyVStack, and I tried a LazyView struct, but I couldn't get it to build appropriately. I feel like this might be a common thing, so maybe there's already a simple answer I'm not able to locate? Any ideas?
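For the lazy-loading part, a minimal sketch of the usual LazyView wrapper: it stores the content closure and only builds it when the view's body is actually evaluated, so an ARView-backed screen isn't constructed at launch. MyARContainerView is a placeholder name.

import SwiftUI

struct LazyView<Content: View>: View {
    let build: () -> Content

    // Store the expression without evaluating it.
    init(_ build: @autoclosure @escaping () -> Content) {
        self.build = build
    }

    var body: some View {
        build() // evaluated only when this view actually renders
    }
}

// Usage (hypothetical view name):
// NavigationLink("AR Experience") { LazyView(MyARContainerView()) }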
1
0
360
Jan ’25
Slow autofocus on iPhone 16 Pro with the ARKit camera
I have recently started testing ARKit on an iPhone 16 Pro and have noticed that the autofocus reaction on this device is much slower than on other devices. For example, if I point the camera at a close object, autofocus takes 4-5 seconds to stabilize; the focal length is adjusted very slowly. In some cases (although this is rare) autofocus seems almost stuck and requires a bit of device movement to trigger. This is quite problematic when using ARKit features like image and object detection, as the detection algorithms struggle with out-of-focus images. The problem is limited to ARKit: autofocus is significantly more responsive when the standard AVFoundation camera API is used. This behavior is easy to reproduce with any of the ARKit samples, such as https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes Is anybody else experiencing this problem?
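Autofocus is on by default for ARWorldTrackingConfiguration, so there is no obvious switch to flip, but a quick sketch of two settings worth checking while comparing against AVFoundation (the session variable is assumed to be your ARSession):

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // default, but worth confirming nothing disabled it

// Trying a different video format (e.g., the recommended 4K format on supported
// devices) changes the camera configuration and is a useful data point for a bug report.
if let format = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
    configuration.videoFormat = format
}

session.run(configuration)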
4
1
543
Jan ’25
ARKit: Prevent Asset Clipping
Hello Apple Team, I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters). When enabling people occlusion, the 3D asset gets clipped when moved far away. Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping? I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously. Thank you for your help! Kind regards
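ARKit doesn't expose a per-asset occlusion distance, but frame semantics can be changed on a running session, so one hedged workaround is to toggle person segmentation globally depending on which content the user is currently focused on, for example:

import ARKit

func setPeopleOcclusion(_ enabled: Bool, on session: ARSession) {
    guard let configuration = session.configuration as? ARWorldTrackingConfiguration else { return }

    if enabled,
       ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        configuration.frameSemantics.remove(.personSegmentationWithDepth)
    }

    // Re-running without reset options keeps the existing world tracking state.
    session.run(configuration)
}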
1
0
420
Jan ’25
Combining ARKit Face Tracking with High-Resolution AVCapture and Perspective Rendering on Front Camera
Hello Apple Developer Community,

We're developing an application using the front camera that requires both real-time ARKit face tracking/guidance and the capture of high-resolution still images via AVCaptureSession. Our goal is to leverage ARKit's depth and face data to render a captured image from another perspective post-capture, maintaining high image quality.

Our Approach:
Real-Time ARKit Guidance: Utilize ARKit (e.g., ARFaceTrackingConfiguration) for continuous face tracking, depth, and scene understanding to guide the user in real time.
High-Resolution Capture Transition: At the moment of capture, we plan to pause the ARKit session and switch to an AVCaptureSession to take a high-resolution image. We assume that for a front-facing image the subject's face is directly front-on and the relative pose between the face and camera remains the same during the transition; the only variation we expect is a change in distance. Our intention is to minimize the delay between the last ARKit frame and the high-res capture to maintain temporal consistency.
Post-Processing Perspective Rendering: Using the last ARKit face data (depth, pose, and landmarks) along with the high-resolution 2D image, we aim to render the scene from another perspective. We want to correct the perspective of the 2D image using SceneKit or RealityKit, leveraging the collected ARKit scene information to achieve a natural, high-quality rendering from a different viewpoint. The rendering should match the quality of a normally captured high-resolution image, adjusting for the difference in distance while using the stored ARKit data to correct perspective.

Our Questions:
1. Session Transition Best Practices: What are the recommended best practices to seamlessly pause ARKit and switch to a high-resolution AVCapture session on the front camera? How can we minimize user movement or other issues during this brief transition, given our assumption that the face-camera pose remains largely consistent except for distance changes?
2. Data Integration for Perspective Rendering: How can we effectively integrate stored ARKit face, depth, and pose data with the high-res image to perform accurate perspective correction or rendering from another viewpoint? Given that we assume the relative pose is constant except for distance, are there strategies or APIs that leverage this assumption to simplify the perspective transformation?
3. Perspective Correction with SceneKit/RealityKit: What techniques or workflows using SceneKit or RealityKit are recommended for correcting the perspective of a captured 2D image based on ARKit scene data? How can we use these frameworks to render the high-resolution image from an alternative perspective while maintaining image quality and fidelity?
4. Pitfalls and Guidelines: What common pitfalls should we be aware of when combining ARKit tracking data with high-res capture and post-processing for perspective rendering? Are there performance considerations, recommended thresholds for acceptable temporal consistency, or validation techniques to ensure the ARKit data remains applicable at the moment of high-res capture?

We appreciate any advice, sample code references, or documentation pointers that could assist us in implementing this workflow effectively. Thank you!
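On the first question only, a rough sketch of the hand-off step (all names hypothetical; the AVCaptureSession is assumed to be fully configured up front with the front-camera input and the photo output already added, so the gap stays short). Note that ARKit's own captureHighResolutionFrame(completion:) on iOS 16+ can sometimes avoid the session switch entirely if its output quality is sufficient.

import ARKit
import AVFoundation

final class CaptureCoordinator: NSObject, AVCapturePhotoCaptureDelegate {
    let arSession: ARSession
    let photoSession: AVCaptureSession      // preconfigured: front camera input + photoOutput added
    let photoOutput: AVCapturePhotoOutput
    private(set) var lastFaceAnchor: ARFaceAnchor?
    private(set) var lastCamera: ARCamera?

    init(arSession: ARSession, photoSession: AVCaptureSession, photoOutput: AVCapturePhotoOutput) {
        self.arSession = arSession
        self.photoSession = photoSession
        self.photoOutput = photoOutput
        super.init()
    }

    func captureHighResolutionStill() {
        // 1. Snapshot the ARKit state needed later for perspective correction.
        if let frame = arSession.currentFrame {
            lastFaceAnchor = frame.anchors.compactMap { $0 as? ARFaceAnchor }.first
            lastCamera = frame.camera
        }
        // 2. Release the camera, then hand it to AVFoundation and shoot.
        arSession.pause()
        DispatchQueue.global(qos: .userInitiated).async {
            self.photoSession.startRunning() // blocks until the session is running
            self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        // Pair photo.fileDataRepresentation() with lastFaceAnchor / lastCamera here.
    }
}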
2
0
531
Jan ’25
Publishing changes from within view updates is not allowed - SwiftUI with ARKit
Hello, I'm encountering the error "Publishing changes from within view updates is not allowed, this will cause undefined behavior" while developing an app using SwiftUI and ARKit. I have an ObjectTracking class where I update some @Published variables inside a function called processObjectAttributes after detecting objects. However, when I try to update these state variables from the view (like isPositionChecked, etc.), the error keeps appearing. Here is a simplified version of my code:

class ObjectTracking: ObservableObject {
    @Published var isPositionChecked: Bool = false
    @Published var isSizeChecked: Bool = false
    @Published var isOrientationChecked: Bool = false

    func checkAttributes(objectAnchor: ARObjectAnchor,
                         _ left: ARObjectAnchor.AttributeLocation,
                         _ right: ARObjectAnchor.AttributeLocation? = nil,
                         threshold: Float) -> Bool {
        let attributes = objectAnchor.attributes
        guard let leftValue = attributes[left]?.floatValue else { return false }
        let rightValue = right != nil ? attributes[right!]?.floatValue ?? 0 : 0
        return leftValue > threshold && (right == nil || rightValue > threshold)
    }

    func isComplete(objectAnchor: ARObjectAnchor) -> Bool {
        isPositionChecked = checkAttributes(objectAnchor: objectAnchor, .positionLeft, .positionRight, threshold: 0.5)
        isSizeChecked = checkAttributes(objectAnchor: objectAnchor, .sizeLeft, .sizeRight, threshold: 0.3)
        isOrientationChecked = checkAttributes(objectAnchor: objectAnchor, .orientationLeft, .orientationRight, threshold: 0.3)
        return isPositionChecked && isSizeChecked && isOrientationChecked
    }

    func processObjectAttributes(objectAnchor: ARObjectAnchor) {
        currentObjectAnchor = objectAnchor
    }
}

In my view I am using @ObservedObject to observe the state of these variables, but the error persists when I try to update them during view rendering. Could anyone help me understand why this error occurs and how to avoid it? I understand that state should not be updated during view rendering, but I can't find a solution that works in this case. Thank you in advance for your help!
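A sketch of the usual workaround, reusing the names from the snippet above: move the @Published mutations out of the view-update pass by deferring them to the next main-queue turn (or by triggering the check from .onChange/.task rather than directly inside body):

func processObjectAttributes(objectAnchor: ARObjectAnchor) {
    // Defer the published mutations so they don't run while SwiftUI is
    // evaluating a view body.
    DispatchQueue.main.async { [weak self] in
        guard let self else { return }
        self.isPositionChecked = self.checkAttributes(objectAnchor: objectAnchor, .positionLeft, .positionRight, threshold: 0.5)
        self.isSizeChecked = self.checkAttributes(objectAnchor: objectAnchor, .sizeLeft, .sizeRight, threshold: 0.3)
        self.isOrientationChecked = self.checkAttributes(objectAnchor: objectAnchor, .orientationLeft, .orientationRight, threshold: 0.3)
        self.currentObjectAnchor = objectAnchor // as declared elsewhere in the class
    }
}

The view should then read the published booleans rather than calling isComplete(objectAnchor:) from inside body, since that call mutates published state during rendering.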
0
0
222
Jan ’25
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a Spatial Tracking Session and an Anchor Entity to detect the floor, but then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but they left out a lot of information.

.gesture(
    SpatialTapGesture(coordinateSpace: .immersiveSpace)
        .targetedToAnyEntity()
        .onEnded { value in
            handleTapOnFloor(value: value)
        }
)

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision to an AnchorEntity when we don't know its size or shape? I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project Apple released does not contain any of these features. I would like to be able to:

Detect the floor plane
Get the position/transform of the floor plane
Add a collider to the floor plane
Enable collisions and physics on the floor plane
Enable gestures on the floor plane

It seems to me that the Anchor Entity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization; it is just a point, not a plane or rect that I can use. I've tried manually calculating the collision shape after the anchor is detected, but nothing that I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do anything at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor. Is there any way at all, with Spatial Tracking Session and Anchor Entity, to get the actual plane that was detected?

struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel", {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            })
        }
    }
}
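A hedged workaround sketch while AnchorEntity keeps the detected plane's geometry opaque: give the floor anchor a generously sized, thin static collision slab so taps, physics, and gestures have something to hit (the 10 m extent is arbitrary). To get the actual detected plane with its extent and boundary, the lower-level ARKitSession/PlaneDetectionProvider API is the route that exposes PlaneAnchor data directly.

// Attach a large, thin collision slab at the floor anchor's origin.
let floorShape = ShapeResource.generateBox(width: 10, height: 0.001, depth: 10)
floorAnchor.components.set(CollisionComponent(shapes: [floorShape], isStatic: true))
floorAnchor.components.set(InputTargetComponent())
floorAnchor.components.set(PhysicsBodyComponent(shapes: [floorShape], mass: 0, mode: .static))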
5
0
665
Jan ’25
Comparing colors of two ModelEntities
I want to compare the colors of two model entities (spheres). How can I do it? The method I'm currently trying to apply is as follows:

if case let .color(controlColor) = controlMaterial.baseColor, controlColor == .green {
    // Flip target sphere colour
    if let targetMaterial = targetsphere.model?.materials.first as? SimpleMaterial,
       case let .color(targetColor) = targetMaterial.baseColor, targetColor == .blue {
        targetsphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // Change to |1⟩
    } else {
        targetsphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // Change to |0⟩
    }
}

This property (baseColor) was deprecated in iOS 15.0 in favor of 'color', but I cannot compare the color values to each other. 👾
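A sketch using the non-deprecated property: SimpleMaterial.color exposes a UIColor tint that can be read back, but the tint you read may not compare equal with == to the UIColor you originally set (color-space conversion), so comparing RGBA components with a small tolerance is safer. The helper names and the controlSphere/targetSphere entities are placeholders for the two spheres.

import UIKit
import RealityKit

func rgba(_ color: UIColor) -> [CGFloat]? {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    guard color.getRed(&r, green: &g, blue: &b, alpha: &a) else { return nil }
    return [r, g, b, a]
}

func colorsMatch(_ lhs: UIColor, _ rhs: UIColor, tolerance: CGFloat = 0.01) -> Bool {
    guard let l = rgba(lhs), let r = rgba(rhs) else { return false }
    return zip(l, r).allSatisfy { abs($0 - $1) <= tolerance }
}

if let controlMaterial = controlSphere.model?.materials.first as? SimpleMaterial,
   colorsMatch(controlMaterial.color.tint, .green) {
    // Flip the target sphere's color, as in the snippet above.
    if let targetMaterial = targetSphere.model?.materials.first as? SimpleMaterial,
       colorsMatch(targetMaterial.color.tint, .blue) {
        targetSphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)]
    } else {
        targetSphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)]
    }
}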
1
0
487
Jan ’25
Persisting Anchors in RealityView with ARMode on iOS
Platform: iOS 18
Tech: RealityView

Hi! I was wondering if RealityView now provides a way for its session to persist anchor data in a world, such that the anchor locations in one session can be saved and loaded in another session that restores the exact same anchor positions. I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under its hood but does not expose the ARKit session info publicly to client code. So I was wondering if there's a SwiftUI + RealityView approach that can help me achieve a similar goal: come back to the same location and see the object in exactly the same place. Thanks!
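A sketch under the assumption that switching the container is acceptable: on iOS, ARView (unlike RealityView) does expose its ARSession, so one pragmatic approach is to host the AR content in an ARView wrapped in UIViewRepresentable and persist/restore an ARWorldMap there.

import ARKit
import RealityKit

func saveWorldMap(from arView: ARView, to url: URL) {
    arView.session.getCurrentWorldMap { worldMap, error in
        guard let worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

func restoreWorldMap(into arView: ARView, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}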
0
1
400
Jan ’25