Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

147 Posts

Post

Replies

Boosts

Views

Activity

Adding reference image failed in VisionPro
I am using ARKit to detect images in visionOS. However, I am running into an issue when adding the reference images: some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times they are all added without any problem. I do not know why this happens, and I want them all to be added reliably.
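A minimal sketch of how the reference images might be loaded and checked before tracking starts, assuming they live in an asset-catalog AR Resource Group (the group name "AR Resources" and the logging are assumptions, not taken from the post). Images without enough visual detail, or without a physical size set in the asset catalog, are a common reason a reference image fails to load:

import ARKit

// Sketch: load reference images from an AR Resource Group, log which ones
// actually loaded, then run image tracking. "AR Resources" is an assumed name.
func startImageTracking(session: ARKitSession) async throws {
    let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    for image in referenceImages {
        // If an image is missing here, inspect its detail and physical size in the asset catalog.
        print("Loaded reference image:", image.name ?? "<unnamed>")
    }

    guard ImageTrackingProvider.isSupported else { return }
    let provider = ImageTrackingProvider(referenceImages: referenceImages)
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        print("Image anchor update:", update.anchor.referenceImage.name ?? "<unnamed>", update.event)
    }
}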
0
0
248
Mar ’25
App crashes after multiple transitions to screen containing AR Kit using SwiftUI NavigationStack
Hello. I am currently building an app using AR Kit. As for the UI, I am using SwiftUI and NavigationStack + NavigationLink for navigation and screen transitions! Here I need to go back and forth between the AR screen and other screens. If the number of screen transitions is small, this is not a problem. However, if the number of screen transitions increases to 10 or 20, it crashes somewhere. We are struggling with this problem. (The nature of the application requires multiple screen transitions.) The crash log showed the following. error: read memory from 0x1e387f2d4 failed AR_Crash_Sample-2025-03-07-115914.txt Incident Identifier: B23D806E-D578-4A95-8828-2A1E8D6BB7F8 Beta Identifier: 924A85AB-441C-41A7-9BC2-063940BDAF32 Hardware Model: iPhone16,1 Process: AR_Crash_Sample [2375] Path: /private/var/containers/Bundle/Application/FAC3D662-DB10-434E-A006-79B9515D8B7A/AR_Crash_Sample.app/AR_Crash_Sample Identifier: ar.crash.sample.AR.Crash.Sample Version: 1.0 (1) AppStoreTools: 16C7015 AppVariant: 1:iPhone16,1:18 Beta: YES Code Type: ARM-64 (Native) Role: Foreground Parent Process: launchd [1] Coalition: ar.crash.sample.AR.Crash.Sample [1464] Date/Time: 2025-03-07 11:59:14.3691 +0900 Launch Time: 2025-03-07 11:57:47.3955 +0900 OS Version: iPhone OS 18.3.1 (22D72) Release Type: User Baseband Version: 2.40.05 Report Version: 104 Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x0000000000000000, 0x0000000000000000 Termination Reason: SIGNAL 6 Abort trap: 6 Terminating Process: AR_Crash_Sample [2375] Triggered by Thread: 7 Application Specific Information: abort() called Thread 7 name: Dispatch queue: com.apple.arkit.depthtechnique Thread 7 Crashed: 0 libsystem_kernel.dylib 0x1e387f2d4 __pthread_kill + 8 1 libsystem_pthread.dylib 0x21cedd59c pthread_kill + 268 2 libsystem_c.dylib 0x199f98b08 abort + 128 3 libc++abi.dylib 0x21ce035b8 abort_message + 132 4 libc++abi.dylib 0x21cdf1b90 demangling_terminate_handler() + 320 5 libobjc.A.dylib 0x18f6c72d4 _objc_terminate() + 172 6 libc++abi.dylib 0x21ce0287c std::__terminate(void (*)()) + 16 7 libc++abi.dylib 0x21ce02820 std::terminate() + 108 8 libdispatch.dylib 0x199edefbc _dispatch_client_callout + 40 9 libdispatch.dylib 0x199ee65cc _dispatch_lane_serial_drain + 768 10 libdispatch.dylib 0x199ee7158 _dispatch_lane_invoke + 432 11 libdispatch.dylib 0x199ee85c0 _dispatch_workloop_invoke + 1744 12 libdispatch.dylib 0x199ef238c _dispatch_root_queue_drain_deferred_wlh + 288 13 libdispatch.dylib 0x199ef1bd8 _dispatch_workloop_worker_thread + 540 14 libsystem_pthread.dylib 0x21ced8680 _pthread_wqthread + 288 15 libsystem_pthread.dylib 0x21ced6474 start_wqthread + 8 Perhaps I am using too much memory! How can I address this phenomenon? For the AR functionality, we are using UIViewRepresentable, which is written in UIKit and can be called from SwiftUI import ARKit import AsyncAlgorithms import AVFoundation import SCNLine import SwiftUI internal struct MeasureARViewContainer: UIViewRepresentable { @Binding var tapCount: Int @Binding var distance: Double? 
@Binding var currentIndex: Int var focusSquare: FocusSquare = FocusSquare() let coachingOverlay: ARCoachingOverlayView = ARCoachingOverlayView() func makeUIView(context: Context) -> ARSCNView { let arView: ARSCNView = ARSCNView() arView.delegate = context.coordinator let configuration: ARWorldTrackingConfiguration = ARWorldTrackingConfiguration() configuration.planeDetection = [.horizontal, .vertical] if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) { configuration.frameSemantics = [.sceneDepth, .smoothedSceneDepth] } arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors]) context.coordinator.sceneView = arView context.coordinator.scanTarget() coachingOverlay.session = arView.session coachingOverlay.delegate = context.coordinator coachingOverlay.goal = .horizontalPlane coachingOverlay.activatesAutomatically = true coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight] coachingOverlay.translatesAutoresizingMaskIntoConstraints = false arView.addSubview(coachingOverlay) return arView } func updateUIView(_ _: ARSCNView, context: Context) { context.coordinator.mode = MeasurementMode(rawValue: currentIndex) ?? .width if tapCount == 0 { context.coordinator.resetMeasurement() return } if distance != nil { return } DispatchQueue.main.async { if context.coordinator.distance == nil { context.coordinator.handleTap() } } } static func dismantleUIView(_ uiView: ARSCNView, coordinator: Coordinator) { uiView.session.pause() coordinator.stopScanTarget() coordinator.stopSpeech() DispatchQueue.main.async { uiView.removeFromSuperview() } } func makeCoordinator() -> Coordinator { Coordinator(self) } class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate, ARCoachingOverlayViewDelegate { var parent: MeasureARViewContainer var sceneView: ARSCNView? var startPosition: SCNVector3? var pointedCount: Int = 0 var distance: Float? var mode: MeasurementMode = .width let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer() var scanTargetTask: Task<Void, Never>? var currentResult: ARRaycastResult? init(_ parent: MeasureARViewContainer) { self.parent = parent } // ... etc } }
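One mitigation worth trying, given that the crash only appears after many create/destroy cycles of the AR screen, is to keep a single ARSCNView (and therefore a single ARSession and depth pipeline) alive for the whole app and just pause and re-run it, instead of building a new view on every navigation push. A rough sketch under that assumption (the ARViewHolder type and its cleanup choices are hypothetical, not taken from the original code):

import ARKit
import SwiftUI

// Hypothetical holder that owns one ARSCNView for the app's lifetime,
// so repeated navigation does not allocate a fresh ARSession each time.
final class ARViewHolder {
    static let shared = ARViewHolder()
    let arView = ARSCNView()

    func run() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics = [.sceneDepth, .smoothedSceneDepth]
        }
        arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    func pause() {
        // Pause and drop scene content so depth and scene resources can be
        // released between visits to the AR screen.
        arView.session.pause()
        arView.scene.rootNode.enumerateChildNodes { node, _ in node.removeFromParentNode() }
        arView.delegate = nil
        arView.session.delegate = nil
    }
}

struct SharedARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARSCNView {
        ARViewHolder.shared.run()
        return ARViewHolder.shared.arView
    }

    func updateUIView(_ uiView: ARSCNView, context: Context) {}

    static func dismantleUIView(_ uiView: ARSCNView, coordinator: ()) {
        ARViewHolder.shared.pause()
    }
}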
2
0
345
Mar ’25
When to use an AnchorEntity or HandTrackingProvider in VisionOS
As I understand it, there are two ways I can track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other?
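For reference, a compact sketch of the two approaches (the joint choices and entity names are illustrative): the AnchorEntity route keeps everything inside RealityKit and has the system move the entity for you, while the HandTrackingProvider route hands you HandAnchor updates whose chirality, joints, and tracking state you inspect in your own code.

import ARKit
import RealityKit

// 1) Declarative: RealityKit keeps the entity attached to the palm for you.
func addPalmFollower(to content: RealityViewContent) {
    let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
    palmAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02)))
    content.add(palmAnchor)
}

// 2) Imperative: you receive HandAnchor updates and decide what to do with them.
func observeLeftIndexTip(session: ARKitSession) async throws {
    guard HandTrackingProvider.isSupported else { return }
    let provider = HandTrackingProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .left, anchor.isTracked,
              let skeleton = anchor.handSkeleton else { continue }

        // World-space transform of the index fingertip.
        let worldFromJoint = anchor.originFromAnchorTransform
            * skeleton.joint(.indexFingerTip).anchorFromJointTransform
        _ = worldFromJoint // use it for custom logic, physics, gestures, etc.
    }
}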
2
0
429
Mar ’25
Create Anchor on Objects from 2D Data
We're developing a visionOS application where we would like to do product recognition (for example, food items). We have enterprise entitlements and therefore also main camera access on visionOS. We send these live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection prediction. Now we would like to create a 3D anchor on the detected items so that it is visible to the user; the anchor will display the class name of the detected item. How do we transform this 2D coordinate from the model prediction into a 3D anchor?
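One common approach is to turn the 2D detection into a ray using the camera intrinsics and the camera-to-world transform of the frame the detection came from, then intersect that ray with the scene (for example, collision shapes generated from SceneReconstructionProvider meshes). A sketch of the math, assuming you already have the intrinsics matrix and the camera's world transform from the enterprise camera frame; the coordinate conventions here are assumptions to verify against the frame format you receive:

import RealityKit
import simd

// Sketch: back-project a detected pixel into a world-space ray, then raycast
// against scene collision to find a 3D point for the label anchor.
func worldRay(forPixel pixel: SIMD2<Float>,
              intrinsics K: simd_float3x3,
              worldFromCamera: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Intrinsics layout assumed: fx = K[0][0], fy = K[1][1], cx = K[2][0], cy = K[2][1].
    let x = (pixel.x - K[2][0]) / K[0][0]
    let y = (pixel.y - K[2][1]) / K[1][1]
    // Camera space assumed right-handed with -Z forward and +Y up,
    // while pixel y grows downward, hence the sign flip on y.
    let directionInCamera = SIMD4<Float>(x, -y, -1, 0)

    let origin = SIMD3<Float>(worldFromCamera.columns.3.x,
                              worldFromCamera.columns.3.y,
                              worldFromCamera.columns.3.z)
    let worldDirection = worldFromCamera * directionInCamera
    let direction = simd_normalize(SIMD3<Float>(worldDirection.x, worldDirection.y, worldDirection.z))
    return (origin, direction)
}

// Place the label entity at the nearest collision hit along that ray.
func placeLabel(in scene: RealityKit.Scene,
                ray: (origin: SIMD3<Float>, direction: SIMD3<Float>),
                labelEntity: Entity) {
    if let hit = scene.raycast(origin: ray.origin,
                               direction: ray.direction,
                               length: 5,
                               query: .nearest).first {
        labelEntity.setPosition(hit.position, relativeTo: nil)
    }
}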
4
0
782
Feb ’25
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with that as a property.
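If the layout matches the mesh classification source on iOS (one unsigned 8-bit value per face), the classifications GeometrySource can be decoded by hand from its Metal buffer, roughly as sketched below. The optionality of the property, the per-face UInt8 layout, and the .none fallback are assumptions to verify against the source's format and stride:

import ARKit

// Sketch: decode per-face classifications from a MeshAnchor, assuming one
// UInt8 classification value per face in the classifications buffer.
func faceClassifications(of anchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    guard let source = anchor.geometry.classifications else { return [] }

    let pointer = source.buffer.contents().advanced(by: source.offset)
    var result: [MeshAnchor.MeshClassification] = []
    result.reserveCapacity(source.count)

    for faceIndex in 0..<source.count {
        let rawValue = pointer
            .advanced(by: faceIndex * source.stride)
            .assumingMemoryBound(to: UInt8.self)
            .pointee
        result.append(MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? .none)
    }
    return result
}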
3
0
1.3k
Feb ’25
How to make a RealityKit `Entity` respond to Environment light
I am developing a visionOS app. I load a .usdz file as a RealityKit entity (such as a cabbage), and I want the following effect: when I turn on a desk lamp in the real world near the entity, the surface of the entity should respond correctly to the real-world light. I want an effect like this: https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/ I have looked up APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to write the code. It would also be better if the shadow responded to the light correctly.
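As a rough starting point, one pattern is to put an environment texture into an EnvironmentResource and attach it with ImageBasedLightComponent, then make the model entity a receiver of that light. The sketch below assumes an EnvironmentResource already exists (bundled, or generated from a captured probe texture as discussed in a later post in this list); obtaining it from live estimation is the harder part, and real-time shadows from a physical lamp are not covered here.

import RealityKit

// Sketch: light `model` with an image-based light built from `environment`,
// and have it receive that IBL. The EnvironmentResource itself is assumed.
func applyImageBasedLight(to model: Entity, environment: EnvironmentResource, in content: RealityViewContent) {
    let lightEntity = Entity()
    var ibl = ImageBasedLightComponent(source: .single(environment))
    ibl.inheritsRotation = true
    lightEntity.components.set(ibl)
    content.add(lightEntity)

    // The model opts in to being lit by that specific IBL entity.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))

    // A grounding shadow gives at least a plausible contact shadow.
    model.components.set(GroundingShadowComponent(castsShadow: true))
}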
1
0
531
Feb ’25
Difference in ARKit plane detection from iPhone 8 to iPhone 15
I am developing an ARKit based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8+. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before detecting the plane of the table. This is an awkward motion for a user seated at a table. To validate that it was not necessarily a feature of my code, I determined that the same behavior results with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, have suggestions to improve the situation?
2
0
518
Feb ’25
If My New Film 'Metaphor' Wins The Oscar For Best Short Film At The Academy Awards How Can I Distribute It On The App Store?
Hello everyone, Sam Francisco here, otherwise known as Ian Francis, Creative Director at Primus Films Ltd in the UK. I'm excited that Apple has invited me to the Developer Forums, and honoured to have the chance of creating something of interest for the App Store and beyond. My question is fairly self-explanatory: I am in the final cut of 'Metaphor', a 42-minute, 17-second horror film about climate change, and I have a deadline set for its premiere on YouTube this year on April 22nd, which, as I'm sure you all know, is Earth Day. Although I am quite comfortable and relatively proficient as the film's writer, director and editor, I have to confess to being green as tomatoes when it comes to marketing and publicity, and to negotiating rights and agreeing deals, for example to gain access to the App Store here at Apple with a view to discussing rental or sale options. Even if the film comes away empty-handed from the Dorothy Chandler, it'll be all set for the BAFTAs over in the UK, and I was wondering how to get discussions going on this. If anyone out there has any knowledge of where I should even start, I would be very grateful. Thank you for your time, Ian Francis
1
0
841
Feb ’25
Volumetric window anchors
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when we close and reopen the app, the windows are in the same position. We can't use immersive spaces because we also want to keep the possibility of accessing the Shared Space. Is this possible with the current features and capabilities? If yes, do you have any advice on how we can achieve it? The alternative would be: is it possible to open the virtual display in an immersive space, or could we implement our own virtual display?
1
0
426
Feb ’25
A question about interacting with entity
I am a newbie in spatial computing, and I am using ARKit and RealityKit to develop a visionOS app for Vision Pro. I want to accomplish the following goal: if the user's hand touches an object (an entity in a RealityView) on the table, the app should present a window. But I do not know how to handle the event "the user's hand touches the object". Should I use the hand-tracking feature and do the computation myself, or is there an API I can use directly? Thank you!
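One approach that avoids manual math is to attach a small collision sphere to a hand AnchorEntity and listen for collision events against the target entity. A sketch under the assumption that the target entity already has a CollisionComponent (the onTouch closure and the fingertip location are placeholders). If the events never fire across anchors in your setup, a SpatialTapGesture targeted at the entity is a simpler fallback:

import RealityKit

// Sketch: a fingertip-following trigger volume that reports touches on `target`.
// `onTouch` is a placeholder for e.g. openWindow(id:). Retain the returned
// EventSubscription for as long as you need the callback.
func installFingertipTrigger(in content: RealityViewContent,
                             target: Entity,
                             onTouch: @escaping () -> Void) -> EventSubscription {
    // Invisible sphere that RealityKit keeps attached to the index fingertip.
    let fingertip = AnchorEntity(.hand(.right, location: .indexFingerTip))
    let trigger = Entity()
    trigger.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.01)]))
    fingertip.addChild(trigger)
    content.add(fingertip)

    // Fire when the trigger volume starts touching the target entity.
    return content.subscribe(to: CollisionEvents.Began.self, on: trigger) { event in
        if event.entityA == target || event.entityB == target {
            onTouch()
        }
    }
}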
1
0
553
Feb ’25
How to use `EnvironmentLightEstimationProvider` to capture a environment texture and apply it on an model entity?
I am a newbie in spatial computing. I am learning how to use ARKit to capture the environment texture and apply it to a RealityKit ModelEntity on Vision Pro, but I cannot find a demo of how to use EnvironmentLightEstimationProvider. After checking the documentation, I also have some questions: EnvironmentProbeAnchor.environmentTexture is an MTLTexture, but EnvironmentResource needs a CGImage. How do I convert the MTLTexture to a CGImage? (Forgive me, I do not know much about Metal or the other frameworks, so it would be best if there were code I could copy and paste directly.) Also, it seems that EnvironmentProbeAnchor only captures the light information around the device. What should I do if I want the light information around the ModelEntity so that I can apply the environment texture to it? It would be great if you could provide a code demo of how to use the new API. Thank you!
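For the MTLTexture-to-CGImage part specifically, Core Image can usually bridge the gap. The EnvironmentResource construction at the end assumes an initializer that accepts an equirectangular CGImage, so treat that step (and the orientation fix) as assumptions to check against the current RealityKit API:

import CoreImage
import Metal
import RealityKit

// Sketch: turn a probe's MTLTexture into a CGImage via Core Image.
// The .downMirrored step is an assumption; probe textures may come out
// flipped, so adjust or remove it after inspecting the result.
func cgImage(from texture: MTLTexture) -> CGImage? {
    guard var ciImage = CIImage(mtlTexture: texture, options: nil) else { return nil }
    ciImage = ciImage.oriented(.downMirrored)
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}

// Sketch: build an EnvironmentResource and attach it as an image-based light.
// EnvironmentResource(equirectangular:) is assumed here; verify the exact
// constructor available in your RealityKit version.
func applyProbeTexture(_ texture: MTLTexture, to model: ModelEntity) async throws {
    guard let image = cgImage(from: texture) else { return }
    let environment = try await EnvironmentResource(equirectangular: image)

    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    model.parent?.addChild(lightEntity)
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}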
1
0
601
Feb ’25
VisionOS: Detect plane to place objects issue for animated objects
Hi, I have used the template code for Plane Detection and placing models on them from here https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes This source code did not copy the animations in the preview model to the PlacedModel and hence I modified it to do a manual copy of animations and textures. There is a function called materialize() that does this and I was able to modify it to get it working where the placed models are now animating. The issue is when I apply gestures on them like drag or rotate. For those models that go through this logic I'm unable to add gestures even though I'm making sure that Collision and Input Target is set on the Placed Models. Has anyone been able to get this working or is it even a possibility? My materialize function func materialize() -> PlacedObject { let shapes = previewEntity.components[CollisionComponent.self]!.shapes // Clone render content first as we need its materials let clonedRenderContent = renderContent.clone(recursive: true) print("To be finding main model: \(descriptor.displayName)") // Find the main model in preview hierarchy func findMainModel(_ entity: Entity) -> Entity? { if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") { print("Found main model: \(entity.name)") return entity } for child in entity.children { if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") { print("Found main model in children: \(child.name)") return child } } return nil } // Clone hierarchy preserving structure, names, and materials func cloneHierarchy(_ entity: Entity) -> Entity { print("Cloning: \(entity.name)") let cloned: Entity if let model = entity as? ModelEntity { // Clone with recursive false to handle children manually cloned = model.clone(recursive: false) if let clonedModel = cloned as? ModelEntity, let originalMaterials = model.model?.materials { // Preserve the original model's materials clonedModel.model?.materials = originalMaterials } } else { cloned = Entity() } // Preserve name and transform cloned.name = entity.name cloned.transform = entity.transform // Clone children for child in entity.children { let clonedChild = cloneHierarchy(child) cloned.addChild(clonedChild) } return cloned } print("=== Cloning Preview Structure ===") // Clone the preview hierarchy with proper structure let clonedStructure = cloneHierarchy(previewEntity) // Find and use the main model if let mainModel = findMainModel(clonedStructure) { print("Using main model for PlacedObject") let modelEntity: ModelEntity if let asModel = mainModel as? 
ModelEntity { print("Using asModel ") modelEntity = asModel } else { modelEntity = ModelEntity() modelEntity.name = mainModel.name // Copy children and transforms for child in mainModel.children { modelEntity.addChild(child) } modelEntity.transform = mainModel.transform } // Add collision component here let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)) modelEntity.components.set(collisionComponent) // Create the placed object let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes) // Set input target on the placed object itself placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect])) return placedObject } else { print("Fallback to original render content") let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes) placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect])) return placedObject } } My PlacedObject class where the init has the recursive cloning removed because it is handled in materialize class PlacedObject: Entity { let fileName: String // The 3D model displayed for this object. private let renderContent: ModelEntity static let collisionGroup = CollisionGroup(rawValue: 1 << 29) // The origin of the UI attached to this object. // The UI is gravity aligned and oriented towards the user. let uiOrigin = Entity() var affectedByPhysics = false { didSet { guard affectedByPhysics != oldValue else { return } if affectedByPhysics { components[PhysicsBodyComponent.self]!.mode = .static } else { components[PhysicsBodyComponent.self]!.mode = .static } } } var isBeingDragged = false { didSet { affectedByPhysics = !isBeingDragged } } var positionAtLastReanchoringCheck: SIMD3<Float>? var atRest = false init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) { fileName = descriptor.fileName // renderContent = renderContentToClone.clone(recursive: true) renderContent = renderContentToClone super.init() name = renderContent.name // Apply the rendered content’s scale to this parent entity to ensure // that the scale of the collision shape and physics body are correct. scale = renderContent.scale renderContent.scale = .one // Make the object respond to gravity. let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0) let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static) components.set(physicsBodyComponent) components.set(CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))) addChild(renderContent) addChild(uiOrigin) uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center. // Allow direct and indirect manipulation of placed objects. components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect])) // Add a grounding shadow to placed objects. renderContent.components.set(GroundingShadowComponent(castsShadow: true)) } required init() { fatalError("`init` is unimplemented.") } } Thanks
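For the gesture part of this question, one thing worth checking is that the gesture is attached at the RealityView level, targeted at any entity, and then resolved up the hierarchy to the PlacedObject, since the entity the ray actually hits is often a cloned child rather than the root that carries the InputTargetComponent and CollisionComponent. A hedged sketch of that pattern (the findAncestorPlacedObject helper and the drag handling details are illustrative, not taken from the sample project):

import RealityKit
import SwiftUI

// Walk up from whatever child entity the gesture hit to the PlacedObject root.
// Illustrative helper; PlacedObject is the class from the post above.
func findAncestorPlacedObject(of entity: Entity) -> PlacedObject? {
    var current: Entity? = entity
    while let candidate = current {
        if let placed = candidate as? PlacedObject { return placed }
        current = candidate.parent
    }
    return nil
}

struct PlacementGestures: ViewModifier {
    func body(content: Content) -> some View {
        content.gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // The hit entity may be a cloned child; drag its PlacedObject root.
                    guard let placed = findAncestorPlacedObject(of: value.entity),
                          let parent = placed.parent else { return }
                    placed.isBeingDragged = true
                    placed.position = value.convert(value.location3D, from: .local, to: parent)
                }
                .onEnded { value in
                    findAncestorPlacedObject(of: value.entity)?.isBeingDragged = false
                }
        )
    }
}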
4
0
467
Feb ’25
Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like to have people passing in front of the virtual objects occlude the virtual objects that are behind. Something similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people I could unfortunately not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
1
0
413
Feb ’25
What to return from func focusItems(in rect: CGRect) -> [any UIFocusItem]
I have a warning in my AR app Virtual Tags, which since July has not shown its camera information. The warning says: "ARCL.SceneLocationView implements focusItemsInRect: - caching for linear focus movement is limited as long as this view is on screen." I tried implementing the function, but the documentation does not explain what it should return or how to generate that value. Is someone able to help me? Thanks,
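For what it's worth, focusItems(in:) comes from UIFocusItemContainer, and a view with no custom focus handling can simply return the focusable subviews that intersect the rect (or an empty array). A minimal sketch, assuming you subclass the view in question; note this only addresses the focus-caching warning, not a missing camera feed:

import UIKit

// Minimal sketch of a focusItems(in:) override. Returning the focusable
// subviews that intersect `rect` (or [] when there are none) satisfies the
// UIFocusItemContainer contract.
final class MySceneLocationView: UIView /* stand-in for ARCL.SceneLocationView */ {
    override func focusItems(in rect: CGRect) -> [any UIFocusItem] {
        subviews.filter { $0.canBecomeFocused && $0.frame.intersects(rect) }
    }
}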
0
0
338
Feb ’25
How to Export OBJ with Texture (JPG + MTL) from ARKit LiDAR Scan in iOS?
I am using ARKit with RealityKit to scan objects using LiDAR on iOS. I can generate an OBJ file from ARMeshAnchors, but I am missing the texture export (JPG + MTL). What I Have So Far: Successfully capturing mesh using ARMeshAnchor. Converting mesh into MDLAsset and exporting .obj. I need help generating the .jpg texture and linking it to the .mtl file. private func exportScannedObject() { guard let camera = arView.session.currentFrame?.camera else { return } func convertToAsset(meshAnchors: [ARMeshAnchor]) -> MDLAsset? { guard let device = MTLCreateSystemDefaultDevice() else {return nil} let asset = MDLAsset() for anchor in meshAnchors { let mdlMesh = anchor.geometry.toMDLMesh(device: device, camera: camera, modelMatrix: anchor.transform) // Apply a gray material to the mesh let material = MDLMaterial(name: "GrayMaterial", scatteringFunction: MDLScatteringFunction()) material.setProperty(MDLMaterialProperty(name: "baseColor", semantic: .baseColor, float3: SIMD3(0.5, 0.5, 0.5))) // Gray color if let submeshes = mdlMesh.submeshes as? [MDLSubmesh] { for submesh in submeshes { submesh.material = material } } asset.add(mdlMesh) } return asset } func export(asset: MDLAsset) throws -> URL { let directory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first! let url = directory.appendingPathComponent("scaned.obj") if MDLAsset.canExportFileExtension("obj") { do { try asset.export(to: url) return url } catch let error { fatalError(error.localizedDescription) } } else { fatalError("Can't export USD") } } if let meshAnchors = arView.session.currentFrame?.anchors.compactMap({ $0 as? ARMeshAnchor }), let asset = convertToAsset(meshAnchors: meshAnchors) { do { let url = try export(asset: asset) showScanPreview(url) } catch { print("export error") } } } extension ARMeshGeometry { func vertex(at index: UInt32) -> SIMD3<Float> { assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.") let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index))) let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee return vertex } // helps from StackOverflow: // https://stackoverflow.com/questions/61063571/arkit-3-5-how-to-export-obj-from-new-ipad-pro-with-lidar func toMDLMesh(device: MTLDevice, camera: ARCamera, modelMatrix: simd_float4x4) -> MDLMesh { func convertVertexLocalToWorld() { let verticesPointer = vertices.buffer.contents() for vertexIndex in 0..<vertices.count { let vertex = self.vertex(at: UInt32(vertexIndex)) var vertexLocalTransform = matrix_identity_float4x4 vertexLocalTransform.columns.3 = SIMD4<Float>(x: vertex.x, y: vertex.y, z: vertex.z, w: 1) let vertexWorldPosition = (modelMatrix * vertexLocalTransform).columns.3 let vertexOffset = vertices.offset + vertices.stride * vertexIndex let componentStride = vertices.stride / 3 verticesPointer.storeBytes(of: vertexWorldPosition.x, toByteOffset: vertexOffset, as: Float.self) verticesPointer.storeBytes(of: vertexWorldPosition.y, toByteOffset: vertexOffset + componentStride, as: Float.self) verticesPointer.storeBytes(of: vertexWorldPosition.z, toByteOffset: vertexOffset + (2 * componentStride), as: Float.self) } } convertVertexLocalToWorld() let allocator = MTKMeshBufferAllocator(device: device); let data = Data.init(bytes: vertices.buffer.contents(), count: vertices.stride * vertices.count); let vertexBuffer = allocator.newBuffer(with: data, type: .vertex); let indexData = Data.init(bytes: faces.buffer.contents(), 
count: faces.bytesPerIndex * faces.count * faces.indexCountPerPrimitive); let indexBuffer = allocator.newBuffer(with: indexData, type: .index); let submesh = MDLSubmesh(indexBuffer: indexBuffer, indexCount: faces.count * faces.indexCountPerPrimitive, indexType: .uInt32, geometryType: .triangles, material: nil); let vertexDescriptor = MDLVertexDescriptor(); vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0); vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: vertices.stride); let mesh = MDLMesh(vertexBuffer: vertexBuffer, vertexCount: vertices.count, descriptor: vertexDescriptor, submeshes: [submesh]) return mesh } } What I Need Help With: How do I generate the JPG texture from the AR scene? How do I save an MTL file linking the OBJ model to the texture? How can I correctly apply the texture when viewing the OBJ in an external 3D viewer? I appreciate any guidance, including sample code or resources! If you have a complete working solution, I’d love to discuss further via private channels.
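On the texture side, a complete solution needs per-vertex texture coordinates generated by projecting camera images onto the mesh, which is beyond a short snippet. The two mechanical pieces asked about here, saving a camera frame as a JPG and writing an MTL file that links the OBJ to it, can be sketched roughly as below; the file names and the choice of using the current frame's capturedImage are assumptions:

import ARKit
import CoreImage
import UIKit

// Sketch: save the current camera image as scan_texture.jpg and write a
// matching scaned.mtl next to the exported scaned.obj. This does NOT yield a
// correctly UV-mapped texture; the mesh still needs texture coordinates
// computed by projecting each vertex into the camera image, and the OBJ must
// reference the material via "mtllib scaned.mtl" / "usemtl scanMaterial".
func exportTextureAndMTL(frame: ARFrame, documents: URL) throws {
    // 1) Camera image -> JPG.
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    if let cgImage = context.createCGImage(ciImage, from: ciImage.extent),
       let jpegData = UIImage(cgImage: cgImage).jpegData(compressionQuality: 0.9) {
        try jpegData.write(to: documents.appendingPathComponent("scan_texture.jpg"))
    }

    // 2) MTL file that points the material's diffuse map at the JPG.
    let mtl = """
    newmtl scanMaterial
    Ka 1.0 1.0 1.0
    Kd 1.0 1.0 1.0
    Ks 0.0 0.0 0.0
    map_Kd scan_texture.jpg
    """
    try mtl.write(to: documents.appendingPathComponent("scaned.mtl"),
                  atomically: true, encoding: .utf8)
}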
0
0
481
Feb ’25
Entity HoverEffect Fired When inside Another Entity Collider
Hi folks, I'm new to the Vision Pro stack and still trying to learn all the nuances. Here is a problem I can't seem to find an answer to. I placed entity A (a small sphere of radius 0.02) inside entity B (a box of size 0.1). Both entities have a HoverEffectComponent, and both have their InputTargetComponent set to .direct. Entity A is NOT a child of entity B. When I direct-touch entity B, I noticed that entity A's hover effect fires as well. This only happens if entity A's position is inside entity B. A gesture targeted only at entity A doesn't work either. I double-checked that entity A's collider sits inside entity B's collider; my direct touch shouldn't have triggered its hover effect. Having one collider inside another seems to produce unpredictable behavior? Thanks in advance 🙏🙏🙏 Context: I'm trying to create an invisible bound around entity A, so that when my hand approaches the bound to grab entity A, a nice spotlight hover effect fires first on the bound, before my hand reaches entity A.
2
0
365
Feb ’25