Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Anchor Wall
I have created a portal and attached it to a wall using the AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. Initially, I attempted to locate relevant information within the demo code, but I encountered difficulties in comprehending certain sections. I would appreciate it if someone could provide a step-by-step explanation or a reference to the appropriate code. Thank you for your assistance.
2 replies · 0 boosts · 670 views · Oct ’24
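For the wall-sizing question above, here is a minimal sketch of one way to read a detected wall's size, assuming visionOS and ARKit's PlaneDetectionProvider; the portal entity, session setup, and mesh regeneration are illustrative choices, not code from the post or the demo it mentions.

import ARKit
import RealityKit

// Resize a portal's plane mesh to match the wall it is anchored to.
// Assumes an open immersive space so the ARKitSession can run.
func sizePortalToWall(portal: ModelEntity) async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.vertical])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        // Only react to planes ARKit classifies as walls.
        guard update.anchor.classification == .wall else { continue }
        // The plane's extent reports its current width/height in meters and
        // grows as ARKit refines the wall estimate.
        let extent = update.anchor.geometry.extent
        portal.model?.mesh = .generatePlane(width: extent.width, height: extent.height)
    }
}

Because the extent keeps updating, regenerating the mesh on each update keeps the portal roughly matched to the wall as ARKit learns more of it.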
How to Create .arobject File? Can It Be Done Using 3D Models or .objcap/.usdz Files?
Hi everyone, I'm looking for some guidance on how to create an .arobject file. Is there a way to generate it from a 3D model (like a .objcap or .usdz or .fbx/.obj file)? Or can it only be generated by scanning a real-world object using the ARKitScanner project? Any advice or resources on this would be greatly appreciated! I've only found this project for scanning real-world objects. Thanks in advance!
2 replies · 0 boosts · 739 views · Oct ’24
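On the .arobject question above: as far as I know, an .arobject is exported from an ARReferenceObject captured during an ARObjectScanningConfiguration session on an iOS device, not converted from a .usdz/.obj/.fbx mesh. A minimal sketch of the export step, where the transform/center/extent values are placeholders for the bounding volume a scanning UI would provide:

import ARKit

// Capture the accumulated scan data as an ARReferenceObject and write it
// out as an .arobject file. The session must be running an
// ARObjectScanningConfiguration for this call to succeed.
func exportScannedObject(session: ARSession, to url: URL) {
    session.createReferenceObject(
        transform: matrix_identity_float4x4,   // placeholder origin
        center: SIMD3<Float>(0, 0, 0),         // placeholder center
        extent: SIMD3<Float>(0.2, 0.2, 0.2)    // placeholder bounding box, in meters
    ) { referenceObject, error in
        guard let referenceObject else {
            print("Scan failed: \(String(describing: error))")
            return
        }
        do {
            try referenceObject.export(to: url, previewImage: nil)
            print("Saved .arobject to \(url)")
        } catch {
            print("Export failed: \(error)")
        }
    }
}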
[Unity & Xcode(ARKit, RealityKit) & visionOS] Is it possible to combine a project made with Unity and a project made with Xcode into one app?
Hi, I’m working on a portfolio project for Vision Pro these days. I have two projects: one made with Unity and one made with Xcode (using ARKit and RealityKit tracking features). Is it possible to combine these two projects into one app? For example, using buttons made with SwiftUI in a Reality Composer Pro scene, jumping to a scene in Unity, and then returning from the Unity scene to a Reality Composer Pro scene within the same app.
1 reply · 0 boosts · 852 views · Oct ’24
Object Occlusion on Non-LiDAR Devices
Hi, I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11). I used CoreML models like DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for such devices. Any advice or pointers to resources would be much appreciated.
0 replies · 0 boosts · 791 views · Nov ’24
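On the non-LiDAR occlusion question above, a small capability-check sketch using what ARKit and RealityKit offer out of the box; it does not reproduce the CoreML depth-map pipeline, it only shows the built-in fallback (person occlusion) available on devices without LiDAR:

import ARKit
import RealityKit

// Enable the best occlusion the current device supports.
func configureOcclusion(for arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        // LiDAR devices: the reconstructed scene mesh can occlude virtual content.
        configuration.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    } else if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // Non-LiDAR fallback (A12 and later): only people, not arbitrary
        // objects, can occlude virtual content.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(configuration)
}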
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content using Room Tracking for Vision Pro these days, so I searched for information about it. Here are the links I visited, but I could not find the info I wanted: Apple ARKit, Create enhanced spatial computing experiences with ARKit, RoomTrackingProvider. I want to know whether it's possible to remember a room structure that was recognized before and to add content at a certain world anchor in that room's space when the user enters the room again. For example, a developer saves the room structure, room info (with a room ID), and a world anchor of the room using the Room Tracking feature. After this, the developer adds entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users can see the content whenever they visit the room. Is this possible? If there is example code or there are projects about this, please let me know.
1 reply · 0 boosts · 703 views · Nov ’24
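On the room-persistence question above, a sketch of the world-anchor half of the idea on visionOS: WorldAnchors are persisted by the system across sessions, so storing an anchor's UUID under your own room key lets you re-attach content when the user returns. Pairing that UUID with a RoomTrackingProvider room is my assumption, not a documented workflow, and the UserDefaults storage is purely illustrative.

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run the provider once (inside an open immersive space).
func startWorldTracking() async throws {
    try await session.run([worldTracking])
}

// Persist a world anchor and remember its id under a room-specific key.
func saveAnchor(at transform: simd_float4x4, roomKey: String) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    UserDefaults.standard.set(anchor.id.uuidString, forKey: roomKey)
}

// On a later visit, wait for the persisted anchor to reappear and re-place content.
func restoreContent(for roomKey: String, content: Entity) async {
    guard let savedID = UserDefaults.standard.string(forKey: roomKey) else { return }
    for await update in worldTracking.anchorUpdates where update.event == .added {
        if update.anchor.id.uuidString == savedID {
            content.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        }
    }
}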
[ARKit, Reality Composer Pro] Is it possible to load an Immersive Scene after recognizing preregistered images?
Hi! I want to know whether it's possible to load an Immersive Scene after scanning (recognizing) preregistered images or objects. I tried to load the Immersive scene after scanning images and objects, but it didn't work well. Please let me know the solution if it's possible. Here is the ImmersiveView.swift code I tried:

// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }
            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
1 reply · 0 boosts · 632 views · Nov ’24
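On the image-triggered loading question above, a sketch that waits for an ImageTrackingProvider detection before loading and placing the scene, assuming visionOS 2, a reference-image group named "TrackedImages" in the asset catalog, and the same "Immersive" scene name as in the post; all of those names are placeholders.

import ARKit
import RealityKit
import RealityKitContent

// Load the immersive scene only after one of the preregistered images is recognized.
@MainActor
func loadSceneWhenImageIsDetected(into content: RealityViewContent) async throws {
    let session = ARKitSession()
    let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TrackedImages")
    )
    try await session.run([imageTracking])

    for await update in imageTracking.anchorUpdates where update.event == .added {
        let scene = try await Entity(named: "Immersive", in: realityKitContentBundle)
        // Place the scene at the detected image's pose.
        scene.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        content.add(scene)
        break
    }
}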
AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments, in which I combine Reality Composer Pro scenes with objects managed at runtime and make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?

About my app: There are two main contexts/scenes in the app that the user progresses through. The first takes place in AR, is non-interactive, and is driven by a timeline animation. The second is in VR and allows the user to change the materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we fall back to plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
1 reply · 0 boosts · 498 views · Nov ’24
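On the cross-scene anchor question above, a sketch of one pragmatic approach: resolve the anchor once in the AR space, cache its world transform in shared app state, and reuse it to position the VR space's root. The type and function names are illustrative, and reading an anchored entity's world transform on visionOS may require spatial-tracking authorization.

import RealityKit
import simd

// Shared between both immersive spaces (e.g. injected into their views).
final class SharedAnchorStore {
    // Last resolved origin-from-anchor transform of the tracked image/plane.
    var originFromAnchor: simd_float4x4?
}

// In the AR scene, once the anchored content has a resolved pose.
func cacheAnchor(_ anchoredRoot: Entity, in store: SharedAnchorStore) {
    store.originFromAnchor = anchoredRoot.transformMatrix(relativeTo: nil)
}

// In the VR scene, reuse the cached pose for the environment root.
func placeEnvironment(_ environmentRoot: Entity, from store: SharedAnchorStore) {
    guard let matrix = store.originFromAnchor else { return }
    environmentRoot.transform = Transform(matrix: matrix)
}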
Issues setting up the Enterprise API entitlements (Main Camera Access)
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.

What happens: When executing code from this WWDC tutorial, I'm getting this error when trying to use a CameraFrameProvider:

ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}

What I've tried: I followed the instructions given by mail, by:
adding the .license file at the root of my project,
adding the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there).
I've added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned).
I've also followed the advice in those post & post threads.
When checking the account settings, I do see the capabilities under "additional capabilities".
On first launch, I'm also getting prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist file is valid?

What did I miss to get the entitlements working? Here's the code:

import ARKit
import SwiftUI
import Vision
import RealityKit

class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}

Thanks in advance for the help,
Jeremy
3 replies · 0 boosts · 636 views · Dec ’24
Crash: offlineFloorPlanGeneration
Lately I've been getting a lot of crashes on iOS 18 devices (mostly iPhone 13 Pro devices, but also an iPhone 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?

com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
1 reply · 0 boosts · 603 views · Dec ’24
Image Anchoring Not Working Outside Reality Composer/Reality Composer Pro
Hi everyone, I’m having trouble with image anchoring when working on a project in Reality Composer and Reality Composer Pro. Here’s the issue:

1. What I’m Trying to Achieve: I want to create an AR scene where an object anchors to an image I provide. I don't want to create an app for this but just use the USDZ file the scene creates. The USDZ file should then be viewable via the various integrations of AR Quick Look across the Apple ecosystem. The image anchoring works perfectly when I preview the scene inside Reality Composer using AR mode.

2. The Problem: When I export the project (tried both USDZ and Reality formats) and open it on my iPhone using the Files app (which uses AR Quick Look), the image anchoring no longer works. The object doesn’t anchor to the provided image as expected; it just anchors to the first plane it recognizes, not the image.

3. What I’ve Tried:
Exporting the scene in USDZ format.
Exporting the scene in Reality format. Both formats result in the same issue: no image anchoring outside of the Reality Composer environment.
Tried different images, but the image anchoring failed in the same way with all of them.
Tried different iOS versions, but the issue persists.

4. Current Setup:
Reality Composer Pro version: 2.0
iPhone model: iPhone 13 Pro
iOS version: 18.1

5. What I Need Help With:
Is there a way to ensure image anchoring works in exported files when opened via AR Quick Look?
Do I need to configure something specific during the export process?
Are there limitations in AR Quick Look that prevent image anchoring from functioning correctly?
Do I need to create an app to make this work?

I’d appreciate any advice or insights from the community. If anyone has experience with similar issues or knows of a workaround, please let me know! Thanks in advance, Mav
2 replies · 0 boosts · 662 views · Dec ’24
VisionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics

I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display? What is the coordinate system in which this offset is defined: which axis is left, which one is up, which one is forward?

The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis, and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's rather 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume that the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm to the front of the user and 2 cm to the bottom; which of x and y is horizontal, and which one vertical?

How is the camera image indexed: is it row-major, and is the origin in the top left? I am looking forward to learning all the details of these extrinsics in order to make use of them.
4 replies · 0 boosts · 791 views · Jan ’25
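On the extrinsics question above, a small helper that splits the 4x4 matrix into its rotation and translation parts; it deliberately does not assert which axis convention Apple uses or which direction the transform points, since that is exactly what the post is asking.

import simd

// Split a simd_float4x4 extrinsics matrix into rotation and translation.
// The last column holds the translation; the upper-left 3x3 holds the rotation.
func decompose(_ extrinsics: simd_float4x4) -> (rotation: simd_float3x3, translation: SIMD3<Float>) {
    let t = extrinsics.columns.3
    let translation = SIMD3<Float>(t.x, t.y, t.z)
    let rotation = simd_float3x3(
        SIMD3<Float>(extrinsics.columns.0.x, extrinsics.columns.0.y, extrinsics.columns.0.z),
        SIMD3<Float>(extrinsics.columns.1.x, extrinsics.columns.1.y, extrinsics.columns.1.z),
        SIMD3<Float>(extrinsics.columns.2.x, extrinsics.columns.2.y, extrinsics.columns.2.z)
    )
    return (rotation, translation)
}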
Create Anchor on Objects from 2D Data
We're developing a visionOS application where we would like to do product recognition (for items like food). We have enterprise entitlements and therefore also main camera access on visionOS. We send the live camera frames to a trained CoreML model and receive 2D coordinates from the model's detection prediction. Now, we would like to create a 3D anchor on the detected items so it is visible to the user; the content at the anchor will be the class name of the detected item. How do we transform this 2D coordinate from the model prediction into a 3D anchor?
4 replies · 0 boosts · 716 views · Feb ’25
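On the 2D-to-3D question above, a geometry-only sketch that back-projects a pixel through the camera intrinsics into a world-space ray; how the depth along that ray is chosen (a scene-mesh raycast, a fixed distance, a depth estimate) is left open, and the whole pipeline here is an assumption rather than a documented API flow.

import simd

// Turn a pixel coordinate from the model prediction into a world-space ray,
// given the 3x3 camera intrinsics and a camera-to-world transform (device
// anchor pose combined with the frame's camera extrinsics).
func worldRay(forPixel pixel: SIMD2<Float>,
              intrinsics K: simd_float3x3,
              cameraToWorld: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Pinhole back-projection: (u - cx) / fx, (v - cy) / fy at unit depth.
    let x = (pixel.x - K.columns.2.x) / K.columns.0.x
    let y = (pixel.y - K.columns.2.y) / K.columns.1.y
    // Depending on the camera convention the view direction is +Z or -Z;
    // flip the sign of the z component if anchors land behind the camera.
    let directionInCamera = simd_normalize(SIMD3<Float>(x, y, 1))

    // Move the ray origin and direction into world space.
    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    let rotated = cameraToWorld * SIMD4<Float>(directionInCamera, 0)
    let direction = simd_normalize(SIMD3<Float>(rotated.x, rotated.y, rotated.z))
    return (origin, direction)
}

Placing the anchor then amounts to choosing a distance d and anchoring content at origin + d * direction, or intersecting the ray with scene geometry if a SceneReconstructionProvider mesh is available.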