Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

197 Posts

Post

Replies

Boosts

Views

Activity

Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or higher I have had some problems with my app. Sometimes when I look at buttons or at the TabView (a SwiftUI component that many apps use), the hover effect no longer triggers; I can tap the icon in the TabView, but it doesn't expand on hover to show the button's description. It's odd because it doesn't happen all the time, and on visionOS 1.0 it never happened. My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier that we created. Its buttons are not tappable until I drag the navbar a little. This also never happened on visionOS 1.0 but always happens on visionOS 2.0+ (I've tested different versions in the simulator). Could these problems be related to the hand-tracking service we use from ARKit? Sometimes when we stop the trackers the app works as intended.
0
0
443
Nov ’24
Info from CMSampleBuffer
Which information can I get from a CMSampleBuffer? Is there an option to block close-up accommodation of the camera? Is there a way for the Object Capture module to take a video instead of a series of pictures? It would be fantastic to have an answer to all of these questions so we can move forward on new implementations.
0
0
357
Nov ’24
Why does the planeNode in SceneKit flicker when using a class property instead of a local variable?
I am working on a SceneKit project where I use a CAShapeLayer as the content for SCNMaterial's diffuse.contents to display a progress bar. Here's my initial code:

func setupProgressWithCAShapeLayer() {
    let progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async {
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            progressLayer.strokeEnd = progress // Update progress
        }
    }
}

// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    progressBarPlane = SCNPlane(width: 0.2, height: 0.2)
    setupProgressWithCAShapeLayer()
    let planeNode = SCNNode(geometry: progressBarPlane)
    planeNode.position = SCNVector3(x: 0, y: 0.2, z: 0)
    node.addChildNode(planeNode)
}

This works fine, and the progress bar updates smoothly. However, when I change the code to use a class property (self.progressLayer) instead of a local variable, the rendering starts flickering on the screen:

func setupProgressWithCAShapeLayer() {
    self.progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async { [weak self] in
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            self?.progressLayer?.strokeEnd = progress // Update progress
        }
    }
}

After this change, the progressBarPlane in SceneKit starts flickering while being rendered on the screen. My question: why does switching from a local variable (progressLayer) to a class property (self.progressLayer) cause the flickering issue in SceneKit rendering?
0
0
571
Nov ’24
Access persona on VisionOS
I am planning to build a VisionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for VisionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
3
0
1.2k
Nov ’24
ARSCNView ignores output of SCNTechnique (sometimes)
I am using SCNTechnique in combination with ARSCNView. The technique does some minor post-processing. I have written several filter variants for this post-processing, but with one of the filters/fragment shaders, SCNTechnique discards my output and just presents the plain camera feed on screen instead. This is clearly visible in the Metal pipeline, using the GPU frame debugger. Let me stress that my setup works for 90% of my filters, but not this one, and I want to know why. iOS 18.1, iPhone 13 mini, Xcode 16.1. Encoders 0 & 1 are injected by the system. Render encoders 2 & 3 correspond to my SCNTechnique's render passes: one to manipulate pixel data (darken it in this case) and another to blit it back to the main texture. I know the separate buffer is not strictly needed for this particular operation, but it shouldn't matter. Note that the issue occurs in encoder 4 (not mine but ARKit's). In render encoder 4, scn_postprocess_AR_fragment handles my texture (#0, ending in f980) and another from the camera feed (texture 2). I know this pass is typically used for grain, because that's what it used to do before I disabled grain on ARSCNView (and the buffer still contains grain parameters). I have other post-processing filters that work just fine. By what magic is ARKit determining to use texture 2 instead of my texture 0? Sure, I could keep digging into the minute differences between my shaders to find out which line of code affects how some ARKit shader down the line operates, but it's awfully opaque so far.
0
0
604
Nov ’24
How to use `unproject(_:from:to:ontoPlane:)` on RealityView content
When using ARView in RealityKit, I can write

let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any)

to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) may provide the same functionality, but I don't know how to set the ontoPlane parameter. Can someone help me with some code snippets?
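Here is a minimal sketch of what I imagine the call could look like, assuming the ontoPlane parameter takes a float4x4 transform whose XZ plane defines the plane to intersect (for example, a plane anchor's transform, or the identity for a horizontal plane at y = 0), and assuming .local / .scene are the right coordinate spaces. I have not confirmed these details against the documentation, so please treat them as guesses:

// Hypothetical usage inside a RealityView gesture handler.
// `content` is the content value passed to the RealityView closures,
// and `tapLocation` is a CGPoint in the view's local coordinate space.
let planeTransform = matrix_identity_float4x4   // assumption: XZ plane at y = 0 in world space

if let worldPosition = content.unproject(tapLocation,
                                         from: .local,
                                         to: .scene,
                                         ontoPlane: planeTransform) {
    let marker = ModelEntity(mesh: .generateSphere(radius: 0.01),
                             materials: [SimpleMaterial()])
    marker.position = worldPosition
    content.add(marker)
}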
1
0
588
Nov ’24
Sample Project for WWDC24 10092 Metal with Passthrough?
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS. https://developer.apple.com/wwdc24/10092 This is a lot of complicated set-up, however. It’s also unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample? Preferably with the C API and not just Swift. This would be much appreciated and helpful to everyone who wants this set-up. I’d like to see the whole process. Thank you for introducing this feature!
3
1
1.1k
Nov ’24
Limited Window Manipulation in Mixed Immersive Style in VisionOS
In VisionOS versions 2.1 and 2.2, I’m encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, specifically in mixed immersive style. Here’s a breakdown of the behavior: In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches. However, in mixed immersive mode, using the exact same layout “inside” a 3D model (which doesn’t visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion. From a usability perspective, this restriction seems unnecessary, as the software should ideally allow for similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a House in which the user is placed during the immersion that's why I am referring to the user being "Inside" of the scene. Has anyone else experienced this or found a workaround?
2
1
434
Nov ’24
ObjectCapture from ARKit
We are currently using Object Capture with ARKit, and we would like to fix the exposure time, white balance and ISO. How can we do this? Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
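One direction we are considering (a sketch only, and we have not verified whether the Object Capture module honors it) is grabbing the underlying AVCaptureDevice that ARKit exposes through ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera (iOS 16+) and locking exposure and white balance on it before running the session. The duration and ISO values below are placeholders:

import ARKit
import AVFoundation

func lockCameraSettings() {
    // Assumption: this applies to an ARSession-driven capture; whether it
    // affects Object Capture frames is exactly what we'd like to confirm.
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        return
    }
    do {
        try device.lockForConfiguration()
        // Fix exposure duration and ISO (placeholder values; must lie within
        // device.activeFormat's supported ranges).
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 120),
                                     iso: 400,
                                     completionHandler: nil)
        // Lock white balance at its current gains.
        device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains,
                                         completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock camera configuration: \(error)")
    }
}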
0
0
518
Nov ’24
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling the immersiveSpace, the same image is detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add, and subsequent detections of the same image remain .update, even after toggling the immersiveSpace. Could this be due to a change in the processing flow?
2
0
434
Nov ’24
Error: captureSession(_:didEndWith:error:) nearly matches defaulted requirement in RoomCaptureSessionDelegate
Hello, I’m working with the RoomPlan API to capture and export 3D room models in an iOS app. My goal is to implement the RoomCaptureSessionDelegate protocol to handle the end of a room capture session. However, I’m encountering a cautionary warning in Xcode:

Warning: "captureSession(_:didEndWith:error:) nearly matches defaulted requirement captureSession(_:didEndWith:error:) of protocol RoomCaptureSessionDelegate."

I’ve verified that my delegate method signature matches the protocol, but the warning persists. I suspect this might be due to minor discrepancies in the parameter types or naming conventions required by the protocol. Below is my full RoomScanner.swift file:

import RoomPlan
import SwiftUI
import UIKit

func exportModelToFiles() {
    let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
    let documentPicker = UIDocumentPickerViewController(forExporting: [exportURL])
    documentPicker.modalPresentationStyle = .formSheet
    if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
       let rootViewController = windowScene.windows.first?.rootViewController {
        rootViewController.present(documentPicker, animated: true, completion: nil)
    }
}

class RoomScanner: ObservableObject {
    var roomCaptureSession: RoomCaptureSession?
    private let sessionConfig = RoomCaptureSession.Configuration()
    private var capturedRoom: CapturedRoom?

    func startSession() {
        roomCaptureSession = RoomCaptureSession()
        roomCaptureSession?.delegate = self
        roomCaptureSession?.run(configuration: sessionConfig)
    }

    func stopSession() {
        roomCaptureSession?.stop()
        guard let room = capturedRoom else {
            print("No room data available for export.")
            return
        }
        let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
        do {
            try room.export(to: exportURL)
            print("3D floor plan exported to \(exportURL)")
        } catch {
            print("Error exporting room model: \(error)")
        }
    }
}

// MARK: - RoomCaptureSessionDelegate
extension RoomScanner: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Handle real-time updates if necessary
    }

    func captureSession(_ session: RoomCaptureSession, didEndWith capturedRoom: CapturedRoom, error: Error?) {
        if let error = error {
            print("Capture session ended with error: \(error.localizedDescription)")
        } else {
            self.capturedRoom = capturedRoom
        }
    }
}
1
0
608
Nov ’24
Distance calculation: I want to implement the same Unity distance-calculation functionality in SwiftUI on visionOS
I use this piece of code in Unity to get the length of my model that enters another model. I have set collision markers at the tip and end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS. Therefore, I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision editing feature like in Unity; I can only set the size of the collision body but cannot manually adjust or visualize its shape and size. I want to achieve similar functionality using SwiftUI: to calculate and display the distance that my model A (like a needle or ruler) penetrates into another model or a physical object's interior. Is there similar functionality available, or other coding methods to achieve this?

void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);

    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
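For reference, a rough RealityKit equivalent I am considering looks like this. It is only a sketch: the probeBase/probeTip entities are placeholders for my collision markers, and it assumes the organ model carries a CollisionComponent so the scene raycast can hit it:

import RealityKit
import simd

/// Returns how far the probe has penetrated the organ, in meters,
/// by raycasting from the probe base toward the probe tip.
func lengthInsideOrgan(probeBase: Entity, probeTip: Entity, in scene: RealityKit.Scene) -> Float {
    let basePosition = probeBase.position(relativeTo: nil)   // world-space position
    let tipPosition = probeTip.position(relativeTo: nil)
    let direction = tipPosition - basePosition
    let probeLength = simd_length(direction)
    guard probeLength > 0 else { return 0 }

    // Raycast along the probe; the target model must have a CollisionComponent.
    let hits = scene.raycast(origin: basePosition,
                             direction: simd_normalize(direction),
                             length: probeLength,
                             query: .nearest,
                             mask: .all,
                             relativeTo: nil)

    if let firstHit = hits.first {
        return probeLength - firstHit.distance
    }
    return 0
}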
1
0
647
Nov ’24
How to create dual-screen AR using RealityView and SwiftUI in iOS 18
I have this code to make an ARVR stereo view to be used in a VR box or Google Cardboard. It uses the new iOS 18 RealityView, but it does not act as AR; instead it is static VR on a camera background, so as I move the iPhone the cube moves with it, which is not supposed to happen if it's anchored to a plane or to world coordinates.

import SwiftUI
import RealityKit

struct ContentView : View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack (spacing: 0){
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis:[0,1,1])
        }
    }
}

The thing is, if I remove .camera like this:

let anchor1 = AnchorEntity()
let anchor2 = AnchorEntity()

it works as AR anchored to world coordinates, but then it only works in the left view, not both views. Meanwhile, this was so easy before RealityView and SwiftUI by cloning the view, as in this ARSCNView example:

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {

    // Create any two ARSCNViews in the storyboard
    // and link each to the next (don't mind dimensions)
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        sceneView.delegate = self
        sceneView.session.delegate = self

        // Create SceneKit box
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
        let item = SCNNode(geometry: box)
        item.geometry?.materials.first?.diffuse.contents = UIColor.green
        item.position = SCNVector3(0.0, 0.0, -1.0)
        item.orientation = SCNVector4(0, 1, 1, .pi/4.0)

        // Add the box node to the scene
        sceneView.scene.rootNode.addChildNode(item)
    }

    override func viewDidLayoutSubviews() // To do: add the 4 buttons
    {
        // Stop screen dimming or closing while the app is running
        UIApplication.shared.isIdleTimerDisabled = true

        let screen: CGRect = UIScreen.main.bounds
        let topPadding: CGFloat = self.view.safeAreaInsets.top
        let bottomPadding: CGFloat = self.view.safeAreaInsets.bottom
        let leftPadding: CGFloat = self.view.safeAreaInsets.left
        let rightPadding: CGFloat = self.view.safeAreaInsets.right

        let safeArea: CGRect = CGRect(x: leftPadding,
                                      y: topPadding,
                                      width: screen.size.width - leftPadding - rightPadding,
                                      height: screen.size.height - topPadding - bottomPadding)

        DispatchQueue.main.async {
            if self.sceneView != nil {
                self.sceneView.frame = CGRect(x: safeArea.size.width * 0 + safeArea.origin.x,
                                              y: safeArea.size.height * 0 + safeArea.origin.y,
                                              width: safeArea.size.width * 0.5,
                                              height: safeArea.size.height * 1)
            }
            if self.sceneView2 != nil {
                self.sceneView2.frame = CGRect(x: safeArea.size.width * 0.5 + safeArea.origin.x,
                                               y: safeArea.size.height * 0 + safeArea.origin.y,
                                               width: safeArea.size.width * 0.5,
                                               height: safeArea.size.height * 1)
            }
        }
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
        sceneView2.scene = sceneView.scene
        sceneView2.session = sceneView.session
    }
}

And here is the video for it.
1
0
503
Nov ’24
RealityView to show two screens of AR in iOS 18/macOS 15 using SwiftUI
I have an issue using RealityView to show two screens of AR. I did succeed in making it work as non-AR, but now my code is not working. It does work using Storyboard and Swift with SceneKit, so why isn't it working in RealityView?

import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        HStack (spacing: 0){
            MainView()
            MainView()
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            let item = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.camera = .spatialTracking
            anchor.addChild(item)
            anchor.position = [0.0, 0.0, -1.0]
            anchor.orientation = .init(angle: .pi/4, axis:[0,1,1])
            // Add the anchor to the scene
            content.add(anchor)
        }
    }
}
2
0
529
Nov ’24
ARVR RealityView: left camera view shows the entity nearer than the right view
I have this code to make an ARVR stereo view to be used in a VR box or Google Cardboard. It uses the new iOS 18 RealityView, but for some reason the left side shows the entity (box) nearer to the camera than the right side, which makes the two views not identical. I wonder: is this a bug that needs to be fixed, or something else? Thanks. Here is the code:

import SwiftUI
import RealityKit

struct ContentView : View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack (spacing: 0){
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis:[0,1,1])
        }
    }
}

And here is the view.
0
0
432
Nov ’24
How to insert modified video frames into the system camera to achieve an AI erase effect?
By applying for the Enterprise APIs, we can obtain the video frames captured by the Vision Pro's cameras, and we then process those frames to eliminate a certain object. But we have not found a way to insert the processed video frames back into the stream captured by the system camera. So I would like to ask: is there an API that can insert processed video frames into the original data and present them to the user? The effect would be similar to turning the dial (Digital Crown) on the right side of Vision Pro, which allows the physical world and digital space to blend after rotation. Is there a related API that can solve this problem?

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera feed.

System: visionOS 2.0
APIs used: Enterprise APIs, main camera access permissions
1
0
557
Nov ’24
[ARKit, Reality Composer Pro] Is it possible to load an Immersive Scene after recognizing preregistered images?
Hi! I want to know whether it's possible to load an Immersive Scene after scanning (recognizing) preregistered images or objects. I tried to load the immersive scene after scanning images and objects, but it didn't work well. Please let me know about a solution if it's possible. Here is the ImmersiveView.swift code I tried:

// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }

            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
1
0
640
Nov ’24
RealityView Attachments on iOS 18 & Visually Appealing AR Labeling Alternatives
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene, and the labels could be more complicated than just text and a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView is supported on iOS 18. I tried running the RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:

import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Setup your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}

When I remove the "attachments" keyword and its block, the errors mostly go away. That does not help me, as I want to attach SwiftUI views to anchor entities in RealityKit. As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
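For context, the text-mesh fallback I'd rather avoid looks roughly like this (a sketch only; anything beyond plain text, such as images or animations, would still have to be baked into textures by hand):

import RealityKit
import UIKit

/// Builds a simple 3D text label entity as a fallback for SwiftUI attachments on iOS.
func makeTextLabel(_ string: String) -> ModelEntity {
    let mesh = MeshResource.generateText(string,
                                         extrusionDepth: 0.001,          // thin, flat text
                                         font: .systemFont(ofSize: 0.05), // size in meters
                                         containerFrame: .zero,
                                         alignment: .center,
                                         lineBreakMode: .byWordWrapping)
    let material = UnlitMaterial(color: .white)
    return ModelEntity(mesh: mesh, materials: [material])
}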
0
0
627
Nov ’24
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content using Room Tracking for Vision Pro these days, so I searched for information about it. Here are the links I visited, but I could not find the info I wanted: Apple ARKit, Create enhanced spatial computing experiences with ARKit, RoomTrackingProvider. I want to know whether it's possible to remember a room structure that was recognized before, and to add content at a certain world anchor in the room space when the user enters the room again. For example, a developer could save the room structure, room info (with a room ID) and a world anchor of the room with the Room Tracking feature. After this, the developer could add entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users can see the content whenever they visit the room. Is this possible? If there are example codes or projects about it, please let me know.
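To show the direction I imagine, here is a rough sketch. It rests on two assumptions I have not confirmed: that world anchors added to WorldTrackingProvider are persisted by the system and come back on later visits to the same room, and that a RoomAnchor's ID could serve as the key for associating saved content with a room:

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
let roomTracking = RoomTrackingProvider()

func runProviders() async throws {
    try await session.run([worldTracking, roomTracking])

    // Persist a world anchor for a content position inside the current room.
    // Assumption: the system stores it and replays it via anchorUpdates on a later visit.
    let anchor = WorldAnchor(originFromAnchorTransform: matrix_identity_float4x4)
    try await worldTracking.addAnchor(anchor)

    // On later runs, listen for previously persisted anchors and re-attach content.
    for await update in worldTracking.anchorUpdates where update.event == .added {
        // Hypothetical lookup: map the anchor's ID to saved room/content metadata.
        print("Recovered world anchor \(update.anchor.id)")
    }
}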
1
0
715
Nov ’24