Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

197 Posts

Post

Replies

Boosts

Views

Activity

SwiftUI Manual Orientation Control for Views - Solution and a Bug!
I am working on a project that contains a QuickLook view and some ARViews. I want to restrict the entire app to Portrait orientation, but I want to allow the ARView to support both Portrait and Landscape. If I restrict the app to Portrait in the Deployment Info settings, we can still turn the device to landscape in the ARView; however, there is an issue with "some" spatial audio files within the digital experience. Some spatial audio items are placed appropriately, and others are panned oddly to the left. If we allow Landscape Left and Right in the Deployment Info settings, all spatial audio behaves appropriately. So we need to "lock" every other view to Portrait and only allow Portrait and Landscape on the ARView. I'm not smart enough to know how to do that, but I found this excellent package on GitHub, and it works as expected. https://github.com/wvteijlingen/swiftui-interface-orientation However! When we wrap SwiftUI with UIKit, it appears every single view that contains an ARView is initialized at launch even though it is not visible. So when the app launches, it is running multiple ARViews at once. It appears we need some kind of lazy loading so this doesn't occur, but again, I am not knowledgeable enough for this yet. I tried to wrap it all in a LazyVStack, and I tried a LazyView struct, but I couldn't get it to build appropriately. I feel like this might be a common thing, so maybe there's already a simple answer I'm not able to locate? Any ideas??
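A minimal sketch of the LazyView approach mentioned above, assuming the goal is simply to defer construction of any AR-containing view until SwiftUI actually renders it; `ARExperienceView` in the usage comment is a hypothetical placeholder for the poster's own view.

```swift
import SwiftUI

/// Lazy wrapper: the wrapped view's initializer runs only when SwiftUI
/// evaluates this view's body, not when the parent view is built.
struct LazyView<Content: View>: View {
    private let build: () -> Content

    init(_ build: @autoclosure @escaping () -> Content) {
        self.build = build
    }

    var body: Content {
        build()
    }
}

// Hypothetical usage: the ARView-backed screen is only created on navigation.
// NavigationLink(destination: LazyView(ARExperienceView())) { Text("Start AR") }
```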
1
0
448
Jan ’25
ARKit: Prevent Asset Clipping
Hello Apple Team, I am working on a RealityKit project for iOS where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters). When people occlusion is enabled, the 3D asset gets clipped when moved far away. Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away, to prevent clipping? I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously. Thank you for your help! Kind regards
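For reference, a sketch of the runtime switch the post mentions, assuming people occlusion is driven by the `.personSegmentationWithDepth` frame semantic; this toggles occlusion for the whole session only, so it does not by itself give per-asset behavior.

```swift
import ARKit
import RealityKit

// Sketch: enable or disable people occlusion by re-running the session
// with updated frame semantics. Whether toggling this based on asset
// distance is an acceptable workaround here is an assumption.
func setPeopleOcclusion(_ enabled: Bool, on arView: ARView) {
    guard let config = arView.session.configuration as? ARWorldTrackingConfiguration else { return }
    if enabled,
       ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        config.frameSemantics.remove(.personSegmentationWithDepth)
    }
    arView.session.run(config)
}
```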
1
0
498
Jan ’25
Augmented Reality app unable to load the image from the camera
I have had an app on the App Store for many years that enables users to post text into clouds in augmented reality. Yet last week, abruptly, upon installing the app on the iPhone, the screen started going totally dark and a list of barely comprehensible log messages came up, of the kind:

ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.

many times, then

ARWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108

again several times, and then:

ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }

and then the pose issues again, several times. I hoped the new beta version would have solved the issue, but that was not the case. Unfortunately I do not know whether this depends on the beta version or some other issue, given that the app cannot be installed on the Mac simulator.
15
2
2.1k
Jan ’25
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a Spatial Tracking Session and an Anchor Entity to detect the floor, but they glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, yet they left out a lot of information.

.gesture(
    SpatialTapGesture(
        coordinateSpace: .immersiveSpace
    )
    .targetedToAnyEntity()
    .onEnded { value in
        handleTapOnFloor(value: value)
    }
)

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision to an AnchorEntity when we don't know its size or shape? I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project Apple released does not contain any of these features. I would like to be able to:

Detect the floor plane
Get the position/transform of the floor plane
Add a collider to the floor plane
Enable collisions and physics on the floor plane
Enable gestures on the floor plane

It seems to me that the Anchor Entity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization. It is just a point, not a plane or rect that I can use. I've tried manually calculating the collision shape after the anchor is detected, but nothing I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do ANYTHING at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor. Is there any way at all, with Spatial Tracking Session and Anchor Entity, to get the actual plane that was detected?

struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel", {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            })
        }
    }
}
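One workaround sketch, offered as an assumption rather than a confirmed answer: since the detected plane's extent is not exposed through AnchorEntity, give the floor anchor a large, thin, static collision box so taps, gestures, and physics at least have something to hit. The 20 m extent is arbitrary.

```swift
import RealityKit

// Sketch: approximate the floor with an oversized thin collision box,
// because the real plane geometry isn't available from the AnchorEntity.
func addFloorCollision(to floorAnchor: AnchorEntity) {
    let floorShape = ShapeResource.generateBox(width: 20, height: 0.01, depth: 20)
    floorAnchor.components.set(CollisionComponent(shapes: [floorShape]))
    floorAnchor.components.set(InputTargetComponent())
    floorAnchor.components.set(PhysicsBodyComponent(
        shapes: [floorShape],
        mass: 0,
        mode: .static   // the floor should never move
    ))
}
```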
2
0
791
Jan ’25
RealityKit/ARKit Environment Texturing broken on iOS 18
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing). Instead, just a default IBL is used by RealityKit. This happens with RealityView as well as ARView. It also happens when I explicitly opt in to environment texturing:

let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)

Even the Xcode AR Template has this issue. I'm attaching a screenshot of the sample app running on iOS 18 where it's broken and from iOS 17 where it works as expected. I hope this can get resolved quickly since I see it as a major regression. Feedback ID: FB15091335

UPDATE: It works on my older iPhone XS (iOS 18 22A5282m)
Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a))
Maybe it's related to LiDAR?

Thank you!
iOS 17 (works):
iOS 18 (broken):
3
1
995
Jan ’25
Publishing changes from within view updates is not allowed - SwiftUI with ARKit
Hello, I'm encountering the error "Publishing changes from within view updates is not allowed, this will cause undefined behavior" while developing an app using SwiftUI and ARKit. I have an ObjectTracking class where I update some @Published variables inside a function called processObjectAttributes after detecting objects. However, when I try to update these state variables in the view (like isPositionChecked, etc.), the error keeps appearing. Here is a simplified version of my code:

class ObjectTracking: ObservableObject {
    @Published var isPositionChecked: Bool = false
    @Published var isSizeChecked: Bool = false
    @Published var isOrientationChecked: Bool = false

    func checkAttributes(objectAnchor: ARObjectAnchor, _ left: ARObjectAnchor.AttributeLocation, _ right: ARObjectAnchor.AttributeLocation? = nil, threshold: Float) -> Bool {
        let attributes = objectAnchor.attributes
        guard let leftValue = attributes[left]?.floatValue else { return false }
        let rightValue = right != nil ? attributes[right!]?.floatValue ?? 0 : 0
        return leftValue > threshold && (right == nil || rightValue > threshold)
    }

    func isComplete(objectAnchor: ARObjectAnchor) -> Bool {
        isPositionChecked = checkAttributes(objectAnchor: objectAnchor, .positionLeft, .positionRight, threshold: 0.5)
        isSizeChecked = checkAttributes(objectAnchor: objectAnchor, .sizeLeft, .sizeRight, threshold: 0.3)
        isOrientationChecked = checkAttributes(objectAnchor: objectAnchor, .orientationLeft, .orientationRight, threshold: 0.3)
        return isPositionChecked && isSizeChecked && isOrientationChecked
    }

    func processObjectAttributes(objectAnchor: ARObjectAnchor) {
        currentObjectAnchor = objectAnchor
    }
}

In my view, I am using @ObservedObject to observe the state of these variables, but the error persists when I try to update them during view rendering. Could anyone help me understand why this error occurs and how to avoid it? I understand that state should not be updated during view rendering, but I can't find a solution that works in this case. Thank you in advance for your help!
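A common way around this warning, sketched here under the assumption that isComplete (or something like it) ends up being called while SwiftUI is evaluating a view body, is to defer the @Published writes so they happen after the current view update; the helper name below is hypothetical.

```swift
import Foundation

// Sketch: defer the published mutations to the next main-queue pass
// (or use Task { @MainActor in ... }), so they never occur inside a
// SwiftUI view update. Property names mirror the ObjectTracking class above.
extension ObjectTracking {
    func updateChecks(position: Bool, size: Bool, orientation: Bool) {
        DispatchQueue.main.async {
            self.isPositionChecked = position
            self.isSizeChecked = size
            self.isOrientationChecked = orientation
        }
    }
}
```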
Topic: UI Frameworks, SubTopic: SwiftUI
0
0
309
Jan ’25
Comparing colors of two ModelEntities
I want to compare the colors of two model entities (spheres). How can I do it? The method I'm currently trying to apply is as follows:

if case let .color(controlColor) = controlMaterial.baseColor, controlColor == .green {
    // Flip target sphere colour
    if let targetMaterial = targetsphere.model?.materials.first as? SimpleMaterial,
       case let .color(targetColor) = targetMaterial.baseColor, targetColor == .blue {
        targetsphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // Change to |1⟩
    } else {
        targetsphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // Change to |0⟩
    }
}

The baseColor property was deprecated in iOS 15.0 in favor of 'color', but I cannot compare the color values to each other. 👾
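For reference, a minimal sketch of the comparison using the newer API, assuming both entities use SimpleMaterial; the helper below is hypothetical, not from the original post.

```swift
import RealityKit
import UIKit

// Sketch: compare the tint colors of two ModelEntities that use SimpleMaterial.
// `color.tint` replaces the deprecated `baseColor`. UIColor equality is exact,
// so compare components with a tolerance if the colors come from different sources.
func haveMatchingTints(_ a: ModelEntity, _ b: ModelEntity) -> Bool {
    guard let materialA = a.model?.materials.first as? SimpleMaterial,
          let materialB = b.model?.materials.first as? SimpleMaterial else { return false }
    return materialA.color.tint == materialB.color.tint
}
```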
1
0
623
Jan ’25
VisionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics I have read in the answer to another post that this extrinsics matrix represents the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display? What is the coordinate system in which this offset is defined: which axis is left, which one is up, which one is forward? The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume that the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm in front of the user and 2 cm towards the bottom; which of x and y is horizontal, and which one is vertical? How is the camera image indexed: is it row-major, and is the origin at the top left? I am looking forward to learning all the details of these extrinsics in order to make use of them.
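Not an authoritative answer on the axis conventions, but a sketch of how such an extrinsics matrix is typically used, under the same assumption the post makes (that it expresses the physical camera's pose relative to the device anchor); whether the matrix or its inverse is needed depends on ARKit's convention.

```swift
import simd

// Sketch: compose a device-anchor-to-world transform (queried elsewhere, e.g.
// via a world tracking provider) with the camera extrinsics to obtain the
// camera's pose in world space. If ARKit defines extrinsics the other way
// around, use extrinsics.inverse instead.
func cameraPoseInWorld(deviceToWorld: simd_float4x4,
                       extrinsics: simd_float4x4) -> simd_float4x4 {
    // world_T_camera = world_T_device * device_T_camera
    return deviceToWorld * extrinsics
}
```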
4
0
792
Jan ’25
Billboard Entity with AttachmentView
Hey Everyone, Happy New Year! I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button. The attachment moves with the billboard, but the collision box around the attachment is not following it. If I position myself perfectly, it works. Video Example: https://youtu.be/4d9Vx7K8MmU

//
//  ImmersiveView.swift
//  Billboard Attachment
//
//  Created by Justin Leger on 1/3/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and disable it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        } attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {
    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
5
0
853
Jan ’25
Persisting Anchors in RealityView with ARMode on iOS
Platform: iOS 18 Tech: RealityView Hi! I was wondering if RealityView now provides a way for its session to persist anchor data in a world, such that the anchor locations saved in one session can be loaded in another session and restore the exact same anchor positions. I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under its hood but does not expose the ARKit session info publicly to the client code. So I was wondering if there's a SwiftUI + RealityView approach that can help me achieve a similar goal: come back to the same location and see the object in exactly the same place. Thanks!
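For reference, a sketch of the ARWorldMap persistence path, under the assumption that dropping down to ARView (which does expose its ARSession) is acceptable instead of RealityView; error handling is simplified for illustration.

```swift
import ARKit
import RealityKit

// Sketch: archive the current world map to disk so a later session can
// relocalize and restore anchors at the same physical locations.
func saveWorldMap(from arView: ARView, to url: URL) {
    arView.session.getCurrentWorldMap { worldMap, error in
        guard let worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Sketch: reload the saved map and re-run the session with it as the initial map.
func restoreWorldMap(into arView: ARView, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = worldMap
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```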
0
1
516
Jan ’25
Cannot assign build target to usdz file
I'm working in an App Playground and want to add my usdz file, but when I drag and drop the file into my main folder I cannot assign a target to it, which leads to a "resource not found" error when I build my app. It was working in a normal Xcode project, but after transitioning to an App Playground it is not working. How can I fix this issue?
0
0
360
Jan ’25
Scanning Smaller Objects with RoomPlan (Light Switches or Sockets)
Hi everyone, I’m currently developing an app using Apple’s RoomPlan framework, and so far, everything is working great! However, I’d like to extend the functionality to include scanning smaller objects, such as light switches or power outlets, in addition to the walls and larger furniture that RoomPlan already supports. From what I understand, based on the documentation, RoomPlan doesn’t natively support the detection or measurement of smaller objects like these. Is that correct? If that’s the case, does anyone have suggestions or ideas on how this could be achieved? Perhaps by integrating another framework or technology alongside RoomPlan? I’d appreciate any insights or advice from those who have worked on similar use cases. Thanks in advance!
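One possible direction, sketched under the assumption that RoomPlan's underlying ARSession is accessible (RoomCaptureSession.arSession, iOS 17+): reuse its camera frames to run your own detector (for example a Vision/Core ML model) for small fixtures such as switches and sockets, since RoomPlan itself does not report them. This is an illustration, not a confirmed recipe.

```swift
import RoomPlan
import ARKit

// Sketch: piggyback on RoomPlan's AR session and inspect each frame with a
// custom detection pipeline for small wall-mounted objects.
final class SmallObjectScanner: NSObject, ARSessionDelegate {
    private let captureSession = RoomCaptureSession()

    func start() {
        captureSession.arSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Feed frame.capturedImage into a custom detector here, then raycast
        // from the detected 2D points to place anchors on the nearby wall.
    }
}
```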
1
0
562
Jan ’25
Is it possible to create a sentence hover effect in Vision Pro?
I want a custom hover effect for a sentence, not a button. I want a hover effect to appear when you look at one sentence out of many sentences. So I searched for reference videos https://youtu.be/DftRTx1oX6E , https://developer.apple.com/videos/play/wwdc2023/10110/ on Apple's YouTube channel and in the visionOS documentation. But I haven't gotten anywhere near the feature I want yet. I respectfully request someone to help me. :)
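A minimal sketch of one approach, assuming plain SwiftUI text on visionOS: split the passage into one Text per sentence and make each its own hover target, so only the sentence being looked at highlights. The sample sentences are placeholders.

```swift
import SwiftUI

struct SentenceHoverView: View {
    let sentences = [
        "This is the first sentence.",
        "This is the second sentence.",
        "Only the sentence you look at lights up."
    ]

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            ForEach(sentences, id: \.self) { sentence in
                Text(sentence)
                    .padding(6)
                    // Give each sentence its own rounded hover shape and highlight.
                    .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 8))
                    .hoverEffect(.highlight)
            }
        }
        .padding()
    }
}
```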
1
0
408
Jan ’25
ARView.Environment.SceneUnderstanding.Options.occlusion not working on models that aren't opaque
Is this behaviour expected? For example, if I'm using

let materials = [SimpleMaterial(color: .red, isMetallic: false)]

occlusion works normally, but with

let materials = [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)]

I can see my cube through real-world objects, like tables, columns, etc. I'm getting the same behaviour when using CustomMaterial from a shader and applying customMaterial.blending = .opaque and customMaterial.blending = .transparent(opacity:) respectively.
0
0
495
Dec ’24
Rounded button in realitykit using swiftui
I'm developing an AR app using RealityKit and ARKit, and I want my buttons to follow the visionOS button theme: thin, with a transparent background, and rounded at the corners. Following is the code I have written; I need help with it.

func createButton(label: String, position: SIMD3<Float>) -> ModelEntity {
    let button = ModelEntity(
        mesh: .generateBox(size: [0.3, 0.1, 0.02], cornerRadius: 10),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    button.generateCollisionShapes(recursive: true)
    button.position = position

    // Add button label
    let buttonText = ModelEntity(
        mesh: .generateText(label, extrusionDepth: 0.005, font: .systemFont(ofSize: 0.05)))
    buttonText.model?.materials = [SimpleMaterial(color: .white, isMetallic: true)]
    buttonText.position = [-0.07, -0.02, 0.01]
    button.addChild(buttonText)

    return button
}
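A minimal sketch of one way to get closer to the visionOS look, assuming a translucent tint is enough (true glass blur isn't available for ARView materials): use a mostly transparent white material on a rounded box instead of the opaque blue one above. The 0.15 alpha and corner radius are arbitrary choices.

```swift
import RealityKit
import UIKit

// Sketch: a thin, rounded, semi-transparent plate to use as a button background.
func makeGlassyButtonBackground() -> ModelEntity {
    let material = SimpleMaterial(color: UIColor.white.withAlphaComponent(0.15),
                                  isMetallic: false)
    return ModelEntity(
        mesh: .generateBox(size: [0.3, 0.1, 0.02], cornerRadius: 0.01),
        materials: [material])
}
```

Note that generateBox's cornerRadius is in meters, so a value of 10 on a 0.3 m box effectively has no visible effect; something around 0.01 gives a softly rounded edge.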
1
0
567
Dec ’24
BillboardComponent causing Model Entity tap recognition issues on iOS 18
Hi, When I attach a BillboardComponent to anchor entities, I am no longer able to retrieve the tapped entity, because the collision shapes of the entity are thrown off by constantly orienting it towards the camera. And the collision shapes are not being updated: if I tap somewhere that is not my model entity, I get a hit out of nowhere. I tried updating the collision shapes of the entity every frame:

for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}

However, nothing comes of it, and it is not a smart solution in the first place because it is too heavy to recreate the shapes every frame. I am using the usual AR view controller setup, which works fine when I comment out the BillboardComponent line:

private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}

How should we tackle this issue on iOS 18? Thanks!
1
0
675
Dec ’24
VisionOS, passthrough through broadcast shows a black background
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement on the main project and the "Broadcast Upload" extension, too. The broadcast works, except it returns a black screen. I am attaching some screenshots below of the entitlement file. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.

import Foundation
import AVFoundation
import ReplayKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345

        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?

        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)

        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }

        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")

        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")

        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }

            if outputStream == nil {
                setupConnection()
            }

            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count)
            }

            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count)
            }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}

This is the broadcast picker view wrapper:

// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) { }
}

The extension SampleHandler:

override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}

Looking forward to any and all help.
Information Property list:
Information property list for the extension:
The capabilities:
1
0
485
Dec ’24
Nested USDZs new behaviour with Quick Look
Hello there, I have a nested USDZ file I created a long time ago with some make-up products. Its behaviour was always the same in the past: nested USDZ files let you control each "nested" object separately, so I used this as a way to allow people to play around with a set of objects. Today I went to try it, and since I'm on iOS 18 Quick Look now shows an "Assets" tab at the bottom that lets me see all the assets inside the file, but it doesn't allow me to view or anchor them at all! What changes do I need to make for this to work, and where can I find documentation on these new behaviours for USDZ files? And what will this allow for in the future? Thank you in advance
0
1
490
Dec ’24