I’m working on an iOS app that needs to measure the area of planes or surfaces, such as the length and width of objects, just like the Apple Measure app does. I’ve been exploring ARKit, but I’m curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I’m looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box’s width and length. I have attached a screenshot and a video of the Apple Measure app doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I’d love to hear from anyone who’s tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
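For context, here is roughly what I have been experimenting with so far: a minimal sketch that turns on ARKit plane detection and reads the estimated extent of each detected plane (the class name and delegate wiring are just my own test harness, not anything from the Measure app):

import ARKit

final class PlaneMeasurementController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)
    }

    // ARKit refines plane anchors over time, so the extent grows as more of the surface is observed.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            let width = plane.planeExtent.width   // meters along the plane's local X axis
            let length = plane.planeExtent.height // meters along the plane's local Z axis
            print("Plane \(plane.identifier): \(width) m x \(length) m, area ≈ \(width * length) m²")
        }
    }
}

This only gives me the bounding extent of the detected plane, though, not the tight footprint of an object sitting on top of it, which is why I am asking whether there is something closer to what the Measure app does.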
Hello, I'm adding a CollisionComponent to an entity in RealityView. CollisionComponent requires a mesh to be provided as a reference for collision detection. However, to achieve more accurate detection, I would like that collision shape to come from the geometry of a USDZ model. Is there any way to make this happen? Thank you!
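For reference, the direction I have been exploring (I am not sure it is the intended approach): newer RealityKit releases appear to offer ShapeResource generators that take a MeshResource, so in principle the collision shape could be built from the model's own render mesh, roughly like this (the function and parameter names are my own placeholders):

import RealityKit

/// Builds a collision shape from the visual mesh of a loaded USDZ entity.
/// `modelEntity` is assumed to be the descendant of the USDZ hierarchy that carries the ModelComponent.
func addMeshBasedCollision(to entity: Entity, using modelEntity: ModelEntity) async throws {
    guard let mesh = modelEntity.model?.mesh else { return }

    // generateStaticMesh produces a tight, non-convex shape from the render mesh;
    // generateConvex(from:) is cheaper if a convex hull is accurate enough.
    let shape = try await ShapeResource.generateStaticMesh(from: mesh)
    entity.components.set(CollisionComponent(shapes: [shape]))
}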
Previously I developed software using SMI eye trackers, both screen-mounted and their mobile glasses, for unique therapeutic and physiology applications. Sadly, after SMI was bought by Apple, their hardware and software were taken off the market, and it is now very difficult to get systems on the secondhand market. The Apple Vision Pro integrates the SMI hardware. While I can use ARKit to get gaze position, I do not see a way to access information that the SMI hardware previously exposed, particularly dwell time and pupil diameter. I am hoping (or asking) that, if a user has a properly set up Optic ID and opts in, it might be possible, either in the present or a future version of visionOS, to get access to the data streams for dwell times and pupil diameter. Pupil diameter is particularly important because it is a very good physiological measure of how much stress a person is experiencing, which is critical to some of the therapeutic applications for which we formerly used SMI hardware. Any ideas, or, if this is not possible, proposing this to the visionOS team would be appreciated!
Hi,
Is there a way to create an AnchorEntity that is attached to the window / WindowGroup of a visionOS app, so that there would be a box that aligns with the window?
Thanks for your help!
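One pattern I have been assuming applies here (I have not found a window-targeted AnchorEntity): skip the anchor entirely and add the box inside a RealityView that lives in the window or volume, since that content is laid out in the window's own coordinate space and moves with it. A minimal sketch, where the view and window id names are my own:

import SwiftUI
import RealityKit

// A view for a volumetric window whose RealityKit content stays aligned with the window,
// because RealityView content is expressed in the window's coordinate space.
struct WindowAlignedBoxView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            content.add(box) // sits at the window/volume origin and follows the window when it is moved
        }
    }
}

// Assumed wiring in the App body:
// WindowGroup(id: "BoxWindow") { WindowAlignedBoxView() }
//     .windowStyle(.volumetric)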
I'm writing code using ObjectAnchor for visionOS. If an object is tracked and then becomes not visible (either because the user looked in a different direction, or because the tracked object was occluded by another object), it is still tracked and you still get anchor updates (i.e., object permanence).
For my application, it would be very helpful if I could determine if the object is currently being observed, or it is not currently observed and just assumed to be in the same location as seen previously.
ObjectAnchor.isTracked just seems to indicate whether it is getting anchor updates. I don't see anything in the ObjectAnchor or AnchorUpdate that would allow me to determine if the object is currently observed. Does anyone know of a way to do this, or would this be a feature request?
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions (see the sketch after this list)
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
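The anchor cleanup between sessions currently looks roughly like this (simplified; hotspotAnchors is our own bookkeeping array of the ARAnchors we place for the guidance dots, and we are not certain that resetting tracking this way is the recommended pattern alongside RoomCaptureSession):

import ARKit

func resetBetweenRooms(arView: ARSCNView, hotspotAnchors: inout [ARAnchor]) {
    // Drop every guidance anchor left over from the previous room.
    for anchor in hotspotAnchors {
        arView.session.remove(anchor: anchor)
    }
    hotspotAnchors.removeAll()

    // Restart world tracking with a clean slate before the next RoomCaptureSession run.
    let configuration = ARWorldTrackingConfiguration()
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}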
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.
Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable when playing the audio for the first time. After that, it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession.
Also, the issue is more noticeable on a real device than in the simulator.
//
// IssueApp.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
// ContentView.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}
//
// SoundManager.swift
// LinguaBubble
//
// Created by Zhendong Chen on 1/14/25.
//
import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}
//
// UpAndDownComponent+System.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Calculate the current position
            let currentY = entity.transform.translation.y

            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}
Could someone help me with this?
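One workaround I have been experimenting with, under the assumption that the hitch comes from AVAudioPlayer(contentsOf:) doing its file loading and buffer setup on the calling (main) thread: create and prepare the players up front so that the button tap only has to call play(). A sketch of that variant:

import AVFoundation

// Variant of SoundManager that preloads its players, so the first tap
// does not pay the file-loading cost on the main thread.
final class PreloadedSoundManager {
    static let instance = PreloadedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]

    // Call once at launch (or in onAppear) for every clip the view can trigger.
    func preload(_ filePath: String) {
        guard players[filePath] == nil,
              let url = Bundle.main.url(forResource: filePath, withExtension: "mp3"),
              let player = try? AVAudioPlayer(contentsOf: url) else { return }
        player.prepareToPlay() // fills the playback buffers ahead of time
        players[filePath] = player
    }

    func playSound(filePath: String) {
        guard let player = players[filePath] else { return }
        player.currentTime = 0
        player.play()
    }
}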
Hi there.
Thanks to amazing help from you guys, I've managed to code a 360 image carousel, where the user can browse 360 images located inside the project package.
Is there a way to access the filesystem on the AVP outside the app?
I know about FileManager, and I can get access to .documentsDirectory, but how do I access the Documents folder that shows up in the "Files" app on the AVP?
My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves.
I know this may not be the "right" way to do this. The app is supposed to be "foolproof", with a minimum of user interaction.
The only way to change the images should be to change the contents of the hardcoded image folder.
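In case it is relevant, the approach I am currently considering (assuming the standard Info.plist keys behave the same on visionOS as on iOS) is to expose the app's own Documents folder in the Files app by setting UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace to YES, and then read whatever images sit in a fixed subfolder:

import Foundation

// Reads every image from a hardcoded subfolder of the app's Documents directory.
// With UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace set to YES in Info.plist,
// this folder should appear under "On My Apple Vision Pro" in the Files app.
func loadPanoramaURLs() -> [URL] {
    let imagesFolder = URL.documentsDirectory.appending(path: "Panoramas") // hardcoded folder name (my choice)
    let contents = (try? FileManager.default.contentsOfDirectory(
        at: imagesFolder,
        includingPropertiesForKeys: nil
    )) ?? []
    return contents
        .filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
        .sorted { $0.lastPathComponent < $1.lastPathComponent }
}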
I hope this makes sense =)
Thanks in advance!
Regards,
Kim
I have a simple example of a motion-matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input.
I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?
error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed
I'm using Xcode 16 beta 6
Unity 6000.0.17f1
visionOS 2.0 beta 9
I'm using hand tracking to detect collisions between fingertips and entities that I have placed in the scene. I'm using the .mixed environment.
However, I want to detect when a fingertip touches a real-world object such as a wall.
No matter what I try, I can't get the collision to fire. I'm using the SceneReconstructionProvider to give me world meshes, which I use to create ModelEntity objects to which I add a CollisionComponent with the shape of the object.
I can render the meshes just fine, but nothing I do seems to allow collisions to work.
Surely this is possible, what am I missing?
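For reference, this is roughly how I am building the collision entities from the scene mesh, based on my reading of Apple's scene-reconstruction sample (the fingertip spheres carry their own CollisionComponent):

import ARKit
import RealityKit

// Turns a MeshAnchor from SceneReconstructionProvider into an invisible collision entity.
func makeCollisionEntity(for meshAnchor: MeshAnchor) async throws -> Entity {
    // Static, non-convex collision shape generated directly from the reconstructed mesh.
    let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)

    let entity = Entity()
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    entity.components.set(PhysicsBodyComponent(mode: .static))
    return entity
}

Even with these entities in place, no collision events fire against the fingertip spheres, which is the part I cannot figure out.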
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I ran into a compile error while attempting to specify the spatial audio type.
It is a 360 video on a sphere, and it plays just fine, but the audio is much quieter than when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:
'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)
https://developer.apple.com/documentation/realitykit/videomaterial/
Code:
let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won’t compile.
visionOS 2.4, Xcode 16.4; also tried Xcode 26 beta 2.
The videos are HEVC in MPEG-4 containers.
Is there any other way to do this, or is there a workaround available?
Thank you.
Hello,
I'm currently trying to make a collaborative app, but it only works in a RealityView; when I tried to use a CompositorLayer as shown below, the Personas disappeared.
ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}
Is there any potential solution to see Personas in a Metal view?
Thanks in advance!
We use SceneReconstructionProvider to detect meshes in the surrounding environment and apply an OcclusionMaterial to them.
// Assuming `entity` represents one of the detected meshes in the environment
entity.components.set(ModelComponent(
    mesh: mesh,
    materials: [OcclusionMaterial()]
))
While this correctly occludes entities placed in the immersive space, it also occludes system windows. This becomes problematic when a window is dragged into an occluded area (before or after entering the immersive space), preventing interaction with its elements. In some cases, it also makes it impossible to focus on the window’s drag handle, since this might become occluded as well after moving the window nearby. More generally, system windows can be occluded when they come into proximity with a model that has OcclusionMaterial applied.
I'm aware of a change introduced in visionOS 2 regarding how occlusions interact with UI elements (as noted in the release notes). I believe this change was intended to ensure windows do not remain visible when opened in another room. However, this also introduces some challenges, as described in the scenario above.
Is there a way to prevent system window occlusion while still allowing entities to be occluded by environmental features? Perhaps not using OcclusionMaterial at all?
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.2 and 2.3
I’m facing an issue while using CustomHoverEffect. In my view, there is a long title, which causes the title to be truncated. When the user hovers over it, the title should scroll. Although I have already implemented the scrolling effect, I am unsure how to trigger the scroll on hover. How should I approach this?
I have been implementing the LongPressGesture to have a menu come up upon a long press. I love the functionality and it is very close to being where I want it to be.
I don't know if this is a visionOS-specific thing, but I am hoping to control the corner radius of the pulled-out element behind my "button." I've wrangled hover effects in the past with overlays, but I'm not sure what to target in this case.
Worst case, I'll have to change the border radius on all of my tiles to match this LongPressGesture-controlled behavior, or I could possibly change the radius onLongPressGesture to match.
Is there a simpler solution? Thanks!
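One thing that might be worth trying (I have not verified that this is what drives the long-press highlight shape): SwiftUI lets you override the shape used for hover effects and context-menu previews with contentShape and a ContentShapeKinds value, so if the pulled-out element is a context-menu preview, its radius could in theory be matched to the tile like this:

import SwiftUI

struct TileView: View {
    var body: some View {
        Text("Tile")
            .padding(40)
            .background(.thinMaterial, in: RoundedRectangle(cornerRadius: 24))
            // Shape used by the system hover highlight on visionOS.
            .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 24))
            // Shape used for the lifted preview when a context menu / long press is presented.
            .contentShape(.contextMenuPreview, RoundedRectangle(cornerRadius: 24))
            .hoverEffect(.highlight)
    }
}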
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
HoverEffectComponent on macOS 15 and iOS 18 works fine using RealityView, but seems to be ignored when ARView (even with a SwiftUI UIViewRepresentable) is used.
Feedback ID: FB15080805
At the moment the MapKit APIs only support non-volumetric maps (i.e. in a window or in a volume, but on a 2D surface).
Is support for 3D volumetric maps on visionOS in the works? And if so, when can we expect it to be available?
.glassEffect(.regular, in: .rect(cornerRadius: 24))
error: 'glassEffect(_:in:isEnabled:)' is unavailable in visionOS
This is not surprising, since visionOS already has a native glass interface that served as a model for the other OSes, but this error creates additional overhead for developers building multi-platform apps that include visionOS.
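Until that changes, the workaround I have settled on for multi-platform code is a small conditional modifier that falls back to visionOS's existing glass background; something like this sketch (the modifier name is mine, and the non-visionOS branch needs the iOS/macOS 26 SDK):

import SwiftUI

// Uses the new Liquid Glass effect where it exists, and visionOS's
// native glass background otherwise.
struct AdaptiveGlass: ViewModifier {
    var cornerRadius: CGFloat = 24

    func body(content: Content) -> some View {
        #if os(visionOS)
        content
            .glassBackgroundEffect(in: RoundedRectangle(cornerRadius: cornerRadius))
        #else
        content
            .glassEffect(.regular, in: .rect(cornerRadius: cornerRadius))
        #endif
    }
}

extension View {
    func adaptiveGlass(cornerRadius: CGFloat = 24) -> some View {
        modifier(AdaptiveGlass(cornerRadius: cornerRadius))
    }
}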
So I am exporting a .usdc file from Blender that already has some morph animations. The animations play well in Blender, but when I export them I cannot seem to play them in RealityKit or Reality Composer Pro.
Entity.availableAnimations is an empty array.
None of the child objects in the entity hierarchy has an animation library component attached.
Maybe I am exporting it incorrectly; I have tried multiple export combinations, but none of them seem to work.
Here are my export settings in Blender:
The original file I purchased is an FBX file that has the animation, but when I try to convert it directly in Reality Converter it doesn't seem to play the animations.