Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or vertical) scrolling functionality that shows spatial photo and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use PreviewApplication to show a single preview, but I don't see anything for a collection of files like the scrolling view the Spatial Gallery app presents. Any insights or direction on how this may be done would be greatly appreciated.
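For context, showing a single preview works for me with something along these lines (a rough sketch; the file URL is a placeholder):

    import QuickLook

    // Rough sketch of the single-item preview I have working today.
    // The URL is a placeholder for a spatial photo in the app's container.
    let photoURL = URL(fileURLWithPath: "/path/to/spatial-photo.heic")

    // PreviewApplication opens the item in its own Quick Look preview scene
    // and returns a PreviewSession handle.
    let previewSession = PreviewApplication.open(urls: [photoURL])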
Hi,
We are trying to port our Unity app from other XR devices to Vision Pro, so it's much easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system.
But we just noticed that, unlike PolySpatial XR apps, visionOS XR with Metal does not provide gaze info unless the user is actively pinching, which rules out any attempt to give visual feedback on what they are looking at (buttons, etc.).
Is this on Apple's roadmap?
Thanks
Hello,
I've been tinkering a bit with TextComponent.
Based on the docs it seems like this component should always render sharp and nice text, no matter how close the user gets:
RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location.
And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just being rendered once as a plain image texture.
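For reference, I'm attaching the text roughly like this (a simplified sketch of my setup; the string and font size are arbitrary):

    import SwiftUI
    import RealityKit

    // Simplified sketch of how the text entity is set up in my project.
    func makeTextEntity() -> Entity {
        let entity = Entity()

        var textComponent = TextComponent()
        var content = AttributedString("Hello, world")
        content.font = .system(size: 24)   // arbitrary size for testing
        textComponent.text = content

        entity.components.set(textComponent)
        return entity
    }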
Can anyone tell me if this is expected behavior or a bug?
Here are two screenshots for comparison (iPhone and Vision Pro):
Thanks!
Hi All,
We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when either the video or the particle effect runs on its own the scene works fine, but the frame rate drops drastically when both are turned on.
How do I solve this? It's an important storytelling feature.
I’m working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just like the Apple Measure app does. I’ve been exploring ARKit, but I’m curious if there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I’d love to hear from anyone who’s tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
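For context, the direction I've been exploring so far with ARKit plane detection looks roughly like this (just a sketch, not a full measuring flow):

    import ARKit

    // Sketch: detect horizontal planes and read their current extent in meters.
    final class PlaneMeasurer: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]   // table/floor-like surfaces
            session.delegate = self
            session.run(configuration)
        }

        // ARKit keeps refining each plane anchor; its extent is the measured size.
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let plane as ARPlaneAnchor in anchors {
                let width = plane.planeExtent.width    // meters along the plane's local x axis
                let length = plane.planeExtent.height  // meters along the plane's local z axis
                print("Plane \(plane.identifier): \(width) m x \(length) m")
            }
        }
    }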
I am trying to launch a fully immersive game from Unity from a SwiftUI view. The game uses Metal rendering with Compositor Services.
I added the Unity Xcode project to the workspace and added the necessary bridge code. When I tap the button that calls ufw?.showUnityWindow(), the game does not start and I get the following in the console:
AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
With Xcode 26, loading resources with RealityKit is extremely slow.
Here my project takes almost 50 seconds to load.
I also get multiple "Hang detected" messages in the console.
When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
    private static func loadFromRealityComposerPro(
        named entityName: String,
        fromSceneNamed sceneName: String
    ) async -> Entity? {
        var entity: Entity?
        do {
            let scene = try await Entity(
                named: sceneName,
                in: visionPetsContentBundle
            )
            entity = scene.findEntity(named: entityName)
        } catch {
            print(
                "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
            )
        }
        return entity
    }
Anyone having the same problem?
Topic: Spatial Computing, SubTopic: General
Has RoomPlan been abandoned? Two years have gone by without comments from Apple on improvements. Are the improvements happening behind the scenes? Are there going to be any major updates?
Hi, I have a monitoring app that takes input video from UVC, processes it using Metal, and eventually produces an MTLTexture.
The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only reaches 7 fps...
Is there any way I can make it faster?
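For reference, the conversion path I'm using looks roughly like this (a simplified sketch; the real code manages its own texture and resource objects):

    import CoreImage
    import Metal
    import RealityKit

    // Sketch of the current (slow) path: MTLTexture -> CGImage -> TextureResource.
    let ciContext = CIContext()

    func update(_ textureResource: TextureResource, from texture: MTLTexture) throws {
        // Wrap the Metal texture in a CIImage, then rasterize it to a CGImage.
        guard let ciImage = CIImage(mtlTexture: texture, options: nil),
              let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
            return
        }
        // replace(withImage:options:) re-uploads the whole image each frame,
        // which is where most of the time seems to go.
        try textureResource.replace(withImage: cgImage, options: .init(semantic: .color))
    }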
Topic: Spatial Computing, SubTopic: General
Tags: Media Player, Frameworks, Media Accessibility, Core Media
.glassEffect(.regular, in: .rect(cornerRadius: 24))
Error: 'glassEffect(_:in:isEnabled:)' is unavailable in visionOS
This is not surprising, since visionOS already has a native glass interface that served as a model for the other OSes, but this error creates additional overhead for developers building multi-platform apps that include visionOS.
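The kind of per-platform shim this forces looks something like the following (illustrative only; the modifier name is made up and the sketch assumes an iOS/macOS + visionOS target):

    import SwiftUI

    extension View {
        /// Hypothetical helper: apply the glass effect where it exists,
        /// and fall back to a system material on visionOS (and older OS versions).
        @ViewBuilder
        func glassEffectIfAvailable(cornerRadius: CGFloat = 24) -> some View {
            #if os(visionOS)
            // visionOS already provides its own glass look.
            self.background(.regularMaterial, in: RoundedRectangle(cornerRadius: cornerRadius))
            #else
            if #available(iOS 26.0, macOS 26.0, *) {
                self.glassEffect(.regular, in: .rect(cornerRadius: cornerRadius))
            } else {
                self.background(.regularMaterial, in: RoundedRectangle(cornerRadius: cornerRadius))
            }
            #endif
        }
    }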
Here is my code in visionOS 2.3
    NavigationSplitView {
        List {
        }
        .navigationTitle("Passwords")
    } detail: {
        Text("Hello")
            .navigationTitle("All")
    }
The font sizes of "Passwords" and "All" are smaller than the ones in the Passwords app.
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote Participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
User 1 opens the share sheet from the Volume; the second Vision Pro is visible.
User 1 starts nearby sharing
Session initialisation runs for approx. 30 seconds, then fails
Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
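For completeness, our activity definition follows the standard pattern, roughly like this (the identifier and title here are placeholders):

    import GroupActivities

    // Placeholder version of our GroupActivity definition.
    struct NearbyExperienceActivity: GroupActivity {
        static let activityIdentifier = "com.example.app.nearby-experience"

        var metadata: GroupActivityMetadata {
            var metadata = GroupActivityMetadata()
            metadata.title = "Shared Experience"
            metadata.type = .generic
            return metadata
        }
    }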
Any help would be greatly appreciated.
Kind regards,
David
The landing page for visionOS 26 mentions
The Unified Coordinate Conversion API makes moving views and entities between scenes straightforward — even between views and ARKit accessory anchors.
This WWDC session very briefly shows a single example of using this, but with no context. For example, they discuss a way to tell the distance between a Model3D and an entity in a RealityView. But they don't provide any details for how they are referencing the entity (bolts in the slide).
The session used the BOT-anist example project that we saw in visionOS 2, but the version in the Sample Code library has not been updated with these examples.
I was able to put together a simple example where we can get the position of a window relative to the world origin. It even updates when the user recenters.
    struct Lab080: View {
        @State private var posX: Float = 0
        @State private var posY: Float = 0
        @State private var posZ: Float = 0

        var body: some View {
            GeometryReader3D { geometry in
                VStack {
                    Text("Unified Coordinate Conversion")
                        .font(.largeTitle)
                        .padding(24)
                    VStack {
                        Text("X: \(posX)")
                        Text("Y: \(posY)")
                        Text("Z: \(posZ)")
                    }
                    .font(.title)
                    .padding(24)
                }
                .onGeometryChange3D(for: Point3D.self) { proxy in
                    try! proxy
                        .coordinateSpace3D()
                        .convert(value: Point3D.zero, to: .worldReference)
                } action: { old, new in
                    posX = Float(new.x)
                    posY = Float(new.y)
                    posZ = Float(new.z)
                }
            }
        }
    }
This is all that I've been able to figure out so far. What other features are included in this new Unified Coordinate Conversion?
Can we use this to get the position of one window relative to another? Can we use this to get the position of a view in a window relative to an entity in a RealityView, for example in a Volume or Immersive Space? What else can Unified Coordinate Conversion do?
Are there documentation pages that I'm missing? I'm not sure what to search for. Are there any Sample projects that use these features? Any additional information would be very helpful.
Topic: Spatial Computing, SubTopic: General
At the moment the MapKit APIs only support non-volumetric maps (i.e. in a window or in a volume, but on a 2D surface).
Is support for 3D volumetric maps on visionOS in the works? And if so, when can we expect it to be available?
Hi,
I am creating an ECS, and with this ECS I will need to register several DragGestures.
Question: Is it possible to define DragGestures in an ECS? If yes, how do we do that? If not, what is the best way to do it?
Question: Is there a "gesture" method that takes an array of gestures as a parameter?
I am interested in any information that can help, ideally with a code example.
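For reference, this is the kind of per-entity drag handling I mean, sketched the usual non-ECS way (the component and view names are placeholders):

    import SwiftUI
    import RealityKit

    // Placeholder component marking entities I want to drag.
    // (Registered once at launch with DraggableComponent.registerComponent().)
    struct DraggableComponent: Component {}

    struct ImmersiveView: View {
        var body: some View {
            RealityView { content in
                // ... add entities here; draggable ones need DraggableComponent,
                // plus InputTargetComponent and CollisionComponent to receive input.
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        guard value.entity.components.has(DraggableComponent.self),
                              let parent = value.entity.parent else { return }
                        // Move the entity to follow the drag location.
                        value.entity.position = value.convert(value.location3D,
                                                              from: .local,
                                                              to: parent)
                    }
            )
        }
    }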
Regards
Tof
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
Hi!
I'm currently trying to render another XR scene in front of a RealityKit one.
Actually, I'm anchoring a plane to the head with a shader to display side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't directly draw at z=0.
Is there a way to change the camera's near plane? Or maybe there is a better solution to overlay an image/texture for the left/right eyes?
Ideally, I would layer some kind of CompositorLayer on RealityKit, but that's sadly not possible from what I know.
Thanks in advance and have a good day!
Hi !
I'm new on this forum, so if I need to update this post to have more info, or anything else, please let me know.
I'm using the Apple Vision Pro to develop an app (with Unity). To demonstrate what the user sees in the headset, I would like to mirror the view on a device (an iPad in this case). I managed to do this without any issue.
My problem is that, in the Vision Pro, I have an interface that the user can interact with. But I would like to manage the interface myself from the iPad. What I mean is that the user can (or can't, it doesn't matter) see the interface in the headset, and the interface is controlled by me on the iPad.
Is there any way to do this? Is this a question I should ask on Unity's forum? (I don't think so, because it should be related to the mirroring function, no?)
Topic: Spatial Computing, SubTopic: General
Hi, I would love your help with this.
I'm trying to get the position in space of two QR codes so I can align content to their positions in space. The detection shows that the QR codes' position is always (0, 0, 0), and I don't understand why. Here's my code:
    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct AnchorView: View {
        @ObservedObject var qrCoordinator: QRCoordinator
        @ObservedObject var coordinator: ImmersiveCoordinator
        let qrName: String
        @Binding var startQRDetection: Bool

        @State private var anchor: AnchorEntity? = nil
        @State private var detectionTask: Task<Void, Never>? = nil

        var body: some View {
            RealityView { content in
                // Add the QR anchor once (must exist before detection starts)
                if anchor == nil {
                    let imageAnchor = AnchorEntity(.image(group: "QRs", name: qrName))
                    content.add(imageAnchor)
                    anchor = imageAnchor
                    print("📌 Created anchor for \(qrName)")
                }
            }
            .onChange(of: startQRDetection) { enabled in
                if enabled {
                    startDetection()
                } else {
                    stopDetection()
                }
            }
            .onDisappear {
                stopDetection()
            }
        }

        private func startDetection() {
            guard detectionTask == nil, let anchor = anchor else { return }
            detectionTask = Task {
                var detected = false
                while !Task.isCancelled && !detected {
                    print("🔎 Checking \(qrName)... isAnchored=\(anchor.isAnchored)")
                    if anchor.isAnchored {
                        // wait a short moment to let transform update
                        try? await Task.sleep(nanoseconds: 100_000_000)
                        let worldPos = anchor.position(relativeTo: nil)
                        if worldPos != .zero {
                            // relative to modelRootEntity if available
                            var posToSave = worldPos
                            if let modelEntity = coordinator.modelRootEntity {
                                posToSave = anchor.position(relativeTo: modelEntity)
                                print("converted to model position")
                            } else {
                                print("⚠️ modelRootEntity not available, using world position")
                            }
                            print("✅ \(qrName) detected at position: world=\(worldPos) saved=\(posToSave)")
                            if qrName == "reanchor1" {
                                qrCoordinator.qr1Position = posToSave
                                let marker = createMarker(color: [0, 1, 0])
                                marker.position = .zero // sits directly on QR
                                marker.position = SIMD3<Float>(0, 0.02, 0)
                                anchor.addChild(marker)
                                print("marker1 added")
                            } else if qrName == "reanchor2" {
                                qrCoordinator.qr2Position = posToSave
                                let marker = createMarker(color: [0, 0, 1])
                                marker.position = posToSave // sits directly on QR
                                marker.position = SIMD3<Float>(0, 0.02, 0)
                                anchor.addChild(marker)
                                print("marker2 added")
                            }
                            detected = true
                        } else {
                            print("⚠️ \(qrName) anchored but still at origin, retrying...")
                        }
                    }
                    try? await Task.sleep(nanoseconds: 500_000_000) // throttle loop
                }
                print("🛑 QR detection loop ended for \(qrName)")
                detectionTask = nil
            }
        }

        private func stopDetection() {
            detectionTask?.cancel()
            detectionTask = nil
        }

        private func createMarker(color: SIMD3<Float>) -> ModelEntity {
            let sphere = MeshResource.generateSphere(radius: 0.05)
            let material = SimpleMaterial(
                color: UIColor(
                    red: CGFloat(color.x),
                    green: CGFloat(color.y),
                    blue: CGFloat(color.z),
                    alpha: 1.0
                ),
                isMetallic: false
            )
            let marker = ModelEntity(mesh: sphere, materials: [material])
            marker.name = "marker"
            return marker
        }
    }
Topic: Spatial Computing, SubTopic: General