Are there any changes to RotationSystem: System and RotationComponent: Component that I should be aware of to see if I need to update my use in my visionOS app?
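For context, a custom component/system pair in RealityKit generally looks like the sketch below. The names mirror the ones in the question, but the body is a hypothetical illustration, not Apple's sample implementation; the things worth re-checking across SDK updates are that the component is still registered and that the system still queries entities the same way.

import RealityKit

// Hypothetical sketch of a rotation component/system pair; not Apple's sample code.
struct RotationComponent: Component {
    var speed: Float = 1.0            // radians per second
    var axis: SIMD3<Float> = [0, 1, 0]
}

struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RotationComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let rotation = entity.components[RotationComponent.self] else { continue }
            let angle = rotation.speed * Float(context.deltaTime)
            entity.transform.rotation *= simd_quatf(angle: angle, axis: rotation.axis)
        }
    }
}

// Registration, typically done once at app launch:
// RotationComponent.registerComponent()
// RotationSystem.registerSystem()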
Can an app made with the RoomPlan API be used on iPhones without LiDAR? If so, how much accuracy would be lost compared to iPhones with LiDAR?
If not, is there an API similar to RoomPlan that works on iPhones without LiDAR?
Hi there,
I was looking to add a particle emitter to the augmented reality app I'm developing using RealityKit, targeting iOS. The documentation for ParticleEmitterComponent says it supports iOS 18.0+, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
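For reference, a minimal sketch of how ParticleEmitterComponent is typically attached to an entity, gated on iOS 18 availability; the property values are assumptions for illustration, not taken from this project:

import RealityKit

// Minimal sketch: attach a particle emitter to an entity, gated on iOS 18.
// Requires building against the iOS 18 SDK (Xcode 16); on older SDKs the type is unavailable at compile time.
func addParticles(to entity: Entity) {
    if #available(iOS 18.0, visionOS 1.0, *) {
        var emitter = ParticleEmitterComponent()
        emitter.emitterShape = .sphere
        emitter.mainEmitter.birthRate = 500
        entity.components.set(emitter)
    }
}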
Is there a suitable UTType to restrict UIDocumentPickerViewController to picking only spatial videos?
I already know that PHPickerFilter in PHPickerViewController can do this, but not in UIDocumentPickerViewController.
Our app needs to support both of these ways of picking spatial videos.
So is there anything I can try in UIDocumentPickerViewController to achieve such picker functionality?
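One possible workaround, sketched under the assumption that no dedicated spatial-video UTType exists: accept plain movies in the document picker, then check each picked asset for a stereo multiview (MV-HEVC) video track after the fact.

import UIKit
import UniformTypeIdentifiers
import AVFoundation

// Sketch: pick ordinary movies, then filter afterwards by checking for a stereo multiview video track.
func makeMoviePicker() -> UIDocumentPickerViewController {
    UIDocumentPickerViewController(forOpeningContentTypes: [UTType.movie])
}

func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let videoTracks = try await asset.loadTracks(withMediaType: .video)
    for track in videoTracks {
        // containsStereoMultiviewVideo marks MV-HEVC (stereo/spatial) video tracks.
        if track.hasMediaCharacteristic(.containsStereoMultiviewVideo) {
            return true
        }
    }
    return false
}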
Topic: Spatial Computing · SubTopic: General · Tags: Files and Storage, Photos and Imaging, PhotoKit, visionOS
Prefetching logic for UICollectionView on visionOS does not work.
I have set up a standalone test repo to demonstrate this issue. The repo is basically a visionOS version of Apple's guide project on implementing prefetching logic.
In the repo you will see a simple ViewController that has a UICollectionView, wrapped inside a UIViewControllerRepresentable.
On scroll, it should print 🕊️ prefetch start to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, it never happens on visionOS devices.
With the same code, it behaves correctly on iOS devices.
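For reference, a minimal sketch of the prefetching setup being tested; this is not the exact code from the linked repo, just the pattern it relies on:

import SwiftUI
import UIKit

// Minimal sketch of the prefetching setup under test.
final class PrefetchTestViewController: UIViewController,
    UICollectionViewDataSource, UICollectionViewDataSourcePrefetching {

    private var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let layout = UICollectionViewFlowLayout()
        layout.itemSize = CGSize(width: 200, height: 200)
        collectionView = UICollectionView(frame: view.bounds, collectionViewLayout: layout)
        collectionView.dataSource = self
        collectionView.prefetchDataSource = self          // opts in to prefetching callbacks
        collectionView.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { 1000 }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    // Expected to fire while scrolling; reportedly never called on visionOS.
    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start", indexPaths)
    }
}

// Wrapped for use inside a SwiftUI scene on visionOS.
struct PrefetchTestView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> PrefetchTestViewController { PrefetchTestViewController() }
    func updateUIViewController(_ vc: PrefetchTestViewController, context: Context) {}
}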
Topic: Spatial Computing · SubTopic: General · Tags: SwiftUI, UIKit, visionOS, iPad and iOS apps on visionOS
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting from immersive space. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but neither worked as expected.
Couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
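One approach that may be worth sketching: keep the listing state in an app-level observable model that outlives the window, so that even if the main window is dismissed and reopened around the immersive space, its views can restore selection and position from the model rather than starting over. A hedged sketch with hypothetical names:

import SwiftUI
import Observation

// Hypothetical sketch: state lives in an app-level model, not in the window's views,
// so dismissing and reopening the main window around the immersive space keeps it intact.
@Observable
final class AppModel {
    var selectedVideoID: String?
    var scrollPositionID: String?
}

@main
struct VideoApp: App {
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "main") {
            MainListingView()
                .environment(model)
        }

        ImmersiveSpace(id: "player") {
            PlayerView()
                .environment(model)
        }
    }
}

struct MainListingView: View {
    @Environment(AppModel.self) private var model
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Play") {
            Task {
                switch await openImmersiveSpace(id: "player") {
                case .opened:
                    dismissWindow(id: "main")   // reopened later; state is restored from the model
                default:
                    break
                }
            }
        }
    }
}

struct PlayerView: View {
    @Environment(AppModel.self) private var model
    var body: some View { Text("Playing \(model.selectedVideoID ?? "nothing")") }
}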
Hello,
I'm currently working on a project that requires real-world object recognition and labeling. I understand that, due to security and privacy restrictions, we are unable to access the Vision Pro camera feed. Is there any other external way to solve this problem?
Thank you!
How can I access the player's camera vector in visionOS, specifically using RealityKit?
In Unity and other game engines, there's often an API like Camera.main.transform.forward for this purpose.
I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit.
Is there a related API for this? Any guidance would be greatly appreciated.
Thanks!
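One possible approach, sketched under the assumption that running ARKit world tracking is acceptable for your use case: query the device anchor from a WorldTrackingProvider and take the negative Z column of its transform as the forward vector.

import ARKit
import RealityKit
import QuartzCore

// Sketch: derive the head/camera forward vector from ARKit's device anchor on visionOS.
// Requires a running ARKitSession with a WorldTrackingProvider.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

func cameraForwardVector() -> SIMD3<Float>? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = deviceAnchor.originFromAnchorTransform
    // Column 2 is the anchor's +Z axis; the device looks down -Z,
    // analogous to Camera.main.transform.forward in Unity.
    let forward = -SIMD3<Float>(transform.columns.2.x, transform.columns.2.y, transform.columns.2.z)
    return simd_normalize(forward)
}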
I saw this magical sand table; its unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I want to know whether there is any way to achieve this effect, and I would appreciate some ideas. Can this effect be achieved with the existing APIs?
Topic: Spatial Computing · SubTopic: General
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs.
For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0 I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviour in my app, even though the same photos are displayed correctly in the system Photos app:
Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
Even when the image pair has the same size, viewing it in my app shows artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application can now display spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
// the data is from photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCGImage = extractSpatialImage(at: 0),
              let rightCGImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCGImage)
        let rightImage = UIImage(ciImage: rightCGImage)
        guard leftImage.size == rightImage.size else {
            return nil
        }

        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
Hello everyone,
I'm currently working through this example project Exploring object tracking with ARKit to learn how to use Object Tracking with visionOS.
I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track one instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of one object?
For example, I have a USDZ model of a box and trained that box for object tracking; I'm able to overlay a model of a chair on that box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that when I wear the Vision Pro, I can see the chairs arranged however I want.
I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by just training one object and having copies of it.
If it helps, this is my current modification to overlay a virtual object on top of a detected object.
func loadReferenceObjects() async -> [ReferenceObject] {
    // Get a list of all reference object files in the app's main bundle and attempt to load each.
    var referenceObjectFiles: [String] = []
    if let resourcesPath = Bundle.main.resourcePath {
        print("resource path: \(resourcesPath)")
        try? referenceObjectFiles = FileManager.default.contentsOfDirectory(atPath: resourcesPath).filter { $0.hasSuffix(".referenceobject") }
    }

    await withTaskGroup(of: Void.self) { group in
        for file in referenceObjectFiles {
            let objectURL = Bundle.main.bundleURL.appending(path: file)
            group.addTask {
                // get the file name
                let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                // load each ref objs as task
                await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
            }
        }
    }
    return self.referenceObjects
}

// Private helper method to load a single object
// and assign entity to it.
private func loadSingleReferenceObject(url: URL, fileName: String) async {
    var referenceObject: ReferenceObject
    do {
        print("Loading reference object from \(url)")
        // Load the file as a `ReferenceObject` - this can take a while for larger objects.
        try await referenceObject = ReferenceObject(from: url)
    } catch {
        fatalError("Failed to load reference object with error \(error)")
    }
    // add the ref obj to the ref objs array
    self.referenceObjects.append(referenceObject)

    // entity with each file
    var model: Entity = Entity()
    // add entity according to the file name
    switch fileName {
    // Box1 ref obj binds with Chair1
    case "Box1":
        // try to load the model
        do {
            try await model = Entity(named: "chair1", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair1")
        }
    // Box2 ref obj binds with Chair2
    case "Box2":
        // try to load the model
        do {
            try await model = Entity(named: "chair2", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair2")
        }
    default:
        print("no model associated with this file name: \(fileName)")
    }

    // map entity to ref objs
    usdzPerReferenceObject[referenceObject.id] = model
}
Any help or suggestions would be greatly appreciated.
Thank you.
I'm developing a visionOS app and I'm trying to load a ModelEntity from a USDZ file which is inside my custom RealityKit package called R2UVisionOficial, but it keeps giving me a resourceNotFound error.
import RealityKit
import R2UVisionOficial
import ARKit
/* more code */
do {
    let newEntity: Entity
    //...
    // Loads entity from USDZ inside package
    newEntity = try await ModelEntity(named: "Salas", in: r2UVisionOficialBundle)
    //...
    return newEntity
} catch {
    print("wtManager >>> **** FAILED to load entity:", error.localizedDescription)
    throw error
}
I'm sure I have the Salas.usdz file in the root folder of my package and that I'm using the correct paths. However, I keep getting the error:
Failed to find resource with name "Salas" in bundle
It's funny because when I try to load USDA scenes from the same package, it works fine. So I guess it has something to do with ModelEntity or USDZ files.
Can you please help me?
P.S. This issue is similar to https://developer.apple.com/forums/thread/746842?answerId=780415022#780415022
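A possible workaround to sketch, assuming the .usdz is declared as a plain bundle resource of the package target: resolve its URL from the package bundle and load it with Entity(contentsOf:) instead of the named initializer.

import Foundation
import RealityKit
import R2UVisionOficial

// Sketch of a possible workaround: load the .usdz by URL from the package bundle instead of by name.
func loadSalas() async throws -> Entity {
    guard let url = r2UVisionOficialBundle.url(forResource: "Salas", withExtension: "usdz") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try await Entity(contentsOf: url)
}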
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel with cards, and when I use the drag gesture and drag to the left I want the carousel to rotate clockwise, and when I drag to the right I want it to rotate counter-clockwise. My problem is that when I rotate my body more than 90 degrees to the left or to the right, the drag gesture changes its value and the carousel rotates in the opposite direction. How can I maintain the correct rotation direction, given that the user can rotate their body?
I've tried reading the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is a small range between roughly 70 and 110 degrees where the carousel still rotates in the opposite direction. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
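One idea worth sketching, under the assumption that the carousel entities receive the gesture (they need InputTargetComponent and CollisionComponent): convert the drag location into the carousel's parent space with the targeted gesture value's convert API, so the translation is interpreted in world space rather than relative to where the user is facing. The names here (carouselRotation, lastLocation) are hypothetical.

import SwiftUI
import RealityKit

// Sketch: interpret the drag in the carousel's parent space instead of the gesture's
// local space, so the rotation direction doesn't flip when the user turns around.
struct CarouselGestureView: View {
    @State private var carouselRotation: Float = 0
    @State private var lastLocation: SIMD3<Float>? = nil

    var body: some View {
        RealityView { content in
            // ... build the carousel entities here (with InputTargetComponent + CollisionComponent) ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the gesture location into the parent entity's (world-aligned) space.
                    let location = value.convert(value.location3D, from: .local, to: parent)
                    if let last = lastLocation {
                        // Use the world-space X delta to drive the rotation direction.
                        carouselRotation += (location.x - last.x) * 0.5
                    }
                    lastLocation = location
                }
                .onEnded { _ in lastLocation = nil }
        )
    }
}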
Is it possible to create a button in my app that will turn on the spatial personas for the user? Currently the only way I know of turning on spatial personas is by selecting the cube icon in the FaceTime window which is quite clunky for people unfamiliar with the Vision Pro's UI. Any help would be appreciated.
There is flickering on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs both when loading the scenes directly from the RealityKitContent package and when loading them from memory (pre-loaded assets).
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all swift files with those you will find in the attached texts.
In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
Launch the app in debug mode via Xcode onto the AVP device or the simulator.
Continuously switch immersive spaces by pressing the Show RedSpheres and Show YellowSpheres buttons.
Observe that the 3D assets flicker when the immersive spaces open.
AppModel
RedSpheresImmersiveView
YellowSpheresImmersiveView
TestFlickeringBetweenImmersiveSpacesApp
Hi!
I'm creating an app like this:
Using image tracking to set a world anchor in the real world first.
The timeline in the Reality Composer Pro scene needs to be played at the same time (for the people in the same place using the app).
People using the app will see the same content in the same position, at the same time, in the same place.
I already made the image tracking feature work. But the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project.
How do I solve this problem technically?
If you have ideas, please let me know. I really need help for this.
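For what it's worth, a minimal sketch of the Group Activities side (all names are hypothetical): define a GroupActivity for the shared session and use a GroupSessionMessenger to broadcast a "start timeline" message, so every participant starts playback at the same moment.

import Foundation
import GroupActivities

// Hypothetical sketch of synchronizing timeline playback with Group Activities.
struct SharedSceneActivity: GroupActivity {
    static let activityIdentifier = "com.example.shared-scene"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

// Message broadcast to all participants when the timeline should start.
struct StartTimelineMessage: Codable, Sendable {
    let startDate: Date
}

func configure(session: GroupSession<SharedSceneActivity>) {
    let messenger = GroupSessionMessenger(session: session)

    // Receive start messages from other participants.
    Task {
        for await (message, _) in messenger.messages(of: StartTimelineMessage.self) {
            // Start the Reality Composer Pro timeline at message.startDate here.
            print("Start timeline at \(message.startDate)")
        }
    }

    session.join()

    // Send a start message (e.g. when the host taps Play).
    Task {
        try? await messenger.send(StartTimelineMessage(startDate: .now))
    }
}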
Hi!
I want to know whether it's possible to mirror a Vision Pro to other Vision Pros.
If it's possible, how do I work on it? Can I get some hints?
I have a visionOS 2 project created in Xcode 16. After I updated to Xcode 26 beta 5, I can't build it any more; every time it gets stuck in a process like the picture below shows:
I've already tried many methods to fix this issue, such as clearing the build folder, but they don't work.
MacBook Air M2 / macOS 26 beta 5 / Xcode 26 beta 5
This modifier in visionOS 2.5 works perfectly with a LazyVGrid inside a Stack in a ScrollView:
.hoverEffect { effect, isActive, _ in
    effect.scaleEffect(isActive ? 1.1 : 1.0)
}
But the grid does not scroll in visionOS 26 beta 1 unless the scaleEffect is commented out.
FB17941468