The issue is reproducible with an empty project. When you run it and tap "Open immersive space", it takes a couple of minutes to respond. The issue is only reproducible on a real device with the debugger attached. Other developers can reproduce it too (it is not specific to my environment). The issue doesn't exist in Xcode 16.
After the initial long delay, subsequent opens work fine.
Console logs:
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I tried to display stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there an approach to achieve it, or example code I could use in my own project?
Thanks!
Hi everyone,
I'm creating an educational app that allows doing computational design in an immersive environment on the Vision Pro. The app is free and can be found here:
https://apps.apple.com/us/app/arcade-topology/id6742103633
The problem I have is that the voxel mesh I currently create uses ModelEntity, and I recently read that this is terrible for scalability. I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop.
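One possible direction (a sketch, not a definitive recommendation): instead of one ModelEntity per voxel, merge all voxels into a single MeshResource built from a MeshDescriptor so RealityKit draws one entity. The cube construction below is illustrative and skips normals; per-voxel opacity would then likely need vertex attributes with a custom material, or the LowLevelMesh API, rather than per-entity materials.

import RealityKit

// Build one merged mesh from many voxel cubes (sketch; no normals or per-voxel color).
func makeVoxelEntity(voxelCenters: [SIMD3<Float>], size: Float) -> ModelEntity? {
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []

    let corners: [SIMD3<Float>] = [
        [-0.5, -0.5, -0.5], [0.5, -0.5, -0.5], [0.5, 0.5, -0.5], [-0.5, 0.5, -0.5],
        [-0.5, -0.5,  0.5], [0.5, -0.5,  0.5], [0.5, 0.5,  0.5], [-0.5, 0.5,  0.5]
    ]
    // 12 triangles per cube, indexing the 8 corners above.
    let cubeIndices: [UInt32] = [
        0, 1, 2, 0, 2, 3,  4, 6, 5, 4, 7, 6,
        0, 4, 5, 0, 5, 1,  1, 5, 6, 1, 6, 2,
        2, 6, 7, 2, 7, 3,  3, 7, 4, 3, 4, 0
    ]

    for (i, center) in voxelCenters.enumerated() {
        let base = UInt32(i * corners.count)
        positions.append(contentsOf: corners.map { center + $0 * size })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)

    guard let mesh = try? MeshResource.generate(from: [descriptor]) else { return nil }
    return ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
}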
Thanks!
—Alejandro
Hi,
On visionOS, we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter, we can even restrict rotation to the x, y, or z axis, or to combinations of axes.
What I want to know is whether it is possible to constrain the rotation axis automatically.
Let me explain: the functionality I would like to implement is to constrain the rotation to an axis only once the user has started their gesture. The initial motion the user makes should tell us which axis they want to rotate on.
This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis:
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
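Not an official answer, but here is one sketch of the idea, assuming you start with an unconstrained RotateGesture3D, watch the first few degrees of the gesture to pick the dominant axis, and then re-apply the rotation around that axis only (the entity setup is illustrative):

import SwiftUI
import RealityKit
import Spatial

struct AutoAxisRotateView: View {
    @State private var lockedAxis: SIMD3<Float>?
    @State private var startOrientation: simd_quatf?

    var body: some View {
        RealityView { content in
            // Illustrative content: a box the user can rotate.
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            RotateGesture3D() // unconstrained on purpose
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if startOrientation == nil { startOrientation = entity.orientation }

                    let angle = Float(value.rotation.angle.radians)
                    let axis = value.rotation.axis

                    // Once the gesture is past a small threshold, lock onto its dominant axis.
                    if lockedAxis == nil, abs(angle) > 0.05 {
                        let candidates: [(SIMD3<Float>, Double)] = [
                            ([1, 0, 0], abs(axis.x)), ([0, 1, 0], abs(axis.y)), ([0, 0, 1], abs(axis.z))
                        ]
                        lockedAxis = candidates.max { $0.1 < $1.1 }?.0
                    }

                    guard let locked = lockedAxis, let start = startOrientation else { return }
                    // Re-apply the gesture's angle, but only around the locked axis.
                    let axisVec = SIMD3<Float>(Float(axis.x), Float(axis.y), Float(axis.z))
                    let sign: Float = (axisVec * locked).sum() < 0 ? -1 : 1
                    entity.orientation = start * simd_quatf(angle: sign * angle, axis: locked)
                }
                .onEnded { _ in
                    lockedAxis = nil
                    startOrientation = nil
                }
        )
    }
}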
Regards
Tof
We were having an issue where the system rotate and scale gestures (two-handed gestures: RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them on the trackpad again.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue was also occurring in Apple's own gesture sample project, "Transforming RealityKit entities using gestures", at the link below.
Apple's article "Interacting with your app in the visionOS simulator", at the link below, states for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to reflect what I described above, or there is an issue. Our colleague, who is using macOS Sonoma 14.6.1 with the latest release of Xcode, is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.0
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
macOS Sequoia 15.1 Beta, Xcode 16.0 with visionOS 2.0
completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (16.1)
Throughout this troubleshooting I often:
restarted both xcode and sim
erased all derived data
erased all contents and settings from sims
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
Since only the user can take a screenshot using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is to:
ask the user to take a screenshot
wait until the user taps a button indicating the screenshot has been taken
then ask the user to select that screenshot when the app opens the PhotosPicker (a sketch of this step follows below)
when the user presses Done, the screenshot is handed off to the app.
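A minimal sketch of the picker step, assuming the SwiftUI PhotosPicker with the .screenshots filter (the type and property names here are illustrative):

import SwiftUI
import PhotosUI

struct ScreenshotImportView: View {
    @State private var selection: PhotosPickerItem?
    @State private var screenshotData: Data?

    var body: some View {
        // Shown after the user confirms they have captured a screenshot.
        PhotosPicker("Select the screenshot", selection: $selection, matching: .screenshots)
            .onChange(of: selection) { _, item in
                Task {
                    // Hand the selected screenshot off to the app as raw data.
                    screenshotData = try? await item?.loadTransferable(type: Data.self)
                }
            }
    }
}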
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
When called, the Apple API captures the screenshot in Apple-secured memory.
The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to:
a. share this screenshot with the app, or
b. cancel, or
c. retake the screenshot.
If the user approves, the app receives the screenshot.
I have a Mac mini M4 with 16 GB of memory running Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I hook up the USB cable to the battery, it only shows a Battery device ID.
If I understand correctly, a new Enterprise API was introduced in visionOS 26 that allows fixing windows to the user's frame of reference, implementing something like a head-up display, with the window tracking the user's movements.
Is this API only available to enterprise applications, and if so is there a plan to make it available for every kind of app?
Hi Nathaniel,
I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc listing the key metrics I'd find in a RealityKit trace, so I know whether a metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app.
Thanks,
Bob
Hello,
For the GuessTogether source code, it seems the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If you're not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing after throwing warnings. Is this intended behavior?
If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed?
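For reference, a minimal sketch of the general GroupActivities activation flow, with a hypothetical activity type standing in for the sample's real one (this shows the standard API pattern, not GuessTogether's actual code, and isn't a confirmation that activation can start a call by itself):

import GroupActivities

// Hypothetical activity type standing in for the sample's real one.
struct GuessTogetherActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Guess Together"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = GuessTogetherActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // The system is willing to start or join a session; activate the activity.
        _ = try? await activity.activate()
    case .activationDisabled:
        // No session can be started from the current context (e.g., no FaceTime call).
        break
    case .cancelled:
        break
    @unknown default:
        break
    }
}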
Thank you!
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh);
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
The following API only provides rotation and translation support, and I cannot find any scale support.
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
I can put the ShapeResource on an entity and scale the entity. But I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
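For what it's worth, one workaround sketch, assuming it's acceptable to rebuild the shape at the target size rather than scale the existing one:

// Rebuild the shape at the target scale instead of scaling the existing ShapeResource.
// Note: this approximates the convex hull with a box, and scales about the origin.
let scale: SIMD3<Float> = [0.5, 0.5, 0.5]
var scaledShape = ShapeResource.generateBox(size: bounds.extents * scale)
scaledShape = scaledShape.offsetBy(translation: bounds.center * scale)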
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:
Step 1: Enter the immersive environment and start and stop video capture (multiple times, with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Subsequently return to the immersive space
Step 4: Attempt to start the video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow.
This is the Xcode error that I see " [ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}"
I have tried all possible ways to call stopCapture, including onDisappear and other methods, and nothing seems to solve this.
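In case it helps narrow things down, here is a defensive pattern sketch (not a confirmed fix, and the names are illustrative): stop any in-flight capture when the scene leaves the active phase, before the immersive space is torn down.

import SwiftUI
import RealityKit
import ReplayKit

struct ImmersiveCaptureView: View {
    @Environment(\.scenePhase) private var scenePhase
    private let recorder = RPScreenRecorder.shared()

    var body: some View {
        RealityView { _ in
            // Immersive content goes here.
        }
        .onChange(of: scenePhase) { _, newPhase in
            // Stop any in-flight capture before the immersive space goes away.
            if newPhase != .active, recorder.isRecording {
                recorder.stopCapture { error in
                    if let error { print("stopCapture failed: \(error)") }
                }
            }
        }
    }

    func startCapture() {
        guard recorder.isAvailable, !recorder.isRecording else { return }
        recorder.startCapture(handler: { sampleBuffer, bufferType, error in
            // Process raw video buffers here.
        }, completionHandler: { error in
            if let error { print("startCapture failed: \(error)") }
        })
    }
}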
Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet?
Curious if anyone else has had this problem. Running Xcode 15.4, and my Swift version is
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
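For what it's worth, a workaround sketch for SDKs where Entity.ComponentSet doesn't conform to Sequence: query the specific component types you care about instead of iterating the set (the list of types below is illustrative). The Sequence-style iteration in the documentation may reflect a newer SDK.

if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    // Query specific component types instead of iterating the whole set.
    if let model = appleEntity.components[ModelComponent.self] {
        print(model)
    }
    if let transform = appleEntity.components[Transform.self] {
        print(transform)
    }
    if let collision = appleEntity.components[CollisionComponent.self] {
        print(collision)
    }
}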
Hello,
Thank you for your time. I have a question regarding visionOS app development.
When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area.
However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments.
We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance.
Best regards,
Sadao Tokuyama
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
Hello,
A lot of the RealityKit APIs (e.g., LowLevelMesh, LowLevelTexture, etc.) are marked with @MainActor, so they need to be accessed on the main thread.
This creates issues when we need to perform expensive GPU-related operations, since those now have to run on the main thread, which results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread.
Any advice on how to achieve this using RealityKit?
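A general Swift-concurrency sketch of one way to structure this (not RealityKit-specific guidance): keep the expensive CPU-side work off the main actor and hop back only for the RealityKit mutation itself. generateGeometry() below is a hypothetical stand-in for the streaming/decoding work.

import RealityKit

// Hypothetical stand-in for expensive CPU-side work (decoding, geometry generation, ...).
func generateGeometry() -> [SIMD3<Float>] {
    (0..<300).map { _ in SIMD3<Float>(.random(in: -1...1), .random(in: -1...1), .random(in: -1...1)) }
}

@MainActor
func updateStreamedMesh(on entity: ModelEntity) async {
    // The heavy work runs off the main actor...
    let positions = await Task.detached(priority: .userInitiated) {
        generateGeometry()
    }.value

    // ...and only the comparatively cheap RealityKit calls stay on the main actor.
    var descriptor = MeshDescriptor(name: "streamed")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(Array(0..<UInt32(positions.count)))
    if let mesh = try? MeshResource.generate(from: [descriptor]) {
        entity.model?.mesh = mesh
    }
}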
Thank you.
Hi, I am trying to implement something simple: letting people share their spatial photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me.
Briefly speaking, I am using CGImageSource to extract a paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below
I can fetch the left and right images from a native spatial photo (taken by Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched around and someone said the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the imageSource.
The full code is below; the image-pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let pair = stereoImagePair(from: imageSource)
        completion(pair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))

    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
Any suggestions? Thanks!
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
I’ve submitted the following feedback:
FB13820942 (List Outline View Not Using Accent Color on Disclosure Caret for visionOS)
I’d appreciate help on this, to see whether I’m doing something wrong or whether this is indeed the way visionOS currently works and the feedback stands as a suggestion.
Hello,
I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project:
Should I build the entire experience in Unity (both Windows and Volumes)?
Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode?
Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode?
If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode?
What has worked best for you? Please let me know if you have any recommendations. Thanks!