Hi,
Following the WWDC24 video What's new in Quick Look for visionOS, I've managed to open a 3D model using PreviewApplication by calling
let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])
However, the "Save to Downloads" option is always there (see attached screenshot). The models are user-generated content, and I don't want the download option to be available to all users. How can I disable it?
Copy of Feedback:
FB15969432
Improve window management with immersive spaces.
It is hard to manage windows from code when entering an immersive space.
Look for instance at the sample:
https://developer.apple.com/documentation/visionos/displaying-a-3d-environment-through-a-portal
The window displayed before entering the immersive space stays there once the space is entered: this window is too big, but it can't be resized programmatically.
One could say this big window could be closed and a smaller window with the "Exit" button opened by the program, but then this small window should be closed and the main window reopened when leaving the immersive space.
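For reference, the kind of manual window juggling described here looks roughly like this (a sketch only; "Portal", "Main" and "Exit" are placeholder identifiers):
import SwiftUI

struct EnterImmersiveButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter") {
            Task {
                // Open the immersive space, then swap the large main window
                // for a small window hosting the "Exit" button.
                switch await openImmersiveSpace(id: "Portal") {
                case .opened:
                    openWindow(id: "Exit")
                    dismissWindow(id: "Main")
                default:
                    break
                }
            }
        }
    }
}
The reverse juggling (dismiss "Exit", reopen the main window, dismiss the immersive space) has to be hand-written as well, which is exactly the fragility described here.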
In the immersive space, closing the "Exit" window with the X does not let you leave the immersive space.
If the Digital Crown is then used, we go back to the Vision Pro main menu. If the app is chosen again, we can see that it wasn't closed: the "Exit" window is displayed, but we are not in the immersive space!
Don't say "this is just a sample app", because all developers face those issues.
Please try to find the right solutions with your team to enhance this sample and share the right way to solve those issues. You could find that the specifications need to be enhanced.
You can also see that there is no way for an app to exit itself from code, even though this could be useful in some cases (when ending a SharePlay game, for instance).
Thank you very much for your time and consideration.
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps
Open app
Open window via the push action
Press the digital crown
On the home screen, select the app's icon again
The pushed window will now be dismissed.
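For reference, the window is pushed with the standard environment action, roughly like this (a minimal sketch; "Detail" is a placeholder window identifier):
import SwiftUI

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Open detail") {
            // Pushes the "Detail" window in place of the current one.
            pushWindow(id: "Detail")
        }
    }
}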
There is a sample project linked here that demonstrates the issue, including a video of the bug in progress.
Hello experts, and question seekers,
I have been trying to get Gaussian splats working with RealityKit; however, it has not worked out for me so far.
The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter
My idea was to use the renderers provided by RealityKit (aka RealityRenderer) https://developer.apple.com/documentation/realitykit/realityrenderer and the renderer provided by MetalSplatter (aka. SplatRenderer) https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift
Then, with a custom render pipeline, I would be able to compose the outputs of the two renderers. This would make it possible, for example, to build immersive scenery from realistic environment scans rendered as Gaussian splats, while RealityKit provides the features to build extra scenery around them, e.g. dynamic 3D models placed inside the splats.
However, the problem is that, as of now, I am not able to do that with the current implementation of RealityRenderer.
First, RealityRenderer appears to be an API that only renders colour information onto a texture, which at first glance might be useful, but it misses important outputs such as depth and stencil information.
Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender due to the following error messages:
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37”
I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, enabling me to build a custom rendering pipeline, by utilising some of the Metal developer workflows.
Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/
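In sketch form, that wrapper looks roughly like this (simplified; Renderer stands in for my actual MTKViewDelegate class):
import SwiftUI
import MetalKit

final class Renderer: NSObject, MTKViewDelegate {
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}
    func draw(in view: MTKView) {
        // Custom pipeline work (SplatRenderer + RealityRenderer composition) goes here.
    }
}

struct SplatMetalView: UIViewRepresentable {
    func makeCoordinator() -> Renderer { Renderer() }

    func makeUIView(context: Context) -> MTKView {
        let view = MTKView()
        view.device = MTLCreateSystemDefaultDevice()
        view.colorPixelFormat = .bgra8Unorm
        view.delegate = context.coordinator   // the coordinator keeps the delegate alive
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {}
}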
Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:
guard let currentDrawable = view.currentDrawable else {
    return
}
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(deltaTime: 0.0,
                                     cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
                                     whenScheduled: { realityRenderer in
    print("Rendering scheduled")
}, onComplete: { realityRenderer in
    print("Rendering completed")
})
Can you please tell me what I am doing wrong?
Is there any solution that enables me to use RealityKit with, for example, Gaussian splats?
Any help is greatly appreciated.
All the best,
Ethem Kurt
Hello,
A lot of the RealityKit APIs (e.g. LowLevelMesh, LowLevelTexture, etc.) are marked with MainActor, so they need to be accessed on the main thread.
This creates issues when we need to perform expensive GPU-related operations, since we now have to perform those on the main thread, which results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread.
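What we would like, roughly, is to keep the heavy work off the main actor and only hop onto it for the final RealityKit-facing call, along these lines (a sketch; the helper functions are hypothetical placeholders):
import RealityKit

// Hypothetical placeholders for our own streaming pipeline.
func generateVertexData() async -> [Float] { [] }

@MainActor
func applyUpdate(_ data: [Float]) {
    // e.g. write into a LowLevelMesh / LowLevelTexture here (MainActor-isolated).
}

func streamFrame() {
    Task.detached(priority: .userInitiated) {
        // Heavy preparation work runs off the main actor...
        let data = await generateVertexData()
        // ...and only the RealityKit-facing mutation hops back to the main actor.
        await applyUpdate(data)
    }
}
Even with this split, every per-frame mutation still funnels through the main actor, which is where the bottlenecks and hangs come from.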
Any advice on how to achieve this using RealityKit?
Thank you.
Dear all,
I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is written in Swift. I created an Objective-C wrapper with the following declarations:
...
void _initTTS(int);
...
I create the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:
public static class TTSPluginManager
{
[DllImport("TTS_Vision")]
private static extern void _initTTS(int val);
...
public static void Initialize()
{
#if UNITY_VISIONOS
_initTTS(0);
#else
Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
}
}
I have managed to compile and run the program in the Apple Vision Pro, but I keep on getting the following error:
DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)
If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error:
DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0
I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success.
Any hints or suggestions are much appreciated.
The WWDC25 video and notes titled “Learn About Apple Immersive Video Technologies” introduced the Apple Spatial Audio Format (ASAF) and codec (APAC). However, despite references throughout on using immersive video, there is scant information on ASAF/APAC (including no code examples and no framework references), and I’ve found no documentation in Apple’s APIs/Frameworks about its implementation and use months on.
I want to leverage ambisonic audio in my app. I don’t want to write a custom AU if APAC will be opened up to developers. If you read the notes below along with the iPhone 17 advertising (“Video is captured with Spatial Audio for immersive listening”), it sounds like this is very much a live feature in iOS26.
Anyone know the state of play? I’m across how the PHASE engine works, which is unrelated to what I’m asking about here.
Original quote from video referenced above: “ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It’s composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that’s built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics.”
”ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file.”
”APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata.”
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
Not sure where the error is originating.
In the Discover RealityKit APIs for iOS, macOS, and visionOS presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details on how this works in RealityKit?
Can we access a Vision Pro's spatial Persona in an application's view without using SharePlay or a Group Activity, like any other 3D avatar?
I want to use that Persona in the app without live rendering; I just want to pass some voice commands so that the avatar appears to be speaking.
Hey,
I'm building an interior design app in visionOS 2.0. I'm fetching the planes detected by ARKit and adding them with an OcclusionMaterial to make sure my objects are occluded accordingly. However, I'm facing two problems with this:
The ground shadows are completely disabled as soon as an occlusion material is added, even if I inset the planes doing the occlusion. I've looked into this: https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit) but when I tried to use it, it behaved exactly like OcclusionMaterial.
The planes are also occluding all windows (mine and the system ones), which is a behavior I'd like to avoid. I only want to occlude the entities I added. Is there a way to achieve this?
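For context, each detected plane is currently covered with something along these lines (a simplified sketch; the size and transform actually come from the ARKit plane anchor):
import RealityKit

// A plane-sized entity that hides virtual content behind it.
let occluder = ModelEntity(
    mesh: .generatePlane(width: 1.0, depth: 1.0),   // placeholder size
    materials: [OcclusionMaterial()]
)
// occluder is then positioned and oriented from the plane anchor's transform (not shown).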
Thanks in advance
Hey there,
Just tried the visionOS 2.2 ultra-wide remote desktop feature and it's just a-m-a-z-i-n-g!!
I was curious if there's an API that we can use to set up our windows in a similar fashion (curved + ultra-wide)?
Thank you!
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS.
You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it.
I personally know multiple people who are not buying the headset simply because you locked those features out.
No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
Hello everyone, I'm a Computer Science student. My supervisor has given me some topics for my final year project, and one of them involves using Vision Pro for facial recognition—specifically, identifying a designated face to display specific information.
As a developer, my understanding of Vision Pro is quite limited. I've done some research online and found that Unity and Xcode are used as development tools. Traditionally, facial recognition is done using OpenCV.
However, I've come across articles stating that Apple, due to security reasons, cannot implement facial recognition. I’d like to ask if that’s true. Also, with VisionOS 2 featuring object tracking and image tracking, could these methods potentially replace facial recognition?
Hi,
we've been through the Explore Object Tracking for visionOS session and worked through the sample code ExploringObjectTrackingWithARKit.
What we'd really like to see is Object Tracking for iOS using devices with either LiDAR or the TrueDepth/RGB cameras.
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer.
I was playing around in the Compositor Services demo and modified it to show the following.
I created a grid pattern using normalized device coordinates (-1 to 1) and it looks great when it shows up in the simulator as shown below.
I wanted to see the effects of lens distortion on the image, so I ran this modified app on the actual Vision Pro; it seems that each eye has only a portion of this screen visible. I have included a screen capture of a screen recording taken inside the Vision Pro while running this modified app.
The lines appear straight, which says to me that there must be some automatic pre-distortion correction applied (similar to the image shown below taken from an AVP teardown that I cannot link here).
However, I am wondering why the grid appears cropped, and what defines the bounds of the frame?
Hi,
We have been experimenting with visionOS and we need to query the field of view values for each eye of the device. We are currently using drawable.views[0].tangents and drawable.views[1].tangents respectively, which are labeled as deprecated. We wonder whether there is an alternative function for obtaining the FOVs, since we had no luck calculating them from the projection matrices we obtain from the drawables.
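For reference, this is the kind of recovery we expected to be possible from the projection matrix (a sketch only, assuming a standard Metal-style perspective projection in column-major simd layout, indexed as projection[column][row]):
import Foundation
import simd

func fieldOfView(from projection: simd_float4x4) -> (left: Float, right: Float, top: Float, bottom: Float) {
    // For a perspective projection, the frustum tangents can be read back from
    // the matrix entries; column 2 carries the left/right and top/bottom asymmetry.
    let tanRight  = (1 + projection[2][0]) / projection[0][0]
    let tanLeft   = (1 - projection[2][0]) / projection[0][0]
    let tanTop    = (1 + projection[2][1]) / projection[1][1]
    let tanBottom = (1 - projection[2][1]) / projection[1][1]
    return (atan(tanLeft), atan(tanRight), atan(tanTop), atan(tanBottom))
}
Even so, we would prefer an officially supported replacement for the deprecated tangents.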
Thanks
This is related to the WWDC presentation What's new in Metal rendering for immersive apps.
Specifically, the macOS spatial streaming to visionOS feature. For reference: the page in the docs.
The presentation demonstrates it using a full immersive space and Metal rendering using compositor services.
I'd like clarity on a few things:
Is the remote device wireless, or must the visionOS device be connected via a wired connection?
Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
Can I also use mixed mode with passthrough enabled, instead of just a fully-immersive mode?
Can I use RealityKit instead of Metal? If so, could someone point me to an example?
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I tried to display the stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there any approach to achieve it, or example code I can use in my own project?
Thanks!
Is there any way to detect if an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent, which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
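For reference, the hover effect is just a component you attach (a minimal sketch below); the highlight is rendered by the system without exposing any gaze data to the app:
import RealityKit

let entity = ModelEntity(mesh: .generateSphere(radius: 0.1))
entity.components.set(InputTargetComponent())                                        // hover effects need an input target...
entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))    // ...and a collision shape
entity.components.set(HoverEffectComponent())                                        // system-rendered gaze highlight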