A popover presented from a view that is used as an attachment displays properly in preview mode in the canvas, but not at runtime. I was wondering whether this is supported at all.
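For reference, the setup is roughly like this (a minimal sketch; the view and identifier names are placeholders, not my actual code):

import SwiftUI
import RealityKit

struct ImmersiveViewWithAttachment: View {
    @State private var showPopover = false

    var body: some View {
        RealityView { content, attachments in
            // Place the attachment panel in front of the user.
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 1.5, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Button("Show Popover") { showPopover = true }
                    .popover(isPresented: $showPopover) {
                        Text("Popover content")
                            .padding()
                    }
            }
        }
    }
}

The popover shows up as expected in the canvas preview, but nothing is presented when the same view runs on device or in the simulator.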
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
This restriction means I cannot use Metal to create images and simultaneously use Swift to add UI controls or RealityKit content (without using a window) in immersive mode.
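For reference, this is the kind of capability check I assume is needed before creating the texture the compute function writes to (a rough sketch; the function and format choices are placeholders, not the sample's actual code):

import Metal

// Assumption: pick a pixel format based on the device's read-write texture tier.
// Tier 1 only guarantees read-write access for r32Float, r32Uint, and r32Sint;
// Tier 2 adds formats such as rgba16Float and rgba8Unorm.
func makeReadWriteTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let pixelFormat: MTLPixelFormat =
        device.readWriteTextureSupport == .tier2 ? .rgba16Float : .r32Float

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: pixelFormat,
        width: width,
        height: height,
        mipmapped: false
    )
    descriptor.usage = [.shaderRead, .shaderWrite]
    return device.makeTexture(descriptor: descriptor)
}

The simulator may report a lower read-write tier than the real hardware, which could explain why the assertion fires there.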
There is flickering and slight dimming occurring specifically on the skysphere, at the initial load of the scene, when using an Attachment. This is observed both in the simulator and on the real device.
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using the Xcode template (see image).
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist); see image.
Replace all Swift files with those you will find in the attached text files.
Add the skysphere image asset Skydome_8k found in the Apple sample app Presenting an artist's scene.
Launch the app in debug mode via Xcode on an Apple Vision Pro device or the simulator.
Repeatedly open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
Observe the skysphere flicker and dim each time it is displayed.
The current workaround is commented out in the file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines and the flickering and dimming do not occur.
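For context, the skysphere itself is built roughly like this (a simplified sketch, not the exact attached code):

import RealityKit

// Simplified sketch: an inverted, unlit textured sphere used as the skysphere.
func makeSkysphere() async throws -> Entity {
    let texture = try await TextureResource(named: "Skydome_8k")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 1000),
        materials: [material]
    )
    // Invert the sphere so the texture is visible from the inside.
    sphere.scale *= SIMD3<Float>(x: -1, y: 1, z: 1)
    return sphere
}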
Are we using attachments incorrectly?
Is this behavior known and documented?
Or is this really a bug in visionOS?
AppModel
InitialImmersiveView
MainImmersiveView
TestSkysphereAttachmentFlickerApp
ThreeSixtySkysphereRealityView
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps
Open app
Open a window via the pushWindow action
Press the Digital Crown
On the home screen, select the app's icon again
The pushed window will now be dismissed.
There is a sample project linked here that shows the issue, including a video of the bug in progress.
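The core of the sample is roughly this (a minimal sketch; the window identifiers are placeholders, not the actual sample project):

import SwiftUI

@main
struct PushWindowTestApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }
        WindowGroup(id: "pushed") {
            Text("Pushed window")
        }
    }
}

struct MainView: View {
    // visionOS-only action that shows the pushed window in place of the current one.
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            pushWindow(id: "pushed")
        }
    }
}

After backgrounding the app with the Digital Crown and relaunching it from its icon, the pushed window is gone and only the main window returns.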
I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, such as when someone moves too quickly or gets too close to a physical object. The content in front of them dims briefly to allow a clearer view of their surroundings. I'd like to know the specific distance at which the system begins to show the physical object, or what criteria are used for this adjustment.
Information is light on the new subdivision support for USD models in RealityKit, and I have been unable so far to get one of my models to actually subdivide within Reality Composer Pro or Quick Look (or when viewing on Vision Pro).
I've exported a few test models from Houdini and verified that they contain 'uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes.
But, when viewing, they simply look like polygonal meshes. There's no 'subdividing' occurring at runtime when viewing the models.
Is there a trick to getting them to actually smooth-out?
In the Discover RealityKit APIs for iOS, macOS, and visionOS presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details on how this works in RealityKit?
Can we access a Vision Pro user's spatial Persona in our application's view without using SharePlay or a Group Activity, like any other 3D avatar?
I want to use that Persona in the app without live rendering; I just want to pass some voice commands so the avatar appears to be speaking.
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we have had reports from some users on Pro Max model phones (at least the 14 Pro Max and 15 Pro Max). When they tap to launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users, the controls along the top of the player - X (close), AR | Object (toggle), and the share button - are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera indicator, connection, battery).
This means that when they open a USDZ file in AR or 3D view, they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app! So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and Minis, and they are not experiencing the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items be moving if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
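For reference, a typical way to trigger the Quick Look player from within an app looks roughly like this (a simplified sketch, not necessarily our exact code):

import QuickLook
import UIKit

// Simplified sketch: presents a USDZ file with the standard Quick Look preview controller.
final class USDZPreviewPresenter: NSObject, QLPreviewControllerDataSource {
    private let fileURL: URL

    init(fileURL: URL) {
        self.fileURL = fileURL
    }

    func present(from presenter: UIViewController) {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        presenter.present(previewController, animated: true)
    }

    // MARK: QLPreviewControllerDataSource

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        fileURL as QLPreviewItem
    }
}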
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.
let particleEmitterEntities = context.entities(matching: particleEmitterQuery, updatingSystemWhen: .rendering)
for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        // trying to get the world-space position of the hand
        // I also tried a position relative to particleEmitterEntity
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}
The particle attraction center doesn't seem to update.
Another issue I am noticing is that my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center.
What is the right way to create an effect where the particles are attracted to my hand?
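One thing I'm considering trying is a rough sketch like the following, based on the assumption that attractionCenter is interpreted in the emitter entity's local space rather than world space:

// Convert the hand's world-space position into the emitter entity's local space
// before assigning it to the attraction center.
let handWorldPosition = rightHandEntity.position(relativeTo: nil)
let handInEmitterSpace = particleEmitterEntity.convert(position: handWorldPosition, from: nil)

if var emitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
    emitter.mainEmitter.attractionCenter = handInEmitterSpace
    particleEmitterEntity.components.set(emitter)
}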
On visionOS, I have discovered that if dismissWindow is followed immediately by a call to openWindow, the new window does not open where the user is looking. Instead, it appears at the same location as the dismissed window. However, if I open the new window after a small delay, or after UIScene's willDeactivateNotification, the new window correctly opens in front of the user. (I tested this within an open immersive space.)
Does this imply that dismissWindow is actually asynchronous, in the sense that it requires extra time to reset certain internal states before the next openWindow can be called? What is the best practice for closing a window and then opening a new window in front of the user's current head position?
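The workaround I'm currently using looks roughly like this (a sketch with placeholder window identifiers; the delay value is arbitrary):

// dismissWindow and openWindow come from @Environment(\.dismissWindow) and @Environment(\.openWindow).
// Dismiss the old window, then open the new one after a short delay so it
// appears in front of the user instead of at the old window's position.
dismissWindow(id: "OldWindow")
Task {
    try? await Task.sleep(for: .milliseconds(250))
    openWindow(id: "NewWindow")
}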
I found that the system Messages app has a tap-to-expand effect in which content expands horizontally and smoothly, which looks good. I wonder if I can add a similar effect to my own app. If possible, are there any implementation ideas or examples I can refer to? Thanks!
Hello Developers,
I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.
I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.
Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.
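From what I've read so far, SharePlay / GroupActivities seems to be the common building block for synchronizing such experiences; a minimal activity definition might look like this (a sketch with placeholder names):

import GroupActivities

// Placeholder activity that participants join to view the machinery together.
struct MachineryViewingActivity: GroupActivity {
    static let activityIdentifier = "com.example.machinery-viewing"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "View Machinery Together"
        metadata.type = .generic
        return metadata
    }
}

However, I'm unsure whether this also covers the case where all users are physically in the same room and need a shared reference frame for the object.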
Any advice or shared experiences would be greatly appreciated!
Best regards,
Revin
Hi, I am trying to stream spatial video in real time from my iPhone 16.
I am able to record spatial video as a file output using:
let videoDeviceOutput = AVCaptureMovieFileOutput()
However, when I try to grab the raw sample buffer, it doesn't include any spatial information:
let captureOutput = AVCaptureVideoDataOutput()

// when initializing the camera
session.addOutput(captureOutput)
captureOutput.setSampleBufferDelegate(self, queue: sessionQueue)

// finally
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // use the sample buffer (but no spatial data is available here)
}
Is this how it's supposed to work, or am I missing something?
This video, https://developer.apple.com/videos/play/wwdc2023/10071, gives a clue about setting up spatial streaming, and I've got the backend ready for 3D HLS streaming. Now I am only stuck on how to send the video stream to my server.
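For comparison, the file-output path that does work is configured roughly like this (a sketch; as far as I can tell the spatial toggle exists on AVCaptureMovieFileOutput, and I haven't found an equivalent for AVCaptureVideoDataOutput):

import AVFoundation

// Sketch: enable spatial video capture on the movie file output (iOS 18+).
func configureSpatialCapture(session: AVCaptureSession,
                             device: AVCaptureDevice,
                             movieOutput: AVCaptureMovieFileOutput) {
    guard device.activeFormat.isSpatialVideoCaptureSupported else {
        print("Active format does not support spatial video capture")
        return
    }
    if session.canAddOutput(movieOutput) {
        session.addOutput(movieOutput)
    }
    movieOutput.isSpatialVideoCaptureEnabled = true
}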
I have an app with a visionOS target, and I want to add an iOS target. Both are based on RealityKit.
I want to use a SpatialTapGesture to get the tap coordinate local to the entity tapped.
In visionOS this is easy:
SpatialTapGesture(coordinateSpace: .local)
    .targetedToAnyEntity()
    .onEnded { tap in
        let entity = tap.entity
        let localPoint3D = tap.convert(tap.location3D, from: .local, to: entity)
        // …
    }
However, according to the docs, the convert function seems to exist only in visionOS, not in iOS.
So how can I do this conversion in iOS?
PS: This was already posted on StackOverflow without success. There, I tried to find a workaround, but I failed.
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022
Is there any update on full support for the WebXR AR Module, which should enable immersive-ar mode?
Are features such as DOM overlays and WebGPU bindings on the roadmap?
Is it possible to capture stereoscopic video, either internally or externally or via AirPlay, for debugging purposes?
Thanks
I'm asking myself what the limits of RealityView are. For example, is it possible to place an entity at position (x = 800 m, y = 0, z = -900 m)? What happens if I walk from my (0, 0, 0) to this point; will I see the entity then? Does anyone know where the limits are?
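For example, something like this inside a RealityView make closure (a trivial sketch just to illustrate the distances):

// Hypothetical entity placed 800 m to the right and 900 m ahead of the origin.
let farEntity = ModelEntity(
    mesh: .generateSphere(radius: 10),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)
farEntity.position = SIMD3<Float>(800, 0, -900)
content.add(farEntity)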
As the title states, this severely limits the flexibility of multi-window applications in creating a good user experience.
Even effects like the ones shown below cannot be achieved.
I created an Object & Hand Tracking app based on the sample code released here by Apple.
https://developer.apple.com/documentation/visionos/exploring_object_tracking_with_arkit
The app worked great and everything was fine, but I realized I was coding on Xcode 16 beta 3, so I installed the latest Xcode 16 from the App Store and tested my app there, and it completely crashed. No idea why. Here is the console output:
dyld[1457]: Symbol not found: _$ss13withTaskGroup2of9returning9isolation4bodyq_xm_q_mScA_pSgYiq_ScGyxGzYaXEtYas8SendableRzr0_lF
Referenced from: <3AF14FE4-0A5F-381C-9FC5-E2520728FC65> /private/var/containers/Bundle/Application/F74E88F2-874F-4AF4-9D9A-0EFB51C9B1BD/Hand Tracking.app/Hand Tracking.debug.dylib
Expected in: <2F158065-9DC8-33D2-A4BF-CF0C8A32131B> /usr/lib/swift/libswift_Concurrency.dylib
It was working perfectly fine on Xcode 16 beta 3, which makes me think it's an Xcode 16 issue, but I have no idea how to fix this. I also installed the Xcode 16.2 beta (the newest beta), but I get the same error.
Please help if anyone knows what is wrong!