I have a class with an Entity to which I added a Spatial Audio component, and a function that uses the playAudio() method to start the spatial audio. The first call of the function works fine, but if I call it again, the audio volume drops abruptly after about half a second and becomes very quiet.
I have roughly the following code:
class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()

    func play() {
        Task {
            let audioResource = try await AudioFileResource(contentsOf: urlWave)
            self.speechEntity.playAudio(audioResource)
        }
    }

    func initSpatialAudio() -> Entity {
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
I have visionOS 2.2 on the Apple Vision Pro and use Xcode 16.1.
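One thing I plan to try (a sketch, not a confirmed fix): keep the AudioPlaybackController returned by playAudio(_:) and stop the previous playback before starting a new one, so that two playbacks on the same entity don't overlap. urlWave is the same URL as above; the error handling is only illustrative.

private var playbackController: AudioPlaybackController?

func play() {
    Task { @MainActor in
        do {
            let audioResource = try await AudioFileResource(contentsOf: urlWave)
            playbackController?.stop()                       // stop any earlier playback first
            playbackController = speechEntity.playAudio(audioResource)
        } catch {
            print("Failed to load audio resource: \(error)")
        }
    }
}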
Screenshot:
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
In the DestinationVideo demo, onAppear in UpNextView is triggered again when the view is closed, but I only want it to be triggered once. How can I achieve this?
Alternatively, I would like to capture the button click events in the player menu, as shown in the screenshot below.
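A minimal sketch of the kind of workaround I have in mind, assuming the view instance itself is what reappears: gate the work behind a @State flag so it only runs on the first appearance. If UpNextView is torn down and rebuilt when the player closes, the flag would reset too, and it would need to live in a model object that outlives the view. The body here is a placeholder.

import SwiftUI

struct UpNextView: View {
    @State private var hasAppeared = false    // survives re-appearance of this view identity

    var body: some View {
        Text("Up Next")                        // placeholder for the real content
            .onAppear {
                guard !hasAppeared else { return }
                hasAppeared = true
                // ... run the one-time work here ...
            }
    }
}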
Hi,
I'm developing an app for the Apple Vision Pro.
Inside the app the user should be able to load objects from the web into the scene and then be able to move them around (dragging and rotating) via gestures.
My question:
I'm working with RealityKit and use RealityView.
I have no issues loading in one object and making it interactive by adding gestures to the entire RealityView via the .gesture() modifier.
Also I succeeded in loading multiple objects into the scene.
My problem is that I can't figure out how to add my gestures to multiple objects independently.
I can't use Reality composer since I'm loading the objects dynamically into the scene.
Using .gesture() doesn't work for multiple objects, since every gesture needs to be targeted to a specific entity but I have multiple entities.
I also tried defining a GestureComponent and adding it to every newly loaded entity but that doesn't seem to work as nothing happens,
even though my gesture is targeted to every entity having my GestureComponent.
I only found solutions that are not usable on visionOS / RealityView, like installGestures.
Also I tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
But I feel like things are missing and contradictory. An extension of RealityView is mentioned but never shown. The guide states that we don't need to store the translation values for every entity, yet it creates an EntityGestureState.swift file to store exactly those values; that file is then not used, and a GestureStateComponent that was never mentioned is used instead, which contradicts what was just said about not storing values per entity, since a new instance of it is created for every entity. In any case, following the guide didn't lead me to a working solution.
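For reference, here is a minimal sketch of the kind of pattern I was expecting to work, based on the documented targetedToEntity(where:) modifier: a single gesture attached to the RealityView that only fires for entities carrying a marker component. The Draggable name is made up for this example, and each dynamically loaded entity would also need an InputTargetComponent and a CollisionComponent to be hit-testable.

import SwiftUI
import RealityKit

// Hypothetical marker component; register it once at app start with Draggable.registerComponent().
struct Draggable: Component {}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // ... load entities dynamically; give each one Draggable(),
            // an InputTargetComponent(), and a CollisionComponent ...
        }
        // One gesture on the RealityView handles every entity that has Draggable.
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(Draggable.self))
                .onChanged { value in
                    // value.entity is whichever draggable entity is being dragged.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}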
In some cases we put the Apple Vision Pro back on a short time after taking it off, such as less than a minute, and we want it to stay active while it is off so that it can continue to work seamlessly when we put it on again.
The current Apple Vision Pro turns off the display and goes to sleep whenever it is taken off. The user guide explains that this is to save power, for safety, etc.
However, we want a seamless experience!
Is it possible to have something like the screen saver on macOS or iOS, where we can set our own delay before it goes to sleep?
Or is there any API that can be called to prevent it from going to sleep?
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
I'd appreciate any help, thanks.
I'm developing an app in which I need to render pictures and include some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
I saw at WWDC25 mentions of visionOS 26 now providing hand-tracking poses at 90 Hz, but I also recall that being a feature in visionOS 2.
Is there something new happening in visionOS 26 that makes its implementation of hand tracking "better"?
I have a normal application flow, and at one point I have to open an immersive space to view an image in 360 degrees. But currently, whenever I run the application it starts with the immersive space, and my app's normal flow does not start. What is the issue here?
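As a reference point, my understanding is that visionOS opens either the scene named by the "Preferred Default Scene Session Role" key in the Info.plist or, if that key is not set, the first scene declared in the App body. A sketch of the app structure I would expect to start in the window rather than the immersive space (names are placeholders):

import SwiftUI
import RealityKit

@main
struct MyApp: App {
    var body: some Scene {
        // The window scene is declared first, so it should be what opens at launch
        // (unless the Info.plist "Preferred Default Scene Session Role" says otherwise).
        WindowGroup {
            Text("Normal app flow")          // placeholder for the real content view
        }

        // Opened explicitly later, e.g. with openImmersiveSpace(id: "Image360View").
        ImmersiveSpace(id: "Image360View") {
            RealityView { content in
                // the 360-degree image sphere would be set up here
            }
        }
    }
}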
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we had reports from some users on Max model phones (14 and 15 Pro Max at least). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users the controls along the top of the player, X (close), AR | Object (toggle), and the share button, are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox etc on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app! So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and minis, and they do not experience the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items be moving if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
In Reality Composer, it is possible to create child components and manipulate them within the hierarchy of a ModelEntity. Is there a way to create child components in other 3D modeling programs, such as Blender?
In our visionOS app, we have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity, visibility, and state preservation, but none worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
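One direction I'm considering, a sketch under the assumption that the window's state can live outside its views: keep the listing state in a model object owned by the App, so that dismissing the window and reopening it later restores the same state. All names here (VideoLibraryModel, the window and space ids) are placeholders.

import SwiftUI
import RealityKit

@MainActor
final class VideoLibraryModel: ObservableObject {
    @Published var selectedVideoID: String?   // whatever state the main window needs to resume
    @Published var scrollOffset: CGFloat = 0
}

@main
struct VideoApp: App {
    @StateObject private var library = VideoLibraryModel()   // outlives any single window

    var body: some Scene {
        WindowGroup(id: "main") {
            Text("Video listings")            // placeholder for the real listing view
                .environmentObject(library)   // the reopened window reads the same model
        }

        ImmersiveSpace(id: "player") {
            RealityView { content in
                // immersive playback content goes here
            }
        }
    }
}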
I am encountering an issue while using the multiview video demo provided at this link "https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/". Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
I'm getting the following error in my Swift build targeting visionOS 2.0:
" 'defaultDisplay' is unavailable in visionOS "
TL;DR: how do I specify an initial window position in visionOS? The docs seem to be off; see below.
The docs say it is available, but it is not, or at least my Xcode (Version 16.0) throws errors on it:
https://developer.apple.com/documentation/swiftui/scene/defaultwindowplacement(_:)
I know Apple is opinionated about window placement in visionOS, and maybe it will never be available, but the docs say it is in visionOS 2.0+, and it sure would be nice to be able to specify a default position toward the bottom of one's FOV, etc.
Side note: the example code in that doc also has the issue that "Window" is not available in visionOS (WindowGroup is).
Example code, barely modified from the example in the doc:
var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        let displayBounds = context.defaultDisplay.visibleRect
        let size = content.sizeThatFits(.unspecified)
        let position = CGPoint(
            x: displayBounds.midX - (size.width / 2),
            y: displayBounds.maxY - size.height - 140)
        return WindowPlacement(position)
    }
}
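A sketch of what I may try instead on visionOS 2, given that the placement context there does not expose defaultDisplay: my understanding is that visionOS offers semantic placements such as .utilityPanel (low in the field of view, near the user) rather than display coordinates, though I have not confirmed this on every SDK version.

WindowGroup("MyLilWindow", id: "MyLilWindow") {
    TestView()
}
.windowResizability(.contentSize)
.defaultWindowPlacement { content, context in
    WindowPlacement(.utilityPanel)    // semantic position instead of display coordinates
}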
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, my SwiftUI attachments placed in front of the stereoscopic image plane are invisible if the user looks at them straight on at 90 degrees. However, if the user looks at them from increasing angles to the side, the attachments gradually become visible again.
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
In RealityView, physical components are only applicable to certain solids. How can I simulate the physical effects of water and cloth?
entity.components.set(PhysicsBodyComponent())
I would like to present the content of a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a way similar to loading a web page in a WKWebView?
I want to record an animation with an entity and then export it to .usd without using Reality Composer Pro. How can I achieve that?
In RealityKit's particle effects there is a type:
ParticleEmitterComponent.Presets
It can invoke certain preset particle effects, but I am interested in learning how to modify these particle emitters (such as adjusting the color, the number of particles, and the generation range).
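From what I can tell, a preset is just a regular ParticleEmitterComponent value whose properties can be changed before it is attached to an entity. A sketch of what I mean (property names are from the ParticleEmitterComponent API; the values are only examples):

import RealityKit
import UIKit

var particles = ParticleEmitterComponent.Presets.magic

particles.mainEmitter.birthRate = 500                          // particles emitted per second
particles.mainEmitter.color = .constant(.single(.systemTeal))  // particle color
particles.emitterShape = .sphere                               // generation shape
particles.emitterShapeSize = [0.5, 0.5, 0.5]                   // generation range in meters

let emitterEntity = Entity()
emitterEntity.components.set(particles)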
I have discovered that RemoteImmersiveSpace is limited to utilizing the structure of the CompositorContent protocol, precluding direct invocation of RealityView. Consequently, I am interested in understanding the appropriate method for integrating CompositorContent within RemoteImmersiveSpace. Thanks.