I have a class with an Entity to which I added a Spatial Audio component. I also have a function that uses the playAudio() method to start the spatial audio. The first call of the function works fine, but if I call it again, the audio volume drops abruptly after about half a second and becomes very quiet.
My code looks roughly like this:
import AVFoundation
import RealityKit

class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()

    func play() {
        Task {
            do {
                // urlWave points to the audio file to play (defined elsewhere)
                let audioResource = try await AudioFileResource(contentsOf: urlWave)
                self.speechEntity.playAudio(audioResource)
            } catch {
                print("Failed to load audio resource: \(error)")
            }
        }
    }

    func initSpatialAudio() -> Entity {
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
I'm running visionOS 2.2 on the Apple Vision Pro and using Xcode 16.1.
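For reference, here is a rough sketch of how I could keep a handle to the AudioPlaybackController returned by playAudio() and stop the previous playback before starting the clip again (this is just an idea on my side and may not be related to the volume drop):

// Sketch: keep the controller from the previous playback so it can be stopped
// before the same clip is played again.
private var playbackController: AudioPlaybackController?

func replay(_ resource: AudioFileResource) {
    // Stop whatever is still playing on the entity.
    playbackController?.stop()
    // playAudio(_:) returns a controller for the new playback.
    playbackController = speechEntity.playAudio(resource)
}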
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS to track human and animal body poses, or whether there are limits to using it on visionOS.
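For context, this is the kind of request I have in mind, using the classic Vision API on a single image (a minimal sketch; whether this works with camera input on visionOS is exactly what I'd like to confirm):

import Vision

// Minimal sketch: run a human body pose request on a CGImage.
// VNDetectAnimalBodyPoseRequest could be used the same way for animals.
func detectBodyPoses(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}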
I just downloaded and opened the sample code for Creating 3D models as movable windows. (Link: https://developer.apple.com/documentation/visionos/creating-a-volumetric-window-in-visionos ).
I opened the main view in the simulator (canvas), placed the volumetric window somewhere, and then moved the camera a bit.
Expected:
I would expect volumetric windows to stay in place when I place them somewhere and then move around or look in a different direction.
Actually:
They don't stay in place. They slightly move with the camera.
Question:
Is this behavior expected?
Is this just a simulator artifact that won't happen on real hardware?
When I try to open an Immersive Space, I get an error like the one below:
HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
Any idea how to solve it?
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which can be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these kinds of custom coordinators or get a delegate that works.
Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (like in the current sample project), but we didn't find a way to access and control, for example, the arbiter, seats, player events, among other features mentioned on https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession.
Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator?
Possibly we are missing something here; any suggestions?
If I set UIApplicationPreferredDefaultSceneSessionRole to UISceneSessionRoleImmersiveSpaceApplication, then my Immersive Space for the image works fine. But when I use UIWindowSceneSessionRoleApplication and try to open the Immersive Space from a particular sub-screen, the image is not shown in the Immersive Space (the Immersive Space does not open).
Does anyone have an idea what the issue is?
<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationPreferredDefaultSceneSessionRole</key>
    <string>UIWindowSceneSessionRoleApplication</string>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UISceneSessionRoleImmersiveSpaceApplication</key>
        <array>
            <dict>
                <key>UISceneInitialImmersionStyle</key>
                <string>UIImmersionStyleFull</string>
            </dict>
        </array>
    </dict>
</dict>
My Info.plist values are shown above.
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
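For reference, this is the kind of per-hand filtering I have in mind, sketched with SwiftUI's SpatialEventGesture (I'm assuming the events expose a chirality value on visionOS; the exact names may be off, and the view here is just illustrative):

import SwiftUI

// Sketch: only react to spatial events reported as coming from the right hand.
struct RightHandOnlyButton: View {
    var body: some View {
        Circle()
            .frame(width: 120, height: 120)
            .gesture(
                SpatialEventGesture()
                    .onEnded { events in
                        // Assumption: event.chirality distinguishes the left and right hand.
                        for event in events where event.chirality == .right {
                            print("Right-hand pinch")
                        }
                    }
            )
    }
}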
Best regards.
In a simple test, I'm observing ~30% higher CPU usage with the ARWorldTrackingConfiguration compared to the ARBodyTrackingConfiguration when both configurations have AREnvironmentTexturing enabled.
In Instruments, I observe Recon3D consuming ~5.5 seconds of CPU time with the ARWorldTrackingConfiguration vs. <0.3 seconds with the ARBodyTrackingConfiguration, in two separate 30-second samples.
This is on an iPhone 12 Pro equipped with lidar.
Is there a reason why two separate configurations, both with the same features enabled, would have such different CPU overhead?
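For reference, the comparison is roughly between configurations set up like this (a simplified sketch; session setup and the run loop are omitted):

import ARKit

// Both configurations have environment texturing enabled.
func makeWorldTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    return configuration
}

func makeBodyTrackingConfiguration() -> ARBodyTrackingConfiguration {
    let configuration = ARBodyTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    return configuration
}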
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
Prefetching logic for UICollectionView on visionOS does not work.
I have set up a standalone test repo to demonstrate this issue. The repo is basically a visionOS version of Apple's guide project on implementing prefetching logic.
In the repo you will see a simple ViewController with a UICollectionView, wrapped inside UIViewControllerRepresentable.
On scroll, it should print 🕊️ prefetch start to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, this never happens on visionOS devices.
With the same code, it behaves correctly on iOS devices.
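For reference, the relevant part of the setup looks roughly like this (a simplified sketch; the class and outlet names are illustrative, and the cell and data source configuration follow Apple's guide project):

import UIKit

final class ViewController: UIViewController, UICollectionViewDataSourcePrefetching {
    @IBOutlet private var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Opt in to prefetch callbacks.
        collectionView.prefetchDataSource = self
    }

    // Expected to fire while scrolling; in my test it is never called on visionOS.
    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start", indexPaths)
    }
}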
We use Unity 6 + visionOS 2.2 to develop an MR application.
App Mode: RealityKit with PolySpatial,
In testing, we found that when I move more than 80-100 meters away from the position where the application started, my current position is reset to Vector3.zero, which makes the application experience very poor. Is anyone experiencing the same problem? Is there a solution?
I've submitted my first AR app for iPhone and iPad to iTunes Connect. After sending a binary to iTunes Connect, I received the following warning message.
The app contains the following UIRequiredDeviceCapabilities values, which aren’t supported in visionOS: [arkit].
No. 1, my app doesn't support visionOS. No. 2, I don't have a UIRequiredDeviceCapabilities dictionary in Info.plist. Why am I receiving this warning? One article related to this issue suggests removing the UIRequiredDeviceCapabilities dictionary, but I don't have it in my plist. What can I do about this warning message? Thanks.
It seems that there is a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.
In the ARKit documentation for ARCamera.transform, it says the following:
This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side.
Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:
The device is in landscape mode, with the device held upright and the front-facing camera on the right side.
There seems to be a conflict between the two definitions, which has already been asked about and visualized in this StackOverflow thread.
The accepted answer there says that ARKit's landscapeRight, unlike UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation.
It says that more details are given in this StackOverflow thread, but that thread talks about the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, and not anything related to ARKit.
So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation despite explicitly referencing it? Is it just a mistake by Apple developers that hasn't been resolved even after so long?
Hi all,
I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:
-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330:
failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`
Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression?
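For reference, this is roughly how I'm thinking of querying the limit at runtime and sizing the dispatch from it instead of hard-coding 32 x 32 (a sketch; pipeline creation and the rest of the compute pass are omitted):

import Metal

// Sketch: derive threadsPerThreadgroup from the pipeline's own limits.
func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    encoder.setComputePipelineState(pipeline)

    let maxThreads = pipeline.maxTotalThreadsPerThreadgroup   // 832 in the failing case
    let width = pipeline.threadExecutionWidth
    let height = max(1, maxThreads / width)

    encoder.dispatchThreads(gridSize,
                            threadsPerThreadgroup: MTLSize(width: width, height: height, depth: 1))
}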
Thanks in advance!
Hi there,
I'm building a workplace experience that requires using the virtual desktop. Is there a way to launch it from my code, so the user doesn't have to do it manually?
Thanks in advance!
Can we constrain or clamp translation with the new ManipulationComponent? For example, allow free movement within certain bounds.
Hello,
I have downloaded and run the sample object tracking app for visionos.
Now I'm working on tracking my own objects. I have made a model with Create ML using images of my object.
However, I cannot see how to convert the Create ML output file (xxx.mlmodel) into a reference object like the files in the sample project.
Is there a tool for converting them?
TIA
Greetings. I am having this issue with a Unity Polyspatial VisionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't.
If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear, and only the Native UI windows we left open are visible. But we can sometimes still hear sounds playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
I have an entity that was created using Mixamo, and it has an animation.
After the animation completes, the mesh of the robot is not where the entity is positioned.
I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transformations applied to any of the children of the model's root, which means the transformations are applied to the skeleton by the playing of the animation.
Is there a way to apply the final position of the skeleton's root to the root entity, so the entity is positioned where the animation ended, just before the next animation plays?
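For reference, this is the sort of thing I'm imagining, sketched with an AnimationEvents.PlaybackCompleted subscription and the model's joint transforms (I'm assuming the skeleton root is joint index 0, which may not hold for a Mixamo rig, and the space conversion is simplified):

import Combine
import RealityKit

final class AnimationEndWatcher {
    private var subscriptions: [Cancellable] = []

    // Sketch: when playback completes, fold the skeleton root's translation back
    // into the entity's transform, then clear it on the joint.
    func install(on model: ModelEntity, in scene: RealityKit.Scene) {
        subscriptions.append(
            scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: model) { _ in
                // Assumption: joint 0 is the skeleton root.
                var rootJoint = model.jointTransforms[0]
                // Move the entity to where the skeleton ended up (local space).
                model.transform.translation += rootJoint.translation
                // Reset the joint so the mesh doesn't jump when the next animation starts.
                rootJoint.translation = .zero
                model.jointTransforms[0] = rootJoint
            }
        )
    }
}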
I need help wrapping my head around this...
If I import the Reality Composer Pro package and load it into an ARView, I see 1.3 GB of memory usage and about 180-220% CPU usage. The frame rate starts at around 60 fps and eventually drops to around 30 fps.
If I export the usdz from Reality Composer Pro and load that into the same ARView, I see about 1 GB of memory usage and around 150% CPU usage; the frame rate holds at 60 fps longer but eventually drops.
If I load that same usdz into a QuickLook view, I see about 55 MB of memory usage, 9-11% CPU, and the frame rate stays locked at 116 fps. The only thing I notice is that the button I have is slightly less responsive, but it all still works fine.
I don't understand. How can I make the ARView work as efficiently as QuickLook?