After creating a ".plist" file in the app's Documents folder, then uninstalling and reinstalling the app, the "FileManager.default.fileExists(atPath: folderURL.path())" call still returns true, even though I checked and the ".plist" file is no longer present in the app's Documents folder. Is this a bug in visionOS?
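One thing worth double-checking before calling it a bug: the call above passes folderURL, so it reports whether the Documents folder itself exists, not whether the ".plist" inside it does. A minimal sketch of checking both (the file name is a placeholder):

import Foundation

// "Settings.plist" is an assumed name for illustration only.
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let plistURL = documentsURL.appendingPathComponent("Settings.plist")

// Checking the folder returns true as long as the Documents directory exists,
// even after a reinstall has removed the .plist inside it.
print(FileManager.default.fileExists(atPath: documentsURL.path))
// Checking the file itself reflects whether the .plist is actually there.
print(FileManager.default.fileExists(atPath: plistURL.path))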
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
Hello! I've got USDZ exports from my Maya pipeline working with animation, and they load up nicely on the Vision Pro.
I've been checking out the animated sample files on the Augmented Reality / Quick Look sample page, specifically the first three at the top of the page.
I would like to know how they are created. I'm a 3D modeler and animator, not a programmer, so I'm dipping my toe into RCP and Xcode/SwiftUI, but I could use some informative tutorials on the proper workflow. For example, in the Lunar Rover sample, there are lines emanating from the model, and then text windows appear. Would I need to create all these extras inside Reality Composer Pro? I'd like to start creating immersive, narrative experiences (both in a volume and fully immersive), but for prototyping I want to learn the proper way to add this type of functionality. I think I remember seeing something about "schemas" being involved. I'm assuming there might be some coding to set up in RCP so that when items are selected, an associated animation is triggered. Can anyone point me towards the relevant documentation to help me get started? Remember, I don't code. ;)
Here are my recent Vision Pro experimentations.
https://youtube.com/playlist?list=PLCH753rZ9r6eqXxpIemaSlcyYxjFgR210&si=P_7AY2aL97Upm61i
I'm also proficient with Unreal Engine, but getting content packaged and over to AVP is still not ready for prime time, so I'm exploring the native approach.
Thanks for helping point me in the right direction!
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
How do we author a Reality file like the animated ones under Examples at https://developer.apple.com/augmented-reality/quick-look/ ?
For example, "The Hab" : https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality
Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer.
And I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use?
Thanks
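For the tap-target part of the question, here is a minimal RealityKit/SwiftUI sketch, not the workflow Apple actually used for those .reality files (that isn't documented here); the entity name "Hab" and the gesture handling are assumptions. Each tappable child gets collision and input-target components, and a tap plays an animation found on the tapped entity:

import SwiftUI
import RealityKit

struct TapAnimationView: View {
    var body: some View {
        RealityView { content in
            // "Hab" is a placeholder asset name.
            if let model = try? await Entity(named: "Hab") {
                // Each tappable child needs collision shapes and an input target.
                for child in model.children {
                    child.components.set(InputTargetComponent())
                    child.generateCollisionShapes(recursive: true)
                }
                content.add(model)
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Play an animation carried by the tapped entity, if any.
                    if let animation = value.entity.availableAnimations.first {
                        value.entity.playAnimation(animation)
                    }
                }
        )
    }
}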
Hi,
Is there a way to create an AnchorEntity that is attached to the window / WindowGroup of a visionOS app, so that, for example, a box stays aligned with the window?
Thanks for your help!
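For what it's worth, I'm not aware of a window-specific AnchorEntity target, but entities added to a RealityView inside a WindowGroup live in that window's coordinate space and move with it. A minimal sketch (the size, color, volumetric style, and window id are assumptions):

import SwiftUI
import RealityKit

struct WindowAlignedBoxView: View {
    var body: some View {
        RealityView { content in
            // The box is added to the RealityView's content, so it stays
            // aligned with the window that hosts it.
            let box = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            content.add(box)
        }
    }
}

// Hypothetical usage in the App body:
// WindowGroup(id: "boxWindow") { WindowAlignedBoxView() }
//     .windowStyle(.volumetric)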
Hi, could anyone share some insights on how to get and track the 3D coordinates of real objects in the environment in Playgrounds? I searched for some resources and noticed that ARKit and RealityKit may be helpful, but I'm not sure how to use them.
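If this runs on iOS/iPadOS ARKit (rather than visionOS), one common approach is raycasting from a screen point to read back world coordinates. A minimal sketch; whether ARView is usable in your Playgrounds setup is an assumption:

import ARKit
import RealityKit
import UIKit

// Raycasts from a screen point into the real world and returns the 3D world
// position of the surface that was hit, if any.
func worldPosition(at screenPoint: CGPoint, in arView: ARView) -> SIMD3<Float>? {
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else {
        return nil
    }
    // The translation column of the hit transform is the 3D position.
    let t = result.worldTransform.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}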
I just downloaded and opened the sample code for Creating 3D models as movable windows. (Link: https://developer.apple.com/documentation/visionos/creating-a-volumetric-window-in-visionos ).
I opened the main view in the simulator (canvas), placed the volumetric window somewhere, and then moved the camera a bit.
Expected:
I would expect volumetric windows to stay in place when I place them somewhere and then move around or look in a different direction.
Actually:
They don't stay in place. They slightly move with the camera.
Question:
Is this actual behavior expected?
Is this just a simulator issue that will not happen on real hardware?
Topic:
Spatial Computing
SubTopic:
General
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer.
I was playing around in the Compositor Services demo and modified it to show the following.
I created a grid pattern using normalized device coordinates (-1 to 1) and it looks great when it shows up in the simulator as shown below.
I wanted to see the effects of lens distortion on the image, so I ran this modified app on the actual Vision Pro; it seems that each eye sees only a portion of this screen. I have included a capture from a screen recording taken inside the Vision Pro while running the modified app.
The lines appear straight, which tells me that there must be some automatic pre-distortion correction applied (similar to the image shown below, taken from an AVP teardown that I cannot link here).
However, I am wondering why the grid appears cropped, and what defines the bounds of the frame.
Lately I've been getting a lot of crashes on iOS 18 devices (mostly iPhone 13 Pro devices, but also an iPhone 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?
com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
Hello, I am creating a visionOS application where a simple web browser is implemented in an immersive view by using WKWebView. Presentations cause a crash with "Presentations are not permitted within volumetric window scenes."
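One hedged guess at the source of the presentation: JavaScript alert()/confirm() panels make WKWebView try to present UIKit alerts, which is the kind of presentation volumetric scenes reject. A minimal sketch of a WKUIDelegate that answers those panels without presenting anything; whether this matches your particular crash is an assumption:

import WebKit

final class NonPresentingUIDelegate: NSObject, WKUIDelegate {
    // Answer JavaScript alert() without showing a UIKit alert controller.
    func webView(_ webView: WKWebView,
                 runJavaScriptAlertPanelWithMessage message: String,
                 initiatedByFrame frame: WKFrameInfo,
                 completionHandler: @escaping () -> Void) {
        print("JS alert suppressed: \(message)")
        completionHandler()
    }

    // Answer JavaScript confirm() with "cancel" instead of presenting UI.
    func webView(_ webView: WKWebView,
                 runJavaScriptConfirmPanelWithMessage message: String,
                 initiatedByFrame frame: WKFrameInfo,
                 completionHandler: @escaping (Bool) -> Void) {
        completionHandler(false)
    }
}

// Hypothetical usage: webView.uiDelegate = NonPresentingUIDelegate() (keep a strong reference to the delegate).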
I am attempting to use the Barcode Detection enterprise API. I have the necessary entitlements and license file. I'm following the sample code online, and whenever I attempt to run the barcode detection using arKitSession.run I get the following error message:
ar_barcode_detection_provider_t <0x300d82130>: Failed to run provider with transient error code: 1
It obviously isn't running the barcode detection, even though it's running in an immersive space in mixed mode. Any idea what might be going on?
Topic:
Spatial Computing
SubTopic:
ARKit
Hi!
I want to ask whether it's possible to make a mirror material with the Shader Graph in Reality Composer Pro. The mirror should reflect entities in the Reality Composer Pro scene.
I found that this works with SceneKit, but I'm using RealityKitContent in my project. Is there a way to solve this?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
I have a class with an Entity, to which I added a Spatial Audio component. Furthermore, I have a function that uses the playAudio() method to start the spatial audio. During the first call of the function, everything is fine. If I call it again, the audio volume drops abruptly after about half a second and becomes very quiet.
Approximately, I have the following code:
import AVFoundation
import RealityKit

class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()
    // `urlWave` is a URL property pointing to the audio file (defined elsewhere in the class).

    func play() {
        Task {
            do {
                let audioResource = try await AudioFileResource(contentsOf: urlWave)
                self.speechEntity.playAudio(audioResource)
            } catch {
                print("Failed to load audio resource: \(error)")
            }
        }
    }

    func initSpatialAudio() -> Entity {
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
I have visionOS 2.2 on the Apple Vision Pro and am using Xcode 16.1.
I am currently developing a game that runs on visionOS using RealityKit and Swift.
I have a question regarding particle emitters.
It seems that there is a sorting order (render queue) between particle emitters themselves, but there doesn’t appear to be a render queue between particle emitters and regular model entities.
If such a feature exists, could you please provide a simple example?
Thank you!
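The closest thing I know of is RealityKit's ModelSortGroupComponent, which gives entities an explicit draw order within a group; whether it also covers particle emitters is exactly the open question here, so treat this as a sketch rather than a confirmed answer:

import RealityKit

// Puts two entities in one sort group so their draw order is explicit:
// lower order values render first, higher values render later (on top).
func applyRenderOrder(model: Entity, particles: Entity) {
    let group = ModelSortGroup(depthPass: nil)
    model.components.set(ModelSortGroupComponent(group: group, order: 0))
    particles.components.set(ModelSortGroupComponent(group: group, order: 1))
}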
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.
What happens:
When executing code from this WWDC tutorial, I'm getting this error when trying to use a CameraFrameProvider:
ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}
What I've tried:
I followed the instructions given by email, by:
adding the .license file at the root of my project,
adding the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there).
I've added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned).
I've also followed the advice in those posts.
When checking the account settings, I do see the capabilities under "Additional Capabilities".
On first launch, I'm also prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist file is valid?
What did I miss to get the entitlements working?
Here's the code:
import ARKit
import SwiftUI
import Vision
import RealityKit
import UIKit
import CoreImage

class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}
Thanks in advance for the help,
Jeremy
Copy of Feedback:
FB15969432
Improve window management with immersive spaces.
It is hard to manage windows from code when entering an immersive space.
Look for instance at the sample:
https://developer.apple.com/documentation/visionos/displaying-a-3d-environment-through-a-portal
The window displayed before entering the immersive space stays there once the immersive space is entered: this window is too big, but it can't be resized by the program.
One could say this big window could be closed and a smaller window with the "Exit" button opened by the program, but then this small window should be closed and the main window reopened when leaving the immersive space.
In the immersive space, closing the "Exit" window with the X does not allow you to leave the immersive space.
If the crown button is then used, we go back to the Vision Pro main menu. If the app is chosen again, we can see that it wasn't closed: the "Exit" window is now displayed, but we are not in the immersive space!
Don't say "this is just a sample app", because all developers face these issues.
Please try to find the right solutions with your team to enhance this sample and share the right way to solve these issues. You may find that the specifications need to be enhanced.
You can also see that there is no way for an app to exit itself programmatically, even though this could be useful for some apps (when ending a SharePlay game, for instance).
Thank you very much for your time and consideration.
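For reference, here is a minimal sketch of the window juggling described above using the standard SwiftUI environment actions (the window and space identifiers are assumptions); it doesn't address the deeper points raised here, such as an app exiting itself programmatically:

import SwiftUI

struct ImmersionToggle: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    @State private var isImmersed = false

    var body: some View {
        Button(isImmersed ? "Exit" : "Enter") {
            Task {
                if isImmersed {
                    // Leave the space and bring the main window back.
                    await dismissImmersiveSpace()
                    openWindow(id: "main")
                    isImmersed = false
                } else if case .opened = await openImmersiveSpace(id: "portal") {
                    // Hide the (too big) main window only if the space opened.
                    dismissWindow(id: "main")
                    isImmersed = true
                }
            }
        }
    }
}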
Topic:
Spatial Computing
SubTopic:
General
Hi, in visionOS 2, I wonder if there is a way to add/remove the referenceObjects of an ObjectTrackingProvider after the ARKitSession has started, without stopping it.
Right now I must stop the session and run it again, which interrupts the current tracking.
Thanks.
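For anyone else hitting this, a minimal sketch of the restart workaround described above (names are assumptions):

import ARKit

// Stops the session and runs a fresh provider with the new reference objects.
// This interrupts tracking, which is exactly the limitation being asked about.
func restartObjectTracking(session: ARKitSession,
                           referenceObjects: [ReferenceObject]) async throws {
    session.stop()
    let provider = ObjectTrackingProvider(referenceObjects: referenceObjects)
    try await session.run([provider])
}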
Hi,
We have been experimenting with visionOS and we need to query the field of view values for each eye of the device. We are currently using drawable.views[0].tangents and drawable.views[1].tangents respectively, which are labeled as 'deprecated'. We wonder whether there is an alternative for obtaining the FOVs, since we had no luck calculating them from the projection matrix we obtain from the drawables.
Thanks
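In case it helps while waiting for an official answer, the per-edge tangents can be recovered from a standard off-axis perspective projection matrix. A minimal sketch, assuming a Metal-style column-major projection for each view (sign conventions may need adjusting, and the claim that these match the deprecated tangents is an assumption):

import Foundation
import simd

// Recovers the per-edge view tangents and the full field of view (in radians)
// from an off-axis perspective projection matrix.
func fieldOfView(from projection: simd_float4x4) -> (horizontal: Float, vertical: Float) {
    let a  = projection.columns.0.x   // 2n / (r - l)
    let b  = projection.columns.1.y   // 2n / (t - b)
    let cx = projection.columns.2.x   // (r + l) / (r - l)
    let cy = projection.columns.2.y   // (t + b) / (t - b)

    // Tangents of the half-angles toward each frustum edge.
    let tanRight  = (1 + cx) / a
    let tanLeft   = (1 - cx) / a
    let tanTop    = (1 + cy) / b
    let tanBottom = (1 - cy) / b

    let horizontal = atanf(tanRight) + atanf(tanLeft)
    let vertical   = atanf(tanTop) + atanf(tanBottom)
    return (horizontal, vertical)
}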
Hi everyone,
I’m working on a project in Reality Composer Pro, and I’ve encountered an issue with vertex animations. Here’s what I’ve done:
I created a model in Blender, which includes two animations:
One that scales the vertices of the model.
Another that moves the model's position.
I imported both animations into Reality Composer Pro, and the position animation works fine, but the vertex scaling animation does not seem to work.
What I’m Trying to Achieve:
I want the vertex scaling animation to play correctly in Reality Composer Pro alongside the movement animation.
Problem:
The position animation works as expected, but the vertex scaling animation does not work when applied in Reality Composer Pro.
I have checked the vertex scaling animation in other software, and it works fine there. The issue seems to be specific to Reality Composer Pro.
Is vertex animation scaling supported in Reality Composer Pro? If so, what might be causing this issue? Any advice or solutions would be greatly appreciated!
Hello,
I have an iOS app that is using SwiftUI but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures I see this warning in the console:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
But I don't see any visible problems with the gestures.
I see this warning printed out after the gesture takes place but before any of our gesture methods get kicked off. So now I am wondering if this is something we need to deal with or some internal work that needs to happen in UIKit.
Does anyone have any thoughts on this?
Hello!
https://forums.developer.apple.com/forums/thread/762763
I read this thread, and this is similar to what I'm trying to do.
I have two entities in the scene, "HandTrackingEntity", "HandScanner".
"HandTrackingEntity": I put Anchoring Component, Collision Component (Trigger) here.
"HandScanner": I put Behaviors Component(OnCollision), and Collision Component here.
Here are the pictures of how I set up the components.
And I set the physicsSimulation property to .none.
I was expecting the Timeline to play when I put my hand (with HandTrackingEntity) on the "HandScanner" entity, but it didn't work.
Am I missing some steps? I also need sample code to understand how to apply the 'physicsSimulation' property. I'd appreciate it if you could let me know.
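As far as I know, in visionOS 2 the physicsSimulation property lives on AnchoringComponent, and setting it to .none lets the anchored entity's collisions interact with the rest of the scene. A minimal sketch in code (the hand location, collision radius, and names are assumptions, not a confirmed fix for the Timeline not firing):

import RealityKit

// Builds a hand-anchored entity whose collisions participate in the main
// scene's simulation instead of an isolated one, so its CollisionComponent
// can trigger OnCollision behaviors on other entities (e.g. "HandScanner").
func makeHandTrackingEntity() -> Entity {
    let handTrackingEntity = Entity()
    var anchoring = AnchoringComponent(.hand(.right, location: .palm))
    anchoring.physicsSimulation = .none
    handTrackingEntity.components.set(anchoring)
    handTrackingEntity.components.set(
        CollisionComponent(shapes: [.generateSphere(radius: 0.05)])
    )
    return handTrackingEntity
}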
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS