visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post · Replies · Boosts · Views · Activity

How to integrate Apple Immersive Video into the app you are developing.
Hello, I have some questions about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/

I am considering a feature that plays Apple Immersive Video as a background scene in my app, using 3DCG content converted into the Apple Immersive Video format.

First, is it possible to integrate Apple Immersive Video into an app at all? Could you share information about the required software and the integration process, and ideally any helpful website resources?

I am also considering producing Apple Immersive Video content myself and would like to know what equipment and software are needed for both live-action footage and 3DCG animation.

As mentioned, I plan to play Apple Immersive Video as a background, and I would also like to place some 3D models as RealityKit entities along with spatial audio. I'm planning to build the visionOS app as a Full Space mixed experience. Is an immersive viewing experience with Apple Immersive Video possible in Full Space mixed mode? Does Apple Immersive Video support Full Space mixed?

That's all my questions for now. Thank you in advance!
Replies: 0 · Boosts: 0 · Views: 8 · Activity: 25m
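A note on the background-video part of the question above: without speaking to the Apple Immersive Video format specifically, a common way to get an immersive video backdrop in a mixed Full Space is to map a VideoMaterial onto an inward-facing sphere inside the ImmersiveSpace, with other RealityKit entities and spatial audio added alongside it. A minimal sketch, assuming a hypothetical 360/180-degree movie named background.mp4 in the app bundle:

import SwiftUI
import RealityKit
import AVFoundation

// Sketch: an inward-facing video sphere as the backdrop of an immersive space.
// "background.mp4" is a hypothetical equirectangular asset bundled with the app.
struct VideoBackdropView: View {
    var body: some View {
        RealityView { content in
            guard let url = Bundle.main.url(forResource: "background", withExtension: "mp4") else { return }
            let player = AVPlayer(url: url)

            // Map the video onto a large sphere, flipped on X so its inside faces the viewer.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 1000),
                materials: [VideoMaterial(avPlayer: player)]
            )
            sphere.scale = [-1, 1, 1]
            content.add(sphere)

            // Other RealityKit entities (3D models, spatial audio emitters) can be added to the same content.
            player.play()
        }
    }
}

The ImmersiveSpace hosting this view can request the mixed style via .immersionStyle(selection:in:), which keeps the experience in a Full Space while passthrough remains available.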
System issues encountered when developing visionOS apps with Unity
I am using Unity to develop a visionOS app. Pressing the Digital Crown on Vision Pro while the app is in use exits the Unity immersive space and destroys the models in it, and those models become disconnected from the SwiftUI interface. After pressing the crown, the system home view appears; pressing it again returns to the app, but the Unity space cannot be restored. Calling the dismissWindow method from the SwiftUI interface also has no effect, so the window cannot be dismissed. Is there any solution?
Replies: 0 · Boosts: 0 · Views: 32 · Activity: 2h
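For the Unity question above, one pattern worth trying on the SwiftUI side is to track whether the immersive space is currently open and offer a way to reopen it after the Digital Crown dismisses it, rather than trying to tear down windows. A minimal sketch under that assumption; "UnitySpace" is a placeholder for whatever ImmersiveSpace id the app declares:

import SwiftUI

struct ControlWindow: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var spaceIsOpen = false

    var body: some View {
        Button(spaceIsOpen ? "Re-enter space" : "Enter space") {
            Task {
                // Reopen the (placeholder) immersive space and record whether it succeeded.
                switch await openImmersiveSpace(id: "UnitySpace") {
                case .opened:
                    spaceIsOpen = true
                default:
                    spaceIsOpen = false
                }
            }
        }
    }
}

The immersive space's content would set the same flag back to false when it disappears (for example through a shared observable model), so a crown dismissal is reflected in the window UI.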
My iOS game is compatible with visionOS and macOS
I already have an opinion on this (never release on a platform without testing on a physical device for that platform), but I wanted to learn from others' experience and expertise and see whether there are any viable options. My hybrid-casual puzzle game is released on the App Store for iOS (whew!), and apparently it is compatible with both macOS and visionOS. I would love to make it available everywhere; however, I'm not sure it's wise to do so without testing on those physical devices. That could also mean making design adjustments for those devices, having test devices ready, and so on, and I would have to upgrade my laptop to Apple silicon. Has anyone tried this without testing on physical devices? What are your thoughts / best suggestions? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 73 · Activity: 7h
Day 11: No Progress Despite Assurances of Expedited Review
Apple App Review Team, what is going on? Yesterday, I was assured that my app review was being "expedited." Yet, here we are—Day 11—and absolutely nothing has changed. The app remains stuck "In Review" with zero progress. This is beyond frustrating. It is actively damaging my business, my users, and the trust developers have in the App Store review process. I have followed every proper channel—daily support tickets, expedited review requests, even a direct call with an Apple representative—yet I am still met with the same vague, non-committal responses. Let me be clear: This is not an acceptable way to handle app reviews. Developers cannot operate under these unpredictable and arbitrary delays. The app is one of the most successful indie apps on visionOS, and this treatment is completely unjustified. I need real action. Today. Not another canned response, not another “we need additional time”—real progress and real accountability. Do better! App ID: 6737148404
Replies: 2 · Boosts: 0 · Views: 119 · Activity: 9h
EntityAction for MaterialBaseTint - incorrect colours
Hello, I'm writing an EntityAction that animates a material base tint between two colours. However, the colour that is actually set differs in RGB values from the one requested. For example, setting an end target of R0.5, G0.5, B0.5 results in a value of R0.735357, G0.735357, B0.735357. I can also see that the intermediate tint values during the animation cycle are incorrect versus those being set.

My understanding is that material base colour values are passed as a SIMD4<Float>, so I have a couple of helper extensions to convert a UIColor into this format and to mix between two colours. Note, however, that I don't think the issue is with these functions: even if their outputs were wrong, the final value of the base tint still wouldn't match the value being set. I wondered if this is a colour-space issue?

import simd
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension Float4 {
    func mixedWith(_ value: Float4, by mix: Float) -> Float4 {
        Float4(
            simd_mix(x, value.x, mix),
            simd_mix(y, value.y, mix),
            simd_mix(z, value.z, mix),
            simd_mix(w, value.w, mix)
        )
    }
}

extension UIColor {
    var float4: Float4 {
        var r: CGFloat = 0.0
        var g: CGFloat = 0.0
        var b: CGFloat = 0.0
        var a: CGFloat = 0.0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return Float4(Float(r), Float(g), Float(b), Float(a))
    }
}

struct ColourAction: EntityAction {
    let startColour: SIMD4<Float>
    let targetColour: SIMD4<Float>
    // Stored so the initializer matches the call site in updateColour(from:to:duration:endAction:).
    let endedAction: ((Entity) -> Void)?

    var animatedValueType: (any AnimatableData.Type)? { SIMD4<Float>.self }

    init(startColour: UIColor, targetColour: UIColor, endedAction: ((Entity) -> Void)? = nil) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
        self.endedAction = endedAction
    }

    static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    func updateColour(from currentColour: UIColor, to targetColour: UIColor, duration: Double, endAction: @escaping (Entity) -> Void = { _ in }) {
        let colourAction = ColourAction(startColour: currentColour, targetColour: targetColour, endedAction: endAction)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}

The EntityAction can only be applied to an entity with a ModelComponent (because of the material), so it can be called like this:

guard
    let modelComponent = entity.components[ModelComponent.self],
    let material = modelComponent.materials.first as? PhysicallyBasedMaterial
else { return }

let currentColour = material.baseColor.tint
let targetColour = UIColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
entity.updateColour(from: currentColour, to: targetColour, duration: 2)
Replies: 0 · Boosts: 0 · Views: 103 · Activity: 15h
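One observation on the numbers in the post above: 1.055 * 0.5^(1/2.4) - 0.055 ≈ 0.7354, which matches the reported value exactly, so an sRGB-versus-linear mismatch does look plausible. If that is the cause, converting the UIColor components to linear before building the SIMD4 may help. This is only a sketch under that assumption; the linearFloat4 property is a hypothetical addition, not part of the original code:

import UIKit
import simd

extension Float {
    // Standard sRGB transfer functions (IEC 61966-2-1).
    var srgbToLinear: Float {
        self <= 0.04045 ? self / 12.92 : powf((self + 0.055) / 1.055, 2.4)
    }
    var linearToSRGB: Float {
        self <= 0.0031308 ? self * 12.92 : 1.055 * powf(self, 1.0 / 2.4) - 0.055
    }
}

extension UIColor {
    // Components converted to linear space, assuming getRed reports sRGB-encoded values.
    var linearFloat4: SIMD4<Float> {
        var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return SIMD4<Float>(Float(r).srgbToLinear, Float(g).srgbToLinear, Float(b).srgbToLinear, Float(a))
    }
}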
Volumetric Windows anchors
Hi, we would like to create an experience where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app, the windows are in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space. Is this possible with the current features and capabilities? If yes, do you have any advice on how we can achieve it? Alternatively, is it possible to open the virtual display in immersive spaces, or could we implement our own virtual display?
Replies: 1 · Boosts: 0 · Views: 89 · Activity: 14h
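On the first part of the question above (opening several volumes at once): a single WindowGroup(id:for:) can back multiple volumetric windows, one per value passed to openWindow. A minimal sketch; the "volume" id and VolumeContent view are hypothetical names. As far as I know, where the system places (and whether it restores) each volume across launches is not directly controllable from the app, so the persistence part remains an open question:

import SwiftUI

@main
struct VolumesApp: App {
    var body: some Scene {
        // One volumetric WindowGroup; each distinct value opens a separate volume.
        WindowGroup(id: "volume", for: String.self) { $modelID in
            VolumeContent(modelID: modelID ?? "default")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.4, height: 0.4, depth: 0.4, in: .meters)
    }
}

struct VolumeContent: View {
    let modelID: String
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        VStack {
            Text(modelID)
            Button("Open another volume") {
                openWindow(id: "volume", value: UUID().uuidString)
            }
        }
    }
}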
Urgent: App Stuck "In Review" for 10 Days – Repeated Delays Impacting Releases
Dear Apple App Review Team, I’m reaching out here as a last resort, as my previous attempts to get a response through the developer support portal have been met with only automated replies. My app has been stuck in the "In Review" state for 10 days, and unfortunately, this is not the first time. Every time a new feature is added, the review process takes weeks, causing major disruptions to my development and release schedule. Based on backend logs, it appears that while the app is technically "In Review," no one has actually opened it, meaning it's sitting in a queue without active review. I have submitted daily support tickets and expedited review requests, but I have yet to receive a response from a real person—only the standard copy-paste replies. This lack of transparency and responsiveness makes it extremely difficult to plan and operate efficiently. Given that this is one of the most profitable indie apps on the visionOS App Store, I believe it deserves a more predictable and efficient review process. These extended delays are not just frustrating but actively harming the app’s growth and user experience. I urgently request immediate action on this matter. Please do not reply with a generic link directing me to support—I’ve already exhausted that route without success. I need a real response and a clear resolution. App ID: 6737148404
Replies: 2 · Boosts: 1 · Views: 139 · Activity: 21h
How to show only Spatial video using UIDocumentPickerViewController
Is there a suitable UTType that lets UIDocumentPickerViewController pick only spatial videos? I already know that PHPickerFilter can do this in PHPickerViewController, but I haven't found an equivalent for UIDocumentPickerViewController. Our app needs to support both of these ways of picking spatial videos, so is there anything I can try with UIDocumentPickerViewController to achieve this?
Replies: 1 · Boosts: 0 · Views: 127 · Activity: 1d
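On the picker question above: I'm not aware of a UTType dedicated to spatial video; spatial videos are regular QuickTime/MPEG-4 movies carrying MV-HEVC, so one workaround is to accept movies in the picker and filter the picked files afterwards. A sketch under that assumption, relying on the containsStereoMultiviewVideo media characteristic to spot MV-HEVC tracks:

import UIKit
import UniformTypeIdentifiers
import AVFoundation

// No spatial-video-specific UTType that I know of, so accept movies in the picker.
func makeMoviePicker() -> UIDocumentPickerViewController {
    UIDocumentPickerViewController(forOpeningContentTypes: [.movie])
}

// Filter picked URLs afterwards by checking for stereo multiview (MV-HEVC) video tracks.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let stereoTracks = try await asset.loadTracks(withMediaCharacteristic: .containsStereoMultiviewVideo)
    return !stereoTracks.isEmpty
}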
Why can't I load a ModelEntity from RealityKitContentBundle?
If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error resourceNotFound("heart"). Could someone help me with this? Thank you so much.

Code:

//
//  TestttttttApp.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI

@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

//
//  ContentView.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Works
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Replies: 1 · Boosts: 0 · Views: 143 · Activity: 1d
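For the post above, for what it's worth: assets inside RealityKitContent.rkassets are usually loaded with Entity(named:in:), where the name matches the scene or entity as it appears in Reality Composer Pro rather than a file sitting next to the Swift sources. A minimal sketch, assuming the RCP scene containing the model really is called "heart":

import RealityKit
import RealityKitContent

// Loads from the Reality Composer Pro package bundle rather than the main app bundle.
// "heart" is assumed to be the scene/entity name shown in Reality Composer Pro.
func loadHeart() async throws -> Entity {
    try await Entity(named: "heart", in: realityKitContentBundle)
}

Inside the RealityView above, this could be used as content.add(try await loadHeart()).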
How to programmatically open a new document in a DocumentGroup-based visionOS app?
I'm trying to implement an Examples view in my DocumentGroup app on visionOS, and I'm stuck on something that seems very basic on the surface: programmatically opening a document. Is there an analog to NSDocumentController.shared.openUntitledDocumentAndDisplay on visionOS?

Here's what I've tried so far:

Ideally, this would be a collection of document templates in a DocumentGroupLaunchScene. However, I've been unable to get DocumentGroupLaunchScene to work on visionOS.

I've tried UIApplication.shared.open(url) with a URL to a document in my app bundle. UIApplication.shared.canOpenURL(url) returns true, but open(url) has no effect.

In the macOS build, I use NSDocumentController.shared.openUntitledDocumentAndDisplay, but I don't see any iOS or visionOS analog.

@Environment(\.newDocument) private var newDocument would be ideal, but it is not available on visionOS.

UIApplication.shared.activateSceneSession(for: .init()) brings up the document browser in a new window, at which point clicking the "+" button does what I want. Can I invoke that directly somehow?

It would be sufficient if I could programmatically open a new untitled document. On macOS, I do that and sneak the template contents to the Document constructor in a global variable. I presume I'm overlooking something simple, but I've come up blank so far.
Replies: 1 · Boosts: 0 · Views: 124 · Activity: 14h
Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
Replies: 1 · Boosts: 0 · Views: 104 · Activity: 1d
Issue with Behaviors + Timelines in visionOS
Hello, I'm unable to activate a timeline in my application through an OnTap, OnAddedToScene, or OnNotification trigger. In Reality Composer Pro I can test and play the timelines easily, but when running in the simulator or on device the timelines simply do not run, regardless of the method through which I try to call the API.

I have two questions:

How can I check that my timelines are in the RCP project that's loaded into the scene? I don't see timelines in the entity hierarchy when I debug in the RealityKit Debugger.

Is Behaviors a component I can manually set at runtime? I can very clearly see the Behaviors component attached to my entity in RCP, but when running this code:

.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            if value.entity.applyTapForBehaviors() {
                print("Success!")
            } else {
                print("Failure.")
            }
        }
)

it prints "Failure." every time, indicating to me that the entity does not have a Behavior attached to it (whether that's a component or however else the Behavior is associated with the entity).

I also have not had success using the notification system, or even the OnAddedToScene behavior trigger, which should theoretically work if a behavior is attached to the entity (which the tap experiment indicates it's not). For context, this is my notification trigger code:

private let notificationTrigger = NotificationCenter.default
    .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

@Environment(\.realityKitScene) var scene

Attachment(id: "home") {
    Button {
        NotificationCenter.default.post(
            name: NSNotification.Name("RealityKit.NotificationTrigger"),
            object: nil,
            userInfo: [
                "RealityKit.NotificationTrigger.Scene": scene,
                "RealityKit.NotificationTrigger.Identifier": "test"
            ]
        )
    } label: {
        Text("Test")
    }
    .padding(20)
    .glassBackgroundEffect()
}
.onReceive(notificationTrigger) { _ in
    print("test notification received")
}

I am receiving the "test notification received" print statements as well. I'm using Xcode 16.0 with visionOS 2.0 on macOS 15.3.1.
Replies: 1 · Boosts: 0 · Views: 181 · Activity: 5d
Hover effect is shown on a disabled button
Hello. I have a scenario where a hover effect is shown for a button that is disabled. Usually this doesn't happen, but when you wrap the button in a Menu it doesn't work properly. Here is some example code:

struct ContentView: View {
    var body: some View {
        NavigationStack {
            Color.green
                .toolbar {
                    ToolbarItem(placement: .topBarTrailing) {
                        Menu("Menu") {
                            Button("Disabled Button") {}
                                .disabled(true)
                                .hoverEffectDisabled() // This doesn't work.

                            Button("Enabled Button") {}
                        }
                    }
                }
        }
    }
}

This looks like a SwiftUI bug. Any help is appreciated, thank you!
Replies: 1 · Boosts: 0 · Views: 128 · Activity: 5d
Reading scenePhase from custom Scene
Hi, I've encountered a thread where an Apple engineer points out that there are two possible ways to anchor scenePhase, either an App or a View implementation: https://developer.apple.com/forums/thread/757429

That thread also links to documentation which states: "If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene."

This doesn't seem to be the case on visionOS 2. I tried the following code, starting from the empty app template:

import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}

The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
Replies: 3 · Boosts: 0 · Views: 150 · Activity: 1d
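For the scenePhase question above, one way to narrow this down is to also read the phase from a view inside each window; that value reflects only the scene containing the view, which can be compared against what the custom Scene reports. A small sketch of the per-view variant:

import SwiftUI

struct PhaseLoggingView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { _, newValue in
                // Reflects only the scene that contains this view.
                print("View-level scenePhase:", newValue)
            }
    }
}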
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. This happens only on device, not in the simulator. I get these errors when it happens:

The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation

@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return .init()
        }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}

which I call from my RealityView:

.task {
    await visionPro.runArkitSession()
}
Replies: 3 · Boosts: 0 · Views: 137 · Activity: 5d
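A hedged debugging suggestion for the post above: ARKitSession publishes state changes for its data providers, so logging those events may show whether (and why) the WorldTrackingProvider is being stopped after repeated immersive-space launches. A sketch that monitors the events on the VisionPro class above:

import ARKit

extension VisionPro {
    func monitorSessionEvents() async {
        for await event in session.events {
            switch event {
            case .dataProviderStateChanged(dataProviders: let providers, newState: let state, error: let error):
                print("Providers \(providers) changed to \(state), error: \(String(describing: error))")
            case .authorizationChanged(type: let type, status: let status):
                print("Authorization for \(type) is now \(status)")
            default:
                break
            }
        }
    }
}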
Issue in TabletopKit Sample scene + shared immersive space
Hello Community, I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage with their persona avatars.

I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated on the default side of the table.

I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but at a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well. When I tried the FaceTime feature in the simulator on the official sample scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."

So my questions are:

What needs to be changed so TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with TabletopKit?

I tried to keep my implementation as close as possible to the TabletopKit example, so I think it will be enough to look into this codebase for now. I debugged the seat positions and they are placed correctly in front of their equipment; the personas are just not placed on them.
Replies: 3 · Boosts: 4 · Views: 189 · Activity: 5d