Discuss Spatial Computing on Apple Platforms.

Spatial Video Capturing on iPhone 15 Pro
Hi all, I tried isSpatialVideoCaptureEnabled with AVCaptureMovieFileOutput, as mentioned in the WWDC24 session "Build compelling spatial photo and video experiences", and it works. But I have some issues and questions.

In the code below, change.newValue is always nil, so the observation doesn't seem to work:

let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons) { device, change in
    guard let newValue = change.newValue else { return }
    if newValue.contains(.subjectTooClose) {
        // Guide user to move back
    }
    if newValue.contains(.notEnoughLight) {
        // Guide user to find a brighter environment
    }
}

AVCaptureMovieFileOutput supports spatial video capture. May I ask whether AVCaptureVideoDataOutput will also support spatial video capture?
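
A minimal sketch of one thing worth double-checking (an assumption, not a confirmed fix): block-based KVO passes an empty option set by default, so change.newValue stays nil unless .new is requested explicitly.

import AVFoundation

// Sketch only: request .initial and .new so the change actually carries values.
let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons,
                                      options: [.initial, .new]) { device, change in
    guard let reasons = change.newValue else { return }
    if reasons.contains(.subjectTooClose) {
        // Guide the user to move back
    }
    if reasons.contains(.notEnoughLight) {
        // Guide the user to find a brighter environment
    }
}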
Replies: 0 · Boosts: 0 · Views: 88 · Activity: 1d
3DoF Tracking on Vision Pro
Hello everyone, it seems that Vision Pro supports 6DoF tracking, but is it possible to switch to 3DoF tracking? The reason for my question is that I would like to use the device while riding in a car, and 6DoF tracking does not work well in that situation. I was wondering whether switching to 3DoF tracking might solve the issue. Note: I am already using Travel Mode.
Replies: 0 · Boosts: 0 · Views: 87 · Activity: 2d
Disable reverb effect in immersive spaces
I'm developing an app where a user can bring a video or content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView. This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space.

In windowed mode, content sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio.

When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb has been applied. The spatial audio effect is also diminished; it's still there, but nowhere near as immersive.

I've tried the following:

Setting all entities in my space to use channel audio, including the web view attachment:

for entity in content.entities {
    entity.channelAudio = ChannelAudioComponent()
    entity.ambientAudio = nil
    entity.spatialAudio = nil
}

Changing the AVAudioSessionSpatialExperience. I've also tried every sound stage size and anchoring strategy; .large works the best, but doesn't remove that reverb:

let experience = AVAudioSessionSpatialExperience.headTracked(
    soundStageSize: .large,
    anchoringStrategy: .automatic
)
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)

I'm also aware of ReverbComponent in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too.

Am I missing something? Surely there's a way for developers to stop the system from messing with the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space than in a window.
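
A small diagnostic sketch (an assumption on my part, not a documented fix): AVAudioSessionSpatialExperience also has a .bypassed case that asks the system not to spatialize the app's audio at all. It drops head tracking too, so it isn't a solution by itself, but it can help show whether the reverb is coming from the system spatializer.

import AVFoundation

// Sketch only: opt out of system spatialization to test where the reverb is introduced.
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(.bypassed)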
Replies: 2 · Boosts: 0 · Views: 163 · Activity: 2d
GroupSessionJournal attachment loading error on Vision Pro
Hi all, I'm currently working on a SharePlay feature where users pull data from a remote source and can share it in a volumetric window with others in the FaceTime call. However, I am running into an issue where the group activity/session seems to throw an error on the recipient of the journal's attachment, with the description notSupported.

As I understand it, GroupSessionJournal is for larger pieces of data like images (as in the Drawing Together example) and, in my case, 3D models. The current flow goes as follows:

The user launches the app and fetches a model from remote.
The user starts a SharePlay instance, in which the system captures the volumetric window for users to join and see. At this point, only the original user can see the model.
The user presses a button to share this model with the other participants using:

// modelData is serialized `Data`
try await journal.add(modelData)

In the group session configuration, I already have a task listening for attachments:

for await attachments in journal.attachments {
    for attachment in attachments { ... }
}

This task attempts to load the data via the following code:

let modelData = try await attachment.load(Data.self) // this is where the error is thrown: notSupported

I expect the attachment.load(Data.self) call to properly deliver the model data, but instead I am receiving this error. I have also attempted to wrap the model data in an enclosing struct with a name and a data property and conform that struct to Transferable, but that continued to throw the notSupported error.

Is there something I'm doing wrong, or is this simply a bug in GroupSessionJournal? Please let me know if more information is required for debugging and resolution. Thanks!
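
For comparison, a rough sketch of a Transferable wrapper (the custom UTType, its identifier, and the need to declare it as an exported type in Info.plist are assumptions here, not a confirmed cause of the notSupported error):

import CoreTransferable
import UniformTypeIdentifiers

extension UTType {
    // Hypothetical identifier; it would also need an exported type declaration in Info.plist.
    static let sharedModel = UTType(exportedAs: "com.example.shared-model")
}

struct ModelAttachment: Codable, Transferable {
    var name: String
    var data: Data

    static var transferRepresentation: some TransferRepresentation {
        CodableRepresentation(contentType: .sharedModel)
    }
}

// Sending:   try await journal.add(ModelAttachment(name: "model", data: modelData))
// Receiving: let model = try await attachment.load(ModelAttachment.self)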
Replies: 0 · Boosts: 0 · Views: 151 · Activity: 5d
Transparency dial in Immersive mode
In Progressive mode, you can turn the Digital Crown to reveal your surroundings by limiting or expanding the field of view of your immersive scene. I'm trying to create a different sort of behavior where your immersive scene remains in 360° mode, but adjusting a dial (it doesn't have to be the Crown; it could be an in-app dial or slider) adjusts the transparency of the scene. My users aren't quite satisfied with the native features that help ensure you aren't about to run into a wall or furniture, and they want a way of quickly adjusting the transparency on the fly. Is that possible?
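
A minimal sketch of one possible direction (assumptions: the scene is RealityKit content, an OpacityComponent set on the root entity cascades to the whole scene, and the space runs in a style where passthrough can show through at all, e.g. mixed or progressive):

import RealityKit
import SwiftUI

// Hypothetical dimmer control; the slider could live in any window or attachment you already show.
struct SceneDimmer: View {
    let sceneRoot: Entity              // root entity of the 360 content
    @State private var opacity = 1.0

    var body: some View {
        VStack {
            Text("Scene opacity")
            Slider(value: $opacity, in: 0...1)
        }
        .onChange(of: opacity) { _, newValue in
            // Fading the scene lets passthrough show behind it.
            sceneRoot.components.set(OpacityComponent(opacity: Float(newValue)))
        }
    }
}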
Replies: 0 · Boosts: 0 · Views: 91 · Activity: 6d
Apple Vision Pro app stops working after a while
Hello! I have developed an application using Unity that runs on the Vision Pro. I have built it correctly, installed it on the device, and it works. The spatial computing application continues to work for several days (pretty normal: I launch the app and it works; it doesn't use any external services). After several weeks, a month or so, I launch the same app again and it no longer works. The only way I can make it work again is to rebuild and reinstall it. What am I missing here? Why does an application built and installed a few weeks ago suddenly stop working on the Vision Pro?
Replies: 2 · Boosts: 0 · Views: 107 · Activity: 6d
visionOS: how to detect if the user closes a window vs a window going into the background
For context, we have a fully immersive application running on visionOS. The application starts with a standard view/menu, and when the user selects an option, it takes you to fully immersive mode with a small floating toolbar window that you can move and interact with as you move around the virtual space. When the user clicks the small X button below the window, we intercept the scenePhase .background event and handle exiting immersive mode and displaying the main menu. This all works fine.

The problem happens if the user turns around and doesn't look at the floating window for a couple of minutes. The system decides that the window should go into the background, and the same scenePhase .background event is fired, causing us to exit immersive mode without warning. There seems to be no way of preventing this, and no distinction between this and the user clicking the close button.

Is there a reliable way to detect whether the user intentionally clicked the close button versus the window going into the background through lack of use? onDisappear doesn't trigger. Thanks in advance.
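
Not a way to distinguish the two events, but a workaround sketch (the button label, the "MainMenu" window id, and the exact teardown are assumptions here): stop treating scenePhase == .background as an exit signal and only dismiss the immersive space on an explicit user action in the toolbar.

import SwiftUI

// Hypothetical toolbar content: immersion ends only when the user asks for it,
// so the system backgrounding the window no longer tears down the experience.
struct FloatingToolbar: View {
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Exit immersive mode") {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "MainMenu") // assumed window identifier
            }
        }
    }
}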
Replies: 1 · Boosts: 0 · Views: 177 · Activity: 1w
Loading USDZ file take almost 30s
I'm having an issue loading a 1.2 GB USDZ file on visionOS. Here are the details:

- The file is downloaded via a backend API.
- The file is saved to the documents directory: FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
- Loading the asset takes almost 30 seconds: Loaded usd((extension in RealityFoundation):RealityKit.Entity.LoadStatistics.USDLoader.rio) in 29.24642503261566 seconds
- Asset-loading code: let model = try await Entity(contentsOf: assetUrl)
- The USDZ file is exported from Reality Composer Pro.

Did I make a mistake in this flow, or is there another approach to decrease the loading time?
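
Not a fix for the first load, but a small caching sketch (the actor, its name, and the assumption that clones are acceptable for the use case are mine): pay the ~30 s cost once, keep the loaded entity, and hand out clones afterwards.

import Foundation
import RealityKit

// Hypothetical cache: load the large asset once, then clone it for reuse.
actor ModelCache {
    private var cached: Entity?

    func entity(for url: URL) async throws -> Entity {
        if let cached {
            return cached.clone(recursive: true)
        }
        let loaded = try await Entity(contentsOf: url) // still slow the first time
        cached = loaded
        return loaded.clone(recursive: true)
    }
}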
Replies: 1 · Boosts: 0 · Views: 165 · Activity: 1w
visionOS 2 dev beta Control Center gesture
Hello, I've just updated my Vision Pro to the newest and greatest 2.0, and I see that the way to call up Control Center has changed to a hand gesture, which I'm assuming is powered by computer vision using the cameras. My use case is watching Apple TV shows at night, when my girlfriend would like the lights turned off, and that is when it fails. The new method is great and the responsiveness is crazily good, but I would like this to be a toggle so that we can select our own method. For example, during the day when there is enough light we could use the new gesture recognition, and in low light we could switch back to looking up. Or, as a fellow programmer who is just learning, I think it would be possible to make the toggle automatic whenever the lighting conditions keep the hand gesture from working. Hope to see that fixed :) Cheers
Replies: 0 · Boosts: 0 · Views: 148 · Activity: 1w
Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session "Explore game input in visionOS". The game controller notification picks up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}

On the RealityView, I have the modifier:

RealityView { content in
    // stuff
}
.handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the A button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
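
For comparison, a minimal wiring sketch (assumptions: the handler is attached after the controller connects and the extendedGamepad profile is available; this is not necessarily where the bug is):

import GameController
import os

// Sketch only: attach the handler to the connected controller's extendedGamepad profile.
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect,
    object: nil,
    queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let gamepad = controller.extendedGamepad else { return }

    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        os_log("Button A pressed: %d", pressed ? 1 : 0)
    }
}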
Replies: 1 · Boosts: 0 · Views: 158 · Activity: 1w
Vision Pro capability
Hello everyone! I'm planning to buy an Apple Vision Pro (to replace a Varjo XR-3). I want to use it for a professional project, and I want to know if it can fit our needs. I want to develop a program on the Vision Pro that plays live streaming video from our local network cameras (using RTSP). Is it possible to get and play more than one live video stream? One of those videos comes from a stereo camera streaming a side-by-side 3D stereo video. Is it possible to have a classic 2D video on one ultra-wide virtual screen and, on another virtual screen, a 3D video with depth, simultaneously? Thank you all in advance. Regards.
Replies: 0 · Boosts: 0 · Views: 111 · Activity: 1w
Attachment-like functionality in metal immersive space
I have an immersive space that is rendered using Metal. Is there a way I can position SwiftUI views at coordinates relative to positions in my immersive space?

I know that I can display a volume with RealityKit content alongside my Metal content, but the volume's coordinate system (specifically, its bounds) does not coincide with my entire Metal scene.

One approach I thought of would be to open two views in my immersive space. That way, I could simply add Attachments to invisible RealityKit entities in one view at positions where I have content in my Metal scene. Unfortunately, while I can declare an ImmersiveSpace composed of multiple RealityViews:

ImmersiveSpace {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }
    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}

which results in two coinciding RealityKit views in the immersive space, I cannot do something like this:

ImmersiveSpace {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my Metal renderer and stuff
    }
    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
Replies: 2 · Boosts: 0 · Views: 155 · Activity: 1w
SharePlay Button
I learned about SharePlay from the WWDC video. I understand the creation of seats, but I can't work out some of the following, so I hope you can help me. The content is as follows: I have set up the seats.

struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}

In one of my previous posts I mentioned: "I hope you can give me a SharePlay button. After pressing it, it will assign all users in the FaceTime call to a seat with the elements defined in TeamSelectionTemplate." Someone replied and asked me to try systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate()); however, Xcode reports the error: Cannot find 'systemCoordinator' in scope. How do I solve this? Thank you!
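
A sketch of the shape this usually takes (assumption: there is already a joined GroupSession, called session here; the coordinator is obtained from it rather than existing as a global):

// Inside the group session handling code, once `session` is available:
Task {
    if let systemCoordinator = await session.systemCoordinator {
        var configuration = SystemCoordinator.Configuration()
        configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
        systemCoordinator.configuration = configuration
    }
}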
Replies: 1 · Boosts: 0 · Views: 197 · Activity: 1w