Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Control of LongPressGesture-created element
I have been implementing the LongPressGesture to have a menu come up upon a long press. I love the functionality and it is very close to being where I want it to be. I don't know if this is a visionOS-specific thing, but I am hoping to control the corner radius of the pulled-out element behind my "button." I've wrangled hover effects in the past with overlays, but I'm not sure what to target in this case. Worst case, I'll have to change the border radius on all of my tiles to match this LongPressGesture-controlled behavior, or I could possibly change the radius onLongPressGesture to match. Is there a simpler solution? Thanks!
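If the goal is just to make the system-drawn highlight match the tiles' rounding, a shape-scoped content shape may be enough. This is a sketch under the assumption that the "pulled-out element" is the standard hover/press highlight; the view name and corner radius are illustrative, not taken from the post:

import SwiftUI

// Hypothetical tile view; name and radius are placeholders.
struct TileView: View {
    var body: some View {
        Text("Tile")
            .padding(40)
            .background(.regularMaterial, in: RoundedRectangle(cornerRadius: 24))
            // Constrain the hover/press highlight to the same rounded shape,
            // so the lifted element matches the tile's corner radius.
            .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 24))
            .hoverEffect(.lift)
            // If the menu comes from .contextMenu instead, a
            // .contentShape(.contextMenuPreview, RoundedRectangle(cornerRadius: 24))
            // modifier shapes that preview.
            .onLongPressGesture(minimumDuration: 0.5) {
                // present the menu here
            }
    }
}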
2 replies · 0 boosts · 462 views · Oct ’24
How to extract a stereo image pair from spatial photos generated by visionOS 2.0
Hi, My app allows users to share and view spatial photos. For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs. For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend. However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, while the same photos can be viewed correctly in the system Photos app:

- Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
- Even when the image pair have the same size, viewing them in my app shows artefacts, especially around the edges of objects that are close to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.

Google Drive link: https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa

I know that the Quick Look preview application can now display spatial photos, but I would like to keep my in-app implementation for compatibility reasons. Below is a code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo. Happy to submit a code-level support request if more information is needed.

// the data is from a Photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1) else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)
        guard leftImage.size == rightImage.size else { return nil }

        // merge left + right into a single side-by-side image
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to the standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
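For photos spatialized by visionOS 2.0, the left and right images are not guaranteed to sit at fixed indices or to share dimensions, so a more robust starting point may be to ask ImageIO which indices belong to the stereo pair instead of assuming 0 and 1. This is a sketch under the assumption that the generated HEIC exposes a stereo-pair group in its container-level properties (the kCGImagePropertyGroups keys); it is not confirmed by the original post:

import ImageIO

// Returns the (left, right) image indices of the first stereo-pair group
// found in the source, if any.
func stereoPairIndices(in source: CGImageSource) -> (left: Int, right: Int)? {
    guard let props = CGImageSourceCopyProperties(source, nil) as? [String: Any],
          let groups = props[kCGImagePropertyGroups as String] as? [[String: Any]] else {
        return nil
    }
    for group in groups {
        // Look for a group describing a stereo pair and read its member indices.
        if let type = group[kCGImagePropertyGroupType as String] as? String,
           type == (kCGImagePropertyGroupTypeStereoPair as String),
           let left = group[kCGImagePropertyGroupImageIndexLeft as String] as? Int,
           let right = group[kCGImagePropertyGroupImageIndexRight as String] as? Int {
            return (left, right)
        }
    }
    return nil
}

If such a group is present, its indices could replace the hard-coded 0 and 1 in extractSpatialImage(at:); if it is absent, falling back to the existing behaviour keeps pre-visionOS 2.0 photos working.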
1 reply · 0 boosts · 1.2k views · Oct ’24
USDZ file not loading on Apple Vision Pro
I'm working on a school project that lets users open a .USDZ file (using Quick Look) from a webpage while using Apple Vision Pro, so they can place the object in their physical environment. The project is deployed on Vercel. When I test the page on my Apple Vision Pro and tap to open the .USDZ file, I see a triangle with an exclamation mark while it tries to load, but it never loads. Does anybody know how to troubleshoot this issue?
4 replies · 0 boosts · 927 views · Oct ’24
Unity/PolySpatial GameController framework failing to load
I have a simple example of a motion matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input. I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?

error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed

I'm using Xcode 16 beta 6, Unity 6000.0.17f1, and visionOS 2.0 beta 9.
2 replies · 0 boosts · 948 views · Oct ’24
Multilayer VisionOS App icon not working
I tried to use the application icon from sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover on the icon in the Vision Pro simulator. Could you please advise how to fix the problem? I am using the latest Xcode Version 15.4 (15F31d). Thank you.
2 replies · 0 boosts · 496 views · Oct ’24
Translation API availability on visionOS
Hi, One of the great features introduced in WWDC24 is the Translation API. But unfortunately it's currently unavailable on visionOS. My question is, does Apple have any plan to support it on visionOS as well? If so, what's the ETA for this feature? I would really like to see it on visionOS, otherwise I'll have to pay Google to use their translation API.
1 reply · 0 boosts · 596 views · Oct ’24
Keyboard awareness in a custom immersive environment of an app
Hello, I was wondering whether the keyboard awareness feature introduced with visionOS 2 also works for a MacBook keyboard when the user is in an immersive .progressive custom environment, such as the "Garden" environment from "Construct an immersive environment for visionOS", in an app I'm currently developing, so that they can still see their keyboard. I haven't managed to achieve it so far. Thank you very much in advance!
1 reply · 0 boosts · 303 views · Oct ’24
Apple Vision Pro Enterprise API configuration issue
My name is Tom Shannon, a developer with Omnia (d.b.a. Aequilibrium Inc.). We were recently approved for some of the Enterprise APIs for the Vision Pro; you can reference the history through our Case-ID: 9237594. We are contacting you for assistance: we have downloaded the entitlement license provided and added it to our target for an application under the bundle ID com.omnia.spatialbrowser. Then, in my project and with my developer account, which is under the Aequilibrium Inc. account (279PV9XKZ2), we tried to add the Barcode Scanner Enterprise API entitlement, but it does not show up as an option for us. I am on Xcode 16.1 beta (16B5001e) for reference. Any help would be greatly appreciated. Best,
0 replies · 0 boosts · 608 views · Oct ’24
ImmersiveSpace with system environments
Hi, When opening an ImmersiveSpace with the .mixed style, is it possible to keep the user's currently selected system immersive environment? Currently, the system immersive environment is dismissed.

ImmersiveSpace(id: "some id") {
    SomeRealityView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
1 reply · 0 boosts · 445 views · Oct ’24
Raw point cloud access
Hi, I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from the LiDAR, but it seems ARKit on visionOS doesn't offer this directly via the standard APIs that iPad uses. I wanted to ask if there are any possible options to access depth data or enhanced camera capabilities using the Enterprise APIs. Specifically:

- Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
- Are there any workarounds or alternative methods to obtain depth data from the camera?
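There is no public raw point-cloud or depth stream on visionOS that I'm aware of, Enterprise entitlements included, but scene reconstruction meshes are the closest generally available substitute, and their vertices can serve as a coarse point cloud. A minimal sketch using ARKit's SceneReconstructionProvider (the function name and surrounding context are illustrative; whether the Enterprise APIs add anything beyond this is exactly the open question in the post):

import ARKit

// Runs scene reconstruction and logs mesh-anchor vertex counts.
// Requires an immersive space and the scene-reconstruction capability.
func collectSceneMeshes() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let geometry = update.anchor.geometry
        // Each MeshAnchor carries triangle geometry in anchor-local space;
        // its vertex buffer can be treated as a sparse "point cloud".
        print("Mesh anchor \(update.anchor.id): \(geometry.vertices.count) vertices")
    }
}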
1 reply · 0 boosts · 384 views · Oct ’24
EntityAction implementation example
Does anyone have experience of creating their own EntityActions? Say, for example, I wanted one that faded up the opacity of an entity, then once it had completed set another property on one of the entity's components. I understand that I could use FromToByAction to control the opacity (and have this working), but I am interested to learn how to create my own dedicated EntityAction, and I'm finding the documentation hard to fathom. I got as far as creating a struct conforming to the EntityAction protocol:

struct FadeUpAction: EntityAction {
    var animatedValueType: (any AnimatableData.Type)?
}

Subscribing to update events on this:

FadeUpAction.subscribe(to: .updated) { event in
    guard let animationState = event.animationState else {
        return // My animation state is always nil, so I never get here!
    }
    let newValue = ... // some calculation
    animationState.storeAnimatedValue(newValue)
}

And setting it up as an animation on an entity:

let action = FadeUpAction()
if let animation = try? AnimationResource.makeActionAnimation(
    for: action,
    duration: 2.0,
    bindTarget: .opacity
) {
    entity.playAnimation(animation)
}

...but I haven't been able to understand how to extract the current time delta or set the value in the event handler. Any pointers?
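A guess, based on the symptom described (animationState always nil): the system may only allocate an animation state when animatedValueType names a concrete type, and custom actions generally need to be registered before an AnimationResource is built from them. A sketch along those lines, with the Float.self choice, the registerAction() call, and the normalizedTime property all flagged as assumptions rather than confirmed API behaviour:

import RealityKit

struct FadeUpAction: EntityAction {
    // Assumption: declaring a concrete animated value type (Float, to match
    // the .opacity bind target) is what lets the system create an animation state.
    var animatedValueType: (any AnimatableData.Type)? { Float.self }
}

// Assumption: custom actions are registered once (e.g. at app start-up)
// before any AnimationResource.makeActionAnimation(for:) call uses them.
FadeUpAction.registerAction()

FadeUpAction.subscribe(to: .updated) { event in
    guard let animationState = event.animationState else { return }
    // Assumption: normalizedTime reports the action's progress in 0...1.
    animationState.storeAnimatedValue(Float(animationState.normalizedTime))
}

If that isn't the missing piece, a code-level support request is probably the quickest way to confirm the intended pattern.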
2 replies · 0 boosts · 551 views · Oct ’24
Spatial video capture quality using the API differs from that of the iPhone's built-in camera app
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video. But I found the video by my app differs from the iPhone build-in camera app in: Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature. In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image. I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app. Is there any api or parameters to make my app more close to the iPhone build-in app? I have tried "whiteBalanceMode" and "exposureMode" but no luck.
3 replies · 0 boosts · 837 views · Oct ’24
Keep Apple Vision Pro Awake
I have two Apple Vision Pros so that I can make and test a multiplayer immersive reality game. But I am one developer, so I need to be able to take one Apple Vision Pro off and put the other one on to see what the other device is seeing, and to ensure my game information is correctly being sent over the network with Multipeer Connectivity. But when I take one off, that Apple Vision Pro immediately goes to sleep. With visionOS 1, I could put a piece of paper into the Pro and it would stay on for hours, and I could take the headsets on and off and debug my game. But now with visionOS 2, even with the paper they soon go to sleep. Is there a setting I can change or override as a developer to stop this auto sleep or auto lock? I need to check things like:

- When two devices are on the network, can I see them both so that players can select each other from a menu?
- Can I send object positions back and forth?

Thank you.
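For what it's worth, UIKit's idle-timer flag is available on visionOS and is worth ruling out, though it governs the inactivity timeout rather than on-head detection, so it may well not prevent sleep when the headset is taken off; that limitation is an assumption, not something the post confirms. A minimal sketch:

import SwiftUI
import UIKit

struct DebugSessionView: View {
    var body: some View {
        Text("Multipeer debug session")
            .onAppear {
                // Ask the system not to sleep the display due to inactivity
                // while this debug view is frontmost.
                UIApplication.shared.isIdleTimerDisabled = true
            }
            .onDisappear {
                UIApplication.shared.isIdleTimerDisabled = false
            }
    }
}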
0 replies · 2 boosts · 571 views · Oct ’24
App lost audio spatialization since the visionOS 2 update
Hi, I have a video player app that lost its audio spatialization since the visionOS 2 update. I am using VideoPlayerComponent (https://developer.apple.com/documentation/realitykit/videoplayercomponent) to implement my videos as entities, as I want a custom look and custom controls for my player. In visionOS 1, there was automatic audio spatialization: depending on where my video entity was, the app automatically enabled head-tracked audio spatialization. Since visionOS 2, however, I cannot get my video entities to play spatial audio. I've looked into DestinationVideo and even set up AVAudioSessionSpatialExperience, but spatial audio is still not working. I'd appreciate any help. Thanks.
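One thing that may be worth trying, offered only as a guess since the post has already configured AVAudioSessionSpatialExperience: explicitly attaching a SpatialAudioComponent to the entity that carries the VideoPlayerComponent, so the audio is rendered as a spatialized source at the entity's position rather than relying on any implicit default. A sketch; the function name and gain value are illustrative:

import RealityKit
import AVFoundation

// Builds a video entity and explicitly opts its audio into spatial rendering.
// Whether this restores the visionOS 1 behaviour is not confirmed.
func makeSpatialVideoEntity(player: AVPlayer) -> Entity {
    let entity = Entity()

    let videoComponent = VideoPlayerComponent(avPlayer: player)
    entity.components.set(videoComponent)

    // Explicit spatial audio source at the entity's position,
    // with a slight gain reduction as an illustrative value.
    entity.components.set(SpatialAudioComponent(gain: -3))

    return entity
}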
1 reply · 0 boosts · 379 views · Oct ’24
Asset flickering when switching ImmersiveSpaces
There is a flickering occurring on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs whether the scenes are loaded directly from the RealityKitContent package or from memory (pre-loaded assets). Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it. To replicate the issue, follow these steps:

1. Create a new visionOS app using the Xcode template, see image.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
3. Replace all Swift files with those you will find in the attached texts.
4. In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
5. In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
6. Launch the app in debug mode via Xcode onto the AVP device or simulator.
7. Continuously switch immersive spaces by pressing the Show RedSpheres and Show YellowSpheres buttons.
8. Observe the 3D assets flicker upon opening of the immersive spaces.

Attached files: AppModel, RedSpheresImmersiveView, YellowSpheresImmersiveView, TestFlickeringBetweenImmersiveSpacesApp
1 reply · 0 boosts · 443 views · Oct ’24
Custom Progressive mode --> Unity/Polyspatial not working
I'm setting:

.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

in UnityVisionOSSettings.swift before building in Xcode. I'm having an issue where this only works on occasion; it seems random. I'll either get no immersion level available (the crown dial is greyed out and no changes can be made), or it will only allow 0.5 - 1.0 immersion (the dial will go below 0.5 but springs back to 0.5 when released). With no changes to my setup or to how I'm setting immersionStyle, I've been able to get this to work as I would expect, so I'm wondering if there is some bug causing it to fail. I've tested a simple native-SDK progressive immersion style with the same code for the custom setting and it works every time, so it's something related to Unity. Here is the entire UnityVisionOSSettings file that, as far as I can tell, controls this:

// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters {
        return .init(
            framebufferWidth: 1830,
            framebufferHeight: 1600,
            leftEyePose: .init(position: .init(x: 0, y: 0, z: 0), rotation: .init(x: 0, y: 0, z: 0, w: 1)),
            rightEyePose: .init(position: .init(x: 0, y: 0, z: 0), rotation: .init(x: 0, y: 0, z: 0, w: 1)),
            leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
            rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() }
        .upperLimbVisibility(.automatic)
        .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() }
        .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize)
        .upperLimbVisibility(.automatic)
        .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder var mainScene: some Scene { mainScenePart0 }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}
3 replies · 0 boosts · 791 views · Oct ’24
Video Memory Leak when Backgrounding
While trying to control the following two scenes in one ImmersiveSpace, we found the following memory leak when we background the app while a stereoscopic video is playing.

ImmersiveView's two scenes:

- Scene 1 has one toggle button.
- Scene 2 has the same toggle button plus a 180-degree skysphere playing a stereoscopic video.

Attached are the files and images of the memory leak as captured in Xcode. To replicate this memory leak, follow these steps:

1. Create a new visionOS app using the Xcode template as illustrated below.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video to play with a large 3D 180-degree video of your own bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display the memory use by pressing command+7 and selecting Memory in order to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it. The first immersive space should appear.
11. Repeat steps 7, 8, 9, and 10 multiple times.
12. Observe the memory use going up; the graph should look similar to the illustration below.

In ImmersiveView, upon backgrounding the app, I do:

- a reset method to clear the video's memory
- a dismiss of the immersive space containing the video (even though upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)

Am I not releasing the memory correctly? Or is there really a memory leak in either SwiftUI's ImmersiveSpace or AVFoundation's AVPlayer upon backgrounding an app?

Attached files: app file TestVideoLeakOneImmersiveView, first ImmersiveSpace file InitialImmersiveView, second ImmersiveSpace file ImmersiveView, skysphere model file Immersive180VideoViewModel, and AppModel.
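For the reset step, one pattern that sometimes helps AVPlayer-backed content release its buffers is to pause the player, drop its current item, and remove the VideoPlayerComponent before the space is torn down. A sketch reacting to scenePhase; the view, entity, and player names are placeholders, not the attached project's actual types:

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveVideoContainer: View {
    @Environment(\.scenePhase) private var scenePhase
    let videoEntity: Entity      // entity carrying the VideoPlayerComponent (placeholder)
    let player: AVPlayer         // player driving that component (placeholder)

    var body: some View {
        RealityView { content in
            content.add(videoEntity)
        }
        .onChange(of: scenePhase) { _, newPhase in
            guard newPhase == .background else { return }
            // Stop playback and drop the item so its decoded buffers can be freed.
            player.pause()
            player.replaceCurrentItem(with: nil)
            // Removing the component releases RealityKit's reference to the video.
            videoEntity.components.remove(VideoPlayerComponent.self)
        }
    }
}

Whether this resolves the growth shown in the memory graph, or the leak sits inside ImmersiveSpace/AVPlayer as the post suspects, would still need to be verified in Instruments.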
3 replies · 0 boosts · 693 views · Oct ’24
QuickLook and .heic
I"m trying to create a simple app for my students that will display .heic images taken with a nikon and them converted to .heic in the photos app. My attempts only result in the QuickLook viewer showing the images in 2d. Any guidance? Here is my ContentView: import SwiftUI import QuickLook struct ContentView: View { @State private var showQuickLook = false @State private var previewURL: URL? = nil // State to store the URL for Quick Look var body: some View { VStack { Button("See it in 3D") { // Set the URL for the file from the bundle and toggle Quick Look presentation if let imageURL = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") { previewURL = imageURL // Set the preview URL if the image is found showQuickLook.toggle() // Toggle to trigger Quick Look presentation } else { print("File not found") // Print error if the file is missing } } .quickLookPreview($previewURL) // Binding to the URL } } } #Preview { ContentView() }
2 replies · 0 boosts · 711 views · Oct ’24