Discuss Spatial Computing on Apple Platforms.


SharePlay Button
I followed the WWDC video to learn SharePlay. I understood the initial creation of seats, but I couldn't follow some of the content after that very well, so I hope you can provide some sample code. Here is where I am: I have already defined the seats.

    struct TeamSelectionTemplate: SpatialTemplate {
        let elements: [any SpatialTemplateElement] = [
            .seat(position: .app.offsetBy(x: 0, z: 4)),
            .seat(position: .app.offsetBy(x: 1, z: 4)),
            .seat(position: .app.offsetBy(x: -1, z: 4)),
            .seat(position: .app.offsetBy(x: 2, z: 4)),
            .seat(position: .app.offsetBy(x: -2, z: 4)),
        ]
    }

I would like a SharePlay button that, after being pressed, assigns every user in the FaceTime call to one of the seats defined in TeamSelectionTemplate. Thank you very much.
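Not an official answer, but here is a minimal sketch of one way to wire this up, assuming the GroupActivities spatial-template API shown at WWDC24. TeamSelectionActivity is a hypothetical activity name; once the configuration is applied, the system should place each participant's spatial Persona into one of the template's seats automatically.

    import SwiftUI
    import GroupActivities

    // Hypothetical GroupActivity for this app.
    struct TeamSelectionActivity: GroupActivity {
        var metadata: GroupActivityMetadata {
            var meta = GroupActivityMetadata()
            meta.title = "Team Selection"
            meta.type = .generic
            return meta
        }
    }

    struct SharePlayButton: View {
        var body: some View {
            Button("SharePlay") {
                Task {
                    // Start the activity for everyone on the FaceTime call.
                    _ = try? await TeamSelectionActivity().activate()
                }
            }
            .task {
                // Join incoming sessions and apply the custom seat template.
                for await session in TeamSelectionActivity.sessions() {
                    if let coordinator = await session.systemCoordinator {
                        var config = SystemCoordinator.Configuration()
                        config.spatialTemplatePreference = .custom(TeamSelectionTemplate())
                        coordinator.configuration = config
                    }
                    session.join()
                }
            }
        }
    }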
0 replies · 0 boosts · 86 views · 1d
TabletopKit game between players at the same location
Does the current version of TabletopKit support having two or more players at the same physical location? In that case, the co-located players would not want to see a FaceTime Persona across the table; they should see the physical player instead. Any remote players would still see the Personas of the co-located players, since they are not at that location. There are a couple of issues in this scenario (the shared position of the board, the players' locations around the table, etc.), but they should be solvable. Thank you!
0 replies · 1 boost · 42 views · 1d
API for turning regular photos into spatial photos?
I was excited to read about visionOS 2's new feature that automatically turns regular 2D photos into spatial photos using machine learning. It's briefly mentioned in this WWDC video: https://developer.apple.com/wwdc24/10166 My question: can developers use this feature via an API, so we can turn any image into a spatial image, even if it is not in the device photo library? We would like to download an image from our server, convert it on the Vision Pro on the fly, and display it as a spatial photo.
1 reply · 0 boosts · 77 views · 1d
How to read back which image in a HEIC-encoded spatial photo is left and which is right?
In the example https://developer.apple.com/documentation/imageio/writing-spatial-photos, we see that each image encoded into the photo includes the following information:

    kCGImagePropertyGroups: [
        kCGImagePropertyGroupIndex: 0,
        kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
        (isLeft ? kCGImagePropertyGroupImageIsLeftImage : kCGImagePropertyGroupImageIsRightImage): true,
        kCGImagePropertyGroupImageDisparityAdjustment: encodedDisparityAdjustment
    ],

This identifies which image is the left and which is the right, along with the group type (stereo pair). Now, how do you read those back? I tried a straightforward implementation with CGImageSourceCopyPropertiesAtIndex, and it did not work; I just get "No property groups found."

    func tryToReadThose() {
        guard let imageData = try? Data(contentsOf: outputImageURL),
              let source = CGImageSourceCreateWithData(imageData as NSData, nil) else {
            print("cannot read")
            return
        }
        for i in 0..<CGImageSourceGetCount(source) {
            guard let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [String: Any] else {
                print("cannot read options")
                continue
            }
            if let propertyGroups = imageProperties[String(kCGImagePropertyGroups)] as? [Any] {
                // Process the property groups as needed
                print(propertyGroups)
            } else {
                print("No property groups found.")
            }
        }
    }

I assume CGImageSourceCopyPropertiesAtIndex may expect something as its third parameter, but under "Specifying the Read Options" in https://developer.apple.com/documentation/imageio/cgimagesource I don't see anything related to that.
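One untested possibility, offered as an assumption rather than a documented recipe: group properties may have to be requested explicitly through the options dictionary rather than being returned by default, and since a stereo pair spans two images, the group may live at the container level, which would make CGImageSourceCopyProperties the right call rather than the per-index variant. A sketch:

    import Foundation
    import ImageIO

    // Untested sketch: explicitly request property groups via the options
    // dictionary (an assumption about how the groups API is gated).
    func readPropertyGroups(from source: CGImageSource) {
        let groupRequest: [String: Any] = [:] // empty request dictionary
        let options = [String(kCGImagePropertyGroups): groupRequest] as CFDictionary
        if let props = CGImageSourceCopyProperties(source, options) as? [String: Any],
           let groups = props[String(kCGImagePropertyGroups)] as? [Any] {
            print("container-level property groups:", groups)
        } else {
            print("still no property groups")
        }
    }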
1 reply · 0 boosts · 71 views · 1d
Drag Gesture in Immersive Spaces with RealityKit
I've been trying to get the drag gesture up and running so I can move my 3D model around in my immersive space, but for some reason I am not able to move it. The model shows up in my visionOS 1.0 simulator, but I can't seem to get it to move. I would love some help with this, and some pointers to helpful resources. Here's a snippet of my RealityView code:

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct GearRealityView: View {
        static var modelEntity = Entity()

        var body: some View {
            RealityView { content in
                if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                    GearRealityView.modelEntity = model
                    content.add(model)
                }
            }
            .gesture(
                DragGesture()
                    .targetedToEntity(GearRealityView.modelEntity)
                    .onChanged { value in
                        GearRealityView.modelEntity.position = value.convert(value.location3D, from: .local, to: GearRealityView.modelEntity.parent!)
                    }
            )
        }
    }
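Not an authoritative answer, but a common cause of exactly this symptom: RealityKit gestures only target entities that carry both an InputTargetComponent and a CollisionComponent, and Entity(named:) does not add either for you. A sketch of the load step with those components added (the box size is a placeholder; match it to the model's actual bounds):

    if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
        // Without these two components the entity never receives gestures.
        model.components.set(InputTargetComponent())
        model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])])) // placeholder size
        GearRealityView.modelEntity = model
        content.add(model)
    }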
2 replies · 0 boosts · 72 views · 2d
Best practices for live-streaming MV-HEVC content?
I was wondering if anyone had guidance on how to "livestream" MV-HEVC content. More specifically, I have a left-eye and right-eye view for stereoscopic content (for example, views taken from a stereoscopic video passing through an AVPlayer). I know, based on sample code, that I can convert the stereoscopic video into an MV-HEVC file using AVAssetWriter. However, how would I take the stereoscopic video and encode it, in real time, to a stream that could then leverage HLS Tools to deliver to clients? Is AVFoundation capable of this directly, or is there an API within VideoToolbox that can help with this?
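Not a confirmed recipe, but VideoToolbox does expose multi-image (MV-HEVC) compression properties, so one plausible path is a VTCompressionSession configured for two views, with the compressed output packaged into fMP4 segments for HLS. A sketch of the session setup; the dimensions and the layer/view ID values are assumptions:

    import CoreMedia
    import VideoToolbox

    // Sketch: create an HEVC compression session and mark it as MV-HEVC
    // with two views (0 = left eye, 1 = right eye).
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: 1920,
        height: 1080,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session
    )

    if status == noErr, let session {
        VTSessionSetProperty(session,
                             key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs,
                             value: [0, 1] as CFArray)
        VTSessionSetProperty(session,
                             key: kVTCompressionPropertyKey_MVHEVCViewIDs,
                             value: [0, 1] as CFArray)
        VTSessionSetProperty(session,
                             key: kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs,
                             value: [0, 1] as CFArray)
        // Each stereo frame would then be submitted as a pair of CMTaggedBuffers
        // via VTCompressionSessionEncodeMultiImageFrame, and the compressed
        // samples muxed into fMP4 segments for HLS delivery.
    }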
0 replies · 0 boosts · 62 views · 2d
Dev documentation search is not accurate/complete
Posting here as I did not see a section for the Dev Documentation portal. Using the search box in the documentation portal, I searched for "frustum", hoping to find any APIs that gave me control over frustum culling: https://developer.apple.com/search/?q=frustum&type=Documentation The search came up empty for hits in RealityKit. Hours later I found the boundsMargin API, whose documentation explains how it affects frustum culling. I went back and tried the search again to verify that the documentation search results were incomplete. Searching Google with site:developer.apple.com/documentation/realitykit frustum worked fine. Fixing this would save everyone time and stress.
2 replies · 0 boosts · 80 views · 2d
[WebXR] Support for the AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html In Settings > Safari there is a feature flag for the WebXR AR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
2 replies · 1 boost · 174 views · 3d
[Metal Passthrough] upperLimbVisibility not respected in sample code
Hello everyone, super exciting stuff released this year! I was playing around with the Metal passthrough sample code (see: https://developer.apple.com/documentation/compositorservices/interacting_with_virtual_content_blended_with_passthrough) and noticed that upperLimbVisibility set to .automatic does not seem to work; my hand is always rendered on top. How to reproduce:

1. Draw something.
2. Position your hand behind the brush stroke.
3. Notice that your hands are still rendered on top.

Taking a GPU frame capture reveals that the depth is correctly written.

Xcode: Version 16.0 beta (16A5171c)
visionOS: 2.0 (22N5252n)
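For context, a condensed sketch of how that setting is applied to a CompositorServices scene; the app and space names here are placeholders, not the sample's exact code:

    import SwiftUI
    import CompositorServices

    @main
    struct PassthroughPaintApp: App { // placeholder name
        var body: some Scene {
            ImmersiveSpace(id: "paint") {
                CompositorLayer { layerRenderer in
                    // Metal render loop; brush strokes write depth here.
                }
            }
            // With .automatic, rendered content whose depth is in front of
            // the hands should occlude them.
            .upperLimbVisibility(.automatic)
        }
    }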
0 replies · 0 boosts · 120 views · 3d
TextureResource.DrawableQueue broken in visionOS 2?
I have an input texture in a ShaderGraphMaterial. I use replace(withDrawables:) to replace the texture with a drawable queue. When I present drawables to this queue, nothing happens in visionOS 2: the drawables are never presented, and I can't get any more via nextDrawable() because the unpresented ones hold things up. This happens with both the bgra8Unorm_srgb and rgba16float formats. I have confirmed that the material applied to my object has the modified texture resources on it. It was working in visionOS 1.2. What changed in visionOS 2 to cause this?
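For anyone trying to reproduce this, a minimal sketch of the setup described above; the size, pixel format, and usage flags are placeholders:

    import Metal
    import RealityKit

    // Create a drawable queue and swap it into an existing texture resource.
    func attachDrawableQueue(to texture: TextureResource) throws -> TextureResource.DrawableQueue {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm_srgb, // placeholder format
            width: 1024,
            height: 1024,
            usage: [.renderTarget, .shaderRead],
            mipmapsMode: .none
        )
        let queue = try TextureResource.DrawableQueue(descriptor)
        texture.replace(withDrawables: queue)
        return queue
    }

    // Per frame: get a drawable with try queue.nextDrawable(), render into
    // drawable.texture, then call drawable.present(). On visionOS 2 the
    // present call is where the behavior described above breaks down.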
3 replies · 0 boosts · 121 views · 4d
Avatars for spatial
Hello friends! I am looking into a use case where I want to add animated avatars to a RealityView. I would like to use a third-party package but have not found any with good iOS or visionOS support. Has anyone come across a package for this that I could look into?
0 replies · 0 boosts · 81 views · 4d
Not able to view custom stereo/spatial images in visionOS 2
Hello, I've been creating my own stereoscopic images on my laptop and AirDropping them to the Vision Pro to view them in 3D. My custom images have a left_eye.png and a right_eye.png and have been combined into one HEIF image (as is done natively on the headset). In the visionOS 1.x Photos app I was able to see my custom images in 3D, but in visionOS 2 the device no longer recognizes that my images should be shown stereoscopically and instead shows them in 2D. I see that it gives me the option to use the AI tool to convert 2D into 3D, but the original file that I AirDropped to myself (Mac --> AVP Photos album) already has a left and right image pair. Is this something that can be fixed?
1 reply · 0 boosts · 106 views · 4d