Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Envision the future: Build great apps for visionOS
I was honored to be accepted to the "Envision the future: Build great apps for visionOS" (https://developer.apple.com/events/view/ZCH7ZUY24C/dashboard) event. Unfortunately, I am in China, and because of visa timing (a visa usually takes at least two months to obtain, but there were only about ten days between receiving the notice and the event), I cannot travel to the United States to attend in person. I hoped Apple could place an iPad on site and set up a FaceTime link so that I could call in to that iPad. I sent Apple this suggestion, but with only a few days left before the event, they have not replied. Apple has even sent me the ticket to the Developer Center and asked me to add it to the Apple Wallet app, which suggests my request has not been handled. So I am asking anyone who knows about this for advice or help. If any Apple engineers see this, I would be very grateful if you could contact the people who manage this event. Thank you.
0
0
613
Oct ’24
Is this the easiest way to create scene planes that allow for collision with RealityKit entities?
In my visionOS app I am using plane detection, and I want to create planes that have physics so that my RealityKit entities rest on real-world detected planes. I was curious whether the code below, which I found in the samples, is the most efficient way of doing this.

func processPlaneDetectionUpdates() async {
    for await anchorUpdate in planeTracking.anchorUpdates {
        let anchor = anchorUpdate.anchor
        if anchorUpdate.event == .removed {
            planeAnchors.removeValue(forKey: anchor.id)
            if let entity = planeEntities.removeValue(forKey: anchor.id) {
                entity.removeFromParent()
            }
            return
        }
        planeAnchors[anchor.id] = anchor

        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // Generate a mesh for the plane (for occlusion).
        var meshResource: MeshResource? = nil
        do {
            let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
            return
        }

        var material = UnlitMaterial(color: .red)
        material.blending = .transparent(opacity: .init(floatLiteral: 0))
        if let meshResource {
            // Make this plane occlude virtual objects behind it.
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }

        // Generate a collision shape for the plane (for object placement and physics).
        var shape: ShapeResource? = nil
        do {
            let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
            shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                               faceIndices: anchor.geometry.meshFaces.asUInt16Array())
        } catch {
            print("Failed to create a static mesh for a plane anchor: \(error).")
            return
        }
        if let shape {
            entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
            let physics = PhysicsBodyComponent(mode: .static)
            entity.components.set(physics)
        }

        let existingEntity = planeEntities[anchor.id]
        planeEntities[anchor.id] = entity
        contentEntity.addChild(entity)
        existingEntity?.removeFromParent()
    }
}

extension MeshResource.Contents {
    init(planeGeometry: PlaneAnchor.Geometry) {
        self.init()
        self.instances = [MeshResource.Instance(id: "main", model: "model")]
        var part = MeshResource.Part(id: "part", materialIndex: 0)
        part.positions = MeshBuffers.Positions(planeGeometry.meshVertices.asSIMD3(ofType: Float.self))
        part.triangleIndices = MeshBuffer(planeGeometry.meshFaces.asUInt32Array())
        self.models = [MeshResource.Model(id: "model", parts: [part])]
    }
}

extension GeometrySource {
    func asArray<T>(ofType: T.Type) -> [T] {
        assert(MemoryLayout<T>.stride == stride, "Invalid stride \(MemoryLayout<T>.stride); expected \(stride)")
        return (0..<count).map {
            buffer.contents().advanced(by: offset + stride * Int($0)).assumingMemoryBound(to: T.self).pointee
        }
    }

    func asSIMD3<T>(ofType: T.Type) -> [SIMD3<T>] {
        asArray(ofType: (T, T, T).self).map { .init($0.0, $0.1, $0.2) }
    }

    subscript(_ index: Int32) -> (Float, Float, Float) {
        precondition(format == .float3, "This subscript operator can only be used on GeometrySource instances with format .float3")
        return buffer.contents().advanced(by: offset + (stride * Int(index))).assumingMemoryBound(to: (Float, Float, Float).self).pointee
    }
}

extension GeometryElement {
    subscript(_ index: Int) -> [Int32] {
        precondition(bytesPerIndex == MemoryLayout<Int32>.size, """
        This subscript operator can only be used on GeometryElement instances with bytesPerIndex == \(MemoryLayout<Int32>.size).
        This GeometryElement has bytesPerIndex == \(bytesPerIndex)
        """)
        var data = [Int32]()
        data.reserveCapacity(primitive.indexCount)
        for indexOffset in 0..<primitive.indexCount {
            data.append(buffer
                .contents()
                .advanced(by: (Int(index) * primitive.indexCount + indexOffset) * MemoryLayout<Int32>.size)
                .assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asInt32Array() -> [Int32] {
        var data = [Int32]()
        let totalNumberOfInt32 = count * primitive.indexCount
        data.reserveCapacity(totalNumberOfInt32)
        for indexOffset in 0..<totalNumberOfInt32 {
            data.append(buffer.contents().advanced(by: indexOffset * MemoryLayout<Int32>.size).assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asUInt16Array() -> [UInt16] {
        asInt32Array().map { UInt16($0) }
    }

    public func asUInt32Array() -> [UInt32] {
        asInt32Array().map { UInt32($0) }
    }
}

I was also curious to know whether I can do this without ARKit, using SpatialTrackingSession. My understanding is that with SpatialTrackingSession in RealityKit I can only get the transforms of the AnchorEntities, but not the geometry information needed to create the collision shapes.
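For the SpatialTrackingSession question, here is a minimal sketch of the RealityKit-only route, assuming visionOS 2 APIs. The collision shape is an approximation (a thin box), because as far as I know AnchorEntity does not expose the detected plane's mesh geometry, so ARKit's PlaneDetectionProvider remains the way to get exact per-plane shapes.

import SwiftUI
import RealityKit

// Minimal sketch: SpatialTrackingSession plus AnchorEntity(.plane) gives a plane to
// anchor content to, but no per-anchor mesh, so the collision shape below is an
// approximate slab rather than the real plane geometry.
struct PlaneCollisionSketchView: View {
    @State private var session = SpatialTrackingSession()

    var body: some View {
        RealityView { content in
            // Run RealityKit's own tracking so plane anchors receive real-world poses.
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)

            // Anchor to a horizontal, table-like plane of at least 30 x 30 cm.
            let planeAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.3, 0.3]))

            // Approximate the detected surface with a thin static collision slab.
            let surface = Entity()
            surface.components.set(CollisionComponent(shapes: [.generateBox(size: [1, 0.01, 1])],
                                                      isStatic: true))
            surface.components.set(PhysicsBodyComponent(mode: .static))
            planeAnchor.addChild(surface)

            content.add(planeAnchor)
        }
    }
}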
2
0
587
Oct ’24
Grabbing an entity with a hand gesture
How can I let the user grab a specified entity in a RealityView? The entity has physics and collision components, and it should not move while the user is not grabbing it. When the user makes a grab hand gesture very close to the entity (a small deviation is acceptable), an anchor component should be enabled to bind the entity to the hand; when the user lets go, the entity should fall along the y-axis from its current position (under the physics component). I hope you can help me. Thank you.
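One possible approach, sketched under the assumption that the entity already has collision and physics components and that separate hand-tracking code decides when the grab starts and ends (GrabController, beginGrab, and endGrab are illustrative names, not an Apple API): reparent the entity to a hand-anchored entity while it is held, switch its physics body to kinematic, and switch back to dynamic on release so it falls from where it was let go.

import RealityKit

// Illustrative sketch: `root` is the RealityView content's root entity, and `item`
// already has CollisionComponent and PhysicsBodyComponent(mode: .dynamic).
final class GrabController {
    private let handAnchor = AnchorEntity(.hand(.right, location: .palm))

    func install(in root: Entity) {
        root.addChild(handAnchor)
    }

    func beginGrab(of item: Entity) {
        // Stop gravity from fighting the hand while the entity is held.
        if var body = item.components[PhysicsBodyComponent.self] {
            body.mode = .kinematic
            item.components.set(body)
        }
        // Reparent to the hand anchor so the entity follows the palm.
        item.setParent(handAnchor, preservingWorldTransform: true)
    }

    func endGrab(of item: Entity, returningTo root: Entity) {
        // Reparent back into the scene at the current world pose.
        item.setParent(root, preservingWorldTransform: true)
        // Let physics take over so the entity falls from the release position.
        if var body = item.components[PhysicsBodyComponent.self] {
            body.mode = .dynamic
            item.components.set(body)
        }
    }
}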
1
0
515
Oct ’24
How to disable the download option from the Quick Look PreviewApplication?
Hi, Following the WWDC24 video "What's new in Quick Look for visionOS", I've managed to open a 3D model using PreviewApplication by calling:

let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])

However, the "Save to Downloads" option is always there (see attached screenshot). The models are user-generated content, and I don't want the download option to be available to all users. How can I disable it?
3
1
621
Oct ’24
Send messages to the scene
In the Behaviors configuration in Reality Composer Pro I saw an OnNotification trigger, which asks me to enter a Notification name; in other words, Swift code in Xcode has to send a message that carries this name. Could someone show me how to send such a message from Swift, including the message name? (There is an answer at https://developer.apple.com/forums/thread/756978, but it doesn't work.) Also, the timeline in Reality Composer Pro has a Notification action, which is used to send messages to Swift. How can Swift detect that the Notification action has sent a message? (There is an answer in https://developer.apple.com/videos/play/wwdc2024/10102/, but it doesn't work.) I have asked this question before (https://developer.apple.com/forums/thread/756978); those answers used to work, but they no longer do on the latest system. I hope you can help me. Thank you.
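For the first question (triggering an OnNotification behavior from Swift), here is a sketch of the pattern shown in the WWDC24 session linked above; the notification name and userInfo keys are the ones from that session, and "StartTimeline" stands for whatever Notification name was entered in Reality Composer Pro. If it still fails on the latest system, those key strings are the first thing worth re-checking against the current sample code.

import RealityKit

// Sketch: posting to a Behaviors "OnNotification" trigger. `entity` is any entity in
// the Reality Composer Pro scene; "StartTimeline" is a placeholder identifier.
func notifyScene(from entity: Entity, identifier: String = "StartTimeline") {
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}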
1
1
748
Oct ’24
USDZ models from RealityKitContentBundle losing textures (showing all black)
In my Vision Pro app, I'm facing a problem with loading USDZ models from a RealityKit bundle package created using Reality Composer Pro. It was working fine until I added more models to the package. As I added more models with large textures to the project, the app started to show them with texturing problems. When I load the models from the RealityView using Entity(named:in:), the mesh loads correctly but appears all black, with no textures. However, when I load the same USDZ directly from the main bundle using ModelEntity(named:in:), it loads fine. I know that large textures can cause memory issues, but for a single model this should not be enough to cause a memory overflow on the Vision Pro. This USDZ model is about 40 MB with roughly 800 MB of texture memory (from the Reality Composer Pro Statistics tab). I've built Vision Pro experiences with much heavier models, and they do show the same texture issues, but only after more than three huge models are enabled in the Reality scene, which is not the case here. The un-textured model appears right from the beginning, so it seems to me this is not a runtime issue on the device, but rather some issue in the packaging process from Reality Composer Pro to Xcode to the device. Am I correct? I'm also using a simple Mac mini with M2 but only 8 GB of RAM; maybe that's the issue? As I still want to use Reality Composer Pro to build more dev-friendly and interesting applications, I'd really appreciate some guidance here! Thanks in advance!
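For reference, a sketch of the two load paths being compared; the asset name "MyModel" is a placeholder, and realityKitContentBundle is the bundle symbol exported by the Reality Composer Pro package.

import RealityKit
import RealityKitContent

// Sketch of the two load paths described above ("MyModel" is a placeholder name).
func loadBothWays() async throws -> (fromPackage: Entity, fromMainBundle: ModelEntity) {
    // Loading from the Reality Composer Pro package (the path where textures come back black).
    let packaged = try await Entity(named: "MyModel", in: realityKitContentBundle)
    // Loading the same USDZ directly from the app's main bundle (the path that renders correctly).
    let direct = try await ModelEntity(named: "MyModel")
    return (packaged, direct)
}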
1
0
747
Oct ’24
Vision Pro not pairing with MacBook Pro
I'm having trouble pairing my Apple Vision Pro to my MacBook Pro M3. The MacBook Pro is on Sonoma 14.6, and I have tested pairing with a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the same headsets. These are the steps I have tried so far on the Vision Pro and MacBook Pro, with no success:
- put both on the same Windows Wi-Fi hotspot
- put both on the same iPhone hotspot
- tried another Wi-Fi hotspot
- cleared remote devices; the headset is still not recognized
- turned Developer Mode off and on again
- reset network settings
- restarted the headset
- restarted Xcode
- restarted the Mac; just after the restart the headset showed up, I clicked Pair and typed in the code, but the headset stayed "disconnected" and couldn't connect to the Mac
- restarted both the Mac and the headset
- renamed the headset
- switched to another Mac
- tried one headset on at a time
- cleaned the build folder
- deleted the contents of ~/Library/Developer/Xcode/DerivedData
- ran sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
- deactivated the firewall
2
0
635
Oct ’24
Control of LongPressGesture-created element
I have been implementing the LongPressGesture to have a menu come up upon a long press. I love the functionality and it is very close to being where I want it to be. I don't know if this is a visionOS-specific thing, but I am hoping to control the corner radius of the pulled-out element behind my "button." I've wrangled hover effects in the past with overlays, but I'm not sure what to target in this case. Worst case, I'll have to change the border radius on all of my tiles to match this LongPressGesture-controlled behavior, or I could possibly change the radius onLongPressGesture to match. Is there a simpler solution? Thanks!
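Not a confirmed answer, but one thing that may be worth trying, sketched under the assumption (not verified) that the pulled-out platter follows the view's hover-effect content shape: give the tile an explicit rounded-rect content shape for hover effects.

import SwiftUI

// Sketch: constrain the hover-effect shape to a rounded rect with a chosen corner
// radius. Whether the long-press platter adopts this shape is an assumption.
struct Tile: View {
    var body: some View {
        Image(systemName: "photo")
            .frame(width: 200, height: 200)
            .contentShape(.hoverEffect, .rect(cornerRadius: 24))
            .hoverEffect(.highlight)
            .onLongPressGesture(minimumDuration: 0.4) {
                // Present the menu here.
            }
    }
}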
2
0
462
Oct ’24
Issue: ARKit Camera Frame Provider Not Authorized in visionOS App
I'm developing a visionOS app using EnterpriseKit, and I need access to the main camera for QR code detection. I'm using the ARKit CameraFrameProvider and ARKitSession to capture frames, but I'm encountering this error when trying to start the camera stream:

ar_camera_frame_provider_t: Failed to start camera stream with error: <ar_error_t Error Domain=com.apple.arkit Code=100 "App not authorized.">

Context:
- visionOS using EnterpriseKit for camera access and QR code scanning.
- My Info.plist includes the necessary permissions, such as NSCameraUsageDescription and NSWorldSensingUsageDescription.
- I've added the com.apple.developer.arkit.main-camera-access.allow entitlement as per the official documentation here.
- My app is allowed camera access as shown in the logs (Authorization status: [cameraAccess: allowed]), but the camera stream still fails to start with the "App not authorized" error.
- I followed Apple's WWDC 2024 sample code for accessing the main camera in visionOS from this session.

Sample of my code:

import ARKit
import Vision

class QRCodeScanner: ObservableObject {
    private var arKitSession = ARKitSession()
    private var cameraFrameProvider = CameraFrameProvider()
    private var pixelBuffer: CVPixelBuffer?

    init() {
        Task {
            await requestCameraAccess()
        }
    }

    private func requestCameraAccess() async {
        await arKitSession.queryAuthorization(for: [.cameraAccess])

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else { return }

        Task {
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
                self.pixelBuffer = mainCameraSample.pixelBuffer
                // QR code detection code here
            }
        }
    }
}

Things I've tried:
- Verified entitlements in both Info.plist and .entitlements files; I have added the com.apple.developer.arkit.main-camera-access.allow entitlement.
- Confirmed camera permissions in the privacy settings.
- Followed the official documentation and the WWDC 2024 sample code.
- Checked my provisioning profile to ensure it supports ARKit camera access.

Request: Has anyone encountered this "App not authorized" error when accessing the main camera via ARKit in visionOS using EnterpriseKit? Are there additional entitlements or provisioning profile configurations I might be missing? Any help would be greatly appreciated! I haven't seen any official examples using the new API for main camera access, and no open-source examples either.
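One difference from the snippet above: it queries authorization but never requests it or checks the result. Below is a sketch that explicitly requests camera access before running the session; this alone may not clear the Enterprise entitlement error, but it rules out one cause.

import ARKit

// Sketch: request (not just query) camera authorization and check the result before
// running the session. The Enterprise "App not authorized" error may still require the
// license file and provisioning profile to match the entitlement.
func startMainCamera(session: ARKitSession, provider: CameraFrameProvider) async {
    let results = await session.requestAuthorization(for: [.cameraAccess])
    guard results[.cameraAccess] == .allowed else {
        print("Camera access not granted: \(String(describing: results[.cameraAccess]))")
        return
    }
    do {
        try await session.run([provider])
    } catch {
        print("Failed to run ARKit session: \(error)")
    }
}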
9
1
1.1k
Oct ’24
Cannot find the Enterprise API entitlement for Vision Pro
We are developing a visionOS app. We have applied for the Enterprise APIs for visionOS, including Main Camera Access for Vision Pro, and we already received the "Enterprise.license" file in the mail Apple sent us. Using our developer account we imported the license file into Xcode, but in Xcode we cannot find the entitlement for the Enterprise API. If we put com.apple.developer.arkit.main-camera-access.allow into the project's entitlements file manually, Xcode raises an error, and the app target doesn't show the "Additional Capabilities" section that should include the Enterprise API. What do we need to do to get the Enterprise API entitlement into the entitlements file so that we can use the Enterprise API?
6
1
757
Oct ’24
RGB-D and Point Clouds in visionOS
Dear all, We are building an XR application demonstrating our research on open-vocabulary 3D instance segmentation for assistive technology. We intend to bring it to visionOS using the new Enterprise APIs. Our method was trained on datasets resembling ScanNet, which contain the following: localization (1), RGB camera frames (2), depth (3), camera intrinsics (4), and a point cloud (5). I understand we can query (1), (2), and (4) from the CameraFrameProvider. As for (3) and (5), it is unclear to me if/how we can obtain that data. In handheld ARKit, this example project demonstrates how the depthMap can be used to simulate raw point clouds. However, this property doesn't seem to be available in visionOS. Is there some way for us to obtain depth data associated with camera frames? "Faking" depth data from the SceneReconstructionProvider-generated meshes is too coarse for our method. I hope I'm just missing some detail and there's some way to configure CameraFrameProvider to also deliver depth and/or point clouds. Thanks for any help or pointers in the right direction! ~ Alex
2
1
610
Oct ’24
How to extract the stereo image pair from spatial photos generated by visionOS 2.0
Hi, My app allows users to share and view spatial photos. For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as its inputs. For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend. However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviour in my app, while the same photos are viewed correctly in the system Photos app:
- Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
- Even when the image pair have the same size, when viewed in my app there are artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.

Google Drive link: https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa

I know that the Quick Look preview application can now display spatial photos, but I would like to keep my own implementation for compatibility reasons. Below is a code snippet that deals with the extraction. Please point out the correct way to extract the stereo image pair from a generated spatial photo. Happy to submit a code-level support request if more information is needed.

// the data is from a photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCGImage = extractSpatialImage(at: 0),
              let rightCGImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCGImage)
        let rightImage = UIImage(ciImage: rightCGImage)

        guard leftImage.size == rightImage.size else {
            return nil
        }

        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
1
0
1.2k
Oct ’24
USDZ file not loading on Apple Vision Pro
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro, so they can place the object in their physical environment. The project is deployed on Vercel. I'm testing the page with my Apple Vision Pro: when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it tries to load, and it never loads. Does anybody know how to troubleshoot this issue?
4
0
927
Oct ’24
Unity/PolySpatial GameController framework failing to load
I have a simple example of a motion matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input. I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?

error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed

I'm using Xcode 16 beta 6, Unity 6000.0.17f1, visionOS 2.0 beta 9.
2
0
948
Oct ’24
Multilayer VisionOS App icon not working
I tried to use the application icon from sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover on the icon in the Vision Pro simulator. Could you please advise how to fix the problem? I am using the latest Xcode Version 15.4 (15F31d). Thank you.
2
0
496
Oct ’24