visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Bluetooth keyboard events in fully immersive Vision Pro app?
I'm writing a Vision Pro app that's fully immersive and rendered with Metal. Occasionally, some users of this app would benefit from being able to use a physical keyboard (or another accessory such as a game controller). Capturing and handling spatial gesture events seems very straightforward, but I cannot find an interface that allows the detection, capture, or handling of keyboard events in any of the objects associated with fully immersive Metal rendering: CompositorServices, LayerRenderer, and its associated .frame, .drawable, and .drawable.view don't seem to have any accessory awareness. Can you help me handle a keyboard event?
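One possible avenue, offered as a hedged sketch rather than a confirmed answer: the GameController framework (which also covers the game-controller case mentioned above) surfaces physical keyboards through GCKeyboard, independently of any view hierarchy. Whether its handlers fire while a fully immersive CompositorServices session is frontmost should be verified on device.

import GameController

final class KeyboardObserver {
    func start() {
        // GCKeyboardDidConnect fires when a hardware keyboard becomes available.
        NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { notification in
            guard let keyboard = notification.object as? GCKeyboard else { return }
            keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
                // keyCode is a GCKeyCode; route it into the Metal app's input state.
                print("key \(keyCode) \(pressed ? "down" : "up")")
            }
        }
    }
}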
Replies: 2 · Boosts: 0 · Views: 69 · Activity: 4h
SwiftUI previews don't work in multi-platform app
I created a native visionOS app which I am now trying to convert into a multi-platform app so that iOS is supported as well. I also have Swift packages that differ from platform to platform, to handle platform-specific code.

My SwiftUI previews work fine if I set up visionOS as the only target. But as soon as I add iOS 17 (with a minimum deployment of 17), they stop working. If I try to display them in the canvas, compilation fails and I get errors saying my packages require iOS 17 but the device only supports iOS 12 — which I never defined anywhere. This happens even if I set the preview device to visionOS. If I run the same setup on a real device or in a simulator, everything works just fine; only the previews are affected. How does the preview system decide which minimum deployment version to use, and how can I change it?

Update: This only happens if the app has a package dependency on a Swift package that itself includes a RealityKitContent package as a sub-dependency. I configured this package to be included only in visionOS builds, and the packages themselves declare the platform as .visionOS(.v1). If I remove the package completely from "Frameworks, Libraries, and Embedded Content", the previews work again. Re-adding the package brings back this odd behavior where the preview canvas thinks it is building for iOS 12.
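For reference, a hedged sketch of how a product dependency can be restricted to visionOS in a Package.swift (all names here are placeholders, and whether this also resolves the preview canvas behavior described above is unverified):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyFeature",
    platforms: [.iOS(.v17), .visionOS(.v1)],
    dependencies: [
        // Hypothetical local package containing RealityKitContent.
        .package(path: "Packages/MyRealityKitContent")
    ],
    targets: [
        .target(
            name: "MyFeature",
            dependencies: [
                // Link the product only when building for visionOS.
                .product(name: "MyRealityKitContent",
                         package: "MyRealityKitContent",
                         condition: .when(platforms: [.visionOS]))
            ]
        )
    ]
)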
Replies: 0 · Boosts: 0 · Views: 55 · Activity: 9h
Anchor a Reality scene on an image anchor
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. I've had no luck making it work so far and need some guidance to move on. I have the image file stored in the assets, and below is the source code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurred while adding RealityView content: \(error)")
            }
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 54 · Activity: 2h
How to exclude RealityKitContent from Swift package for iOS?
I've created an app for visionOS that uses a custom package which includes RealityKitContent as a sub-package. I now want to turn this app into a multi-platform app that also supports iOS. When I try to compile the app for that platform, I get this error message:

Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]

Thus, I want to exclude RealityKitContent from my package on iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all. I also tried posting this on the Swift forum, but no one could help me there either, so I am trying my luck here. Here is my Package.swift file:

// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(path: "../BackendServices"),
        .package(path: "../MeteorDDP"),
        .package(path: "Packages/OverlaysRealityKitContent"),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)

Based on a recommendation in the Swift forum, I also tried this:

dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(
                name: "OverlaysRealityKitContent",
                package: "OverlaysRealityKitContent",
                condition: .when(platforms: [.visionOS])
            )
        ]
    ),
    ...
]

but this won't work either. The problem seems to be that the package is listed under dependencies, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on its content (on visionOS). And I would rather not split the package into two parts (one for iOS and one for visionOS), if possible.
Replies: 0 · Boosts: 0 · Views: 76 · Activity: 18h
GroundingShadowComponent tanks performance even when set to false
Working on a visionOS app, I've noticed that even when castsShadow is false, performance goes down the drain when more than a few dozen entities have a GroundingShadowComponent. I managed to hard-crash the Vision Pro with about 200 entities that each had two ModelEntities with a GroundingShadowComponent attached but set to castsShadow = false. My workaround is to add and remove the GroundingShadowComponent from entities as needed, but I thought someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd think turning the shadow off would effectively disable the component and not incur a performance penalty.
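A minimal sketch of the workaround described above, attaching GroundingShadowComponent only while a shadow is actually wanted instead of leaving it attached with castsShadow = false:

import RealityKit

func setGroundingShadow(on entity: Entity, enabled: Bool) {
    if enabled {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    } else {
        // Removing the component entirely sidesteps the per-entity cost.
        entity.components.remove(GroundingShadowComponent.self)
    }
}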
Replies: 0 · Boosts: 0 · Views: 87 · Activity: 1d
How does Immersive Video work?
1. Does Immersive Video still consist of a two-view (stereo) video? Is depth information included?
2. Does Immersive Video use the MV-HEVC standard?
3. What is the difference between Immersive Video and Spatial Video?
4. Does Immersive Video currently support up to 180-degree 8K resolution? Will it expand to 360 degrees in the future? What's the resolution ceiling?
5. What are the requirements for third-party capture, encoding, and packaging for Immersive Video?
6. Can you give me some video packaging materials for Immersive Video?
Replies: 0 · Boosts: 0 · Views: 45 · Activity: 1d
glassBackgroundEffect not working in immersive view with a skybox
I seem to be running into an issue where the .glassBackgroundEffect modifier doesn't render correctly. It occurs when the modifier is attached to a view shown in a RealityKit immersive view with a skybox texture: the glass effect is applied but doesn't let any of the colour of the skybox behind it through. I have created a sample project that is just the immersive space template plus a skybox texture and an attachment with the glassBackgroundEffect modifier. The RealityView itself is:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                let attachment = attachments.entity(for: "foo")!
                let leftSphere = immersiveContentEntity.findEntity(named: "Sphere_Left")!
                attachment.position = [0, 0.2, 0]
                leftSphere.addChild(attachment)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                var skyboxMaterial = UnlitMaterial()
                let skyboxTexture = try! await TextureResource(named: "pano")
                skyboxMaterial.color = .init(texture: .init(skyboxTexture))
                let skyboxEntity = Entity()
                skyboxEntity.components.set(ModelComponent(mesh: .generateSphere(radius: 1000), materials: [skyboxMaterial]))
                skyboxEntity.scale *= .init(x: -1, y: 1, z: 1)
                content.add(skyboxEntity)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "foo") {
                Text("Hello")
                    .font(.extraLargeTitle)
                    .padding(48)
                    .glassBackgroundEffect()
            }
        }
    }
}

I've tried this both in the simulator and on a physical device and get the same behaviour. Not sure if this is an issue with RealityKit or if I'm just holding it wrong; I would greatly appreciate any help. Thanks.
Replies: 1 · Boosts: 0 · Views: 74 · Activity: 1d
How to get a USDZ model thumbnail image with materials applied in visionOS?
I want to get a thumbnail image from a USDZ model on visionOS, but the image I get doesn't have the materials applied. Here is my code:

import Foundation
import SceneKit
import SceneKit.ModelIO

class ARQLThumbnailGenerator {
    private let device = MTLCreateSystemDefaultDevice()!

    /// Create a thumbnail image of the asset with the specified URL at the specified
    /// animation time. Supports loading of .scn, .usd, .usdz, .obj, and .abc files,
    /// and other formats supported by ModelIO.
    /// - Parameters:
    ///   - url: The file URL of the asset.
    ///   - size: The size (in points) at which to render the asset.
    ///   - time: The animation time to which the asset should be advanced before snapshotting.
    func thumbnail(for url: URL, size: CGSize, time: TimeInterval = 0) -> UIImage? {
        let renderer = SCNRenderer(device: device, options: [:])
        renderer.autoenablesDefaultLighting = true
        if url.pathExtension == "scn" {
            let scene = try? SCNScene(url: url, options: nil)
            renderer.scene = scene
        } else {
            let asset = MDLAsset(url: url)
            let scene = SCNScene(mdlAsset: asset)
            renderer.scene = scene
        }
        let image = renderer.snapshot(atTime: time, with: size, antialiasingMode: .multisampling4X)
        self.saveImageFileInDocumentDirectory(imageData: image.pngData()!)
        return image
    }

    func saveImageFileInDocumentDirectory(imageData: Data) {
        let uniqueID = UUID().uuidString
        let tempPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let tempDocumentsDirectory: AnyObject = tempPath[0] as AnyObject
        let uniqueVideoID = uniqueID + "image.png"
        let tempDataPath = tempDocumentsDirectory.appendingPathComponent(uniqueVideoID) as String
        try? imageData.write(to: URL(fileURLWithPath: tempDataPath), options: [])
    }
}
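A hedged alternative worth trying: the QuickLookThumbnailing framework renders file thumbnails the same way the system does, which for USDZ generally includes the materials. The sketch below assumes the framework behaves on visionOS as it does on iOS.

import QuickLookThumbnailing
import UIKit

func quickLookThumbnail(for url: URL, size: CGSize, completion: @escaping (UIImage?) -> Void) {
    let request = QLThumbnailGenerator.Request(
        fileAt: url,
        size: size,
        scale: 2.0,
        representationTypes: .thumbnail
    )
    // The generator renders the file off-process and returns a representation.
    QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { representation, _ in
        completion(representation?.uiImage)
    }
}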
Replies: 1 · Boosts: 0 · Views: 80 · Activity: 2d
Object tracking on Vision Pro using Vision
I'm wondering if it's possible to implement object tracking on Vision Pro using Apple's Vision framework. I see that the Vision documentation offers a variety of computer vision classes tagged "visionOS", but all the example code in the documentation is only for iOS, iPadOS, or macOS. Can those classes also be used for developing Vision Pro apps? If so, how do they get a data feed from the camera of the Vision Pro?
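For what it's worth, a hedged sketch of the usual pattern on other platforms, which should carry over wherever a class is marked visionOS: Vision operates on image data you hand it (a CGImage, CVPixelBuffer, and so on) rather than pulling from a camera itself, and visionOS does not give ordinary third-party apps direct access to the main cameras, so the input would have to come from another source (bundled images, network frames, etc.).

import Vision

// Run a single Vision request on a CGImage you already have; tracking
// requests (VNTrackingRequest subclasses) use the same handler pattern,
// fed frame by frame.
func detectRectangles(in image: CGImage) throws -> [VNRectangleObservation] {
    let request = VNDetectRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}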
Replies: 0 · Boosts: 0 · Views: 77 · Activity: 3d
Spatial computing
Hello, I've been testing Apple spatial computing for a few weeks now. I had hoped that working on a spatial computer would make me more productive. At this time, the Safari extensions I use (Bitwarden) are not available, and there is no other web browser available. I haven't found a way to use Xcode on my spatial computer, nor any native (non-iPad) terminal tool. Mac screen sharing only gives me one screen (and I have four on my desk); that screen can be very big, but I would rather have four small screens than one big one. And screen sharing is hardly usable with a mouse: my mouse pointer always disappears from my screen when sharing with my spatial computer. For all these reasons, using spatial computing for work is not realistic at this time in my specific case; I lose too much time using it instead of a "real" computer. I hope this message helps you improve the software and provide all the tools needed to use it for work. Best regards, Julien Boquet
Replies: 0 · Boosts: 0 · Views: 97 · Activity: 3d
SceneReconstructionProvider stops providing updates
I have found that my Vision Pro device can get into a state where my app no longer receives fresh SceneReconstructionProvider updates. The SceneReconstructionProvider reports that it is in the DataProviderState.running state, and .anchorUpdates delivers a set of stale mesh anchors when first fired up, but produces no further updates. Once the device gets into this state, I can force-quit the app, and even uninstall and re-install it, and I get the same few mesh updates but no fresh ones until I restart the device. A sample async function is below. I can confirm that print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates") never gets executed, so execution stays inside the sceneReconstruction.anchorUpdates loop.

let session = ARKitSession()
var handTracking = HandTrackingProvider()
let sceneReconstruction = SceneReconstructionProvider()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
let worldTracking = WorldTrackingProvider()

...

func start() async {
    do {
        await requestAuth()
        if dataProvidersAreSupported && isReadyToRun && !isRunning {
            // print("ARKitSession starting.")
            try await session.run([sceneReconstruction, handTracking, planeDetection, worldTracking])
            startCount += 1
            // TODO: Fail gracefully if we have to attempt start too many (# TBD) times
        } else {
            print("dataProvidersAreSupported: \(dataProvidersAreSupported). isReadyToRun: \(isReadyToRun)")
            print("handTracking.state: \(handTracking.state), sceneReconstruction.state: \(sceneReconstruction.state), worldTracking.state: \(worldTracking.state), planeDetection.state: \(planeDetection.state)")
        }
    } catch {
        print("ARKitSession error:", error)
    }
}

...

func processReconstructionUpdates() async {
    while true {
        for await update in sceneReconstruction.anchorUpdates {
            let meshAnchor = update.anchor
            guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
            switch update.event {
            case .added:
                let entity = try! await generateModelEntity(geometry: meshAnchor.geometry)
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                entity.components.set(InputTargetComponent())
                entity.name = "mesh"
                entity.physicsBody = PhysicsBodyComponent(mode: .static)
                let sortComponent = ModelSortGroupComponent(group: modelSortGroup, order: 1)
                entity.components.set(sortComponent)
                entity.components.set(OpacityComponent(opacity: 0.5))
                meshEntities[meshAnchor.id] = entity
                meshesParent.addChild(entity, preservingWorldTransform: true)
            case .updated:
                guard let entity = meshEntities[meshAnchor.id],
                      let updatedEntity = try? await generateModelEntity(geometry: meshAnchor.geometry) else { continue }
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision?.shapes = [shape]
                if let newMesh = updatedEntity.model?.mesh {
                    entity.model?.mesh = newMesh
                }
            case .removed:
                meshEntities[meshAnchor.id]?.removeFromParent()
                meshEntities.removeValue(forKey: meshAnchor.id)
            }
            print("We now have \(meshEntities.count) mesh entities")
        }
        print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates")
        try? await Task.sleep(nanoseconds: 1_000_000)
    }
}
Replies: 5 · Boosts: 0 · Views: 116 · Activity: 1d
"Purchasing is not currently available on this device" in the visionOS sandbox
I am developing a visionOS application with in-app purchases that has not yet been submitted for review. During development, while using the payment feature in the sandbox environment, the store returns the following error:

explanation = "Purchasing is not currently available on this device in your country or region. Purchases you make on an iPhone, iPad or Mac can still be accessed here.\n\n[Environment: Sandbox]"

How should I handle this? Should I change to another country or region? Thanks.
Replies: 0 · Boosts: 0 · Views: 80 · Activity: 4d
Build errors for iOS for my visionOS app
I'm taking my iOS/iPadOS app and converting it so it runs on visionOS; I'm trying to build the app for both visionOS and iOS. When I try to build for an iPhone or iPad simulator, I get the following error:

Building for 'iphonesimulator', but realitytool only supports [xros, xrsimulator]

I'm thinking I might need a #if conditional compilation statement for visionOS so iOS doesn't try to build those lines of code, but for this particular error I can't find out which file or code needs the conditional compilation. Does anyone know how to get rid of this error?
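For the source-level side, a hedged sketch of the conditional compilation mentioned above. Note that #if only guards Swift code: if the realitytool error is triggered by a RealityKitContent package dependency (as other threads under this tag suggest), that dependency itself has to be excluded from iOS builds, for example in "Frameworks, Libraries, and Embedded Content".

import RealityKit

#if os(visionOS)
import RealityKitContent

// visionOS-only code path; never compiled for iphoneos/iphonesimulator.
func loadImmersiveScene() async throws -> Entity {
    try await Entity(named: "Scene", in: realityKitContentBundle)
}
#endif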
Replies: 2 · Boosts: 0 · Views: 142 · Activity: 18h
[Discussion] Why are these features missing in Vision Pro/visionOS?
Dear developers, now that we have played with the Vision Pro for three months, I am wondering why some features are missing from it, especially ones that seem very basic/fundamental. I would like to see if you know more about the reasons, or correct me if I'm wrong! You are also welcome to share features that you think are fundamental but missing on Vision Pro. My list goes below:

(1) GPS/compass: cost? heat? battery?
(2) Moving image tracking: is the surrounding-environment processing already too computationally intensive?
(3) 3D object tracking: looks like it is only supported on iOS and iPadOS, but not visionOS.
(4) No application focus/pause callback is invoked: maybe I'm wrong, but we were not able to detect whether an app has been put in the background or brought to the foreground (see the sketch below).
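On point (4), a hedged note: SwiftUI's scenePhase environment value is available on visionOS and reports foreground/background transitions; whether it covers every case the poster has in mind is worth verifying on device. A minimal sketch:

import SwiftUI

struct PhaseAwareView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Hello")
            .onChange(of: scenePhase) { _, newPhase in
                // Called when the scene moves between active/inactive/background.
                switch newPhase {
                case .active: print("foreground / active")
                case .inactive: print("inactive")
                case .background: print("background")
                @unknown default: break
                }
            }
    }
}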
Replies: 1 · Boosts: 0 · Views: 184 · Activity: 6d
Performing a rotation transform on an already transformed entity in a RealityView
I'm trying to better understand how to 'navigate' around a large USD scene inside a RealityView in SwiftUI (itself in a volume on visionOS). With a little trial and error I have been able to understand scale and translate transforms, and I can have the USD zoom to 'presets' of different scale and translation transforms. Separately, I can also rotate an unscaled and untranslated USD, having it rotate in place 90 degrees at a time until it returns to a rotation of 0 degrees. But if I try to combine the two activities, the rotation occurs around the center of the USD, not my zoomed location. Is there a session or sample code available that combines these activities? I think I would understand it relatively quickly if I saw it in action. Thanks for any pointers!
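Not a canonical answer, but one common approach is to rotate about an arbitrary pivot by conjugating the rotation with translations: translate so the pivot sits at the origin, rotate, then translate back. A minimal sketch (entity and pivot, given in the parent's coordinate space, are assumptions):

import RealityKit

func rotate(_ entity: Entity, by rotation: simd_quatf, around pivot: SIMD3<Float>) {
    var transform = entity.transform
    // Rotate the offset of the entity's position from the pivot...
    let offset = transform.translation - pivot
    transform.translation = pivot + rotation.act(offset)
    // ...and apply the same rotation to the entity's orientation.
    transform.rotation = rotation * transform.rotation
    entity.transform = transform
}

Calling this with the current zoom target as the pivot keeps the zoomed location fixed while the scene rotates around it.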
Replies: 1 · Boosts: 0 · Views: 141 · Activity: 11h
Problem with .windowStyle
Why is the code below correct,

WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(.plain)

while this version reports an error? How should it be modified?

WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(model.isShow ? .plain : .automatic)
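A hedged explanation: windowStyle is generic over a single concrete WindowStyle conformer, roughly func windowStyle<S: WindowStyle>(_ style: S) -> some Scene. A ternary expression needs one common type for both branches, but .plain is a PlainWindowStyle and .automatic is a DefaultWindowStyle, so the expression cannot type-check. The style is fixed when the scene is defined rather than re-evaluated at runtime, so it has to be a single concrete choice:

WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(.plain) // one concrete style, chosen at scene definition time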
Replies: 0 · Boosts: 0 · Views: 89 · Activity: 1w