visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,190 Posts
Post not yet marked as solved
1 Reply
65 Views
I was executing some code from "Incorporating real-world surroundings in an immersive experience":

```swift
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
```

I would like to toggle the Occlusion mesh that is available in the dev tools, but programmatically: I would like a button that activates and deactivates it. I checked .showSceneUnderstanding, but it does not seem to work in visionOS; I get the error "'ARView' is unavailable in visionOS" when I try what is shown in "Visualizing and Interacting with a Reconstructed Scene".
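Since ARView and its debug options aren't available on visionOS, one workaround is to attach your own visible model to each reconstruction entity and show or hide it on demand. The sketch below is an assumption, not an Apple API: it presumes each mesh entity from the sample above is given a child entity named "meshVis" that carries a visible ModelComponent (for example, one generated from the anchor's geometry), so hiding the visualization does not disturb the collision and physics components on the parent. The type and property names are placeholders.

```swift
import SwiftUI
import RealityKit
import Observation

// Hypothetical model object; `meshEntities` is the same dictionary the sample fills
// in processReconstructionUpdates(). Each entity is assumed to have a child named
// "meshVis" that holds a visible ModelComponent.
@Observable
final class SceneMeshVisibility {
    var meshEntities: [UUID: ModelEntity] = [:]
    var showsMesh = false {
        didSet {
            // Enabling/disabling only the visualization child leaves collision intact.
            for entity in meshEntities.values {
                entity.findEntity(named: "meshVis")?.isEnabled = showsMesh
            }
        }
    }
}

struct MeshToggleButton: View {
    @Bindable var visibility: SceneMeshVisibility

    var body: some View {
        Toggle("Show scene mesh", isOn: $visibility.showsMesh)
            .toggleStyle(.button)
    }
}
```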
Posted by Hygison. Last updated.
Post not yet marked as solved
1 Reply
57 Views
I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Do we need to do something specific to use this node? I tried the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align. Any help appreciated.
Posted by gl5. Last updated.
Post not yet marked as solved
2 Replies
432 Views
I'm unable to figure out how to know when my app no longer has focus. ScenePhase only changes when the WindowGroup gets created or closed. UIApplication.didBecomeActiveNotification and UIApplication.didEnterBackgroundNotification are not posted either when, say, you move focus to Safari. What's the trick?
Posted by Lucky7. Last updated.
Post not yet marked as solved
1 Reply
240 Views
In SwiftUI, it looks like when a window is not in view, its scene phase is set to ScenePhase.background after a minute or so. When every window of an app has been closed, the last window to be closed also has its phase set to ScenePhase.background. This makes it quite difficult to differentiate reopening an app from glancing again at a window that was merely out of view. I have tried implementing a solution where I count opened/closed windows with an onChange that watches scenePhase, but unfortunately it looks like not every window closing gets detected (specifically, if there are two windows, one in the background, and the one active window is closed, its scenePhase onChange is never triggered). Is there a better way to differentiate between reopening an app and looking back at a suspended window that was never actually closed?
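For reference, here is a minimal sketch of the per-window counting approach described above, assuming an app-level observable object shared by every window. The type names, the ContentView placeholder, and the counting policy are mine, not an established API, and this still suffers from the missed onChange the post describes.

```swift
import SwiftUI
import Observation

// Hypothetical shared tracker: each window reports its scene-phase transitions here.
@Observable
final class WindowTracker {
    var activeWindows = 0
}

struct TrackedWindowRoot: View {
    @Environment(\.scenePhase) private var scenePhase
    var tracker: WindowTracker

    var body: some View {
        ContentView() // placeholder for the window's real content
            .onChange(of: scenePhase) { oldPhase, newPhase in
                // Count only transitions into and out of .active so a window
                // bouncing between .inactive and .active isn't double counted.
                if newPhase == .active && oldPhase != .active {
                    tracker.activeWindows += 1
                } else if oldPhase == .active && newPhase != .active {
                    tracker.activeWindows = max(0, tracker.activeWindows - 1)
                }
            }
    }
}
```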
Posted by ncitron. Last updated.
Post not yet marked as solved
0 Replies
98 Views
I just bought a Vision Pro to build and run my app on it, but I'm encountering this error from Xcode. Developer Mode is enabled, the computer is paired, the device is paired, and it's connected to my Mac via the Developer Strap. In the "VPN & Device Management" setting the error says to navigate to, there is no "Developer App certificate". Others have suggested clearing the ModuleCache in DerivedData, but that's also been fruitless. I've cleaned the build multiple times and restarted both devices and Xcode. I have no idea what else I can possibly do to resolve this. Does anyone have any other ideas?
Posted. Last updated.
Post not yet marked as solved
4 Replies
99 Views
We have an intermittent issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and the session never transitions to .running. If we exit the immersive space and call ARKitSession.run() again, it works fine. Unfortunately this is very difficult to manage in the flow of our app.
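For context, here is a sketch of what a monitorSessionEvents() loop that watches for the paused state could look like, based on ARKitSession.events and the dataProviderStateChanged event. The session, the provider, and the retry are assumptions for illustration, not a confirmed fix for the stuck-in-.paused behavior.

```swift
import ARKit

// Hypothetical setup: `session` and the provider would come from the app's own code.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func monitorSessionEvents() async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(dataProviders: _, newState: let newState, error: let error):
            if newState == .paused {
                // Log the transition; as an experiment, re-run with a fresh provider instance.
                print("Providers paused, error: \(String(describing: error))")
                try? await session.run([SceneReconstructionProvider()])
            }
        case .authorizationChanged(type: let type, status: let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            break
        }
    }
}
```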
Posted. Last updated.
Post not yet marked as solved
1 Reply
73 Views
Today I have tried to add a second archive action for visionOS. I had added a visionOS destination to my app target a while back and can build and archive my app for visionOS in Xcode 15.3 locally, and also run it on the device. Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):

Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.

Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.

This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.

Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1

All 4 errors are annotated with "Prepare Build for App Store Connect", and I get them for both the "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options. I have tried to remove the visionOS destination and add it back, but this is not changing the project at all. Any ideas what I am missing?
Posted by RK123. Last updated.
Post not yet marked as solved
1 Reply
111 Views
I'm trying to implement playback of HLS content with FairPlay, and I want to show it in a RealityView using a VideoMaterial on a sphere. When I use unencrypted HLS content everything works correctly, but when I use FairPlay it doesn't. To initialize FairPlay I am using the following in the view:

```swift
let contentKeyDelegate = ContentKeySessionDelegate(licenseURL: licenseURL, certificateURL: certificateURL)

// Create the Content Key Session using the FairPlay Streaming key system.
let contentKeySession = AVContentKeySession(keySystem: .fairPlayStreaming)
contentKeySession.setDelegate(contentKeyDelegate, queue: DispatchQueue.main)
contentKeySession.addContentKeyRecipient(asset)
```

Has anyone else encountered this problem? Note: I'm testing directly on Vision Pro because the simulator doesn't support FairPlay.
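For completeness, here is a sketch of the VideoMaterial-on-a-sphere part referred to above, with the FairPlay pieces omitted. The `hlsURL` value is a placeholder, and the negative x scale (so the video renders on the inside of the sphere) is one common approach rather than the poster's exact setup.

```swift
import AVFoundation
import RealityKit
import SwiftUI

struct VideoSphereView: View {
    // Placeholder URL for the HLS stream.
    let hlsURL = URL(string: "https://example.com/stream.m3u8")!

    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: hlsURL)
            let material = VideoMaterial(avPlayer: player)

            // A large sphere, flipped so the video is visible from the inside.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10), materials: [material])
            sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1)

            content.add(sphere)
            player.play()
        }
    }
}
```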
Posted by AlvaroVG. Last updated.
Post marked as solved
1 Reply
66 Views
Context: https://developer.apple.com/forums/thread/751036

I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack

At runtime I'm loading:

1. An immersive scene in a RealityView from Reality Compose Pro, with the robot model baked into the file (not remote; the asset is in the project)
2. A Model3D view that pulls the robot model from a web URL
3. A RemoteObjectView (RealityView) that downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView

Method 1 above is fine, but methods 2 and 3 load the model with a pure black texture for some reason. The ideal state is for methods 2 and 3 to look like the method 1 result (see screenshot). Am I doing something wrong? For example, should I not use multiple RealityViews at once?

Screenshot

Code:

```swift
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }
        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)
        SkyboxView()
        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
```
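One thing worth noting: in the code above, the ImageBasedLightComponent and ImageBasedLightReceiverComponent are applied only to the Reality Composer Pro scene entity, so entities loaded separately (as in methods 2 and 3) get no image-based light of their own. Below is a hedged sketch of applying the same IBL to a runtime-loaded entity; the resource name and intensity come from the code above, but this is a guess at the cause rather than a confirmed fix.

```swift
import RealityKit

// Sketch: give a runtime-loaded entity the same image-based light as the RCP scene.
func applyImageBasedLight(to entity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
    entity.components.set(ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}
```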
Posted. Last updated.
Post not yet marked as solved
0 Replies
143 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting-calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
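For reference, the per-entity version mentioned above looks roughly like the sketch below: the portal's world entity gets its own environment-based IBL, while content outside the portal keeps the default room lighting. This does not reproduce the per-fragment blend at the portal plane that the question is about; the resource name is a placeholder, and descendant model entities may also need the receiver component set on them.

```swift
import RealityKit

// Sketch: light everything inside a portal world with its own image-based light.
func makePortalWorld() async -> Entity {
    let world = Entity()
    world.components.set(WorldComponent())

    if let environment = try? await EnvironmentResource(named: "PortalEnvironment") {
        world.components.set(ImageBasedLightComponent(source: .single(environment)))
        world.components.set(ImageBasedLightReceiverComponent(imageBasedLight: world))
    }
    return world
}
```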
Posted by GiantSox. Last updated.
Post not yet marked as solved
3 Replies
191 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know if it will ever be considered, or when it will appear?
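As a point of reference, the closest thing available today is RealityKit's HoverEffectComponent, which applies the system highlight when an entity is looked at; it does not allow custom scale, texture, or color changes, which is presumably what the feature request would cover. A minimal sketch, with an arbitrary collision shape:

```swift
import RealityKit

// Sketch: opt an entity into the built-in gaze hover highlight. Custom appearance
// changes on hover are not part of this API.
func enableSystemHoverHighlight(on entity: ModelEntity) {
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    entity.components.set(HoverEffectComponent())
}
```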
Posted. Last updated.
Post not yet marked as solved
2 Replies
126 Views
```swift
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
```

Problem: the receiveCompletion failure case is hit, and assertionFailure("\(error)") reports:

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
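One way to narrow down an "Image decoding failed" error like this is to decode the image yourself first and, if that succeeds, build the texture from the CGImage with TextureResource.generate(from:options:). The asset name below is taken from the code above; whether this sidesteps the error in this particular case is an assumption, not a confirmed fix.

```swift
import UIKit
import RealityKit

// Sketch: decode the panorama with UIImage first, then create the texture from the CGImage.
func loadPanoramaTexture(named name: String) throws -> TextureResource {
    guard let cgImage = UIImage(named: name)?.cgImage else {
        throw NSError(domain: "Panorama", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "UIImage could not decode \(name)"])
    }
    // If decoding succeeds here but loadAsync fails, the problem is likely in the
    // texture-loading step (e.g. size or format) rather than the asset itself.
    return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
}
```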
Posted by big_white. Last updated.
Post not yet marked as solved
1 Reply
86 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/

I think the process should be:

1. Download the file from the web and store it in temporary storage with the FileManager API
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6, as it throws up a compiler warning): https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content of the RealityView

I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
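A minimal sketch of steps 1 to 3 under those assumptions: URLSession downloads the file, it is moved to a path that keeps the .usdz extension, and the async Entity(contentsOf:) initializer loads it. The function name is mine and error handling is minimal.

```swift
import Foundation
import RealityKit

// Sketch: fetch a remote USDZ and load it as an Entity.
func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    // 1. Download to a temporary file.
    let (downloadedURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Keep the .usdz extension so the loader recognizes the format.
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(remoteURL.lastPathComponent)
    try? FileManager.default.removeItem(at: destination)
    try FileManager.default.moveItem(at: downloadedURL, to: destination)

    // 2. Load the entity from the file URL.
    return try await Entity(contentsOf: destination)
}

// 3. In the RealityView make closure:
// if let entity = try? await loadRemoteUSDZ(from: modelURL) { content.add(entity) }
```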
Posted. Last updated.
Post not yet marked as solved
1 Reply
78 Views
Hello, I load a model from the bundle, and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).

```swift
@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to get the updated scaling value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier adds the gesture support
        .installGestures()
    } else {
        ProgressView()
    }
}
```

Below is the relevant code from GestureExtension:

```swift
let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }

let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}
let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")
```

I need to use this "magnification" value in the RealityView. How can I do it? Could you please guide me?
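One common pattern (a sketch, not the demo's own solution) is to publish the magnification in an observable object that both the gesture code and the attachment view can see, so the Text in the attachment updates automatically. Apple's EntityGestureState already stores magnifyValue; the sketch below assumes a separate observable ScaleReporter so it stays self-contained, and all names are placeholders.

```swift
import SwiftUI
import RealityKit
import Observation

// Hypothetical observable object shared between the gesture handler and SwiftUI.
@Observable
final class ScaleReporter {
    var magnification: Double = 1.0
}

struct ScaleLabel: View {
    var reporter: ScaleReporter

    var body: some View {
        // Because ScaleReporter is @Observable, this text re-renders whenever
        // the gesture code writes reporter.magnification.
        Text(String(format: "%.0f %%", reporter.magnification * 100))
            .background(.red)
    }
}

// In the gesture's onChanged handler (alongside the GestureExtension code above):
// reporter.magnification = Double(value.magnification)
//
// In the attachments builder:
// Attachment(id: "percentage") { ScaleLabel(reporter: reporter) }
```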
Posted. Last updated.
Post not yet marked as solved
3 Replies
112 Views
Hi, do you have any idea why this code block doesn't run properly in a "Designed for iPad" app running on the Vision Pro simulator? I'm trying to add a hover effect to a view in UIKit, but it just doesn't enter this if statement:

```swift
if #available(iOS 17.0, visionOS 1.0, *) {
    someView.hoverStyle = .init(effect: .automatic)
}
```
Posted. Last updated.
Post not yet marked as solved
3 Replies
526 Views
I've been having a hard time getting WebXR testing working on visionOS. I had Ventura and installed visionOS 1.0, and video crashed when launching into WebXR. To get 1.1 I did a lot of jumping around to get the Xcode 15.3 beta and visionOS 1.1, which also required upgrading to macOS Sonoma. On Ventura I was able to web-inspect Safari in visionOS 1.0, but on Sonoma with visionOS 1.1 I get "No Inspectable Applications". I have tried both Safari and Safari Technology Preview.
Posted by danrossi1. Last updated.
Post not yet marked as solved
1 Reply
85 Views
Hi everyone, this happens with Xcode 15.3 (15E204a) and visionOS 1.1.2 (21O231). To reproduce the issue, simply create a new visionOS app with Metal (see below), then change the following piece of code in Renderer.swift:

```swift
func renderFrame() {
    [...]
    // Set the clear color red channel to 1.0 instead of 0.0.
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.0)
    [...]
}
```

In the simulator it works as expected, while on device it shows a black background with red jagged edges (see below).
Posted by _clemzio_. Last updated.
Post not yet marked as solved
0 Replies
82 Views
I am trying to install apps built in Xcode from my Mac onto my Apple Vision Pro for testing. I have tried following the instructions by going to Settings > General > Remote Devices on the Apple Vision Pro, but my Mac does not show up there as a possible connection. I have made sure that both devices are connected to the same Wi-Fi network, updated my Mac to Sonoma, updated the AVP to the latest OS, and done everything else it asks for. I am able to mirror my Mac's display, but installing apps from Xcode does not work. I have also looked at enabling Developer Mode via Settings > Privacy & Security > Enable Developer Mode, but there is no option for enabling Developer Mode there. I initially thought it was a Bonjour protocol compatibility issue, since both devices are on university Wi-Fi (WPA2-Enterprise), but I also tried connecting both over a regular WPA2-Personal network, which did not work either.
Posted. Last updated.