visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,191 Posts
Post not yet marked as solved
1 Reply
75 Views
I was executing some code from Incorporating real-world surroundings in an immersive experience:

```swift
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor

        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)

            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
```

I would like to toggle the Occlusion mesh that is available in the developer tools (see below), but programmatically: I would like a button that activates and deactivates it. I was checking .showSceneUnderstanding, but it does not seem to work in visionOS. I get the error 'ARView' is unavailable in visionOS when I try what is shown in Visualizing and Interacting with a Reconstructed Scene.
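The closest I've gotten so far is a plain SwiftUI button that enables and disables the mesh entities created above (a rough sketch, assuming the same `meshEntities` dictionary as in the sample; this is not the same thing as the Xcode "Occlusion mesh" debug overlay):

```swift
import SwiftUI
import RealityKit

struct MeshToggleButton: View {
    // Assumed to be the same dictionary filled in processReconstructionUpdates().
    let meshEntities: [UUID: ModelEntity]
    @State private var meshEnabled = true

    var body: some View {
        Button(meshEnabled ? "Disable scene mesh" : "Enable scene mesh") {
            meshEnabled.toggle()
            for entity in meshEntities.values {
                // Disabled entities stop colliding and receiving input.
                entity.isEnabled = meshEnabled
            }
        }
    }
}
```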
Post not yet marked as solved
0 Replies
104 Views
I just bought a Vision Pro to build and run my app on it, but I'm encountering this error from Xcode: Developer Mode is enabled. The computer is paired. The device is paired, and it's connected to my Mac via the Developer Strap. In the "VPN & Device Management" setting it tells me to navigate to, there is no "Developer App certificate". Others have suggested clearing the ModuleCache in DerivedData, but that's also been fruitless. I've cleaned the build multiple times and restarted both devices and Xcode. I have no idea what else I can possibly do to resolve this. Does anyone have any other ideas?
Post not yet marked as solved
1 Reply
65 Views
I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Do we need to do something specific to use this node? I tried the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align. Any help appreciated.
Posted by gl5
Post not yet marked as solved
4 Replies
105 Views
We have a random issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and it never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine. Unfortunately this is very difficult to manage in the flow of our app.
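For context, our monitoring loop is essentially the standard pattern below (a sketch; `session` is our shared ARKitSession, and the associated values on the events reflect my reading of the visionOS 1 SDK):

```swift
import ARKit

func monitorSessionEvents(for session: ARKitSession) async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(let providers, let newState, let oldState, let error):
            // This is where we see .paused arrive and never flip back to .running.
            print("Providers \(providers): \(oldState) -> \(newState), error: \(String(describing: error))")
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            break
        }
    }
}
```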
Post marked as solved
1 Reply
70 Views
Context: https://developer.apple.com/forums/thread/751036
I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView from Reality Composer Pro with the robot model baked into the file (not remote - asset in project)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to temp, creates a ModelEntity, and adds it to the content of the RealityView
Method 1 above is fine, but methods 2 and 3 load the model with a pure black texture for some reason. The ideal state is that methods 2 and 3 look like the method 1 result (see screenshot). Am I doing something wrong? E.g., should I not use multiple RealityViews at once?
Screenshot
Code:

```swift
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }
        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)
        SkyboxView()
        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
```
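One thing I've been meaning to try (a sketch, not a confirmed fix): explicitly giving the runtime-loaded entity the same image-based light as the Reality Composer Pro scene, in case the black texture just means the entity isn't receiving any IBL. `applyImageBasedLight(to:)` is a helper name I made up:

```swift
import RealityKit

/// Hypothetical helper: attach the same "ImageBasedLight" resource used by the
/// immersive scene to an entity that was loaded at runtime.
func applyImageBasedLight(to entity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
    entity.components.set(ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}
```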
Post not yet marked as solved
0 Replies
147 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
Post not yet marked as solved
1 Reply
88 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 - it throws up a compiler warning) - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to content in the RealityView
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
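Here is roughly what I have in mind (a sketch; the async `Entity(contentsOf:)` initializer and the temp-file handling are my assumptions, not something I've confirmed against official guidance):

```swift
import Foundation
import RealityKit

func downloadAndLoadEntity(from remoteURL: URL) async throws -> Entity {
    // 1. Download the USDZ into temporary storage.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

    // 2. Give the file a .usdz extension so RealityKit recognizes the format.
    let destinationURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempURL, to: destinationURL)

    // 3. Load the entity from disk with the async initializer (instead of Entity.load).
    return try await Entity(contentsOf: destinationURL)
}

// 4. In the RealityView make closure:
//    content.add(try await downloadAndLoadEntity(from: url))
```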
Post not yet marked as solved
3 Replies
115 Views
Hi, does anyone have any idea why this code block doesn't run properly in a Designed-for-iPad app running on the Vision Pro simulator? I'm trying to add a hover effect to a view in UIKit, but it just doesn't enter this if statement:

```swift
if #available(iOS 17.0, visionOS 1.0, *) {
    someView.hoverStyle = .init(effect: .automatic)
}
```
Post not yet marked as solved
1 Reply
80 Views
Hello, I am loading a model from the bundle and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).

```swift
@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to read the updated scale value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier adds the gesture support.
        .installGestures()
    } else {
        ProgressView()
    }
}
```

Below is the relevant code from the GestureExtension:

```swift
let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }

let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}
let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")
```

I need to use this "magnification" value in the RealityView. How can I do that? Could you please guide me?
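The workaround I'm considering (just a sketch; `ScaleReadout` is a type I'd add myself, not part of Apple's sample) is to publish the magnification through a small observable object and read it from the SwiftUI side:

```swift
import Observation

@Observable
final class ScaleReadout {
    static let shared = ScaleReadout()
    var magnification: Float = 1.0
}

// In the magnify gesture handler, after computing `magnification`:
//     ScaleReadout.shared.magnification = magnification

// In the RealityView attachment (SwiftUI re-renders when an observed property read in the body changes):
//     Text("\(Int(ScaleReadout.shared.magnification * 100)) %")
```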
Post not yet marked as solved
1 Reply
87 Views
Hi everyone,
This happens with Xcode 15.3 (15E204a) and visionOS 1.1.2 (21O231). To reproduce this issue, simply create a new visionOS app with Metal (see below). Then simply change the following piece of code in Renderer.swift:

```swift
func renderFrame() {
    [...]
    // Set the clear color red channel to 1.0 instead of 0.0.
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.0)
    [...]
}
```

On the simulator it works as expected, while on device it will show a black background with red jagged edges (see below).
Post not yet marked as solved
1 Reply
75 Views
Today I have tried to add a second archive action for visionOS. I had added a visionOS destination to my app target a while back and can build and archive my app for visionOS in Xcode 15.3 locally, and also run it on the device. Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):

1. Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.
2. Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.
3. This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.
4. Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1

All 4 errors are annotated with "Prepare Build for App Store Connect", and I get them for both "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options. I have tried to remove the visionOS destination and add it back, but this is not changing the project at all. Any ideas what I am missing?
Post not yet marked as solved
1 Reply
112 Views
I'm trying to implement playback of HLS content with FairPlay, and I want to insert it into a RealityView using a VideoMaterial on a sphere. When I use unencrypted HLS content everything works correctly, but when I use FairPlay it doesn't. To initialize FairPlay I am using the following in the view:

```swift
let contentKeyDelegate = ContentKeySessionDelegate(licenseURL: licenseURL, certificateURL: certificateURL)

// Create the Content Key Session using the FairPlay Streaming key system.
let contentKeySession = AVContentKeySession(keySystem: .fairPlayStreaming)
contentKeySession.setDelegate(contentKeyDelegate, queue: DispatchQueue.main)
contentKeySession.addContentKeyRecipient(asset)
```

Has anyone else encountered this problem?
Note: I'm testing on the Vision Pro directly because the simulator doesn't support FairPlay.
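For completeness, the sphere setup that works with the unencrypted stream is essentially this (a sketch; the `player` and `asset` names are placeholders, and it runs inside the RealityView make closure where `content` is available):

```swift
import AVFoundation
import RealityKit

let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
let videoMaterial = VideoMaterial(avPlayer: player)
let sphere = ModelEntity(mesh: .generateSphere(radius: 10), materials: [videoMaterial])
sphere.scale = SIMD3<Float>(x: 1, y: 1, z: -1)   // flip the sphere so the video is visible from inside
content.add(sphere)
player.play()
```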
Post not yet marked as solved
0 Replies
82 Views
I am trying to install apps from Xcode on my Mac to my Apple Vision Pro for testing. I have tried following the instructions by going into Settings->General->Remote Devices on my Apple Vision Pro, but my Mac does not show up there as a possible connection. I have made sure that both devices are connected to the same Wi-Fi network, updated my Mac to Sonoma, updated my AVP to the latest OS, and everything else that it asks for. I am able to mirror my display from my Mac, but installing apps from Xcode does not work. I have also looked to enable Developer Mode by going to Settings -> Privacy & Security -> Enable Developer Mode, but there is no option for enabling Developer Mode there. I initially thought it was a Bonjour protocol compatibility issue since both devices are on university Wi-Fi (WPA2-Enterprise), but I also tried connecting both over a regular WPA2 network, which also did not work.
Post not yet marked as solved
0 Replies
104 Views
Hello,
This is the first time as a developer that I have had to work deeply in Xcode; I am a Unity / Unreal developer. I am experiencing a bug, and I cannot get access to the call stack, because the bug is not a crash, it is not blocking the app, and I do not have access to the related files. When I use the visionOS simulator, I see the debugger print this:

```
MEMixerChannel.cpp:1006 MEMixerChannel::EnableProcessor: failed to open processor type 0x726f746d
AURemoteIO.cpp:1162 failed: -10851 (enable 1, outf< 2 ch, 0 Hz, Float32, deinterleaved> inf< 1 ch, 44100 Hz, Int16>)
MEMixerChannel.cpp:1006 MEMixerChannel::EnableProcessor: failed to open processor type 0x726f746d
```

Thus, I cannot put a breakpoint there (MEMixerChannel.cpp), because I don't have access to this file...
Kind regards.
Post not yet marked as solved
1 Reply
105 Views
In my app I play HLS streams via AVPlayer. It works well! However, when I try to download those same HLS URLs via makeAssetDownloadTask, I regularly come across this error:

```
Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found"
UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=(
    "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>"
), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>}
```

I have a feeling that AVPlayer has a way to resolve this that makeAssetDownloadTask lacks. I am wondering if any of you have come across this or have insight. Thank you!
BTW this is using Xcode Version 15.3 (15E204a) and developing for visionOS 1.0.1.
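For reference, the download setup is essentially the standard pattern below (a sketch; the session identifier, asset title, and delegate are placeholders, not my production code):

```swift
import AVFoundation

final class DownloadDelegate: NSObject, AVAssetDownloadDelegate {
    func urlSession(_ session: URLSession, assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        print("Downloaded to \(location)")
    }
}

func startDownload(of hlsURL: URL) -> AVAssetDownloadTask? {
    let configuration = URLSessionConfiguration.background(withIdentifier: "hls-download")
    let session = AVAssetDownloadURLSession(configuration: configuration,
                                            assetDownloadDelegate: DownloadDelegate(),
                                            delegateQueue: .main)
    let task = session.makeAssetDownloadTask(asset: AVURLAsset(url: hlsURL),
                                             assetTitle: "Stream",
                                             assetArtworkData: nil,
                                             options: nil)
    task?.resume()
    return task
}
```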
Post not yet marked as solved
0 Replies
139 Views
Please treat me as a beginner with Unity. I want to learn to develop a visionOS VR app with Unity, and I am trying to find a relatively complete learning route to get started, but Unity's official website does not have much documentation about visionOS VR apps. I hope you can point me to a complete route. Thank you!