visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,187 Posts
Post not yet marked as solved
3 Replies
59 Views
We have a random issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine. Unfortunately this is very difficult to manage in the flow of our app.
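For context, a minimal sketch of the kind of session-event monitor the post describes (the function name mirrors the post; everything else is illustrative and assumes an already-created `ARKitSession`):

```swift
import ARKit

// Sketch of a monitorSessionEvents() loop. The .dataProviderStateChanged
// event is where the stuck .paused state described above would surface.
func monitorSessionEvents(for session: ARKitSession) async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            // Expect .paused -> .running; the post reports it staying .paused.
            print("Providers changed to \(newState), error: \(String(describing: error))")
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            break
        }
    }
}
```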
Posted. Last updated.
Post not yet marked as solved
1 Reply
84 Views
I'm trying to implement playback of HLS content with FairPlay, and I want to insert it into a RealityView using a VideoMaterial on a sphere. When I use unencrypted HLS content everything works correctly, but when I use FairPlay it doesn't. To initialize FairPlay I am using the following in the view:

let contentKeyDelegate = ContentKeySessionDelegate(licenseURL: licenseURL, certificateURL: certificateURL)
// Create the Content Key Session using the FairPlay Streaming key system.
let contentKeySession = AVContentKeySession(keySystem: .fairPlayStreaming)
contentKeySession.setDelegate(contentKeyDelegate, queue: DispatchQueue.main)
contentKeySession.addContentKeyRecipient(asset)

Has anyone else encountered this problem? Note: I'm testing directly on Vision Pro because the simulator doesn't support FairPlay.
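For reference, a minimal sketch of the VideoMaterial-on-a-sphere setup mentioned above, assuming `streamURL` points at an HLS playlist (the FairPlay key session from the post would attach to `asset` before playback; all names here are illustrative):

```swift
import AVFoundation
import RealityKit

// Sketch: wrap an AVPlayer in a VideoMaterial and apply it to an inverted
// sphere so the video renders on the inside surface.
func makeVideoSphere(streamURL: URL) -> ModelEntity {
    let asset = AVURLAsset(url: streamURL)
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
    let material = VideoMaterial(avPlayer: player)
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 1000),
        materials: [material]
    )
    // Negative x scale flips the mesh inside out for panoramic viewing.
    sphere.scale = .init(x: -1, y: 1, z: 1)
    player.play()
    return sphere
}
```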
Posted by AlvaroVG. Last updated.
Post marked as solved
1 Reply
39 Views
Context: https://developer.apple.com/forums/thread/751036

I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack

At runtime I'm loading:

1. An immersive scene in a RealityView from Reality Composer Pro, with the robot model baked into the file (not remote; an asset in the project)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView

Method 1 above is fine, but methods 2 and 3 load the model with a pure black texture for some reason. The ideal state is that methods 2 and 3 look like the method 1 result (see screenshot). Am I doing something wrong? For example, should I not use multiple RealityViews at once?

Screenshot

Code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }
        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)
        SkyboxView()
        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
96 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
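For reference, the per-entity granularity the post mentions looks roughly like this sketch (the entity and resource names are hypothetical; this is per-entity, not the per-fragment blend being asked about):

```swift
import RealityKit

// Hypothetical sketch: give one entity a different image-based light than
// the rest of the scene via ImageBasedLightReceiverComponent.
func applyVirtualEnvironmentLight(to dinosaur: Entity) async {
    // "PortalEnvironmentIBL" is an assumed resource name for illustration.
    guard let environment = try? await EnvironmentResource(named: "PortalEnvironmentIBL") else { return }
    let lightSource = Entity()
    lightSource.components.set(ImageBasedLightComponent(source: .single(environment)))
    dinosaur.addChild(lightSource)
    // The receiver points at the entity carrying the IBL component.
    dinosaur.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightSource))
}
```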
Posted by GiantSox. Last updated.
Post not yet marked as solved
3 Replies
174 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know if it will ever be considered, or when it will appear?
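For what it's worth, RealityKit's built-in gaze highlight can be attached as in this sketch; a custom scale/texture/color change on hover is not part of this API, which is presumably what the feature request would ask for:

```swift
import RealityKit

// Sketch: the system-provided hover highlight on a gaze-targetable entity.
// Input targeting and collision shapes are required for hover to register.
let entity = ModelEntity(mesh: .generateSphere(radius: 0.1))
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
entity.components.set(HoverEffectComponent())
```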
Posted. Last updated.
Post not yet marked as solved
2 Replies
108 Views
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished:
                    break
                case .failure(let error):
                    assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}

Problem: the .failure(let error) case fires with:

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Posted by big_white. Last updated.
Post not yet marked as solved
1 Reply
65 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/

I think the process should be:

1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6; it throws a compiler warning): https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to content in the RealityView.

I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
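As a sketch of the steps the post lists (error handling kept minimal; the URL would be one of the Quick Look sample models):

```swift
import Foundation
import RealityKit

// Sketch: download a remote USDZ to temporary storage, then load it as an
// Entity. The returned entity can then be added to RealityView content.
func loadRemoteEntity(from url: URL) async throws -> Entity {
    // 1. Download to a temporary file via URLSession.
    let (tempURL, _) = try await URLSession.shared.download(from: url)
    // Entity(contentsOf:) needs a recognized extension, so keep the
    // original file name (e.g. "robot_walk_idle.usdz").
    let usdzURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(url.lastPathComponent)
    try? FileManager.default.removeItem(at: usdzURL)
    try FileManager.default.moveItem(at: tempURL, to: usdzURL)
    // 2. Load the entity from the temp file location.
    return try await Entity(contentsOf: usdzURL)
}
```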
Posted. Last updated.
Post not yet marked as solved
1 Reply
62 Views
Hello, I am loading a model from the bundle, and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).

@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to read the updated scale value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier adds gesture support
        .installGestures()
    } else {
        ProgressView()
    }
}

Below is the relevant code from the GestureExtension:

let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }
let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}
let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")

I need to use this magnification value in the RealityView. How can I do it? Could you please guide me?
Posted. Last updated.
Post not yet marked as solved
2 Replies
88 Views
Hi, do you have any idea why this code block doesn't run properly in a Designed for iPad app running on the visionOS simulator? I'm trying to add a hover effect to a view in UIKit, but execution never enters this if statement:

if #available(iOS 17.0, visionOS 1.0, *) {
    someView.hoverStyle = .init(effect: .automatic)
}
Posted. Last updated.
Post not yet marked as solved
3 Replies
513 Views
I've been having a hard time getting WebXR testing working on visionOS. On Ventura I installed visionOS 1.0, and video crashed when launching into WebXR. To get 1.1 I had to jump through a lot of hoops: getting the Xcode 15.3 beta and visionOS 1.1 also required upgrading to macOS Sonoma. On Ventura I was able to web-inspect Safari in visionOS 1.0, but on Sonoma with visionOS 1.1 I get "No Inspectable Applications". I have tried Safari and Preview Safari.
Posted by danrossi1. Last updated.
Post not yet marked as solved
1 Reply
77 Views
Hi everyone, this happens with Xcode 15.3 (15E204a) and visionOS 1.1.2 (21O231). To reproduce the issue, simply create a new visionOS app with Metal (see below), then change the following piece of code in Renderer.swift:

func renderFrame() {
    [...]
    // Set the clear color red channel to 1.0 instead of 0.0.
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.0)
    [...]
}

On the simulator it works as expected, while on device it shows a black background with red jagged edges (see below).
Posted by _clemzio_. Last updated.
Post not yet marked as solved
0 Replies
60 Views
Today I tried to add a second archive action for visionOS. I added a visionOS destination to my app target a while back and can build and archive my app for visionOS locally in Xcode 15.3, and also run it on the device. Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):

1. Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.
2. Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.
3. This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.
4. Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1

All 4 errors are annotated with "Prepare Build for App Store Connect", and I get them for both the "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options. I have tried removing the visionOS destination and adding it back, but this does not change the project at all. Any ideas what I am missing?
Posted by RK123. Last updated.
Post not yet marked as solved
0 Replies
72 Views
I am trying to deploy Xcode apps from my Mac to my Apple Vision Pro for testing. I have tried following the instructions by going into Settings > General > Remote Devices on my Apple Vision Pro, but my Mac does not show up there as a possible connection. I have made sure that both devices are connected to the same Wi-Fi network, updated my Mac to Sonoma, updated my AVP to the latest OS, and done everything else it asks for. I am able to mirror my display from my Mac, but deploying apps from Xcode does not work. I have also looked to enable Developer Mode by going to Settings > Privacy & Security > Enable Developer Mode, but there is no option for enabling Developer Mode there. I initially thought it was a Bonjour protocol compatibility issue, since both devices are on university Wi-Fi (WPA2-Enterprise), but I also tried connecting both over WPA2-Personal, which also did not work.
Posted. Last updated.
Post not yet marked as solved
0 Replies
93 Views
Hello, this is the first time as a developer that I have had to work deeply in Xcode; I am a Unity / Unreal developer. I am experiencing a bug, and I cannot get at the call stack because the bug is not a crash, it does not block the app, and I do not have access to the related files. When I use the visionOS simulator, I see the debugger print this:

MEMixerChannel.cpp:1006 MEMixerChannel::EnableProcessor: failed to open processor type 0x726f746d
AURemoteIO.cpp:1162 failed: -10851 (enable 1, outf< 2 ch, 0 Hz, Float32, deinterleaved> inf< 1 ch, 44100 Hz, Int16>)
MEMixerChannel.cpp:1006 MEMixerChannel::EnableProcessor: failed to open processor type 0x726f746d

Thus, I cannot put a breakpoint there (MEMixerChannel.cpp), because I don't have access to this file. Kind regards.
Posted. Last updated.
Post marked as solved
3 Replies
176 Views
I am developing a mixed immersive native app on Vision Pro. In my RealityView, I add my scene with content.add(mainGameScene). Normally the anchored position (the origin coordinate) should be at the device position but on the ground (with y == 0 at the floor); at least this is how I understand RealityViewContent works. So if I place something at position (0, 0, -1.0), the object should be in front of you but on the floor (the z axis points backwards). However, I recently loaded a different scene and added it with the same code, content.add(mainGameScene), and something changed: my scene is randomly anchored on the floor or the ceiling, depending on where I stand or sit. When I open Visualizations of my anchoring point, I can see that the anchor point being used is on the ceiling; the correct one (around my feet) is left over there. How can I switch to the correct anchored position? Or is there any setting that changes the default behavior of RealityViewContent?
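One possible workaround sketch, using the `mainGameScene` and `content` names from the post: pin the scene to a detected floor plane explicitly instead of relying on the default origin. Whether this addresses the ceiling-anchor behavior described is untested.

```swift
import RealityKit

// Hypothetical sketch, placed inside the RealityView make closure:
// anchor the scene to a horizontal floor plane rather than the view origin.
let floorAnchor = AnchorEntity(.plane(.horizontal,
                                      classification: .floor,
                                      minimumBounds: [0.5, 0.5]))
floorAnchor.addChild(mainGameScene)
content.add(floorAnchor)
```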
Posted by milanowth. Last updated.
Post not yet marked as solved
1 Reply
96 Views
In my app I play HLS streams via AVPlayer, and it works well! However, when I try to download those same HLS URLs via makeAssetDownloadTask, I regularly come across this error:

Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found" UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=(
    "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>"
), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>}

I have a feeling that AVPlayer has a way to resolve this that makeAssetDownloadTask lacks. I am wondering if any of you have come across this or have insight. Thank you! BTW this is with Xcode 15.3 (15E204a), developing for visionOS 1.0.1.
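For reference, a sketch of the download setup the post refers to (the identifier, title, `hlsURL`, and `delegate` are illustrative). One relevant design point: AVAssetDownloadURLSession selects which variant playlists and segments to fetch itself, which can differ from what AVPlayer requests during playback.

```swift
import AVFoundation

// Sketch: create a background asset download session and start an HLS
// download task. `delegate` is an AVAssetDownloadDelegate you provide.
func startDownload(of hlsURL: URL, delegate: AVAssetDownloadDelegate) {
    let configuration = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
    let downloadSession = AVAssetDownloadURLSession(
        configuration: configuration,
        assetDownloadDelegate: delegate,
        delegateQueue: .main
    )
    let asset = AVURLAsset(url: hlsURL)
    let task = downloadSession.makeAssetDownloadTask(
        asset: asset,
        assetTitle: "My Stream",   // illustrative title
        assetArtworkData: nil,
        options: nil
    )
    task?.resume()
}
```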
Posted. Last updated.