Go beyond the window with SwiftUI


Discuss the WWDC23 Session Go beyond the window with SwiftUI


Posts under wwdc2023-10111 tag

9 Posts
Post not yet marked as solved
2 Replies
720 Views
Hi, I have one question. How do I trigger MagnifyGesture's onChanged event in the visionOS simulator? I have tried various operations, but the onChanged closure is never called. https://developer.apple.com/videos/play/wwdc2023/10111/?time=994

@main
struct WorldApp: App {
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .simultaneousGesture(MagnifyGesture()
                    .onChanged { value in
                        let scale = value.magnification
                        if scale > 5 {
                            currentStyle = .progressive
                        } else if scale > 10 {
                            currentStyle = .full
                        } else {
                            currentStyle = .mixed
                        }
                    }
                )
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}

Thanks.
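As an aside, not part of the original post: because the branches above are checked top to bottom, scale > 10 can never be reached once scale > 5 has matched. Below is a minimal sketch of the same gesture with the thresholds checked from largest to smallest; SolarSystem, the "solar" identifier, and the threshold values are carried over from the post, the reordering is a suggestion rather than a confirmed fix.

import SwiftUI

@main
struct WorldApp: App {
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .simultaneousGesture(
                    MagnifyGesture()
                        .onChanged { value in
                            // Check the largest threshold first so that all
                            // three immersion styles are actually reachable.
                            let scale = value.magnification
                            if scale > 10 {
                                currentStyle = .full
                            } else if scale > 5 {
                                currentStyle = .progressive
                            } else {
                                currentStyle = .mixed
                            }
                        }
                )
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}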
Post not yet marked as solved
1 Reply
468 Views
Hi, I'm trying the Scene Phases example, but no phase change event is issued when an Alert is presented. Is this a known bug? https://developer.apple.com/videos/play/wwdc2023/10111/?time=784

In the following part of the video, a center value is obtained, but a compile error occurs because center is not found. https://developer.apple.com/videos/play/wwdc2023/10111/?time=861

GeometryReader3D { proxy in
    ZStack {
        Earth(
            earthConfiguration: model.solarEarth,
            satelliteConfiguration: [model.solarSatellite],
            moonConfiguration: model.solarMoon,
            showSun: true,
            sunAngle: model.solarSunAngle,
            animateUpdates: animateUpdates
        )
        .onTapGesture {
            if let translation = proxy.transform(in: .immersiveSpace)?.translation {
                model.solarEarth.position = Point3D(translation)
            }
        }
    }
}

Also, model.solarEarth.position is a Point3D, so solarEarth is not a plain Entity, is it? I'm quite confused because the sample code is fragmented and I'm not even sure it works as shown. I'm not even sure whether this is a bug, so it's taking me a few days to a week to investigate and verify.
Post not yet marked as solved
1 Reply
698 Views
Hi, I'm trying to place my 3D content relative to my SwiftUI window, and I find that GeometryReader3D doesn't work as expected, or maybe I don't fully understand it. When I tap the 3D models in the RealityKit scene, I expect them to align with the center of the SwiftUI window, not with the immersive space origin (0, 0, 0). I also found that the transform returned by GeometryProxy3D.transform is always an identity matrix, no matter which space I pass. The example and documentation show it wrapped in a ZStack; I tried that as well and it still fails. Could someone clarify this a little? Below is a brief layout of my app:

WindowGroup {
    ContentView()
}

ImmersiveSpace(id: "ImmersiveSpace") {
    ImmersiveView()
}

var body: some View {
    GeometryReader3D { proxy in
        ZStack {
            RealityView { content in
                if let scene = try? await Entity(named: "ImmersiveScene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded({ value in
                if let trans = proxy.transform(in: .immersiveSpace)?.translation {
                    value.entity.position = SIMD3<Float>(trans.vector)
                }
            }))
        }
    }
}
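Not part of the original post, and not a confirmed fix: a pattern commonly used for positioning a tapped entity is to convert the gesture's 3D location into the entity's parent space through the targeted gesture value itself, rather than through GeometryProxy3D. A minimal sketch under that assumption; the entity is expected to already carry an InputTargetComponent and a CollisionComponent so it can receive gestures.

import SwiftUI
import RealityKit

struct TapToMoveView: View {
    var body: some View {
        RealityView { content in
            // Load or build entities here; they need InputTargetComponent
            // and CollisionComponent to receive tap gestures.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap's 3D location from the view's local
                    // coordinate space into the tapped entity's parent space.
                    if let parent = value.entity.parent {
                        value.entity.position = value.convert(
                            value.location3D, from: .local, to: parent)
                    }
                }
        )
    }
}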
Post not yet marked as solved
0 Replies
460 Views
I have a question from researching the layout of spatial views in visionOS. As I understand it, in a 2D layout GeometryReader receives the size proposed by its parent view. In a 3D spatial layout, GeometryReader3D can additionally obtain a depth value. However, in a custom layout's sizeThatFits function, the ProposedViewSize passed from the parent view to the child view contains no depth information. How does GeometryReader3D access this depth information?
Posted by JokerGM.
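Not an answer to the layout question, only a sketch of the documented reading side: the proxy that GeometryReader3D passes to its content exposes a Size3D, so the proposed depth can be read directly. VolumeContent below is a placeholder view and the offset value is an arbitrary example.

import SwiftUI

struct DepthAwareView: View {
    var body: some View {
        GeometryReader3D { proxy in
            // proxy.size is a Size3D, so it carries width, height, and depth.
            let proposedDepth = proxy.size.depth

            VolumeContent()
                // Example use of the depth: push the content toward the back
                // half of the proposed volume.
                .offset(z: -proposedDepth / 2)
        }
    }
}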
Post not yet marked as solved
2 Replies
692 Views
I have an app that launches into an immersive space with a mixed immersion style. The RealityView appears to have bounds that resemble a window. I would expect no such bounds because it's an ImmersiveSpace. Why do they exist, and how can I remove them? This is the entire code:

@main
struct RealityKitDebugViewApp: App {
    var body: some Scene {
        ImmersiveSpace {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State var logMessages = [String]()

    var body: some View {
        RealityView { content, attachements in
            let root = Entity()
            content.add(root)
            guard let debugView = attachements.entity(for: "entity_debug_view") else { return }
            debugView.position = [0, 0, 0]
            root.addChild(debugView)
        } update: { content, attachements in
        } attachments: {
            Color.blue
                .tag("entity_debug_view")
        }
        .onAppear(perform: {
            self.logMessages.append("Hello World")
        })
    }
}
Posted by tvg_123.
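A side note rather than an answer to the bounds question: in an ImmersiveSpace the coordinate origin sits at the user's feet, so an attachment positioned at [0, 0, 0] ends up on the floor directly below the viewer. Below is a minimal sketch that mirrors the post's attachment setup but places the debug view roughly at eye level in front of the user; the position values are assumptions.

import SwiftUI
import RealityKit

struct DebugAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            let root = Entity()
            content.add(root)
            guard let debugView = attachments.entity(for: "entity_debug_view") else { return }
            // Roughly eye level and one meter in front of the immersive
            // space origin (x right, y up, negative z forward), instead of
            // on the floor at [0, 0, 0].
            debugView.position = [0, 1.5, -1]
            root.addChild(debugView)
        } update: { _, _ in
        } attachments: {
            Color.blue
                .tag("entity_debug_view")
        }
    }
}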
Post not yet marked as solved
0 Replies
582 Views
I'm planning to use 3D video in "windows" and "volumes" in my visionOS app. What's the easiest method to capture 3D video for this need? Thanks and cheers!
Post not yet marked as solved
2 Replies
760 Views
Watching "Go beyond the window with SwiftUI" and the presenter is talking about immersive spaces phases (about 12:41 in the presentation) being automatically becoming inactive on the user stepping outside of the system boundary. I am curious about how this system boundary is defined? What happens if I have a mixed immersive space and want to allow the user to walk around a large room (ar enhanced art gallery experience for example) and explore, after a few steps will the space become inactive? Thanks for any clarifications.
Posted by jroome.
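Not part of the original post: one way to observe the behavior the session describes is to watch the scene phase from inside the immersive space's content and log the transitions while walking around. A minimal sketch, assuming a SolarSystem-style content view like the one used in the session.

import SwiftUI

struct BoundaryAwareContent: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        SolarSystem()
            .onChange(of: scenePhase) { _, newPhase in
                // Becomes .inactive when the system deactivates the space,
                // for example when the user moves outside the boundary.
                print("Immersive space scene phase: \(newPhase)")
            }
    }
}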