I am at a loss. I have looked at examples, and I have used chat/cursor. I cannot figure out how to target the transform/position of a Reality Composer Pro project when adding it to an ARView in iOS.
I have a test red sphere working perfectly for raycast positioning. When I pass the same variables (tested with print-outs) to the Entity or Anchor position/transform, nothing changes.
It seems that, no matter what, the content of the Reality Composer Pro project is placed where the camera view initialized.
How do I actually interact with its position? I just want to be able to tap the screen and place the RCP content wherever I want.
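For reference, here is roughly the pattern I'm attempting, sketched from memory; rcpEntity is a placeholder for the entity loaded from my Reality Composer Pro package, and the raycast is the same query that positions the test sphere correctly:
func placeRCPContent(at screenPoint: CGPoint, in arView: ARView, entity rcpEntity: Entity) {
    // Raycast against estimated horizontal planes, same query as the red-sphere test
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .horizontal).first else { return }
    // Anchor at the hit's world transform instead of relying on whatever
    // anchoring is baked into the Reality Composer Pro scene
    let worldAnchor = AnchorEntity(world: result.worldTransform)
    worldAnchor.addChild(rcpEntity)
    rcpEntity.position = .zero // clear any offset carried over from the RCP file
    arView.scene.addAnchor(worldAnchor)
}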
I am working on a project that contains a QuickLook view and some ARViews. I want to restrict the entire app to Portrait orientation, but allow the ARView to use both Portrait and Landscape.
If I restrict the app to Portrait in the Deployment Info settings, we can still turn the device to landscape in the ARView. However, there is an issue with "some" spatial audio files within the digital experience: some spatial audio items are placed appropriately, and others are panned oddly to the left.
If we allow Landscape Left and Right in the Deployment Info settings, all spatial audio behaves appropriately. So, we need to "lock" every other view to Portrait and only allow Portrait and Landscape on the ARView.
I'm not smart enough to know how to do that, but I found this excellent package on GitHub. It works as expected. https://github.com/wvteijlingen/swiftui-interface-orientation
However! When we wrap SwiftUI with UIKit, it appears every single view that contains an ARView is initialized at launch, even though it is not visible. So, when the app launches, it is running multiple ARViews at once.
It appears we need some kind of lazy loading so this doesn't occur, but again, I am not knowledgeable enough for this yet. I tried wrapping it all in a LazyVStack and also a LazyView struct, but I couldn't get either to build appropriately.
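For context, the LazyView struct I was attempting looked something like this (MyARViewContainer is a placeholder for one of my ARView-hosting views); the idea is that the destination isn't constructed until the link is actually followed:
struct LazyView<Content: View>: View {
    let build: () -> Content

    init(_ build: @autoclosure @escaping () -> Content) {
        self.build = build
    }

    var body: some View {
        build() // the wrapped view is only constructed when body is evaluated
    }
}

// Usage: the ARView-containing destination is only initialized when navigated to
NavigationLink(destination: LazyView(MyARViewContainer())) {
    Text("Open AR")
}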
I feel like this might be a common thing, so maybe there's already a simple answer I'm not able to locate? Any ideas??
Everything works fine, except when tapping the navigation Back link and returning to the previous view, the AR session inside RealityView does not terminate. The green dot camera indicator stays on, the view is still scanning the environment, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel.
I have no issues terminating QuickLook or ARSCNView.
I have a simple NavigationLink opening the RealityView...
NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}
import SwiftUI
import RealityKit

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create a horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")
            // Add the model to the anchor
            anchor.addChild(scene!)
            content.add(anchor)
            // View settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            //print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}
I have experimented with several ways of trying to close it, and I can't figure it out. I have tried State variables and custom Back buttons. I was also trying to work with pause(), but I can't seem to get that to function either.
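For example, one of the custom Back button attempts looked roughly like this; the view dismisses as expected, but the green dot and the audio keep going:
struct MyRealityView: View {
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        RealityView { content in
            // ...same anchor / scene setup as above...
        } placeholder: {
            ProgressView()
        }
        .navigationBarBackButtonHidden(true)
        .toolbar {
            ToolbarItem(placement: .topBarLeading) {
                Button("Back") {
                    // hoped an explicit dismiss would also tear down the AR session, but it doesn't
                    dismiss()
                }
            }
        }
    }
}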
Anyone else have this issue or know of a solution? What am I missing?
I was watching the Developer videos, and there was mention that RealityView handles persistent world data differently and also automatically for us.
I am having an issue finding the material I need to get up to speed on that.
In ARKit, I was able to place a model with the world data and recall that .map data. It even stored a reference image for the scene to help match the world data.
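Roughly, this is the ARKit world-map flow I'm talking about, sketched from what I remember; arView and mapFileURL stand in for the AR view and wherever the archive gets persisted:
// Saving the current world map (ARKit workflow)
arView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true) {
        try? data.write(to: mapFileURL)
    }
}

// Restoring it later by re-running the session with the saved map
if let data = try? Data(contentsOf: mapFileURL),
   let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) {
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}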
I'm looking for information on how to implement and work with those same features in RealityView, since it seems to be better/automatically integrated.
I need help being pointed in the right direction. Sample code would be amazing.
I have been digging through the docs and the developer videos, and I have noticed a mention of RealityView having some potential limitations with anchors and world tracking. However, I haven't been able to locate answers.
Does anyone know (or can point me to) whether RealityView supports everything ARView does, and if not, what the differences are?
I was fooling around with RealityView today with a simple plane anchor, and that anchor didn't seem to be as steady as I recall ARView being in the past on iPhone.
I'm trying to determine if I should roll over to RealityView or stay with ARView on this little educational project. I would imagine the answer is to go with RealityView, but I want to make sure I'm not setting myself up for failure based on any current limitations for anchors and world data.