Hi,
I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario:
I tap on the iPhone screen to select an Entity that I want to drag.
If an Entity was tapped, it should then be possible to drag it left, right, etc.
SceneKit solution:
func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, depth))
    let locationWithZ = SCNVector3Make(Float(point.x), Float(point.y), projectedOrigin.z)
    return view.unprojectPoint(locationWithZ)
}
and then I was calling:

scnView.hitTest(location, options: [SCNHitTestOption.firstFoundOnly: true])

(where scnView is my SCNView instance). This code was called from a UIPanGestureRecognizer handler in my UIViewController.
Could I reuse that code, or should I go with the SwiftUI approach, something like this?

var body: some View {
    RealityView { content in
        // ...
    }
    .gesture(TapGesture().onEnded {
    })
}
I already have this code:
@State private var location: CGPoint?

.onTapGesture { location in
    self.location = location
}
I'm trying to identify the tapped entity inside the RealityView like this:
RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box; later there will be many
    content.add(box)

    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        // TODO: find the tapped entity so that it can be dragged inside DragGesture()
    }
}
Any help would be appreciated.
I also noticed that if I create a TapGesture like this:

TapGesture(count: 1)
    .targetedToAnyEntity()

and add it to my view using .gesture(), then it is not triggered.
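After more digging, I suspect the gesture has nothing to hit-test against unless the entity carries collision shapes and an input target. Here is a minimal sketch of the RealityView approach I have in mind (assuming iOS 18's RealityView gestures and my existing createBox() helper):

RealityView { content in
    let box = createBox()
    // Gestures hit-test against collision shapes, so both of these
    // components are required before the entity can be targeted.
    box.generateCollisionShapes(recursive: true)
    box.components.set(InputTargetComponent())
    content.add(box)
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity is the entity under the finger; convert the 3D
            // gesture location into the entity's parent space and follow it.
            if let parent = value.entity.parent {
                value.entity.position = value.convert(value.location3D, from: .local, to: parent)
            }
        }
)

If that holds, the SceneKit-style unproject and hit-test code isn't needed at all; the gesture delivers the targeted entity and a 3D location directly.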
I have two planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes.
I've done this before in Unity and Godot in a fairly straightforward manner.
How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)?
Do I need to do it with a shader manually? How can I stop the z-fighting?
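For now, the closest I've managed in code is forcing the materials through the transparent blend path, which at least avoids the depth-write fighting where the planes cross. A minimal sketch, assuming a plain UnlitMaterial stands in for my textured ones (as far as I can tell, RealityKit's public material API exposes only opaque and alpha blending, so true additive blending would need a custom Metal-based material):

import RealityKit

func makeCrossedPlanes() -> Entity {
    let root = Entity()
    var material = UnlitMaterial(color: .white)
    // Route the material through the alpha-blended path so the renderer
    // sorts it instead of depth-testing it against the other plane.
    material.blending = .transparent(opacity: .init(floatLiteral: 0.999))
    for angle: Float in [0, .pi / 2] {
        let plane = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                                materials: [material])
        // Cross the two vertical planes at 90 degrees: [ –|– ]
        plane.orientation = simd_quatf(angle: angle, axis: [0, 1, 0])
        root.addChild(plane)
    }
    return root
}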
I’m trying to play an Apple Immersive video in the .aivu format with VideoPlayerComponent, following the official documentation found here:
https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent
Here is a simplified version of the code I'm running in another application:
import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}
Full code is here:
https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample
But the video does not play in my project even though the file is correct (it can be played in the Apple Immersive Video Utility), and I’m getting these errors when the app crashes:
App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
The video I’m using is the official sample, which can be found here, but I’ve tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but something on the RealityKit side of things:
https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video
Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run it. Look at the logs.
All the code can be found in ImmersiveView.swift, and the sample file is included in the project.
Expected results:
If I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.
Am I doing something wrong in the code? I'm basically following the documentation here.
Feedback ticket: FB19971306
Hello Dev Community,
I've been thinking over Apple's preference for USDZ for AR and 3D content, especially given the widely used glTF. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, glTF is also a popular format with its own merits, such as being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like glTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
I want to drag EntityA while also dragging EntityB independently.
I've tried giving each entity its own gesture, but only the most recently attached drag gesture is recognized:
RealityView { content, attachments in
    // ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            // ...
        }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            // ...
        }
)
I also tried using .simultaneously(with:), but that didn't work either; maybe I'm missing something:
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            // ...
        }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in
                    // ...
                }
        )
)
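One pattern I'm considering as a workaround (a sketch, not a confirmed fix): since sibling .gesture modifiers compete and the last one wins, attach a single drag targeted to any entity and branch on which entity the drag actually hit:

.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Dispatch on the entity this particular drag started on.
            if value.entity === EntityA {
                // handle EntityA's drag
            } else if value.entity === EntityB {
                // handle EntityB's drag
            }
        }
)

Whether two drags can actually run at the same time still depends on how the platform delivers concurrent touches, but at least both entities become draggable.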
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene.
Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappeared when I removed this code:
GKLocalPlayer.local.authenticateHandler = { viewController, error in
    // ... some more code ...
}
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
I am unable to get the visionOS 2.0 simulator to receive the GCControllerDidConnect notification, and thus am unable to set up support for a gamepad. However, it works in visionOS 1.2.
For visionOS 2.0 I've tried adding:
- the .handlesGameControllerEvents(matching: .gamepad) modifier to the view
- Supports Controller User Interaction to Info.plist
- Supported game controller types -> Extended Gamepad to Info.plist
...but the notification still doesn't fire. It does fire when the code is run in the visionOS 1.2 simulator; both simulators have the Send Game Controller To Device option enabled.
Here is the example code. It's based on the Xcode project template. The only files updated were ImmersiveView.swift and Info.plist, as detailed above:
import SwiftUI
import GameController
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
            NotificationCenter.default.addObserver(
                forName: NSNotification.Name.GCControllerDidConnect,
                object: nil, queue: nil) { _ in
                    print("Handling GCControllerDidConnect notification")
                }
        }
        .modify {
            if #available(visionOS 2.0, *) {
                $0.handlesGameControllerEvents(matching: .gamepad)
            } else {
                $0
            }
        }
    }
}

extension View {
    func modify<T: View>(@ViewBuilder _ modifier: (Self) -> T) -> some View {
        return modifier(self)
    }
}
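One workaround I plan to try (a sketch using the documented GCController.controllers() API): query the already-connected controllers when the view appears, in case the connection happened before my observer was registered.

// Check for controllers that were already connected before the observer
// was added; the notification only fires for new connections.
for controller in GCController.controllers() {
    print("Found connected controller: \(controller.vendorName ?? "unknown")")
}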
Hi, I have set up an animation using a timeline in Reality Composer Pro, as shown below.
It gets triggered by a notification posted from code. Once the timeline is triggered, I want the two animations in it to repeat until the user takes the next action. How can I make them repeat forever?
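The closest I've come up with (an untested sketch) is to subscribe to AnimationEvents.PlaybackCompleted and re-post the same trigger notification each time the timeline finishes; "Scene" and "StartTimeline" below are placeholders for my actual scene name and notification identifier:

import SwiftUI
import RealityKit
import RealityKitContent

struct TimelineLoopView: View {
    var body: some View {
        RealityView { content in
            guard let root = try? await Entity(named: "Scene", in: realityKitContentBundle) else { return }
            content.add(root)
            // Each time a timeline animation finishes, fire the trigger again.
            _ = content.subscribe(to: AnimationEvents.PlaybackCompleted.self) { _ in
                NotificationCenter.default.post(
                    name: Notification.Name("RealityKit.NotificationTrigger"),
                    object: nil,
                    userInfo: [
                        "RealityKit.NotificationTrigger.Scene": root.scene as Any,
                        "RealityKit.NotificationTrigger.Identifier": "StartTimeline"
                    ]
                )
            }
        }
    }
}

Gating the re-post on a @State flag would stop the loop once the user takes the next action.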
I've created a minimal iOS RealityKit/ARKit app with only this view controller:
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {
    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView = ARView(frame: view.bounds, cameraMode: .nonAR, automaticallyConfigureSession: true)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)
    }
}
When I run this app in the iOS 18.1 simulator using Xcode 16.1 beta 2 (16B5014f), RealityKit logs the warnings included below to the console.
I see similar warnings on the device, running iOS 18.0.
Should I be concerned about any of these warnings?
Please let me know if I should submit feedback reporting this issue.
Thank you.
Could not locate file 'default-binaryarchive.metallib' in bundle.
Registering library (/Library/Developer/CoreSimulator/Volumes/iOS_22B5045f/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.1.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute8.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute9.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute10.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute11.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute12.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
...
Compiler failed to build request
makeRenderPipelineState failed [reading from a rendertarget is not supported].
Pipeline for technique meshShadowCasterProgrammableBlending failed compilation!
I'm wondering if it's possible to do Parallax Occlusion Mapping in RealityKit? Does RealityKit's Metal shader API provide enough?
I think it would need to be able to discard fragments and thus can't be run as a deferred pass. Not sure though!
I'm wondering if anyone can suggest a workaround for the broken ARKit body tracking in iOS / iPadOS 18.0 and 18.1?
The orientation of the foot bones (and possibly other bones) is incorrect in the ARBodyAnchor returned via the ARView.session.delegate update method. It works correctly in iOS / iPadOS 17.x.
The same failure occurs in a SceneKit app via an ARBodyAnchor in ARSCNViewDelegate.
You can easily verify the problem using Apple’s own sample app: https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d
Any help would be greatly appreciated.
Hello,
I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs).
I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro, and 16 Pro) and on no non-Pro iPhone.
My app does not use LiDAR, so that's not the problem.
I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and the iPhone 14, which both have the A15 Bionic, whereas only the first one is compatible).
Did I miss something that could explain these restrictions?
Is there any plan to make this class usable by every iPhone powerful enough to run it?
Thanks in advance for answering.
In a macOS project with RealityKit and SwiftUI, adding an OrthographicCameraComponent causes crashes in both Xcode Preview and at runtime.
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            let camera = Entity()
            var component = OrthographicCameraComponent()
            component.scale = 5
            camera.position = [0, 0, 5]
            camera.components.set(component)
            content.add(camera)

            content.add(ModelEntity(mesh: .generateSphere(radius: 1)))
        }
    }
}

#Preview {
    ContentView()
}
Has anyone faced this issue or knows a fix?
Hi everyone,
I’m working on a project using RealityKit and encountering an issue with object occlusion. Specifically, I need to disable the occlusion of real-world objects (e.g., tables, walls) in my RealityView. I want virtual entities to render fully, even if real-world objects would normally block their view.
I’ve explored options in ARSession and ARWorldTrackingConfiguration but haven’t found anything that affects occlusion in RealityView. I suspect there might be a setting or approach I’ve missed.
Has anyone dealt with a similar scenario or knows how to achieve this? Any insights or pointers would be greatly appreciated!
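For reference, with the older ARView API this is a one-liner; what I'm missing is the RealityView equivalent. A sketch of the ARView route:

import RealityKit

func disableRealWorldOcclusion(in arView: ARView) {
    // Remove scene-understanding occlusion so virtual entities always
    // render on top of real-world geometry.
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}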
Thanks in advance,
Nicolas
I’m having issues getting collision shapes working in Reality Composer on iPadOS, and with Reality Composer Pro via Xcode on macOS.
I’ve posted a video recorded through my Vision Pro showing the issue.
The project I’m working on is a dice-rolling application. The dice don’t appear to work when set to Collision Shape = Automatic, which I assume takes the actual silhouette of the shape into account.
https://youtu.be/upPtQY4QOAk?si=yyx6rbSSmVkLxBLg
They also don’t rest flat on a face when they land.
Has anyone experienced this type of behavior and found a solution? I’m currently doing this with Reality Composer, but I’ll most likely also want to get it working properly in Reality Composer Pro.
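If the Automatic shape is the culprit, one code-level workaround I may try (a sketch, assuming a loaded die ModelEntity) is generating a convex hull from the render mesh and using it for both collision and physics:

func applyConvexCollision(to die: ModelEntity) async throws {
    guard let mesh = die.model?.mesh else { return }
    // A convex hull follows the die's silhouette more closely than a
    // default box, so it can settle flat on a face after rolling.
    let shape = try await ShapeResource.generateConvex(from: mesh)
    die.components.set(CollisionComponent(shapes: [shape]))
    die.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0.02, mode: .dynamic))
}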
Thx!
Everything works fine, except that when tapping the navigation Back link and returning to the previous view, the AR session inside RealityView does not terminate. The green camera-indicator dot stays on, the session is still scanning the environment, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel.
I have no issues terminating QuickLook or ARSCNView.
I have a simple NavigationLink opening the RealityView...
NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create a horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")

            // Add the model to the anchor
            anchor.addChild(scene!)
            content.add(anchor)

            // View settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            // print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}
I have experimented with several ways of trying to close it and can't figure it out. I have tried State variables and custom Back buttons. I was also trying to work with pause(), but I can't seem to get that to function either.
Anyone else have this issue or know of a solution? What am I missing?
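One partial mitigation I've found (a sketch; it silences the audio but does not stop the camera): keep a reference to the loaded scene entity and stop its audio when the view disappears.

@State private var loadedScene: Entity?

// ... assign loadedScene inside the RealityView make closure after loading ...

.onDisappear {
    // Stops spatial audio emitters; the AR session itself keeps running.
    loadedScene?.stopAllAudio()
}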
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
, which worked with SpatialTrackingSession and a custom ARSession together. This function in the the latest iOS and Xcode has since been removed in the RealityKit framework but still there on documentation.
I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession; the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session delegate callbacks (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove the SpatialTrackingConfiguration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession's configuration is an ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd functions don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without the SpatialTrackingConfiguration.
My overarching question: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and to track them live, while also using SwiftUI?
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
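For completeness, this is the SpatialTrackingSession setup I'd expect to pair with content.camera = .spatialTracking (a sketch, assuming the iOS 18 API; my understanding is that run(_:) returns the capabilities it could not enable):

import RealityKit

func startSpatialTracking() async -> SpatialTrackingSession {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.world])
    // A non-nil result lists the tracking capabilities that were denied
    // or are unavailable on this device.
    if let unavailable = await session.run(configuration) {
        print("Unavailable tracking capabilities: \(unavailable)")
    }
    return session // keep a strong reference while tracking is needed
}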
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in Reality Composer Pro, but not when loading the asset and displaying it in the Simulator, using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
I have a visionOS app that I'm adding iOS support to, and I would like to keep using RealityView.
I know there are the following modifiers for adding camera navigation:

.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)

But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that?
Thanks,
Guillermo