GKLocalPlayer.local.authenticateHandler = { viewController, error in ... }
When authenticating a player using authenticateHandler, the completion handler is only called if the player is already logged in. If the player is not logged in, the authentication window will appear but the completion handler is never called.
If I have content in a volumetric window that obscures the login window (which appears at a slight Z increase from the parent window), what can I do? If the completion handler was being called then I could make adjustments to my view, but it never gets called if the user is not already logged in.
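For reference, this is the shape of the handler I would expect to drive my layout changes from, if it were actually called (a sketch; the layout adjustment is a placeholder):

GKLocalPlayer.local.authenticateHandler = { viewController, error in
    if let error {
        print("Game Center authentication error: \(error.localizedDescription)")
        return
    }
    if viewController != nil {
        // Login UI needs to be shown: this is where I would shrink or move the
        // volumetric content out of the way (placeholder adjustment).
        // In practice this closure is never invoked when the player is signed out.
        return
    }
    // Player is authenticated; restore the volumetric content here.
}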
https://developer.apple.com/documentation/gamekit/authenticating_a_player
Thanks.
Hi,
My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes.
Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes.
This seems rather limiting. Is there a way either of these components can be used? I could build a separate "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open.
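The fallback I'm considering is a presentation-free control built from plain buttons, something like the sketch below (the type and property names are mine), but I would much rather use the system picker:

import SwiftUI

// A minimal, presentation-free color selector for use inside a volumetric scene.
struct SwatchPicker: View {
    @Binding var selectedColor: Color
    private let swatches: [Color] = [.red, .orange, .yellow, .green, .blue, .purple]

    var body: some View {
        HStack {
            ForEach(swatches, id: \.self) { swatch in
                Button {
                    selectedColor = swatch
                } label: {
                    Circle().fill(swatch).frame(width: 30, height: 30)
                }
                .buttonStyle(.plain)
            }
        }
    }
}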
Thank you
Hello,
I've been tinkering with PortalComponent on visionOS a bit, but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow the contents of the portal to peek out, similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
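For reference, my current setup boils down to roughly this (a simplified sketch; the plane size and the content entity are placeholders):

// World that is only visible through the portal.
let world = Entity()
world.components.set(WorldComponent())
world.addChild(portalContent)   // placeholder for the entity I want to peek out

// Portal surface that renders the world, clipped to this plane.
let portal = Entity()
portal.components.set(ModelComponent(
    mesh: .generatePlane(width: 1, height: 1),
    materials: [PortalMaterial()]
))
portal.components.set(PortalComponent(target: world))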
Thanks for any hints!
The context is visionOS development; I was trying to do something like this:
let root = ModelEntity()
let child1 = ModelEntity(...)
root.addChild(child1)
let child2 = ModelEntity(...)
root.addChild(child2)
only to find that, despite the entities seemingly being grouped together, I can only pick the individual child entities when I apply a DragGesture in visionOS. Any idea what's going on?
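My working assumption (which may be wrong) is that only the children end up with collision shapes and input targets, so the hit entity is always a child. These are the two workarounds I've been sketching:

// Sketch 1 (assumption): give the root its own collision shape and input target
// so a gesture can target the whole assembly. The box size is a placeholder.
root.components.set(InputTargetComponent())
root.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))

// Sketch 2: keep hitting the children, but act on their shared parent.
let drag = DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        let subject = value.entity.parent ?? value.entity   // step up to the root
        subject.position.y += 0.001                         // placeholder manipulation
    }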
In a RealityView, I have a scene loaded from Reality Composer Pro. The entity I'm interacting with has a PhysicallyBasedMaterial with a diffuse color. I want to change that color on a long press. I can get the entity and even get a reference to the material, but I can't seem to change anything about it.
What is the best way to change the color of a material at runtime?
var longPress: some Gesture {
    LongPressGesture(minimumDuration: 0.5)
        .targetedToAnyEntity()
        .onEnded { value in
            value.entity.position.y = value.entity.position.y + 0.01
            if var shadow = value.entity.components[GroundingShadowComponent.self] {
                shadow.castsShadow = true
                value.entity.components.set(shadow)
            }
            if let model = value.entity.components[ModelComponent.self] {
                print("material", model)
                if let mat = model.materials.first {
                    print("material", mat)
                    // I have a material here but I can't set any properties?
                    // mat.diffuseColor does not exist
                }
            }
        }
}
Here is the full code
struct Lab5026: View {
    var body: some View {
        RealityView { content in
            if let root = try? await Entity(named: "GestureLab", in: realityKitContentBundle) {
                root.position = [0, -0.45, 0]
                if let subject = root.findEntity(named: "Cube") {
                    subject.components.set(HoverEffectComponent())
                    subject.components.set(GroundingShadowComponent(castsShadow: false))
                }
                content.add(root)
            }
        }
        .gesture(longPress.sequenced(before: dragGesture))
    }

    var longPress: some Gesture {
        LongPressGesture(minimumDuration: 0.5)
            .targetedToAnyEntity()
            .onEnded { value in
                value.entity.position.y = value.entity.position.y + 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = true
                    value.entity.components.set(shadow)
                }
                if let model = value.entity.components[ModelComponent.self] {
                    print("material", model)
                    if let mat = model.materials.first {
                        print("material", mat)
                        // I have a material here but I can't set any properties?
                        // mat.diffuseColor does not exist
                        // PhysicallyBasedMaterial
                    }
                }
            }
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let newPosition = value.convert(value.location3D, from: .global, to: value.entity.parent!)
                let limit: Float = 0.175
                value.entity.position.x = min(max(newPosition.x, -limit), limit)
                value.entity.position.z = min(max(newPosition.z, -limit), limit)
            }
            .onEnded { value in
                value.entity.position.y = value.entity.position.y - 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = false
                    value.entity.components.set(shadow)
                }
            }
    }
}
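For what it's worth, this is the pattern I would have expected to need inside the long-press handler, though I haven't confirmed it (my assumption being that materials are value types, so the modified material has to be written back into the component):

if var model = value.entity.components[ModelComponent.self],
   var pbr = model.materials.first as? PhysicallyBasedMaterial {
    pbr.baseColor.tint = .red          // change the tint of the existing base color
    model.materials[0] = pbr           // write the modified material back
    value.entity.components.set(model) // and re-set the component on the entity
}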
I'm working on a project in Xcode where I need to use a 3D model with multiple morph targets (shape keys in Blender) for animations. The model, specifically the Wolf3D_Head node, contains dozens of morph targets which are crucial for my project. Here's what I've done so far:
I verified the morph targets in Blender (I can see all the morph targets correctly when opening both the original .glb file and the converted .dae file in Blender).
Given that Xcode does not support the .glb file format directly, I converted the model to .dae format, aiming to use it in my Xcode project. After importing the .dae file into Xcode, I noticed that Xcode does not show any morph targets for the Wolf3D_Head node or any other node in the model.
I've already attempted using tools like ColladaMorphAdjuster and another version by JakeHoldom to adjust the .dae file, hoping Xcode would recognize the morph targets, but it didn't resolve the issue.
My question is: How can I make Xcode recognize and display the morph targets present in the .dae file exported from Blender? Is there a specific process or tool that I need to use to ensure Xcode properly imports all the morph target information from a .dae file?
Tools tried: https://github.com/JonAllee/ColladaMorphAdjuster, https://github.com/JakeHoldom/ColladaMorphAdjuster
Thanks in advance!
Hi all,
I took a bunch of photos using Apple's 'Capture Sample' iOS app. Even though all of the images are in the .HEIC/HEIF file format, the CLI tool logs the following errors, and I couldn't find any solution:
1-) HEIF file is expected.
2-) *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
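For context, my understanding is that the CLI tool boils down to roughly the following (a simplified sketch, not the actual sample source; the paths are placeholders), and the errors above appear while it processes the image folder:

import Foundation
import RealityKit

// Simplified sketch of the reconstruction step (placeholder paths).
let inputFolder = URL(fileURLWithPath: "/path/to/CaptureSample/Images/", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: PhotogrammetrySession.Configuration())

// Listen for completion, then kick off the request.
let outputs = Task {
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Finished:", outputFile.path)
        }
    }
}
try session.process(requests: [.modelFile(url: outputFile, detail: .medium)])
try await outputs.value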
I'm using RealityKit for a scene with many static and dynamic ModelEntities simulating physics. When all the entities have simple collision generated from .generateCollisionShapes I don't see any issues, but some entities need much more complex and accurate collision. For those I've been using ShapeResource.generateStaticMesh with the mesh's data (2769 positions, 16272 face indices in this case), which works exactly as desired with a low entity count. However, once there are 600+ dynamic entities, introducing even one static entity with complex collision will reliably trigger a crash when it collides with one of the dynamic entities (not necessarily on first contact, but inevitably after multiple collisions).
If I arbitrarily limit the number of entities to a maximum of around 500, that seems to prevent the issue, though the likelihood appears to increase with the entity count, so there may still be a low probability of it triggering even at 500 entities that I simply haven't hit while testing.
If PhysX imposes some kind of entity or collision face/shape limit, I'd at least like to know exactly what it is, but ideally there's a way to work around this. Right now my "fix" is just arbitrarily restricting the entity count in a way that limits what my app can do.
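For reference, this is roughly how the complex collision is set up (a simplified sketch; the load functions are placeholders for my mesh data):

// Simplified sketch of the collision setup on the complex static entity.
// loadPositions() / loadFaceIndices() are placeholders for my mesh data
// (2769 positions, 16272 face indices in this case).
func applyComplexCollision(to staticEntity: Entity) async throws {
    let positions: [SIMD3<Float>] = loadPositions()
    let faceIndices: [UInt16] = loadFaceIndices()
    let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                           faceIndices: faceIndices)
    staticEntity.components.set(CollisionComponent(shapes: [shape]))
    staticEntity.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))
}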
The crash triggers inside
0x00000001a6790dfc in physx::PxcDiscreteNarrowPhasePCM(physx::PxcNpThreadContext&, physx::PxcNpWorkUnit const&, physx::Gu::Cache&, physx::PxsContactManagerOutput&) ()
which looks like this (crash line has an -> arrow at the bottom)
CoreRE`physx::PxcDiscreteNarrowPhasePCM:
...
0x1a6790df0 <+668>: mov x1, x24
0x1a6790df4 <+672>: bl 0x1a67913d8 ; physx::PxcNpCacheStreamPair::reserve(unsigned int)
0x1a6790df8 <+676>: ldrb w8, [x23]
-> 0x1a6790dfc <+680>: str w8, [x0, #0x20]
I've been trying to animate the OpacityComponent to fade in/out entities in my scene. I've tried animating the component with an AnimationResource as well as tried animating with a custom System. Both worked fine in the simulator, but failed on device.
AnimationResource: When I animated the opacity of an entity using an animation with an opacity bind target, the entity would not change opacity until I physically looked away from the object. It's almost as if the device keeps an entity visible for as long as you keep looking at it, but once you look away it plays the animation.
System: I created a custom system that manually changes the opacity over time, however, on device the gradual fade of the entity doesn't work. Instead, the entity literally pops in/out of view instead of fading.
Can someone explain exactly how this component is supposed to be used? The simulator plays the animations exactly the way I would expect, but on device it's completely different.
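For concreteness, this is essentially the AnimationResource path I'm using (simplified; entity stands in for the entity being faded):

// Simplified version of my fade-out; the entity already has an OpacityComponent.
entity.components.set(OpacityComponent(opacity: 1.0))
let fade = FromToByAnimation<Float>(from: 1.0,
                                    to: 0.0,
                                    duration: 0.5,
                                    bindTarget: .opacity)
if let resource = try? AnimationResource.generate(with: fade) {
    entity.playAnimation(resource)
}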
Edit:
I'm trying to change the opacity of entities with a VideoMaterial added to a ModelComponent. The fade animations are performed at certain points in the video that are triggered by an AVPlayer time boundary observer.
We have been using attachment.bounds.extents to determine the size of a RealityView attachment at run time. It had been working fine until the visionOS 1.1 update. I wonder if we are doing something wrong, as the release notes suggest a visual bounds calculation issue was fixed in the latest release. The funny thing is that we did not have an issue before.
Below is how we access to height value:
let height = attachmentEntity.attachment.bounds.extents.y
Previously it returned the correct value. Now it returns 0.
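The only workaround I can think of is to ask the entity itself for its visual bounds instead (a sketch; whether this matches the old behavior is an assumption on my part):

// Workaround sketch: query the attachment entity's visual bounds rather than
// the attachment component's bounds.
let height = attachmentEntity.visualBounds(relativeTo: nil).extents.y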
I wonder if anyone else is having the same issue.
In the WWDC talk "Enhance your spatial computing app with RealityKit," we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped.
How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience?
I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as walking behind the portal), this can break the effect, and I was unable to break the effect in the "Encounter Dinosaurs" experience.
If it helps at all: I have noticed that if you look at the edge of the portal from very close, the rocks do not stick out the way the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
I have a volumetric window that I am using to display 3D content.
The issue I have is that the 3D models rotate when the user moves the window. I want the rotation across the Y-axis to remain fixed when the user repositions the window. Is that possible?
Also, is there a way to visually debug the walls of the 3D volume window?
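On the second question, the only approach I've come up with so far is to drop a semi-transparent box into the RealityView content that roughly matches the volume's size (a sketch; the 0.6 m dimensions are an assumption based on my defaultSize, and I'm not certain the color's alpha actually renders as translucency with SimpleMaterial):

// Rough visual-debugging sketch: a box approximating the volume's bounds.
let debugBounds = ModelEntity(
    mesh: .generateBox(width: 0.6, height: 0.6, depth: 0.6),
    materials: [SimpleMaterial(color: .green.withAlphaComponent(0.15), isMetallic: false)]
)
content.add(debugBounds)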
My MacBook Air M1 has macOS Sonoma 14.3.1 installed, and I tried to install the game-porting-toolkit tonight. After the step that requires me to input the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for minutes, but at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build.
I don't know anything about coding and software. Could someone please tell me what caused this error and how to fix it? I would appreciate your help!
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene.
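For completeness, this is the CompositorServices route I'm aware of but hoping to avoid (a sketch; I haven't verified it on device):

import CompositorServices

// Sketch: a layer configuration that turns foveation off for a CompositorLayer,
// used roughly as ImmersiveSpace { CompositorLayer(configuration: NoFoveationConfiguration()) { ... } }.
struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.isFoveationEnabled = false
    }
}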
Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
Hi,
I'm trying to rotate an entity on Apple Vision Pro.
Most of the code is the same as the Diorama code from WWDC23.
The problem I'm having is that the rotation occurs, but the axis of the rotation is not the center of my object.
It seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to tell it to rotate around the entity, not the space?
Is it even possible?
This is the code, the rotation is at the end.
var body: some View {
    @Bindable var viewModel = viewModel
    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()
            // Offset the scene so it doesn't appear underneath the user or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)
            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: PointOfInterestComponent.self, { event in
                createLearnMoreView(for: event.entity)
            }))
            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
}
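For clarity, the effect I'm after would be equivalent to rotating the entity itself around its own origin, something like the sketch below (whether that origin matches the visual center presumably depends on the asset's pivot):

// Sketch: rotate the entity rather than the SwiftUI view.
if let entity = viewModel.rootEntity {
    entity.transform.rotation = simd_quatf(angle: Float(currentrotateByX), axis: [0, 1, 0])
}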
Hello,
I've been trying to render these models in a visionOS app using RealityKit's Model3D API. The heart seems to appear dark all the time. Any thoughts on why this would happen?
Color.clear
    .overlay {
        Model3D(named: modelName, bundle: realityKitContentBundle) { model in
            model.resizable()
                .scaledToFit()
                .rotation3DEffect(
                    Rotation3D(
                        eulerAngles: .init(angles: orientation, order: .xyz)
                    )
                )
                .frame(depth: modelDepth)
                .offset(z: -modelDepth / 2)
                .accessibilitySortPriority(1)
        } placeholder: {
            ProgressView()
                .offset(z: -modelDepth * 0.75)
        }
    }
    .dragRotation(yawLimit: .degrees(120), pitchLimit: .degrees(20))
    .offset(z: modelDepth)
I want to draw a pixel buffer directly to the screen with the Metal API.
In OpenGL I could use glDrawPixels.
How do I do this in Metal?
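The closest I've pieced together from the docs is a blit-based approach like the sketch below (unverified; pixels, width, height, and the layer setup are placeholders), but I don't know if this is the intended replacement for glDrawPixels:

import Metal
import QuartzCore

// Sketch: copy a CPU pixel buffer into an MTLTexture, then blit that texture
// into the CAMetalLayer drawable. Assumes BGRA8 data with tightly packed rows,
// and that layer.framebufferOnly == false (otherwise the drawable can't be a blit target).
func present(pixels: UnsafeRawPointer, width: Int, height: Int,
             device: MTLDevice, queue: MTLCommandQueue, layer: CAMetalLayer) {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width, height: height,
                                                        mipmapped: false)
    guard let staging = device.makeTexture(descriptor: desc),
          let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    // Upload the CPU buffer into the staging texture.
    staging.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0,
                    withBytes: pixels,
                    bytesPerRow: width * 4)

    // Copy into the drawable's texture (sizes must match in this simple sketch).
    blit.copy(from: staging, sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: width, height: height, depth: 1),
              to: drawable.texture, destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}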
Hello, I'm currently building an app that implements the on-device object capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet:
Can on-device object capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models - if there is an option to disable it, where would one specify that in code?
Can models be exported to .obj instead of .usdz? In the WWDC2021 session (around 3:00) it is mentioned that this is possible with the Apple Silicon API, but what about with on-device scanning?
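On the first question, the only related API I've found is the device-support check below; I haven't found any way to opt out of depth from code, which is partly why I'm asking:

// ObjectCaptureSession is only available on supported devices; I haven't found
// an option here (or elsewhere) to disable depth/LiDAR usage.
if ObjectCaptureSession.isSupported {
    // start the capture flow
}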
I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
I am working on an application where we are planning to use Metal for directly rendering custom content. When the user looks at something in the rendered image, I want to get the position or ray of the gaze cursor (the point the user is currently looking at) so I can render something else, like a crosshair. Is it possible to get this cursor position information on visionOS? How can I know whether something is being hovered on by the eyes?
Hi,
I am investigating how to achieve emissive (glowing) effects like the following in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now I'm trying various things with the Shader Graph in Reality Composer Pro, but I can't tell from the official documentation and WWDC session videos what the individual Shader Graph nodes do or what effects their combinations produce, so I'm having a hard time understanding them.
I have a feeling that such luminous materials and effects may not be possible in visionOS to begin with. If there is a way to achieve this, please let me know.
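In code (outside Shader Graph), the closest thing I've found is the emissive parameters on PhysicallyBasedMaterial, though I don't know whether they produce the kind of glow shown in those links (a sketch):

// Sketch: emissive settings on a PBR material. Whether this reads as a real
// glow/bloom on visionOS is exactly what I'm unsure about.
var material = PhysicallyBasedMaterial()
material.emissiveColor = .init(color: .orange)
material.emissiveIntensity = 2.0
let glowSphere = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [material])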
Thanks.