Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. I then discovered that the issue disappears when I remove the following code:
GKLocalPlayer.local.authenticateHandler = { viewController, error in
// ... some more code ...
}
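For reference, here is a minimal sketch of the combination that reproduces the problem for me (the view and entity setup below are illustrative, not my actual app code):

import SwiftUI
import RealityKit
import GameKit

struct TrackedARView: View {
    var body: some View {
        RealityView { content in
            // Enable AR world tracking with camera pass-through (iOS/iPadOS 18).
            content.camera = .worldTracking

            // Something simple to look at so that tracking is visible.
            let box = ModelEntity(mesh: .generateBox(size: 0.1))
            let anchor = AnchorEntity(world: [0, 0, -0.5])
            anchor.addChild(box)
            content.add(anchor)
        }
        .onAppear {
            // As soon as this handler is set, tracking and pass-through stop working;
            // removing it restores the expected behavior.
            GKLocalPlayer.local.authenticateHandler = { viewController, error in
                // ... present viewController / handle error ...
            }
        }
    }
}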
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
I have various USDZ files in my visionOS app. Loading the USDZ files works quite well. I only have problems with the positioning of the 3D model. For example, I have a USDZ file that is displayed directly above me. I can't move the model or perform any other actions on it. If I sit on a chair or stand up again, the 3D model automatically moves with me. This is my source code for loading the USDZ files:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State var modelName: String
    @State private var loadedModel = Entity()

    var body: some View {
        RealityView { content in
            if let usdModel = try? await Entity(named: modelName) {
                print("====> \(modelName) : \(usdModel) <====")
                // Use the visual bounds to size the collision shape.
                let bounds = usdModel.visualBounds(relativeTo: nil).extents
                usdModel.scale = SIMD3<Float>(1.0, 1.0, 1.0)
                usdModel.position = SIMD3<Float>(0.0, 0.0, 0.0)
                usdModel.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                usdModel.components.set(HoverEffectComponent())
                usdModel.components.set(InputTargetComponent())
                loadedModel = usdModel
                content.add(usdModel)
            }
        }
    }
}
For now, I only want the 3D models from the USDZ files to be displayed; being able to move them via gestures is step 2. First, I need to make sure the models are displayed correctly. What have I forgotten or done wrong?
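For comparison, here is a minimal sketch of explicit world anchoring (not necessarily the fix; the position values are arbitrary and this assumes the view is opened in an ImmersiveSpace):

// Inside the RealityView closure, instead of adding the model directly:
let anchor = AnchorEntity(world: [0, 1, -1.5])   // 1.5 m in front of the world origin, 1 m up
anchor.addChild(usdModel)
content.add(anchor)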
We have a production Metal app with a complex multithreaded Metal pipeline.
When everything is operating smoothly, it works great.
Even when extremely overloaded, it can run for days at a time without crashing. But that still isn't good enough for our users.
Unfortunately, since I have zero visibility into an id<MTLBuffer>, I have no way of knowing when Metal is "done" with it.
When overloaded, stale Metal render passes need to be 'aborted', which results in Metal callbacks not being called. For example, these callbacks may not be called after an aborted pass:
id<MTLCommandBuffer> m_cmdbuf;

[m_cmdbuf addScheduledHandler:^(id<MTLCommandBuffer> cb) {
    cpr->scheduled = MachAbsoluteTime();
}];

[m_cmdbuf addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    cpr->completed = MachAbsoluteTime();
}];
For the moment, our workaround is a system which waits a few seconds after we "think" a rendering pass should be done with all its (aborted) resources before releasing buffers. This is not ideal, to say the least.
So, in summary, my question is: it would be nice to be able to 'query' an id<MTLBuffer> to know when Metal is done with it, so that we know it's safe to release it along with our own internal resources.
Is there any such (undocumented) mechanism? I have exhaustively read all existing Metal documentation many times.
An idea that I've been toying with: it would be nice to have something akin to zombie detection running all the time, but for id<MTLBuffer> only.
In OpenGL, it was OK to use a released texture: you might display a bad frame, but you wouldn't crash. Is there any similar option for id<MTLTexture>?
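For concreteness, a sketch of the kind of query being asked for, approximated with the public MTLCommandBuffer.status property (Swift for brevity; this only helps for command buffers that were actually committed, which is exactly what breaks down for aborted passes):

import Metal

// A command buffer in a terminal state (.completed or .error) no longer references
// any of the resources encoded into it, so they can be released safely.
func gpuIsDone(with commandBuffers: [MTLCommandBuffer]) -> Bool {
    commandBuffers.allSatisfy { $0.status == .completed || $0.status == .error }
}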
I'm building an iOS/iPadOS app for iOS 18+ using the new RealityView in SwiftUI. (I may add visionOS, but I'm not focusing on it right now.) The 3D scene I'm rendering is fairly simple (just a few dozen vertices and a couple of textures), and I'd like to render it at 120fps on ProMotion devices if possible. I tried setting CADisableMinimumFrameDurationOnPhone to true in the info plist, but it had no effect. The frame rate in the GPU Report in Xcode stays capped at 60fps, and the gauge even tops out at 60.
My question is kind of the opposite of this post, which asks how to limit the frame rate of a RealityView.
I'm on Xcode 16 beta 5 on macOS Sonoma and iOS 18.0 beta 6 on my iPhone 15 Pro.
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface.
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
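A sketch of that loading call (the material path, scene file, and bundle below are placeholders, not my exact project values):

import RealityKit
import RealityKitContent

func applyGraphMaterial(to modelEntity: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(
        named: "/Root/UnlitTextureMaterial",   // node graph path inside the scene (placeholder)
        from: "Scene.usda",                    // Reality Composer Pro scene file (placeholder)
        in: realityKitContentBundle            // the RealityKitContent package bundle
    )
    modelEntity.model?.materials = [material]
}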
However, on visionOS 1.2 and earlier, either in the Simulator or on device, ShaderGraphMaterial(named:from:in:) fails and an error is logged to the console.
If I experimentally open the same project in Reality Composer Pro 1.0 and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
Is it a known issue that Reality Composer 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
How many 32-bit variables can I use concurrently in a single thread of a Metal compute kernel without worrying about the variables getting spilled into the device memory? Alternatively: how many 32-bit registers does a single thread have available for itself?
Let's say that each thread of my compute kernel needs to store and work with its own array of N float variables, where N can be 128, 256, 512 or more. To achieve maximum possible performance, I do not want the local thread variables to get spilled into the slow device memory. I want all N variables to be stored "on-chip", in the thread memory space.
To make my question more concrete, let's say there is an array thread float localArray[N]. Assuming an unrealistic hypothetical scenario where localArray is the only variable in the whole kernel, what is the maximum value of N for which no portion of localArray would get spilled into the device memory?
I searched in the Metal feature set tables, but I could not find any details.
I am currently working on a project where I aim to overlay the camera feed obtained via the Apple Vision Pro's camera access API to align perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
The attached image shows the difference between the camera feed and the field of view from the user's actual perspective.
I want to align the camera feed image to the user's perspective.
I'm trying to create a custom Metal-based visual effect as a UIView to be used inside an existing UIKit-based interface. (An example might be a view that applies a blur effect to what's behind it.) I need to capture the MTLTexture of what's behind the view so that I can feed it to MTLRenderCommandEncoder.setFragmentTexture(_:index:). Can someone show me how or point me to an example? Thanks!
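For context, here is a naive CPU round-trip sketch of getting "what's behind the view" into an MTLTexture (the helper is made up, and this is probably far too slow for a live effect):

import UIKit
import MetalKit

// Naive sketch: snapshot the superview with UIKit, then convert the result into an
// MTLTexture with MTKTextureLoader. A real-time effect would need a faster path.
func textureBehind(_ view: UIView, device: MTLDevice) throws -> MTLTexture? {
    guard let superview = view.superview else { return nil }

    // Hide the effect view so it does not capture itself.
    view.isHidden = true
    defer { view.isHidden = false }

    // Rendering with bounds == view.frame crops the snapshot to the area behind the view.
    let renderer = UIGraphicsImageRenderer(bounds: view.frame)
    let image = renderer.image { _ in
        _ = superview.drawHierarchy(in: superview.bounds, afterScreenUpdates: true)
    }
    guard let cgImage = image.cgImage else { return nil }

    return try MTKTextureLoader(device: device).newTexture(cgImage: cgImage, options: nil)
}

The resulting texture would then be passed to MTLRenderCommandEncoder.setFragmentTexture(_:index:); it's the capture step that I'd like to do properly.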
Hi everyone,
I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content.
Here's a simplified version of my setup:
import AVFoundation
import RealityKit

func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float,
                       aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = fitsWidth ? canvasWidth : canvasHeight * aspectRatio
    let height = fitsWidth ? canvasWidth * (1 / aspectRatio) : canvasHeight
    // Note: generatePlane(width:depth:) creates a plane lying in the XZ plane.
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}
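The screen entity is created and added roughly like this (the URL, sizes, anchor, and arView below are placeholders):

// Placeholders: the video URL, canvas size, anchor position, and `arView` (the existing ARView).
let item = AVPlayerItem(url: URL(string: "https://example.com/video.mp4")!)
let screen = createVideoScreen(video: item,
                               canvasWidth: 1.0,
                               canvasHeight: 0.6,
                               aspectRatio: 16.0 / 9.0)
let anchor = AnchorEntity(world: [0, 0, -1])
anchor.addChild(screen)
arView.scene.addAnchor(anchor)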
Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it?
Thanks in advance!
Hello,
my project is simple: first, I want to draw wireframe hexahedrons, tetrahedrons, and octahedrons.
I can draw a cube with Metal, but I haven't found out how to do rotation, translation, and scaling.
I have searched for help; the examples I found are too complicated for me.
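If it helps frame the question, what I'm looking for is something like the following model-matrix setup (a sketch with simd in Swift; values are arbitrary and the shader-side multiplication is not shown):

import simd

func translationMatrix(_ t: SIMD3<Float>) -> float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return m
}

func scaleMatrix(_ s: SIMD3<Float>) -> float4x4 {
    float4x4(diagonal: SIMD4<Float>(s.x, s.y, s.z, 1))
}

func rotationMatrix(angle: Float, axis: SIMD3<Float>) -> float4x4 {
    float4x4(simd_quatf(angle: angle, axis: simd_normalize(axis)))
}

// Order matters: scale first, then rotate, then translate.
let modelMatrix = translationMatrix([0, 0, -2])
                * rotationMatrix(angle: .pi / 4, axis: [0, 1, 0])
                * scaleMatrix([1, 1, 1])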
Best regards,
VanceRegnet
Hello,
I want to create a painting app for iOS and I saw many examples use a CAShapeLayer to draw a UIBezierPath.
As I understand CoreAnimation uses the GPU so I was wondering how is this implemented on the GPU? Or in other words, how would you do it with Metal or OpenGL?
I can only think of continuously updating a texture in response to the user's drawing but that would be a very resource intensive operation...
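A rough sketch of that texture-updating idea, rendering only the newly added stroke segments into a persistent offscreen texture each frame so the cost stays low (pipeline and vertex buffer setup omitted; names are illustrative):

import Metal

// The canvas texture is kept alive across frames; loadAction .load preserves what was
// drawn previously, so each pass only adds the latest stroke segments.
func encodeStrokePass(into canvas: MTLTexture,
                      commandBuffer: MTLCommandBuffer,
                      pipeline: MTLRenderPipelineState,
                      strokeVertexBuffer: MTLBuffer,
                      vertexCount: Int) {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = canvas
    pass.colorAttachments[0].loadAction = .load      // keep previous strokes
    pass.colorAttachments[0].storeAction = .store

    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(pipeline)
    encoder.setVertexBuffer(strokeVertexBuffer, offset: 0, index: 0)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: vertexCount)
    encoder.endEncoding()
}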
Thanks
Every now and then my SceneKit game app crashes and I have no idea why. The SCNView has a overlaySKScene, so it might also be SpriteKit's fault.
The stack trace is
#0 0x0000000241c1470c in jet_context::set_fragment_texture(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, jet_texture*) ()
#27 0x000000010572fd40 in _pthread_wqthread ()
Does anyone have an idea where I could start debugging this, without being able to consistently reproduce it?
I am trying to get a little game prototype up and running in Metal using the metal-cpp libraries, where I run everything natively at 120Hz with a coupled renderer and vsync enabled, so that I have the absolute minimum physically possible input-to-photon latency.
// Create the metal view
SDL_MetalView metal_view = SDL_Metal_CreateView(window);
CA::MetalLayer *swap_chain = (CA::MetalLayer *)SDL_Metal_GetLayer(metal_view);
// Set up the Metal device
MTL::Device *device = MTL::CreateSystemDefaultDevice();
swap_chain->setDevice(device);
swap_chain->setPixelFormat(MTL::PixelFormat::PixelFormatBGRA8Unorm);
swap_chain->setDisplaySyncEnabled(true);
swap_chain->setMaximumDrawableCount(2);
I am using SDL3 just for creating the window. Now when I go through my game/render loop, I stall for a long time on getting the next drawable, which is understandable since my app finishes its frame in about 2-3 ms.
m_CurrentContext->m_Drawable = m_SwapChain->nextDrawable();
m_CurrentContext->m_CommandBuffer = m_CommandQueue->commandBuffer()->retain();
char frame_label[32];
snprintf(frame_label, sizeof(frame_label), "Frame %d", m_FrameIndex);
m_CurrentContext->m_CommandBuffer->setLabel(NS::String::string(frame_label, NS::UTF8StringEncoding));
m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal] = MTL::RenderPassDescriptor::alloc()->init();
MTL::RenderPassColorAttachmentDescriptor* cd = m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal]->colorAttachments()->object(0);
cd->setTexture(m_CurrentContext->m_Drawable->texture());
cd->setLoadAction(MTL::LoadActionClear);
cd->setClearColor(MTL::ClearColor( 0.53f, 0.81f, 0.98f, 1.0f ));
cd->setStoreAction(MTL::StoreActionStore);
However, my ProMotion display does not reliably run at 120Hz when fullscreen with direct-to-display; it seems to run faster when windowed and composited, which is the opposite of what I would expect. The Metal HUD says 120Hz, but the delay in getting the next drawable and what Instruments shows tell a different story.
When I profile it, the game loop has completed and is sitting there waiting for the next drawable, but the screen does not want to complete in 8.33ms, so the whole thing slows down for no discernible reason.
Also as a game developer it is very strange for the command buffer to actually need the drawable texture free to be allowed to encode commands - usually the command buffers and swapping the front and back render buffers are not directly dependent on each other. Usually you only actually need the render buffer texture free when you want to draw to it. I could give myself another drawable, but because I am completing in less than 3ms, all it would do would be to add another frame of latency.
I also looked at the FramePacing example and its behaviour is even worse at having high framerate with low latency - the direct to display is always rejected for some reason.
Is this just a flaw in the Metal API? Or am I missing something important? I hope someone can help - the behaviour of the display is baffling.
On macOS, system symbols display in an SKTexture as expected, with the correct color and aspect ratio.
But on iOS they are always displayed in black, and sometimes with a slightly wrong aspect ratio.
Is there a solution to this problem?
import SpriteKit
#if os(macOS)
import AppKit
#else
import UIKit
#endif

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let systemImage = "square.and.arrow.up"
        let width = 400.0
        #if os(macOS)
        let image = NSImage(systemSymbolName: systemImage, accessibilityDescription: nil)!
            .withSymbolConfiguration(.init(hierarchicalColor: .white))!
        let scale = NSScreen.main!.backingScaleFactor
        image.size = CGSize(width: width * scale,
                            height: width / image.size.width * image.size.height * scale)
        #else
        let image = UIImage(systemName: systemImage)!
            .applyingSymbolConfiguration(.init(pointSize: width))!
            .applyingSymbolConfiguration(.init(hierarchicalColor: .white))!
        #endif
        let texture = SKTexture(image: image)
        print(image.size, texture.size(), image.size.width / image.size.height)
        let size = CGSize(width: width, height: width / image.size.width * image.size.height)
        addChild(SKSpriteNode(texture: texture, size: size))
    }
}
In my app, I have an ARView that has cameraMode set to nonAR.
I occasionally hide the ARView when it is not needed and reveal it again later.
While the ARView is hidden, I'd like to pause the animation to save iPhone battery life. I'd also like to do this when I know that the animation in my scene has paused and the contents of the view, although still visible, are static.
This was possible using SceneKit, but I can't seem to find an equivalent way to do it using RealityKit.
At least as of iOS 18, a hidden ARView with an empty scene appears to use approximately 30% of the CPU.
How can I pause ARView so that it won't use the battery unnecessarily?
Thank you for considering this question.
So I get JPEG data in my app. Previously I was using the higher level NSBitmapImageRep API and just feeding the JPEG data to it.
But now I've noticed that on Sonoma, if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and am trying to use the Accelerate API to convert it to another format (to hopefully work around the issue):
CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);
Now I use vImageConverter_CreateWithCGImageFormat... with the following values for source and destination formats:
Source format: (derived from sourceCGImage)
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault
Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault
But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. If I change the destination format to use 32 bitsPerPixel and include alpha in the bitmapInfo, vImageConverter_CreateWithCGImageFormat no longer returns an error, but I get a black image, just like with NSBitmapImageRep.
I have created a turn-based game using GameKit. Everything is pretty much done; the last thing left to do is the turn timeout.
I am passing GKTurnTimeoutDefault into the timeout argument in:
func endTurn(withNextParticipants nextParticipants: [GKTurnBasedParticipant], turnTimeout timeout: TimeInterval, match matchData: Data)
However when I check the .timeoutDate property of the GKTurnBasedParticipant participants, the value is always nil.
What am I doing wrong? Am I checking the right property, or is there another one that I don't know about? I have tried passing different values to the timeout parameter, but timeoutDate is always nil.
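For concreteness, a simplified sketch of the call and the check (using the async API; match, nextParticipants, and matchData stand in for my actual values):

import GameKit

func finishTurn(in match: GKTurnBasedMatch,
                nextParticipants: [GKTurnBasedParticipant],
                matchData: Data) async throws {
    try await match.endTurn(withNextParticipants: nextParticipants,
                            turnTimeout: GKTurnTimeoutDefault,
                            match: matchData)

    // Afterwards, every participant's timeoutDate is still nil.
    for participant in match.participants {
        print(participant.timeoutDate as Any)
    }
}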
Has anyone successfully implemented a timeout using GKTurnBasedMatch ?
I'll leave this here for anyone who's interested: it is possible to use Windows VR on an ARM Mac, at least to a limited extent. Right now it's just some demos, but I am still working on solutions: https://www.youtube.com/watch?v=qbucnU0dpDo&t=431s&ab_channel=NightSightProductions
I am sure others will agree with me on this. I personally don't like the way the new reactions look. There are too many different colors for the reactions. I honestly prefer the old grey version of the reactions to text messages. The extra emoji option is okay, but the change in color for the heart, thumbs up, and the other reactions is not the best. Autocorrect is also horrible in this new update, by the way.
Hello,
I’m working with the GameKit API, and I am encountering an issue when submitting a player’s score to a leaderboard at the end of a game.
Goal:
After submitting the new score to a leaderboard, I want to immediately fetch and display the updated leaderboard that reflects the new score.
Problem:
After successfully submitting the player’s score, when I fetch the leaderboard, the entries are not updated right away. The fetched leaderboard still shows the outdated player score.
Is this delay in updating the leaderboard expected behavior, or am I missing something in my implementation?
Steps to Reproduce:
Submit the local player’s score to Leaderboard X.
On successful submission, fetch the leaderboard entries for Leaderboard X.
Expected Result:
The fetched leaderboard should reflect the updated player score immediately.
Actual Result:
The fetched leaderboard shows the outdated score, with no immediate update.
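In code, the submit-then-fetch flow looks roughly like this (the leaderboard ID, score, and entry range are placeholders):

import GameKit

func submitAndReload(score: Int) async throws {
    let ids = ["leaderboard.x"]   // placeholder leaderboard ID

    try await GKLeaderboard.submitScore(score,
                                        context: 0,
                                        player: GKLocalPlayer.local,
                                        leaderboardIDs: ids)

    // Fetch the same leaderboard again right after submitting.
    let leaderboards = try await GKLeaderboard.loadLeaderboards(IDs: ids)
    guard let leaderboard = leaderboards.first else { return }

    let (localEntry, entries, _) = try await leaderboard.loadEntries(for: .global,
                                                                     timeScope: .allTime,
                                                                     range: NSRange(location: 1, length: 10))

    // At this point localEntry / entries still show the previous score.
    print(localEntry?.score ?? -1, entries.count)
}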
As a workaround, I update the leaderboard entries locally myself; that does the job, but it is error-prone and requires extra effort.