What is the recommended way to attach SwiftUI views to RealityKit entities on macOS, iOS, etc?
All the APIs seem to be visionOS only:
https://developer.apple.com/documentation/realitykit/realityviewattachments
https://developer.apple.com/documentation/realitykit/viewattachmentcomponent
https://developer.apple.com/documentation/realitykit/presentationcomponent
https://developer.apple.com/documentation/realitykit/imagepresentationcomponent
My only idea is to do it "manually," somehow overlaying SwiftUI views on top of a RealityView in a ZStack?
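For reference, here is a minimal sketch of that workaround, assuming the cross-platform RealityView available in iOS 18/macOS 15; the entity and layout are illustrative only, and the SwiftUI view is simply composited above the RealityView rather than attached to the entity in 3D space:
import SwiftUI
import RealityKit

// Minimal sketch of the "manual" approach: a plain SwiftUI overlay in a ZStack,
// not a true attachment. The entity and label are placeholders.
struct OverlayExample: View {
    var body: some View {
        ZStack(alignment: .topLeading) {
            RealityView { content in
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                content.add(sphere)
            }
            // 2D overlay; it does not track the entity's projected position.
            Text("Label for the sphere")
                .padding(8)
                .background(.thinMaterial, in: RoundedRectangle(cornerRadius: 8))
                .padding()
        }
    }
}
To make the label actually follow an entity, you would still have to project the entity's world position into view coordinates yourself, which is the part the visionOS attachment APIs handle for you.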
I submitted this as a feedback since it seemed like an oversight: FB18034856.
What evidence exists that it's safe to call nextDrawable() on CAMetalLayer off the main thread? I have seen developers claiming that it's OK, but the official docs are silent on the topic. Attempting to do so with Strict Concurrency Checking set to Complete complains that CAMetalLayer is not @Sendable.
I want to call it off the main thread since there doesn't seem to be any way to prevent it from blocking the UI for up to a second. I have read hints and allegations that this won't happen if you avoid asking for too many drawables, but that doesn't seem to be true 100% of the time in my experience.
Supposing it is allowed, I wonder how races are handled, such as when the layer's size is changed on the main thread, or when the layer is removed from the layer hierarchy.
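For what it's worth, the pattern I have seen in sample code is to confine the layer to a single render queue and route size changes through that same queue, so nextDrawable() and drawableSize never race; whether that is officially supported is exactly my question. A hedged sketch, with all names being placeholders:
import QuartzCore
import Metal

// Sketch only: serialize every CAMetalLayer interaction onto one render queue.
// This still trips strict concurrency checking, since CAMetalLayer is not Sendable.
final class Renderer {
    private let renderQueue = DispatchQueue(label: "render.queue") // placeholder label
    private let layer: CAMetalLayer
    private let commandQueue: MTLCommandQueue

    init(layer: CAMetalLayer, device: MTLDevice) {
        self.layer = layer
        self.commandQueue = device.makeCommandQueue()!
    }

    // Called from the main thread on resize; hop onto the render queue instead of
    // mutating drawableSize directly, so it cannot race nextDrawable().
    func resize(to size: CGSize) {
        renderQueue.async { [self] in
            layer.drawableSize = size
        }
    }

    func drawFrame() {
        renderQueue.async { [self] in
            guard let drawable = layer.nextDrawable(), // may still block, but off the main thread
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }
            // ... encode work targeting drawable.texture ...
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }
}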
Topic:
Graphics & Games
SubTopic:
Metal
Hello,
I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
//prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]
//check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
readerSettings[AVVideoAllowWideColorKey as String] = true
readerSettings[kCVPixelBufferPixelFormatTypeKey as String] =
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
isHDRDetected = true
}
//add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else {
throw SomeErrorCode.error
}
assetReader.add(output)
guard assetReader.startReading() else {
throw SomeErrorCode.error
}
//add writer output settings
let videoOutputSettings: [String: Any] = [
AVVideoCodecKey: AVVideoCodecType.hevc,
AVVideoWidthKey: 1920,
AVVideoHeightKey: 1080,
]
let finalPath = URL(fileURLWithPath: "...") //some output path
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video)
else {
throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
kCVPixelBufferWidthKey as String: 1920,
kCVPixelBufferHeightKey as String: 1080,
]
//create asset adaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else {
throw SomeErrorCode.error
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
throw SomeErrorCode.error
}
assetWriter.startSession(atSourceTime: CMTime.zero)
//prepare transfer session
var session: VTPixelTransferSession? = nil
guard
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)
== noErr, let session
else {
throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else {
throw SomeErrorCode.error
}
//read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
autoreleasepool {
guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else {
return
}
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 23:58 timestamp
let attachment = [
kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
]
CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)
//now convert to CIImage with HDR data
let image = CIImage(cvPixelBuffer: imageBuffer)
let cropped = image //here perform some actions like cropping, flipping, etc., and preserve these changes by converting the extent to CGImage first:
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 24:30 timestamp
guard
let cgImage = context.createCGImage(
cropped, from: cropped.extent, format: .RGBA16,
colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
else {
return // skip this frame (inside the autoreleasepool closure, so return rather than continue)
}
//finally convert it back to CIImage
let newScaledImage = CIImage(cgImage: cgImage)
//now write it to a new pixelBuffer
let pixelBufferAttributes: [String: Any] = [
kCVPixelBufferCGImageCompatibilityKey as String: true,
kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
]
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(
kCFAllocatorDefault, Int(newScaledImage.extent.width), Int(newScaledImage.extent.height),
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange, pixelBufferAttributes as CFDictionary,
&pixelBuffer)
guard let pixelBuffer else {
return
}
context.render(newScaledImage, to: pixelBuffer) //context is a CIContext reference
var pixelTransferBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
guard let pixelTransferBuffer else {
return
}
// Transfer the image to the pixel buffer.
guard
VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer)
== noErr
else {
return
}
//finally append to taggedBuffer
}
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
The resulting video does not match the color of the original; it turns out too bright. If I play around with the attachment values, it can come out either too dim or too bright, but never exactly matches the original video. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 can produce proper video output, but then I can't convert it to a CIImage, so I can't do the CIImage operations that I need, mainly cropping and resizing.
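One thing I am double-checking on my side (an assumption, not a confirmed fix): the attachments above hard-code the BT.2020/PQ values from the WWDC sample, but if the source is actually HLG, tagging the buffers as PQ would shift the brightness. A sketch of reading what the track declares before choosing the attachment values; names here are illustrative only:
import AVFoundation
import CoreMedia
import CoreVideo

// Sketch: inspect the track's format description instead of assuming PQ.
func transferFunctionAttachment(for videoTrack: AVAssetTrack) async throws -> CFString {
    let formatDescriptions = try await videoTrack.load(.formatDescriptions)
    guard let desc = formatDescriptions.first,
          let transfer = CMFormatDescriptionGetExtension(
              desc, extensionKey: kCMFormatDescriptionExtension_TransferFunction) as? String
    else {
        return kCVImageBufferTransferFunction_ITU_R_709_2 // fallback if nothing is declared
    }
    if transfer == (kCMFormatDescriptionTransferFunction_ITU_R_2100_HLG as String) {
        return kCVImageBufferTransferFunction_ITU_R_2100_HLG
    }
    if transfer == (kCMFormatDescriptionTransferFunction_SMPTE_ST_2084_PQ as String) {
        return kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ
    }
    return transfer as CFString
}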
Hello everyone, I am new here.
I am testing Game Center integration using a development build of my iOS game. I have set up a couple of achievements in App Store Connect, but when I trigger them in the game they do not unlock or show up.
I am signed into Game Center with a sandbox account on a test device. Is there anything else I need to configure, or do achievements usually only work after the game is released?
I would appreciate any guidance.
Thanks in advance!
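For context, the reporting call I am making is essentially the following (a simplified sketch; the achievement identifier is a placeholder that matches what I configured in App Store Connect, and the local player is already authenticated at this point):
import GameKit

// Sketch of reporting an achievement; the identifier is a placeholder.
func reportAchievement() {
    guard GKLocalPlayer.local.isAuthenticated else { return }
    let achievement = GKAchievement(identifier: "com.example.first_win") // placeholder ID
    achievement.percentComplete = 100
    achievement.showsCompletionBanner = true
    GKAchievement.report([achievement]) { error in
        if let error { print("Failed to report achievement:", error) }
    }
}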
Hi,
When analyzing our game in Instruments, I've always been confused about the two items "Drawable Present" and "Drawable Presented" in the GPU column. The timing of Drawable Present seems to correspond to when the CPU calls commandBuffer.present(_:), rather than when the GPU actually finishes the encoded work. Also, what does Drawable Presented specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
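To correlate the two on the app side, I have been experimenting with the drawable's presented handler (a sketch, not our production code); addPresentedHandler fires when the drawable actually reaches the display, and presentedTime can be compared with the time present was requested:
import Metal
import QuartzCore

// Sketch: log when present() was requested vs. when the drawable was actually shown.
func presentAndLog(_ drawable: CAMetalDrawable, with commandBuffer: MTLCommandBuffer) {
    let requestTime = CACurrentMediaTime()
    drawable.addPresentedHandler { presented in
        // presentedTime is 0 if the drawable was never shown on screen.
        print("present requested at", requestTime, "presented at", presented.presentedTime)
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}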
I'm using RealityView in my iOS game mixed with SwiftUI. In the following two example usages, the Simulator will only render the first RealityView; the second one is either super laggy or shows a black model. Running on a real device is all good; only the Simulator has this issue.
Have a TabView and each tab has a RealityView.
Have a root view and detail view connected via a push navigation, both root and detail have a RealityView.
In the Simulator, the second RealityView is going to be very choppy and basically unusable, but on a real iPhone everything looks great.
Is this a known simulator issue, or did I do something wrong?
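For reference, the first setup is roughly this shape (a minimal sketch; model names are placeholders):
import SwiftUI
import RealityKit

// Minimal sketch of setup 1: each tab hosts its own RealityView.
struct ContentView: View {
    var body: some View {
        TabView {
            ModelTab(modelName: "ModelA") // placeholder model names
                .tabItem { Label("First", systemImage: "1.circle") }
            ModelTab(modelName: "ModelB")
                .tabItem { Label("Second", systemImage: "2.circle") }
        }
    }
}

struct ModelTab: View {
    let modelName: String
    var body: some View {
        RealityView { content in
            if let entity = try? await Entity(named: modelName) {
                content.add(entity)
            }
        }
    }
}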
I use Unity 2020.3.48f1 to develop a game. To implement Apple services integration, I use the Apple Unity plugins (https://github.com/apple/unityplugins). With the latest version of the plugins, I get an error in the Unity project after importing the plugin; it says "Not allowed platform VisionOS". When I tried an older version of the plugins, I get a runtime error when calling "var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();" on line 42; it drops an EXC_BAD_ACCESS (code=257, address=0x0000...) error. I tried different commits from the official repository and even custom branches of the Apple Unity plugins, such as https://github.com/muZZkat/unityplugins/tree/muzzkat/fix-fetch-items, but it did not help.
Here is the whole script that tries to use the Apple Unity plugins:
using System.Threading.Tasks;
using UnityEngine;
using System.Collections;
using System;
using Apple.GameKit;
using UnityEngine.UI;
public class TheScript : MonoBehaviour
{
[SerializeField]
InputField otp;
string Signature;
string TeamPlayerID;
string Salt;
string PublicKeyUrl;
string Timestamp;
void Start()
{
StartCoroutine(Call());
}
private IEnumerator Call()
{
yield return new WaitForSeconds(5);
Login();
}
public async Task Login()
{
otp.text += $"Loginig... ";
if (!Apple.GameKit.GKLocalPlayer.Local.IsAuthenticated)
{
try
{
var player = await GKLocalPlayer.Authenticate();
var localPlayer = GKLocalPlayer.Local;
TeamPlayerID = localPlayer.TeamPlayerId;
var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();
Signature = Convert.ToBase64String(fetchItemsResponse.GetSignature());
PublicKeyUrl = fetchItemsResponse.PublicKeyUrl;
otp.text += $"Team Player ID: {TeamPlayerID} ";
otp.text += $"PublicKeyUrl: {PublicKeyUrl} ";
}
catch(Exception e)
{
otp.text += $"Error: " + e.Message;
}
}
else
{
Debug.Log("AppleGameCenter player already logged in.");
}
}
async Task SignInWithAppleGameCenterAsync(string signature, string teamPlayerId, string publicKeyURL, string salt, ulong timestamp)
{
}
}
Can 『浮遊時計 Premium』 (available on the App Store) measure the refresh rate in 1 Hz or 10 Hz increments, or at three or more levels?
Topic:
Graphics & Games
SubTopic:
General
The code is pretty simple
kernel void naive(
    constant RunParams *param [[ buffer(0) ]],
    const device float *A [[ buffer(1) ]], // [N, K]
    device float *output [[ buffer(2) ]],
    uint2 gid [[ thread_position_in_grid ]]) {
    float val = 0.0f;
    uint a_ptr = gid.x * param->K;
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];
    }
    output[gid.x] = val; // one accumulated result per row
}
When uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS.
When uint a_ptr = gid.y * param->K, the code gets 860 GFLOPS.
param->K = 256
threads per threadgroup: [16, 16]
I'd like to understand why the performance is so different, and how can I profile/diagnose this to help with further optimization.
Topic:
Graphics & Games
SubTopic:
Metal
Hi everyone,
I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
Thanks in advance!
I'm an experienced SceneKit developer and I want to begin work on a new project using RealityKit, so I appreciated the timely WWDC 2025 session "Bring your SceneKit project to RealityKit".
However, now I am finding that:
Blender does not properly support exporting armatures in usdc files, and usdc is really the only file format that should be used for creating 3D assets for RealityKit.
The option of exporting from Blender to fbx or some other intermediate format, and then converting that to usdc, is a challenge.
Apple's Reality Converter App, which supposedly can support importing and converting fbx files to usdc, is no longer available from Apple's website. And an older copy of it I found at the Kodeco website requires Rosetta on Apple Silicon. As well, this older copy does not in fact import fbx or anything else - I find it doesn't work at all.
Apple's Reality Composer Pro, at least as far as I can tell, only supports importing usdc - it is not a file conversion tool.
Alternatively, I am under the impression that Maya supports producing usdc files with armatures, but Maya costs over $2000 per year and I am skilled with Blender, so I believe strongly that I should be able to continue with Blender. Maya's expense and skillset simply shouldn't be a requirement for building RealityKit applications.
What are my options then, if any, to produce assets with armatures and armature based animations using Blender, and then bring them into RealityKit?
Problem Description
I'm encountering an issue with SCNTechnique where the clearColor setting is being ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set. The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0
Problem Details
In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling:
"passes": [
"box1_pass": [
"draw": "DRAW_SCENE",
"includeCategoryMask": 1,
"colorStates": [
"clear": true,
"clearColor": "0 0 0 0" // Expecting transparent black
],
"depthStates": [
"clear": true,
"enableWrite": true
],
"outputs": [
"depth": "box1_depth",
"color": "box1_color"
],
],
"box2_pass": [
"draw": "DRAW_SCENE",
"includeCategoryMask": 2,
"colorStates": [
"clear": true,
"clearColor": "0 0 0 0" // Also expecting transparent black
],
"depthStates": [
"clear": false,
"enableWrite": false
],
"outputs": [
"depth": "box1_depth", // Sharing the same depth buffer
"color": "box2_color",
],
],
"final_quad": [
"draw": "DRAW_QUAD",
"metalVertexShader": "myVertexShader",
"metalFragmentShader": "myFragmentShader",
"inputs": [
"box1_color": "box1_color",
"box2_color": "box2_color",
],
"outputs": [
"color": "COLOR"
]
]
]
And here is the Metal shader used to display box1_color and box2_color side by side:
fragment half4 myFragmentShader(VertexOut in [[stage_in]],
                                texture2d<half, access::sample> box1_color [[texture(0)]],
                                texture2d<half, access::sample> box2_color [[texture(1)]]) {
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    half4 color1 = box1_color.sample(s, in.texcoord);
    half4 color2 = box2_color.sample(s, in.texcoord);
    if (in.texcoord.x < 0.5) {
        return color1;
    }
    return color2;
}
Expected Behavior
Both passes should clear their color targets to transparent black (0, 0, 0, 0)
The depth buffer should be shared between passes for proper occlusion
Actual Behavior
Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image)
This happens even when I explicitly set clearColor: "0 0 0 0" for both passes
Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes
What I've Tried
Setting different clearColor values - all are ignored when sharing depth buffer
Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue
Creating a separate pass to capture the background - the background still appears in the other passes
Various combinations of clear flags and render orders
Environment
iOS/macOS, running with "My Mac (Designed for iPad)"
Xcode 16.2
Question
Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing?
The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings.
Any insights or workarounds would be greatly appreciated. Thank you!
Hi everyone,
I encountered a very strange shader bug that seems related to Metal only (not OpenGL).
You can find the full description of the issue on the Babylon.js forums here: https://forum.babylonjs.com/t/strange-shader-related-issue-on-macos-with-safari-and-chrome-not-firefox/54289 (sorry, I couldn't post a clickable link, as that seems to be blocked here).
I have a workaround to fix the issue (as described in the link above), but this really looks like an issue in Metal itself.
Let me know if you need more details or explanations.
Topic:
Graphics & Games
SubTopic:
Metal
Currently looking for Metal developers to port Quake 2 RTX to Metal RT in order to give Apple Silicon Macs an amazing path-tracing demo. This project falls under NightSightProductions, which is also working on a Portal 2 with RTX remaster. If you are interested and want to help further Mac gaming, message me here or on Discord at king_vulpes.
Hi all, I'm having a variety of issues with GameKit matchmaking. In the Simulator, the matchmaking UI pops up and I can click Quick Match, but it immediately shows "Failed to find Players"; this is the same with a real Apple ID and with a sandbox account.
If I use real devices, the app at least discovers a match, but then none of the delegate methods for the match ever get called, and the logs are filled with "socket not connected" and various other errors.
My questions are:
Should matchmaking via Quick Match work in the Simulator? I have seen tutorial videos of this working, but I can't seem to get it to work.
How do people debug issues with Game Center / GameKit to find out why it's not able to connect?
Many thanks in advance
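For comparison, the wiring I am using is essentially the following (a simplified sketch; everything outside the GameKit API names is a placeholder). The key detail I am trying to verify is that match.delegate must be assigned in matchmakerViewController(_:didFind:) before any data is exchanged:
import GameKit
import UIKit

// Sketch of the GKMatchmakerViewController flow; only the GameKit API names are real.
final class MatchmakingController: NSObject, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
    private var match: GKMatch?

    func startQuickMatch(from presenter: UIViewController) {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2
        guard let vc = GKMatchmakerViewController(matchRequest: request) else { return }
        vc.matchmakerDelegate = self
        presenter.present(vc, animated: true)
    }

    // Keep a strong reference to the match and set its delegate immediately,
    // otherwise the GKMatchDelegate callbacks never arrive.
    func matchmakerViewController(_ viewController: GKMatchmakerViewController,
                                  didFind match: GKMatch) {
        viewController.dismiss(animated: true)
        self.match = match
        match.delegate = self
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        viewController.dismiss(animated: true)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController,
                                  didFailWithError error: Error) {
        viewController.dismiss(animated: true)
        print("Matchmaking failed:", error)
    }

    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("Player \(player.displayName) changed state: \(state.rawValue)")
    }

    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        print("Received \(data.count) bytes from \(player.displayName)")
    }
}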
I am looking to implement CAMetalDisplayLink on a separate thread on a macOS application. I am basing my implementation on the following example project:
Achieving Smooth Frame Rates with Metal Display Link
This project allows you to configure whether a separate thread is used for rendering by setting RENDER_ON_MAIN_THREAD in GameConfig to 0. However, when I set it to use a separate thread, nothing is rendered. Stepping through the code shows that a separate thread is created, but a CAMetalDisplayLink update is never received. Does anyone know why this does not work?
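For anyone comparing notes, the part that usually trips up a hand-rolled version is that the display link has to be added to the secondary thread's RunLoop and that run loop has to actually be running, otherwise metalDisplayLink(_:needsUpdate:) never fires. A hedged sketch (the thread setup here is my own, not the sample project's exact code):
import QuartzCore
import Metal
import Foundation

// Sketch: drive CAMetalDisplayLink from a dedicated thread by running that
// thread's RunLoop. Class and selector names are placeholders.
final class RenderThreadDriver: NSObject, CAMetalDisplayLinkDelegate {
    private var displayLink: CAMetalDisplayLink?
    private let metalLayer: CAMetalLayer

    init(metalLayer: CAMetalLayer) {
        self.metalLayer = metalLayer
        super.init()
        let thread = Thread(target: self, selector: #selector(renderThreadMain), object: nil)
        thread.name = "RenderThread"
        thread.start()
    }

    @objc private func renderThreadMain() {
        let link = CAMetalDisplayLink(metalLayer: metalLayer)
        link.delegate = self
        link.add(to: .current, forMode: .default)
        displayLink = link
        // Without this, the thread returns immediately and no updates are delivered.
        RunLoop.current.run()
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        let drawable = update.drawable
        // ... encode and present using drawable ...
        _ = drawable
    }
}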
Hi! I just installed GPTK2 on my new Mac, but Terminal gave the error "OpenSSL 1.1 has been disabled."
How should I fix it? Or should I wait for GPTK2 beta 4?
Thanks.
Hello,
I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer.
When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and run the game directly on the device (outside of Xcode), the SCNNode movement behaves inconsistently: sometimes it is smooth and sometimes the interaction becomes choppy. The SCNNode movement is controlled using a UIPanGestureRecognizer.
Do you have any ideas what might be causing the issue?
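For reference, the gesture handling is essentially this (a simplified sketch; the node, view, and scale factor are placeholders). One thing I am experimenting with is wrapping the position update in an SCNTransaction with a zero duration so implicit animations cannot interfere:
import SceneKit
import UIKit

// Simplified sketch of moving an SCNNode with a pan gesture.
final class DragHandler: NSObject {
    let scnView: SCNView
    let draggedNode: SCNNode

    init(scnView: SCNView, draggedNode: SCNNode) {
        self.scnView = scnView
        self.draggedNode = draggedNode
        super.init()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        scnView.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: scnView)
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0 // avoid implicit animation of the position change
        draggedNode.position.x += Float(translation.x) * 0.01
        draggedNode.position.z += Float(translation.y) * 0.01
        SCNTransaction.commit()
        gesture.setTranslation(.zero, in: scnView)
    }
}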
Hey, I've been struggling with this for some days now.
I am trying to write to a sparse texture in a compute shader. I'm performing the following steps:
Set up a sparse heap and create a texture from it
Map the whole area of the sparse texture using updateTextureMapping(..)
Overwrite every value with the value "4" in a compute shader
Blit the texture to a shared buffer
Assert that the values in the buffer are "4".
I have a minimal example (which is still pretty long unfortunately).
It works perfectly when removing the line heapDesc.type = .sparse.
What am I missing? I could not find any information that writes to sparse textures are unsupported. Any help would be greatly appreciated.
import Metal
func sparseTexture64x64Demo() throws {
// ── Metal objects
guard let device = MTLCreateSystemDefaultDevice()
else { throw NSError(domain: "SparseNotSupported", code: -1) }
let queue = device.makeCommandQueue()!
let lib = device.makeDefaultLibrary()!
let pipeline = try device.makeComputePipelineState(function: lib.makeFunction(name: "addOne")!)
// ── Texture descriptor
let width = 64, height = 64
let format: MTLPixelFormat = .r32Uint // 4 B per texel
let desc = MTLTextureDescriptor()
desc.textureType = .type2D
desc.pixelFormat = format
desc.width = width
desc.height = height
desc.storageMode = .private
desc.usage = [.shaderWrite, .shaderRead]
// ── Sparse heap
let bytesPerTile = device.sparseTileSizeInBytes
let meta = device.heapTextureSizeAndAlign(descriptor: desc)
let heapBytes = ((bytesPerTile + meta.size + bytesPerTile - 1) / bytesPerTile) * bytesPerTile
let heapDesc = MTLHeapDescriptor()
heapDesc.type = .sparse
heapDesc.storageMode = .private
heapDesc.size = heapBytes
let heap = device.makeHeap(descriptor: heapDesc)!
let tex = heap.makeTexture(descriptor: desc)!
// ── CPU buffers
let bytesPerPixel = MemoryLayout<UInt32>.stride
let rowStride = width * bytesPerPixel
let totalBytes = rowStride * height
let dstBuf = device.makeBuffer(length: totalBytes, options: .storageModeShared)!
let cb = queue.makeCommandBuffer()!
let fence = device.makeFence()!
// 2. Map the sparse tile, then signal the fence
let rse = cb.makeResourceStateCommandEncoder()!
rse.updateTextureMapping(
tex,
mode: .map,
region: MTLRegionMake2D(0, 0, width, height),
mipLevel: 0,
slice: 0)
rse.update(fence) // ← capture all work so far
rse.endEncoding()
let ce = cb.makeComputeCommandEncoder()!
ce.waitForFence(fence)
ce.setComputePipelineState(pipeline)
ce.setTexture(tex, index: 0)
let threadsPerTG = MTLSize(width: 8, height: 8, depth: 1)
let tgCount = MTLSize(width: (width + 7) / 8,
height: (height + 7) / 8,
depth: 1)
ce.dispatchThreadgroups(tgCount, threadsPerThreadgroup: threadsPerTG)
ce.updateFence(fence)
ce.endEncoding()
// Blit texture into shared buffer
let blit = cb.makeBlitCommandEncoder()!
blit.waitForFence(fence)
blit.copy(
from: tex,
sourceSlice: 0,
sourceLevel: 0,
sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
sourceSize: MTLSize(width: width, height: height, depth: 1),
to: dstBuf,
destinationOffset: 0,
destinationBytesPerRow: rowStride,
destinationBytesPerImage: totalBytes)
blit.endEncoding()
cb.commit()
cb.waitUntilCompleted()
assert(cb.error == nil, "GPU error: \(String(describing: cb.error))")
// ── Verify a few texels
let out = dstBuf.contents().bindMemory(to: UInt32.self, capacity: width * height)
print("first three texels:", out[0], out[1], out[width]) // 0 1 64
assert(out[0] == 4 && out[1] == 4 && out[width] == 4)
}
Metal shader:
#include <metal_stdlib>
using namespace metal;
kernel void addOne(texture2d<uint, access::write> tex [[texture(0)]],
uint2 gid [[thread_position_in_grid]])
{
tex.write(4, gid);
}
This game lets you play over 100 games, and every game is very different and unique. You can save your favorite games from the 100 and store them, and you can store over 100 if you like them all. Make your wildest dreams into games that you can search up. YouTubers, you can make good videos with this game. - the Creator
:D
Hope you enjoy it! Also, I’m a kid, so I don’t know how to make an update.
Topic:
Graphics & Games
SubTopic:
SpriteKit