I have a 3D model with morphing animation that works correctly in Blender.
I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.
What I Have Tried:
Morphing animation works correctly in Blender.
After exporting to USDZ, the morphing animation does not play in the Xcode app.
Linear motion animations (such as object movement) work fine.
Behavior in Reality Converter:
GLB files do not display.
USDZ files load, but morphing animations do not play.
What I Want to Know:
Does RealityKit support morphing (blend shape) animations?
If so, how can they be played in an Xcode-developed app?
If RealityKit does not support morphing animations, what alternative methods can be used to play them?
I am looking for a way to use the existing animations without recreating them.
Additional Information:
I have both the Blender file (where animations work) and the USDZ file (where animations do not play).
I am developing a visionOS app using Xcode.
Any advice or solutions would be greatly appreciated.
Thank you in advance!
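In case it helps to narrow this down, here is a minimal sketch (assuming the exported file is bundled as "model.usdz" — the name is a placeholder) that checks what RealityKit actually sees in the export. If availableAnimations comes back empty, the morph data was lost during export rather than failing at playback; newer RealityKit releases also expose a BlendShapeWeightsComponent for driving blend shapes directly, which may be worth investigating:
import RealityKit
// Minimal sketch: load the exported USDZ and list what RealityKit imported.
func loadAndInspect() async throws -> Entity {
    let entity = try await Entity(named: "model.usdz")
    // An empty list here means the morph animation did not survive export.
    print(entity.availableAnimations.map(\.name))
    if let animation = entity.availableAnimations.first {
        entity.playAnimation(animation.repeat(), transitionDuration: 0.3)
    }
    return entity
}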
I'm trying to position an Entity with inverse kinematics while dragging the handle only, but use forward kinematics (pose jointTransforms) otherwise.
The System, Components, Gestures and Rig all seem to work individually.
My approach is to add the IKComponent when dragging starts on the handle and remove it when the handle is released.
The switch into IK works, but removing the IKComponent crashes the app:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x00000001aa5bb188 CoreRE`(anonymous namespace)::IKComponentSolverWrapper::getSolver() + 60
frame #1: 0x00000001aa5bafb0 CoreRE`re::internal::ikParametersNodeCallback(re::Slice<re::StringID>, re::Slice<re::RigDataValue>, re::Slice<re::StringID>, re::MutableSlice<re::RigDataValue>, void*) + 48
frame #2: 0x00000001aa52d090 CoreRE`re::(anonymous namespace)::resolveEvaluationContextCallback(re::EvaluationContext&, void*) + 152
frame #3: 0x00000001aa68c024 CoreRE`re::(anonymous namespace)::$_76::__invoke(re::Slice<unsigned long>, re::(anonymous namespace)::RegisterTable&) + 1080
frame #4: 0x00000001aa678c94 CoreRE`re::EvaluationModelSingleThread::evaluate(re::EvaluationContextSlices&) + 1188
frame #5: 0x00000001aa866984 CoreRE`re::SkeletalPoseRuntimeData::executeEvaluationTree() + 136
frame #6: 0x00000001aadf37ec CoreRE`re::ecs2::SkeletalPoseComponent::calculateSkeletalPoseBufferWithRig(re::ecs2::MeshComponent*, re::ecs2::RigComponent*, re::ecs2::SkeletalPoseBufferComponent*) + 492
frame #7: 0x00000001aadf4a84 CoreRE`re::ecs2::SkeletalPoseComponentStateImpl::processPreparingComponents(re::ecs2::System::UpdateContext const&, re::ecs2::BasicComponentStateSceneData<re::ecs2::SkeletalPoseComponent>*, re::ecs2::ComponentBuckets<re::ecs2::SkeletalPoseComponent>::BucketIteration, void*) + 268
frame #8: 0x00000001aadf54b0 CoreRE`re::ecs2::SkeletalPoseSystem::update(re::ecs2::System::UpdateContext) const + 732
frame #9: 0x00000001aaed3e54 CoreRE`re::internal::Callable<re::ecs2::ECSManager::configurePhaseECSSystems(re::Scheduler::ScheduleDescriptor&, re::ecs2::ECSSystemGroup, unsigned long)::$_1, void (float)>::operator()(float&&) const + 168
frame #10: 0x00000001ab40eda4 CoreRE`re::Scheduler::executePhase(unsigned long) + 440
frame #11: 0x00000001aa6a3b74 CoreRE`re::Engine::executePhase(re::FramePhase) + 144
frame #12: 0x000000023173de9c RealitySystemSupport`RCPSharedSimulationExecuteUpdate + 64
frame #13: 0x00000002276c9820 MRUIKit`__65-[MRUISharedSimulation _doJoinWithConnectionConfiguration:error:]_block_invoke.35 + 168
frame #14: 0x00000002276c8530 MRUIKit`__addCAPreFenceHandler_block_invoke + 32
frame #15: 0x000000018af22058 QuartzCore`CA::Transaction::run_commit_handlers(CATransactionPhase) + 112
frame #16: 0x000000018aef2ad4 QuartzCore`CA::Context::commit_transaction(CA::Transaction*, double, double*) + 592
frame #17: 0x000000018af21898 QuartzCore`CA::Transaction::commit() + 652
frame #18: 0x000000018af22dac QuartzCore`CA::Transaction::flush_as_runloop_observer(bool) + 68
frame #19: 0x0000000185a26820 UIKitCore`_UIApplicationFlushCATransaction + 48
frame #20: 0x0000000184f97af0 UIKitCore`_UIUpdateSequenceRun + 76
frame #21: 0x0000000185954290 UIKitCore`schedulerStepScheduledMainSection + 168
frame #22: 0x00000001859536d8 UIKitCore`runloopSourceCallback + 80
frame #23: 0x00000001804157fc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
frame #24: 0x0000000180415744 CoreFoundation`__CFRunLoopDoSource0 + 172
frame #25: 0x0000000180414eb0 CoreFoundation`__CFRunLoopDoSources0 + 232
frame #26: 0x000000018040f454 CoreFoundation`__CFRunLoopRun + 788
frame #27: 0x000000018040ecd4 CoreFoundation`CFRunLoopRunSpecific + 552
frame #28: 0x0000000190104b70 GraphicsServices`GSEventRunModal + 160
frame #29: 0x0000000185a27e30 UIKitCore`-[UIApplication _run] + 796
frame #30: 0x0000000185a2c058 UIKitCore`UIApplicationMain + 124
frame #31: 0x00000001d29558b4 SwiftUI`closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never + 164
frame #32: 0x00000001d29555dc SwiftUI`SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never + 84
frame #33: 0x00000001d265ecdc SwiftUI`static SwiftUI.App.main() -> () + 164
frame #34: 0x000000010303f1c4 Playground.debug.dylib`static PlaygroundApp.$main() at <compiler-generated>:0
frame #35: 0x000000010303f290 Playground.debug.dylib`main at PlaygroundApp.swift:7:8
frame #36: 0x0000000102f6d410 dyld_sim`start_sim + 20
frame #37: 0x000000010312e274 dyld`start + 2840
Is there a workaround or another way to switch between IK and FK?
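One workaround to try — purely a sketch, assuming the crash is a race between component removal and the skeletal pose evaluation shown in the stack trace — is to defer removing the component until after the current frame instead of removing it inside the gesture handler:
import RealityKit
// Sketch: defer IKComponent removal by roughly one frame so the
// SkeletalPoseSystem isn't evaluating the rig while the solver disappears.
@MainActor
func endDrag(on entity: Entity) {
    Task {
        try? await Task.sleep(for: .milliseconds(16))  // ~one 60 Hz frame
        entity.components.remove(IKComponent.self)
    }
}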
Hello! I'm currently porting a video game console emulator to iOS and I'm trying to make the renderer (tested on macOS) work on iOS as well.
The emulator core is written in C++ and uses metal-cpp for rendering, whereas the iOS frontend is written in Swift with SwiftUI. I have an Objective-C++ bridging header for bridging the Swift and C++ sides.
On the Swift side, I create an MTKView. Inside the MTKView delegate, I run the emulator for 1 video frame and pass it the view's backing layer for it to render the final output image with. The emulator runs and returns, but when it returns I get a crash in Swift land (callstack attached below), inside objc_release, which indicates I'm doing something wrong with memory management.
My bridging interface (ios_driver.h):
#pragma once
#include <Foundation/Foundation.h>
#include <QuartzCore/QuartzCore.h>
void iosCreateEmulator();
void iosRunFrame(CAMetalLayer* layer);
Bridge implementation (ios_driver.mm):
#import <Foundation/Foundation.h>
extern "C" {
#include "ios_driver.h"
}
<...>
#define IOS_EXPORT extern "C" __attribute__((visibility("default")))
std::unique_ptr<Emulator> emulator = nullptr;
IOS_EXPORT void iosCreateEmulator() { ... }
// Runs 1 video frame of the emulator and renders the output image
IOS_EXPORT void iosRunFrame(CAMetalLayer* layer) {
void* layerBridged = (__bridge void*)layer;
// Pass the CAMetalLayer to the emulator
emulator->getRenderer()->setMTKLayer(layerBridged);
// Runs the emulator for 1 frame and renders the output image using our layer
emulator->runFrame();
}
My MTKView delegate:
class Renderer: NSObject, MTKViewDelegate {
var parent: ContentView
var device: MTLDevice!
init(_ parent: ContentView) {
self.parent = parent
if let device = MTLCreateSystemDefaultDevice() {
self.device = device
}
super.init()
}
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}
func draw(in view: MTKView) {
let metalLayer = view.layer as! CAMetalLayer
// Run the emulator for 1 frame & display the output image
iosRunFrame(metalLayer)
}
}
Finally, the emulator's render function that interacts with the layer:
void RendererMTL::setMTKLayer(void* layer) {
metalLayer = (CA::MetalLayer*)layer;
}
void RendererMTL::display() {
CA::MetalDrawable* drawable = metalLayer->nextDrawable();
if (!drawable) {
return;
}
MTL::Texture* texture = drawable->texture();
<rest of rendering follows here using the drawable & its texture>
}
This is the Swift callstack at the time of the crash:
To my understanding, I shouldn't be violating ARC rules: my bridging header uses CAMetalLayer* instead of void*, and Swift automatically accounts for ARC when passing Core Foundation objects to Objective-C. However, I don't have any other idea as to what might be causing this. I've been trying to debug this code for a couple of days without much success.
If you need more info, the emulator code is also on GitHub:
Metal renderer: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/core/renderer_mtl/renderer_mtl.cpp#L58-L68
Bridge implementation: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/ios_driver.mm
Bridging header: https://github.com/wheremyfoodat/Panda3DS/blob/ios/include/ios_driver.h
Any help is more than appreciated. Thank you for your time in advance.
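One thing that might be worth ruling out — this is only a sketch, assuming the objc_release crash is an over-release of an autoreleased object (such as a drawable) created during the frame — is scoping each frame in an explicit autorelease pool on the Swift side so ObjC lifetimes stay contained:
func draw(in view: MTKView) {
    guard let metalLayer = view.layer as? CAMetalLayer else { return }
    // Contain any autoreleased ObjC objects produced while the C++ side runs.
    autoreleasepool {
        iosRunFrame(metalLayer)
    }
}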
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
float2 dc = dest.coord();
dc.x = image.size().x - dc.x;
return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
Any thoughts/input appreciated, thanks!
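For the simple horizontal mirror at least, there may be a middle ground: since each destination tile only reads from its mirrored counterpart, the ROI callback can return the mirrored rect instead of the full extent. A sketch, reusing the post's mirrorsKernel and src names and assuming the working extent starts at the origin:
import CoreImage

func mirroredBackwards(mirrorsKernel: CIKernel, src: CIImage) -> CIImage? {
    let extent = src.extent
    let roi: CIKernelROICallback = { _, destRect in
        // Each destination tile reads only its horizontally mirrored twin,
        // so the ROI stays tile-sized instead of spanning the whole source.
        let mirroredX = extent.maxX - (destRect.maxX - extent.minX)
        return CGRect(x: mirroredX, y: destRect.origin.y,
                      width: destRect.width, height: destRect.height)
            .insetBy(dx: -1, dy: -1)
    }
    return mirrorsKernel.apply(extent: extent, roiCallback: roi, arguments: [src])
}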
Attempting to bring up the access point yields the following error log:
[GameCenterOverlayService] Failed to create GameOverlayUI Dashboard Remote Proxy
[GameCenterOverlayService] Could not create endpoint for service name: com.apple.GameOverlayUI.dashboard-service
[GameCenterOverlayService] Failed to create GameOverlayUI Dashboard Remote Proxy
[GameCenterOverlayService] Could not create endpoint for service name: com.apple.GameOverlayUI.dashboard-service
[GameCenterOverlayService] Failed to create GameOverlayUI Dashboard Remote Proxy
[GameCenterOverlayService] Failed to create GameOverlayUI Dashboard Remote Proxy
The same code (which is a single line setting 'active' to true) works on physical devices and on the simulator in iOS 18.6
I haven't been able to find any mention of this issue online. Any suggestions or help greatly appreciated.
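For context, the single line in question is presumably the standard access-point activation, something like the following (the placement is an assumption):
import GameKit

GKAccessPoint.shared.location = .topLeading
GKAccessPoint.shared.isActive = true  // this line triggers the errors above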
Hello,
I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
//prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]
//check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
readerSettings[AVVideoAllowWideColorKey as String] = true
readerSettings[kCVPixelBufferPixelFormatTypeKey as String] =
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
isHDRDetected = true
}
//add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else {
throw SomeErrorCode.error
}
assetReader.add(output)
guard assetReader.startReading() else {
throw SomeErrorCode.error
}
//add writer output settings
let videoOutputSettings: [String: Any] = [
AVVideoCodecKey: AVVideoCodecType.hevc,
AVVideoWidthKey: 1920,
AVVideoHeightKey: 1080,
]
let finalPath = URL(fileURLWithPath: "/some/path.mov") // placeholder output URL
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video)
else {
throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
kCVPixelBufferWidthKey as String: 1920,
kCVPixelBufferHeightKey as String: 1080,
]
//create assetAdoptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else {
throw SomeErrorCode.error
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
throw SomeErrorCode.error
}
assetWriter.startSession(atSourceTime: CMTime.zero)
//prepare transfer session
var session: VTPixelTransferSession? = nil
guard
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)
== noErr, let session
else {
throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else {
throw SomeErrorCode.error
}
//read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
autoreleasepool {
guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else {
return
}
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 23:58 timestamp
let attachment = [
kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
]
CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)
//now convert to CIImage with HDR data
let image = CIImage(cvPixelBuffer: imageBuffer)
let cropped = image // here perform some actions like cropping, flipping, etc., preserving the changes by converting to CGImage first:
//this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 24:30 timestamp
guard
let cgImage = context.createCGImage(
cropped, from: cropped.extent, format: .RGBA16,
colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
else {
return
}
//finally convert it back to CIImage
let newScaledImage = CIImage(cgImage: cgImage)
//now write it to a new pixelBuffer
let pixelBufferAttributes: [String: Any] = [
kCVPixelBufferCGImageCompatibilityKey as String: true,
kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
]
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(
kCFAllocatorDefault, Int(newScaledImage.extent.width), Int(newScaledImage.extent.height),
kCVPixelFormatType_420YpCbCr10BiPlanarFullRange, pixelBufferAttributes as CFDictionary,
&pixelBuffer)
guard let pixelBuffer else {
return
}
context.render(newScaledImage, to: pixelBuffer) //context is a CIContext reference
var pixelTransferBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
guard let pixelTransferBuffer else {
return
}
// Transfer the image to the pixel buffer.
guard
VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer)
== noErr
else {
return
}
//finally append to taggedBuffer
}
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
The resulting video does not match the original's colors; it turns out too bright. If I play around with the attachment values, I can make it dimmer or brighter, but never match the original. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 can produce proper video output, but then I can't convert it to CIImage, so I can't do the CIImage operations that I need (mainly cropping and resizing).
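One possible culprit — an assumption, not a confirmed fix — is that the CIContext renders through its default working space, flattening the PQ transfer function. Creating the context with an explicit HDR working space and passing the same space when rendering back out keeps the pipeline consistent:
import CoreImage

// Sketch: route both the CIContext's working space and the final render through PQ.
let pq = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!
let context = CIContext(options: [
    .workingColorSpace: pq,
    .workingFormat: CIFormat.RGBAh  // half-float to preserve HDR headroom
])
// And when writing back:
// context.render(newScaledImage, to: pixelBuffer,
//                bounds: newScaledImage.extent, colorSpace: pq)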
I want to implement the ability to apply Lightroom Preset (.xmp file) to an image in my app, but am running into difficulties. How can I configure things like color grading, curve, etc. in Swift?
Hi, I updated my system to 15.2. It was set to Turkish, and I changed it to English for Apple Intelligence, but Image Playground is not working.
My Playground app has been stuck at "Downloading support for Image Playground" for about 4-5 days now.
TLDR; I can't get QueueName to work with matchmaking a turn-based match in Unity using matchmaking rules.
Long version:
I'm using the apple unity plugin found here: https://github.com/apple/unityplugins/blob/main/plug-ins/Apple.GameKit/Apple.GameKit_Unity/Assets/Apple.GameKit/Documentation~/Apple.GameKit.md
I have created a Queue, RuleSet and a simple Rule to match players by following these docs tightly: https://developer.apple.com/documentation/gamekit/finding-players-using-matchmaking-rules.
Here is the single rule I have that drives matchmaking:
{
"data" : {
"type" : "gameCenterMatchmakingRules",
"id" : "[hiddden-rule-id]",
"attributes" : {
"referenceName" : "ComplimentaryFactionPreference",
"description" : "default desc",
"type" : "MATCH",
"expression" : "requests[0].properties.preference != requests[1].properties.preference",
"weight" : null
},
"links" : {
"self" : "https://api.appstoreconnect.apple.com/v1/gameCenterMatchmakingRules/[hidden-rule-id]"
}
},
"links" : {
"self" : "https://api.appstoreconnect.apple.com/v1/gameCenterMatchmakingRules"
}
}
which belongs to a rule set which belongs to a queue. I have verified these are setup and linked via the App Store Connect API. Additionally, when I tested queue-based matchmaking without a queue established, I got an error in Unity. Now, with this, I do not. However there is a problem when I attempt to use the queue for matchmaking.
I have the basic C# function here:
public override void StartSearch(NSMutableDictionary<NSString, NSObject> properties)
{
if (searching) return;
base.StartSearch(properties);
//Establish matchmaking requests
_matchRequest = GKMatchRequest.Init();
_matchRequest.QueueName = _PreferencesToQueue(GetSerializedPreferences());
_matchRequest.Properties = properties;
_matchRequest.MaxPlayers = PLAYERS_COUNT;
_matchRequest.MinPlayers = PLAYERS_COUNT;
_matchTask = GKTurnBasedMatch.Find(_matchRequest);
}
The _PreferencesToQueue(GetSerializedPreferences()) call returns the exact name of the queue I added my rule set to.
After this function is called, I poll the task generated from the .Find(...) function. Every time I run this function, a new match is created almost instantly. No two players are ever added to the same match.
Further, I'm running two built game instances, one on a Mac and another on an iPad, and when I test simultaneously, I am unable to join games this way.
Can someone help me debug why I cannot seem to matchmake when using a queue-based approach?
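One way to isolate whether the problem is in the Unity plugin or in the queue configuration itself is to issue the equivalent request natively. A sketch, assuming the plugin maps QueueName to GKMatchRequest.queueName (available on recent OS releases), with "MyQueueName" standing in for the real queue name:
import GameKit

let request = GKMatchRequest()
request.minPlayers = 2
request.maxPlayers = 2
request.queueName = "MyQueueName"  // must exactly match the App Store Connect queue
// request.properties must also be supplied so the rule expression
// requests[...].properties.preference has something to read.
GKTurnBasedMatch.find(for: request) { match, error in
    if let error { print("Matchmaking failed: \(error)") }
    else { print("Matched: \(match?.matchID ?? "none")") }
}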
Hi everyone,
I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the Simulating particles in your visionOS app documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
Thanks in advance!
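Not an authoritative answer, but a sketch of the properties I would look at, assuming the documented ParticleEmitterComponent API and using "star.fill" as a stand-in for the chosen SystemImage:
import RealityKit
import UIKit

func applyParticleImage(_ systemName: String, to entity: Entity) throws {
    guard let cgImage = UIImage(systemName: systemName)?.cgImage else { return }
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    var emitter = ParticleEmitterComponent.Presets.fireworks
    emitter.mainEmitter.image = texture
    emitter.spawnedEmitter?.image = texture  // the spawned emitter needs its own copy
    // Keep spawned particles at a constant size over their lifetime:
    emitter.spawnedEmitter?.sizeMultiplierAtEndOfLifespan = 1.0
    entity.components.set(emitter)
}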
Is the app 『浮遊時計 Premium』 (Floating Clock Premium) on the App Store able to measure the display refresh rate at 1 Hz or 10 Hz intervals, or across three or more levels?
I'm using a club to strike a ball and let it roll on the turf in RealityKit, but so far I can only make it slide, not roll.
I added collision to the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
From these parameters I calculate linear damping and inertia; I also use the time between frames and the club position to calculate speed. The code looks like this:
let radius: Float = 0.025
let mass: Float = 0.04593 // mass in kg
var inertia = 2/5 * mass * pow(radius, 2)
let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime
let C_d: Float = 0.47 // drag coefficient
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 = air density)
entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping
// force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier
entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
I also tried applyImpulse instead of addForce, like:
let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static and dynamic) and restitution, and whether I use addForce or applyImpulse, the ball only slides. How can I solve this problem?
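A sketch of two things that might get it rolling, assuming the slide comes from the contact never generating torque; the parameter names mirror those in the post, and the friction values are illustrative guesses:
import RealityKit
import simd

func strike(_ ball: Entity, forceDirection: SIMD3<Float>, speed: Float, radius: Float) {
    // A contact with real friction lets the turf convert sliding into rolling.
    let turfContact = PhysicsMaterialResource.generate(staticFriction: 0.8,
                                                       dynamicFriction: 0.6,
                                                       restitution: 0.3)
    ball.components[PhysicsBodyComponent.self]?.material = turfContact
    // Alternatively, hand the ball spin consistent with rolling without
    // slipping: omega = (v × up) / r.
    let v = forceDirection * speed
    let omega = cross(v, SIMD3<Float>(0, 1, 0)) / radius
    ball.components.set(PhysicsMotionComponent(linearVelocity: v,
                                               angularVelocity: omega))
}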
Hello,
I found an issue with the Games app on macOS 26 (Tahoe) when viewing achievements:
In App Store Connect, each achievement has different values set for the pre-earned description and the post-earned description.
When testing with GameKit directly (GKAchievementDescription), both values are returned correctly.
However, in the macOS Games app, the post-earned description is shown even before the achievement is earned.
This seems to be a display issue specific to the Games app on macOS.
Could you confirm if this is a known bug in the Games app, or if there is a reason why pre-earned descriptions are not being shown?
Thank you.
Hello, I’m the developer of the “stepsquad” app. Our app uses the Game Center achievement feature, but we’ve been encountering a problem: the “Global Players” percentage always shows 0%, even though there are friends who have already earned these achievements. Initially, I thought it might be because the app was newly launched. However, it’s now been over two months since release, and it’s still showing 0%. If anyone has any insight into this issue, please leave a comment.
I am using SCNTechnique in combination with ARSCNView.
The technique does some minor post-processing. I have written several filter variants for this post-processing, but I'm facing an issue where, with one of the filters/fragment shaders, SCNTechnique discards my output and just presents the plain camera feed on screen instead. This is clearly visible in the Metal pipeline, using the GPU frame debugger.
Let me stress that my setup works for 90% of my filters, but not this one and I want to know why.
iOS 18.1, iPhone 13 Mini. Xcode 16.1.
Encoder 0 & 1 are injected by the system.
Render encoder 2 & 3 correspond to my SCNTechnique's render passes: one to manipulate pixel data (darken it in this case) and another to BLIT it back to the main texture. I know the separate buffer is not strictly for this particular operation, but it shouldn't matter.
Note that the issue occurs in encoder 4 (not mine but ARKit's).
In render encoder 4, scn_postprocess_AR_fragment handles my texture (#0, ending in f980) and another from the camera feed (Texture 2). I know this pass is typically used for grain because that's what it used to do before I disabled grain on ARSCNView (and the buffer still contains grain parameters).
I have other post-processing filters that work just fine. By what magic is ARKit determining to use Texture 2 instead of my Texture 0?
Sure, I could keep digging into the minute differences between my shaders to find out which LoC affects how some ARKit shader down the line operates, but it's awfully opaque so far.
I am working on a SceneKit project where I use a CAShapeLayer as the content for SCNMaterial's diffuse.contents to display a progress bar. Here's my initial code:
func setupProgressWithCAShapeLayer() {
let progressLayer = createProgressLayer()
progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
DispatchQueue.main.async {
var progress: CGFloat = 0.0
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
progress += 0.01
if progress > 1.0 {
progress = 0.0
}
progressLayer.strokeEnd = progress // Update progress
}
}
}
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
progressBarPlane = SCNPlane(width: 0.2, height: 0.2)
setupProgressWithCAShapeLayer()
let planeNode = SCNNode(geometry: progressBarPlane)
planeNode.position = SCNVector3(x: 0, y: 0.2, z: 0)
node.addChildNode(planeNode)
}
This works fine, and the progress bar updates smoothly. However, when I change the code to use a class property (self.progressLayer) instead of a local variable, the rendering starts flickering on the screen:
func setupProgressWithCAShapeLayer() {
self.progressLayer = createProgressLayer()
progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
DispatchQueue.main.async { [weak self] in
var progress: CGFloat = 0.0
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
progress += 0.01
if progress > 1.0 {
progress = 0.0
}
self?.progressLayer?.strokeEnd = progress // Update progress
}
}
}
After this change, the progressBarPlane in SceneKit starts flickering while being rendered on the screen.
My Question:
Why does switching from a local variable (progressLayer) to a class property (self.progressLayer) cause the flickering issue in SceneKit rendering?
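One hypothesis worth testing — a guess from the symptom, not a confirmed cause — is that once the layer outlives the setup scope via the property, Core Animation's implicit 0.25 s animation on strokeEnd races SceneKit's snapshot of the layer contents. Making each update atomic rules that out; inside the timer callback, replace the bare assignment with:
import QuartzCore

CATransaction.begin()
CATransaction.setDisableActions(true)  // suppress the implicit animation
self?.progressLayer?.strokeEnd = progress
CATransaction.commit()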
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode — the distortion around edges disappears, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enabled
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
Hey, I have created a game in Unity with the Apple Core and Apple GameKit plugins present. I set up 5 leaderboards in App Store Connect. I made a Unity build and went through the whole TestFlight build loop to test everything. When I open my Game Center panel via the button, I see my leaderboards, but they show as MISSING TITLE, which is weird because I have definitely set them up correctly: each has a leaderboard reference name and a leaderboard ID. When debugging, I can see that my submit-score function completes with no error, but I also don't see the score appear anywhere.
Keep in mind the leaderboards are not live and are being tested through TestFlight first.
Hello experts, I'm trying to implement a material with custom shader code, but I saw that visionOS doesn't allow you to inject custom Metal functions or use CustomMaterial like iOS/macOS, nor can you directly write Metal Shading Language (.metal) and use it through ShaderGraphMaterial. So my question is: if I want to implement my own shader code, how should I do it?
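As far as I know, the supported route on visionOS is authoring the shader as a shader graph in Reality Composer Pro and loading it at runtime; parameters exposed in the graph can then be driven from Swift. A sketch, where the material path, asset name, and parameter name are all placeholders:
import RealityKit
import RealityKitContent  // the app's Reality Composer Pro package

func applyCustomShader(to model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    // Drive a promoted shader-graph input from code:
    try material.setParameter(name: "Intensity", value: .float(0.8))
    model.model?.materials = [material]
}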
I play this game called Sonic Forces: Speed Battle that's available in the App Store. I completed a quest outside of the app on a site called TapResearch for some rewards, as I've done before without problems, but after this one time I can no longer get back into the game without it crashing immediately. I tried deleting and reinstalling, but nothing. I even tried signing into a different account, but that didn't work either. So then I made a new Game Center account to see if that works, and it did, though all my progress has been reset. Does anyone know how to fix this?