Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic


No smooth animation transition in iOS 18
Hi, I’m facing an issue with SceneKit. I’m developing a 3D mobile game. I have a 3D character model and several skeletal animations (CAAnimation). I import both the model and the animations from Maya in *.dae format. The character’s animations play continuously one after another, with each new animation chosen at random. The transition between animations is smoothed by setting the fadeInDuration and fadeOutDuration properties. Here’s an example of the code:

import UIKit
import QuartzCore
import SceneKit

class TestAnimationController: UIViewController {

    var bodyNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        let scnView = SCNView(frame: self.view.bounds)
        scnView.backgroundColor = .black // Set your desired background color
        scnView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        let scene = SCNScene(named: "art.scnassets/scene/Base_room/ROOM5.scn")!
        bodyNode = collada2SCNNode(filepath: "art.scnassets/female/girl_body_races.dae")!
        bodyNode?.renderingOrder = 10
        scene.rootNode.addChildNode(bodyNode!)
        playIdleAnimation()

        scnView.scene = scene         // Assign the scene to the SCNView
        self.view.addSubview(scnView) // Add the SCNView to the main view
    }

    func collada2SCNNode(filepath: String) -> SCNNode? {
        if let scene = SCNScene(named: filepath) {
            return scene.rootNode.childNodes[0]
        } else {
            return nil
        }
    }

    func playIdleAnimation() {
        let array = [
            "art.scnassets/female/animations/idle/girl_idle_4.dae",
            "art.scnassets/female/animations/idle/girl_idle1.dae",
            "art.scnassets/female/animations/idle/girl_idle2.dae",
            "art.scnassets/female/animations/idle/Girl_idle3.dae",
        ]
        // animationWithSceneNamed is a helper extension (not shown) that
        // loads a CAAnimation from a .dae scene file.
        let animation = CAAnimation.animationWithSceneNamed(array.randomElement() ?? "")!
        self.setAnimationAdd(
            fadeInDuration: 1.0,
            fadeOutDuration: 1.0,
            keyTime: 0.99,
            animation,
            isLooped: false
        ) { [weak self] in
            guard let self = self else { return }
            self.playBoringAnimations()
        }
    }

    func playBoringAnimations() {
        let array = [
            "art.scnassets/female/animations/boring/girl_boring1.dae",
            "art.scnassets/female/animations/boring/girl_boring2.dae",
            "art.scnassets/female/animations/boring/girl_boring3.dae",
            "art.scnassets/female/animations/boring/girl_boring4.dae",
            "art.scnassets/female/animations/boring/girl_boring5.dae",
            "art.scnassets/female/animations/boring/girl_boring6.dae",
            "art.scnassets/female/animations/boring/girl_boring8.dae"
        ]
        let animation = CAAnimation.animationWithSceneNamed(array.randomElement() ?? "")!
        self.setAnimationAdd(
            fadeInDuration: 1.0,
            fadeOutDuration: 1.0,
            keyTime: 0.99,
            animation,
            isLooped: false
        ) { [weak self] in
            guard let self = self else { return }
            self.playIdleAnimation()
        }
    }

    func setAnimationAdd(fadeInDuration: CGFloat,
                         fadeOutDuration: CGFloat,
                         keyTime: CGFloat,
                         _ animation: CAAnimation,
                         isLooped: Bool,
                         completion: (() -> Void)?) {
        animation.fadeInDuration = fadeInDuration
        animation.fadeOutDuration = fadeOutDuration
        animation.repeatCount = isLooped ? .greatestFiniteMagnitude : 1
        animation.animationEvents = [
            SCNAnimationEvent(keyTime: keyTime) { _, _, _ in
                completion?()
            }
        ]
        bodyNode?.addAnimation(animation, forKey: "avatarAnimation")
    }
}

Everything worked perfectly until I updated to iOS 18. On a physical device, the animations now transition abruptly, without the smooth blending that was present in earlier iOS versions. The switch between them is very noticeable, as if the fadeInDuration and fadeOutDuration parameters were being ignored.
However, in the iOS 18 simulator, the animations still transition smoothly as before. Here are two example videos, iOS 17.5 vs. iOS 18:
https://youtube.com/shorts/jzoMRF4skAQ - iOS 17.5, smooth
https://youtube.com/shorts/VJXrZzO9wl0 - iOS 18, not smooth
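A possible direction to try (an editorial sketch, not a confirmed fix from this thread): take the blending out of fadeInDuration/fadeOutDuration and drive the crossfade manually with SCNAnimationPlayer and its blendFactor. The function below reuses the animationWithSceneNamed helper from the post above; the key names and ramp logic are otherwise hypothetical.

import SceneKit

// Sketch: manual crossfade between the currently playing animation and a
// newly loaded one, using SCNAnimationPlayer.blendFactor instead of
// fadeInDuration/fadeOutDuration.
func crossfade(on node: SCNNode, toSceneNamed path: String, duration: TimeInterval = 1.0) {
    guard let caAnimation = CAAnimation.animationWithSceneNamed(path) else { return }
    let incoming = SCNAnimationPlayer(animation: SCNAnimation(caAnimation: caAnimation))
    incoming.blendFactor = 0
    node.addAnimationPlayer(incoming, forKey: "incoming")
    incoming.play()

    // addAnimation(_:forKey:) wraps the animation in a player, so the
    // outgoing animation should be reachable under its original key.
    let outgoing = node.animationPlayer(forKey: "avatarAnimation")
    let ramp = SCNAction.customAction(duration: duration) { _, elapsed in
        let t = min(elapsed / CGFloat(duration), 1)
        incoming.blendFactor = t
        outgoing?.blendFactor = 1 - t
    }
    node.runAction(ramp)
}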
Replies: 0 · Boosts: 1 · Views: 839 · Activity: Oct ’24
CAMetalDisplayLink does not work on a separate thread
I am looking to implement CAMetalDisplayLink on a separate thread in a macOS application. I am basing my implementation on the following example project: Achieving Smooth Frame Rates with Metal Display Link. This project lets you configure whether a separate thread is used for rendering by setting RENDER_ON_MAIN_THREAD in GameConfig to 0. However, when I set it to use a separate thread, nothing is rendered. Stepping through the code shows that a separate thread is created, but a CAMetalDisplayLinkUpdate is never received. Does anyone know why this does not work?
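One thing worth checking (an editorial sketch, not a verified diagnosis of that sample): a CAMetalDisplayLink only delivers updates if it is added to a run loop on the thread that services it, and that run loop is actually kept running. A minimal sketch of that pattern, where RenderThread and the drawing code are hypothetical:

import Foundation
import QuartzCore

// Sketch: a dedicated thread that owns and services a CAMetalDisplayLink.
// The link is added to THIS thread's run loop, and the run loop is kept
// alive; otherwise no CAMetalDisplayLinkUpdate is ever delivered.
final class RenderThread: Thread, CAMetalDisplayLinkDelegate {
    private let metalLayer: CAMetalLayer

    init(metalLayer: CAMetalLayer) {
        self.metalLayer = metalLayer
        super.init()
        name = "RenderThread"
    }

    override func main() {
        let link = CAMetalDisplayLink(metalLayer: metalLayer)
        link.delegate = self
        link.add(to: .current, forMode: .default) // this thread's run loop
        while !isCancelled {
            RunLoop.current.run(mode: .default, before: .distantFuture)
        }
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        let drawable = update.drawable
        // Encode render commands targeting drawable.texture here,
        // then present the drawable.
        _ = drawable
    }
}

// Usage: RenderThread(metalLayer: layer).start()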
Replies: 1 · Boosts: 0 · Views: 662 · Activity: Oct ’24
Does OLED burn-in affect iPad Pro displays, and should I implement a screen saver in my always-on app?
Hi everyone, I’m developing an iPad app that will be running continuously with the screen always on — similar to a restaurant ordering system. I understand that some of the newer iPad Pro models are equipped with OLED displays. I'm concerned about the potential risk of screen burn-in due to static UI elements being displayed for extended periods. Does burn-in occur on the OLED iPad Pro models under such usage? Would it be advisable to implement a screen saver or periodically animate/change parts of the UI to prevent this? Any insights or best practices would be greatly appreciated. Thank you!
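For reference, a common mitigation pattern (an editorial sketch, not advice from this thread) is a periodic one-point "pixel shift" of the static UI, optionally combined with dimming during idle periods. Everything below, including the timing values, is hypothetical:

import UIKit

// Sketch: nudge a container view by one point every minute and allow
// dimming, two common burn-in mitigations for always-on OLED UIs.
final class BurnInMitigator {
    private var timer: Timer?

    func start(shifting container: UIView) {
        // The app is meant to stay on, so keep the device awake.
        UIApplication.shared.isIdleTimerDisabled = true
        timer = Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { _ in
            // Alternate between a one-point offset and the identity transform.
            let shouldShift = container.transform == .identity
            UIView.animate(withDuration: 0.5) {
                container.transform = shouldShift
                    ? CGAffineTransform(translationX: 1, y: 1)
                    : .identity
            }
        }
    }

    func dimWhileIdle() {
        // Lower brightness during idle periods to reduce burn-in risk.
        UIScreen.main.brightness = 0.3
    }
}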
Replies: 1 · Boosts: 0 · Views: 83 · Activity: Jun ’25
Can the spatial scene function be used outside of the Photos app?
I'm new here, so I'm not sure which topic this function belongs to... sorry about that! I watched the WWDC stream and I am really interested in this feature; I'm wondering if it could be used in my apps. I looked it up in the documentation, but I found it only supports visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
Replies: 2 · Boosts: 0 · Views: 160 · Activity: Jun ’25
SceneKit app randomly crashes with EXC_BAD_ACCESS in jet_context::set_fragment_texture
Every now and then my SceneKit game app crashes and I have no idea why. The SCNView has an overlaySKScene, so it might also be SpriteKit's fault. The stack trace is:

#0  0x0000000241c1470c in jet_context::set_fragment_texture(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, jet_texture*) ()
#27 0x000000010572fd40 in _pthread_wqthread ()

Does anyone have an idea where I could start debugging this, without being able to consistently reproduce it?
Replies: 12 · Boosts: 0 · Views: 1.3k · Activity: Nov ’24
RealityKit VideoMaterial renders pink on iOS 18
Our app is live, and it appears that since the iOS 18 update the VideoMaterial renders a pink/purple color instead of the video (picture attached). The audio is rendered properly. We found that it occurs on older devices: iPhone 11 and iPhone SE 2020. I've found this thread by Andy Jazz on Stack Overflow.

Steps to reproduce:
Create a plane for the video screen.
Apply a VideoMaterial using AVPlayerItem.
Anchor the model entity to an ARImageAnchor.

Expected outcome: the video should play as a material on the plane in RealityKit.
Actual outcome: on iOS 18, the plane appears pink, indicating the VideoMaterial isn't applied.

What I've tried:
- Verified the video URL is correct.
- Checked that the AVPlayerItem and VideoMaterial are initialised correctly.
- Ensured the AVPlayer is playing the video.

I also tried different formats (mov / mp4 / m4v) and verified that the video's status is readyToPlay. Any suggestions?
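For context, a minimal sketch of the setup described above (the plane size, resource group, and image name are hypothetical placeholders):

import RealityKit
import AVFoundation
import Foundation

// Sketch of the reported setup: a plane showing a VideoMaterial, anchored
// to a reference image. "ARResources" / "target" are placeholder names.
func makeVideoScreen(videoURL: URL) -> AnchorEntity {
    let player = AVPlayer(playerItem: AVPlayerItem(url: videoURL))
    let material = VideoMaterial(avPlayer: player)
    let screen = ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.2),
                             materials: [material])
    let anchor = AnchorEntity(.image(group: "ARResources", name: "target"))
    anchor.addChild(screen)
    player.play()
    return anchor
}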
Replies: 1 · Boosts: 0 · Views: 166 · Activity: Jun ’25
CGContext PDF/A intents
let dic: [AnyHashable: Any] = [
    kCGPDFXRegistryName: "http://www.color.org" as CFString,
    kCGPDFXOutputConditionIdentifier: "FOGRA43" as CFString,
    kCGPDFContextOutputIntent: "GTS_PDFX" as CFString,
    kCGPDFXOutputIntentSubtype: "GTS_PDFX" as CFString,
    kCGPDFContextCreateLinearizedPDF: "" as CFString,
    kCGPDFContextCreatePDFA: "" as CFString,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]

Hello, I would now like to export my PDFs as PDF/A. As far as I can tell, Core Graphics has the right options for this. Unfortunately, the documentation does not show what string value is required for 'kCGPDFContextCreatePDFA' or 'kCGPDFContextCreateLinearizedPDF'. What I have already tried: GTS_PDFA1, PDF/A-1, and true as CFString. (In the dictionary above, keys such as ...Author work perfectly.) In the Finder you can see these two options, which I would also like to implement in my app. Thank you in advance!
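For reference, a sketch of how such an auxiliary-info dictionary is passed when creating a PDF context. Note the loud assumption: the two Create* keys are treated as boolean flags here, which is exactly the unanswered question in this post, not a confirmed answer.

import CoreGraphics
import Foundation

// Sketch: create a PDF context with output-intent / PDF-A auxiliary info.
// ASSUMPTION: kCGPDFContextCreatePDFA and kCGPDFContextCreateLinearizedPDF
// are given kCFBooleanTrue; whether that is the required value is the
// open question above.
func makePDFContext(url: URL) -> CGContext? {
    var mediaBox = CGRect(x: 0, y: 0, width: 595, height: 842) // A4, points
    let auxiliaryInfo: [CFString: Any] = [
        kCGPDFXRegistryName: "http://www.color.org",
        kCGPDFXOutputConditionIdentifier: "FOGRA43",
        kCGPDFContextOutputIntent: "GTS_PDFX",
        kCGPDFXOutputIntentSubtype: "GTS_PDFX",
        kCGPDFContextCreatePDFA: kCFBooleanTrue as Any,          // assumption
        kCGPDFContextCreateLinearizedPDF: kCFBooleanTrue as Any, // assumption
        kCGPDFContextAuthor: "Placeholder",
        kCGPDFContextCreator: "Placeholder"
    ]
    return CGContext(url as CFURL, mediaBox: &mediaBox,
                     auxiliaryInfo as CFDictionary)
}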
Replies: 1 · Boosts: 0 · Views: 123 · Activity: Jun ’25
M1 GPU violates atomic_thread_fence across threadgroups
I have an M1 Pro with a 16-core GPU. When I run a shader with 8193 threads, atomic_thread_fence is violated across the boundary between thread 8191 (the last thread in the eighth threadgroup) and thread 8192 (the first thread in the ninth threadgroup). I've attached the Metal and Swift files, but I'll repost the relevant kernel here. It's a function that launches N threads to iterate through a binary tree from the leaves, where the first thread to reach a parent terminates and the second one populates it with the sum of the node's two children.

// clang-format off
kernel void sum(device const int& size,
                device const int* __restrict__ in,
                device int* __restrict__ out,
                device atomic_int* visited,
                uint i [[thread_position_in_grid]]) {
  // clang-format on
  int val = in[i];
  uint cur = (size + i - 1);
  out[cur] = val;
  atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
  cur = (cur - 1) / 2;
  int proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  while (proceed == 1) {
    uint left = 2 * cur + 1;
    uint right = 2 * cur + 2;
    uint val_left = out[left];
    uint val_right = out[right];
    uint val_cur = val_left + val_right;
    out[cur] = val_cur;
    if (cur == 0) {
      break;
    }
    cur = (cur - 1) / 2;
    atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
    proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  }
}

What I'm observing is that thread 8192 hits the atomic_fetch_add first and terminates, while thread 8191 hits it second (observes that thread 8192 had incremented it by 1) and proceeds into the loop. Thread 8191 reads out[16383] (which it populated with 8191) and out[16384] (which thread 8192 populated with 8192 prior to the atomic_thread_fence). Instead of reading 8192 from out[16384], though, it reads 0.

Maybe I'm missing something, but this seems like a pretty clear violation of the atomic_thread_fence, which (I thought) was supposed to guarantee that the write from thread 8192 to out[16384] would be visible to any thread observing the effects of the following atomic_fetch_add. Is atomic_fetch_add not a store operation? Modifying it to an atomic_store or atomic_exchange still results in the bug. Adding another atomic_thread_fence between the atomic_fetch_add and the reading of out also doesn't change anything.

I only begin to observe this at grid sizes of 8193 and upwards. That's 9 threadgroups per grid, which I assume could be related to my M1 Pro GPU having 16 cores. Running the same example on an A17 Pro GPU doesn't show any of this behavior up through a tested grid size of 4194303 (2^22 - 1), at which point testing larger grid sizes starts to run into other issues, so I can't test anything larger. Removing the atomic_thread_fences on both the M1 and A17 causes the test to fail at much smaller grid sizes, as expected.

Attachments: sum.metal, main.swift
Replies: 2 · Boosts: 0 · Views: 485 · Activity: Dec ’24
How to use imageblock_slice
Is there a working example of imageblock_slice with implicit layout somewhere? I get a compilation error when I write this:

imageblock_slice color_slice = img_blk.slice(frag->color);

Error:
No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'

It seems the syntax has changed since the Imageblocks presentation (https://developer.apple.com/videos/play/tech-talks/603/). I tried supplying the struct type of the image block between <>, but it still does not work.
Replies: 1 · Boosts: 0 · Views: 637 · Activity: Dec ’24
ModelEntity(named:in:) fails to load USD file from RealityKitContent bundle with misleading error?
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls. However, can anyone verify that the following is true?

If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason. I am not sure.

AND the error that ModelEntity(named:in:) throws in this case is

Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle

which would literally suggest that the file does not exist, instead of what I assume the error actually is, which is "the file exists but its entity hierarchy could not be flattened to a single ModelEntity"?

Is that an accurate description of the known behavior of ModelEntity(named:in:)? I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message. Thank you for any clarification you can provide.
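For anyone wanting to tell the two failure modes apart at runtime, a small sketch (the asset name is a placeholder, and realityKitContentBundle assumes the standard RealityKitContent package): try the flattening initializer first, then fall back to Entity(named:in:), which loads the same file without flattening.

import RealityKit
import RealityKitContent

// Sketch: if ModelEntity(named:in:) throws the "failed to find resource"
// error but Entity(named:in:) succeeds, the file clearly exists and the
// failure is about flattening, not about a missing resource.
func loadFlattenedOrFallBack(named name: String) async -> Entity? {
    do {
        return try await ModelEntity(named: name, in: realityKitContentBundle)
    } catch {
        print("ModelEntity(named:in:) failed: \(error)")
        return try? await Entity(named: name, in: realityKitContentBundle)
    }
}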
Replies: 2 · Boosts: 0 · Views: 135 · Activity: May ’25
SceneKit - different behavior when debugging
Hello, I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer. When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and run the game directly from the device (outside of Xcode), the SCNNode movement behaves inconsistently: it is sometimes smooth and sometimes choppy. The SCNNode movement is controlled using a UIPanGestureRecognizer. Do you have any ideas about what might be causing the issue?
Replies: 1 · Boosts: 0 · Views: 113 · Activity: May ’25
Safe Places to Find Dependable App Developers
Hello! Brand new to the Apple developer community, so hello, everyone! I'm a game developer; we just launched our first game on PC, and I'm looking to port it to iOS. Time is something I'm kind of short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Are there any good sites you'd recommend for finding programmers to port your game? It's fairly simple - just a visual novel. Any and all suggestions welcome! All the best! Elijah
Replies: 3 · Boosts: 0 · Views: 586 · Activity: Dec ’24
OS choosing performance state poorly for GPU use case
I am building a macOS desktop app (https://anukari.com) that is using Metal compute to do real-time audio/DSP processing, as I have a problem that is highly parallelizable and too computationally expensive for the CPU. However, it seems that with the way I am using the GPU, even when my app is fully compute-limited, the OS never increases the power/performance state. Because this is a real-time audio synthesis application, it's a huge problem to not be able to take advantage of the full clock speeds that the GPU is capable of, because the app can't keep up with real-time.

I discovered this issue while profiling the app using Instruments' Metal tracing (and Game tracing) modes. In the profiling configuration under "Metal Application" there is a drop-down to select the "Performance State." If I run the application under Instruments with Performance State set to Maximum, it runs amazingly well, and all my problems go away. For comparison, when I run the app on its own, outside of Instruments, the expensive GPU computation it's doing takes around 2x as long to complete, meaning that the app performs half as well.

I've done a ton of work to micro-optimize my Metal compute code, based on every scrap of information from the WWDC videos, etc. A problem I'm running into is that I think that the more efficient I make my code, the less it signals to the OS that I want high GPU clock speeds! I think part of why the OS is confused is that in most use cases, my computation can be done using only a small number of Metal threadgroups. I'm guessing that the OS heuristics see that only a small fraction of the GPU is saturated and fail to scale up the power/clock state.

I'm not sure what to do here; I'm in a bit of a bind. One possibility is that I intentionally schedule busy work: spin threadgroups just to waste energy and signal to the OS that I need higher clock speeds. This is obviously a really bad idea, but it might work.

Is there any other (better) way for my app to signal to the OS that it is doing real-time, latency-sensitive computation on the GPU and needs the clock speeds to be scaled up? Note that Game Mode is not really an option, as my app also runs as an AU plugin inside hosts like GarageBand, so it can't be made fullscreen, etc.
Replies: 6 · Boosts: 0 · Views: 872 · Activity: May ’25
Embedded links not clickable in PDFs for iOS devices
I have an SPFx React application where I print the HTML page content using the default JavaScript window.print() functionality. Once I save the page as a PDF from the print preview window and open it using Adobe Acrobat, the links within the content (e.g. Google) are not clickable and appear as plain text. I have tried printing random pages after searching for keywords in Google and saved the files as PDFs, but unfortunately the links are not clickable there either. To check whether it is an Adobe Acrobat issue, I performed the same print functionality from Android devices and shared the PDF file across iOS devices; in that case, when opened using Adobe Acrobat, the links are clickable. I am wondering whether this is related to how the default print functionality works on iPadOS and iOS devices. Any insights would be really helpful. Thanks!!! Note: The links are clickable on macOS as well as on Windows. #ios #ipados #javascript #spfx #react
Replies: 2 · Boosts: 0 · Views: 106 · Activity: May ’25
Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface (screenshot in the original post). On the visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in the Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails, and an error is logged to the console (also shown in the original post).

If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.

Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]." Thank you.
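For reference, a minimal sketch of the loading call being discussed (the material path and scene file name are hypothetical placeholders):

import RealityKit
import RealityKitContent

// Sketch: load a shader graph material authored in Reality Composer Pro
// and apply it to a model entity. "/Root/UnlitTextureMaterial" and
// "Scene.usda" are placeholder names.
func applyShaderGraphMaterial(to model: ModelEntity) async {
    do {
        let material = try await ShaderGraphMaterial(
            named: "/Root/UnlitTextureMaterial",
            from: "Scene.usda",
            in: realityKitContentBundle
        )
        model.model?.materials = [material]
    } catch {
        // On visionOS 1.x, this is where the post reports the failure.
        print("ShaderGraphMaterial load failed: \(error)")
    }
}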
Replies: 7 · Boosts: 0 · Views: 1.4k · Activity: May ’25
How to use custom segmentation occlusion in RealityKit?
I have a neural network model for segmentation. I integrated it successfully and am getting a grayscale mask image. Next, I need to apply the segmentation mask in RealityKit to achieve an occlusion effect (like person segmentation). I tried doing it through post-processing and other methods, but none of them worked. Is there any example of how this can be done in RealityKit?
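For reference, a minimal sketch of RealityKit's post-processing hook on iOS (ARView.renderCallbacks), showing only where a mask-compositing pass would go. The blit below is just a passthrough; an actual mask-blending compute kernel would be custom code not shown here:

import RealityKit
import Metal

// Sketch: install a post-process callback. A real implementation would
// replace the blit with a compute pass that blends the rendered frame
// with the segmentation mask to produce the occlusion effect.
func installPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        // context.sourceColorTexture is the rendered frame;
        // context.targetColorTexture is what gets displayed.
        guard let blit = context.commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
    }
}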
Replies: 1 · Boosts: 0 · Views: 641 · Activity: Oct ’24