Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

Each post below lists its reply, boost, and view counts and when it was created.

Asset catalog .webp warning
When I add a webp image to the asset catalog I get the warning: /Users/..../Assets.xcassets: The image set "Card-Back" references a file "Card-Back.webp", but that file does not have a valid extension. Everything works and I see all my images perfectly. How can I fix the 200+ warnings?
Replies: 0 · Boosts: 0 · Views: 261 · Created: 2d
Showing 3D/AR content on multiple pages in iOS with RealityView
Hi. I'm trying to show 3D or AR content on multiple pages of an iOS app, but I have found that if a RealityView is used earlier in a flow, later uses will never display the camera feed, even if the earlier uses do not involve any spatial tracking. For example, in the following simplified view, the RealityView on the second page will not display the camera feed even though the first view does not use it nor start a tracking session.

    struct ContentView: View {
        var body: some View {
            NavigationStack {
                VStack {
                    RealityView { content in
                        content.camera = .virtual
                        // showing model skipped for brevity
                    }
                    NavigationLink("Second Page") {
                        RealityView { content in
                            content.camera = .spatialTracking
                        }
                    }
                }
            }
        }
    }

What is the way around this so that 3D content can be displayed in multiple places in the app without preventing AR content from working? I have also found the same problem when wrapping an ARView for use in SwiftUI.
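A minimal sketch of one possible workaround, assuming the problem stems from two RealityViews being alive in the same hierarchy at once: keep only one RealityView in existence at a time by removing the non-AR one while the AR page is shown. This is untested and only illustrates the idea; ARPage and showModelPreview are hypothetical names.

    import SwiftUI
    import RealityKit

    struct ContentView: View {
        @State private var showModelPreview = true

        var body: some View {
            NavigationStack {
                VStack {
                    // Only build the non-AR RealityView while this page is in use.
                    if showModelPreview {
                        RealityView { content in
                            content.camera = .virtual
                            // model loading omitted, as in the original post
                        }
                    }
                    NavigationLink("Second Page") {
                        ARPage()
                            .onAppear { showModelPreview = false }
                            .onDisappear { showModelPreview = true }
                    }
                }
            }
        }
    }

    struct ARPage: View {
        var body: some View {
            RealityView { content in
                content.camera = .spatialTracking
            }
        }
    }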
Replies: 0 · Boosts: 0 · Views: 183 · Created: 3d
Creating a new app
I want to create a game on the App Store. The first page would show the terms and conditions and an option to start the game, with 200 categories: from Saudi Arabia, from TV series, from games, girls only, plus Qatar, the UAE, anime, Turkish series, tourism, countries, global companies, and electronics companies.
Replies: 2 · Boosts: 0 · Views: 99 · Created: 4d
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive 'file:///System/Library/PrivateFrameworks/IconRendering.framework/Resources/binary.metallib'. Available slices: applegpu_g13g, applegpu_g13s, applegpu_g13d, applegpu_g14g, applegpu_g14s, applegpu_g14d, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16g, applegpu_g16s, applegpu_g17g, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16s. Is this related to the performance of applications in macOS 26.2 on Intel Macs?
Replies: 3 · Boosts: 0 · Views: 201 · Created: 5d
Open Shading Language (OSL) in Metal
Hi. I'm a 3D designer, using Blender for most of my work. The most recent Blender conference discussed supporting the Open Shading Language (OSL) in their latest versions, which allows designers to write custom shaders for their workflows. At the moment, only Nvidia GPUs (via OptiX) can use this language for rendering (from what I understand), but Blender developers stated they are waiting on other GPU manufacturers to implement this feature as well. I'm not sure if there are any licensing issues here, but would this be something Apple could implement in Metal to make their hardware more attractive to the 3D design community? Any help or knowledge on this topic would be greatly appreciated.
Replies: 0 · Boosts: 0 · Views: 138 · Created: 5d
RealityKit .kinematic + collisions (visionOS)
Hi everyone, I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:
Kinematic mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic mode: I tried keeping it .dynamic and applying linear velocity toward the hand position, but the movement feels laggy and unresponsive.
Hybrid approach: I tried switching to .kinematic during DragGesture.onChanged and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.
Has anyone found a clean way to drag objects while maintaining solid collisions? Thanks for your help!
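For illustration, a minimal sketch of the velocity-steering variant mentioned above (the .dynamic approach), with the velocity scaled by a gain and clamped so the object tracks the hand more tightly. The gain and maxSpeed values are illustrative, the scene setup is omitted, the conversion assumes the dragged entity's parent is the unrotated content root, and this is a sketch rather than a confirmed fix for the laggy feel.

    import SwiftUI
    import RealityKit
    import simd

    struct DraggableWorkbenchView: View {
        var body: some View {
            RealityView { content in
                // workbench, gravity, and draggable .dynamic entities omitted
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        let entity = value.entity
                        guard let parent = entity.parent else { return }

                        // Where the hand wants the object to be, in the parent's space.
                        let target = value.convert(value.location3D, from: .local, to: parent)
                        let current = entity.position(relativeTo: parent)

                        // Steer with a velocity proportional to the remaining error,
                        // clamped so the body never tunnels through thin colliders.
                        let gain: Float = 12
                        let maxSpeed: Float = 2
                        var velocity = (target - current) * gain
                        let speed = length(velocity)
                        if speed > maxSpeed { velocity *= maxSpeed / speed }

                        entity.components.set(PhysicsMotionComponent(linearVelocity: velocity,
                                                                     angularVelocity: .zero))
                    }
                    .onEnded { value in
                        // Hand the object back to the simulation.
                        value.entity.components.set(PhysicsMotionComponent(linearVelocity: .zero,
                                                                           angularVelocity: .zero))
                    }
            )
        }
    }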
Replies: 2 · Boosts: 0 · Views: 672 · Created: 5d
Present Game Center Overlay Programmatically
As GKGameCenterViewController has been deprecated, it seems that GKAccessPoint is now the correct way to present the Game Center leaderboard. But the placement options for the GKAccessPoint are very limited and lead to misaligned UI that looks clunky, as the GKAccessPoint does not align with the system navigation toolbar. Am I missing something here, or am I just stuck with a lopsided UI now? I much preferred how this previously worked, where I could present the GKGameCenterViewController in a sheet from my own button.
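One thing worth checking, sketched below under the assumption that the deployment target includes the newer GKAccessPoint trigger methods: they present the Game Center overlay programmatically from your own button, without showing the floating access point at all. The leaderboard ID here is a hypothetical example.

    import GameKit

    // Hedged sketch: present the leaderboard overlay from your own UI via
    // GKAccessPoint.trigger, keeping the floating access point hidden.
    func showLeaderboardFromOwnButton() {
        GKAccessPoint.shared.isActive = false   // keep the floating widget hidden
        GKAccessPoint.shared.trigger(
            leaderboardID: "com.example.highscores",   // hypothetical leaderboard ID
            playerScope: .global,
            timeScope: .allTime
        ) {
            // called when the overlay is dismissed
        }
    }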
Replies: 1 · Boosts: 0 · Views: 188 · Created: 6d
Core Text incremental redraw glitch: overlapping glyphs during editing
During editing in Pages (or Word) I am getting these glitches (see attachment). This started after the last update to macOS 26.3 (beta). I also removed 2 recent installations (BlackHole audio driver and kDrive/Infomaniak), but the trouble is still there.
27" iMac 2020 (Intel), i7 3.8 GHz, AMD Radeon Pro 5500 XT 8 GB, 24 GB RAM, macOS Tahoe 26.3 (beta)
Tried restarting in safe mode and checked fonts. Talked to the assistant to get a solution, but no ...)
Thanks for any advice, Pieter (not a developer, so please keep it simple 🙏🏻)
Replies: 3 · Boosts: 0 · Views: 269 · Created: 1w
Optimizing HZB Mip-Chain Generation and Bindless Argument Tables in a Custom Metal Engine
Hi everyone, I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture. While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge: I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.

    && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
    if (canBuildHzb) {
        MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
        hzbInit->setComputePipelineState(m_hzbInitPipeline);
        hzbInit->setTexture(m_depthTexture, 0);
        hzbInit->setTexture(m_hzbMipViews[0], 1);
        if (m_pointClampSampler) {
            hzbInit->setSamplerState(m_pointClampSampler, 0);
        } else if (m_linearClampSampler) {
            hzbInit->setSamplerState(m_linearClampSampler, 0);
        }
        const uint32_t hzbWidth = m_hzbMipViews[0]->width();
        const uint32_t hzbHeight = m_hzbMipViews[0]->height();
        const uint32_t threads = 8;
        MTL::Size tgSize = MTL::Size(threads, threads, 1);
        MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                       (hzbHeight + threads - 1) / threads * threads, 1);
        hzbInit->dispatchThreads(gridSize, tgSize);
        hzbInit->endEncoding();
        for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
            MTL::Texture* src = m_hzbMipViews[mip - 1];
            MTL::Texture* dst = m_hzbMipViews[mip];
            if (!src || !dst) { continue; }
            MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
            downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
            downEncoder->setTexture(src, 0);
            downEncoder->setTexture(dst, 1);
            const uint32_t mipWidth = dst->width();
            const uint32_t mipHeight = dst->height();
            MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                           (mipHeight + threads - 1) / threads * threads, 1);
            downEncoder->dispatchThreads(downGrid, tgSize);
            downEncoder->endEncoding();
        }
        if (m_instanceCullHzbPipeline) {
            dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
        }
    }

My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder using an MTLBarrier between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth downsampling on TBDR?
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90 Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
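To make the first question concrete, here is a minimal sketch of the single-encoder variant: the whole HZB chain recorded into one compute encoder, with a texture-scope memory barrier between mip dispatches instead of ending and recreating the encoder. It is written in Swift for brevity (metal-cpp exposes the same calls), the pipeline and texture names mirror the post, the sampler binding is omitted, and it makes no claim about which variant is faster on TBDR; that trade-off is exactly what the question asks.

    import Metal

    func encodeHZBChain(commandBuffer: MTLCommandBuffer,
                        hzbInitPipeline: MTLComputePipelineState,
                        hzbDownsamplePipeline: MTLComputePipelineState,
                        depthTexture: MTLTexture,
                        hzbMipViews: [MTLTexture]) {
        guard let encoder = commandBuffer.makeComputeCommandEncoder(),
              let mip0 = hzbMipViews.first else { return }

        let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)

        // Mip 0: reduce the depth buffer into the top level of the HZB.
        encoder.setComputePipelineState(hzbInitPipeline)
        encoder.setTexture(depthTexture, index: 0)
        encoder.setTexture(mip0, index: 1)
        encoder.dispatchThreads(MTLSize(width: mip0.width, height: mip0.height, depth: 1),
                                threadsPerThreadgroup: threadsPerGroup)

        // Each level reads the previous one, so separate dispatches with a barrier.
        for mip in 1..<hzbMipViews.count {
            encoder.memoryBarrier(scope: .textures)

            let src = hzbMipViews[mip - 1]
            let dst = hzbMipViews[mip]
            encoder.setComputePipelineState(hzbDownsamplePipeline)
            encoder.setTexture(src, index: 0)
            encoder.setTexture(dst, index: 1)
            encoder.dispatchThreads(MTLSize(width: dst.width, height: dst.height, depth: 1),
                                    threadsPerThreadgroup: threadsPerGroup)
        }

        encoder.endEncoding()
    }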
Replies: 0 · Boosts: 0 · Views: 325 · Created: 1w
Many CGEventCreateKeyboardEvent calls in quick succession causing function and dictation buttons to be pressed?
Hi! I hope everyone reading is doing well. I am working on developing a reinforcement learning agent that involves sending scan codes to a window, which I've been doing by sending virtual scan codes with CGEventCreateKeyboardEvent per the docs. There is no event source when I send the keyboard events. However, when many keyboard events are happening (with the keys 'q', 'w', 'e', 'r', 'f', 'd', 's', space, and the arrow keys) in quick succession (<250 ms), the enable-dictation popup or the function-button emoji popup appears for seemingly no reason. I have verified that I am using the correct scan codes for these keypresses, so I am wondering what else could cause this to happen. It is as if I am choosing to press F5 or Fn. It does not happen when 'a' is the only button being pressed in quick succession. One thing that I have not been able to easily find is the scan code for dictation or the Function key. Do these scan codes overlap somehow? Thank you all for the help! Hunter
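A minimal sketch of one thing to try, assuming the popups come from stray modifier state being inherited when events are created with a nil source: create the events with an explicit HID event source and clear the modifier flags on each event. This is an assumption about the cause, not a confirmed fix, and the key code used is just an example.

    import CoreGraphics

    // Post a single key-down or key-up with an explicit source and no inherited flags.
    func postKey(_ keyCode: CGKeyCode, down: Bool) {
        let source = CGEventSource(stateID: .hidSystemState)
        guard let event = CGEvent(keyboardEventSource: source,
                                  virtualKey: keyCode,
                                  keyDown: down) else { return }
        event.flags = []                 // don't inherit the current modifier state
        event.post(tap: .cghidEventTap)
    }

    // Usage: a full press is a key-down followed by a key-up.
    postKey(0x0C, down: true)   // 'q' down
    postKey(0x0C, down: false)  // 'q' up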
Replies: 2 · Boosts: 0 · Views: 403 · Created: 1w
SpriteKit scene used as SCNView.overlaySKScene crashes due to SKShapeNode
I recently published my first game on the App Store. It uses SceneKit with a SpriteKit overlay. All crashes Xcode has downloaded for it so far are related to some SpriteKit/SceneKit internals. The most common crash is caused by SKCShapeNode::_NEW_copyRenderPathData. What could cause such a crash? crash.crash
While developing this game (and the BoardGameKit framework that appears in the crash log) over the years I experienced many crashes presumably caused by the SpriteKit overlay (I opened a post, "SceneKit app randomly crashes with EXC_BAD_ACCESS in jet_context::set_fragment_texture", about such a crash in September 2024), and other people on the internet also mention that they experience crashes when using SpriteKit as a SceneKit overlay. Should I use a separate SKView and lay it on top of the SCNView rather than setting SCNView.overlaySKScene? That seemed to solve the crashes for someone on Stack Overflow, but is it also encouraged by Apple? I know SceneKit is deprecated, but according to Apple critical bugs would still be fixed. Could this be considered a critical bug?
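For reference, a minimal sketch of the separate-SKView arrangement being asked about: a transparent SKView layered on top of the SCNView instead of setting SCNView.overlaySKScene. Whether Apple encourages this over overlaySKScene is exactly the open question; the snippet only shows how the layering itself could be set up, with the scene contents omitted.

    import UIKit
    import SceneKit
    import SpriteKit

    final class GameViewController: UIViewController {
        private let scnView = SCNView()
        private let skView = SKView()

        override func viewDidLoad() {
            super.viewDidLoad()

            scnView.frame = view.bounds
            scnView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            scnView.scene = SCNScene()            // your 3D scene here
            view.addSubview(scnView)

            skView.frame = view.bounds
            skView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            skView.allowsTransparency = true      // let the 3D view show through
            skView.backgroundColor = .clear

            let overlayScene = SKScene(size: view.bounds.size)   // your HUD scene here
            overlayScene.backgroundColor = .clear
            skView.presentScene(overlayScene)
            view.addSubview(skView)               // on top of the SCNView
        }
    }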
Replies: 4 · Boosts: 0 · Views: 587 · Created: 1w
Krazy Krownz multiplayer option not working for Game Center!
When trying to play with friends, Krazy Krownz doesn’t allow me to click multiplayer even though my Apple Game Center is connected and my friends’ Apple Game Center is connected as well. I even tried sending an invite from Apple Game Center to friends, and Krazy Krownz doesn’t even show up on the list of available multiplayer games. I’ve signed out and back in, and the same issue remains. I’ve tried to contact the game developer, but the website doesn’t work.
Replies: 1 · Boosts: 0 · Views: 235 · Created: 1w
Xcode Metal Trace
The code is downloaded from Apple's official Metal 4 sample [https://developer.apple.com/documentation/metal/drawing-a-triangle-with-metal-4?language=objc]. Enable Metal GPU trace in the macOS scheme and trace a frame in Xcode. Xcode may show a segmentation fault in the app from some 'GTTrace' function when clicking the trace button. When replaying a .gputrace file, Xcode may crash, throw an internal error, or throw an XPC error. The example code using the old Metal renderer can trace without any problem and everything works fine.
Test environment: Xcode Version 26.2 (17C52), macOS 26.2 (25C56), M1 Pro 16GB A2442
Replies: 2 · Boosts: 0 · Views: 429 · Created: 1w
receivedTurnEventForMatch giving stale data
In my turn-based game, I receive the GKListener event receivedTurnEventForMatch and decode the match.matchData. On occasion, the matchData is clearly stale and is from the previous turn. If I bring the matchmaker view controller up and select that same match, the data is not stale, so it's not a matter of not calling endTurn. I have tried both loadMatchWithID and loadMatchesWithCompletionHandler after receiving receivedTurnEventForMatch, but the data is still stale. Advice?
Replies: 2 · Boosts: 0 · Views: 589 · Created: 2w
- (BOOL) contentsAreFlipped needs to be true for .nib layouts
I have an odd bug: if I use initWithFrame as the init routine for an NSView subclass that uses layers, I don't see this bug. But if I embed this view into a storyboard with a .nib file and use initWithCoder, I need to return YES from - (BOOL)contentsAreFlipped in the NSView subclass. If I don't, the CALayer actually renders from 0,0 in the view upwards and off the window. The frame sizes for the NSView and the CALayer are good when I see them in updateLayer. Obviously I have a fix, but I would like to understand why.
Replies: 0 · Boosts: 0 · Views: 223 · Created: 2w
macOS SwiftUI app with external 4K camera & sensors for Hospital Avatar: ARKit, MLX, and Thermal feasibility?
We are developing a standalone AI avatar application for hospital reception kiosks using a Mac mini (M2/M4). The app runs on SwiftUI + RealityKit, displays on a 75-inch monitor, and utilizes a USB-connected 4K camera and external sensors (LiDAR/mmWave). We have several technical concerns regarding the transition from iPadOS to macOS. Could you please provide insights on the following?
ARKit/Vision framework on macOS with an external camera: On iPadOS, ARKit provides robust face tracking. On macOS with an external USB 4K camera, can we achieve real-time face tracking (expression/gaze/depth) with the Vision framework or ARKit comparable to iPadOS performance? Are there any specific limitations for accessing the Neural Engine via the Vision framework for real-time 4K video analysis on macOS?
Accessing external hardware (LiDAR/sensors) in the sandbox: We plan to connect external LiDAR and mmWave sensors (e.g., Akara) via USB/Bluetooth. Is it feasible to communicate with these custom drivers/devices within the App Sandbox environment? Would DriverKit be required, or can we use standard serial communication APIs?
On-device LLM (MLX) & thermals: We intend to run a local LLM (e.g., Llama 3 using the MLX framework) for offline conversation, alongside 3D rendering. With the M2/M4 Mac mini fan design, is there a risk of thermal throttling during 10+ hours of continuous operation (simultaneous Core ML + 3D rendering)? Is the Mac Studio recommended over the Mac mini for this thermal profile?
Long-running speech APIs: Are there any known issues (memory leaks, API limits) when using the Speech framework and AVSpeechSynthesizer continuously for over 10 hours daily?
3D display output: Are there any macOS constraints for rendering a SwiftUI window in a specific 3D format (e.g., side-by-side) and outputting it via HDMI to a 3D digital signage display (fixed refresh rate/resolution)?
Thank you for your assistance.
Replies: 0 · Boosts: 0 · Views: 244 · Created: 2w
MetalFX FrameInterpolator assertion: `Color texture width mismatch from descriptor` even when all texture sizes match
I am integrating the MetalFX FrameInterpolator into a custom Unity RenderGraph–based render pipeline (C++ native plugin + C# render passes), and I am hitting the following assertion at runtime:
/MetalFXDebugError.h:29: failed assertion `Color texture width mismatch from descriptor'
What makes this confusing is that all input/output textures have the correct width and height, and they exactly match the values specified in the MTLFXFrameInterpolatorDescriptor.
Setup: Input resolution 1024 x 512; output resolution 2048 x 1024. The MTLFXTemporalScaler is created first and then passed into the MTLFXFrameInterpolator. The TemporalScaler and FrameInterpolator descriptors use the same input/output sizes and formats. All Metal textures have no parentTexture, are 2D textures, and match the descriptor sizes exactly (verified via logging).
Texture bindings at encode time:

    frameInterpolator.colorTexture     = mtlTexColor;     // 1024 x 512
    frameInterpolator.prevColorTexture = mtlTexPrevColor; // 1024 x 512
    frameInterpolator.motionTexture    = mtlTexMotion;    // 1024 x 512
    frameInterpolator.depthTexture     = mtlTexDepth;     // 1024 x 512
    frameInterpolator.uiTexture        = mtlTexUI;        // 2048 x 1024
    frameInterpolator.outputTexture    = mtlTexOutput;    // 2048 x 1024

All widths/heights are logged and match:

    Color     : 1024 x 512  (input)
    PrevColor : 1024 x 512  (input)
    Motion    : 1024 x 512  (input)
    Depth     : 1024 x 512  (input)
    UI        : 2048 x 1024 (output)
    Output    : 2048 x 1024 (output)

The TemporalScaler works correctly on its own. The assertion only occurs when using the FrameInterpolator.
Important detail about colorTexture: Originally, colorTexture was copied from BuiltinRenderTextureType.CurrentActive. After reading that this might violate MetalFX semantics, I changed the pipeline so that colorTexture now comes from a dedicated private RenderGraph texture: it is not the backbuffer, it is not a drawable, it is not used as a final output, and it is created before UI rendering. Despite this, the assertion still occurs.
Questions:
Can uiTexture for MTLFXFrameInterpolator legally come from a texture copied from BuiltinRenderTextureType.CurrentActive?
More generally, are there additional hidden constraints on colorTexture / prevColorTexture (such as Metal usage, storageMode, aliasing, or hazard tracking) that could cause this assertion, even when sizes match?
Does FrameInterpolator require colorTexture and prevColorTexture to be created in a very specific way (e.g. non-aliased, ShaderRead usage, identical Metal resource properties)?
Any clarification on the exact semantic requirements for colorTexture, prevColorTexture, or uiTexture in the MetalFX FrameInterpolator would be greatly appreciated.
Replies: 4 · Boosts: 0 · Views: 547 · Created: 2w
Clarifying when Game Center activity events fire relative to authentication
Hello,
In our game we enforce an age gate before showing Game Center sign-in. Only after the user passes the age gate do we call GKLocalPlayer.localPlayer.authenticateHandler. The reason I'm asking is that we want to reliably detect if the game was launched from a Game Center activity in the Games app (iOS 26+). If the user prefers to enter via activities, we don't want to miss that event during cold start.
Our current proposal (sketched below) is:
Register a GKLocalPlayerListener early in didFinishLaunchingWithOptions: so the app is ready to catch events.
Queue any incoming events in our dispatcher.
Only process those events after the user passes the age gate and authentication succeeds.
My questions are:
Does player:wantsToPlayGameActivity:completionHandler: ever fire before authentication, or only after the local player is authenticated?
If it only fires after authentication, is our "register early but gate processing" approach the correct way to ensure we don't miss activity launches?
Is there any recommended pattern to distinguish "activity launch" vs. "normal launch" in this age-gate scenario?
We want to respect Apple's age gate requirements, but also ensure activity launches are not lost if the user prefers that entry point. Sorry if this is a stupid question — I just want to be sure we're following the right pattern. Thanks for any clarification or best-practice guidance!
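A minimal sketch of the "register early, gate processing" dispatcher described above, assuming listener callbacks can arrive before the age gate and authentication complete. The class and method names are hypothetical, and the invite callback stands in for any listener event; the activity callback named in the post would be queued the same way.

    import GameKit

    final class GameCenterEventDispatcher: NSObject, GKLocalPlayerListener {
        static let shared = GameCenterEventDispatcher()

        private var pendingEvents: [() -> Void] = []
        private var canProcess = false   // becomes true after age gate + auth succeed

        // Call from application(_:didFinishLaunchingWithOptions:) so no event is missed.
        func registerEarly() {
            GKLocalPlayer.local.register(self)
        }

        // Call once the age gate has been passed and authenticateHandler reported success.
        func unlock() {
            canProcess = true
            pendingEvents.forEach { $0() }
            pendingEvents.removeAll()
        }

        // Run now if allowed, otherwise buffer for later processing.
        private func enqueue(_ work: @escaping () -> Void) {
            if canProcess { work() } else { pendingEvents.append(work) }
        }

        // Example listener callback: queue it like any other event.
        func player(_ player: GKPlayer, didAccept invite: GKInvite) {
            enqueue {
                // present matchmaking for the invite here
            }
        }
    }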
Replies: 2 · Boosts: 0 · Views: 368 · Created: 2w