Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under the Graphics & Games topic

Background GPU Access availability
I would love to use Background GPU Access to do some video processing in the background. However, the documentation for BGContinuedProcessingTaskRequest.Resources.gpu clearly states: "Not all devices support background GPU use. For more information, see Performing long-running tasks on iOS and iPadOS." Is there a list of currently released devices that do (or don't) support background GPU usage? That would help us understand what part of our user base can use this feature (and what hardware we need to test on as developers). For example, it seems that it isn't supported on an iPad Pro M1 with the current iOS 26 beta, and the simulators also don't seem to support the background GPU resource. It would be great to understand what hardware is capable of using this feature!
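For anyone comparing notes, a minimal submission sketch in Swift against the beta API (the identifier is hypothetical, and the supportedResources check reflects my reading of the iOS 26 beta docs; verify both against the current SDK):

    import BackgroundTasks

    func submitBackgroundGPUWork() throws {
        // Assumption from the beta docs: devices that cannot run GPU work
        // in the background do not include .gpu in the supported resources.
        guard BGTaskScheduler.supportedResources.contains(.gpu) else {
            print("Background GPU access unavailable on this device")
            return
        }

        // Hypothetical identifier; it must also be listed under
        // BGTaskSchedulerPermittedIdentifiers in Info.plist.
        let request = BGContinuedProcessingTaskRequest(
            identifier: "com.example.videoprocessor.export",
            title: "Processing video",
            subtitle: "Applying filters"
        )
        request.requiredResources = .gpu

        try BGTaskScheduler.shared.submit(request)
    }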
Replies: 4 · Boosts: 0 · Views: 885 · Activity: Jul ’25
vImageConverter_CreateWithCGImageFormat Fails with kvImageInvalidImageFormat When Trying to Convert CMYK to RGB
So I get JPEG data in my app. Previously I was using the higher-level NSBitmapImageRep API and just feeding the JPEG data to it. But now I've noticed that on Sonoma, if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and am trying to use the Accelerate API to convert it to another format (to hopefully work around the issue):

    CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider, NULL, shouldInterpolate, kCGRenderingIntentDefault);

Now I call vImageConverter_CreateWithCGImageFormat with the following values for the source and destination formats:

Source format (derived from sourceCGImage): bitsPerComponent = 8, bitsPerPixel = 32, colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile), bitmapInfo = kCGBitmapByteOrderDefault, version = 0, decode = 0x000060000147f780, renderingIntent = kCGRenderingIntentDefault

Destination format: bitsPerComponent = 8, bitsPerPixel = 24, colorSpace = (DeviceRGB), bitmapInfo = 8197, version = 0, decode = 0x0000000000000000, renderingIntent = kCGRenderingIntentDefault

But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. If I change the destination format to 32 bitsPerPixel and include alpha in the bitmapInfo, vImageConverter_CreateWithCGImageFormat no longer returns an error, but I get a black image, just like with NSBitmapImageRep.
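One workaround that sidesteps vImage format negotiation entirely is to let CoreGraphics do the conversion: draw the CMYK-backed CGImage into an sRGB bitmap context and read back the converted image. A minimal Swift sketch (assuming the source CGImage itself decodes correctly):

    import CoreGraphics

    // Redraws `source` into an sRGB bitmap context, letting CoreGraphics
    // perform the CMYK -> RGB conversion.
    func convertToRGB(_ source: CGImage) -> CGImage? {
        guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
              let context = CGContext(
                  data: nil,
                  width: source.width,
                  height: source.height,
                  bitsPerComponent: 8,
                  bytesPerRow: 0,   // let CoreGraphics choose the stride
                  space: colorSpace,
                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
              ) else { return nil }

        context.draw(source, in: CGRect(x: 0, y: 0,
                                        width: source.width,
                                        height: source.height))
        return context.makeImage()
    }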
Replies: 14 · Boosts: 0 · Views: 1.4k · Activity: Aug ’25
App not showing in Game Center “All Activity” after release
Hello — I shipped an App Store build that signs in to Game Center using the Apple Unity Plugins (GameKit). The login banner appears, but my app still doesn't show up in Game Center's "All Activity" ("You started playing XXX 2d ago").

What I've done:
- Call await GKLocalPlayer.Authenticate();
- "Game Center" is enabled for the current version in App Store Connect
- Confirmed that other App Store games do appear under "All Activity" on the same device/account

Timeline: this is the first version that enables Game Center (not the app's first release), and it has been about 2 hours since this build went live.

Questions:
- Is authentication alone sufficient for "Recently Played," or is at least one Game Center component (leaderboards, achievements, activities, multiplayer) required?
- Is there a typical propagation delay before "Recently Played" starts showing a newly enabled app/version?
- Is there anything else I should configure in App Store Connect or entitlements to make "Recently Played" visible?

Thanks for any help.
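For comparison, the native-side equivalent of that Unity call is setting the local player's authenticate handler directly; a sketch in Swift (the presenting view controller is whatever hosts your game UI):

    import UIKit
    import GameKit

    // Setting authenticateHandler starts Game Center sign-in; the handler
    // may first deliver a view controller that must be presented.
    func authenticatePlayer(presenting presenter: UIViewController) {
        GKLocalPlayer.local.authenticateHandler = { viewController, error in
            if let viewController {
                presenter.present(viewController, animated: true)
            } else if GKLocalPlayer.local.isAuthenticated {
                print("Signed in as \(GKLocalPlayer.local.displayName)")
            } else if let error {
                print("Game Center authentication failed: \(error)")
            }
        }
    }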
Replies: 2 · Boosts: 0 · Views: 541 · Activity: Aug ’25
MetalFX upscaler/denoiser and instant changes
Hi, what's the best way to handle drastic changes in scene characteristics with the new MTLFXTemporalDenoisedScaler? Let's say a visible object in the scene radically changes its material properties. I can modify the albedo and roughness textures accordingly, but I suspect the history will be corrupted: blending visual information between the new frame and the previous ones might be nonsense. I guess the problem is the same when objects appear or disappear instantly. Does the upscaler manage these events for us (by lowering blending), or should we use the reactive mask or the denoise strength mask or something like that to handle them?
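For the instant-change case, the temporal scalers expose a per-frame history reset: MTLFXTemporalScaler has a reset property that discards accumulated history for the next frame, and I would expect the denoised variant to follow the same model (an assumption on my part; check the Metal 4 headers). A sketch:

    import Metal
    import MetalFX

    // Sketch: drop temporal history on a hard cut instead of letting the
    // scaler blend against stale frames. `sceneChangedAbruptly` stands in
    // for whatever signal the engine has (material swap, camera cut, ...).
    func encodeUpscale(scaler: any MTLFXTemporalScaler,
                       commandBuffer: MTLCommandBuffer,
                       sceneChangedAbruptly: Bool) {
        // When true, the scaler ignores accumulated history this frame.
        scaler.reset = sceneChangedAbruptly
        scaler.encode(commandBuffer: commandBuffer)
    }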
Replies: 2 · Boosts: 0 · Views: 124 · Activity: Jul ’25
Editor OK, but batch build gets: Cannot locate a Release or Debug Apple.GameKit native library for macOS
After running build.py -p Core GameKit and adding the tarballs to the Unity project in Assets/ExternalPackages, no packages seem to be found when running the build on our continuous integration system. This was not the case when the project was opened in the Editor. It looks like in Apple.Core, the ApplePlugInEnvironment hasn't run the OnEditorUpdate function, so the _appleUnityPackages dictionary is empty. A change to ApplePlugInEnvironment.cs seemed to fix the issue:

    public static AppleNativeLibrary GetLibrary(string packageDisplayName, string appleBuildConfig, string applePlatform)
    {
        // ?FIX?: If we're not in the editor, we might not have updated the package list.
        if (_appleUnityPackages.Count == 0 && _updateState == UpdateState.Initializing)
        {
            OnEditorUpdate(); // UpdateState.Initializing
            OnEditorUpdate(); // UpdateState.Updating
        }
        // ... original lookup logic continues here ...
    }

I'm not sure if this is something we're doing incorrectly; the documentation for the plug-in mostly covered building the package.
Replies: 2 · Boosts: 0 · Views: 691 · Activity: Dec ’24
Xcode compile gets stuck when adding new source files or functions to existing files
Hello, we are working on an iOS game project, and as it progresses, the project grows larger and larger. Because we are using other game dependencies and libraries, "larger and larger" refers to the whole project; the source files integrated and compiled by Xcode are not many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, the Xcode compile gets stuck at "Indexing | Initializing datastore" forever and cannot produce a final build. macOS 15.1, Xcode 16.2. Can you provide any solutions to this problem? Also submitted Feedback ID #FB18432749.
Replies: 4 · Boosts: 0 · Views: 667 · Activity: Aug ’25
Compute kernel fails to compile when calling texture.read()
If I compile a compute kernel with a call to texture.read(), it fails with the following error: "Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}". This error occurs on both macOS and iOS 26 Beta 5, but not when running on a simulator or in a playground. It does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method. A workaround would be to use a sampler, but according to the feature tables, all platforms support reading from textures of all formats. Below is a minimal example which produces the error:

    let device = MTLCreateSystemDefaultDevice()!
    let library = device.makeDefaultLibrary()!
    let computeFunction = library.makeFunction(name: "compute_test")!
    do {
        let pipeline = try device.makeComputePipelineState(function: computeFunction)
        debugPrint(pipeline)
    } catch {
        debugPrint("Metal 3 failed with error:\n\(error)")
    }

And the kernel:

    #import <metal_stdlib>
    using namespace metal;

    kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                             texture2d<float, access::read> in [[texture(0)]],
                             texture2d<float, access::write> out [[texture(1)]])
    {
        out.write(in.read(gid), gid);
    }

I filed feedback FB19530049.
Replies: 1 · Boosts: 0 · Views: 182 · Activity: Aug ’25
Metal: Non-uniform thread groups unsupported in Simulator? Is it?
My app is running compute shaders that use non-uniform threadgroups. When I run the app in the debugger with a simulator target, the app crashes on encoder.dispatchThreads with the error message: "Dispatch Threads with Non-Uniform Threadgroup Size is not supported on this device." Earlier, the log output states: "Metal Shader Validation is unsupported for Simulator." However: when I stop the debugger and just run the app in the simulator without the debugger attached, the app runs fine and does not crash. The SwiftUI preview that also triggers the compute shader when preparing data likewise runs fine without a crash. I can run and debug on a real device no problem; I just don't have all sizes available. Is there anything I need to check in my lldb/simulator configuration? It obviously does work; it's just the debugger that can't deal with it? Any input would be nice, as this really slows me down: I have to be extremely careful when debugging on the simulator.
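For reference, the usual guard is to query the GPU family before using dispatchThreads and fall back to a padded uniform dispatch; per the Metal feature tables, non-uniform threadgroup sizes require the Apple4 (or Mac2) family. A sketch (in the fallback path the kernel must bounds-check its thread position against the real grid size):

    import Metal

    func dispatch(encoder: MTLComputeCommandEncoder,
                  device: MTLDevice,
                  pipeline: MTLComputePipelineState,
                  gridSize: MTLSize) {
        let w = pipeline.threadExecutionWidth
        let h = pipeline.maxTotalThreadsPerThreadgroup / w
        let threadsPerGroup = MTLSize(width: w, height: h, depth: 1)

        if device.supportsFamily(.apple4) || device.supportsFamily(.mac2) {
            // Non-uniform threadgroup sizes are supported on this device.
            encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerGroup)
        } else {
            // Round up to whole threadgroups; out-of-bounds threads must
            // be guarded inside the kernel.
            let groups = MTLSize(width: (gridSize.width + w - 1) / w,
                                 height: (gridSize.height + h - 1) / h,
                                 depth: 1)
            encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
        }
    }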
Replies: 2 · Boosts: 0 · Views: 568 · Activity: Mar ’25
Is there any future for screensavers on macOS?
I haven't been looking at screensavers for a long time because of Apple's lack of will (or resources?) to provide a public version of the modern private SDK that Apple itself has been using for a very long time now. I'm now looking at the Screen Saver pane in System Settings (the what-if version of System Preferences from an alternate universe where all screens are in portrait mode). In macOS Sequoia, it seems like 3rd party screensavers are not welcome, considering that they are relegated to the "Other" section at the bottom of the list and you have to click Show All to start seeing them. I also had a quick look at macOS Tahoe Beta 3, and it looks like all the real screensavers are gone (3rd party and the ones from Apple: Hello, Message, Flurry, etc.), or at least it takes a Nobel Prize to find them (and the Search field is not useful). I tried to install a 3rd party screen saver on macOS Tahoe Beta 3; it doesn't show up in the list. To summarize: no public access to modern APIs AFAIK; a UI that is hostile to 3rd party screen savers on macOS Sequoia; and apparently only screensavers that are slideshows or movies curated by Apple in macOS Tahoe b3. Hence the question: is there any future for screen savers on macOS? Because if there's none, I won't waste my time trying to update some old screen savers.
Replies: 3 · Boosts: 0 · Views: 520 · Activity: Aug ’25
After updating CAMetalLayer.drawableSize, [CAMetalLayer nextDrawable:] frequently takes ~1s
I have a bare-bones Metal app setup where I attach a CAMetalLayer to a window that inherits from NSWindow, with a custom delegate. Everything else is vanilla. I'm also using metal-cpp and the Metal shader converter. I'm running into an issue where the application runs fine in the beginning, but once I resize the window, it starts hitching. It turns out that [CAMetalLayer nextDrawable:] frequently (but not always) takes around a full second (plus or minus a few milliseconds) to return once drawableSize has been updated. I've tried setting allowsNextDrawableTimeout to false, which doesn't work; it returns a valid drawable after a second instead of nil. Setting displaySyncEnabled to false reduces the likelihood of this happening to around 50% from 90%+, but does not eliminate it. Setting maximumDrawableCount to 2 or 3 does not seem to make a difference. By dumping the resource IDs of the returned textures, I've noticed something interesting: before resizing, the layer seems to shuffle between 2 textures, or at least 2 resource IDs, but after resizing it starts to create a new texture for each returned drawable. Occasionally it seems to reuse a previous resource ID, but that doesn't appear to correlate with whether the method returns quickly or not. Why does this happen, and how can I fix it? Should I create a new CAMetalLayer when resizing the window instead of updating drawableSize?
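One mitigation worth trying (a sketch under the assumption that the stalls coincide with live resize, not a confirmed fix for this exact case): switch the layer to transaction-backed presentation while the window is resizing, so drawable presentation stays in lockstep with Core Animation as the size changes.

    import Metal
    import QuartzCore

    // Sketch: present inside the CATransaction during live resize so the
    // new layer size and the new contents land together.
    func present(drawable: CAMetalDrawable,
                 commandBuffer: MTLCommandBuffer,
                 layer: CAMetalLayer,
                 isLiveResizing: Bool) {
        if isLiveResizing {
            layer.presentsWithTransaction = true
            commandBuffer.commit()
            // Wait until the work is scheduled, then present within the
            // current transaction.
            commandBuffer.waitUntilScheduled()
            drawable.present()
        } else {
            layer.presentsWithTransaction = false
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }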
Replies: 3 · Boosts: 0 · Views: 742 · Activity: Jan ’25
SceneView selective draw since concurrency
I have used SceneKit for several years but recently hit a problem where a scene with fewer than 50 nodes is partially drawn (some nodes are, some aren't), while scenes with more than 50 nodes always draw correctly. This seems to have started when concurrency was introduced. (With respect to concurrency, I had been using DispatchQueue successfully before then.) Since all nodes (few or many) are constructed and implemented by the same functions, I'm baffled. When I print the node hierarchy, all nodes are present whether there are few or many. SceneView() has the [.rendersContinually] option selected, and every node created (few or many) has .opacity = 1.0 and .isHidden = false. I haven't tried setting back the compiler version, as that is not a long-term solution, and I know the same code worked fine before.
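Not a diagnosis, but since the symptom appeared alongside Swift concurrency adoption, one thing worth ruling out is scene-graph mutation happening off the main actor. A sketch of constraining node insertion (buildNodes is a hypothetical stand-in for the existing construction functions):

    import SceneKit

    // Sketch: keep scene-graph mutations on the main actor rather than on
    // whatever executor an async context happens to run on.
    @MainActor
    func addNodes(_ nodes: [SCNNode], to scene: SCNScene) {
        for node in nodes {
            scene.rootNode.addChildNode(node)
        }
    }

    // From an async context:
    // let nodes = await buildNodes()      // construction can stay off-main
    // await addNodes(nodes, to: scene)    // mutation hops to the main actor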
Replies: 8 · Boosts: 0 · Views: 719 · Activity: Feb ’25
Xcode Vulkan is opening two windows instead of one.
I'm a newbie at Vulkan and Xcode. I have my project on GitHub: https://github.com/flocela/OrangeSpider/ Whenever I run, two windows open instead of only one. I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder. This is my first time adding testing to an Xcode project, so I think this may be where the problem is. I also get this error message: ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}
Replies: 4 · Boosts: 0 · Views: 153 · Activity: Jul ’25
Request: API to temporarily defer Reachability for fullscreen game swipes (bottom edge)
In swipe-driven games, a first downward swipe starting near the home indicator can trigger Reachability, even when using preferredScreenEdgesDeferringSystemGestures = .bottom and prefersHomeIndicatorAutoHidden = true. This causes the app to slide down to half screen and breaks gameplay. Please consider an API to temporarily defer Reachability while a custom gesture is active (similar to the existing system gesture deferral), without disabling accessibility globally.

Environment: iPhones with a home indicator (Face ID devices).

Why this matters: bottom-origin swipes are core to many games (flick shots, slingshot, physics toss, bottom sheets). Current workarounds degrade UX and discoverability, and players still accidentally trigger Reachability.

Feedback Assistant Post
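For context, the existing deferral hooks the post refers to look like this in a game's view controller (this defers the system edge gesture, but per the report it does not reliably suppress Reachability on a first swipe):

    import UIKit

    final class GameViewController: UIViewController {
        // Ask the system to give our touches priority on the bottom edge.
        override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge { .bottom }
        override var prefersHomeIndicatorAutoHidden: Bool { true }

        override func viewDidLoad() {
            super.viewDidLoad()
            // Required whenever the values above change at runtime.
            setNeedsUpdateOfScreenEdgesDeferringSystemGestures()
            setNeedsUpdateOfHomeIndicatorAutoHidden()
        }
    }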
Replies: 2 · Boosts: 0 · Views: 653 · Activity: Aug ’25
Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. I have two questions regarding the newly updated APIs.

From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured from the iPhone LiDAR sensor. I want to know how to take the point cloud data captured during ObjectCaptureSession on iPhone and use it to create 3D models with PhotogrammetrySession on macOS. From the WWDC21 example code, I know that PhotogrammetrySession uses the depth map from captured photos, embedded in the HEIC image, to create a 3D asset. I would like to know whether point cloud data is also embedded into the image for use during 3D reconstruction and, if not, how else the point cloud data is passed in.

Another question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know if this point cloud data is the same set of data captured during ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView.

Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
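For the second question, the server-side request looks roughly like this; a hedged sketch (the input path is hypothetical, and .pointCloud availability should be verified against the current macOS SDK):

    import Foundation
    import RealityKit

    // Sketch: request a point cloud from a folder of captured images.
    func requestPointCloud() async throws {
        let imagesURL = URL(fileURLWithPath: "/tmp/Captures", isDirectory: true)
        let session = try PhotogrammetrySession(input: imagesURL)

        try session.process(requests: [.pointCloud])

        for try await output in session.outputs {
            switch output {
            case .requestComplete(_, let result):
                if case .pointCloud(let cloud) = result {
                    print("Received point cloud: \(cloud)")
                }
            case .requestError(let request, let error):
                print("\(request) failed: \(error)")
            case .processingComplete:
                return
            default:
                break
            }
        }
    }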
Replies: 6 · Boosts: 0 · Views: 2.2k · Activity: Jul ’25
SceneKit
Hello there. Currently, I'm attempting to create an interactive learning application with a 3D view. I've discovered the SceneKit framework, but I lack the knowledge needed to load, animate, and move objects. Could someone kindly suggest some good articles or tutorials on this topic?
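As a starting point, the core pattern (load a scene, find a node, run an action) fits in a few lines; a minimal sketch where the asset and node names are placeholders:

    import SceneKit

    // Load a scene file from the bundle, find a node by name, animate it.
    let scene = SCNScene(named: "scene.scn")!                  // placeholder asset
    let ship = scene.rootNode.childNode(withName: "ship",      // placeholder name
                                        recursively: true)!

    // Move the node up while spinning it forever.
    ship.runAction(.group([
        .moveBy(x: 0, y: 2, z: 0, duration: 1.5),
        .repeatForever(.rotateBy(x: 0, y: .pi, z: 0, duration: 2))
    ]))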
Replies: 2 · Boosts: 0 · Views: 579 · Activity: Feb ’25
metal-cpp syntax for MTL::Buffer float2 parameter
I'm trying to pass a buffer of float2 items from the CPU to the GPU. In the kernel, I can declare a parameter for the buffer, for example device const float2* values. How do I specify float2 as the type for the MTL::Buffer? I managed to get the code to work by "cheating": defining a simple class that has the same data members as a float2, but there is probably a better way.

    class Coord_f {
    public:
        float x{0.0f};
        float y{0.0f};
    };

then allocating like this:

    NS::TransferPtr(device->newBuffer(n_elements * sizeof(Coord_f), MTL::ResourceStorageModeManaged))

The headers for metal-cpp do not appear to define vector types like float2, but I'm doubtless missing something. Thanks.
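metal-cpp itself doesn't define vector types; in C++ the usual answer is simd::float2 from <simd/simd.h>, which matches the size and alignment Metal expects for float2. For comparison, the same sizing pattern in Swift host code (a sketch; the stride arithmetic is the point):

    import Metal

    // SIMD2<Float> has the same 8-byte stride as float2 in the Metal
    // shading language, so it is safe for sizing and filling the buffer.
    let device = MTLCreateSystemDefaultDevice()!
    let values: [SIMD2<Float>] = [SIMD2(0, 0), SIMD2(1, 0.5), SIMD2(-1, 2)]

    let buffer = device.makeBuffer(
        bytes: values,
        length: values.count * MemoryLayout<SIMD2<Float>>.stride,
        options: .storageModeShared
    )!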
Replies: 2 · Boosts: 0 · Views: 648 · Activity: Jan ’25
GKMatch.chooseBestHostingPlayer(_:) always returns nil player
I'm building a game with a client-server architecture. Using GKMatch.chooseBestHostingPlayer(_:) rarely works. When I started testing it today, it worked once at the very beginning; since then it always succeeds on one client and returns nil on the other. I'm testing with a Mac and an iPhone. Sometimes it fails on the Mac, sometimes on the iPhone. On the device where it succeeds, the provided host can be the device itself or the other one. I created FB9583628 in August 2021, but after the Feedback Assistant team replied that they were not able to reproduce it, the feedback never went forward.

    import SceneKit
    import GameKit

    #if os(macOS)
    typealias ViewController = NSViewController
    #else
    typealias ViewController = UIViewController
    #endif

    class GameViewController: ViewController, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
        var match: GKMatch?
        var matchStarted = false

        override func viewDidLoad() {
            super.viewDidLoad()
            GKLocalPlayer.local.authenticateHandler = authenticate
        }

        private func authenticate(_ viewController: ViewController?, _ error: Error?) {
            #if os(macOS)
            if let viewController = viewController {
                presentAsSheet(viewController)
            } else if let error = error {
                print(error)
            } else {
                print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
                let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
                viewController.matchmakerDelegate = self
                GKDialogController.shared().present(viewController)
            }
            #else
            if let viewController = viewController {
                present(viewController, animated: true)
            } else if let error = error {
                print(error)
            } else {
                print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
                let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
                viewController.matchmakerDelegate = self
                present(viewController, animated: true)
            }
            #endif
        }

        private func defaultMatchRequest() -> GKMatchRequest {
            let request = GKMatchRequest()
            request.minPlayers = 2
            request.maxPlayers = 2
            request.defaultNumberOfPlayers = 2
            request.inviteMessage = "Ciao!"
            return request
        }

        func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
            print("cancelled")
        }

        func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
            print(error)
        }

        func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
            self.match = match
            match.delegate = self
            startMatch()
        }

        func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
            print("\(player.gamePlayerID) changed state to \(String(describing: state))")
            startMatch()
        }

        func startMatch() {
            let match = match!
            if matchStarted || match.expectedPlayerCount > 0 {
                return
            }
            print("starting match with local player \(GKLocalPlayer.local.gamePlayerID) and remote players \(match.players.map({ $0.gamePlayerID }))")
            match.chooseBestHostingPlayer { host in
                print("host is \(String(describing: host?.gamePlayerID))")
            }
        }
    }
Replies: 4 · Boosts: 0 · Views: 342 · Activity: Apr ’25
How can I get pixel coordinates in the fragment tile function?
In this video, tile fragment shading is recommended for image processing. In that example, the unpack function takes two arguments, one of which is RasterizerData. As I understand it, this is the data passed from the previous (vertex) stage of the graphics pipeline. However, the properties of MTLTileRenderPipelineDescriptor do not include an option for specifying a vertex function. Therefore, in this render pass a mix of commands is used: first a draw command is executed to obtain UV coordinates, and then threads are dispatched. My question is: without using a draw command, only dispatch, how can I get pixel coordinates in the fragment tile function? For the kernel tile function, everything is clear.

    typedef struct {
        float4 OPTexture        [[ color(0) ]];
        float4 IntermediateTex  [[ color(1) ]];
    } FragmentIO;

    fragment FragmentIO Unpack(RasterizerData in [[ stage_in ]],
                               texture2d<float, access::sample> srcImageTexture [[ texture(0) ]])
    {
        FragmentIO out;
        // ...
        // Run necessary per-pixel operations
        out.OPTexture = // assign computed value;
        out.IntermediateTex = // assign computed value;
        return out;
    }
Replies: 1 · Boosts: 0 · Views: 166 · Activity: Mar ’25