Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity

The delay issue of 4G TCP connection for iPhone 17 in China's mobile network
Reproduction: same SIM card on 4G, same testing location, connected to the same server, debugging the game application through Xcode and watching retransmissions and average round-trip time in the network profile.
iPhone 17, 4G off, Wi-Fi on: all of the above indicators are acceptable.
iPhone 17, 4G on, Wi-Fi off: frequent retransmissions and a very high average round-trip time.
iPhone 14-16, 4G on, Wi-Fi off: all of the above indicators are acceptable.
App: Unity3D project, .NET Framework 4.0, C# Socket.
Many developers on Chinese forums have reported the same issue.
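For anyone who wants to reproduce the round-trip measurement outside Unity, here is a minimal sketch using the Network framework. The echo host, port, and payload protocol are hypothetical placeholders (they are not from the original report), and this only samples one round trip.

```swift
import Foundation
import Network

// Hypothetical echo endpoint; substitute the test server from the report.
let connection = NWConnection(host: "echo.example.com", port: 7777, using: .tcp)
connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }
    let payload = Data(repeating: 0xAB, count: 64)
    let start = Date()
    connection.send(content: payload, completion: .contentProcessed({ error in
        if let error = error {
            print("send failed: \(error)")
            return
        }
        // Wait for the echoed bytes and compute one round-trip sample.
        connection.receive(minimumIncompleteLength: payload.count,
                           maximumLength: payload.count) { _, _, _, _ in
            print("RTT: \(Date().timeIntervalSince(start) * 1000) ms")
        }
    }))
}
connection.start(queue: .main)
RunLoop.main.run()   // keep the command-line sample alive
```

Running the same sample on cellular vs. Wi-Fi (and on an iPhone 16 vs. 17) would isolate whether the latency is specific to the device/network path rather than to the Unity socket stack.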
2
0
766
Oct ’25
Value of type 'SCRecordingOutput' has no member 'delegate'
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit, along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse movement events only) need to be perfectly synced so I can add effects in post editing. I started off by awaiting stream?.startCapture() and then starting my mouse tracking function:

```swift
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
```

But every time I tested, there was a clear inconsistency between the recorded video and the recorded mouse positions. All I want to know is when the recording "actually" started, so that I can start the mouse capture at that same moment. I tried using the delegates, but I haven't been able to set them up properly.

```swift
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine

class CaptureEngine: NSObject, @unchecked Sendable {
    private let logger = Logger()
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    private var recordingOutput: SCRecordingOutput?
    private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
    private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
    private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        // Create the stream output delegate.
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
            self.recordingOutput = recordingOutput
            recordingOutput.delegate = self
            try stream?.addRecordingOutput(recordingOutput)
            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    func stopCapture() async throws {
        do {
            try await stream?.stopCapture()
        } catch {
            logger.error("Failed to stop capture: \(error.localizedDescription)")
            throw error
        }
    }

    func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
        do {
            try await stream?.updateConfiguration(configuration)
            try await stream?.updateContentFilter(filter)
        } catch {
            logger.error("Failed to update the stream session: \(String(describing: error))")
        }
    }

    func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
        try self.stream?.removeRecordingOutput(recordingOutput)
    }
}

// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        let startTime = Date()
        logger.info("Recording output did start recording \(startTime)")
    }

    func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
        logger.info("Recording output did finish recording")
    }

    func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
        logger.error("Recording output failed with error: \(error.localizedDescription)")
    }
}

private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
    private let logger = Logger()

    override init() {
        super.init()
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch outputType {
        case .screen:
            break
        case .audio:
            break
        case .microphone:
            break
        @unknown default:
            logger.error("Encountered unknown stream output type:")
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        logger.error("Stream stopped with error: \(error.localizedDescription)")
    }
}
```

I am getting the error "Value of type 'SCRecordingOutput' has no member 'delegate'", even though I am targeting macOS 15+ (macOS 26 actually) and macOS only. What is the best way to achieve the desired result? Is there another or better way to do it?
1
0
223
Oct ’25
Request: Allow Game Mode to be enabled locally for non-game App Store categories
Hi Apple team, Game Mode was introduced in iOS 18. To activate Game Mode, an app must include specific key-value pairs in its *.plist and be categorized as a "Game" on the App Store. My app (https://apps.apple.com/us/app/voidlink/id6747717070) works primarily as a self-hosted game streaming (PC -> iPhone/iPad) client. Game Mode provides clear benefits in terms of latency and frame rate stability, but it can currently only be activated when running via Xcode or TestFlight. I am an individual iOS developer based in China, where an additional government license is required for apps to be listed under the "Game" category on the App Store. Obtaining such a license is very difficult for independent developers, so my app is categorized under "Utilities" instead. (If I moved the app to the Game category, it would disappear from the Chinese App Store immediately.) Expectation / Suggestion: Please consider making Game Mode available as a local, user-controllable option on iOS 18/26+, such as through a system "App Pool" where users can choose which apps to enable Game Mode for, regardless of App Store category. This would greatly benefit use cases like streaming clients, benchmarking tools, and remote play utilities, without requiring developers to reclassify their apps as "Games" on the App Store.
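For context, the Info.plist side of the current opt-in looks roughly like the fragment below. This assumes GCSupportsGameMode is the documented opt-in key; the "Games" category requirement the post describes is set in App Store Connect, not in the plist, which is exactly why the category restriction bites here.

```xml
<!-- Info.plist fragment (sketch). Assumption: GCSupportsGameMode is the
     opt-in key for Game Mode; the Games App Store category itself is
     configured in App Store Connect, not in this file. -->
<key>GCSupportsGameMode</key>
<true/>
```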
3
1
702
Nov ’25
Request to restore full ICC profile support (LUT-based display profiles) in macOS ColorSync
Dear Apple Color Management Team, I’m a professional visual creator working on color-critical photo and graphic projects using macOS (currently 26.1 Tahoe). In recent macOS releases, LUT-based ICC display profiles (such as XYZ LUT + Matrix types generated by DisplayCAL or professional spectrophotometers) can no longer be installed or activated via ColorSync. This limitation significantly affects professional workflows in photography, graphic design, prepress, and video color grading — fields that rely on precise display profiling. The current workaround (converting LUT profiles to simple shaper/matrix ICC v2) results in less accurate tone response and color reproduction, particularly in the dark range and wide-gamut displays. I kindly request Apple to restore or re-enable the ability to install and use ICC v2/v4 LUT-based display profiles under ColorSync, as was possible on macOS Monterey and Ventura. This would allow professionals to continue using trusted calibration tools such as DisplayCAL, X-Rite i1Profiler, and Calibrite Profiler to achieve accurate color management. macOS is widely used in professional creative industries, and restoring this feature would be a huge help for countless photographers, designers, and colorists. Thank you for your attention and commitment to professional users. Best regards, Richárd Deutsch Professional Photographer https://riccio.hu/ MacBook Pro (M4 Pro, macOS 26.1)
7
0
1.5k
Dec ’25
Looking for some clarification
Was wondering if anyone from Apple could provide some clarification. The gaming studio Epic Games is wondering whether it could distribute the award-winning game Fortnite on macOS again without any retaliation. I know Fortnite being back on macOS would benefit thousands of macOS devs. Hoping to get clarification so Epic could start bringing Fortnite back.
0
1
565
3w
Custom GCController subclass for new hardware?
Hi all, Wondering how I would go about creating a plugin/class to support a new (physical/hardware) device with the Game Controller framework? Between GCVirtualController on iOS and the "KeyboardAndMouseSupport.bundle" I see inside GameController.framework on my Mac, it looks like the framework must be designed to support this, but I can't find any documentation. Thanks!
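For reference, the only public entry point I'm aware of along these lines is GCVirtualController on iOS; a minimal sketch is below. The element layout is just an example, and note that this surfaces an on-screen software controller that apps see as a regular GCController, rather than registering new physical hardware the way the private bundle plugins appear to.

```swift
import GameController

// Sketch: the public API for a software-defined controller on iOS.
// This does not register new physical hardware; it presents an
// on-screen controller that apps receive as a normal GCController.
let configuration = GCVirtualController.Configuration()
configuration.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]

let virtualController = GCVirtualController(configuration: configuration)
virtualController.connect { error in
    if let error = error {
        print("Failed to connect virtual controller: \(error)")
    }
}
```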
1
0
555
3d
How to apply a geometry modifier to a VideoMaterial in RealityKit?
I have a model that uses a video material as the surface shader, and I also need to use a geometry modifier on the material. This seemed like it would be promising (adapted from https://developer.apple.com/wwdc21/10075 ~5m 50s).

```swift
// Did the setup for the video and AVPlayer, eventually leading me to
let videoMaterial = VideoMaterial(avPlayer: avPlayer)

// Assign the material to the entity
entity.model!.materials = [videoMaterial]

// The part shown in WWDC: set up the library and geometry modifier before,
// so now try to map the new custom material to the video material
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}
```

But I get the following error:

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: RealityFoundation.CustomMaterialError.defaultSurfaceShaderForMaterialNotFound

How can I apply a geometry modifier to a VideoMaterial? Or, if I can't do that, is there an easy way to route the AVPlayer video data into the baseColor of CustomMaterial?
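The error suggests that CustomMaterial(from:) needs a default surface shader, which VideoMaterial doesn't expose. One direction to explore, sketched below, is building the CustomMaterial from an explicit Metal surface shader and geometry modifier, then feeding the video frames into it yourself (e.g. via a texture updated from AVPlayerItemVideoOutput or a DrawableQueue). The shader function names here are hypothetical and this is not a confirmed fix for the VideoMaterial case, just a sketch of the explicit-shader path.

```swift
import RealityKit
import Metal

// Sketch: build a CustomMaterial from explicit Metal functions instead of
// deriving it from the VideoMaterial. "videoSurfaceShader" and
// "waveGeometryModifier" are hypothetical functions that would have to
// exist in the app's default Metal library; `entity` is from the post.
guard let device = MTLCreateSystemDefaultDevice(),
      let library = device.makeDefaultLibrary() else {
    fatalError("Metal is unavailable")
}

let surfaceShader = CustomMaterial.SurfaceShader(named: "videoSurfaceShader", in: library)
let geometryModifier = CustomMaterial.GeometryModifier(named: "waveGeometryModifier", in: library)

do {
    let material = try CustomMaterial(surfaceShader: surfaceShader,
                                      geometryModifier: geometryModifier,
                                      lightingModel: .unlit)
    // Video frames would then have to be supplied to the surface shader
    // manually (for example through a texture driven by AVPlayerItemVideoOutput
    // or a DrawableQueue), rather than coming from VideoMaterial automatically.
    entity.model?.materials = [material]
} catch {
    print("Failed to create custom material: \(error)")
}
```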
2
0
1.2k
Dec ’25
Will metal-cpp-extensions headers for AppKit and MetalKit be maintained?
Hi, The metal-cpp distribution appears to only contain headers for Foundation and QuartzCore. The LearnMetalCPP download [1] provides a ZIP with a metal-cpp-extensions directory containing AppKit.hpp and MetalKit.hpp headers. First question: Are these headers distributed anywhere else more publicly? Without these headers, only the renderer can be fully written in C++ as far as I can tell, i.e. no complete C++ NSApplication. Second question: Will these headers, if needed, be maintained (e.g. updated and/or extended) by Apple alongside metal-cpp? [1] https://developer.apple.com/metal/cpp/ Thank you and regards.
2
0
2.1k
Feb ’25
Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. I have two questions regarding the newly updated APIs. From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured from the iPhone LiDAR sensor. I want to know how to take the point cloud data captured during an iPhone ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS. From the example code from WWDC21, I know that PhotogrammetrySession utilizes the depth map from captured photo images by embedding it into the HEIC image, and uses that data to create a 3D asset on macOS. I would like to know whether point cloud data is also embedded into the image for use during 3D reconstruction, and if not, how else the point cloud data is passed in for reconstruction. Another question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know if this point cloud data is the same set of data captured during ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView. Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
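For reference, a minimal macOS-side reconstruction sketch is below. It assumes the images (and whatever depth or point data they embed) from ObjectCaptureSession have been copied into a local folder; the folder and output paths are placeholders, and whether the point cloud rides along inside those images is exactly the open question in the post.

```swift
import Foundation
import RealityKit

// Sketch: run photogrammetry over a folder of images captured on device.
// Paths are placeholders.
let inputFolder = URL(fileURLWithPath: "/tmp/ObjectCaptureImages", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/tmp/model.usdz")

Task {
    do {
        let session = try PhotogrammetrySession(input: inputFolder)
        try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Reconstruction finished: \(outputModel.path)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    } catch {
        print("Photogrammetry failed: \(error)")
    }
}
RunLoop.main.run()   // keep the command-line sample alive while processing runs
```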
6
0
2.3k
Jul ’25
OpenGL ES support on Apple Silicon Simulators
Hey folks, I have a legacy game that runs OpenGL ES, and it no longer works on the simulators that are running on Apple Silicon, i.e. iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max). The apps work fine on the actual devices, but the simulator crashes on any glDrawElements with a stack that looks like the following: I have not yet seen an announcement about this not working, but I've seen mention in other apps of dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351). Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app from which to start to see what might be up. Or move to Metal, but I hadn't bargained for that level of effort atm ;) Any suggestions appreciated!
10
0
2.6k
Mar ’25
vImageConverter_CreateWithCGImageFormat Fails with kvImageInvalidImageFormat When Trying to Convert CMYK to RGB
So I get JPEG data in my app. Previously I was using the higher-level NSBitmapImageRep API and just feeding the JPEG data to it. But now I've noticed on Sonoma that if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and try to use the Accelerate API to convert it to another format (to hopefully work around the issue):

```
CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);
```

Now I use vImageConverter_CreateWithCGImageFormat with the following values for the source and destination formats:

Source format (derived from sourceCGImage):
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault

Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault

But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. Now if I change the destination format to use 32 bitsPerPixel and use alpha in the bitmap info, vImageConverter_CreateWithCGImageFormat does not return an error, but I get a black image, just like NSBitmapImageRep.
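Not the vImage answer, but one workaround worth trying is letting Core Graphics do the CMYK-to-RGB conversion by redrawing the decoded image into an sRGB bitmap context. The sketch below is a minimal Swift version of that idea (a swapped-in technique, not the poster's Accelerate path), assuming the source CGImage itself decoded correctly.

```swift
import CoreGraphics

// Sketch: convert a CMYK CGImage to 8-bit sRGB by redrawing it.
// Core Graphics performs the color conversion during the draw.
func makeRGBImage(from source: CGImage) -> CGImage? {
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let context = CGContext(data: nil,
                                  width: source.width,
                                  height: source.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,                // let CG pick the row stride
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
    else { return nil }
    context.draw(source, in: CGRect(x: 0, y: 0, width: source.width, height: source.height))
    return context.makeImage()
}
```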
14
0
1.4k
Aug ’25
Trying to better understand CGAffineTransform.... and need a bit of guidance.
I have a Core Image pipeline, and one of my steps is to rotate my image about the origin (bottom-left corner) and then translate it. I'm not seeing the behaviour I'm expecting, and I think my problem is in how I'm combining these two steps. As an example, I start with an identity transform:

```
(lldb) po transform333
▿ CGAffineTransform
  - a : 1.0
  - b : 0.0
  - c : 0.0
  - d : 1.0
  - tx : 0.0
  - ty : 0.0
```

I then rotate 1.57 radians (approx. 90 degrees, CCW):

```
transform333 = transform333.rotated(by: 1.57)
  - a : 0.0007963267107332633
  - b : 0.9999996829318346
  - c : -0.9999996829318346
  - d : 0.0007963267107332633
  - tx : 0.0
  - ty : 0.0
```

I understand the current contents of the transform. But then I translate by 10, 10:

```
(lldb) po transform333.translatedBy(x: 10, y: 10)
  - a : 0.0007963267107332633
  - b : 0.9999996829318346
  - c : -0.9999996829318346
  - d : 0.0007963267107332633
  - tx : -9.992033562211013
  - ty : 10.007960096425679
```

I was expecting tx and ty to be 10 and 10. I have noticed that when I reverse the order of these operations, the transform contents look correct, so I'll most likely just perform the steps in what feels to me like the incorrect order. Is anyone willing/able to point me to an explanation of why the steps I'm performing are giving me these results? thanks, mike
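A short demonstration of what, as far as I understand it, is going on: translatedBy(x:y:) applies the translation in the transform's own (already rotated) coordinate space, i.e. the new translation is concatenated before the existing rotation, so the (10, 10) vector gets rotated by ~90° into roughly (-10, 10). Concatenating the translation after the rotation instead keeps tx/ty at (10, 10).

```swift
import CoreGraphics

let rotation = CGAffineTransform.identity.rotated(by: 1.57)

// Translation expressed in the rotated space: the (10, 10) vector comes out rotated.
let rotateThenTranslate = rotation.translatedBy(x: 10, y: 10)
print(rotateThenTranslate.tx, rotateThenTranslate.ty)   // ≈ -9.992, 10.008

// Concatenating a translation *after* the rotation keeps tx/ty at (10, 10).
let translateAfter = rotation.concatenating(CGAffineTransform(translationX: 10, y: 10))
print(translateAfter.tx, translateAfter.ty)             // 10.0, 10.0
```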
3
0
601
Jan ’25
Why doesn't macOS OpenGL support alpha textures?
I have used Mac M1 and M4 machines running macOS 15.2 and 13.6 to develop OpenGL projects, calling the macOS OpenGL library functions. When I call glTexImage2D with any of the three texture formats GL_LUMINANCE, GL_LUMINANCE_ALPHA, or GL_ALPHA, I get a GL error 500, which prevents me from drawing normally on the Mac. What is the reason for this? Are these formats not supported?
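If that error is GL_INVALID_ENUM (0x0500) and the context is a core-profile one, a likely explanation is that GL_LUMINANCE, GL_LUMINANCE_ALPHA, and GL_ALPHA were removed from the core profile; the usual replacement is a single-channel GL_RED texture plus a texture swizzle. A hedged sketch follows, assuming a 3.2+ core-profile context on macOS; the texture dimensions and data are placeholders.

```swift
import OpenGL.GL3

// Sketch: upload a single-channel texture as GL_RED (core profile)
// and swizzle it so it samples like the old GL_LUMINANCE.
// width, height, and pixels are placeholders for real data.
let width: GLsizei = 256
let height: GLsizei = 256
let pixels = [UInt8](repeating: 128, count: Int(width * height))

var texture: GLuint = 0
glGenTextures(1, &texture)
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RED, width, height, 0,
             GLenum(GL_RED), GLenum(GL_UNSIGNED_BYTE), pixels)

// Replicate the single channel into R, G, and B so shaders that expected
// a luminance texture keep working; alpha stays at 1.
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_SWIZZLE_R), GL_RED)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_SWIZZLE_G), GL_RED)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_SWIZZLE_B), GL_RED)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_SWIZZLE_A), GL_ONE)
```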
2
0
738
Jan ’25
Apple API "CGDisplayCopyAllDisplayModes provides resolution list which does not match with system resolution for the external monitors.
Our application is trying to read all resolutions of an external monitor. We have observed that, for the external monitor there is a mismatch in resolution list in our application and the resolution list in system settings. We are using the apple API "CGDisplayCopyAllDisplayModes" to read the resolutions.
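A minimal sketch of one thing worth checking: by default CGDisplayCopyAllDisplayModes omits some modes (notably duplicate low-resolution / scaled ones), and my understanding is that passing the kCGDisplayShowDuplicateLowResolutionModes option returns the fuller list that System Settings shows.

```swift
import CoreGraphics

// Sketch: request the full mode list, including duplicate low-resolution
// (scaled) modes that are hidden by default.
let displayID = CGMainDisplayID()   // substitute the external display's ID
let options = [kCGDisplayShowDuplicateLowResolutionModes: true] as CFDictionary

if let modes = CGDisplayCopyAllDisplayModes(displayID, options) as? [CGDisplayMode] {
    for mode in modes {
        print("\(mode.width) x \(mode.height) @ \(mode.refreshRate) Hz")
    }
}
```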
1
0
575
Mar ’25
JPEG2000 (JP2) Decoding Works on iOS 16 but Fails on iOS 18
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it. On iOS 16, the following code works fine:

```swift
import ImageIO
import UIKit

func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

However, on iOS 18, this fails with errors like:

```
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
```

Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?

Additional info: The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 6A 50 ...). The image works on iOS 16 but fails on iOS 18. Tested on an iPhone running the iOS 18.0 beta. Any insights on how to handle JPEG2000 decoding on iOS 18 would be greatly appreciated! 🚀
3
0
372
Mar ’25
Fullscreen detection using Core Graphics
Hi, I am trying to detect whether all the screens are in fullscreen mode. The current approach is to get all windows' information from CGWindowListCopyWindowInfo and then compare each window's frame and coordinates with the frames of NSScreen.screens. However, there is a problem: the y position of the window seems to be relative to the screen. As it is not an absolute position, I cannot compare it with the coordinates of the screen. Does anyone know if there is other information I can use? Or is there a way to retrieve the absolute position or screen ID from the CGWindow dictionary?
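A minimal sketch of one way to line the two coordinate systems up, under the assumption that the kCGWindowBounds rectangles use a top-left origin relative to the main display while NSScreen frames use a bottom-left origin: flipping the window rect against the main screen's height makes the two directly comparable. The exact-equality check at the end is simplified for illustration.

```swift
import AppKit
import CoreGraphics

// Sketch: fetch on-screen window bounds and flip them into the
// bottom-left-origin coordinate space used by NSScreen.frame.
func fullscreenWindowFrames() -> [NSRect] {
    guard let mainScreenHeight = NSScreen.screens.first?.frame.height,
          let windowInfo = CGWindowListCopyWindowInfo([.optionOnScreenOnly, .excludeDesktopElements],
                                                      kCGNullWindowID) as? [[String: Any]]
    else { return [] }

    var matches: [NSRect] = []
    for info in windowInfo {
        guard let boundsDict = info[kCGWindowBounds as String] as? [String: Any],
              let raw = CGRect(dictionaryRepresentation: boundsDict as CFDictionary)
        else { continue }

        // Flip y so the rect is in Cocoa (bottom-left origin) coordinates.
        let flipped = CGRect(x: raw.origin.x,
                             y: mainScreenHeight - raw.origin.y - raw.height,
                             width: raw.width,
                             height: raw.height)

        // A window covering an entire screen counts as "fullscreen" on that screen.
        if NSScreen.screens.contains(where: { $0.frame == flipped }) {
            matches.append(flipped)
        }
    }
    return matches
}
```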
2
0
157
Apr ’25
My game is very large when I upload it to the App Store
I have a game that I uploaded to the App Store. The uploaded size is 3 GB, but when I download it, the real size shown is about 100 MB. Uploading the game to Google Play reports the real size, so I think the problem happens when the build comes out of Xcode. Maybe someone can give me a clue about what is going on. The game was made with Unity 2020, if that helps.
1
0
140
Apr ’25
Game Rejected as Spam
My app is being rejected, and all I'm being told is that it is spam. I've tried improving various aspects of the game, but I just receive the same copy-and-paste rejection message each time. I have no idea whether I'm moving in the right direction or which part of my game needs to be changed or improved. Is there a game quality benchmark document or some kind of resource I can use to better understand why my game is being rejected and how to bring it to a level that meets Apple's standards?
1
0
107
Apr ’25