In normal gameplay, calling assetBundle.Unload very frequently causes the game's rendering to freeze while the background music keeps playing. This only happens on iPhone 16 and iPhone 17 devices; older devices show no problem at all. How can this be resolved?
Hello,
Question re: iOS RealityView postProcess. I've got a working postProcess kernel and I'd like to add some depth-based effects to it. Theoretically I should be able to just do:
encoder.setTexture(context.sourceDepthTexture, index: 1)
and then in the kernel:
texture2d<float, access::read> depthIn [[texture(1)]]
...
outTexture.write(depthIn.read(gid), gid);
But I consistently see all black rendered to the view. The postProcess shader itself works, so that's not the issue; it just doesn't seem to be receiving actual depth information.
(If I set a breakpoint at the encoder setTexture step, I can preview the color texture of the scene, but the context's depth texture looks all NaN / blank.)
I've looked at all the WWDC samples, but they include ARView for all the depth sample code, which has a different set of configuration options than RealityView. So far I haven't seen anywhere to explicitly tell RealityView "include the depth information". So I'm not sure if I'm missing something there.
It appears that there is indeed a depth texture being passed, but it looks blank.
Is there a working example somewhere that we can reference?
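One low-cost check while waiting for an official sample (a sketch, assuming the same encoder setup as in the snippet above; the exact context type for RealityView may differ): confirm a depth texture is actually being produced each frame, and keep in mind that valid depth values are usually non-linear and clustered near zero, so writing them straight into a color target can look nearly black even when depth is present.
// Hedged debugging sketch: inspect the depth texture before binding it.
// If sourceDepthTexture turns out to be optional in this context, the cast below unwraps it.
if let depth = context.sourceDepthTexture as MTLTexture? {
    print("depth:", depth.pixelFormat.rawValue, depth.width, depth.height, depth.usage.rawValue)
    encoder.setTexture(depth, index: 1)
} else {
    print("no depth texture this frame")
}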
Hi, I'm a beginner with Metal 4 and Model I/O 🥺.
I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen.
Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup.
Here's what I have so far:
https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
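In case it helps while looking at the linked project: the usual Model I/O plus MetalKit pattern is to bind the mesh's vertex buffers once and then issue one indexed draw per submesh. A minimal sketch using the classic MTKMesh API (renderEncoder and mtkMesh are placeholder names, and the Metal 4 encoder types may differ, but the per-submesh loop is the same idea):
// Hedged sketch: draw every submesh of an MTKMesh.
// Assumes the vertex descriptor used to load the mesh matches the pipeline's vertex function.
for (index, vertexBuffer) in mtkMesh.vertexBuffers.enumerated() {
    renderEncoder.setVertexBuffer(vertexBuffer.buffer,
                                  offset: vertexBuffer.offset,
                                  index: index)
}
for submesh in mtkMesh.submeshes {
    renderEncoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                        indexCount: submesh.indexCount,
                                        indexType: submesh.indexType,
                                        indexBuffer: submesh.indexBuffer.buffer,
                                        indexBufferOffset: submesh.indexBuffer.offset)
}
A model that renders fine with one mesh but disappears once it has several submeshes is often missing exactly this second loop, or reusing the first submesh's index buffer for all of them.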
I am using the latest version of the Game Center plugin for Unity and have noticed that my game will crash on launch when trying to authenticate.
I've tried this in an empty project with just the plugin and it still crashes with this exception.
GfxDevice: creating device client; threaded=1; jobified=0
Initializing Metal device caps: Apple A14 GPU
Initialize engine version: 2022.3.62f2 (7670c08855a9)
GameKitException: Code=-7 Domain=GKErrorDomain Description=The operation couldn’t be completed. (GKErrorDomain error -7.) (UnsupportedOperationForOSVersion)
at Apple.GameKit.DefaultNSErrorHandler.ThrowNSError (System.IntPtr nsErrorPtr) [0x00000] in <00000000000000000000000000000000>:0
Rethrow as TypeInitializationException: The type initializer for 'Apple.GameKit.GKGameActivity' threw an exception.
The area in the native code that triggers the crash is this line inside the GKLocalPlayer_SetAuthenticateHandler function:
_onAuthenticate!(tid, _mostRecentAuthenticatePlayer!.passRetainedUnsafeMutablePointer());
I am using Unity 2022.3.62f2 and macOS 15.6 with iOS 18.6.2, which, based on the plugin's minimum requirements, should be within spec.
I've also included this log in case it helps:
terminating due to uncaught exception of type Il2CppExceptionWrapper
Could not import Swift modules for translation unit: failed to get module "GameKitWrapper" from AST context:
error: 'GKErrorCodeExtension.h' file not found
in file included from :1:
error: could not build Objective-C module 'GameKitWrapper'
warning: Ignoring missing VFS file: /Users/james/Library/Developer/Xcode/DerivedData/GameKitWrapper-dzawbtxqdxdviiakfxmfunexppqv/Build/Intermediates.noindex/GameKitWrapper.build/Release-iphoneos/GameKitWrapper-bc72bd3638f4d2956cac9b00e84c1a7d-VFS-iphoneos/all-product-headers.yaml
This is the likely root cause for any subsequent compiler errors.
warning: Ignoring missing VFS file: /Users/bill/Library/Developer/Xcode/DerivedData/GameKitWrapper-dzawbtxqdxdviiakfxmfunexppqv/Build/Intermediates.noindex/GameKitWrapper.build/Release-iphoneos/GameKitWrapper iOS.build/unextended-module-overlay.yaml
This is the likely root cause for any subsequent compiler errors.
warning: TypeSystemSwiftTypeRef::GetNumChildren: had to engage SwiftASTContext fallback for type $syyXCD
I've also attached the script that I am using for authentication, this script runs on the first scene.
GameCenterManager.cs
After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give the new MTL4MachineLearningCommandEncoder a shot and integrate it into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I use to create an MTLLibrary, following the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
    // use `library` to build the ML pipeline state...
} catch {
    print("error: \(error)")
}
With the above code, I am getting error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create a MTLLibrary with .mtlpackage? Do I see this error because the .mtlpackage I am using is incorrect? How should I go with debugging this?
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
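For what it's worth, two things worth double-checking before suspecting the package itself: build the URL as a file URL (URL(string:) on a plain path has no file:// scheme), and confirm the .mtlpackage is copied into the app bundle as a folder reference. A hedged sketch of the loading path, using the same makeLibrary(URL:) call from the session:
// Hedged sketch: load an .mtlpackage as an MTLLibrary from a proper file URL.
guard let packageURL = Bundle.main.url(forResource: "my_model", withExtension: "mtlpackage") else {
    fatalError("my_model.mtlpackage not found in the app bundle")
}
do {
    let library = try metalDevice.makeLibrary(URL: packageURL)
    print("Loaded library with functions: \(library.functionNames)")
} catch {
    print("makeLibrary failed: \(error)")
}
If this still reports "Invalid metal package", the package contents are the more likely culprit than the loading code.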
Hi,
How do I enable multitouch on ARView?
Touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
Use SwiftUI .simultaneousGesture on top of an ARView representable
Position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView
Expected behavior:
ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled.
Here is what I tried, on iOS 26.1 and macOS 26.1:
ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multitouch doesn't work with this setup.
Note that multitouch wouldn't work either if the ARView is presented with a UIViewController instead of SwiftUI.
import RealityKit
import SwiftUI
struct ARViewMultiTouchView: View {
var body: some View {
ZStack {
ARViewMultiTouchRepresentable()
.ignoresSafeArea()
}
}
}
#Preview {
ARViewMultiTouchView()
}
// MARK: Representable ARView
struct ARViewMultiTouchRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> ARView {
let arView = ARViewMultiTouch(frame: .zero)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) { }
}
// MARK: ARView
class ARViewMultiTouch: ARView {
required init(frame: CGRect) {
super.init(frame: frame)
/// Enable multi-touch
isMultipleTouchEnabled = true
cameraMode = .nonAR
automaticallyConfigureSession = false
environment.background = .color(.gray)
/// Disable gesture recognizers to not conflict with touch events
/// But it doesn't fix the issue
gestureRecognizers?.forEach { $0.isEnabled = false }
}
required dynamic init?(coder decoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
/// # Problem
/// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch)
/// But it only fires for one touch at a time (single-touch)
print("Touch began at: \(touch.location(in: self))")
}
}
}
Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a way to make ARView handle multitouch directly, right?
import SwiftUI
import RealityKit
struct MultiTouchOverlayView: View {
var body: some View {
ZStack {
MultiTouchOverlayRepresentable()
.ignoresSafeArea()
Text("Multi touch with overlay view")
.font(.system(size: 24, weight: .medium))
.foregroundStyle(.white)
.offset(CGSize(width: 0, height: -150))
}
}
}
#Preview {
MultiTouchOverlayView()
}
// MARK: Representable Container
struct MultiTouchOverlayRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> UIView {
/// The view that SwiftUI will present
let container = UIView()
/// ARView
let arView = ARView(frame: container.bounds)
arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
arView.cameraMode = .nonAR
arView.automaticallyConfigureSession = false
arView.environment.background = .color(.gray)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
/// The view that will capture touches
let touchOverlay = TouchOverlayView(frame: container.bounds)
touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
touchOverlay.backgroundColor = .clear
/// Pass an arView reference to the overlay for hit testing
touchOverlay.arView = arView
/// Add views to the container.
/// ARView goes in first, at the bottom.
container.addSubview(arView)
/// TouchOverlay goes in last, on top.
container.addSubview(touchOverlay)
return container
}
func updateUIView(_ uiView: UIView, context: Context) {
}
}
// MARK: Touch Overlay View
/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
weak var arView: ARView?
override init(frame: CGRect) {
super.init(frame: frame)
isMultipleTouchEnabled = true
isUserInteractionEnabled = true
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")
for touch in touches {
let location = touch.location(in: self)
/// Hit testing.
/// ARView and Touch View must be of the same size
if let arView = arView {
let entity = arView.entity(at: location)
if let entity = entity {
print("Touched entity: \(entity.name)")
} else {
print("Touched: none")
}
}
}
}
override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
}
}
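One possible middle ground, untested here: instead of overriding the touches methods or adding an overlay, attach a bare UIGestureRecognizer subclass to the ARView. Recognizers receive every touch delivered to their view, so this can observe simultaneous touches without a second view. A sketch under that assumption:
import UIKit
import UIKit.UIGestureRecognizerSubclass // required to override the touches methods
import RealityKit

/// Hedged sketch: a "transparent" recognizer that only observes multi-touch.
class MultiTouchObserver: UIGestureRecognizer {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        for touch in touches {
            print("Touch began at: \(touch.location(in: view))")
        }
    }
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .failed // never claim the touches, so other recognizers keep working
    }
}

// Usage, e.g. in makeUIView(context:):
// let observer = MultiTouchObserver(target: nil, action: nil)
// observer.cancelsTouchesInView = false
// arView.addGestureRecognizer(observer)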
The following minimal snippet SEGFAULTS with SDK 26.0 and 26.1. It won't crash if I remove async from the enclosing function signature, but that's impractical in a real project.
import Metal
import MetalPerformanceShaders
let SEED = UInt64(0x0)
typealias T = Float16
/* Why ran in async context? Because global GPU object,
and async makeMTLFunction,
and async makeMTLComputePipelineState.
Nevertheless, can trigger the bug without using global
@MainActor
let myGPU = MyGPU()
*/
@main
struct CMDLine {
static func main() async {
let ptr = UnsafeMutablePointer<T>.allocate(capacity: 0)
async let future: Void = randomFillOnGPU(ptr, count: 0)
print("Main thread is playing around")
await future
print("Successfully reached the end.")
}
static func randomFillOnGPU(_ buf: UnsafeMutablePointer<T>, count destbufcount: Int) async {
// let (device, queue) = await (myGPU.device, myGPU.commandqueue)
let myGPU = MyGPU()
let (device, queue) = (myGPU.device, myGPU.commandqueue)
// Init MTLBuffer, async let makeFunction, makeComputePipelineState, etc.
let tempDataType = MPSDataType.uInt32
let randfiller = MPSMatrixRandomMTGP32(device: device, destinationDataType: tempDataType, seed: Int(bitPattern:UInt(SEED)))
print("randomFillOnGPU: successfully created MPSMatrixRandom.")
// try await computePipelineState
// ^ Crashes before this could return
// Or in this minimal case, after randomFillOnGPU() returns
// make encoder, set pso, dispatch, commit...
}
}
actor MyGPU {
let device : MTLDevice
let commandqueue : MTLCommandQueue
init() {
guard let dev: MTLDevice = MPSGetPreferredDevice(.skipRemovable),
let cq = dev.makeCommandQueue(),
dev.supportsFamily(.apple6) || dev.supportsFamily(.mac2)
else { print("Unable to get Metal Device! Exiting"); exit(EX_UNAVAILABLE) }
print("Selected device: \(String(format: "%llX", dev.registryID))")
self.device = dev
self.commandqueue = cq
print("myGPU: initialization complete.")
}
}
See FB20916929. Apparently the Objective-C autorelease pool is releasing the wrong address during a context switch (across suspension points). I wonder why such an obvious case hasn't been caught before.
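Until that is fixed, a workaround that seems consistent with the diagnosis above (purely a sketch, not a confirmed fix): create and use the MPS objects inside a synchronous helper wrapped in an explicit autoreleasepool, so no suspension point can occur while autoreleased objects are in flight, and only return to async code afterwards.
// Hedged workaround sketch, assuming the autorelease-across-suspension-point diagnosis above.
static func randomFillOnGPUSync(device: MTLDevice, queue: MTLCommandQueue) {
    autoreleasepool {
        let randfiller = MPSMatrixRandomMTGP32(device: device,
                                               destinationDataType: .uInt32,
                                               seed: Int(bitPattern: UInt(SEED)))
        // make encoder, set pso, dispatch, commit... all before returning to async code
        _ = randfiller
    }
}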
Hello everyone,
I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly as long as captureMicrophone is false.
In this case, I get a valid, playable .mp4 file.
However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off.
I've already added the "Privacy - Microphone Usage Description" (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem.
I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream. Here is how I'm configuring my SCStream:
ScreenRecorder.swift
// This is my main SCStreamConfiguration
private var streamConfiguration: SCStreamConfiguration {
var streamConfig = SCStreamConfiguration()
// ... other HDR/preset config ...
// These are the problem properties
streamConfig.capturesAudio = isAudioCaptureEnabled
streamConfig.captureMicrophone = isMicCaptureEnabled // breaks it if true
streamConfig.excludesCurrentProcessAudio = false
streamConfig.showsCursor = false
if let region = selectedRegion, let display = currentDisplay {
// My region/frame logic (works fine)
let regionWidth = Int(region.frame.width)
let regionHeight = Int(region.frame.height)
streamConfig.width = regionWidth * scaleFactor
streamConfig.height = regionHeight * scaleFactor
// ... (sourceRect logic) ...
}
streamConfig.pixelFormat = kCVPixelFormatType_32BGRA
streamConfig.colorSpaceName = CGColorSpace.sRGB
streamConfig.minimumFrameInterval = CMTime(value: 1, timescale: 60)
return streamConfig
}
And here is how I'm setting up the SCRecordingOutput that writes the file:
ScreenRecorder.swift
private func initRecordingOutput(for region: ScreenPickerManager.SelectedRegion) throws {
let screeRecordingOutputURL = try RecordingWorkspace.createScreenRecordingVideoFile(
in: workspaceURL,
sessionIndex: sessionIndex
)
let recordingConfiguration = SCRecordingOutputConfiguration()
recordingConfiguration.outputURL = screeRecordingOutputURL
recordingConfiguration.outputFileType = .mp4
recordingConfiguration.videoCodecType = .hevc
let recordingOutput = SCRecordingOutput(configuration: recordingConfiguration, delegate: self)
self.recordingOutput = recordingOutput
}
Finally, my CaptureEngine adds these to the SCStream:
CaptureEngine.swift
class CaptureEngine: NSObject, @unchecked Sendable {
private(set) var stream: SCStream?
private var streamOutput: CaptureEngineStreamOutput?
// ... (dispatch queues) ...
func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
let streamOutput = CaptureEngineStreamOutput()
self.streamOutput = streamOutput
do {
stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
// Add outputs for raw buffers (not used for file recording)
try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
// Add the file recording output
try stream?.addRecordingOutput(recordingOutput)
try await stream?.startCapture()
} catch {
logger.error("Failed to start capture: \(error.localizedDescription)")
throw error
}
}
// ... (stopCapture, etc.) ...
}
To summarize: when captureMicrophone is false I get a perfect .mp4 that plays everywhere, but when it's true I get a corrupted video that doesn't play at all.
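In case it helps narrow things down, a hedged configuration sketch (assuming the macOS 15 microphone properties on SCStreamConfiguration): explicitly pick the microphone device, and as a diagnostic try a .mov container to see whether the corruption is specific to the .mp4 muxer.
import ScreenCaptureKit
import AVFoundation

// Hedged sketch, not a confirmed fix.
let streamConfig = SCStreamConfiguration()
streamConfig.capturesAudio = true
streamConfig.captureMicrophone = true
// Assumption: microphoneCaptureDeviceID expects the AVCaptureDevice uniqueID of the mic.
streamConfig.microphoneCaptureDeviceID = AVCaptureDevice.default(for: .audio)?.uniqueID

let recordingConfiguration = SCRecordingOutputConfiguration()
recordingConfiguration.outputFileType = .mov // diagnostic: is the corruption mp4-specific?
recordingConfiguration.videoCodecType = .hevc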
Matchmaking rules
https://developer.apple.com/documentation/gamekit/matchmaking-rules?language=objc
AppStoreConnectApi rules
https://developer.apple.com/documentation/appstoreconnectapi/rules?language=objc
・Environment
Unity 6000.2.2f1
Xcode 16.1
iOS 26
3 iPhones
・AppStoreConnectApi rules
"type": "gameCenterMatchmakingRuleSets",
"id": "f6a88caf-85db-42bf-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "co.mygame.RuleSets.GvERandom34",
"ruleLanguageVersion": 1,
"minPlayers": 3,
"maxPlayers": 4
},
"type": "gameCenterMatchmakingRules",
"id": "6afa68ce-4d2c-496f-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "GameVersion",
"description": "Check Game Version. GvERandom34",
"type": "COMPATIBLE",
"expression": "requests[0].properties.gameVersion == requests[1].properties.gameVersion",
"weight": null
},
"type": "gameCenterMatchmakingQueues",
"id": "7fb645ef-4eca-4510-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "co.mygame.que.GvERandom34",
"classicMatchmakingBundleIds": []
},
・Objective-C Execution code
queueName = "co.mygame.que.GvERandom34"
keyStr = "gameVersion "
valueStr = "1.0"
- (void)MatchQueueParamStr1Start:(NSString*)queueName keyStr:(NSString*)keyStr valueStr:(NSString*)valueStr
{
if (@available(iOS 17.2, tvOS 17.2, macOS 14.2, visionOS 1.1, *)) {
    // supported, continue below
} else {
    DBGLOG(@"MatchQueueParamStr1Start Not support.");
    return;
}
self->_matchMakingFlag = YES;
self->_matchFinishFlag = NO;
self->_myMatch = nil;
GKMatchRequest *req = [[GKMatchRequest alloc] init];
if (@available(iOS 17.2, tvOS 17.2, macOS 14.2, visionOS 1.1, *))
{
req.queueName = queueName;
req.properties = @{keyStr: valueStr};
}
[[GKMatchmaker sharedMatchmaker] findMatchForRequest:req withCompletionHandler: ^(GKMatch *match, NSError *error)
{
if (error)
{
[self SetupErrorInfo:error descriptionText:@"findMatchForRequest"];
}
else if(match)
{
self->_myMatch = match;
self->_myMatch.delegate = self;
}
self->_matchMakingFlag = NO;
self->_matchFinishFlag = YES;
}];
}
・Result
I'm trying to match with three devices. Matchmaking doesn't work; it times out after 5 minutes. What's the problem?
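One detail worth double-checking: the key passed above appears to be "gameVersion " with a trailing space, while the rule expression compares requests[...].properties.gameVersion, so the COMPATIBLE rule may never see a matching property and every pairing would then fail until the 5-minute timeout. A hedged Swift sketch of the equivalent request (assuming the same iOS 17.2+ queue/properties API used in the Objective-C above):
import GameKit

// Hedged sketch: the property key must match the rule expression exactly ("gameVersion", no trailing space).
let request = GKMatchRequest()
if #available(iOS 17.2, tvOS 17.2, macOS 14.2, visionOS 1.1, *) {
    request.queueName = "co.mygame.que.GvERandom34"
    request.properties = ["gameVersion": "1.0"]
}
GKMatchmaker.shared().findMatch(for: request) { match, error in
    if let error {
        print("findMatchForRequest failed: \(error)")
    } else if let match {
        print("Matched, expected player count: \(match.expectedPlayerCount)")
    }
}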
Hi Apple team,
Game Mode was introduced in iOS 18. To activate Game Mode, an app must include specific key-value pairs in its *.plist and be categorized as a "Game" on the App Store.
My app (https://apps.apple.com/us/app/voidlink/id6747717070) works primarily as a self-hosted game streaming (PC->iPhone/iPad) client. Game Mode provides clear benefits in terms of latency and frame rate stability, but it can currently only be activated when running via Xcode or TestFlight.
I am an individual iOS developer based in China, where an additional government license is required for apps to be listed under the "Game" category on the App Store. Obtaining such a license is very difficult for independent developers, so my app is categorized under "Utilities" instead. (If I moved the app to the Games category, it would immediately disappear from the Chinese App Store.)
Expectation / Suggestion:
Please consider making Game Mode available as a local, user-controllable option on iOS 18/26+, such as through a system "App Pool" where users can choose which apps to enable Game Mode for, regardless of App Store category.
This would greatly benefit use cases like streaming clients, benchmarking tools, and remote play utilities, without requiring developers to reclassify their apps as “Games” on App Store.
Hello,
I'm working on a game that features online multiplayer. The game is developed using Unity and Apple Unity plugins.
The "isUnderAge" property restricts the online multiplayer feature. Everything works as expected on all platforms (Mac, iPhone, iPad, AppleTV, and visionPro) except on Macs equipped with an Intel chip.
Using the same iCloud and GameCenter, with no restrictions enabled, "isUnderAge" returns false, as expected, but on Mac equipped with an Intel chip, it returns true.
Is there any restriction or compatibility issue with those chips? Is there a workaround?
Thanks
Hello
XQuartz is an open-source effort to develop a version of the X.Org X Window System (https://www.xquartz.org/), widely used to bring graphical support to applications running in remote servers (usually via SSH).
Since macOS Tahoe, XQuartz fails to refresh properly on window resize (more info here https://github.com/XQuartz/XQuartz/issues/438#issuecomment-3371409500), leading to severe usability issues.
The XQuartz developers are already aware of the issue, but I’m wondering if there’s anything we can do at the OS level to resolve it and restore the usual behavior from before macOS Tahoe.
Thanks,
KiM
Hi,
I have a Unity game, and I need multiple app icons so the game can be recognized properly in different countries.
In other words, is it possible to have an iOS app whose app icon changes based on the device locale/language?
On Android this is possible using the Unity Localization package ("com.unity.localization").
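On the native side this is usually done with alternate app icons rather than localization: declare the extra icons under CFBundleIcons / CFBundleAlternateIcons in Info.plist, then switch at runtime with UIApplication.setAlternateIconName. A sketch of the runtime part, assuming a hypothetical alternate icon named "AppIcon-CN" has been declared (a Unity project would need a small native plugin to call this):
import UIKit

// Hedged sketch: pick an alternate icon based on the device's preferred language.
// "AppIcon-CN" is a hypothetical alternate icon name declared in Info.plist.
func applyLocalizedIconIfNeeded() {
    guard UIApplication.shared.supportsAlternateIcons else { return }
    let wantsChineseIcon = Locale.preferredLanguages.first?.hasPrefix("zh") ?? false
    let targetIcon: String? = wantsChineseIcon ? "AppIcon-CN" : nil // nil restores the primary icon
    guard UIApplication.shared.alternateIconName != targetIcon else { return }
    UIApplication.shared.setAlternateIconName(targetIcon) { error in
        if let error { print("Icon switch failed: \(error)") }
    }
}
Note that switching icons shows a system alert and only affects the Home Screen icon, not the App Store listing.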
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (an 8B Q4_K model decodes at around 9 tokens/second).
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is it strictly enforced by Metal allocations (heaps/buffers), or advisory for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
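For comparing devices, it may help to log the Metal-side numbers directly at startup. A small sketch using MTLDevice properties (this only reports what Metal advises; it doesn't answer whether the limit is strictly enforced):
import Metal

// Hedged sketch: log working-set guidance and current Metal allocations on-device.
if let device = MTLCreateSystemDefaultDevice() {
    let mib = { (bytes: UInt64) in Double(bytes) / 1_048_576.0 }
    print("recommendedMaxWorkingSetSize: \(mib(device.recommendedMaxWorkingSetSize)) MiB")
    print("currentAllocatedSize: \(mib(UInt64(device.currentAllocatedSize))) MiB")
    print("maxBufferLength: \(mib(UInt64(device.maxBufferLength))) MiB")
    print("hasUnifiedMemory: \(device.hasUnifiedMemory)")
}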
Hi,
I can't see RealityKit statistics on Xcode Canvas using:
arView.debugOptions = [.showStatistics]
The statistics only show on a physical device, not Xcode live canvas with #Preview. Testing in Xcode 26.0.1 (17A400) on Tahoe 26.0.1 (25A362).
Use case: I'm using RealityKit as a non-AR 3D engine. Xcode Canvas is useful for live iterations.
Is this expected behavior? How can I see FPS on Xcode canvas? SKView for example shows all debug options on both Xcode Canvas and physical devices.
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse-movement events only) need to be perfectly synced in order to add effects in post editing.
I started off by awaiting stream?.startCapture() and then starting my mouse-tracking function:
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
But every time I tested, there is a clear inconsistency in sync between the recorded video and the recorded mouse positions.
All I want is to know exactly when the recording "actually" starts so that I can begin the mouse capture at that same moment. I tried using the delegates for this, but I haven't been able to set them up properly.
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine
class CaptureEngine: NSObject, @unchecked Sendable {
private let logger = Logger()
private(set) var stream: SCStream?
private var streamOutput: CaptureEngineStreamOutput?
private var recordingOutput: SCRecordingOutput?
private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")
func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
// Create the stream output delegate.
let streamOutput = CaptureEngineStreamOutput()
self.streamOutput = streamOutput
do {
stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
self.recordingOutput = recordingOutput
recordingOutput.delegate = self
try stream?.addRecordingOutput(recordingOutput)
try await stream?.startCapture()
} catch {
logger.error("Failed to start capture: \(error.localizedDescription)")
throw error
}
}
func stopCapture() async throws {
do {
try await stream?.stopCapture()
} catch {
logger.error("Failed to stop capture: \(error.localizedDescription)")
throw error
}
}
func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
do {
try await stream?.updateConfiguration(configuration)
try await stream?.updateContentFilter(filter)
} catch {
logger.error("Failed to update the stream session: \(String(describing: error))")
}
}
func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
try self.stream?.removeRecordingOutput(recordingOutput)
}
}
// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
let startTime = Date()
logger.info("Recording output did start recording \(startTime)")
}
func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
logger.info("Recording output did finish recording")
}
func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
logger.error("Recording output failed with error: \(error.localizedDescription)")
}
}
private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
private let logger = Logger()
override init() {
super.init()
}
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
guard sampleBuffer.isValid else { return }
switch outputType {
case .screen:
break
case .audio:
break
case .microphone:
break
@unknown default:
logger.error("Encountered unknown stream output type:")
}
}
func stream(_ stream: SCStream, didStopWithError error: Error) {
logger.error("Stream stopped with error: \(error.localizedDescription)")
}
}
I am getting the error:
Value of type 'SCRecordingOutput' has no member 'delegate'
even though I am targeting macOS 15+ (macOS 26, actually) and macOS only.
What is the best way to achieve the desired result? Is there another or better way to do it?
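The compile error happens because SCRecordingOutput has no settable delegate property; the delegate is supplied in the initializer, SCRecordingOutput(configuration:delegate:), so the output has to be created where the delegate already exists. Once that is wired up, the recordingOutputDidStartRecording callback you already have is the natural place to timestamp the true start and kick off mouse tracking. A hedged sketch of that callback (mouseTracker and startTracking(withMediaTime:) are hypothetical names, and CACurrentMediaTime needs import QuartzCore):
// Hedged sketch: timestamp the actual start of recording and start mouse tracking from it.
func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
    // CACurrentMediaTime() is mach host time, typically the same clock behind capture
    // sample-buffer timestamps, so it drifts less than Date() for syncing the mouse track.
    let start = CACurrentMediaTime()
    DispatchQueue.main.async { self.mouseTracker?.startTracking(withMediaTime: start) }
    logger.info("Recording actually started at media time \(start)")
}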
I'm experiencing an issue with PDFKit where page.removeAnnotation(annotation) successfully removes the annotation from the page's data structure, but the PDFView no longer updates automatically to reflect the change visually.
Issue Details:
The annotation is removed (verified by checking page.annotations.count)
The PDFView display doesn't refresh to show the removal
This code was working correctly before and suddenly stopped working
No code changes were made on my end
Hi everyone,
I've run into an issue where, on iOS 26, the removeAnnotation method doesn't remove the annotation. This code worked on previous versions (iOS 17 and 18) but suddenly stopped working on iOS 26.
Has anyone faced this issue?
guard let document = await pdfView.document else { return }
for pageIndex in 0..<document.pageCount {
    guard let page = document.page(at: pageIndex) else { continue }
    let annotations = page.annotations
    for annotation in annotations {
        page.removeAnnotation(annotation)
    }
}
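If the removal shows up in page.annotations but not on screen, a workaround worth trying (a sketch, not a confirmed fix for the iOS 26 behavior) is to force PDFView to re-lay-out and redraw after the loop:
// Hedged workaround sketch: make PDFView re-render after removing annotations.
await MainActor.run {
    pdfView.layoutDocumentView()            // re-run PDFView's document layout
    pdfView.setNeedsDisplay(pdfView.bounds) // request a redraw of the visible area
}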
It's a Broadcast Extension issue: on iOS 26.1 beta the extension never launches. After you tap "Start Broadcast" in the system picker, the countdown disappears after 3 seconds and no broadcast starts, so every live-streaming app (and every other non-system app that uses a Broadcast Extension) fails to go live; only the native screen recording to Photos still works. Is this a known regression, or is a new entitlement now required?