Our application uses the ScreenCaptureKit API for screen recording.
But starting with macOS Sequoia, we are getting an additional permission dialog which states "… is requesting to bypass the system private window picker and directly access your screen and audio".
It seems we need to add our app under System Settings -> Privacy & Security -> Remote Desktop to avoid getting the above dialog every few days.
Some places mention a .plist entry that, if present, makes the app request this permission, but it did not seem to work, or we do not understand it properly yet.
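One reading of the ".plist" advice is that it refers to the com.apple.developer.persistent-content-capture entitlement (the same one linked further down this page), which Apple grants on request and which is meant for remote-desktop style apps that need persistent capture without repeated prompts. If so, the entry would go into the app's entitlements file, roughly like this (illustrative only, and it has no effect until Apple has granted the entitlement to the account/provisioning profile):

<key>com.apple.developer.persistent-content-capture</key>
<true/>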
ScreenCaptureKit
ScreenCaptureKit brings high-performance screen capture, including audio and video, to macOS.
There are different kinds of screen-sharing applications, all using different APIs.
The API used by AnyDesk or TeamViewer, for example, doesn't require light signals.
I wonder if this is more appropriate for a corporate application, i.e. MDM.
Could a screen-sharing application be created and validated by Apple to display no light signals, and to access the user's screen whenever the person wants after an initial acceptance?
In other words, the user accepts to share his screen once, but won't be asked to accept again the next time.
Or is this impossible on iOS?
I'd be honored to have some answers
Is there a way to distinguish physical mouse/keyboard input from remote control mouse/keyboard input on Mac? Or even better, is there a way to detect if my Mac is being remotely controlled?
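A minimal sketch of one heuristic (not a definitive answer): a listen-only event tap that logs which process posted each event. This assumes that physical HID input reports a source process ID of 0 while events synthesized by a remote-control tool report that tool's PID, which is a guess about behavior rather than a documented contract; it also requires the Input Monitoring permission.

import CoreGraphics
import Foundation

let mask = (1 << CGEventType.keyDown.rawValue) |
           (1 << CGEventType.leftMouseDown.rawValue) |
           (1 << CGEventType.mouseMoved.rawValue)

let tap = CGEvent.tapCreate(
    tap: .cgSessionEventTap,
    place: .headInsertEventTap,
    options: .listenOnly,
    eventsOfInterest: CGEventMask(mask),
    callback: { _, type, event, _ in
        // For synthetic events this field carries the posting process's PID;
        // comparing it against 0 (hardware input) is the heuristic sketched here.
        let sourcePID = event.getIntegerValueField(.eventSourceUnixProcessID)
        print("event type \(type.rawValue) posted by pid \(sourcePID)")
        return Unmanaged.passUnretained(event)
    },
    userInfo: nil
)

if let tap = tap {
    let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
    CGEvent.tapEnable(tap: tap, enable: true)
    CFRunLoopRun()
}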
I have an SCStreamDelegate for capturing frames from applications. On recent point releases of macOS Sonoma, I've noticed that the stream is being cancelled with no user action taken. I started trying to debug it, and when my on-error method is called, the error parameter being passed is null:
func stream(_ stream: SCStream, didStopWithError error: Error) {
    /* The debugger shows this, and segfaults if I try to print "\(error)":
       error (Error)
       > error = (Builtin.RawPointer) 0x0
    */
}
From what I can tell, error should be a valid NSError so I can check the error code, based on similar code I've seen in, for example OBS (https://github.com/obsproject/obs-studio/blob/265239d4174f8d291b0de437088c5b78f8e27687/plugins/mac-capture/mac-sck-common.m#L29)
Usually when this happens, the menu bar icon for screen sharing (where I would click to change the shared window, etc.) stays there even after my app has closed and no apps are doing any sharing.
Has anyone come across this before? Am I misinterpreting what the debugger is saying about the error parameter?
I'm running macOS 14.7.3, but I just updated from 14.7.2 earlier and had basically the same issue on both macOS versions.
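For reference, this is roughly how I'd expect the error to be inspected (similar to the OBS code linked above), assuming it bridges to a valid NSError, which is exactly what doesn't seem to happen here:

func stream(_ stream: SCStream, didStopWithError error: Error) {
    // Normally this bridges to an NSError in the ScreenCaptureKit error domain,
    // e.g. the "user stopped" code when capture is ended from the menu bar item.
    let nsError = error as NSError
    print("stream stopped: domain=\(nsError.domain) code=\(nsError.code)")
}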
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be dragged and dropped into other apps instantly. So far, so good, but some apps, like Slack or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, to instantly bake the two individual audio tracks together into one and offer that new file as the drag-and-drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
For a quick benchmark, with a 3'40'' movie, step 1 takes ~1.7 seconds and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and it takes around 32 seconds that way, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
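For context, step 1 in isolation looks roughly like this (a simplified sketch with illustrative names, not my actual code; the AppleM4A preset drops the video and mixes all enabled audio tracks down to a single AAC track, and an AVMutableAudioMix can be attached to adjust per-track volumes first):

import AVFoundation

func exportMixedAudio(from recordingURL: URL,
                      to mixedAudioURL: URL,
                      completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: recordingURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        completion(NSError(domain: "ExportSketch", code: -1, userInfo: nil))
        return
    }
    export.outputURL = mixedAudioURL
    export.outputFileType = .m4a
    export.exportAsynchronously {
        completion(export.error)  // nil on success; the ~1.7 s mentioned above is the time until this fires
    }
}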
So, my question is, is there a faster way?
The best idea I can come up with right now is this: when initially recording the screen with system and microphone audio as separate tracks, also record both of them into a third, muted, "hidden" track I could use later on. That would basically eliminate step 1; I'd just rip the two single audio tracks out of the movie and keep only the video and the (then unmuted) "hidden" track. But I'd still have a ~1.5 second delay there, plus the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
Hi,
On macOS Sonoma and earlier versions, I used my script along with a LaunchDaemon to continuously record my screen.
However, after upgrading to macOS Sequoia, the LaunchDaemon no longer works for screen recording. It only works when I use a LaunchAgent.
For my workflow, using a LaunchDaemon and running the ffmpeg process as root is much more efficient than running it as a regular user. Does anyone know how to run a script in the background using a LaunchDaemon on macOS Sequoia?
Here are my LaunchDaemon plist and script:
LaunchDaemon plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key> <string>local.ScreenRecord</string>
<key>Disabled</key> <false/>
<key>RunAtLoad</key> <true/>
<key>KeepAlive</key> <true/>
<key>Nice</key>
<integer>-20</integer>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/usr/local/distrib/record.bash</string>
</array>
</dict>
</plist>
Script:
## !!! START CONFIGURATION !!!
#
FFMPEG_LOC="/usr/local/distrib/ffmpeg"
FFMPEG_INPUT="-hide_banner -f avfoundation -capture_cursor 1 -pixel_format uyvy422"
FFMPEG_VSET="-codec libx264 -r 22 -crf 26 -preset veryfast -b:v 5M -maxrate 7M -bufsize 14M"
TIME_REC="-t 600"
FOLDER_REC="/usr/local/distrib/REC/"
#
## !!! END CONFIGURATION !!!
#
# Find number id monitor
MON_ID=$($FFMPEG_LOC -hide_banner -f avfoundation -list_devices true -i "" 2>&1 | awk -F'[]|[]' '/Capture\ screen/ {print $4}')
#
if [ -n "$MON_ID" ]
then
# Time for name
DATE_REC=$(date +"%m-%d-%Y_%H-%M-%S")
# Number of output video file
OUTPUT_NUM=0
# Full command to record
FFMPEG_FULL_COMMAND=""
for MON_NUM in $MON_ID
do
FFMPEG_FULL_COMMAND=""$FFMPEG_FULL_COMMAND" "$FFMPEG_INPUT" -i "$MON_NUM" "$FFMPEG_VSET" "$TIME_REC" -map "$OUTPUT_NUM" "$FOLDER_REC""$DATE_REC"_Mon"$MON_NUM".mkv"
let OUTPUT_NUM=$OUTPUT_NUM+1
done
$FFMPEG_LOC $FFMPEG_FULL_COMMAND
else
echo "No monitors"
sleep 5
fi
We are having issues with ScreenCaptureKit. Our use case is the following:
We have multiple applications that each start a stream capture using ScreenCaptureKit. It works fine when just one application is streaming, but when starting multiple streams continuously, all streams stop or crash, without ScreenCaptureKit reporting an error back.
Restarting replayd for the user allows us to start streaming again, if the streaming applications are restarted too.
We have built a small test program that we have tested on different Macs running different versions of macOS, with identical results. The test program just calls the 'getShareableContentExcludingDesktopWindows' function multiple times, since this was the simplest way to show the problem.
Our test setup is the following:
Mac Mini M2 macOS 13.6
MacBook Pro M3 macOS 14.4
MacBook Pro M3 macOS 15.1
Code main.m
#import <Foundation/Foundation.h>
#import <ScreenCaptureKit/ScreenCaptureKit.h>
@interface Runner : NSObject
@property(atomic, assign) BOOL keepRunning;
@property(atomic, retain) SCShareableContent* availableWindows;
-(void) updateAvailableWindows;
-(void) notif:(NSNotification*) aNotif;
@end
int main(int argc, const char * argv[]) {
@autoreleasepool {
NSRunLoop* loo = [NSRunLoop mainRunLoop];
Runner* r = [[Runner alloc] init];
[r updateAvailableWindows];
while (r.keepRunning)
[loo runUntilDate:[NSDate distantFuture]];
NSLog(@"Program exit");
}
return 0;
}
@implementation Runner
-(instancetype) init
{
self = [super init];
if(self){
_keepRunning = YES;
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasNotFound" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasFound" object:nil];
}
return self;
}
-(void) updateAvailableWindows
{
@autoreleasepool {
NSLog(@"begin");
[SCShareableContent getShareableContentExcludingDesktopWindows:NO onScreenWindowsOnly:YES completionHandler:^(SCShareableContent* content, NSError* error){
NSLog(@"running");
if(!error){
self.availableWindows = content;
for (__unused SCWindow* aWin in self.availableWindows.windows) {
[NSThread sleepForTimeInterval:0.01];
}
[[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasFound" object:self.availableWindows];
}
else{
[[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasNotFound" object:nil];
}
}];
}
}
-(void) notif:(NSNotification*) aNotif
{
if(aNotif.object)
[self performSelectorOnMainThread:@selector(updateAvailableWindows) withObject:nil waitUntilDone:NO];
else{
NSLog(@"Not Found");
self.keepRunning = NO;
}
}
@end
How to replicate:
Compile and run the program from multiple terminal windows (Terminal must be granted screen recording permission) and notice that the output stops when replayd stops responding (our assumption). Restarting the application does nothing until replayd is also restarted.
Running the application in Xcode gives the following error:
[ERROR] -[RPDaemonProxy fetchShareableContentWithOption:windowID:currentProcess:withCompletionHandler:]_block_invoke:902 error: 4097
This error is not something we have been able to detect in our applications, and since the only workaround is restarting replayd and the applications, catching the error would help us.
I am trying to use ScreenCaptureKit to select this window automatically rather than having the user do it.
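A rough sketch of what selecting a window programmatically with ScreenCaptureKit can look like (assuming the target window can be identified by its owning app's bundle ID and title; both values below are placeholders):

import ScreenCaptureKit

func filterForTargetWindow() async throws -> SCContentFilter? {
    let content = try await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    guard let window = content.windows.first(where: {
        $0.owningApplication?.bundleIdentifier == "com.example.target" &&
        $0.title?.contains("Target Window") == true
    }) else { return nil }
    // A filter built this way can be handed to SCStream or SCScreenshotManager
    // without going through the system window picker.
    return SCContentFilter(desktopIndependentWindow: window)
}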
A few weeks ago I requested the subject entitlement. I'm still waiting for it to be added to our account. Who can I contact, or how can I find out, what's going on with it? I have no correspondence from Apple yet saying it was denied and why.
https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.developer.persistent-content-capture?language=objc
Thank you.
I am a complete newbie when it comes to Swift and macOS development. So apologies, I don't even know what the right thing to search for is.
I have an app which uses ScreenCaptureKit. I had a preview working which showed the different windows available, it initially required me to give my app permissions for screen and system audio recording which I did.
However, now whenever I rebuild the app, it asks for permission again and fails, despite the permission already being given.
1. Open the DingTalk macOS application, start a meeting, and initiate screen sharing.
2. The DingTalk app calls the [SCShareableContent getShareableContentWithCompletionHandler:] API.
3. The API takes more than 40 seconds to return a response.
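A quick way to reproduce the measurement in isolation, using the Swift async counterpart of the same shareable-content query (plain wall-clock timing, nothing DingTalk-specific):

import ScreenCaptureKit

func timeShareableContentFetch() async {
    let start = Date()
    _ = try? await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    print("shareable content fetch took \(Date().timeIntervalSince(start)) s")
}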
I tried running the latest CaptureSample, selected local/canonical HDR, added to screen output, then stopped capture. It is supposed to save the file, but no file is added at all.
It works for SDR, just not for HDR. How do I get HDR working?
Hi, I'm trying to modify the ScreenCaptureKit Sample code by implementing an actor for Metal rendering, but I'm experiencing issues with frame rendering sequence.
My app workflow is:
ScreenCapture -> createFrame -> setRenderData
Metal draw callback -> renderAsync (getData from renderData)
I've added timestamps to verify frame ordering, and I'm also using binary search to insert each frame by timestamp; while the timestamps appear to be in sequence, the actual rendering output seems out of order.
// ScreenCaptureKit sample
func createFrame(for sampleBuffer: CMSampleBuffer) async {
if let surface: IOSurface = getIOSurface(for: sampleBuffer) {
await renderer.setRenderData(surface, timeStamp: sampleBuffer.presentationTimeStamp.seconds)
}
}
class Renderer {
...
func setRenderData(surface: IOSurface, timeStamp: Double) async {
_ = await renderSemaphore.getSetBuffers(
isGet: false,
surface: surface,
timeStamp: timeStamp
)
}
func draw(in view: MTKView) {
Task {
await renderAsync(view)
}
}
func renderAsync(_ view: MTKView) async {
guard await renderSemaphore.beginRender() else { return }
guard let frame = await renderSemaphore.getSetBuffers(
isGet: true, surface: nil, timeStamp: nil
) else {
await renderSemaphore.endRender()
return }
guard let texture = await renderSemaphore.getRenderData(
device: self.device,
surface: frame.surface) else {
await renderSemaphore.endRender()
return
}
guard let commandBuffer = _commandQueue.makeCommandBuffer(),
let renderPassDescriptor = await view.currentRenderPassDescriptor,
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
await renderSemaphore.endRender()
return
}
// Shaders ..
renderEncoder.endEncoding()
commandBuffer.addCompletedHandler() { @Sendable (_ commandBuffer)-> Swift.Void in
updateFPS()
}
// commit frame in actor
let success = await renderSemaphore.commitFrame(
timeStamp: frame.timeStamp,
commandBuffer: commandBuffer,
drawable: view.currentDrawable!
)
if !success {
print("Frame dropped due to out-of-order timestamp")
}
await renderSemaphore.endRender()
}
}
actor RenderSemaphore {
private var frameBuffers: [FrameData] = []
private var lastReadTimeStamp: Double = 0.0
private var lastCommittedTimeStamp: Double = 0
private var activeTaskCount = 0
private var activeRenderCount = 0
private let maxTasks = 3
private var textureCache: CVMetalTextureCache?
init() {
}
func initTextureCache(device: MTLDevice) {
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &self.textureCache)
}
func beginRender() -> Bool {
guard activeRenderCount < maxTasks else { return false }
activeRenderCount += 1
return true
}
func endRender() {
if activeRenderCount > 0 {
activeRenderCount -= 1
}
}
func setTextureLoaded(_ loaded: Bool) {
isTextureLoaded = loaded
}
func getSetBuffers(isGet: Bool, surface: IOSurface?, timeStamp: Double?) -> FrameData? {
if isGet {
if !frameBuffers.isEmpty {
let frame = frameBuffers.removeFirst()
if frame.timeStamp > lastReadTimeStamp {
lastReadTimeStamp = frame.timeStamp
print(frame.timeStamp)
return frame
}
}
return nil
} else {
// Set
let frameData = FrameData(
surface: surface!,
timeStamp: timeStamp!
)
// insert to the right position
let insertIndex = binarySearch(for: timeStamp!)
frameBuffers.insert(frameData, at: insertIndex)
return frameData
}
}
private func binarySearch(for timeStamp: Double) -> Int {
var left = 0
var right = frameBuffers.count
while left < right {
let mid = (left + right) / 2
if frameBuffers[mid].timeStamp > timeStamp {
right = mid
} else {
left = mid + 1
}
}
return left
}
// for setRenderDataNormalized
func tryEnterTask() -> Bool {
guard activeTaskCount < maxTasks else { return false }
activeTaskCount += 1
return true
}
func exitTask() {
activeTaskCount -= 1
}
func commitFrame(timeStamp: Double,
commandBuffer: MTLCommandBuffer,
drawable: MTLDrawable) async -> Bool {
guard timeStamp > lastCommittedTimeStamp else {
print("Drop frame at commit: \(timeStamp) <= \(lastCommittedTimeStamp)")
return false
}
commandBuffer.present(drawable)
commandBuffer.commit()
lastCommittedTimeStamp = timeStamp
return true
}
func getRenderData(
device: MTLDevice,
surface: IOSurface,
depthData: [Float]
) -> (MTLTexture, MTLBuffer)? {
let _textureName = "RenderData"
var px: Unmanaged<CVPixelBuffer>?
let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
guard status == kCVReturnSuccess, let screenImage = px?.takeRetainedValue() else {
return nil
}
CVMetalTextureCacheFlush(textureCache!, 0)
var texture: CVMetalTexture? = nil
let width = CVPixelBufferGetWidthOfPlane(screenImage, 0)
let height = CVPixelBufferGetHeightOfPlane(screenImage, 0)
let result2 = CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
self.textureCache!,
screenImage,
nil,
MTLPixelFormat.bgra8Unorm,
width,
height,
0, &texture)
guard result2 == kCVReturnSuccess,
let cvTexture = texture,
let mtlTexture = CVMetalTextureGetTexture(cvTexture) else {
return nil
}
mtlTexture.label = _textureName
let depthBuffer = device.makeBuffer(bytes: depthData, length: depthData.count * MemoryLayout<Float>.stride)!
return (mtlTexture, depthBuffer)
}
}
Above is my code; could someone point out what might be wrong?
When I try to launch my application from a non-GUI process (a launch daemon), NSWorkspace openApplicationAtURL fails if I run it while my device is on the login screen.
Everything works if someone is logged in, but on the login screen I get the error:
The application “TestApp” could not be launched because a miscellaneous error occurred. with code 256
NSWorkspace* workspace = [NSWorkspace sharedWorkspace];
NSWorkspaceOpenConfiguration* config = [NSWorkspaceOpenConfiguration configuration];
config.createsNewApplicationInstance = YES;
config.activates = NO;
config.promptsUserIfNeeded = NO;
config.addsToRecentItems = NO;
[workspace openApplicationAtURL: appURL
configuration: config
completionHandler:^(NSRunningApplication *app, NSError *error)
{
}];
Sometimes after the third try it works, sometimes not at all.
I tried to use the "open" command; it works on macOS Sequoia, but not on earlier operating systems, where I see this error:
The application cannot be opened for an unexpected reason, error=Error Domain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x600002998120 {Error Domain=OSLaunchdErrorDomain Code=125 "Domain does not support specified action" UserInfo={NSLocalizedFailureReason=Domain does not support specified action}}}
All these problems occur only on the login screen. I'm developing a screen-sharing utility, so I need some way to launch my application on the login screen.
Could someone please help me understand what the recommended way is to launch an application on the login screen?
I understand that there are no delegate methods for this, and that a positive consent from the user to record the screen has to be evaluated via the block parameter.
When the user denies the permission by mistake and then tries again, the alert does not show up. How do I reset the permission and show the alert below again?
I've made a simple command line app that requires Screen recording permission.
When I run it from Xcode, it prompts for permission, and once I allowed it from the settings, it runs well.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <CoreGraphics/CGDisplayStream.h>
int main() {
printf("# Start #\n");
if (CGPreflightScreenCaptureAccess()) {
printf("# Permitted.\n");
} else {
printf("# Not permitted.\n");
if (CGRequestScreenCaptureAccess() == false) {
printf("# CGRequestScreenCaptureAccess() returning false\n");
}
}
size_t output_width = 1280;
size_t output_height = 720;
dispatch_queue_t dq = dispatch_queue_create("com.domain.screengrabber", DISPATCH_QUEUE_SERIAL);
CGError err;
CGDisplayStreamRef sref = CGDisplayStreamCreateWithDispatchQueue(
1,
output_width,
output_height,
'BGRA',
NULL,
dq,
^(
CGDisplayStreamFrameStatus status,
uint64_t time,
IOSurfaceRef frame,
CGDisplayStreamUpdateRef ref
) {
printf("Got frame: %llu, FrameStatus:%d \n", time, status);
}
);
err = CGDisplayStreamStart(sref);
if (kCGErrorSuccess != err) {
printf("Error: failed to start streaming the display. %d\n", err);
exit(EXIT_FAILURE);
}
while (true) {
usleep(1e5);
}
CGDisplayStreamStop(sref);
printf("\n\n");
return 0;
}
Now I want to execute this from terminal, so I went to the build folder and
typed the app name.
cd /Users/klee/Library/Developer/Xcode/DerivedData/ScreenStreamTest-ezddqbkzhndhakadslymnvpowtig/Build/Products/Debug
./ScreenStreamTest
But I am getting the following output without any prompt for permission.
# Start #
# Not permitted.
# CGRequestScreenCaptureAccess() returning false
Error: failed to start streaming the display. 1001
Is there something I need to consider for this type of command-line app?
I've been using CGWindowListCreateImage which automatically creates an image with the size of the captured window.
But SCScreenshotManager.captureImage(contentFilter:configuration:) always creates images with the width and height specified in the provided SCStreamConfiguration. I could set the size explicitly by reading SCWindow.frame or SCContentFilter.contentRect and multiplying the width and height by SCContentFilter.pointPixelScale, but that won't work if I want to keep the window shadow with SCStreamConfiguration.ignoreShadowsSingleWindow = false.
Is there a way and what's the best way to take full-resolution screenshots of the correct size?
import Cocoa
import ScreenCaptureKit
class ViewController: NSViewController {
@IBOutlet weak var imageView: NSImageView!
override func viewDidAppear() {
imageView.imageScaling = .scaleProportionallyUpOrDown
view.wantsLayer = true
view.layer!.backgroundColor = .init(red: 1, green: 0, blue: 0, alpha: 1)
Task {
let windows = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true).windows
let window = windows[0]
let filter = SCContentFilter(desktopIndependentWindow: window)
let configuration = SCStreamConfiguration()
configuration.ignoreShadowsSingleWindow = false
configuration.showsCursor = false
configuration.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
configuration.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
print(filter.contentRect)
let windowImage = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: configuration)
imageView.image = NSImage(cgImage: windowImage, size: CGSize(width: windowImage.width, height: windowImage.height))
}
}
}
I have a macOS app in production, supporting all macOS versions since 10.15 (Catalina) thru Sequoia. One aspect of the app's functionality is to screen capture the entire screen, including all windows.
Starting with Sequoia, my users are receiving a scary system alert saying:
"SomeApp" is requesting to bypass the system private window picker and directly access your screen and audio. This will allow SomeApp to record your screen and system audio, including personal or sensitive information that may be visible or audible.
I have several questions and concerns about this alert. First of all, as a developer, this is frustrating, as I am using documented, long-standing system APIs, and made no change to my code to cause this warning. Second, nothing in my app records audio in any fashion, and yet the user is made to think I am trying to furtively bypass security controls to record audio, which is absolutely false. The alert seems to be due to the screen capture feature, which is one of the main features of the app, which the user explicitly requests and grants permission for.
But to get to the point of the question: is there any definitive documentation anywhere describing exactly which API's trigger this alert? I can't find first-party information from Apple, so I'm kind of guessing in the dark.
Searching the internet for all the info I can find (mostly from blog posts of developers and beta-testers), it seemed like the culprit in my code was probably a call to CGWindowListCreateImage, so I spent some time forking the code paths in my app (since I still support back to 10.15) to use the more modern ScreenCaptureKit APIs on systems that support it. But the alert is still appearing, despite not calling into that API at all.
Is there a way of calling the modern ScreenCaptureKit APIs that also triggers this alert? As an example, I'm using a snippet like this to get the shareable displays I need
do {
try await SCShareableContent.excludingDesktopWindows(
false,
onScreenWindowsOnly: false
)
return true
} catch {
return false
}
Is it possible that this code is triggering the alert because I'm not excluding desktop windows and am asking for all windows?
To sum up, I (and I'm guessing others) could really use some definitive guidelines on exactly which APIs trigger this alert, so that we can migrate and avoid them if possible. Can anyone provide any guidance on this? Thanks in advance!
Our remote access application uses ScreenCaptureKit for capturing the screen in the user context and CGDisplayStream API as a fallback when running in the context of the Login Window.
Environment:
macOS 15.0; macOS 15.1 beta; Xcode 16
The application is authorized by the user for System Screen recording.
The GUI process runs under the root user over the Login Screen.
The calling thread gets stuck on CGDisplayStreamCreateWithDispatchQueue() for exactly 30 seconds. The method returns NULL afterward.
The same code worked fine on Sonoma
I'm running a launch agent in a CI node. The agent is responsible for launching CI build/test jobs. The agent, being the responsible process, has been granted kTCCServiceScreenCapture permission. With this in place I can run /usr/sbin/screencapture during CI test jobs, archiving the visual state of the CI machine if a test fails, which makes it easier to diagnose why the test failed.
However with macOS 15 I get weekly/monthly notifications about the agent being able to record the screen.
The general advice for this is that apps should migrate to ScreenCaptureKit, but I'm using a built-in macOS tool, /usr/sbin/screencapture, so how am I supposed to deal with that?
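For comparison, this is roughly what the ScreenCaptureKit route looks like for a full-display grab, in case the CI step can be moved off /usr/sbin/screencapture (a sketch only; the output path and PNG encoding are illustrative):

import Foundation
import ScreenCaptureKit
import ImageIO
import UniformTypeIdentifiers

func captureMainDisplay(to url: URL) async throws {
    let content = try await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { return }
    let filter = SCContentFilter(display: display, excludingWindows: [])
    let config = SCStreamConfiguration()
    config.width = display.width
    config.height = display.height
    let image = try await SCScreenshotManager.captureImage(
        contentFilter: filter, configuration: config)
    // Write the CGImage out as a PNG.
    guard let dest = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.png.identifier as CFString, 1, nil) else { return }
    CGImageDestinationAddImage(dest, image, nil)
    CGImageDestinationFinalize(dest)
}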