Is there a way to distinguish physical mouse/keyboard input from remote control mouse/keyboard input on Mac? Or even better, is there a way to detect if my Mac is being remotely controlled?
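There is no public "this Mac is being remotely controlled" flag that I know of, but one heuristic is to watch incoming input with an event tap and check where each event came from: events posted by another process (which is how most remote-control tools inject input) generally carry a nonzero source process ID, while physical HID input does not. The Swift sketch below illustrates that assumption; it is only a heuristic, not an official detection API, and the event tap itself needs Accessibility/Input Monitoring approval.
import CoreGraphics
// Hedged sketch: flag keyboard/mouse events that appear to be synthesized by
// another process. Assumption: a nonzero eventSourceUnixProcessID means the
// event was posted by software rather than generated by physical hardware.
let mask = CGEventMask(1 << CGEventType.keyDown.rawValue) |
           CGEventMask(1 << CGEventType.leftMouseDown.rawValue) |
           CGEventMask(1 << CGEventType.mouseMoved.rawValue)
if let tap = CGEvent.tapCreate(
    tap: .cgSessionEventTap,
    place: .headInsertEventTap,
    options: .listenOnly,               // observe only; don't modify events
    eventsOfInterest: mask,
    callback: { _, type, event, _ in
        let senderPID = event.getIntegerValueField(.eventSourceUnixProcessID)
        if senderPID != 0 {
            print("Event type \(type.rawValue) posted by pid \(senderPID); likely synthetic/remote input")
        }
        return Unmanaged.passUnretained(event)
    },
    userInfo: nil
) {
    let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
    CGEvent.tapEnable(tap: tap, enable: true)
    CFRunLoopRun()
}
Note that this only distinguishes software-posted events; remote control that drives the hardware directly (for example a KVM) would not be caught this way.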
ScreenCaptureKit
ScreenCaptureKit brings high-performance screen capture, including audio and video, to macOS.
Hi,
On macOS Sonoma and earlier versions, I used my script along with a LaunchDaemon to continuously record my screen.
However, after upgrading to macOS Sequoia, the LaunchDaemon no longer works for screen recording. It only works when I use a LaunchAgent.
For my workflow, using a LaunchDaemon and running the ffmpeg process as root is much more efficient than running it as a regular user. Does anyone know how to run a script in the background using a LaunchDaemon on macOS Sequoia?
Here are my LaunchDaemon plist and script:
LaunchDaemon plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key> <string>local.ScreenRecord</string>
<key>Disabled</key> <false/>
<key>RunAtLoad</key> <true/>
<key>KeepAlive</key> <true/>
<key>Nice</key>
<integer>-20</integer>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/usr/local/distrib/record.bash</string>
</array>
</dict>
</plist>
Script:
## !!! START CONFIGURATION !!!
#
FFMPEG_LOC="/usr/local/distrib/ffmpeg"
FFMPEG_INPUT="-hide_banner -f avfoundation -capture_cursor 1 -pixel_format uyvy422"
FFMPEG_VSET="-codec libx264 -r 22 -crf 26 -preset veryfast -b:v 5M -maxrate 7M -bufsize 14M"
TIME_REC="-t 600"
FOLDER_REC="/usr/local/distrib/REC/"
#
## !!! END CONFIGURATION !!!
#
# Find number id monitor
MON_ID=$($FFMPEG_LOC -hide_banner -f avfoundation -list_devices true -i "" 2>&1 | awk -F'[]|[]' '/Capture\ screen/ {print $4}')
#
if [ -n "$MON_ID" ]
then
# Time for name
DATE_REC=$(date +"%m-%d-%Y_%H-%M-%S")
# Number of output video file
OUTPUT_NUM=0
# Full command to record
FFMPEG_FULL_COMMAND=""
for MON_NUM in $MON_ID
do
FFMPEG_FULL_COMMAND=""$FFMPEG_FULL_COMMAND" "$FFMPEG_INPUT" -i "$MON_NUM" "$FFMPEG_VSET" "$TIME_REC" -map "$OUTPUT_NUM" "$FOLDER_REC""$DATE_REC"_Mon"$MON_NUM".mkv"
let OUTPUT_NUM=$OUTPUT_NUM+1
done
$FFMPEG_LOC $FFMPEG_FULL_COMMAND
else
echo "No monitors"
sleep 5
fi
We are having issues with ScreenCaptureKit. Our use case is the following:
We have multiple applications that each start a stream capture using ScreenCaptureKit. It works fine when just one application is streaming, but when multiple streams are started in quick succession, all streams stop or crash without ScreenCaptureKit reporting an error back.
Restarting replayd for the user lets us start streaming again, provided the streaming applications are restarted too.
We have built a small test program and run it on different Macs with different versions of macOS, with identical results. The test program just calls getShareableContentExcludingDesktopWindows multiple times, since this was the simplest way to reproduce the problem.
Our test setup is the following:
Mac Mini M2 macOS 13.6
MacBook Pro M3 macOS 14.4
MacBook Pro M3 macOS 15.1
Code main.m
#import <Foundation/Foundation.h>
#import <ScreenCaptureKit/ScreenCaptureKit.h>
@interface Runner : NSObject
@property(atomic, assign) BOOL keepRunning;
@property(atomic, retain) SCShareableContent* availableWindows;
-(void) updateAvailableWindows;
-(void) notif:(NSNotification*) aNotif;
@end
int main(int argc, const char * argv[]) {
@autoreleasepool {
NSRunLoop* loo = [NSRunLoop mainRunLoop];
Runner* r = [[Runner alloc] init];
[r updateAvailableWindows];
while (r.keepRunning)
[loo runUntilDate:[NSDate distantFuture]];
NSLog(@"Program exit");
}
return 0;
}
@implementation Runner
-(instancetype) init
{
self = [super init];
if(self){
_keepRunning = YES;
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasNotFound" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasFound" object:nil];
}
return self;
}
-(void) updateAvailableWindows
{
@autoreleasepool {
NSLog(@"begin");
[SCShareableContent getShareableContentExcludingDesktopWindows:NO onScreenWindowsOnly:YES completionHandler:^(SCShareableContent* content, NSError* error){
NSLog(@"running");
if(!error){
self.availableWindows = content;
for (__unused SCWindow* aWin in self.availableWindows.windows) {
[NSThread sleepForTimeInterval:0.01];
}
[[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasFound" object:self.availableWindows];
}
else{
[[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasNotFound" object:nil];
}
}];
}
}
-(void) notif:(NSNotification*) aNotif
{
if(aNotif.object)
[self performSelectorOnMainThread:@selector(updateAvailableWindows) withObject:nil waitUntilDone:NO];
else{
NSLog(@"Not Found");
self.keepRunning = NO;
}
}
@end
How to replicate:
Compile and run the program from multiple terminal windows (Terminal must be granted Screen Recording permission) and notice that the output stops when replayd stops responding (our assumption). Restarting the application does nothing until replayd is also restarted.
Running the application in Xcode gives the following error:
[ERROR] -[RPDaemonProxy fetchShareableContentWithOption:windowID:currentProcess:withCompletionHandler:]_block_invoke:902 error: 4097
This error is not something we have been able to detect in our applications, and since the only workaround is restarting replayd and the applications, catching the error would help us.
I am trying to use ScreenCaptureKit to select this window automatically rather than having the user do it.
I've made a simple command-line app that requires Screen Recording permission.
When I run it from Xcode, it prompts for the permission, and once I allow it in System Settings, it runs fine.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <CoreGraphics/CGDisplayStream.h>
int main() {
printf("# Start #\n");
if (CGPreflightScreenCaptureAccess()) {
printf("# Permitted.\n");
} else {
printf("# Not permitted.\n");
if (CGRequestScreenCaptureAccess() == false) {
printf("# CGRequestScreenCaptureAccess() returning false\n");
}
}
size_t output_width = 1280;
size_t output_height = 720;
dispatch_queue_t dq = dispatch_queue_create("com.domain.screengrabber", DISPATCH_QUEUE_SERIAL);
CGError err;
CGDisplayStreamRef sref = CGDisplayStreamCreateWithDispatchQueue(
1,
output_width,
output_height,
'BGRA',
NULL,
dq,
^(
CGDisplayStreamFrameStatus status,
uint64_t time,
IOSurfaceRef frame,
CGDisplayStreamUpdateRef ref
) {
printf("Got frame: %llu, FrameStatus:%d \n", time, status);
}
);
err = CGDisplayStreamStart(sref);
if (kCGErrorSuccess != err) {
printf("Error: failed to start streaming the display. %d\n", err);
exit(EXIT_FAILURE);
}
while (true) {
usleep(1e5);
}
CGDisplayStreamStop(sref);
printf("\n\n");
return 0;
}
Now I want to execute this from Terminal, so I went to the build folder and ran the binary by name:
cd /Users/klee/Library/Developer/Xcode/DerivedData/ScreenStreamTest-ezddqbkzhndhakadslymnvpowtig/Build/Products/Debug
./ScreenStreamTest
But I am getting the following output, without any prompt for permission:
# Start #
# Not permitted.
# CGRequestScreenCaptureAccess() returning false
Error: failed to start streaming the display. 1001
Is there something I need to consider for this type of command-line app?
A few weeks ago I requested the subject entitlement, and I'm still waiting for it to be added to our account. Whom can I contact, or how can I find out, what is going on with it? I have received no correspondence from Apple saying it was denied or why.
https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.developer.persistent-content-capture?language=objc
Thank you.
I am a complete newbie when it comes to Swift and macOS development, so apologies; I don't even know what the right thing to search for is.
I have an app which uses ScreenCaptureKit. I had a preview working that showed the different windows available; it initially required me to grant my app permission for screen and system audio recording, which I did.
However, now whenever I rebuild the app it asks for permission again and fails, despite the permission already having been granted.
1. Open the DingTalk macOS application, start a meeting, and initiate screen sharing.
2. The DingTalk app calls the [SCShareableContent getShareableContentWithCompletionHandler:] API.
3. The API takes over 40 seconds to return a response.
I tried running the latest CaptureSample, selected local/canonical HDR, added to screen output, then stopped the capture. It is supposed to save a file, but no file is created at all.
It works for SDR, just not for HDR. How do I get HDR working?
Hi, I'm trying to modify the ScreenCaptureKit sample code by implementing an actor for Metal rendering, but I'm experiencing issues with the frame rendering sequence.
My app workflow is:
ScreenCapture -> createFrame -> setRenderData
Metal draw callback -> renderAsync (getData from renderData)
I've added timestamps to verify frame ordering, and I'm also using a binary search to insert each frame by timestamp. While the timestamps appear to be in sequence, the actual rendering output seems out of order.
// ScreenCaptureKit sample
func createFrame(for sampleBuffer: CMSampleBuffer) async {
if let surface: IOSurface = getIOSurface(for: sampleBuffer) {
await renderer.setRenderData(surface, timeStamp: sampleBuffer.presentationTimeStamp.seconds)
}
}
class Renderer {
...
func setRenderData(surface: IOSurface, timeStamp: Double) async {
_ = await renderSemaphore.getSetBuffers(
isGet: false,
surface: surface,
timeStamp: timeStamp
)
}
func draw(in view: MTKView) {
Task {
await renderAsync(view)
}
}
func renderAsync(_ view: MTKView) async {
guard await renderSemaphore.beginRender() else { return }
guard let frame = await renderSemaphore.getSetBuffers(
isGet: true, surface: nil, timeStamp: nil
) else {
await renderSemaphore.endRender()
return }
guard let texture = await renderSemaphore.getRenderData(
device: self.device,
surface: frame.surface) else {
await renderSemaphore.endRender()
return
}
guard let commandBuffer = _commandQueue.makeCommandBuffer(),
let renderPassDescriptor = await view.currentRenderPassDescriptor,
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
await renderSemaphore.endRender()
return
}
// Shaders ..
renderEncoder.endEncoding()
commandBuffer.addCompletedHandler() { @Sendable (_ commandBuffer)-> Swift.Void in
updateFPS()
}
// commit frame in actor
let success = await renderSemaphore.commitFrame(
timeStamp: frame.timeStamp,
commandBuffer: commandBuffer,
drawable: view.currentDrawable!
)
if !success {
print("Frame dropped due to out-of-order timestamp")
}
await renderSemaphore.endRender()
}
}
actor RenderSemaphore {
private var frameBuffers: [FrameData] = []
private var lastReadTimeStamp: Double = 0.0
private var lastCommittedTimeStamp: Double = 0
private var activeTaskCount = 0
private var activeRenderCount = 0
private let maxTasks = 3
private var textureCache: CVMetalTextureCache?
init() {
}
func initTextureCache(device: MTLDevice) {
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &self.textureCache)
}
func beginRender() -> Bool {
guard activeRenderCount < maxTasks else { return false }
activeRenderCount += 1
return true
}
func endRender() {
if activeRenderCount > 0 {
activeRenderCount -= 1
}
}
func setTextureLoaded(_ loaded: Bool) {
isTextureLoaded = loaded
}
func getSetBuffers(isGet: Bool, surface: IOSurface?, timeStamp: Double?) -> FrameData? {
if isGet {
if !frameBuffers.isEmpty {
let frame = frameBuffers.removeFirst()
if frame.timeStamp > lastReadTimeStamp {
lastReadTimeStamp = frame.timeStamp
print(frame.timeStamp)
return frame
}
}
return nil
} else {
// Set
let frameData = FrameData(
surface: surface!,
timeStamp: timeStamp!
)
// insert to the right position
let insertIndex = binarySearch(for: timeStamp!)
frameBuffers.insert(frameData, at: insertIndex)
return frameData
}
}
private func binarySearch(for timeStamp: Double) -> Int {
var left = 0
var right = frameBuffers.count
while left < right {
let mid = (left + right) / 2
if frameBuffers[mid].timeStamp > timeStamp {
right = mid
} else {
left = mid + 1
}
}
return left
}
// for setRenderDataNormalized
func tryEnterTask() -> Bool {
guard activeTaskCount < maxTasks else { return false }
activeTaskCount += 1
return true
}
func exitTask() {
activeTaskCount -= 1
}
func commitFrame(timeStamp: Double,
commandBuffer: MTLCommandBuffer,
drawable: MTLDrawable) async -> Bool {
guard timeStamp > lastCommittedTimeStamp else {
print("Drop frame at commit: \(timeStamp) <= \(lastCommittedTimeStamp)")
return false
}
commandBuffer.present(drawable)
commandBuffer.commit()
lastCommittedTimeStamp = timeStamp
return true
}
func getRenderData(
device: MTLDevice,
surface: IOSurface,
depthData: [Float]
) -> (MTLTexture, MTLBuffer)? {
let _textureName = "RenderData"
var px: Unmanaged<CVPixelBuffer>?
let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
guard status == kCVReturnSuccess, let screenImage = px?.takeRetainedValue() else {
return nil
}
CVMetalTextureCacheFlush(textureCache!, 0)
var texture: CVMetalTexture? = nil
let width = CVPixelBufferGetWidthOfPlane(screenImage, 0)
let height = CVPixelBufferGetHeightOfPlane(screenImage, 0)
let result2 = CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
self.textureCache!,
screenImage,
nil,
MTLPixelFormat.bgra8Unorm,
width,
height,
0, &texture)
guard result2 == kCVReturnSuccess,
let cvTexture = texture,
let mtlTexture = CVMetalTextureGetTexture(cvTexture) else {
return nil
}
mtlTexture.label = _textureName
let depthBuffer = device.makeBuffer(bytes: depthData, length: depthData.count * MemoryLayout<Float>.stride)!
return (mtlTexture, depthBuffer)
}
}
That's my code above; could someone point out what might be wrong?
When I try to launch my application from a non-GUI process (a launch daemon), NSWorkspace's openApplicationAtURL: fails if I call it while my device is on the login screen.
Everything works if someone is logged in, but on the login screen I get this error:
The application “TestApp” could not be launched because a miscellaneous error occurred. with code 256
NSWorkspace* workspace = [NSWorkspace sharedWorkspace];
NSWorkspaceOpenConfiguration* config = [NSWorkspaceOpenConfiguration configuration];
config.createsNewApplicationInstance = YES;
config.activates = NO;
config.promptsUserIfNeeded = NO;
config.addsToRecentItems = NO;
[workspace openApplicationAtURL: appURL
configuration: config
completionHandler:^(NSRunningApplication *app, NSError *error)
{
}];
Sometimes it works on the third try, sometimes not at all.
I also tried the "open" command; it works on macOS Sequoia, but not on earlier operating systems, where I see this error:
The application cannot be opened for an unexpected reason, error=Error Domain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x600002998120 {Error Domain=OSLaunchdErrorDomain Code=125 "Domain does not support specified action" UserInfo={NSLocalizedFailureReason=Domain does not support specified action}}}
All these problems occur only on the login screen. I'm developing a screen-sharing utility, so I need some way to launch my application on the login screen.
Could someone please help me understand what the recommended way is to launch an application on the login screen?
I'm creating an app that listens to another app's sound; in this use case, screen data is not needed.
But if I don't call SCStream's addStreamOutput(_:type:...) with type .screen, the console shows this error:
[ERROR] _SCStream_RemoteVideoQueueOperationHandlerWithError:701 stream output NOT found. Dropping frame
Currently I'm setting SCStreamConfiguration's minimumFrameInterval to a large value (e.g. 0.1 fps) as a workaround, but it would be good if I could completely disable screen capture for best performance.
Is there any way to disable screen capture and capture only an app's audio?
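For reference, here is roughly what the workaround above looks like in Swift. It is not an official audio-only mode; the tiny output size and the 0.1 fps interval are just assumptions meant to keep the (apparently still required) video path as cheap as possible.
import ScreenCaptureKit
import CoreMedia
// Hedged sketch of the workaround: capture app audio while keeping the video leg minimal.
func makeMostlyAudioConfiguration() -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    config.capturesAudio = true
    config.excludesCurrentProcessAudio = true
    config.sampleRate = 48_000
    config.channelCount = 2
    // Assumption: shrinking the video work is the best available option today.
    config.width = 2
    config.height = 2
    config.minimumFrameInterval = CMTime(value: 10, timescale: 1)   // one frame every 10 s (~0.1 fps)
    return config
}
// Both outputs are still added; the .screen samples are simply ignored:
// try stream.addStreamOutput(handler, type: .screen, sampleHandlerQueue: queue)
// try stream.addStreamOutput(handler, type: .audio, sampleHandlerQueue: queue)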
I understand that there are no delegate methods for this, but a positive consent from the user to record the screen has to be determined via the block parameter.
When the user denies the permission by mistake and then tries again, the alert does not show up. How do I reset the permission and make the alert below appear again?
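As far as I know, the system shows that alert only once per app; afterwards the user has to enable it in System Settings, or during development you can clear the stored decision with tccutil reset ScreenCapture <bundle id>, which makes the prompt appear again. Below is a hedged Swift sketch that checks the current state without relying on the prompt and, if access is missing, deep-links the user to the Screen Recording pane; the x-apple.systempreferences URL string is an assumption based on common usage, not a documented API.
import AppKit
import CoreGraphics
// Hedged sketch: detect the current Screen Recording decision and, if denied,
// send the user to the relevant Settings pane instead of waiting for a prompt
// that will not reappear.
func ensureScreenRecordingAccess() {
    if CGPreflightScreenCaptureAccess() {
        print("Screen Recording already granted")
        return
    }
    // Shows the system prompt only the first time it is ever called for this app.
    if CGRequestScreenCaptureAccess() {
        print("Access granted after prompt")
        return
    }
    // Assumed deep link to Privacy & Security > Screen Recording.
    if let url = URL(string: "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture") {
        NSWorkspace.shared.open(url)
    }
}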
Guys,
In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window shows that the helper is the one requesting the permission.
However, when the helper requests Screen Recording permission, the system modal window shows the main application bundle (which contains the helper) as the requester.
This issue seems to be specific to Ventura; in Monterey both requests are displayed on behalf of the helper.
I'm wondering if this is a known issue or limitation, or if there is a way to make the permission request come specifically from the helper.
I have had no luck compiling the sample code provided by Apple with Xcode 16.0 beta 5.
ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos)
The part that fails is:
streamOutput.capturedFrameHandler = { continuation.yield($0) }
And the error message is
Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses
Could someone explain why this is an issue and how to avoid it?
Thanks in advance!
Our remote access application uses ScreenCaptureKit for capturing the screen in the user context and CGDisplayStream API as a fallback when running in the context of the Login Window.
Environment:
macOS 15.0; macOS 15.1 beta; Xcode 16
The application is authorized by the user for System Screen recording.
The GUI process runs under the root user over the Login Screen.
The calling thread gets stuck in CGDisplayStreamCreateWithDispatchQueue() for exactly 30 seconds, after which the method returns NULL.
The same code worked fine on Sonoma.
I have a macOS app in production, supporting all macOS versions from 10.15 (Catalina) through Sequoia. One aspect of the app's functionality is to screen-capture the entire screen, including all windows.
Starting with Sequoia, my users are receiving a scary system alert saying:
"SomeApp" is requesting to bypass the system private window picker and directly access your screen and audio. This will allow SomeApp to record your screen and system audio, including personal or sensitive information that may be visible or audible.
I have several questions and concerns about this alert. First of all, as a developer, this is frustrating, as I am using documented, long-standing system APIs, and made no change to my code to cause this warning. Second, nothing in my app records audio in any fashion, and yet the user is made to think I am trying to furtively bypass security controls to record audio, which is absolutely false. The alert seems to be due to the screen capture feature, which is one of the main features of the app, which the user explicitly requests and grants permission for.
But to get to the point of the question: is there any definitive documentation anywhere describing exactly which API's trigger this alert? I can't find first-party information from Apple, so I'm kind of guessing in the dark.
Searching the internet for all the info I can find (mostly from blog posts of developers and beta-testers), it seemed like the culprit in my code was probably a call to CGWindowListCreateImage, so I spent some time forking the code paths in my app (since I still support back to 10.15) to use the more modern ScreenCaptureKit APIs on systems that support it. But the alert is still appearing, despite not calling into that API at all.
Is there a way of calling the modern ScreenCaptureKit APIs that also triggers this alert? As an example, I'm using a snippet like this to get the shareable displays I need:
do {
try await SCShareableContent.excludingDesktopWindows(
false,
onScreenWindowsOnly: false
)
return true
} catch {
return false
}
Is it possible that this code is triggering the alert because I'm not excluding desktop windows and am asking for all windows?
To sum up, I (and I'm guessing others) could really use some definitive guidelines on exactly which APIs trigger this alert, so that we can migrate away from them and avoid them if possible. Can anyone provide any guidance on this? Thanks in advance!
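I have not found definitive documentation either, but the alert's wording about the "system private window picker" suggests it is tied to capturing content without going through SCContentSharingPicker. For comparison, here is a hedged sketch of the picker-based flow on macOS 14+; whether adopting it actually suppresses the alert is an assumption on my part, not something confirmed by Apple.
import ScreenCaptureKit
// Hedged sketch: let the system picker choose the content, then receive the
// resulting SCContentFilter through the observer callbacks (macOS 14+).
final class PickerObserver: NSObject, SCContentSharingPickerObserver {
    func contentSharingPicker(_ picker: SCContentSharingPicker, didUpdateWith filter: SCContentFilter, for stream: SCStream?) {
        // Build an SCStream or take a screenshot with `filter` here.
        print("User picked content: \(filter)")
    }
    func contentSharingPicker(_ picker: SCContentSharingPicker, didCancelFor stream: SCStream?) {
        print("User cancelled the picker")
    }
    func contentSharingPickerStartDidFailWithError(_ error: Error) {
        print("Picker failed: \(error)")
    }
}
let observer = PickerObserver()          // keep a strong reference somewhere
let picker = SCContentSharingPicker.shared
picker.add(observer)
picker.isActive = true
picker.present()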
I have a macOS app that uses screen-capture logic. It was originally coded using the Quartz CG API:
if let cgimage = CGDisplayCreateImage(CGMainDisplayID(), rect: cgRect) {...}
and this worked as expected, even when capturing a screen rect that began on the secondary monitor and ended on the primary monitor (or was entirely contained on the secondary monitor).
However, that API is now deprecated and you're supposed to use ScreenCaptureKit instead, so I have attempted to convert the code. The trial code is:
let scConfig = SCStreamConfiguration()
scConfig.sourceRect = drect
scConfig.width = Int(drect.width)
scConfig.height = Int(drect.height)
SCScreenshotManager.captureImage(contentFilter: sFilter, configuration: scConfig) {any,error in
if let cgim = any {
print("image dims \(cgim.width), \(cgim.height), requested: \(drect)")
self.writeToFile2(cgim)
}
else {
print("SCREEN CAP failed")
}
}
...
where sFilter was previously set based on the main screen display (with no exclusions). This code also "works" as long as the capture rect is entirely on the primary monitor, but it fails if the rect spans both monitors or is fully contained on the secondary monitor. (By fails I mean it produces an empty image.)
So my question is: how do I use ScreenCaptureKit to obtain a screenshot of a rectangle that spans dual monitors?
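The only workaround I can think of is that an SCContentFilter (and its sourceRect) is tied to a single display, so a rect that spans monitors has to be captured per display and composited afterwards. Here is a hedged Swift sketch of that idea (macOS 14+); the coordinate conversions, especially the vertical flip and any Retina scaling, are assumptions and may need adjusting.
import ScreenCaptureKit
import CoreGraphics
// Hedged sketch: capture each display's slice of a global rect separately,
// then composite the slices into a single CGImage.
func captureSpanningRect(_ globalRect: CGRect) async throws -> CGImage? {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let context = CGContext(data: nil,
                                  width: Int(globalRect.width), height: Int(globalRect.height),
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) else { return nil }
    for display in content.displays {
        let displayFrame = display.frame                       // global coordinates
        let overlap = displayFrame.intersection(globalRect)
        guard !overlap.isNull, !overlap.isEmpty else { continue }
        let config = SCStreamConfiguration()
        // sourceRect is expressed in the display's own coordinate space (assumed top-left origin).
        config.sourceRect = CGRect(x: overlap.origin.x - displayFrame.origin.x,
                                   y: overlap.origin.y - displayFrame.origin.y,
                                   width: overlap.width, height: overlap.height)
        config.width = Int(overlap.width)
        config.height = Int(overlap.height)
        let filter = SCContentFilter(display: display, excludingWindows: [])
        let piece = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: config)
        // Place the slice at its offset inside the requested rect (flipping to the context's bottom-left origin).
        let dest = CGRect(x: overlap.origin.x - globalRect.origin.x,
                          y: globalRect.maxY - overlap.maxY,
                          width: overlap.width, height: overlap.height)
        context.draw(piece, in: dest)
    }
    return context.makeImage()
}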