I have an app that captures a 32 x 32 rect under the cursor as the user moves it around and sends it to Flutter.
It suffers from major lag.
Instead of getting 30 fps, I get about 7 fps. That is, there are significant gaps between screen grabs.
This is on an Intel (x86_64) Mac mini running macOS 15.7.3 with one display.
flutter: NATIVE: ExplodedView framesIn=2 timeSinceStart=1115.7ms gapSinceLastFrame=838.8ms
flutter: NATIVE: ExplodedView framesIn=4 timeSinceStart=1382.6ms gapSinceLastFrame=149.9ms
flutter: NATIVE: ExplodedView framesIn=5 timeSinceStart=1511.0ms gapSinceLastFrame=128.4ms
flutter: NATIVE: ExplodedView framesIn=7 timeSinceStart=1698.3ms gapSinceLastFrame=102.9ms
flutter: NATIVE: ExplodedView STOP polling totalTime=4482.6ms framesIn=28 framesSent=28 acks=28
Here's a testable excerpt:
import Foundation
import ScreenCaptureKit
import CoreMedia
import CoreVideo
import QuartzCore

final class Test: NSObject, SCStreamOutput, SCStreamDelegate {
    private let q = DispatchQueue(label: "cap.q")
    private var stream: SCStream?
    private var lastFrameAt: CFTimeInterval = 0
    private var frames = 0

    func start() {
        SCShareableContent.getExcludingDesktopWindows(false, onScreenWindowsOnly: true) { content, err in
            guard err == nil, let display = content?.displays.first else {
                print("shareableContent error: \(String(describing: err))"); return
            }
            let filter = SCContentFilter(display: display, excludingWindows: [])

            // 32x32 output, capped at 30 fps, shallow queue.
            let config = SCStreamConfiguration()
            config.showsCursor = false
            config.queueDepth = 1
            config.minimumFrameInterval = CMTime(value: 1, timescale: 30)
            config.pixelFormat = kCVPixelFormatType_32BGRA
            config.width = 32
            config.height = 32
            config.sourceRect = CGRect(x: 100, y: 100, width: 32, height: 32)

            let s = SCStream(filter: filter, configuration: config, delegate: self)
            try! s.addStreamOutput(self, type: .screen, sampleHandlerQueue: self.q)
            self.stream = s
            s.startCapture { startErr in
                print("startCapture err=\(String(describing: startErr))")
            }

            // Optional: move sourceRect at 30 Hz (cursor-follow simulation).
            // Scheduled on the main queue so the timer has a running run loop to
            // fire on; this completion handler isn't guaranteed to be on the main thread.
            DispatchQueue.main.async {
                Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { _ in
                    let c2 = SCStreamConfiguration()
                    c2.showsCursor = false
                    c2.queueDepth = 1
                    c2.minimumFrameInterval = CMTime(value: 1, timescale: 30)
                    c2.pixelFormat = kCVPixelFormatType_32BGRA
                    c2.width = 32
                    c2.height = 32
                    let t = CACurrentMediaTime()
                    c2.sourceRect = CGRect(x: 100 + (sin(t) * 50), y: 100, width: 32, height: 32)
                    s.updateConfiguration(c2) { _ in }
                }
            }
        }
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sb: CMSampleBuffer, of type: SCStreamOutputType) {
        guard type == .screen else { return }
        let now = CACurrentMediaTime()
        let gapMs = (lastFrameAt == 0) ? 0 : (now - lastFrameAt) * 1000
        lastFrameAt = now
        frames += 1
        if frames <= 10 || frames % 60 == 0 {
            print("frames=\(frames) gapMs=\(String(format: "%.1f", gapMs))")
        }
    }
}
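One alternative I'm considering (a sketch only, not verified against the app above): keep a single full-display stream at 30 fps and crop the 32 x 32 region client-side from each delivered frame, so updateConfiguration() is never called per tick. The crop rect is assumed to be tracked from mouse-move events elsewhere, and the CIContext would be created once and reused:
import CoreImage
import CoreMedia

// Sketch: crop the region under the cursor out of a full-display frame.
// Note CIImage's origin is the lower-left corner, so the rect may need
// flipping relative to screen coordinates.
let ciContext = CIContext()

func crop32x32(from sampleBuffer: CMSampleBuffer, at cropRect: CGRect) -> CGImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let frame = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(frame, from: cropRect)
}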
ScreenCaptureKit brings high-performance screen capture, including audio and video, to macOS.
Posts under the ScreenCaptureKit tag:
Issue: Plain Executables Do Not Appear Under “Screen & System Audio Recording” on macOS 26.1 (Tahoe)
Summary
I am investigating a change in macOS 26.1 (Tahoe) where plain (non-bundled) executables that request screen recording access no longer appear under:
System Settings → Privacy & Security → Screen & System Audio Recording
This behavior differs from macOS Sequoia, where these executables did appear in the list and could be managed through the UI. Tahoe still prompts for permission and still allows the executable to capture the screen once permission is granted, but the executable never shows up in the UI list. This breaks user expectations and removes UI-based permission management.
To confirm the behavior, I created a small reproduction project with both:
a plain executable, and
an identical executable packaged inside an .app bundle.
Only the bundled version appears in System Settings.
Observed Behaviour
1. Plain Executable (from my reproduction project)
When running a plain executable that captures the screen:
macOS displays the normal screen-recording permission prompt.
Before granting permission: screenshots show only the desktop background.
After granting permission: screenshots capture the full display.
The executable does not appear under “Screen & System Audio Recording”.
Even when permission is granted manually (e.g., dragging the executable into the pane), the executable still does not appear, which prevents the user from modifying or revoking the permission through the UI.
If the executable is launched from inside another app (e.g., VS Code, Terminal), the parent app appears in the list instead, not the executable itself.
2. Bundled App Version (from the reproduction project)
I packaged the same code into a simple .app bundle (ScreenCaptureApp.app).
When running the app:
The same permission prompt appears.
Pre-permission screenshots show the desktop background.
Post-permission screenshots capture the full display.
The app does appear under “Screen & System Audio Recording”.
This bundle uses the same underlying executable — the only difference is packaging.
Hypothesis
macOS 26.1 (Tahoe) appears to require app bundles for an item to be shown in the Screen Recording privacy UI.
Plain executables:
still request and receive permission,
still function correctly after permission is granted,
but do not appear in the System Settings list.
This may be an intentional change, undocumented behavior, or a regression.
Reproduction Project
The reproduction project includes:
screen_capture.go A simple Go program that captures screenshots in a loop.
screen_capture_executable Plain executable built from the Go source.
ScreenCaptureApp.app/ App bundle containing the same executable.
build.sh Builds both the plain executable and the app bundle.
Permission reset and TCC testing scripts.
The project demonstrates the behavior consistently.
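For anyone who wants a single-file probe without the Go project, a roughly equivalent plain-executable check in Swift (my own sketch, not part of the linked project) uses the CoreGraphics preflight/request APIs to trigger the same prompt and TCC entry:
import CoreGraphics
import Foundation

// Plain-executable probe: preflight Screen Recording access and request it
// if needed. Build with: swiftc probe.swift -o screen_capture_probe
if CGPreflightScreenCaptureAccess() {
    print("Screen Recording access already granted")
} else {
    print("Requesting Screen Recording access (this should trigger the system prompt)...")
    let granted = CGRequestScreenCaptureAccess()
    print("Granted now: \(granted)")
}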
Steps to Reproduce
Plain Executable
Build:
./build.sh
Reset screen capture permissions:
sudo tccutil reset ScreenCapture
Run:
./screen_capture_executable
Before granting: screenshots show desktop only.
Grant permission when prompted.
After granting: full screenshots.
Executable does not appear in “Screen & System Audio Recording”.
Bundled App
Build (if not already built):
./build.sh
Reset permissions (optional):
sudo tccutil reset ScreenCapture
Run:
open ScreenCaptureApp.app
Before granting: screenshots show desktop.
After granting: full screenshots.
App bundle appears in the System Settings list.
Additional Check
I also tested launching the plain executable as a child process of another executable, similar to how some software architectures work.
Result:
Permission prompt appears
Permission can be granted
Executable still does not appear in the UI, even though TCC tracks it internally → consistent with the plain-executable behaviour.
This reinforces that only app bundles are listed.
Questions for Apple
Is the removal of plain executables from “Screen & System Audio Recording” an intentional change in macOS Tahoe?
If so, does Apple now require all screen-recording capable binaries to be packaged as .app bundles for the UI to display them?
Is there a supported method for making a plain executable (launched by a parent process) appear in the list?
If this is not intentional, what is the recommended path for reporting this as a regression?
Files
Unfortunately, I have discovered that the zip file containing my reproduction project can't be uploaded here directly.
Here is a Google Drive link instead: https://drive.google.com/file/d/1sXsr3Q0g6_UzlOIL54P5wbS7yBkpMJ7A/view?usp=sharing
Thank you for taking the time to review this. Any insight into whether this change is intentional or a regression would be very helpful.
How can I allow the popup I encounter while running my UI tests with video recording in GitHub Actions?
Since these tests run on VMs, it's not possible to manually click Allow, and the remote robot cannot interact with OS-level dialogs.
The ScreenCaptureKit sample application (https://developer.apple.com/documentation/screencapturekit/capturing-screen-content-in-macos) uses a filter initially set to capture content from the selected display, excluding only the sample application, and excepting no windows:
private var contentFilter: SCContentFilter {
    var filter: SCContentFilter
    switch captureType {
    case .display:
        guard let display = selectedDisplay else { fatalError("No display selected.") }
        var excludedApps = [SCRunningApplication]()
        // If a user chooses to exclude the app from the stream,
        // exclude it by matching its bundle identifier.
        if isAppExcluded {
            excludedApps = availableApps.filter { app in
                Bundle.main.bundleIdentifier == app.bundleIdentifier
            }
        }
        // Create a content filter with excluded apps.
        filter = SCContentFilter(display: display,
                                 excludingApplications: excludedApps,
                                 exceptingWindows: [])
    // ... (other capture types elided)
    }
    return filter
}
However, if another application uses the legacy NSWindowSharingType NSWindowSharingNone attribute, that application is initially not included in the captured stream. Only by toggling either the "Capture Type" or "Exclude sample from stream" checkbox does the initially hidden application become visible.
Additionally, if the "Stop Capture" button is used followed by "Start Capture", the application using the legacy NSWindowSharingType NSWindowSharingNone attribute is once again hidden from the stream, and is only made visible by toggling either the "Capture Type" or "Exclude sample from stream" checkbox.
Does some additional filter element or other SCStream configuration need to be included to ensure that all applications, regardless of NSWindowSharingType, are captured with ScreenCaptureKit without requiring manual user interaction or filter refreshing? It seems odd that QuickTime screen recording (which uses ScreenCaptureKit) immediately captures an application using the NSWindowSharingNone attribute while the ScreenCaptureKit sample application linked above does not.
See images below showing the stream preview before and after toggling the "Capture Type" or "Exclude sample from stream" checkbox. Images were taken from a QuickTime screen recording during testing.
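For what it's worth, the programmatic equivalent of that checkbox toggle that I've been experimenting with (a sketch only; the parameter names stand in for the sample app's state, and whether this reliably surfaces NSWindowSharingNone windows is exactly what I'm trying to confirm) is to re-fetch SCShareableContent and push a rebuilt filter after capture starts:
import ScreenCaptureKit

// Sketch: rebuild the content filter from freshly enumerated shareable
// content and reapply it to a running stream.
func refreshContentFilter(on stream: SCStream, displayID: CGDirectDisplayID) async throws {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first(where: { $0.displayID == displayID }) else { return }
    let filter = SCContentFilter(display: display, excludingApplications: [], exceptingWindows: [])
    try await stream.updateContentFilter(filter)
}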
Hello everyone,
I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly as long as captureMicrophone is false: in that case, I get a valid, playable .mp4 file.
However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off.
I've already added the "Privacy - Microphone Usage Description" (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem.
I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream. Here is how I'm configuring my SCStream:
ScreenRecorder.swift
// This is my main SCStreamConfiguration
private var streamConfiguration: SCStreamConfiguration {
    var streamConfig = SCStreamConfiguration()
    // ... other HDR/preset config ...

    // These are the problem properties
    streamConfig.capturesAudio = isAudioCaptureEnabled
    streamConfig.captureMicrophone = isMicCaptureEnabled // breaks it if true
    streamConfig.excludesCurrentProcessAudio = false
    streamConfig.showsCursor = false

    if let region = selectedRegion, let display = currentDisplay {
        // My region/frame logic (works fine)
        let regionWidth = Int(region.frame.width)
        let regionHeight = Int(region.frame.height)
        streamConfig.width = regionWidth * scaleFactor
        streamConfig.height = regionHeight * scaleFactor
        // ... (sourceRect logic) ...
    }

    streamConfig.pixelFormat = kCVPixelFormatType_32BGRA
    streamConfig.colorSpaceName = CGColorSpace.sRGB
    streamConfig.minimumFrameInterval = CMTime(value: 1, timescale: 60)
    return streamConfig
}
And here is how I'm setting up the SCRecordingOutput that writes the file:
ScreenRecorder.swift
private func initRecordingOutput(for region: ScreenPickerManager.SelectedRegion) throws {
    let screeRecordingOutputURL = try RecordingWorkspace.createScreenRecordingVideoFile(
        in: workspaceURL,
        sessionIndex: sessionIndex
    )

    let recordingConfiguration = SCRecordingOutputConfiguration()
    recordingConfiguration.outputURL = screeRecordingOutputURL
    recordingConfiguration.outputFileType = .mp4
    recordingConfiguration.videoCodecType = .hevc

    let recordingOutput = SCRecordingOutput(configuration: recordingConfiguration, delegate: self)
    self.recordingOutput = recordingOutput
}
Finally, my CaptureEngine adds these to the SCStream:
CaptureEngine.swift
class CaptureEngine: NSObject, @unchecked Sendable {
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    // ... (dispatch queues) ...

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)

            // Add outputs for raw buffers (not used for file recording)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)

            // Add the file recording output
            try stream?.addRecordingOutput(recordingOutput)

            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }
    // ... (stopCapture, etc.) ...
}
When captureMicrophone is false, I get a perfect .mp4 that plays everywhere; when it's true, I get a corrupted video that doesn't play at all.
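One thing I still need to rule out (a sketch only, not a confirmed cause): whether the file is actually finalized before the stream tears down. Removing the recording output before stopping capture should let SCRecordingOutput finish writing the file; whether this affects the microphone case is unverified:
// Sketch of the stop path I'm testing; `stream` and `recordingOutput` are the
// CaptureEngine properties shown above.
func stopRecording() async throws {
    if let recordingOutput {
        // Let the output finish writing the file before the stream stops.
        try stream?.removeRecordingOutput(recordingOutput)
    }
    try await stream?.stopCapture()
}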
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse-move events only) need to be perfectly synced in order to add effects in post editing.
I started off by awaiting stream?.startCapture() and then starting my mouse tracking function:
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
But every time I test, there is a clear inconsistency in sync between the recorded video and the recorded mouse positions.
All I want is to know when the recording "actually" starts so that I can start the mouse capture at that same moment, so I tried using the delegates, but I have not been able to set them up properly.
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine

class CaptureEngine: NSObject, @unchecked Sendable {
    private let logger = Logger()
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    private var recordingOutput: SCRecordingOutput?
    private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
    private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
    private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        // Create the stream output delegate.
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
            self.recordingOutput = recordingOutput
            recordingOutput.delegate = self // <-- compile error: 'SCRecordingOutput' has no member 'delegate'
            try stream?.addRecordingOutput(recordingOutput)
            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    func stopCapture() async throws {
        do {
            try await stream?.stopCapture()
        } catch {
            logger.error("Failed to stop capture: \(error.localizedDescription)")
            throw error
        }
    }

    func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
        do {
            try await stream?.updateConfiguration(configuration)
            try await stream?.updateContentFilter(filter)
        } catch {
            logger.error("Failed to update the stream session: \(String(describing: error))")
        }
    }

    func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
        try self.stream?.removeRecordingOutput(recordingOutput)
    }
}

// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        let startTime = Date()
        logger.info("Recording output did start recording \(startTime)")
    }

    func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
        logger.info("Recording output did finish recording")
    }

    func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
        logger.error("Recording output failed with error: \(error.localizedDescription)")
    }
}

private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
    private let logger = Logger()

    override init() {
        super.init()
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch outputType {
        case .screen:
            break
        case .audio:
            break
        case .microphone:
            break
        @unknown default:
            logger.error("Encountered unknown stream output type:")
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        logger.error("Stream stopped with error: \(error.localizedDescription)")
    }
}
I am getting the error
Value of type 'SCRecordingOutput' has no member 'delegate'
even though I am targeting macOS 15+ (macOS 26, actually) and macOS only.
What is the best way to achieve the desired result? Is there another or better way to do it?
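For reference, the direction I'm experimenting with (a sketch, not verified end to end): pass the delegate through the SCRecordingOutput initializer, since there is no settable delegate property, and stamp both the recording start and the mouse samples with the host clock so they can be compared against frame presentation timestamps. That SCStream stamps frames with the host clock is my assumption here:
import ScreenCaptureKit
import CoreMedia

// Sketch: delegate supplied at init, start time captured on the host clock.
final class RecordingCoordinator: NSObject, SCRecordingOutputDelegate {
    private(set) var recordingStartHostTime: CMTime?

    func makeRecordingOutput(writingTo url: URL) -> SCRecordingOutput {
        let config = SCRecordingOutputConfiguration()
        config.outputURL = url
        config.outputFileType = .mp4
        config.videoCodecType = .hevc
        // The delegate goes in through the initializer, not a property.
        return SCRecordingOutput(configuration: config, delegate: self)
    }

    // Stamp mouse samples with this same clock so they line up with frame PTS.
    func currentHostTime() -> CMTime {
        CMClockGetTime(CMClockGetHostTimeClock())
    }

    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        // Capture "now" on the same clock used for CMSampleBuffer timestamps.
        recordingStartHostTime = currentHostTime()
    }
}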
We use SCK (ScreenCaptureKit) to screen share; however, [SCShareableContent getShareableContentWithCompletionHandler:] takes over 5 seconds to return a response. Is this normal? What can we do to reduce the time it takes?
Hello everyone, I am trying to integrate ScreenCaptureKit into my project. I am targeting macOS 26 and followed this official Apple sample project for ScreenCaptureKit: https://developer.apple.com/documentation/ScreenCaptureKit/capturing-screen-content-in-macos
I used the official code exactly as-is and implemented it in my app, but the results are not good. The video looks blurry and washed out, loses color, and is roughly 720p.
The first video frame is the result when I integrate it in my app.
After that, I tried another app (built in Electron, also using ScreenCaptureKit) and its results were a lot better. The second video frame is from a recording made with that application; it looks very close to the system display.
I tried multiple things, but with no real improvement. For my purposes, I want the final recorded video to be as good as the system's display quality. I also tried the .hdrLocalDisplay and .hdrCanonicalDisplay dynamic-range options, but that didn't help either. I changed the container to .mov and the codec to .hevc, but still no help.
Why is the recorded video not as high quality as the display?
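For reference, this is the direction I'm testing now (a sketch; I haven't confirmed it fixes the blurriness): size the stream in pixels using the filter's pointPixelScale instead of relying on the defaults, and ask for the best capture resolution. The encoder bitrate side (SCRecordingOutput or AVAssetWriter settings) is separate and not shown:
import ScreenCaptureKit
import CoreMedia

// Sketch: configure the stream for native pixel resolution rather than the
// default, roughly 720p-sized output.
func makeFullResolutionConfig(for filter: SCContentFilter) -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    config.width = Int(filter.contentRect.width * CGFloat(filter.pointPixelScale))
    config.height = Int(filter.contentRect.height * CGFloat(filter.pointPixelScale))
    config.minimumFrameInterval = CMTime(value: 1, timescale: 60)
    config.pixelFormat = kCVPixelFormatType_32BGRA
    config.captureResolution = .best
    config.showsCursor = true
    return config
}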
I've been using CGWindowListCreateImage which automatically creates an image with the size of the captured window.
But SCScreenshotManager.captureImage(contentFilter:configuration:) always creates images with the width and height specified in the provided SCStreamConfiguration. I could set the size explicitly by reading SCWindow.frame or SCContentFilter.contentRect and multiplying the width and height by SCContentFilter.pointPixelScale, but that won't work if I want to keep the window shadow with SCStreamConfiguration.ignoreShadowsSingleWindow = false.
Is there a way, and what's the best way, to take full-resolution screenshots of the correct size?
import Cocoa
import ScreenCaptureKit

class ViewController: NSViewController {
    @IBOutlet weak var imageView: NSImageView!

    override func viewDidAppear() {
        imageView.imageScaling = .scaleProportionallyUpOrDown
        view.wantsLayer = true
        view.layer!.backgroundColor = .init(red: 1, green: 0, blue: 0, alpha: 1)

        Task {
            let windows = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true).windows
            let window = windows[0]
            let filter = SCContentFilter(desktopIndependentWindow: window)

            let configuration = SCStreamConfiguration()
            configuration.ignoreShadowsSingleWindow = false
            configuration.showsCursor = false
            configuration.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
            configuration.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
            print(filter.contentRect)

            let windowImage = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: configuration)
            imageView.image = NSImage(cgImage: windowImage, size: CGSize(width: windowImage.width, height: windowImage.height))
        }
    }
}
SCStreamUpdateFrameContentRect X coordinate always returns 48 instead of expected 0
Environment
Device: MacBook Pro 13-inch
macOS: Sequoia 15.6.1
Xcode: 16.4
Framework: Screen Capture Kit
Issue Description
I'm experiencing an unexpected behavior with Screen Capture Kit where the SCStreamUpdateFrameContentRect X coordinate consistently returns 48 instead of the expected 0.
Code Context
I'm using SCContentSharingPicker to capture screen content and implementing the SCStreamOutput protocol to receive frame data. In my stream(_:didOutputSampleBuffer:of:) method, I'm extracting the content rect information from the sample buffer attachments:
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    switch type {
    case .screen:
        guard let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]] else {
            return
        }
        guard let attachments = attachmentsArray.first else { return }
        if !attachments.keys.contains(.contentRect) {
            return
        }
        print(attachments) // X coordinate always shows 48
        /*
         __C.SCStreamFrameInfo(_rawValue: SCStreamUpdateFrameContentRect): {
             Height = 540;
             Width = 864;
             X = 48;   <<-- unexpected value
             Y = 0;
         }
         */
        return
    // ... other cases
    }
}
Expected vs Actual Behavior
Expected: X coordinate should be 0 (indicating the content starts at the left edge of the screen)
Actual: X coordinate is consistently 48
Visual verification: When I display the captured screen content, it appears correctly without any offset, suggesting the actual content should indeed start at X=0
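To make the comparison concrete, this is how I'm decoding the attachments (a sketch; my current reading, which is not a confirmed explanation of the 48, is that the rect is expressed relative to the output buffer, so a nonzero X may just mean the content is inset within the buffer rather than offset on screen):
import ScreenCaptureKit
import CoreMedia

// Sketch: pull the content rect and scale out of the frame attachments.
func frameGeometry(of sampleBuffer: CMSampleBuffer) -> (contentRect: CGRect, contentScale: Double)? {
    guard let attachments = (CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false)
            as? [[SCStreamFrameInfo: Any]])?.first,
          let rectDict = attachments[.contentRect] as? NSDictionary,
          let contentRect = CGRect(dictionaryRepresentation: rectDict as CFDictionary),
          let contentScale = attachments[.contentScale] as? Double
    else { return nil }
    return (contentRect, contentScale)
}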
Main ViewModel Class
import Foundation
import ScreenCaptureKit
import SwiftUICore

class VM: NSObject, ObservableObject, SCContentSharingPickerObserver, SCStreamDelegate, SCStreamOutput {
    @State var isRecording = false

    // Error handling delegate
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        DispatchQueue.main.async {
            self.isRecording = false
        }
    }

    var picker: SCContentSharingPicker?

    func createPicker() -> SCContentSharingPicker {
        if let p = picker {
            return p
        }
        let picker = SCContentSharingPicker.shared
        var config = SCContentSharingPicker.shared.defaultConfiguration // SCContentSharingPickerConfiguration()
        config.allowedPickerModes = .singleDisplay
        config.allowsChangingSelectedContent = false
        config.excludedBundleIDs.append(Bundle.main.bundleIdentifier!)
        picker.add(self)
        picker.isActive = true
        SCContentSharingPicker.shared.present(using: .display)
        return picker
    }

    var stream: SCStream?
    let videoSampleBufferQueue = DispatchQueue(label: "com.example.apple-samplecode.VideoSampleBufferQueue")

    // observer call back for picker
    func contentSharingPicker(_ picker: SCContentSharingPicker, didUpdateWith filter: SCContentFilter, for stream: SCStream?) {
        if let stream = stream {
            stream.updateContentFilter(filter)
        } else {
            let config = SCStreamConfiguration()
            config.capturesAudio = false
            config.captureMicrophone = false
            config.captureResolution = .automatic
            config.captureDynamicRange = .SDR
            config.showMouseClicks = false
            config.showsCursor = false
            // Set the frame rate for screen capture
            config.minimumFrameInterval = CMTime(value: 1, timescale: 5)
            self.stream = SCStream(filter: filter, configuration: config, delegate: self)
            do {
                try self.stream?.addStreamOutput(self, type: .screen, sampleHandlerQueue: self.videoSampleBufferQueue)
            } catch {
                print("\(error)")
            }
            self.stream?.updateContentFilter(filter)
            DispatchQueue.main.async {
                self.stream?.startCapture()
            }
        }
    }

    func contentSharingPicker(_ picker: SCContentSharingPicker, didCancelFor stream: SCStream?) {}

    func contentSharingPickerStartDidFailWithError(_ error: any Error) {
        print(error)
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        switch type {
        case .screen:
            guard let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]] else {
                return
            }
            guard let attachments = attachmentsArray.first else { return }
            if !attachments.keys.contains(.contentRect) {
                return
            }
            print(attachments)
            return
        case .audio:
            return
        case .microphone:
            return
        @unknown default:
            return
        }
    }

    func outputVideoEffectDidStart(for stream: SCStream) {
        print("outputVideoEffectDidStart")
    }

    func outputVideoEffectDidStop(for stream: SCStream) {
        print("outputVideoEffectDidStop")
    }

    func streamDidBecomeActive(_ stream: SCStream) {
        print("streamDidBecomeActive")
    }

    func streamDidBecomeInactive(_ stream: SCStream) {
        print("streamDidBecomeInactive")
    }
}
Dear Team,
We have developed a macOS app using Unity (6000.0.51f1) that includes learning activities, assessments/tests, audio recording, and video playback functionalities. For security and content protection, we want to restrict the ability for users to capture screenshots or screen recordings of the app (especially via the built-in Cmd+Shift+5 / Screenshot toolbar).
We have attempted several approaches, but they have not been reliable. We would appreciate guidance from Apple or the developer community on the feasibility of this requirement.
Our requirements:
Block or disable screenshots/screen recordings (particularly the built-in Cmd+Shift+5) for the app.
Preferably achieve this using public APIs so that the app remains App Store compatible and passes review.
If full blocking is not possible, then at least ensure that any captured content appears blank/black for sensitive sections of the app.
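The closest public API we have found for the blank/black requirement is NSWindow.sharingType; a minimal sketch follows (assuming the Unity window can be reached from a native plugin, and noting that other posts in this forum report its behavior changed on recent macOS releases):
import AppKit

// Sketch: mark a window's contents as non-shareable so screen captures and
// recordings show it blank. Getting the NSWindow from a Unity app is assumed
// to happen in a native plugin; reliability on macOS 15.4+ is uncertain.
func excludeFromCapture(_ window: NSWindow) {
    window.sharingType = .none
}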
Additionally, we would like our app’s window behavior to work like other apps do:
Red button → Close the application completely.
Yellow button → Minimize the application to the Dock.
Green button → Maximize to full screen while still allowing access to the Dock and menu bar.
Any advice, best practices, or references to relevant documentation would be highly valuable.
Thank you for your support.
Topic: UI Frameworks
SubTopic: General
Tags: Automatic Assessment Configuration, macOS, Mac App Store, ScreenCaptureKit
I'm trying to build a simple screen recording app on macOS that always records the last x seconds of your screen and saves them whenever you want, as a way to get comfortable with Swift programming and Apple APIs.
I was able to get it running for the past 30 seconds and record and store the result.
However, I realised there was a core issue with my solution:
I was setting SCStreamConfiguration.queueDepth = 900 (to account for 30 fps over 30 seconds), which goes completely against Apple's instructions: https://developer.apple.com/documentation/screencapturekit/scstreamconfiguration/queuedepth?language=objc
Now that I've changed queueDepth back to 8, I am only able to record 8 frames, and it saves only those first 8 frames.
I am unsure what the flow of the APIs should be when dealing with ScreenCaptureKit.
For context, here's my recording manager code that handles this logic (with queueDepth = 900):
import Foundation
import ScreenCaptureKit
import AVFoundation

class RecordingManager: NSObject, ObservableObject, SCStreamDelegate {
    static let shared = RecordingManager()

    @Published var isRecording = false
    private var isStreamActive = false // Custom state flag
    private var stream: SCStream?
    private var streamOutputQueue = DispatchQueue(label: "com.clipback.StreamOutput", qos: .userInteractive)
    private var screenStreamOutput: ScreenStreamOutput? // Strong reference to output
    private var lastDisplayID: CGDirectDisplayID?
    private let displayCheckQueue = DispatchQueue(label: "com.clipback.DisplayCheck", qos: .background)

    // In-memory rolling buffer for last 30 seconds
    private var rollingFrameBuffer: [(CMSampleBuffer, CMTime)] = []
    private let rollingFrameBufferQueue = DispatchQueue(label: "com.clipback.RollingBuffer", qos: .userInteractive)
    private let rollingBufferDuration: TimeInterval = 30.0 // seconds

    // Track frame statistics
    private var frameCount: Int = 0
    private var lastReportTime: Date = Date()

    // Monitor for display availability
    private var displayCheckTimer: Timer?
    private var isWaitingForDisplay = false

    func startRecording() {
        print("[DEBUG] startRecording called.")
        guard !isRecording && !isWaitingForDisplay else {
            print("[DEBUG] Already recording or waiting, ignoring startRecording call")
            return
        }
        isWaitingForDisplay = true
        isStreamActive = true // Set active state
        checkForDisplay()
    }

    private func setupAndStartRecording(for display: SCDisplay, excluding appToExclude: SCRunningApplication?) {
        print("[DEBUG] setupAndStartRecording called for display: \(display.displayID)")
        let excludedApps = [appToExclude].compactMap { $0 }
        let filter = SCContentFilter(display: display, excludingApplications: excludedApps, exceptingWindows: [])

        let config = SCStreamConfiguration()
        config.width = display.width
        config.height = display.height
        config.minimumFrameInterval = CMTime(value: 1, timescale: 30) // 30 FPS
        config.queueDepth = 900
        config.showsCursor = true
        print("[DEBUG] SCStreamConfiguration created: width=\(config.width), height=\(config.height), FPS=\(config.minimumFrameInterval.timescale)")

        stream = SCStream(filter: filter, configuration: config, delegate: self)
        print("[DEBUG] SCStream initialized.")

        self.screenStreamOutput = ScreenStreamOutput { [weak self] sampleBuffer, outputType in
            guard let self = self else { return }
            guard outputType == .screen else { return }
            guard sampleBuffer.isValid else { return }
            guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]],
                  let statusRawValue = attachments.first?[.status] as? Int,
                  let status = SCFrameStatus(rawValue: statusRawValue),
                  status == .complete else {
                return
            }
            self.trackFrameRate()
            self.handleFrame(sampleBuffer)
        }

        do {
            try stream?.addStreamOutput(screenStreamOutput!, type: .screen, sampleHandlerQueue: streamOutputQueue)
            stream?.startCapture { [weak self] error in
                print("[DEBUG] SCStream.startCapture completion handler.")
                guard error == nil else {
                    print("[DEBUG] Failed to start capture: \(error!.localizedDescription)")
                    self?.handleStreamError(error!)
                    return
                }
                DispatchQueue.main.async {
                    self?.isRecording = true
                    self?.isStreamActive = true // Update state on successful start
                    print("[DEBUG] Recording started. isRecording = true.")
                }
            }
        } catch {
            print("[DEBUG] Error adding stream output: \(error.localizedDescription)")
            handleStreamError(error)
        }
    }

    private func handleFrame(_ sampleBuffer: CMSampleBuffer) {
        rollingFrameBufferQueue.async { [weak self] in
            guard let self = self else { return }
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            var retainedBuffer: CMSampleBuffer?
            CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleBufferOut: &retainedBuffer)
            guard let buffer = retainedBuffer else {
                print("[DEBUG] Failed to copy sample buffer")
                return
            }
            self.rollingFrameBuffer.append((buffer, pts))
            if let lastPTS = self.rollingFrameBuffer.last?.1 {
                while let firstPTS = self.rollingFrameBuffer.first?.1,
                      CMTimeGetSeconds(CMTimeSubtract(lastPTS, firstPTS)) > self.rollingBufferDuration {
                    self.rollingFrameBuffer.removeFirst()
                }
            }
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        print("[DEBUG] Stream stopped with error: \(error.localizedDescription)")
        displayCheckQueue.async { [weak self] in // Move to displayCheckQueue for synchronization
            self?.handleStreamError(error)
        }
    }
    // ... (checkForDisplay, handleStreamError, trackFrameRate, ScreenStreamOutput, etc. elided)
}
What could be the reason for this, and what would the fix be, logically? I don't understand why it's dependent on queueDepth, and if it is, how can I release frames and append newly recorded ones so that it keeps working?
Any help or resources are greatly appreciated!
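From what I understand so far (my assumption, based on the queueDepth documentation and WWDC guidance rather than anything confirmed): the stream hands out frames from a fixed pool of queueDepth surfaces, and CMSampleBufferCreateCopy only makes a shallow copy that keeps the pooled surface alive, so holding 30 seconds of sample buffers starves the pool after queueDepth frames. A sketch of deep-copying just the pixels so the pooled buffer can be released immediately:
import Foundation
import CoreMedia
import CoreVideo

// Sketch: copy a frame's pixels into a fresh CVPixelBuffer so the pooled
// buffer delivered by ScreenCaptureKit can be released right away.
// Assumes a single-plane format such as BGRA.
func deepCopyPixels(of sampleBuffer: CMSampleBuffer) -> CVPixelBuffer? {
    guard let src = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    var dstOut: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     CVPixelBufferGetWidth(src),
                                     CVPixelBufferGetHeight(src),
                                     CVPixelBufferGetPixelFormatType(src),
                                     nil,
                                     &dstOut)
    guard status == kCVReturnSuccess, let dst = dstOut else { return nil }

    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    defer {
        CVPixelBufferUnlockBaseAddress(dst, [])
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
    }

    guard let srcBase = CVPixelBufferGetBaseAddress(src),
          let dstBase = CVPixelBufferGetBaseAddress(dst) else { return nil }

    let height = CVPixelBufferGetHeight(src)
    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(src)
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(dst)
    let rowBytes = min(srcBytesPerRow, dstBytesPerRow)

    // Copy row by row because the two buffers may use different row padding.
    for row in 0..<height {
        memcpy(dstBase + row * dstBytesPerRow, srcBase + row * srcBytesPerRow, rowBytes)
    }
    return dst
}
The rolling buffer would then store (CVPixelBuffer, CMTime) pairs instead of CMSampleBuffers and re-wrap them into sample buffers at save time, which is the part I still need to verify.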
I have a custom NSWindow that I want to exclude from screen capture by setting its sharing state to kCGWindowSharingStateSharingNone. The goal is to prevent this window from appearing in the content captured by ScreenCaptureKit.
[window setSharingType:NSWindowSharingType::NSWindowSharingNone];
However, on macOS 15.4+ (Sequoia), the window is still captured by ScreenCaptureKit and appears in the shared content.
Does anyone know if kCGWindowSharingStateSharingNone is still effective with ScreenCaptureKit on macOS 15.4 and later?
Hello all, I saw this interesting visionOS app: https://apps.apple.com/us/app/splitscreen-multi-display/id6478007837
I was wondering if there was any documentation on the Swift APIs that were used to create this app.
We are working on a screen capture app. I have provisioning set up with a Developer ID certificate for direct distribution and a distribution certificate for Mac App Store distribution.
I submitted the app to the store with the distribution-certificate provisioning active. We still need to add documentation, so while we are waiting we decided to distribute the app directly, and this is where the problems come in.
I built with the Developer ID certificate and archive-exported the app. Then I manually stapled the app with "xcrun stapler staple Madshot360.app" and created a dmg file with the exported app.
The problems are:
The app captures a screen area with ScreenCaptureKit. A prior version of the app used a development certificate. When a user runs the new Developer ID build, macOS gets confused because it doesn't connect the new version to the already-permissioned older version. The user has to manually delete the old permission and then restart the app so the new version creates a new record, which can then be enabled. This is confusing for the user, since the permission pane says the app is enabled when it really isn't. We experimented with IT using a command line to delete the old app permission. That did not remove the old permission, and now the user can't delete the record at all. What can I do to force the removal of a permission that is broken? The command we ran was this:
"sudo tccutil reset ScreenCapture com.madwire.Madshot360"
The app used to display its normal warning that screen recording needed the user's permission; this is the permission I talk about above. Now there is a second permission dialog that states the following:
"Madshot360" is requesting to bypass the system private window picker and directly access your screen and audio.
This will allow Madshot360 to record your screen and system audio, including personal or sensitive information that may be visible or audible. Allow, Open System Settings.
This is basically what the normal alert does. Why the second dialog, and how can I stop it from appearing when the user has already allowed access? Is it because the binary is distributed directly from my computer?
Summary:
What can I do when a permission is broken? Is there a command that IT can use to remove any old permissions before installing the app? This app is to be used internally.
Is there a command line that will remove a specific app's permission before installing the app? Remember, the command line I showed you basically further broke the permissions for this app.
What is causing this second warning dialog to be displayed?
If there are multiple virtual displays connected when an app starts using SCStream, then if there is any change in the configuration of connected virtual screens (one of them gets disconnected, reconnected, etc), a new SCStream will always stream the content of the last connected virtual screen, no matter which virtual screen is intended to be streamed.
This happens despite the fact that the SCContentFilter is properly configured for the stream, with the filter's content having the right displayID with the proper frame in the global display coordinate system and the filter also confirms that it knows the proper contentRect size and pointPixelScale.
When all virtual displays are disconnected and reconnected, things return to normal. It's as if SCStream somehow gets confused and can't properly handle or internally re-enumerate multiple virtual screens until none is connected.
This issue does not normally come up, as most users will probably have at most one virtual display (like a Sidecar display) connected. However, some configurations (like systems using multiple DisplayLink displays or apps using the undocumented CGVirtualDisplay to create virtual screens) encounter this issue.
Note: this is a longstanding problem, has been like this from the first introduction of ScreenCaptureKit and even before, affected CGDisplayStream which similarly confused virtual screens.
The two ScreenCaptureKit WWDC22 sessions show how to capture with the new framework but the retina factor is hardcoded to 2 in SCStreamConfiguration.
When used on a non-retina display, the captured content floats in the upper-left corner of the image buffer.
There does not seem to be a simple way to retrieve the retina factor from the SCShareableContent data (when configuring the capture).
When processing the streaming output, the SCStreamFrameInfo attachment is supposed to have a scaleFactor property, but .scaleFactor does not return a value.
I have found that the attachment dictionary contains an SCStreamUpdateFrameDisplayResolution entry. This entry gives me the retina factor, but it is not an official SCStreamFrameInfo key; I have to list the dictionary's keys to access it.
What is the proper way to handle the retina factor with ScreenCaptureKit?
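In the meantime, the workaround I'm using when configuring the capture (a sketch; it relies on the documented NSScreenNumber device-description key rather than anything from ScreenCaptureKit itself):
import AppKit
import ScreenCaptureKit

// Sketch: derive the backing scale factor for an SCDisplay by matching its
// displayID against NSScreen, instead of hardcoding 2.
func backingScaleFactor(for display: SCDisplay) -> CGFloat {
    let screen = NSScreen.screens.first { screen in
        let key = NSDeviceDescriptionKey("NSScreenNumber")
        return (screen.deviceDescription[key] as? NSNumber)?.uint32Value == display.displayID
    }
    return screen?.backingScaleFactor ?? 1.0
}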
Hi,
I would like to reset the system private window picker alert used with ScreenCaptureKit. I can reset the ScreenCapture permission with tccutil reset ScreenCapture, but that does not reset the system private window picker alert. I tried deleting the application directory from the container, and it does not help; the system private window picker alert reuses the old approval I gave and does not prompt a new alert. How can I start with fresh ScreenCaptureKit settings for an app in testing?
Thanks
Hi,
I'm using ScreenCaptureKit on macOS 14+ to record a single window. I've noticed that the Presenter Overlay only appears when capturing the entire screen, but it does not appear when recording a specific window or a region.
Is there a way to enable the Presenter Overlay while recording a single window or a defined region, similar to how it works with full-screen capture?
Any guidance or clarification would be greatly appreciated.
Thanks in advance!