Hello everyone, I am trying to implement ScreenCaptureKit in my project. I am targeting macOS 26 and followed Apple's official sample project for ScreenCaptureKit: https://developer.apple.com/documentation/ScreenCaptureKit/capturing-screen-content-in-macos
I used the exact official code in my app, but the results are not good. The video looks blurry and unclear, the colors are washed out, and honestly it looks like 720p.
The first video frame is the result when I integrate it into my app.
After that, I tried another app (built in Electron, also using ScreenCaptureKit) and its results were a lot better. The second video frame is from a recording made with that application; it looks close to the system display.
I tried multiple things, but with no impressive results. For my purposes, I want the final recorded video to be as good as the display quality of the system. I also tried the .hdrLocalDisplay and .hdrCanonicalDisplay dynamic-range options, but no help there either. I changed the container/codec to .mov and .hevc, but still no help.
Why is the recorded video not as high quality as the display?
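For reference, here's roughly how I'm configuring the stream now (a sketch of what I've been experimenting with; the explicit pixel dimensions, BGRA pixel format, and Display P3 color space are my own attempts, not from the sample project):

import ScreenCaptureKit
import CoreMedia
import CoreVideo

func makeConfiguration(for filter: SCContentFilter) -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    // Capture at the display's native pixel size rather than the default point size.
    config.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
    config.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
    // Full-range BGRA avoids the chroma subsampling of the default 4:2:0 format.
    config.pixelFormat = kCVPixelFormatType_32BGRA
    // Match a wide-gamut display; an sRGB buffer can wash the colors out.
    config.colorSpaceName = CGColorSpace.displayP3
    config.minimumFrameInterval = CMTime(value: 1, timescale: 60) // at most 60 fps
    config.showsCursor = true
    return config
}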
ScreenCaptureKit
ScreenCaptureKit brings high-performance screen capture, including audio and video, to macOS.
I've been using CGWindowListCreateImage which automatically creates an image with the size of the captured window.
But SCScreenshotManager.captureImage(contentFilter:configuration:) always creates images with the width and height specified in the provided SCStreamConfiguration. I could set the size explicitly by reading SCWindow.frame or SCContentFilter.contentRect and multiplying the width and height by SCContentFilter.pointPixelScale, but that won't work if I want to keep the window shadow with SCStreamConfiguration.ignoreShadowsSingleWindow = false.
Is there a way and what's the best way to take full-resolution screenshots of the correct size?
import Cocoa
import ScreenCaptureKit

class ViewController: NSViewController {
    @IBOutlet weak var imageView: NSImageView!

    override func viewDidAppear() {
        imageView.imageScaling = .scaleProportionallyUpOrDown
        view.wantsLayer = true
        view.layer!.backgroundColor = .init(red: 1, green: 0, blue: 0, alpha: 1)
        Task {
            let windows = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true).windows
            let window = windows[0]
            let filter = SCContentFilter(desktopIndependentWindow: window)
            let configuration = SCStreamConfiguration()
            configuration.ignoreShadowsSingleWindow = false
            configuration.showsCursor = false
            // Convert the filter's point size to pixels.
            configuration.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
            configuration.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
            print(filter.contentRect)
            let windowImage = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: configuration)
            imageView.image = NSImage(cgImage: windowImage, size: CGSize(width: windowImage.width, height: windowImage.height))
        }
    }
}
SCStreamUpdateFrameContentRect X coordinate always returns 48 instead of expected 0
Environment
Device: MacBook Pro 13-inch
macOS: Sequoia 15.6.1
Xcode: 16.4
Framework: ScreenCaptureKit
Issue Description
I'm experiencing an unexpected behavior with ScreenCaptureKit where the SCStreamUpdateFrameContentRect X coordinate consistently returns 48 instead of the expected 0.
Code Context
I'm using SCContentSharingPicker to capture screen content and implementing the SCStreamOutput protocol to receive frame data. In my stream(_:didOutputSampleBuffer:of:) method, I'm extracting the content rect information from the sample buffer attachments:
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    switch type {
    case .screen:
        guard let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]] else {
            return
        }
        guard let attachments = attachmentsArray.first else { return }
        if !attachments.keys.contains(.contentRect) {
            return
        }
        print(attachments) // X coordinate always shows 48
        /*
         ..., SCStreamFrameInfo(_rawValue: SCStreamUpdateFrameContentRect): {
             Height = 540;
             Width = 864;
             X = 48;   <-- unexpected value
             Y = 0;
         }
         */
        return
    // ... other cases
    }
}
Expected vs Actual Behavior
Expected: X coordinate should be 0 (indicating the content starts at the left edge of the screen)
Actual: X coordinate is consistently 48
Visual verification: When I display the captured screen content, it appears correctly without any offset, suggesting the actual content should indeed start at X=0
Additional Information
The picker is configured with .singleDisplay mode
I'm excluding the current app's bundle ID from capture
The captured content visually appears correct; only the reported coordinates seem off
Main ViewModel Class
import Foundation
import ScreenCaptureKit
import SwiftUI

class VM: NSObject, ObservableObject, SCContentSharingPickerObserver, SCStreamDelegate, SCStreamOutput {
    // @Published (not @State) so SwiftUI views observing this object update.
    @Published var isRecording = false

    // Error handling delegate
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        DispatchQueue.main.async {
            self.isRecording = false
        }
    }

    var picker: SCContentSharingPicker?

    func createPicker() -> SCContentSharingPicker {
        if let p = picker {
            return p
        }
        let picker = SCContentSharingPicker.shared
        var config = picker.defaultConfiguration // SCContentSharingPickerConfiguration()
        config.allowedPickerModes = .singleDisplay
        config.allowsChangingSelectedContent = false
        config.excludedBundleIDs.append(Bundle.main.bundleIdentifier!)
        picker.defaultConfiguration = config // write the modified configuration back
        picker.add(self)
        picker.isActive = true
        picker.present(using: .display)
        return picker
    }

    var stream: SCStream?
    let videoSampleBufferQueue = DispatchQueue(label: "com.example.apple-samplecode.VideoSampleBufferQueue")

    // Observer callback for the picker.
    func contentSharingPicker(_ picker: SCContentSharingPicker, didUpdateWith filter: SCContentFilter, for stream: SCStream?) {
        if let stream = stream {
            stream.updateContentFilter(filter)
        } else {
            let config = SCStreamConfiguration()
            config.capturesAudio = false
            config.captureMicrophone = false
            config.captureResolution = .automatic
            config.captureDynamicRange = .SDR
            config.showMouseClicks = false
            config.showsCursor = false
            // Set the frame rate for screen capture (at most 5 fps).
            config.minimumFrameInterval = CMTime(value: 1, timescale: 5)
            self.stream = SCStream(filter: filter, configuration: config, delegate: self)
            do {
                try self.stream?.addStreamOutput(self, type: .screen, sampleHandlerQueue: self.videoSampleBufferQueue)
            } catch {
                print("\(error)")
            }
            self.stream?.updateContentFilter(filter)
            DispatchQueue.main.async {
                self.stream?.startCapture { error in
                    if let error { print("startCapture failed: \(error)") }
                }
            }
        }
    }

    func contentSharingPicker(_ picker: SCContentSharingPicker, didCancelFor stream: SCStream?) {}

    func contentSharingPickerStartDidFailWithError(_ error: any Error) {
        print(error)
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        switch type {
        case .screen:
            guard let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]] else {
                return
            }
            guard let attachments = attachmentsArray.first else { return }
            if !attachments.keys.contains(.contentRect) {
                return
            }
            print(attachments)
            return
        case .audio:
            return
        case .microphone:
            return
        @unknown default:
            return
        }
    }

    func outputVideoEffectDidStart(for stream: SCStream) {
        print("outputVideoEffectDidStart")
    }

    func outputVideoEffectDidStop(for stream: SCStream) {
        print("outputVideoEffectDidStop")
    }

    func streamDidBecomeActive(_ stream: SCStream) {
        print("streamDidBecomeActive")
    }

    func streamDidBecomeInactive(_ stream: SCStream) {
        print("streamDidBecomeInactive")
    }
}
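For reference, this is how I decode the rect on my side (CGRect(dictionaryRepresentation:) is the standard CoreGraphics conversion). One thing I noticed: 48 + 864 + 48 = 960, so perhaps the rect describes where the scaled content sits inside a 960-point-wide output buffer rather than a screen coordinate.

// Inside stream(_:didOutputSampleBuffer:of:), after the guards above.
if let rectValue = attachments[.contentRect],
   let contentRect = CGRect(dictionaryRepresentation: rectValue as! CFDictionary) {
    print("contentRect:", contentRect) // origin.x prints 48 here
}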
Dear Team,
We have developed a macOS app using Unity (6000.0.51f1) that includes learning activities, assessments/tests, audio recording, and video playback functionality. For security and content protection, we want to prevent users from capturing screenshots or screen recordings of the app (especially via the built-in Cmd+Shift+5 screenshot toolbar).
We have attempted several approaches, but they have not been reliable. We would appreciate guidance from Apple or the developer community on the feasibility of this requirement.
Our requirements:
Block or disable screenshots/screen recordings (particularly the built-in Cmd+Shift+5) for the app.
Preferably achieve this using public APIs so that the app remains App Store compatible and passes review.
If full blocking is not possible, then at least ensure that any captured content appears blank/black for sensitive sections of the app (see the sketch after this list).
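For requirement 3, one public-API approach we are evaluating is marking sensitive windows as non-shareable via NSWindow.sharingType (a sketch; reaching the NSWindow from the Unity host is our assumption and may require a native plugin):

import AppKit

// Marks a window's contents as excluded from screen capture and recording;
// captured output should then show this window as blank.
func excludeFromCapture(_ window: NSWindow) {
    window.sharingType = .none
}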
Additionally, we would like our app’s window behavior to work like other apps do:
Red button → Close the application completely.
Yellow button → Minimize the application to the Dock.
Green button → Maximize to full screen while still allowing access to the Dock and menu bar.
Any advice, best practices, or references to relevant documentation would be highly valuable.
Thank you for your support.
Topic: UI Frameworks · SubTopic: General · Tags: Automatic Assessment Configuration, macOS, Mac App Store, ScreenCaptureKit
I'm trying to build a simple screen-recording app on macOS that always records the last x seconds of your screen and saves them whenever you want, as a way to get comfortable with Swift programming and Apple's APIs.
I was able to get it running for the past 30 seconds and to record and store the result.
However, I realized there was a core issue with my solution:
I was setting SCStreamConfiguration.queueDepth = 900 (to account for 30 fps over 30 seconds), which goes completely against Apple's instructions: https://developer.apple.com/documentation/screencapturekit/scstreamconfiguration/queuedepth?language=objc
Now, when I changed queueDepth back to 8, I am only able to record 8 frames, and it saves only those first 8 frames.
I am unsure what the flow of the APIs should be when dealing with ScreenCaptureKit.
For context, here's my recording-manager code that handles this logic (queueDepth = 900):
import Foundation
import ScreenCaptureKit
import AVFoundation

class RecordingManager: NSObject, ObservableObject, SCStreamDelegate {
    static let shared = RecordingManager()
    @Published var isRecording = false
    private var isStreamActive = false // Custom state flag
    private var stream: SCStream?
    private var streamOutputQueue = DispatchQueue(label: "com.clipback.StreamOutput", qos: .userInteractive)
    private var screenStreamOutput: ScreenStreamOutput? // Strong reference to output
    private var lastDisplayID: CGDirectDisplayID?
    private let displayCheckQueue = DispatchQueue(label: "com.clipback.DisplayCheck", qos: .background)

    // In-memory rolling buffer for the last 30 seconds
    private var rollingFrameBuffer: [(CMSampleBuffer, CMTime)] = []
    private let rollingFrameBufferQueue = DispatchQueue(label: "com.clipback.RollingBuffer", qos: .userInteractive)
    private let rollingBufferDuration: TimeInterval = 30.0 // seconds

    // Track frame statistics
    private var frameCount: Int = 0
    private var lastReportTime: Date = Date()

    // Monitor for display availability
    private var displayCheckTimer: Timer?
    private var isWaitingForDisplay = false

    func startRecording() {
        print("[DEBUG] startRecording called.")
        guard !isRecording && !isWaitingForDisplay else {
            print("[DEBUG] Already recording or waiting, ignoring startRecording call")
            return
        }
        isWaitingForDisplay = true
        isStreamActive = true // Set active state
        checkForDisplay()
    }

    private func setupAndStartRecording(for display: SCDisplay, excluding appToExclude: SCRunningApplication?) {
        print("[DEBUG] setupAndStartRecording called for display: \(display.displayID)")
        let excludedApps = [appToExclude].compactMap { $0 }
        let filter = SCContentFilter(display: display, excludingApplications: excludedApps, exceptingWindows: [])
        let config = SCStreamConfiguration()
        config.width = display.width
        config.height = display.height
        config.minimumFrameInterval = CMTime(value: 1, timescale: 30) // 30 FPS
        config.queueDepth = 900
        config.showsCursor = true
        print("[DEBUG] SCStreamConfiguration created: width=\(config.width), height=\(config.height), FPS=\(config.minimumFrameInterval.timescale)")
        stream = SCStream(filter: filter, configuration: config, delegate: self)
        print("[DEBUG] SCStream initialized.")
        self.screenStreamOutput = ScreenStreamOutput { [weak self] sampleBuffer, outputType in
            guard let self = self else { return }
            guard outputType == .screen else { return }
            guard sampleBuffer.isValid else { return }
            guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]],
                  let statusRawValue = attachments.first?[.status] as? Int,
                  let status = SCFrameStatus(rawValue: statusRawValue),
                  status == .complete else {
                return
            }
            self.trackFrameRate()
            self.handleFrame(sampleBuffer)
        }
        do {
            try stream?.addStreamOutput(screenStreamOutput!, type: .screen, sampleHandlerQueue: streamOutputQueue)
            stream?.startCapture { [weak self] error in
                print("[DEBUG] SCStream.startCapture completion handler.")
                guard error == nil else {
                    print("[DEBUG] Failed to start capture: \(error!.localizedDescription)")
                    self?.handleStreamError(error!)
                    return
                }
                DispatchQueue.main.async {
                    self?.isRecording = true
                    self?.isStreamActive = true // Update state on successful start
                    print("[DEBUG] Recording started. isRecording = true.")
                }
            }
        } catch {
            print("[DEBUG] Error adding stream output: \(error.localizedDescription)")
            handleStreamError(error)
        }
    }

    private func handleFrame(_ sampleBuffer: CMSampleBuffer) {
        rollingFrameBufferQueue.async { [weak self] in
            guard let self = self else { return }
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            var retainedBuffer: CMSampleBuffer?
            CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleBufferOut: &retainedBuffer)
            guard let buffer = retainedBuffer else {
                print("[DEBUG] Failed to copy sample buffer")
                return
            }
            self.rollingFrameBuffer.append((buffer, pts))
            // Trim frames older than the rolling window.
            if let lastPTS = self.rollingFrameBuffer.last?.1 {
                while let firstPTS = self.rollingFrameBuffer.first?.1,
                      CMTimeGetSeconds(CMTimeSubtract(lastPTS, firstPTS)) > self.rollingBufferDuration {
                    self.rollingFrameBuffer.removeFirst()
                }
            }
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        print("[DEBUG] Stream stopped with error: \(error.localizedDescription)")
        displayCheckQueue.async { [weak self] in // Move to displayCheckQueue for synchronization
            self?.handleStreamError(error)
        }
    }
What could be the reason for this, and what would be the possible fix, logically? I don't understand why it depends on queueDepth, and if it does, how can I drain it and append newly recorded frames so that it keeps working?
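One theory I'm testing: CMSampleBufferCreateCopy makes a shallow copy, so every buffered frame still references the same IOSurface from the stream's small, queueDepth-sized pool, and holding 30 seconds of them starves the stream. If that's right, the fix would be to deep-copy the pixel data and release the sample buffer, roughly like this (an unverified sketch; the rolling buffer would then store (CVPixelBuffer, CMTime) tuples and queueDepth could stay in the documented range):

import Foundation
import CoreVideo

// Deep-copies a frame's pixels into a fresh CVPixelBuffer so the original
// (an IOSurface from the stream's queueDepth-sized pool) can be released.
func deepCopy(_ src: CVPixelBuffer) -> CVPixelBuffer? {
    var dstOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        nil,
                        &dstOut)
    guard let dst = dstOut else { return nil }
    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    defer {
        CVPixelBufferUnlockBaseAddress(dst, [])
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
    }
    let planes = CVPixelBufferGetPlaneCount(src)
    if planes == 0 {
        // Packed formats (e.g. BGRA): copy row by row to respect differing strides.
        copyRows(from: CVPixelBufferGetBaseAddress(src), CVPixelBufferGetBytesPerRow(src),
                 to: CVPixelBufferGetBaseAddress(dst), CVPixelBufferGetBytesPerRow(dst),
                 rows: CVPixelBufferGetHeight(src))
    } else {
        // Planar formats (e.g. the default 420v): copy each plane separately.
        for p in 0..<planes {
            copyRows(from: CVPixelBufferGetBaseAddressOfPlane(src, p), CVPixelBufferGetBytesPerRowOfPlane(src, p),
                     to: CVPixelBufferGetBaseAddressOfPlane(dst, p), CVPixelBufferGetBytesPerRowOfPlane(dst, p),
                     rows: CVPixelBufferGetHeightOfPlane(src, p))
        }
    }
    return dst
}

private func copyRows(from src: UnsafeMutableRawPointer?, _ srcStride: Int,
                      to dst: UnsafeMutableRawPointer?, _ dstStride: Int, rows: Int) {
    guard let src, let dst else { return }
    let bytes = min(srcStride, dstStride)
    for row in 0..<rows {
        memcpy(dst + row * dstStride, src + row * srcStride, bytes)
    }
}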
Any help or resource is greatly appreciated!
I have a custom NSWindow that I want to exclude from screen capture by setting its sharing state to kCGWindowSharingStateSharingNone. The goal is to prevent this window from appearing in the content captured by ScreenCaptureKit.
[window setSharingType:NSWindowSharingNone];
However, on macOS 15.4+ (Sequoia), the window is still captured by ScreenCaptureKit and appears in the shared content.
Does anyone know if kCGWindowSharingStateSharingNone is still effective with ScreenCaptureKit on macOS 15.4 and later?
Hello all, I saw this interesting visionOS app: https://apps.apple.com/us/app/splitscreen-multi-display/id6478007837
I was wondering if there was any documentation on the Swift APIs that were used to create this app.
We are working on a screen capture app. I have provisioning set up with a Developer ID certificate for direct distribution and a distribution certificate for Mac App Store distribution.
I submitted the app to the store with the distribution-certificate provisioning active. We need to add documentation, so while we are waiting we decided to distribute the app directly, and this is where the problems come in.
I built with the Developer ID certificate and used Archive → Export to export the app. Then I manually stapled the app with "xcrun stapler staple Madshot360.app" and created a dmg file with the exported app.
The problems are:
The app captures a screen area with ScreenCaptureKit. A prior version of the app used a development certificate. When a user runs the new Developer ID build, macOS gets confused because it doesn't connect the new version to the already-permissioned older version. The user has to manually delete the old permission and restart the app so the new version creates a new record that can then be enabled. This is confusing for the user, since the permission panel says the app is enabled when it really isn't. We experimented with IT using a command line to delete the old app permission. That did not remove the old permission, and now the user can't delete the record at all. What can I do to force the removal of a permission that is broken? The command we ran was this:
"sudo tccutil reset ScreenCapture com.madwire.Madshot360"
The app used to display its normal warning that screen recording needs the user's permission. This is the permission I mentioned above. Now there is a second permission dialog that states the following:
"Madshot360" is requesting to bypass the system private window picker and directly access your screen and audio.
This will allow Madshot360 to record your screen and system audio, including personal or sensitive information that may be visible or audible. Allow, Open System Settings.
This is basically what the normal alert does. Why the second dialog, and how can I stop it from appearing when the user has already granted permission? Is it because the binary is distributed directly from my computer?
Summary:
What can I do when a permission is broken? Is there a command that IT can use to remove any old permissions before installing the app? This app is to be used internally.
Is there a command line that will remove a specific app's permission before installing the app? Remember, the command we ran above only broke the permissions for this app further.
What is causing the second warning dialog to be displayed?
If there are multiple virtual displays connected when an app starts using SCStream, then if there is any change in the configuration of connected virtual screens (one of them gets disconnected, reconnected, etc), a new SCStream will always stream the content of the last connected virtual screen, no matter which virtual screen is intended to be streamed.
This happens despite the fact that the SCContentFilter is properly configured for the stream, with the filter's content having the right displayID with the proper frame in the global display coordinate system and the filter also confirms that it knows the proper contentRect size and pointPixelScale.
When all virtual displays are disconnected and reconnected, things return to normal. It's as if SCStream somehow gets confused and can't properly handle or internally re-enumerate multiple virtual screens until none are connected.
This issue does not normally come up, as most users will probably have only one virtual display (like a Sidecar display) connected at most. However, some configurations (systems using multiple DisplayLink displays, or apps using the undocumented CGVirtualDisplay to create virtual screens) do encounter it.
Note: this is a longstanding problem; it has been like this since the first introduction of ScreenCaptureKit, and even before that it affected CGDisplayStream, which similarly confused virtual screens.
The two ScreenCaptureKit WWDC22 sessions show how to capture with the new framework, but the retina factor is hardcoded to 2 in SCStreamConfiguration.
When used on a non-retina display, the screen capture floats in the upper-left corner of the image buffer.
There does not seem to be a simple way to retrieve the retina factor from the SCShareableContent data (when configuring the capture).
When processing the streaming output, the SCStreamFrameInfo attachment is supposed to have a scaleFactor property, but .scaleFactor does not return a value.
I have found that the attachment dictionary contains SCStreamUpdateFrameDisplayResolution. This entry gives me the retina factor, but it is not an official SCStreamFrameInfo key; I have to list the keys to access it.
What is the proper way to handle retina factors with ScreenCaptureKit?
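For what it's worth, on macOS 14 and later the filter itself exposes the factor, which sidesteps the attachment keys entirely (a sketch; SCContentFilter.pointPixelScale is the documented property, the wiring is mine):

import ScreenCaptureKit

// Size the output buffer in native pixels using the filter's scale factor
// (1 on non-retina displays, 2 on retina displays).
func applyNativeSize(to config: SCStreamConfiguration, from filter: SCContentFilter) {
    config.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
    config.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
}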
Hi,
I would like to reset the system private window picker alert used with ScreenCaptureKit. I can reset the ScreenCapture permission with tccutil reset ScreenCapture, but that does not reset the system private window picker alert. I tried deleting the application directory from the container, and it does not help: the alert keeps the old approval I gave and does not prompt again. How can I start with fresh ScreenCaptureKit settings for an app in testing?
Thanks
Hi,
I'm using ScreenCaptureKit on macOS 14+ to record a single window. I've noticed that the Presenter Overlay only appears when capturing the entire screen, but it does not appear when recording a specific window or a region.
Is there a way to enable the Presenter Overlay while recording a single window or a defined region, similar to how it works with full-screen capture?
Any guidance or clarification would be greatly appreciated.
Thanks in advance!
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:
let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
to get the “capture” timestamp, and I also extract the mach-absolute display time:
let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]]
let displayMach = attachments?.first?[.displayTime] as? UInt64
// convert mach ticks to seconds...
Then I compare both against the current time:
let now = CACurrentMediaTime()
let latencyFromPTS = now - pts
let latencyFromDisplay = now - displayTimeSeconds
But I consistently see negative values for both calculations—i.e. the PTS or displayTime often end up numerically larger than now. This suggests that the “presentation timestamp” and the mach-absolute display time are coming from a different epoch or clock domain than CACurrentMediaTime().
Questions:
Which clocks/epochs does ScreenCaptureKit use for PTS and for .displayTime?
How can I align these timestamps with CACurrentMediaTime() so that now - pts and now - displayTime reliably yield non-negative real-world latencies?
Any pointers on the correct clock conversions or APIs to use would be greatly appreciated.
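In case it helps others, here is the mach-timebase conversion I'm using for .displayTime (standard Darwin calls). Even with this, I suspect a negative delta can be expected, since .displayTime appears to be the frame's intended display time, i.e. slightly in the future relative to capture:

import Darwin
import QuartzCore

// Converts mach-absolute ticks (the .displayTime attachment) into seconds on
// the same clock that CACurrentMediaTime() uses (mach_absolute_time in seconds).
func seconds(fromMachTicks ticks: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = Double(ticks) * Double(timebase.numer) / Double(timebase.denom)
    return nanos / 1_000_000_000
}

// Usage: let latency = CACurrentMediaTime() - seconds(fromMachTicks: displayMach)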
Issue Description
I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029. (The operation couldn’t be completed. (OSStatus error 1852797029.)) The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions
Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
Are there specific conditions or system states that might trigger this error sporadically?
Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
Hi!
I'm developing an application based on Chrome that needs to take regular screenshots of webpages.
Under the hood (actually Chromium), it uses SCScreenshotManager to capture screenshots automatically (without user interaction).
I've noticed that regularly using this API triggers a user notification saying:
"Your Screen 'AppTest' has accessed your screen and system audio 3,594 times in the past 30 days. You can manage this in Settings."
How can I prevent this notification from appearing? Are there any specific entitlements(Or configuration of SCScreenshotManager) that I can use?
Thanks!
Is there anything to make a one-off screenshot, like Cmd+Shift+3, but with code?
There is ScreenCaptureKit, but it seems to be geared more toward video/audio streams. I thought of grabbing the first frame from the video stream, but that seems like a rather complex workaround for a simple problem.
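The closest I've found is SCScreenshotManager.captureImage(contentFilter:configuration:) on macOS 14+, which returns a single CGImage without setting up a stream. Is this the intended path? A sketch of what I'm trying (untested; the sizing math assumes pointPixelScale gives the backing scale):

import Foundation
import ScreenCaptureKit

// One-off, full-display screenshot (requires Screen Recording permission).
func captureMainDisplay() async throws -> CGImage {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else {
        throw NSError(domain: "Screenshot", code: 1) // hypothetical error for brevity
    }
    let filter = SCContentFilter(display: display, excludingWindows: [])
    let config = SCStreamConfiguration()
    config.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
    config.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
    return try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: config)
}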
My app records the screen to use the audio for audio analysis in a music visualization. The app works perfectly in production, but when uploaded via Transporter it is rejected as shown below. What is the correct entitlement to use, as the entitlement I am using seems to be deprecated?
Validation failed (409)
Invalid Code Signing Entitlements. Your application bundle's signature contains code signing entitlements that are not supported on macOS. Specifically, key 'com.apple.security.screen-capture' in 'com.boxedpandora.pulse.pkg/Payload/PuLsE.app/Contents/MacOS/PuLsE' is not supported. (ID: a1a436f5-925d-43bc-908d-0761064d589b)
Many thanks for any input provided!
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag and drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
For a quick benchmark, a 3'40'' movie, step 1 takes ~1.7 seconds, and step two adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds then, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
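For reference, step 1 in code, roughly (a sketch; I'm inserting both audio tracks into a plain AVMutableComposition and exporting with the AppleM4A preset, which renders them mixed down):

import AVFoundation

// Step 1: render the recording's two audio tracks into a single m4a file.
func bakeAudioTracks(of sourceURL: URL, to outputURL: URL) async throws {
    let asset = AVURLAsset(url: sourceURL)
    let composition = AVMutableComposition()
    let duration = try await asset.load(.duration)
    for track in try await asset.loadTracks(withMediaType: .audio) {
        let compositionTrack = composition.addMutableTrack(withMediaType: .audio,
                                                           preferredTrackID: kCMPersistentTrackID_Invalid)
        try compositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                              of: track, at: .zero)
    }
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        throw CocoaError(.featureUnsupported)
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    await withCheckedContinuation { (done: CheckedContinuation<Void, Never>) in
        export.exportAsynchronously { done.resume() }
    }
    if let error = export.error { throw error }
}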
So, my question is, is there a faster way?
The best idea I can come up with right now is, when initially recording the screen with system- and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step one and just ripping the two single audio tracks out of the movie and only have the video and the "hidden" track (then unmuted), but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
I have an SCStreamDelegate for capturing frames from applications. On recent point releases of macOS Sonoma, I've noticed that the stream is being cancelled with no user action taken. I started debugging, and when my on-error method is called, the error parameter being passed is null:
func stream(_ stream: SCStream, didStopWithError error: Error) {
    /* The debugger shows this, and it segfaults if I try to print "\(error)":
       error (Error)
       > error = (Builtin.RawPointer) 0x0
    */
From what I can tell, error should be a valid NSError so I can check the error code, based on similar code I've seen in, for example OBS (https://github.com/obsproject/obs-studio/blob/265239d4174f8d291b0de437088c5b78f8e27687/plugins/mac-capture/mac-sck-common.m#L29)
Usually when this happens, the menu bar icon for screen sharing (where I would click to change the shared window, etc.) stays there even after my app has closed and no apps are doing any sharing.
Has anyone come across this before? Am I misinterpreting what the debugger is saying about the error parameter?
I'm running macOS 14.7.3, but I just updated from 14.7.2 earlier and had basically the same issue on both versions.