I have a macOS app that uses screen capture logic. It was originally coded using the Quartz CG API:
if let cgimage = CGDisplayCreateImage(CGMainDisplayID(), rect: cgRect) {...}
and this worked as expected even when capturing a screen rect that began on the secondary monitor and ended on the primary monitor (or was entirely contained on the secondary monitor).
However, that API is now deprecated and you're supposed to use ScreenCaptureKit instead. So I have attempted to convert the code. The trial code is:
let scConfig = SCStreamConfiguration()
scConfig.sourceRect = drect
scConfig.width = Int(drect.width)
scConfig.height = Int(drect.height)
SCScreenshotManager.captureImage(contentFilter: sFilter, configuration: scConfig) { any, error in
    if let cgim = any {
        print("image dims \(cgim.width), \(cgim.height), requested: \(drect)")
        self.writeToFile2(cgim)
    } else {
        print("SCREEN CAP failed")
    }
}
...
where sFilter was previously set based on the main screen display (with no exclusions). This code also "works" as long as the capture rect is entirely on the primary monitor, but it fails if the rect spans both monitors or is fully contained on the secondary monitor. (By "fails" I mean it produces an empty image.)
So my question is: how do I use ScreenCaptureKit to obtain a screenshot of a rectangle that spans dual monitors?
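For what it's worth, the only approach I can think of, given that an SCContentFilter appears to target a single display, is to split the global rect into per-display pieces, capture each piece with its own filter, and composite the results. Below is a minimal sketch of that idea (captureSpanningRect and all the coordinate math are my own illustration; it assumes a 1x scale factor, and I haven't verified the display-local coordinate conventions):

import ScreenCaptureKit
import CoreGraphics

func captureSpanningRect(_ globalRect: CGRect) async throws -> CGImage? {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)

    // Destination bitmap covering the whole requested rect (sRGB, 1x scale assumed).
    guard let context = CGContext(data: nil,
                                  width: Int(globalRect.width),
                                  height: Int(globalRect.height),
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }

    for display in content.displays {
        // Portion of the requested rect that falls on this display.
        let piece = globalRect.intersection(display.frame)
        guard !piece.isEmpty else { continue }

        let filter = SCContentFilter(display: display, excludingWindows: [])
        let config = SCStreamConfiguration()
        // sourceRect is expressed in the display's own coordinate space (assumption).
        config.sourceRect = CGRect(x: piece.origin.x - display.frame.origin.x,
                                   y: piece.origin.y - display.frame.origin.y,
                                   width: piece.width, height: piece.height)
        config.width = Int(piece.width)
        config.height = Int(piece.height)

        let image = try await SCScreenshotManager.captureImage(contentFilter: filter,
                                                               configuration: config)
        // CGContext has a bottom-left origin, while the global rect is top-left based.
        context.draw(image, in: CGRect(x: piece.origin.x - globalRect.origin.x,
                                       y: globalRect.maxY - piece.maxY,
                                       width: piece.width, height: piece.height))
    }
    return context.makeImage()
}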
1. In FxPlug 4.3's FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When I create a custom NSWindow in FxPlug, [Window setLevel:NSFloatingWindowLevel] is not honored either.
3. How can I keep my window in front of Final Cut Pro without interfering with Final Cut Pro's operation?
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // has no effect????
[panel makeKeyAndOrderFront:self];
Question: in FxPlug 4.3, setLevel cannot put the panel in front of Final Cut Pro and Motion?
Help~~~ I haven't found an answer anywhere in the world!
I have a LockedCameraCapture extension working well; however, there is one situation I cannot find a solution to. If the user has not yet granted camera access permission, then the main app will be launched rather than the LockedCameraCapture extension. I cannot find a mechanism by which my main app can detect that this was the reason for the launch and thereby request permission.
When the button is pressed from Control Center without permission, the app is run and the CameraCaptureIntent is called, so I can prompt the user from there. However, as best I can tell, the CameraCaptureIntent is not called when launched from a locked Lock Screen; the app is simply opened.
My app has a variety of functions, most of which do not involve the camera, so I cannot just always prompt the user for camera access on open. Is there any mechanism by which my main app can detect it was launched for this reason, so it could ask for permission? Thank you!
I've had no luck compiling the sample code provided by Apple with Xcode 16.0 beta 5.
ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos)
The part that is failing is:
streamOutput.capturedFrameHandler = { continuation.yield($0) }
The error message is:
Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses
Please enlighten me as to why this is an issue and how to avoid it.
Thanks in advance!
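For reference, the only workaround shape I've found for this class of diagnostic is a sketch like the following (CapturedFrame is the sample project's own struct; whether it is actually safe to mark Sendable is an assumption you'd need to verify):

// The diagnostic fires because the closure parameter is task-isolated while
// the continuation's yield wants a 'sending' value. If CapturedFrame's stored
// properties are all safe to move across isolation domains, declaring the
// conformance silences the error (@unchecked means you vouch for it manually):
extension CapturedFrame: @unchecked Sendable {}

// With the conformance in place, the sample's handler compiles unchanged:
streamOutput.capturedFrameHandler = { continuation.yield($0) }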
Calling MIDIFlushOutput on a network endpoint does not cancel events scheduled with future timestamps -- they continue to send.
e.g.
func send(eventLists: [MIDIEventList]) {
    let outputPortRef = ...
    let networkDestination = ...
    for var eventList in eventLists {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
}
...
MIDIFlushOutput(networkDestination.objectRef)
I'm seeing that MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI Network connection?
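In case it helps frame the question, the only workaround I can think of is to keep the scheduling app-side with cancellable work items and always send with "now" timestamps, so nothing sits in Core MIDI's outgoing queue. A rough sketch (outputPortRef and networkDestination are the same placeholders as above; the queue choice is arbitrary):

import Foundation
import CoreMIDI

var pendingSends: [DispatchWorkItem] = []

// Schedule an event list app-side instead of via future MIDI timestamps.
func schedule(eventList: MIDIEventList, at deadline: DispatchTime) {
    var eventList = eventList
    let work = DispatchWorkItem {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
    pendingSends.append(work)
    DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: deadline, execute: work)
}

// The app-side equivalent of MIDIFlushOutput: cancel anything not yet sent.
func cancelAllScheduled() {
    pendingSends.forEach { $0.cancel() }
    pendingSends.removeAll()
}

But that gives up Core MIDI's own timestamp-based scheduling, which I'd prefer to keep.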
Hi,
After installing iPadOS 18 Beta 3 on my iPad 7th gen, the default camera app no longer detects QR codes.
I tried updating to Beta 7, but the issue remained.
Third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes no longer work either.
You can reproduce the issue by running the default camera app or the AVFoundation sample code from the Apple developer site on an iPad 7th gen (with the iPadOS 18 beta installed).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
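For anyone trying to reproduce without the sample project, the failing path is just the standard metadata-output setup, roughly like this (a minimal sketch; the delegate object and session wiring are abbreviated):

import AVFoundation

// Minimal QR-detection setup that stops delivering callbacks on the iPadOS 18 beta.
func makeQRSession(delegate: AVCaptureMetadataOutputObjectsDelegate) -> AVCaptureSession? {
    let session = AVCaptureSession()
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)
    output.setMetadataObjectsDelegate(delegate, queue: .main)
    output.metadataObjectTypes = [.qr] // must be set after the output is added

    session.startRunning() // metadataOutput(_:didOutput:from:) never fires on affected devices
    return session
}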
Has anyone else experienced this issue?
I would like to know if this issue occurs on other iPad models as well.
This is similar to the following issue that previously occurred with iPadOS 17.4.
https://support.apple.com/en-lamr/118614
https://developer.apple.com/forums/thread/748092
I've built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that on occasion the images have large parts that are blank; see below:
Frustratingly, I can't reproduce this myself! Does anyone know what could be causing it? Is it a memory issue? I haven't posted the code, as there's a lot to look over and I'm not sure it would help diagnose it.
Thanks for any pointers.
Hello everyone,
I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, not with 20 channels.
To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm told that there is 1 channel. But when I probe the input audio unit, I'm told that there are 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
// clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
// Device Name: MacBook Pro Microphone
// Number of Channels (Stream Format): 1
// Number of Elements (Element Count): 2
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>
void printDeviceInfo(AudioUnit audioUnit) {
    UInt32 size;
    OSStatus err;

    AudioStreamBasicDescription streamFormat;
    size = sizeof(streamFormat);
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
                               &streamFormat, &size);
    if (err != noErr) {
        printf("Error getting stream format\n");
        exit(1);
    }
    int numChannels = streamFormat.mChannelsPerFrame;

    UInt32 elementCount;
    size = sizeof(elementCount);
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
                               &elementCount, &size);
    if (err != noErr) {
        printf("Error getting element count\n");
        exit(1);
    }

    printf("Number of Channels (Stream Format): %d\n", numChannels);
    printf("Number of Elements (Element Count): %d\n", elementCount);
}

void printDeviceName(AudioDeviceID deviceID) {
    UInt32 size;
    OSStatus err;
    CFStringRef deviceName = NULL;
    size = sizeof(deviceName);
    err = AudioObjectGetPropertyData(
        deviceID,
        &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                      kAudioObjectPropertyScopeGlobal,
                                      kAudioObjectPropertyElementMain},
        0, NULL, &size, &deviceName);
    if (err != noErr) {
        printf("Error getting device name\n");
        exit(1);
    }

    char deviceNameStr[256];
    if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                            kCFStringEncodingUTF8)) {
        printf("Error converting device name to C string\n");
        exit(1);
    }
    CFRelease(deviceName);
    printf("Device Name: %s\n", deviceNameStr);
}

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        OSStatus err;

        // Get the default input device ID
        AudioDeviceID input_device_id = kAudioObjectUnknown;
        {
            UInt32 property_size = sizeof(input_device_id);
            AudioObjectPropertyAddress input_device_property = {
                kAudioHardwarePropertyDefaultInputDevice,
                kAudioObjectPropertyScopeGlobal,
                kAudioObjectPropertyElementMain,
            };
            err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
                                             &property_size, &input_device_id);
            if (err != noErr || input_device_id == kAudioObjectUnknown) {
                printf("Error getting default input device ID\n");
                exit(1);
            }
        }

        // Print the device name using the input device ID
        printDeviceName(input_device_id);

        // Open audio unit for the input device
        AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
                                          kAudioUnitManufacturer_Apple, 0, 0};
        AudioComponent component = AudioComponentFindNext(NULL, &desc);
        AudioUnit audioUnit;
        err = AudioComponentInstanceNew(component, &audioUnit);
        if (err != noErr) {
            printf("Error creating AudioUnit\n");
            exit(1);
        }

        // Enable IO for input on the AudioUnit and disable output
        UInt32 enableInput = 1;
        UInt32 disableOutput = 0;
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                                   1, &enableInput, sizeof(enableInput));
        if (err != noErr) {
            printf("Error enabling input on AudioUnit\n");
            exit(1);
        }
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                                   0, &disableOutput, sizeof(disableOutput));
        if (err != noErr) {
            printf("Error disabling output on AudioUnit\n");
            exit(1);
        }

        // Set the current device to the input device
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                                   kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
        if (err != noErr) {
            printf("Error setting device for AudioUnit\n");
            exit(1);
        }

        // Initialize AudioUnit
        err = AudioUnitInitialize(audioUnit);
        if (err != noErr) {
            printf("Error initializing AudioUnit\n");
            exit(1);
        }

        // Print device info
        printDeviceInfo(audioUnit);

        // Clean up
        AudioUnitUninitialize(audioUnit);
        AudioComponentInstanceDispose(audioUnit);
    }
    return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output.
Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.
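For reference, my reading of TN2091 is that for input the map belongs on the output scope of element 1, with one entry per client-side channel, so the call would be shaped something like this (sketched in Swift for brevity; the scope/element choice and the length rule are my assumptions from the tech note, and a map whose length doesn't match the element's client-side channel count would be one plausible source of the -50):

// Assumption from TN2091: for input, the channel map is set on the OUTPUT
// scope of element 1; each entry names a device channel, or -1 for silence.
var channelMap: [Int32] = [0, 0] // feed device channel 0 to both client channels
let status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_ChannelMap,
                                  kAudioUnitScope_Output,
                                  1, // the input element
                                  &channelMap,
                                  UInt32(MemoryLayout<Int32>.stride * channelMap.count))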
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts mono input into stereo input? Can I ask AUHAL to provide the same number of input channels that the audio device has?
After the update on iOS 18, FairPlay content does not play.
We get an error: CoreMediaErrorDomain Code=-12891.
This error occurs after sending a CKC message. What does this error mean? Everything works fine on iOS < 18.
Hi. When recording videos with AVAssetWriter, the capture fps (camera output fps) is OK, but the final video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says you need to set expectsMediaDataInRealTime to true, and so on.
I have really been tortured by this problem for a long time. How can I debug it? Any advice?
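For context, the pattern I understand the documentation to describe is to gate every append on isReadyForMoreMediaData and count what gets dropped, which at least makes the problem measurable (a sketch; writerInput and droppedFrameCount are placeholder names):

import AVFoundation

// writerInput: an AVAssetWriterInput configured for real-time video capture.
writerInput.expectsMediaDataInRealTime = true

var droppedFrameCount = 0 // hypothetical counter to quantify the stalls

func append(_ sampleBuffer: CMSampleBuffer) {
    // Appending while the input is not ready is an error; in real-time
    // capture the frame has to be dropped (or buffered) instead.
    guard writerInput.isReadyForMoreMediaData else {
        droppedFrameCount += 1
        return
    }
    writerInput.append(sampleBuffer)
}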
We are using the MediaPlayer API to provide CarPlay support. Starting in iOS 18, we are having issues updating the content list. The initial list of items will populate on a fresh instance, but soon thereafter an error will show up saying we are not entitled to "com.apple.mediaremote.external-artwork-validation". From that point onward, no changes we make to our MPPlayableContentDataSource are reflected in CarPlay, even after restarting the device.
While the MediaPlayer API is marked as deprecated, we are still using it to provide CarPlay support going back to iOS 10.
Has anyone else run into this or have suggestions for workarounds?
After upgrading to Xcode 16 RC, in an old Objective-C project, I directly used the following controller code (set up from the AppDelegate):
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    UIButton *b = [[UIButton alloc] initWithFrame:CGRectMake(100, 100, 44, 44)];
    [b setTitle:@"title" forState:UIControlStateNormal];
    [self.view addSubview:b];
    [b addTarget:self action:@selector(onB:) forControlEvents:UIControlEventTouchUpInside];
}

- (IBAction)onB:(id)sender {
    PHPickerConfiguration *config = [[PHPickerConfiguration alloc] initWithPhotoLibrary:PHPhotoLibrary.sharedPhotoLibrary];
    config.preferredAssetRepresentationMode = PHPickerConfigurationAssetRepresentationModeCurrent;
    config.selectionLimit = 1;
    config.filter = nil;
    PHPickerViewController *picker = [[PHPickerViewController alloc] initWithConfiguration:config];
    picker.modalPresentationStyle = UIModalPresentationFullScreen;
    picker.delegate = self;
    [self presentViewController:picker animated:true completion:nil];
}

- (void)picker:(PHPickerViewController *)picker didFinishPicking:(NSArray<PHPickerResult *> *)results {
}
Environment: Simulator iPhone 15 Pro (iOS18)
Before this version (iOS 17.4), tapping the button popped up the system photo picker normally (its top edge stayed within the safe-area guide), but now the top edge of the picker aligns directly with the top of the window, and tapping a photo cell is unresponsive.
If I create a new target using the same code, the photo picker page does not have the above problem.
Therefore, I suspect the old project's .proj file's Info.plist, build settings, or build phases may lack some default configuration key required by the new version (my project was built years ago, maybe on iOS 13 or earlier), but I cannot confirm the final cause.
iOS 18.0 also logs these additional messages:
objc[79039]: Class UIAccessibilityLoaderWebShared is implemented in both /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebCore.axbundle/WebCore (0x198028328) and /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebKit.axbundle/WebKit (0x1980fc398). One of the two will be used. Which one is undefined.
AX Safe category class 'SLHighlightDisambiguationPillViewAccessibility' was not found!
Has anyone encountered the same issue as me?
On Xcode 16 (16A242), app execution and the UI will stall / lag as soon as an AVAudioPlayer is initialized:
let audioPlayer = try AVAudioPlayer(contentsOf: url) // url: the sound file's URL
audioPlayer.volume = 1.0
audioPlayer.delegate = self
audioPlayer.prepareToPlay()
Typically you would not notice this in a music app, for example, but it is especially noticeable in games where multiple sounds are being played using multiple instances of AVAudioPlayer. The entire app slows down because of it.
This is similar to this issue from last year.
I have reported it to Apple in FB15144369, as this messes up my production games where fps goes down to nothing when sounds are enabled.
Unfortunately I cannot find a solution. Anyone?
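One mitigation that might at least keep the UI responsive (a sketch only, not a confirmed fix; soundURL and the players cache are hypothetical names) is to initialize and prepare players off the main thread:

import AVFoundation

// Hypothetical cache so each sound is only initialized once.
var players: [URL: AVAudioPlayer] = [:]

func preload(_ soundURL: URL) {
    Task.detached(priority: .userInitiated) {
        // AVAudioPlayer can be created off the main thread; if init is what
        // stalls, at least the stall no longer blocks rendering.
        guard let player = try? AVAudioPlayer(contentsOf: soundURL) else { return }
        player.prepareToPlay()
        await MainActor.run { players[soundURL] = player }
    }
}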
How can I search for a single song by using its song ID? I tried the following code but it's not working:
MusicCatalogResourceRequest(matching: Song.self, equalTo: song.id.rawValue)
I get the following errors in Xcode:
Cannot convert value of type 'Song.Type' to expected argument type 'KeyPath<MusicItemType.FilterType, String>'
Generic parameter 'MusicItemType' could not be inferred
I have the song ID saved as a string inside song, so I just want to grab the full MusicKit MusicItemType from the API. I looked at the documentation, but it doesn't make any sense to me.
Please help!
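From the error text, the initializer seems to want a key path into the song's filter type rather than the type itself, so I'd expect something shaped like this to compile (an untested sketch; whether \.id is a supported filter for Song here is my assumption):

import MusicKit

// Request one catalog song by its ID (the ID was saved earlier as a string).
// Run inside an async throwing context.
let request = MusicCatalogResourceRequest<Song>(matching: \.id,
                                                equalTo: MusicItemID(song.id.rawValue))
let response = try await request.response()
if let fetched = response.items.first {
    print("found: \(fetched.title)")
}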
Hello,
I am using the code below to request that a video be downloaded from iCloud, but the downloaded video's size does not match the original size of the video.
PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
options.version = PHVideoRequestOptionsVersionOriginal;
options.deliveryMode = PHVideoRequestOptionsDeliveryModeHighQualityFormat;
[options setNetworkAccessAllowed:YES];
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:options
                                          resultHandler:^(AVAsset *avAsset, AVAudioMix *audioMix, NSDictionary *info) {
}];
I get the original size of the video from the code below:
NSArray *resources = [PHAssetResource assetResourcesForAsset:asset];
for (PHAssetResource *resource in resources) {
    if ((resource.type == PHAssetResourceTypeVideo) || (resource.type == PHAssetResourceTypePhoto)) {
        return resource;
    }
}

[resource valueForKey:@"fileSize"]
The original size and the downloaded size of the video do not match. Can anyone help me debug what the issue is here?
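As a debugging cross-check (sketched in Swift; PHAssetResourceManager.requestData is a real API, but the byte counting is my own illustration), you could stream the original resource's bytes directly and compare the total with both the AVAsset's file size and the KVC "fileSize" value:

import Photos

// Download the original video resource's bytes and total them up.
func checkOriginalResourceSize(for asset: PHAsset) {
    let resources = PHAssetResource.assetResources(for: asset)
    guard let videoResource = resources.first(where: { $0.type == .video }) else { return }

    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true

    var byteCount = 0
    PHAssetResourceManager.default().requestData(for: videoResource,
                                                 options: options,
                                                 dataReceivedHandler: { data in
        byteCount += data.count
    }, completionHandler: { error in
        print("original resource bytes: \(byteCount), error: \(String(describing: error))")
    })
}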
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()

        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
First, when the player uses m3u8:
There should be a restart button on the player banner.
Also, sometimes the frame image does not update when seeking forward.
The same behavior is observed with Apple TV's default app.
In addition, the same issue occurs when playing Apple event videos.
Hi,
I’ve encountered a bug related to including tracks as a relationship in the playlists list. The issue arises when there is more than one playlist.
Specifically:
Single Playlist: The functionality works as expected.
Multiple Playlists: The application crashes.
Please let me know if you need additional information or if there are any updates on this issue.
Thank you!
curl --request GET \
--url 'https://api.music.apple.com/v1/me/library/playlists?include=tracks' \
--header 'Authorization: Bearer {token}' \
--header 'Music-User-Token: {token}'