The FxRemoteWindow API in FxPlug 4.3 does not provide window.frame.origin.
How can I create a window that can be positioned and sized above the Motion window?
Hello, I have a few apps that I use for screen recording/streaming, like OBS, as well as for capturing screens to project into a VR headset (Immersed), and they use ScreenCaptureKit to record full displays/all content.
But when capturing that display, some application windows or UI elements do not get captured. For example, in Microsoft Teams, when you begin a screen-sharing session you get a control-bar overlay to manage sharing options (stop, start, etc.); that particular element does not get captured by the recording app (though other MS Teams windows and all the other applications on screen do). Another app that has this problem is the CleanShot X screen-capture app, whose overlay UI elements don't get captured even though they are still on the physical screen. When using Mac displays in VR, this of course causes an issue where you can't see these particular CleanShot controls, yet they are there and intercepting mouse clicks/input.
What would cause only certain elements to not get captured when the recording app is telling ScreenCaptureKit not to exclude anything? And is there a property on these UI elements that the developer can "opt in" to so SCK picks them up? I am trying to figure out what feedback to give to developers of programs that have this issue, and whether it's possible for them to modify their apps to change this behavior.
Thanks!
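A hedged guess at the mechanism: on macOS, windows whose NSWindow.sharingType is .none are skipped by the screen-capture APIs, which would explain overlays that are visible on the physical display but absent from recordings. A minimal sketch of how an app developer could opt such a window back in (the overlay window itself is hypothetical):

import AppKit

// Windows created with sharingType == .none are excluded from screen capture.
// Setting .readOnly lets capture APIs see the window's contents.
let overlayWindow = NSWindow(
    contentRect: NSRect(x: 0, y: 0, width: 400, height: 60),
    styleMask: .borderless,
    backing: .buffered,
    defer: false
)
overlayWindow.sharingType = .readOnly  // .none would hide it from capture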
During testing, I encountered an issue with SharePlay. Since SharePlay necessitates multi-device testing, I intend to use my Mac and Vision Pro. However, since these two devices are also my primary devices, I am reluctant to switch Apple IDs for testing purposes; I would like to test with my original Apple ID instead. But because both devices belong to the same Apple ID and rely on the same phone number, they are unable to FaceTime each other. I am at a loss as to how to proceed.
- (BOOL)renderDestinationImage:(FxImageTile *)destinationImage
                  sourceImages:(NSArray<FxImageTile *> *)sourceImages
                   pluginState:(NSData *)pluginState
                        atTime:(CMTime)renderTime
                         error:(NSError * _Nullable *)outError
{
    // ... other code ...
    id<MTLTexture> sourceTexture = [sourceImages[0] metalTextureForDevice:[deviceCache deviceWithRegistryID:deviceRegistryID]];
    // ... other code ...

    // Clean up
    [commandEncoder endEncoding];
    [commandBuffer commit];
    [commandBuffer waitUntilScheduled];
    [colorAttachmentDescriptor release];
    [deviceCache returnCommandQueueToCache:commandQueue];

    // Keep a reference to the source texture so it can be drawn into an MTKView later
    self.texture = [sourceImages[0] metalTextureForDevice:[deviceCache deviceWithRegistryID:deviceRegistryID]];
    return YES;
}
When I render self.texture into an MTKView, the image displayed in Motion comes out very blurry.
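A hedged guess, not the poster's code: blurriness like this often comes from a size mismatch, where the MTKView's drawable is sized in view points rather than matching the texture's pixel dimensions, so the image is downscaled and then scaled back up by the host. A minimal Swift sketch of that mitigation:

import MetalKit

// Match the drawable's pixel size to the source texture so Metal
// does not resample the image when presenting it.
func configure(_ view: MTKView, for texture: MTLTexture) {
    view.autoResizeDrawable = false  // stop MTKView from tracking the view's point size
    view.drawableSize = CGSize(width: texture.width, height: texture.height)
    view.framebufferOnly = false     // allow copying the texture into the drawable
}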
I downloaded iOS 18. Thank you very much, but in Switzerland we have TWINT payments and they don't work.
Since updating to iOS 18, my Screen Time passcode does not work and I am unable to give myself more time in apps. I am also unable to modify any of the Screen Time settings; if I go to Settings and tap Screen Time, the Settings app freezes and I need to close the window.
I have checked for updates and I have reset my phone to default settings. It is still not working. Help!
I have a macOS app that uses screen-capture logic. It was originally coded using the Quartz CG API:
if let cgimage = CGDisplayCreateImage(CGMainDisplayID(), rect: cgRect) {...}
and this worked as expected, even when capturing a screen rect that began on the secondary monitor and ended on the primary monitor (or was entirely contained on the secondary monitor).
However, that API is now deprecated and you're supposed to use ScreenCaptureKit instead, so I have attempted to convert the code. The trial code is:
let scConfig = SCStreamConfiguration()
scConfig.sourceRect = drect
scConfig.width = Int(drect.width)
scConfig.height = Int(drect.height)
SCScreenshotManager.captureImage(contentFilter: sFilter, configuration: scConfig) { image, error in
    if let cgim = image {
        print("image dims \(cgim.width), \(cgim.height), requested: \(drect)")
        self.writeToFile2(cgim)
    } else {
        print("SCREEN CAP failed")
    }
}
...
where sFilter was previously set based on the main display (with no exclusions). This code also "works" as long as the capture rect is entirely on the primary monitor, but it fails if the rect spans both monitors or is fully contained on the secondary monitor. (By "fails" I mean it produces an empty image.)
So my question is: how do I use ScreenCaptureKit to obtain a screenshot of a rectangle that spans two monitors?
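One workaround sketch, under the assumption that an SCContentFilter is scoped to a single display: capture the intersection of the target rect with each display separately, then stitch the pieces into one bitmap. The function and coordinate handling below are illustrative, not confirmed as the intended API usage:

import ScreenCaptureKit
import CoreGraphics

// Capture a rect given in global display coordinates by compositing
// per-display screenshots into a single CGImage.
func captureSpanning(globalRect: CGRect) async throws -> CGImage? {
    let content = try await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    guard let context = CGContext(
        data: nil,
        width: Int(globalRect.width), height: Int(globalRect.height),
        bitsPerComponent: 8, bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) else { return nil }

    for display in content.displays {
        let piece = display.frame.intersection(globalRect)
        guard !piece.isNull, !piece.isEmpty else { continue }

        let config = SCStreamConfiguration()
        // sourceRect is expressed in the display's own coordinate space.
        config.sourceRect = CGRect(
            x: piece.origin.x - display.frame.origin.x,
            y: piece.origin.y - display.frame.origin.y,
            width: piece.width, height: piece.height)
        config.width = Int(piece.width)
        config.height = Int(piece.height)

        let filter = SCContentFilter(display: display, excludingWindows: [])
        let image = try await SCScreenshotManager.captureImage(
            contentFilter: filter, configuration: config)

        // CGContext's origin is bottom-left, so flip the vertical placement.
        let dest = CGRect(
            x: piece.origin.x - globalRect.origin.x,
            y: globalRect.maxY - piece.maxY,
            width: piece.width, height: piece.height)
        context.draw(image, in: dest)
    }
    return context.makeImage()
}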
So I have been using the iOS 18 beta, and I would like to say first and foremost: really great update. I love the timed messages and the car-sickness stuff; that's awesome. But my only problem with the update is Screen Time. I have Screen Time on my phone, and since updating to the iOS 18 beta I can't seem to claim more screen time. I have Screen Time set so that I can ignore the limit once I run out, but in the new update I can press the button all I want and it won't give me more screen time. This is, I imagine, a minor fix, and I would really appreciate it if it were fixed soon. Other than that small detail, amazing update; love what you're doing over there at Apple!
It seems that there’s still no way to get all TIFF tags from a TIFF image, is that right? I've got these GeoTIFF images that have a handful of specialized TIFF tags in them. Calling CGImageSourceCopyPropertiesAtIndex(), I can see basic properties common to all TIFF images, like dimensions and color/pixel information, but no others.
Short of including libtiff, is there another way to get at the metadata? I've tried all of the options in CGImageSourceCopyAuxiliaryDataInfoAtIndex.
I've filed a few bug reports about this since 2020, all ignored.
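For reference, a minimal sketch of the kind of query described above; it surfaces the standard {TIFF} dictionary (dimensions, resolution, and so on) but, as noted, none of the specialized GeoTIFF tags:

import ImageIO
import Foundation

// Read the image properties Image I/O exposes for the first image in the file.
func dumpTIFFProperties(url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
            as? [CFString: Any] else { return }
    // Only the common TIFF keys appear here; the GeoTIFF tags are absent.
    print(props[kCGImagePropertyTIFFDictionary] ?? "no TIFF dictionary")
}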
I run a tracking company and have my own tracking platform. I am looking for a solution that uses a tag device for animals, like AirTags, but running on my platform.
Is there a way to allow my platform to interface with the Find My network to get the location data of my tags?
Good morning,
I'm trying to use MusicKit functionality to get the last played songs and put them into a local DB, to be played later. Following the guide on developer.apple.com, I created the required App Services integration.
Below is a minimal working version of what I'm doing:
func requestMusicAuthorization() async {
    let status = await MusicAuthorization.request()
    switch status {
    case .authorized:
        isAuthorizedForMusicKit = true
        error = nil
    case .restricted:
        error = NSError(domain: "Music access is restricted", code: -10)
    case .notDetermined:
        break
    case .denied:
        error = NSError(domain: "Music access is denied", code: -10)
    @unknown default:
        break
    }
}
On the SwiftUI ContentView there's something like this:
.onAppear {
    Task {
        await requestMusicAuthorization()
        if MusicManager.shared.isAuthorizedForMusicKit {
            do {
                let request = MusicRecentlyPlayedRequest<Song>()
                let response = try await request.response()
                let songs: [Song] = response.items.map { $0 }
                // do some CloudKit handling with songs...
                print("Recent songs: \(songs)")
            } catch {
                NSLog(error.localizedDescription)
            }
        }
    }
}
Everything seems to work fine, but my console log is full of garbage like this:
MSVEntitlementUtilities - Process MyMusicApp PID[33633] - Group: (null) - Entitlement: com.apple.accounts.appleaccount.fullaccess - Entitled: NO - Error: (null)
Attempted to register account monitor for types client is not authorized to access: {(
"com.apple.account.iTunesStore"
)}
Is there something I'm missing? Should I ignore those messages and go forward with my implementation? Any help is really appreciated.
I have recently downloaded iOS 18, and my phone is on dark mode, but some things are in white/light mode. Is there any way to fix this? I'll post some screenshots showing my problem.
How do I add multiple resolutions to a CMIO camera extension?
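A minimal sketch of one plausible approach, assuming a CMIOExtensionStreamSource implementation: the stream advertises one CMIOExtensionStreamFormat per supported resolution via its formats property, and the host selects among them through activeFormatIndex. The sizes and frame rate below are illustrative:

import CoreMediaIO
import CoreMedia
import CoreVideo

// Build one stream format per supported resolution.
func makeFormats() -> [CMIOExtensionStreamFormat] {
    let sizes: [(width: Int, height: Int)] = [(1920, 1080), (1280, 720), (640, 480)]
    return sizes.compactMap { size in
        var description: CMFormatDescription?
        CMVideoFormatDescriptionCreate(
            allocator: kCFAllocatorDefault,
            codecType: kCVPixelFormatType_32BGRA,
            width: Int32(size.width), height: Int32(size.height),
            extensions: nil,
            formatDescriptionOut: &description)
        guard let description else { return nil }
        let frameDuration = CMTime(value: 1, timescale: 30)  // 30 fps
        return CMIOExtensionStreamFormat(
            formatDescription: description,
            maxFrameDuration: frameDuration,
            minFrameDuration: frameDuration,
            validFrameDurations: nil)
    }
}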
I have an IOSurface and I want to turn it into a CIImage. However, the constructor of CIImage takes an IOSurfaceRef instead of an IOSurface.
On most platforms, this is not an issue because the two types are toll-free bridgeable... except for Mac Catalyst, where this fails.
I observed the same back in Xcode 13 on macOS, but there I could force-cast the IOSurface to an IOSurfaceRef:
let image = CIImage(ioSurface: surface as! IOSurfaceRef)
This cast fails at runtime on Catalyst.
I found that unsafeBitCast(surface, to: IOSurfaceRef.self) actually works on Catalyst, but it feels very wrong.
Am I missing something? Why aren't the types bridgeable on Catalyst?
Also, there should ideally be an init for CIImage that takes an IOSurface instead of a ref.
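In the meantime, the workaround above can at least be collected in one place. A sketch, with the caveat that unsafeBitCast relies on the assumption that IOSurface and IOSurfaceRef share the same underlying object, which is exactly what the toll-free bridge is supposed to guarantee:

import CoreImage
import IOSurface

extension CIImage {
    // Stopgap for Mac Catalyst, where the runtime cast fails:
    // reinterpret the IOSurface as an IOSurfaceRef without a cast check.
    convenience init(bridging surface: IOSurface) {
        self.init(ioSurface: unsafeBitCast(surface, to: IOSurfaceRef.self))
    }
}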
I am looking to do OOB (out-of-band) pairing using a QR code with a device from an iOS app. I referred to the documentation but could not find whether this is feasible. Some forum posts say it is not feasible; others say it is. May I know the latest status from the Apple developer support team?
Hello Apple,
I am concerned about the new iOS Screen Mirroring feature that is available in iOS.
I have an app that is only meant to be viewed on iPhones (not Macs or other computers, for security reasons).
I am assuming that Screen Mirroring uses AirPlay underneath. Is there an API planned or coming that can disable this functionality, or is there a way for my app to opt out of iOS Screen Mirroring?
Thanks.
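I'm not aware of a public opt-out, but one commonly suggested mitigation is to detect when the screen is being mirrored or recorded and redact sensitive content for the duration. A minimal sketch using UIScreen's capture state (the monitor class is hypothetical):

import UIKit

// Observe changes to the screen's captured state (mirroring, recording, AirPlay).
final class CaptureMonitor {
    var onChange: ((Bool) -> Void)?

    init() {
        NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.onChange?(UIScreen.main.isCaptured)
        }
    }
}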
Hi everyone, I'm currently developing an iOS app using React Native and recently got accepted into the Apple Music Global Affiliate Program. To fully utilize this opportunity, I need to implement the following functionalities:
Authorize Apple Music usage
Play Apple Music within my app
Identify if a user has an Apple Music subscription
Initiate and complete Apple Music subscription within my app
I've successfully implemented the first three functionalities using the react-native-apple-music module. Now, I need your help to understand how I can directly trigger the Apple Music subscription process from within my app.
Thank you for your help!
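On the native side, one way to present the subscription flow is StoreKit's SKCloudServiceSetupViewController, which also accepts affiliate and campaign tokens; you would need to bridge a call like the following into React Native (the function name and token value are placeholders):

import StoreKit
import UIKit

// Present Apple Music's subscription offer sheet.
func presentSubscriptionOffer(from presenter: UIViewController) {
    let setup = SKCloudServiceSetupViewController()
    let options: [SKCloudServiceSetupOptionsKey: Any] = [
        .action: SKCloudServiceSetupAction.subscribe,
        .affiliateToken: "YOUR_AFFILIATE_TOKEN",  // placeholder
    ]
    setup.load(options: options) { didSucceed, error in
        if didSucceed {
            presenter.present(setup, animated: true)
        } else if let error {
            print("Setup failed: \(error)")
        }
    }
}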
I'm working on a macOS application that captures audio and video. When the user selects a video capture source (most likely an Elgato box), I would like the application to automatically select the audio input from the same device. I was achieving this by pairing video and audio sources that had the same name, but this doesn't work when the user plugs in two capture devices of the same make and model.
With the command system_profiler SPUSBDataType I can list all the USB devices, and I can see that the two Elgato boxes have different serial numbers. If I could find this serial number, I could figure out which AVCaptureDevices come from the same hardware.
Is there a way to get the manufacturer's serial number from the AVCaptureDevice object? Or a way to identify the USB device for an AVCaptureDevice, and from there I could get the serial or some other unique ID?
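As far as I know, AVCaptureDevice does not expose a serial number directly, but IOKit can enumerate USB devices and read theirs. A hedged sketch of the enumeration half; correlating an entry back to a specific AVCaptureDevice is the remaining problem (the device's modelID or location ID are possible join keys):

import IOKit
import IOKit.usb

// Map USB product names to serial numbers by walking the IORegistry.
func usbSerialNumbers() -> [String: String] {
    var result: [String: String] = [:]
    var iterator: io_iterator_t = 0
    let matching = IOServiceMatching("IOUSBHostDevice")
    guard IOServiceGetMatchingServices(kIOMainPortDefault, matching,
                                       &iterator) == KERN_SUCCESS else { return result }
    while true {
        let device = IOIteratorNext(iterator)
        if device == 0 { break }
        func property(_ key: String) -> String? {
            IORegistryEntryCreateCFProperty(
                device, key as CFString, kCFAllocatorDefault, 0)?
                .takeRetainedValue() as? String
        }
        if let name = property("USB Product Name"),
           let serial = property("USB Serial Number") {
            result[name] = serial
        }
        IOObjectRelease(device)
    }
    IOObjectRelease(iterator)
    return result
}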
Hello,
We have a music app that reads MPMediaItems.
We get items using MPMediaQuery, but we realized that some tracks downloaded from Apple Music were fetched too; not all downloaded tracks, only those that were played recently.
Of course, since these tracks are protected with DRM, we can't play them in our player.
It's weird to get them in our query, because we added predicates so as not to fetch protected assets and iCloud items:
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyHasProtectedAsset)
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyIsCloudItem)
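(For context, a sketch of roughly how such predicates attach to a query; the app's actual query may differ:)

import MediaPlayer

// Filter out cloud items and DRM-protected assets up front.
let query = MPMediaQuery.songs()
query.addFilterPredicate(MPMediaPropertyPredicate(
    value: false, forProperty: MPMediaItemPropertyHasProtectedAsset))
query.addFilterPredicate(MPMediaPropertyPredicate(
    value: false, forProperty: MPMediaItemPropertyIsCloudItem))
let items = query.items ?? []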
To be sure, we make a second check on each item we've fetched:
extension MPMediaItem {
    public func isValid() -> Bool {
        return self.assetURL != nil && !self.isCloudItem && !self.hasProtectedAsset
    }
}
But we still get these items, and their hasProtectedAsset attribute always returns false.
I don't know if it's a bug, but since we can't detect these items as Apple Music downloaded tracks, we can't either:
filter them out so they aren't added to our application library
OR
switch to an MPMusicPlayerController.applicationMusicPlayer to allow the user to play them
We have an app with a broadcast extension that uses an RPBroadcastSampleHandler. The implementation is working fine; however, for quite a few users the extension suddenly crashes during the broadcast.
The stack trace of the crashing thread always looks like the shortened sample below. (Full crash reports and stack traces are attached to the submitted Feedbacks.) Looking at the stack trace, none of our code is running; just ReplayKit code handling XPC messages at that moment:
Thread:
#0 0x00000001e2cf342c in __pthread_kill ()
#1 0x00000001f6a51c0c in pthread_kill ()
#2 0x00000001a1bfaba0 in abort ()
#3 0x00000001a9e38588 in malloc_vreport ()
#4 0x00000001a9e35430 in malloc_zone_error ()
[...]
#18 0x0000000218ac91bc in -[RPBroadcastSampleHandler processPayload:completion:] ()
#19 0x0000000198b81360 in __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT_S2__ ()
Is anyone aware of these issues with ReplayKit? Are there known workarounds? Could anything we're doing be contributing to crashes like this?
Would greatly appreciate it if anyone from Apple DTS could look into this and flag the below Feedbacks to the relevant teams!
Feedback IDs: FB13949098, FB13949188