Post not yet marked as solved
When using AVCaptureVideoDataOutput/AVCaptureAudioDataOutput and AVAssetWriter to record video with cinematic extended video stabilization, the audio lags the video by up to 1–1.5 seconds, and as a result video playback in the recording is frozen for the last 1–1.5 seconds. This does not happen when using AVCaptureMovieFileOutput. Can this be fixed, or is there a workaround to synchronize the audio and video frames? How does AVCaptureMovieFileOutput handle it?
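One commonly suggested workaround, sketched here with plain values rather than real buffers (Sample and alignedSamples are hypothetical names, and whether this matches what AVCaptureMovieFileOutput does internally is an assumption): since stabilization delays video buffers relative to audio, start the writer session at the presentation timestamp of the first video buffer via startSession(atSourceTime:) and drop audio buffers that precede it.

```swift
import Foundation

// Hypothetical model of just the timing logic; in a real capture pipeline
// these would be CMSampleBuffers and the timestamps would come from
// CMSampleBufferGetPresentationTimeStamp.
struct Sample {
    let pts: Double      // presentation timestamp in seconds
    let isVideo: Bool
}

// Start the AVAssetWriter session at the PTS of the first video buffer
// (startSession(atSourceTime:)) and drop audio buffers that precede it,
// so both tracks begin together.
func alignedSamples(_ samples: [Sample]) -> (sessionStart: Double, kept: [Sample])? {
    guard let firstVideoPTS = samples.first(where: { $0.isVideo })?.pts else {
        return nil
    }
    let kept = samples.filter { $0.isVideo || $0.pts >= firstVideoPTS }
    return (firstVideoPTS, kept)
}
```

This only trims the leading mismatch; it does not fix drift within the recording, which would need per-buffer timestamps carried through unchanged.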
I have a CoreMediaIO-based DAL plugin written in Swift which currently polls a website to get a string. That is not a good approach.
I want to send that string to the DAL plugin via operating-system-supported IPC (inter-process communication).
But there are many ways to do IPC on macOS, such as:
Apple Events
Distributed Notifications in Cocoa
BSD Notifications
Transferring Raw Data With CFMessagePort
Communicating With BSD Sockets
Communicating With BSD Pipes
In my case I just want one-way communication from an application to the DAL plugin.
I am new to macOS development, so I am not sure which approach will be the most efficient and best for my case of one-way communication from the application to the DAL plugin.
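For one-way, string-sized messages, CFMessagePort is often the simplest of the options above, but the byte-level mechanics of any of them look roughly like this minimal, same-process sketch using a BSD pipe (sendOneWay is a hypothetical name; a real app-to-plugin setup would additionally need a rendezvous point, such as a named message port or FIFO, since the two ends live in different processes):

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif
import Foundation

// Same-process demo of one-way messaging over a BSD pipe: the "application"
// writes a UTF-8 string into one end; the "plugin" reads it from the other.
func sendOneWay(_ message: String) -> String? {
    var fds: [Int32] = [0, 0]
    guard pipe(&fds) == 0 else { return nil }
    let readEnd = fds[0]
    let writeEnd = fds[1]
    defer { close(readEnd) }

    // Application side: write the bytes, then close to signal EOF.
    let bytes = Array(message.utf8)
    _ = bytes.withUnsafeBufferPointer { write(writeEnd, $0.baseAddress, $0.count) }
    close(writeEnd)

    // Plugin side: read until EOF and decode.
    var received = [UInt8]()
    var buffer = [UInt8](repeating: 0, count: 256)
    while true {
        let n = read(readEnd, &buffer, buffer.count)
        if n <= 0 { break }
        received.append(contentsOf: buffer[0..<n])
    }
    return String(bytes: received, encoding: .utf8)
}
```

The trade-off across the listed options is mostly about rendezvous and delivery guarantees: distributed/BSD notifications are fire-and-forget with no payload guarantees, while CFMessagePort, sockets, and pipes deliver the actual bytes.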
The CoreMediaIO Device Abstraction Layer (DAL) is analogous to CoreAudio’s Hardware Abstraction Layer (HAL). Just as the HAL deals with audio streams from audio hardware, the DAL handles video (and muxed) streams from video devices.
DAL plugins reside at /Library/CoreMediaIO/Plug-Ins/DAL/
What is the life cycle of these DAL plugins?
When do they start running?
When do they get stopped?
When do they get paused?
Where can I see their logs?
What happens when they are not in use?
How can I check their performance to see whether they are efficient?
A well-known example of a CoreMediaIO DAL plugin, for anyone unfamiliar, is the OBS Virtual Camera.
Note: This question should not be marked as too broad. I am not asking multiple questions; it's a single question about the life cycle of a CoreMediaIO DAL plugin.
In my application I am capturing window using CGWindowListCreateImage
let windowID = 12345
let windowImage = CGWindowListCreateImage(.null, .optionIncludingWindow, CGWindowID(windowID), [.bestResolution, .boundsIgnoreFraming])
This is working nicely.
How can I capture window with cursor using this approach?
I'm facing a strange issue on two devices, an iPhone 6s and a 12 mini: there is no audio from media playback when the ringer is off. Is this a device-specific issue, a settings issue, an OS issue, or ultimately an app issue? Playback works fine in YouTube and other media apps.
I’m using AVFoundation for image capture using camera on iPad.
But I’m not using Video or Audio related functionality.
It looks like with AVFoundation, CoreMedia, CoreVideo, and CoreAudio are also imported into any project.
Is there any way I can remove these libraries (CoreMedia, CoreVideo, and CoreAudio) from my app?
I have used otool to list all the frameworks and libraries being used by my framework.
I’m using AVFoundation to access camera on iPad.
But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo.
Keeping privacy concerns in mind, is there any way by which I can ensure that the app is never able to access Microphone or Video Recording?
I'm using a few webcams that previously worked on both M1 and Intel MacBooks; testing them on the new MacBook Pro (2021), the UVC streams are not showing up across apps.
Here's my setup:
I tested the Anker, OBSBot, and an Opal, and I'm not seeing any of them show up as UVC streams.
E.g., the Anker shows up in System Information > USB, but it does not show up in UVC apps like Zoom.
Running
system_profiler SPCameraDataType -json
returns just:
{
  "SPCameraDataType" : [
    {
      "_name" : "FaceTime HD Camera",
      "spcamera_model-id" : "FaceTime HD Camera",
      "spcamera_unique-id" : "47B4B64B70674B9CAD2BAE273A71F4B5"
    }
  ]
}
Given an AVAsset, I'm performing a Vision trajectory request on it and would like to write out a video asset that only contains frames with trajectories (filter out downtime in sports footage where there's no ball moving).
I'm unsure what would be a good approach, but as a starting point I tried the following pipeline:
Copy sample buffer from the source AVAssetReaderOutput.
Perform trajectory request on a vision handler parameterized by the sample buffer.
For each resulting VNTrajectoryObservation (trajectory detected), use its associated CMTimeRange to configure a new AVAssetReader set to that time range.
Append the time range constrained sample buffer to one AVAssetWriterInput until the forEach is complete.
In code:
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor,
                                           completionHandler: @escaping () -> Void) {
    /*
     The writerInput continuously invokes this closure until finished or
     cancelled. It throws an NSInternalInconsistencyException if called more
     than once for the same writer.
     */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false
        /*
         While the writerInput accepts more data, process the sampleBuffer
         and then transfer the processed sample to the writerInput.
         */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }
            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }
            let visionHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: self.orientation, options: [:])
            do {
                try visionHandler.perform([self.detectTrajectoryRequest])
                if let results = self.detectTrajectoryRequest.results {
                    try results.forEach { result in
                        let assetReader = try AVAssetReader(asset: self.asset)
                        assetReader.timeRange = result.timeRange
                        let trackOutput = AVTrackOutputs.firstTrackOutput(ofType: .video, fromTracks: self.asset.tracks,
                                                                          withOutputSettings: nil)
                        assetReader.add(trackOutput)
                        assetReader.startReading()
                        guard let sampleBuffer = trackOutput.copyNextSampleBuffer() else {
                            // The asset reader output has no more samples to vend.
                            isDone = true
                            return
                        }
                        // Append the sample to the asset writer input.
                        guard writerInput.append(sampleBuffer) else {
                            /*
                             The writer could not append the sample buffer.
                             The `readingAndWritingDidFinish()` function handles any
                             error information from the asset writer.
                             */
                            isDone = true
                            return
                        }
                    }
                }
            } catch {
                print(error)
            }
        }
        if isDone {
            /*
             Calling `markAsFinished()` on the asset writer input does the
             following:
             1. Unblocks any other inputs needing more samples.
             2. Cancels further invocations of this "request media data"
                callback block.
             */
            writerInput.markAsFinished()
            /*
             Tell the caller the reader output and writer input finished
             transferring samples.
             */
            completionHandler()
        }
    }
}
private func readingAndWritingDidFinish(assetReaderWriter: AVAssetReaderWriter,
                                        completionHandler: @escaping FinishHandler) {
    if isCancelled {
        completionHandler(.success(.cancelled))
        return
    }
    // Handle any error during processing of the video.
    guard sampleTransferError == nil else {
        assetReaderWriter.cancel()
        completionHandler(.failure(sampleTransferError!))
        return
    }
    // Evaluate the result of reading the samples.
    let result = assetReaderWriter.readingCompleted()
    if case .failure = result {
        completionHandler(result)
        return
    }
    /*
     Finish writing, and asynchronously evaluate the results from writing
     the samples.
     */
    assetReaderWriter.writingCompleted { result in
        completionHandler(result)
        return
    }
}
When run, I get the following: no error is caught in the first catch clause, none are caught in private func readingAndWritingDidFinish(assetReaderWriter: AVAssetReaderWriter, completionHandler: @escaping FinishHandler), and the completion handler is called.
Help with any of the following questions would be appreciated:
What is causing what appears to be indefinite loading?
How might I isolate the problem further?
Am I misusing or misunderstanding how to selectively read from time ranges of AVAssetReader objects?
Should I forgo the AVAssetReader / AVAssetWriter route entirely, and use the time ranges with AVAssetExportSession instead? I don't know how the two approaches compare, or what to consider when choosing between the two.
Modifying guidance given in an answer on AVFoundation + Vision trajectory detection, I'm instead saving time ranges of frames that have a specific ML label from my custom action classifier:
private lazy var detectHumanBodyPoseRequest: VNDetectHumanBodyPoseRequest = {
    let detectHumanBodyPoseRequest = VNDetectHumanBodyPoseRequest(completionHandler: completionHandler)
    return detectHumanBodyPoseRequest
}()
var timeRangesOfInterest: [Int : CMTimeRange] = [:]
private func readingAndWritingDidFinish(assetReaderWriter: AVAssetReaderWriter,
                                        completionHandler: @escaping FinishHandler) {
    if isCancelled {
        completionHandler(.success(.cancelled))
        return
    }
    // Handle any error during processing of the video.
    guard sampleTransferError == nil else {
        assetReaderWriter.cancel()
        completionHandler(.failure(sampleTransferError!))
        return
    }
    // Evaluate the result of reading the samples.
    let result = assetReaderWriter.readingCompleted()
    if case .failure = result {
        completionHandler(result)
        return
    }
    /*
     Finish writing, and asynchronously evaluate the results from writing
     the samples.
     */
    assetReaderWriter.writingCompleted { result in
        self.exportVideoTimeRanges(timeRanges: self.timeRangesOfInterest.map { $0.value }) { result in
            completionHandler(result)
        }
    }
}
func exportVideoTimeRanges(timeRanges: [CMTimeRange], completion: @escaping (Result<OperationStatus, Error>) -> Void) {
    let inputVideoTrack = self.asset.tracks(withMediaType: .video).first!
    let composition = AVMutableComposition()
    let compositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    var insertionPoint: CMTime = .zero
    for timeRange in timeRanges {
        try! compositionTrack.insertTimeRange(timeRange, of: inputVideoTrack, at: insertionPoint)
        insertionPoint = insertionPoint + timeRange.duration
    }
    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    try? FileManager.default.removeItem(at: self.outputURL)
    exportSession.outputURL = self.outputURL
    exportSession.outputFileType = .mov
    exportSession.exportAsynchronously {
        var result: Result<OperationStatus, Error>
        switch exportSession.status {
        case .completed:
            result = .success(.completed)
        case .cancelled:
            result = .success(.cancelled)
        case .failed:
            // The `error` property is non-nil in the `.failed` status.
            result = .failure(exportSession.error!)
        default:
            fatalError("Unexpected terminal export session status: \(exportSession.status).")
        }
        print("export finished: \(exportSession.status.rawValue) - \(exportSession.error)")
        completion(result)
    }
}
This worked fine with results vended from Apple's trajectory detection, but using my custom action classifier TennisActionClassifier (Core ML model exported from Create ML), I get the console error getSubtractiveDecodeDuration signalled err=-16364 (kMediaSampleTimingGeneratorError_InvalidTimeStamp) (Decode timestamp is earlier than previous sample's decode timestamp.) at MediaSampleTimingGenerator.c:180. Why might this be?
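One possible cause (an assumption, not confirmed): timeRangesOfInterest is a dictionary, so .map { $0.value } yields the ranges in no particular order, and inserting out-of-order or overlapping ranges into the composition can make a sample's decode timestamp land earlier than the previous one, which matches the error text. A sketch of a pre-insertion fix, modeled with plain seconds instead of CMTimeRange (Range1D and sortedMergedRanges are hypothetical names):

```swift
import Foundation

// Model of a time range in seconds; the real code would use CMTimeRange and
// CMTimeRangeGetUnion instead of Double arithmetic.
struct Range1D {
    var start: Double
    var duration: Double
    var end: Double { start + duration }
}

// Sort the ranges by start time, then merge any that overlap or touch, so the
// composition is built from strictly increasing, non-overlapping ranges.
func sortedMergedRanges(_ ranges: [Range1D]) -> [Range1D] {
    let sorted = ranges.sorted { $0.start < $1.start }
    var merged: [Range1D] = []
    for range in sorted {
        if var last = merged.last, range.start <= last.end {
            // Overlapping or adjacent: extend the previous range.
            last.duration = max(last.end, range.end) - last.start
            merged[merged.count - 1] = last
        } else {
            merged.append(range)
        }
    }
    return merged
}
```

Running the trajectory results through the same code may simply have worked because those ranges happened to be disjoint and keyed in increasing order.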
After creating a custom action classifier in Create ML, previewing it (see the bottom of the page) with an input video shows the label associated with a segment of the video. What would be a good way to store the duration for a given label, say, each CMTimeRange of a segment of video frames classified as containing "Jumping Jacks"?
I previously found that storing time ranges of trajectory results was convenient, since each VNTrajectoryObservation vended by Apple had an associated CMTimeRange.
However, using my custom action classifier instead, each VNObservation result's CMTimeRange has a duration value that's always 0.
func completionHandler(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNHumanBodyPoseObservation] else {
        return
    }
    if let result = results.first {
        storeObservation(result)
    }
    do {
        for result in results where try self.getLastTennisActionType(from: [result]) == .playing {
            var fileRelativeTimeRange = result.timeRange
            fileRelativeTimeRange.start = fileRelativeTimeRange.start - self.assetWriterStartTime
            self.timeRangesOfInterest[Int(fileRelativeTimeRange.start.seconds)] = fileRelativeTimeRange
        }
    } catch {
        print("Unable to perform the request: \(error.localizedDescription).")
    }
}
In this case I'm interested in frames with the label "Playing" and successfully classify them, but I'm not sure where to go from here to track the duration of video segments with consecutive frames that have that label.
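Since each per-frame observation carries only an instantaneous timestamp, one way forward (a sketch under the assumption that observations arrive in, or can be sorted into, presentation order; times are modeled as plain seconds rather than CMTime, and Segment, segments, and maxGap are hypothetical names) is to walk the labeled frames and extend an open segment while consecutive frames keep the target label:

```swift
import Foundation

// A contiguous run of frames with the same label, in seconds.
struct Segment {
    var start: Double
    var end: Double
    var duration: Double { end - start }
}

// Coalesce per-frame (timestamp, label) observations into segments for one
// target label. `maxGap` tolerates small holes (dropped frames, misses).
func segments(for target: String,
              in observations: [(time: Double, label: String)],
              maxGap: Double = 0.5) -> [Segment] {
    var result: [Segment] = []
    var current: Segment? = nil
    for obs in observations.sorted(by: { $0.time < $1.time }) {
        if obs.label == target {
            if var seg = current, obs.time - seg.end <= maxGap {
                seg.end = obs.time          // extend the open segment
                current = seg
            } else {
                if let seg = current { result.append(seg) }
                current = Segment(start: obs.time, end: obs.time)
            }
        } else {
            if let seg = current { result.append(seg) }
            current = nil
        }
    }
    if let seg = current { result.append(seg) }
    return result
}
```

In the real pipeline each Segment's start/end would be kept as CMTimes, giving exactly the CMTimeRanges the previous approach stored per trajectory.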
Version: 14.x
Setting videoGravity on AVSampleBufferDisplayLayer does not work; however, when I change the bounds of the layer, it works. Is this a bug or a feature?
In iOS 15, this property worked well.
Hi there,
I used AVAssetWriter to export an mp4 file; the input audio and video samples are encoded data (AAC and H.264 samples). Sometimes it works well, but sometimes it fails randomly with error code -11800 and underlying code -17771 after calling finishWritingWithCompletionHandler. The log looks like this:
AVAssetWriter status: Failed, status:err:Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-17771), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x60400008e1d0 {Error Domain=NSOSStatusErrorDomain Code=-17771 "(null)"}}, status=3
And I notice these logs on Console.app just before the error message:
default 17:35:08.380220+0800 MyAPP <<<< FigExportCommmon >>>> remakerFamily_formatWriterErrorOccurred: (0x6140001ba240)
default 17:35:08.380561+0800 MyAPP <<<< FigExportCommmon >>>> remakerFamily_PostFailureNotificationIfError: (0x6140001ba240) state finish writing called
default 17:35:08.381010+0800 MyAPP <<<< FigExportCommmon >>>> remakerfamily_setFailureStatusIfError: (0x6140001ba240) err -17771 state finish writing called
I don't know why this happens; any replies are appreciated. Thanks.
Hi,
I'm currently looking for a way to render WebVTT subtitles while using AVSampleBufferDisplayLayer. A little background on the issue.
In order to comply with some client rules (for play/pause/seek) and to support Picture in Picture, I had to abandon AVPlayerLayer (switching to AVSampleBufferDisplayLayer), which made me lose native subtitle support but let me tap into the play/pause/seek calls and enforce the needed rules. This way I was able to use AVPictureInPictureController with a content source and delegate (the latter did the trick for tapping into the PiP calls).
But with this, I lost the subtitles. The first thing that came to my mind was to implement support for the rendering of subtitles. Adding an AVPlayerItemLegibleOutput to the AVPlayerItem allowed me to get access to the subtitles, just to find out they were annotated with CoreMedia CMTextMark which don't seem to be automatically rendered by a CATextLayer. Thought of converting the NSAttributedString "styles" from Core Media to "normal" styles but then I would also need to add support for laying the subs correctly. Certainly, one way to do it but not sure it's easier. Couldn't find anything on the Core Media documentation that helped either.
Then, while digging around AVPlayerItemOutput, I saw the suppressesPlayerRendering property and tried using AVPlayerLayer and AVSampleBufferDisplayLayer together.
The first would render the subtitles while the other would do the video rendering. I made a sample, and it does work on the simulator, but when running on the device I get two layers of video playing, and it seems the suppressesPlayerRendering flag doesn't do anything.
How can I tackle this problem?
Hi,
I have written a DAL virtual webcam plugin which works fine with all apps (Zoom, OBS, ...) except Apple's QuickTime.
Other 3rd party virtual webcams show up in QuickTime, for instance the OBS virtual cam plugin:
https://github.com/obsproject/obs-studio/tree/dde4d57d726ed6d9e244ffbac093d8ef54e29f44/plugins/mac-virtualcam/src/dal-plugin
My first suspicion was that it had something to do with code signing, so I removed the signature from the OBS virtual cam plugin, but it kept working in QuickTime.
This is the source code of my plugin's entry function:
#include <CoreFoundation/CoreFoundation.h>
#include "plugininterface.h"

extern "C" void *TestToolCIOPluginMain(CFAllocatorRef allocator, CFUUIDRef requestedTypeUUID)
{
    // This writes to a log file in /tmp/logfile.txt but is NEVER called from QuickTime:
    Logger::write("Called TestToolCIOPluginMain");

    if (!CFEqual(requestedTypeUUID, kCMIOHardwarePlugInTypeID))
        return nullptr;

    return VCam::PluginInterface::create();
}
And the plugin's Info.plist (almost the same as OBS virtual cam's one):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CFBundleDevelopmentRegion</key>
	<string>English</string>
	<key>CFBundleExecutable</key>
	<string>TestDriverCIO</string>
	<key>CFBundleIdentifier</key>
	<string>com.test.cmio.DAL.VirtualCamera</string>
	<key>LSMinimumSystemVersion</key>
	<string>10.13</string>
	<key>CFBundleInfoDictionaryVersion</key>
	<string>6.0</string>
	<key>CFBundleName</key>
	<string>TestDriverCIO</string>
	<key>CFBundlePackageType</key>
	<string>BNDL</string>
	<key>CFBundleShortVersionString</key>
	<string>3.0.0</string>
	<key>CFBundleSignature</key>
	<string>????</string>
	<key>CFBundleVersion</key>
	<string>3.0.0</string>
	<key>CFBundleSupportedPlatforms</key>
	<array>
		<string>MacOSX</string>
	</array>
	<key>CFPlugInFactories</key>
	<dict>
		<key>AAAAAAAA-7320-5643-616D-363462697402</key>
		<string>TestToolCIOPluginMain</string>
	</dict>
	<key>CMIOHardwareAssistantServiceNames</key>
	<array>
		<string>com.test.cmio.VCam.Assistant</string>
	</array>
	<key>CFPlugInTypes</key>
	<dict>
		<key>30010C1C-93BF-11D8-8B5B-000A95AF9C6A</key>
		<array>
			<string>AAAAAAAA-7320-5643-616D-363462697402</string>
		</array>
	</dict>
</dict>
</plist>
Interestingly, TestToolCIOPluginMain is never called (the logger never writes any output) when starting QuickTime, and the camera is not shown in QuickTime.
Is there something special required to get the DAL plugin to show up in QuickTime? What am I missing here?
Regards,
I had successfully added the new template Camera Extension to a project and was able to use it in macOS 12.3 Beta 3. After updating to the RC version of macOS 12.3, the extension no longer appears in (for example) QuickTime Player.
As far as the application is concerned, the requests to "update" the extension work, and I get an OSSystemExtensionRequest.Result.completed response from OSSystemExtensionManager.shared.submitRequest.
The only clue I see is an error in the Console app:
RegisterAssistantService.m:806:-[RegisterAssistantServer launchdJobForExtension:error:] submit returned Error Domain=OSLaunchdErrorDomain Code=125 UserInfo={NSLocalizedFailureReason=<private>} for {
EnablePressuredExit = 1;
Label = "CMIOExtension.app.mmhmm.CameraTest.camera";
LaunchEvents = {
"com.apple.cmio.registerassistantservice.system-extensions.matching" = {
"app.mmhmm.CameraTest.camera" = {
CMIOExtensionBundleIdentifier = "app.mmhmm.CameraTest.camera";
};
};
};
LimitLoadToSessionType = Background;
MachServices = {
"M3KUT44L48.app.mmhmm.CameraTest.camera" = 1;
};
ProcessType = Interactive;
Program = "/Library/SystemExtensions/9D731619-32C2-45C4-9B7C-2F22D184868A/app.mmhmm.CameraTest.camera.systemextension/Contents/MacOS/app.mmhmm.CameraTest.camera";
SandboxProfile = cmioextension;
"_ManagedBy" = "com.apple.cmio.registerassistantservice";
}
Any ideas? Submitting a deactivationRequest also apparently "works", but doesn't actually seem to remove the installed extension...
Edit: Weirdly, rebooting seems to have cleared up the issue.
When I ran systemextensionsctl list, I got a bunch of SE reported as "[terminated waiting to uninstall on reboot]":
% systemextensionsctl list
9 extension(s)
--- com.apple.system_extension.cmio
enabled active teamID bundleID (version) name [state]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
M3KUT44L48 app.mmhmm.CameraTest.camera (1.0/1) camera [terminated waiting to uninstall on reboot]
I didn't see anything in the (very sparse) documentation that indicated how I'd know a reboot was required to finish uninstalling (especially given that one wasn't required for installing). Maybe that's what OSSystemExtensionProperties.isUninstalling is for?
When playing several short HLS clips using AVPlayer connected to a TV via Apple's Lightning-to-HDMI adapter (A1438), we often fail with these unknown errors:
CoreMediaErrorDomain -12034
and
CoreMediaErrorDomain -12158
Does anyone have any clue what these errors mean?
Environment:
iPhone 8
iOS 15.4
Lightning-to-HDMI adapter (A1438)
In the Core Media IO example there is a Mach server that provides the samples to the plugin (in the example it gets the samples from a YUV file). The architecture of OBS is similar: the Mach server reads from a camera or from a file and provides the samples to the plugin.
Why do we need the Mach server? I've implemented a virtual webcam where the plugin captures samples from a webcam (either the built-in webcam or a USB one) directly. That seems to be more efficient, though I guess there is a reason for doing this with a Mach server. What is the reason?
Hi.
Our NWListener runs fine and accepts connections successfully when run in a standalone app; however, the same code fails when moved into the System Extension.
Specifically, we get the error: The operation couldn’t be completed. (Network.NWError error 0.)
...
let listener = try NWListener(using: params!)
listener.service = NWListener.Service(name: "service",
                                      type: "_service._tcp")
listener.stateUpdateHandler = { newState in
    switch newState {
    case .ready:
        if let port = listener.port {
            self.receiveIncomingDataOnConnection()
        }
    case .failed(let error):
        listener.cancel()
        print("Listener - failed with %{public}@, restarting", error.localizedDescription)
        // Getting the error ^
    default:
        break
    }
}
...
We have checked that the App Sandbox permissions for inbound and outbound connections are set in the entitlements file.
At this point, we are stumped as to what's limiting our listener when it runs in the extension.
Thanks!
I built an app which hosts a CMIOExtension. The app works, and it can activate the extension. The extension loads in e.g. Photo Booth and shows the expected video (a white horizontal line which moves down the picture).
I have a couple of questions about this though.
The sample Camera Extension is built with a CMIOExtension dictionary containing just one entry, CMIOExtensionMachServiceName, which is $(TeamIdentifierPrefix)$(PRODUCT_BUNDLE_IDENTIFIER).
This Mach service name won't work, though. When attempting to activate the extension, sysextd says that the extension has an invalid Mach service name or is not signed; the value must be prefixed with one of the App Groups in the entitlement.
So in order to get the sample extension to activate from my app, I have to change its CMIOExtensionMachServiceName to
<my team ID>.com.mycompany.my-app-group.<myextensionname>
Is this to be expected?
The template CMIOExtension generates its own video using a timer. My app is intended to capture video from a source, filter that video, then feed it to the CMIOExtension, somehow. The template creates an app group called "$(TeamIdentifierPrefix)com.example.app-group", which suggests that it might be possible to use XPC to send frames from the app to the extension.
However, I've been unable to do so. I've tried NSXPCConnection *connection = [[NSXPCConnection alloc] initWithMachServiceName:...], using the CMIOExtensionMachServiceName, both with no options and with the NSXPCConnectionPrivileged option. I've also tried NSXPCConnection *connection = [[NSXPCConnection alloc] initWithServiceName:...] using the extension's bundle identifier. In all cases, when I send the first message I get an error in the remote object proxy's handler:
Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named <whatever name I try> was invalidated: failed at lookup with error 3 - No such process."
According to the "Daemons and Services Programming Guide", an XPC service should have a CFBundlePackageType of XPC!, but a CMIOExtension is of type SYSX. It can't be both.
Does the CMIOExtension loading apparatus cook up a synthetic name for the XPC service, and if so, what is it? If none, how is one expected to get pixel buffers into the camera extension?