Post not yet marked as solved
I’m using AVFoundation to access the camera on iPad.
But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo.
Keeping privacy concerns in mind, is there any way I can ensure that the app is never able to access the microphone or record video?
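One point worth noting (this is about the app's configuration, not about which frameworks get linked): merely linking CoreAudio does not by itself grant microphone access. On iOS, microphone capture additionally requires the NSMicrophoneUsageDescription key in Info.plist plus an explicit user grant, and since iOS 10 an app that tries to access the microphone without that key is terminated rather than prompted. So a camera-only app can simply declare camera use and deliberately omit the microphone key; the Info.plist fragment below is a sketch of that (the description strings are placeholders):

```xml
<!-- Info.plist fragment: camera use is declared, microphone is deliberately not. -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture photos.</string>
<!-- No NSMicrophoneUsageDescription key: any attempt to start microphone
     capture will fail instead of prompting the user. -->
```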
I’m using AVFoundation for image capture with the camera on iPad, but I’m not using any video- or audio-related functionality.
It looks like AVFoundation pulls CoreMedia, CoreVideo, and CoreAudio into any project that links it.
Is there any way to remove these libraries (CoreMedia, CoreVideo, and CoreAudio) from my app?
I have used otool -L to list all the frameworks and libraries linked by my framework.
I am trying to play videos in an AVSampleBufferDisplayLayer (AVSBDPL). Everything works well, except that taking a screenshot of the AVSBDPL programmatically no longer seems to work.
I have tried a couple of approaches, and the screenshot always shows a black area where the AVSBDPL is. Here are the approaches I have tried; none of them works:
1. Get an image from image context with [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]
- (UIImage *)_screenshot:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
No matter which view I pass to the function (the screen, the player container view, etc.), the video area is always black in the resulting image. I have also tried different setups for the image context and flipping afterScreenUpdates; the result is always the same.
2. Get an image from image context with [view.layer renderInContext:UIGraphicsGetCurrentContext()]
- (UIImage *)_screenshot:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
[layer renderInContext:UIGraphicsGetCurrentContext()] is an older API from before iOS 10. It is very slow and was superseded by drawViewHierarchyInRect:afterScreenUpdates:. Same here: the screenshot just shows a black screen.
3. Use UIGraphicsImageRenderer
- (UIImage *)_screenshotNew:(UIView *)view {
    UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat new];
    format.opaque = view.opaque;
    format.scale = 0.0;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:view.frame.size format:format];
    UIImage *screenshotImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    }];
    return screenshotImage;
}
This is the newest API for taking a screenshot and converting it to a UIImage, and it does not work either.
4. Use [view snapshotViewAfterScreenUpdates:YES]
UIView *snapView = [self.view snapshotViewAfterScreenUpdates:YES];
UIView has an API called snapshotViewAfterScreenUpdates:. Surprisingly, the UIView returned by this API can be rendered directly in the UI, and it shows the right screenshot (woohoo!). However, when I try to convert that UIView to a UIImage, it becomes a black screen again.
Some additional configurations that I have tried:
The preventsCapture instance property on AVSBDPL. This is NO by default. When set to YES, it prevents the user from capturing the layer with the physical buttons on the phone, but it has no effect on programmatic screenshots.
The outputObscuredDueToInsufficientExternalProtection instance property of AVSBDPL. This property is always NO for me, so I don't think it is obscuring anything. Also, it is an iOS 14.5+ API, and I see the issue below 14.5 as well.
There are also very few posts about this when I search on Google, and all of them describe the same issue without a solution. I would really appreciate it if anyone could help me with this!
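One workaround that is sometimes suggested (a sketch under assumptions, not a verified fix): since AVSampleBufferDisplayLayer's content appears to be composited outside the app's normal layer rendering, snapshot APIs see nothing there. Instead of snapshotting the layer, keep a reference to the most recently enqueued CVPixelBuffer and render that into an image yourself with Core Image. FrameGrabber and latestPixelBuffer below are hypothetical names; the wiring to your enqueueing code is up to you.

```swift
import AVFoundation
import CoreImage
import UIKit

// Hypothetical helper: instead of snapshotting the AVSampleBufferDisplayLayer,
// render the most recently enqueued pixel buffer into a UIImage directly.
final class FrameGrabber {
    private let ciContext = CIContext()

    // Update this from wherever you enqueue sample buffers, e.g. with
    // CMSampleBufferGetImageBuffer(sampleBuffer).
    var latestPixelBuffer: CVPixelBuffer?

    func currentFrameImage() -> UIImage? {
        guard let pixelBuffer = latestPixelBuffer else { return nil }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
```

The resulting UIImage can then be composited over a regular view snapshot at the layer's frame if the rest of the UI is needed too.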
I am porting over some video decoding code from Intel to M1 and I'm seeing a very strange pixelFormat.
The setup is pretty basic: essentially just setting kCVPixelBufferMetalCompatibilityKey to true.
But I am at a complete loss as to how to interpret this pixelFormat. Looking through CVPixelBuffer.h, I don't see any constant that comes close. (Using Xcode 12.5.1.)
This is the beginning of the debug description of the imageBuffer:
CVPixelBuffer 0x6000eea7bf60 width=320 height=480 pixelFormat=&8v0 iosurface=0x6000e4c87ff0 planes=2 poolName=decode
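For what it's worth, the pixelFormat in a CVPixelBuffer debug description is the format's four-character code (FourCC) printed as ASCII, so "&8v0" is just a numeric OSType rendered as characters. A small generic helper (not tied to any SDK header) can convert between the two forms so the code can be compared against the constants in an up-to-date CVPixelBuffer.h; newer SDKs define compressed formats whose codes are not in older headers, which may be why nothing close appears in Xcode 12.5.1.

```swift
// Convert a FourCC/OSType-style UInt32 into the four-character ASCII string
// that CVPixelBuffer debug descriptions print (e.g. "&8v0"), and back again.
func fourCCString(_ code: UInt32) -> String {
    let bytes = [UInt8((code >> 24) & 0xFF),
                 UInt8((code >> 16) & 0xFF),
                 UInt8((code >> 8) & 0xFF),
                 UInt8(code & 0xFF)]
    return String(bytes.map { Character(UnicodeScalar($0)) })
}

func fourCCCode(_ string: String) -> UInt32 {
    // Pack the four ASCII scalars into a big-endian UInt32.
    return string.unicodeScalars.reduce(0) { ($0 << 8) | UInt32($1.value) }
}
```

For example, fourCCString(0x26387630) yields "&8v0", and fourCCCode("BGRA") gives the familiar 32BGRA code.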
We see strange crashes when running our app since the macOS 12 beta (still occurring on macOS 12.0.1). We have not been able to fully identify the issue, but it seems to happen when video playback continues in an AVPlayer, sometimes after returning from the background, sometimes on resuming playback directly. Xcode points to code in libsystem_kernel.dylib (it seems different every time and is never in our own code).
The log will show:
-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion 'MTLResource 0x600002293790 (label: (null)), referenced in cmd buffer 0x7f7b2200a000 (label: (null)) is in volatile or empty purgeable state at commit'
We tried finding the objects 0x600002293790 and 0x7f7b2200a000, but this gave no additional information as to why the app crashes.
We are using a custom video compositor (VideoCompositor: AVVideoCompositing) and initialise the CIContext used for its work with these options:
if let mtlDevice = MTLCreateSystemDefaultDevice() {
    let options: [CIContextOption: Any] = [
        CIContextOption.useSoftwareRenderer: false,
        CIContextOption.outputPremultiplied: false,
    ]
    let context = CIContext(mtlDevice: mtlDevice, options: options)
}
We are not sure whether this is an Xcode 13 debug issue, a macOS 12.0.1 Monterey issue, or an actual bug on our side, as we have not seen the crash in builds made without Xcode attached to report this information. But we have also seen strange crashes on audio/video threads that we could not trace back to our code.
The crash never occurred with Xcode 12 or on macOS Big Sur during previous testing.
Any information on locating the source of the issue, or a solution, would be awesome.
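Since the assertion prints (label: (null)) for both the resource and the command buffer, one low-effort debugging step (a sketch, assuming you create any Metal objects directly alongside the CIContext; the label strings are placeholders) is to label Metal objects so that future validation-layer assertions name them instead of printing null:

```swift
import Metal

// Label Metal objects so Metal API validation assertions identify them
// by name instead of "(label: (null))". Purely illustrative.
if let device = MTLCreateSystemDefaultDevice(),
   let queue = device.makeCommandQueue() {
    queue.label = "VideoCompositorQueue"
    if let commandBuffer = queue.makeCommandBuffer() {
        commandBuffer.label = "CompositeFrameCommandBuffer"
        // ... encode compositing work here ...
        commandBuffer.commit()
    }
}
```

The MTLDebugCommandBuffer class in the assertion also suggests Metal API validation is active, which is enabled by default only for debug runs from Xcode; that would be consistent with not seeing the crash in builds made without Xcode.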
In WWDC 2021 session 10047, it was mentioned that you should check for availability of the lossless CVPixelBuffer format and fall back to the normal BGRA32 format if it is not available. But in the updated AVMultiCamPiP sample code, it first looks for the lossy format, then the lossless one. Why is that, and what exact difference does selecting lossy vs. lossless make?
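For reference, the fallback pattern being described can be sketched roughly as below. This is an illustration, not the sample's actual code: preferredPixelFormat is a hypothetical helper, and the lossy-before-lossless order simply mirrors what the updated sample reportedly does.

```swift
import AVFoundation

// Sketch of the availability-check-with-fallback pattern:
// prefer a compressed pixel format when the device offers one,
// otherwise fall back to plain 32BGRA.
func preferredPixelFormat(for output: AVCaptureVideoDataOutput) -> OSType {
    let available = output.availableVideoPixelFormatTypes
    let preferenceOrder: [OSType] = [
        kCVPixelFormatType_Lossy_32BGRA,     // order is an assumption from the sample
        kCVPixelFormatType_Lossless_32BGRA,
        kCVPixelFormatType_32BGRA,           // universal fallback
    ]
    return preferenceOrder.first(where: available.contains) ?? kCVPixelFormatType_32BGRA
}
```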
My code (the mediaURL.path is obtained from a UIImagePickerControllerDelegate callback):
guard UIVideoEditorController.canEditVideo(atPath: mediaURL.path) else { return }
let editor = UIVideoEditorController()
editor.delegate = self
editor.videoPath = mediaURL.path
editor.videoMaximumDuration = 10
editor.videoQuality = .typeMedium
self.parentViewController.present(editor, animated: true)
The error description on the console is below.
Video export failed for asset <AVURLAsset: 0x283c71940, URL = file:///private/var/mobile/Containers/Data/PluginKitPlugin/7F7889C8-20DB-4429-9A67-3304C39A0725/tmp/trim.EECE5B69-0EF5-470C-B371-141CE1008F00.MOV>: Error Domain=AVFoundationErrorDomain Code=-11800
It doesn't call
func videoEditorController(_ editor: UIVideoEditorController, didFailWithError error: Error)
After showing the error on the console, the UIVideoEditorController automatically dismisses itself.
Am I doing something wrong, or is it a bug in Swift?
Thank you in advance.
Hi, I am interested in extracting/accessing the timestamp of each frame captured while recording a video on iPhone (HEVC, 4K 60 fps). Any links to relevant documentation would be very useful.
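One place per-frame timestamps are exposed during capture is the sample buffer delegate of AVCaptureVideoDataOutput: each CMSampleBuffer carries a presentation timestamp. The sketch below assumes such an output is attached to your capture session; TimestampLogger is a hypothetical name.

```swift
import AVFoundation

// Sketch: log each captured frame's presentation timestamp (PTS).
final class TimestampLogger: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The PTS of this frame, in the capture timebase.
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        print("frame at \(CMTimeGetSeconds(pts)) s")
    }
}
```

For a file that has already been recorded, reading it back with AVAssetReader exposes the same per-sample timestamps on the sample buffers it vends.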
Hi.
Landscape videos work fine, but portrait videos don't.
How can I enable support for them?