Post not yet marked as solved
So my timeline is this:
I got a 16-inch MBP in March with these graphics options:
AMD Radeon Pro 5500M 4 GB
Intel UHD Graphics 630 1536 MB
Up until 10.15.5 came out, I had zero problems/crashes and I always have the laptop closed and an external display connected with an official Apple A/V adapter using HDMI. As soon as I installed 10.15.5 the panics started happening.
Reason:					 (1 monitored services unresponsive): checkin with service: WindowServer returned not alive with context: unresponsive work processor(s): WindowServer main thread	40 seconds since last successful checkin
Literally right after the update finished, I didn't touch the laptop for a while; the external monitor went to sleep and the laptop panicked and rebooted. I installed apps like Caffeine to prevent the external monitor from going to sleep and managed to keep working.
Some days after this, the crashes started happening even when the monitor was not going to sleep, usually while using apps that put some strain on the GPU, such as video-conferencing apps, and they became more frequent. The display would freeze for about 2 minutes, the laptop would get very warm while the fans stayed slow, then the fans would go into turbo mode for about 1 second and the laptop would reboot.
After this I reverted to 10.15.4, reset the SMC, etc., and the panics when the display goes to sleep are gone, but the crashes while I'm using the computer continue. I tried ditching the adapter and using a USB-C-to-DisplayPort cable, but the problem remained.
As a final test, I unplugged everything from the laptop and disabled "automatic graphics switching" to force the AMD to be used even with no external display. Sure enough, I was able to reproduce the issue. So it seems not related to an external display, but the AMD card itself (which is always used when an external display is connected).
Sad times.
Post not yet marked as solved
I have a background process which is updating an IOSurface-backed CVPixelBuffer at 30fps. I want to render a preview of that pixel buffer in my window, scaled to the size of the NSView that's displaying it. I get a callback every time the pixelbuffer/IOSurface is updated.
I've tried using a custom layer-backed NSView and setting the layer contents to the IOSurface, which works when the view is created, but it is never updated unless the window is resized or another window moves in front of it.
I've tried calling setNeedsDisplay() on both my view and my layer, I've tried changing the layerContentsRedrawPolicy to .onSetNeedsDisplay, and I've made sure all my content and update code runs on the UI thread, but I can't get it to update dynamically.
Is there a way to bind my layer or view to the IOSurface once and then just have it reflect the updates as they happen, or, if not, at least mark the layer as dirty each frame when it changes?
I've pored over the docs but I don't see a lot about the relationship between IOSurface and CALayer.contents, and when in the lifecycle to mark things dirty (especially when updates are happening outside the view).
Here's example code:
class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
    let testCard = TestCardHelper()

    override var wantsUpdateLayer: Bool {
        return true
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.wantsLayer = true
        self.layerContentsRedrawPolicy = .onSetNeedsDisplay

        /* Scale the incoming data to the size of the view */
        self.layer?.transform = CATransform3DMakeScale(
            (self.layer?.contentsScale)! * self.frame.width / CGFloat(VideoSettings.width),
            (self.layer?.contentsScale)! * self.frame.height / CGFloat(VideoSettings.height),
            CGFloat(1))

        /* Register us with the content provider */
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    deinit {
        VideoFeedBrowser.instance.deregisterConsumer(self)
    }

    override func updateLayer() {
        /* Ideally we wouldn't need to do this */
        updateLayer(pixelBuffer: VideoFeedBrowser.instance.renderer.pixelBuffer)
    }

    /* This gets called every time our pixel buffer is updated (30 fps) */
    @objc
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        updateLayer(pixelBuffer: pixelBuffer)
    }

    func updateLayer(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixel buffer isn't IOSurface backed! noooooo!")
            return
        }

        /* These don't have any effect: */
        // self.layer?.setNeedsDisplay()
        // self.setNeedsDisplay(self.visibleRect)

        self.layer?.contents = surface
    }
}
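For what it's worth, one workaround that is sometimes suggested (a sketch, not a confirmed fix) is to reassign the layer's contents on the main thread inside an explicit CATransaction each time the surface updates. Core Animation caches the contents object, so assigning the same IOSurface again may be treated as a no-op unless the contents identity appears to change; some setups double-buffer between two IOSurfaces for exactly this reason. The method name here is hypothetical:

```swift
import AppKit

extension VideoPreviewThumbnail {
    /* Hypothetical per-frame update; assumes the feed callback
       may arrive off the main thread. */
    func pushFrame(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            return
        }
        DispatchQueue.main.async {
            CATransaction.begin()
            CATransaction.setDisableActions(true)   // no implicit animation per frame
            self.layer?.contents = nil              // force a contents-identity change
            self.layer?.contents = surface
            CATransaction.commit()
        }
    }
}
```

If the nil-then-set dance turns out to be unnecessary on your OS version, the CATransaction wrapper alone is still worth keeping to suppress implicit animations at 30 fps.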
Post not yet marked as solved
Hi!
I recently bought the new iPhone 12 Pro Max.
I have noticed that when I shoot videos in the dark (with the lights on in the house), there is some kind of flickering visible in the video.
Apparently, very fast flickering of lights can make this visible in slow-motion videos even when you cannot see it with the naked eye.
However, I have this problem with normal videos as well. I compared with videos from my iPhone X, and it is definitely worse in my iPhone 12 videos.
I noticed that this happens while recording HD (or 4K) video at 60 fps; if you switch to 30 fps it doesn't happen.
Anyone else that has this problem?
Problem happening on iOS 14.2.1 and iOS 14.3 Beta 2.
Thanks!
Post not yet marked as solved
Hello! I'd like to ask: is there any way to detect whether an application has started to share or record my screen? There doesn't seem to be any notification from the system, but maybe it's possible to detect somehow, programmatically or from the CLI.
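If this is about iOS (the mention of a CLI suggests it might be macOS instead, where I don't know of an equivalent), there is actually a system signal: UIScreen exposes isCaptured (iOS 11+), which is true while the screen is being recorded, mirrored, or streamed, and posts a notification when that state changes. It won't tell you which app is capturing. A minimal sketch:

```swift
import UIKit

/* Observes system-wide screen-capture state changes (iOS 11+). */
final class CaptureObserver {
    private var token: NSObjectProtocol?

    func start() {
        token = NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            // true while recording/mirroring/AirPlay capture is active
            print("screen capture active: \(UIScreen.main.isCaptured)")
        }
    }
}
```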
Post not yet marked as solved
Hi.
Landscape videos work fine, but portrait videos don't.
How can I enable support for them?
Post not yet marked as solved
Hi, I am interested in extracting/accessing timestamp of each frame captured while recording a video via iPhone (HEVC - 4k 60fps). Any links to relevant documentation will be very useful.
My CODE:
the mediaURL.path is obtained from UIImagePickerControllerDelegate
guard UIVideoEditorController.canEditVideo(atPath: mediaURL.path) else { return }
let editor = UIVideoEditorController()
editor.delegate = self
editor.videoPath = mediaURL.path
editor.videoMaximumDuration = 10
editor.videoQuality = .typeMedium
self.parentViewController.present(editor, animated: true)
Error description on console as below.
Video export failed for asset <AVURLAsset: 0x283c71940, URL = file:///private/var/mobile/Containers/Data/PluginKitPlugin/7F7889C8-20DB-4429-9A67-3304C39A0725/tmp/trim.EECE5B69-0EF5-470C-B371-141CE1008F00.MOV>: Error Domain=AVFoundationErrorDomain Code=-11800
It never calls
func videoEditorController(_ editor: UIVideoEditorController, didFailWithError error: Error)
After showing the error on the console, the UIVideoEditorController automatically dismisses itself.
Am I doing something wrong, or is it a bug in Swift?
Thank you in advance.
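For the timestamp question at the top: one approach (a sketch, untested, assuming the recorded movie is reachable at a file URL) is to walk the video track with AVAssetReader and read each sample's presentation timestamp:

```swift
import AVFoundation

/* Prints the presentation timestamp of every video frame in the movie.
   `mediaURL` is assumed to be the URL obtained from the picker. */
func printFrameTimestamps(for mediaURL: URL) throws {
    let asset = AVAsset(url: mediaURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // nil outputSettings = pass-through; no decode needed just for timing
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    reader.startReading()

    while let sample = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sample)
        print("frame at \(CMTimeGetSeconds(pts)) s")
    }
}
```

Note that with pass-through reading the samples arrive in decode order; for HEVC with B-frames you may need to sort the collected timestamps before using them as presentation order.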
Post not yet marked as solved
I am developing an app that sends pixel buffers from a Broadcast Upload Extension to OpenTok. When I run my broadcast extension, it hits its memory limit within seconds. I have been looking for ways to reduce the size and scale of the CMSampleBuffers, and ended up first converting them to CIImage, then scaling them, and then converting them back to CVPixelBuffers to send to the OpenTok servers. Unfortunately, the extension still crashes even though I reduced the pixel buffers. My code follows:
First I convert the CMSampleBuffer to a CVPixelBuffer in the processSampleBuffer function of the sample handler, then pass the CVPixelBuffer to my function along with its timestamp. There I convert the CVPixelBuffer to a CIImage and scale it using a CIFilter (CILanczosScaleTransform). After that, I generate a new pixel buffer from the CIImage using a pixel buffer pool and a CIContext, and then send the new buffer to the OpenTok servers via videoCaptureConsumer.
func processPixelBuffer(pixelBuffer: CVPixelBuffer, timeStamp ts: CMTime) {
    /* `cmIImage` and `size` are helper extensions elsewhere in my project */
    guard let ciImage = self.scaleFilterImage(inputImage: pixelBuffer.cmIImage,
                                              withAspectRatio: 1.0,
                                              scale: CGFloat(kVideoFrameScaleFactor)) else { return }
    if self.pixelBufferPool == nil || self.pixelBuffer?.size != pixelBuffer.size {
        self.destroyPixelBuffers()
        self.updateBufferPool(newWidth: Int(ciImage.extent.size.width),
                              newHeight: Int(ciImage.extent.size.height))
        guard let pool = self.pixelBufferPool,
              CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &self.pixelBuffer) == kCVReturnSuccess
        else { return }
    }
    /* Render into the pooled (scaled) buffer, not the incoming one */
    guard let scaledBuffer = self.pixelBuffer else { return }
    context?.render(ciImage, to: scaledBuffer)
    self.videoCaptureConsumer?.consumeImageBuffer(scaledBuffer,
                                                  orientation: .up,
                                                  timestamp: ts,
                                                  metadata: nil)
}
If the pixelBufferPool is nil or there is a change in the size of the pixelBuffer I update the pool.
private func updateBufferPool(newWidth: Int, newHeight: Int) {
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: UInt(self.videoFormat),
        kCVPixelBufferWidthKey as String: newWidth,
        kCVPixelBufferHeightKey as String: newHeight,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    CVPixelBufferPoolCreate(nil, nil, pixelBufferAttributes as NSDictionary?, &pixelBufferPool)
}
This is the function I use to scale the CIImage:
func scaleFilterImage(inputImage: CIImage, withAspectRatio aspectRatio: CGFloat, scale: CGFloat) -> CIImage? {
    scaleFilter?.setValue(inputImage, forKey: kCIInputImageKey)
    scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
    scaleFilter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)
    return scaleFilter?.outputImage
}
My question is: why does it still crash, and is there another way to reduce the CVPixelBuffer size without hitting the memory limit?
I would appreciate any help on this. Swift or Objective-C, I am open to all suggestions.
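One technique that can bound the pool's footprint (a sketch; assumes `pool` is the CVPixelBufferPool created in updateBufferPool, and the threshold value is illustrative) is to allocate through CVPixelBufferPoolCreatePixelBufferWithAuxAttributes with an allocation threshold, so the pool refuses to grow past a fixed number of live buffers, and to flush unused buffers back when you hit it:

```swift
import CoreVideo

/* Cap how many buffers the pool may have outstanding at once.
   Broadcast extensions have a hard memory ceiling (around 50 MB),
   so a handful of full-size BGRA buffers can already exceed it. */
let auxAttributes: [String: Any] = [
    kCVPixelBufferPoolAllocationThresholdKey as String: 3   // assumed cap
]

var buffer: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(
    kCFAllocatorDefault, pool, auxAttributes as CFDictionary, &buffer)

if status == kCVReturnWouldExceedAllocationThreshold {
    // Drop or reuse a frame instead of allocating another buffer,
    // and release buffers the pool is holding in reserve.
    CVPixelBufferPoolFlush(pool, .excessBuffers)
}
```

Also worth checking: that the CIContext is created once and reused (each CIContext holds significant GPU state), and that the per-frame work is wrapped in an autoreleasepool so intermediate CoreVideo objects are released promptly.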
Post not yet marked as solved
I am porting over some video decoding code from Intel to M1 and I'm seeing a very strange pixelFormat.
The setup is pretty basic, basically just setting kCVPixelBufferMetalCompatibilityKey to true.
But I am at a complete loss as to how to interpret this pixelFormat. In looking through CVPixelBuffer.h, I don't see any constant even close. (Using Xcode 12.5.1).
This is the beginning of the debug description of the imageBuffer:
CVPixelBuffer 0x6000eea7bf60 width=320 height=480 pixelFormat=&8v0 iosurface=0x6000e4c87ff0 planes=2 poolName=decode
Post not yet marked as solved
In the WWDC 2021 video 10047, it was mentioned to check for availability of the lossless CVPixelBuffer formats and fall back to the normal BGRA32 format if they are not available. But in the updated AVMultiCamPiP sample code, it looks for the lossy format before the lossless one. Why is that, and what exact difference does it make if we select lossy vs. lossless?
Post not yet marked as solved
We have seen strange crashes when running our app since the macOS 12 beta (and still on macOS 12.0.1). We have not been able to fully identify the issue, but it seems to happen on resuming video playback in an AVPlayer, sometimes after coming back from the background, sometimes on resuming playback directly. Xcode points to some code in libsystem_kernel.dylib (it seems different every time and is never in our own code).
The log will show:
-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion 'MTLResource 0x600002293790 (label: (null)), referenced in cmd buffer 0x7f7b2200a000 (label: (null)) is in volatile or empty purgeable state at commit'
We tried finding the object 0x600002293790 and 0x7f7b2200a000 but this gave no additional information as to why the app crashes.
We are using a custom VideoCompositor: AVVideoCompositing and initialise the CIContext for the work done here with these options:
if let mtlDevice = MTLCreateSystemDefaultDevice() {
    let options: [CIContextOption: Any] = [
        CIContextOption.useSoftwareRenderer: false,
        CIContextOption.outputPremultiplied: false,
    ]
    let context = CIContext(mtlDevice: mtlDevice, options: options)
}
We are not sure whether this is an Xcode 13 debug-layer issue, a macOS 12.0.1 Monterey issue, or an actual bug: we have not seen this crash in builds run outside Xcode, which is the only place we get this diagnostic. But we have also seen strange crashes on audio/video threads that we could not trace back to our code.
The crash has never occurred on Xcode 12 or on macOS Big Sur during previous testing.
Any information as to locating the source of the issue or a solution would be awesome.
Post not yet marked as solved
I am trying to play videos in an AVSampleBufferDisplayLayer (AVSBDPL). Everything works well, except that screenshots taken programmatically no longer seem to work for the AVSBDPL.
I have tried a couple of approaches, and the screenshot is always a black screen in the area of the AVSBDPL. Here are the approaches I have tried, none of which works:
1. Get an image from image context with [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]
- (UIImage *)_screenshot:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
No matter which view I provide to the function (the screen, the player container view, etc.), the video area is always a black image. I have tried different setups for the image context and flipping afterScreenUpdates, but the result is always the same.
2. Get an image from image context with [view.layer renderInContext:UIGraphicsGetCurrentContext()]
- (UIImage *)_screenshot:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
renderInContext: is an older API, commonly used before iOS 10; it is slow and was superseded by drawViewHierarchyInRect:afterScreenUpdates:. Same result here: the screenshot just shows a black screen.
3. Use UIGraphicsImageRenderer
- (UIImage *)_screenshotNew:(UIView *)view {
    UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat new];
    format.opaque = view.opaque;
    format.scale = 0.0;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:view.frame.size format:format];
    UIImage *screenshotImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext *_Nonnull rendererContext) {
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    }];
    return screenshotImage;
}
This is the newest API for taking a snapshot and converting it to a UIImage, and it does not work either.
4. Use [view snapshotViewAfterScreenUpdates:YES]
UIView *snapView = [self.view snapshotViewAfterScreenUpdates:YES];
UIView has an API called snapshotViewAfterScreenUpdates:. Surprisingly, the UIView returned by this API can be rendered directly in the UI, and it shows the right screenshot (woohoo!). However, when I try to convert that UIView to a UIImage, it becomes a black screen again.
Some additional configurations that I have tried
The preventsCapture instance property of AVSBDPL: this is NO by default. When set to YES, it prevents the user from taking a screenshot of the layer with the physical buttons on the phone, but it has no effect on programmatically taking screenshots.
The outputObscuredDueToInsufficientExternalProtection instance property of AVSBDPL: this property is always NO for me, so I don't think it is obscuring anything. Also, this is an iOS 14.5+ API, and I see the issue below 14.5 as well.
I found very few posts about this when searching, and all of them ran into the same issue without solving it. I would really appreciate any help with this!
Post not yet marked as solved
I’m using AVFoundation for image capture using camera on iPad.
But I’m not using Video or Audio related functionality.
It looks like linking AVFoundation also pulls CoreMedia, CoreVideo, and CoreAudio into any project.
Is there any way to remove these frameworks (CoreMedia, CoreVideo, and CoreAudio) from my app?
I have used otool to list all the frameworks and libraries being used by my framework.
Post not yet marked as solved
I’m using AVFoundation to access camera on iPad.
But with AVFoundation, CoreMedia is also imported, which in-turn imports CoreAudio and CoreVideo.
Keeping privacy concerns in mind, is there any way by which I can ensure that the app is never able to access Microphone or Video Recording?
Post not yet marked as solved
This project targets both Android and iOS, and I use a CAEAGLLayer to present live video at 60 fps. All the code works well on iPhone 11 and older devices, but on iPhone 12 and iPhone 13 it gets strange.
The layer drops some frames. Profiling with Instruments, I found that some drawables are waited on for more than 1/60 s. After I turn on the screen recorder, it works well: all drawables are waited on for less than 1/60 s, and the layer presents video at 60 fps. After I turn off the screen recorder, it stops working again.
Can anyone tell me what is happening and how to work around it?
Post not yet marked as solved
I use FFmpeg to play back video with VideoToolbox (hardware decoding). How can I get an MTL::Texture from an AVFrame when I receive a hardware frame from avcodec_receive_frame(avctx, avframe)? There are few examples using metal-cpp, and I can't find the CVPixelBufferRef type in metal-cpp, so I'm really confused.
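In case it helps: with FFmpeg's VideoToolbox hwaccel, the decoded AVFrame carries its CVPixelBufferRef in frame->data[3]. metal-cpp itself has no CoreVideo bridging, so the usual route is to pass that pointer through a small Objective-C or Swift shim and wrap it with a CVMetalTextureCache. In Swift terms the mapping looks roughly like this (the .bgra8Unorm format is an assumption and must match the decoder's actual output; planar YUV formats need one texture per plane):

```swift
import CoreVideo
import Metal

/* Wraps one plane of a decoded CVPixelBuffer as a Metal texture.
   `cache` is a CVMetalTextureCache created once per device. */
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        .bgra8Unorm,                              // assumed; match decoder output
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                                        // plane index
        &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}
```

The resulting id<MTLTexture> can then be handed to the C++ side as an MTL::Texture* via a __bridge cast, since metal-cpp objects are the same Objective-C objects under the hood.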
Post not yet marked as solved
On macOS 12, iTunes, Music, and other apps trigger the CMIOObjectAddPropertyListener(Block) callback when they are opened, even though the camera device is not actually started.
I am working on a video editing app, and I recently changed my code to render frames using a custom compositor. Filters render well, but when I try to change a property of a filter, for example the intensity, the updates are laggy. I didn't have this problem before using the custom compositor. The problem, I'm assuming, is that the renderer object now lives inside the compositor, so when I bind its values to a slider outside the compositor class, it doesn't update instantly. I am using SwiftUI. Here is part of my custom compositor:
class CustomVideoCompositor: NSObject, AVVideoCompositing {
    var metalContext: RendererContext?

    /* Declared elsewhere in the full class; included so the excerpt is complete */
    private let renderingQueue = DispatchQueue(label: "renderingQueue")
    private var shouldCancelAllRequests = false

    override init() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let commandQueue = device.makeCommandQueue() else {
            super.init()
            return
        }
        var newTextureCache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &newTextureCache)
        guard let textureCache = newTextureCache else {
            super.init()
            return
        }
        metalContext = RendererContext(device: device, commandQueue: commandQueue, textureCache: textureCache)
        super.init()
    } //init

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        autoreleasepool {
            renderingQueue.async {
                if self.shouldCancelAllRequests {
                    request.finishCancelledRequest()
                } else {
                    if let currentInstruction = request.videoCompositionInstruction as? CustomVideoCompositionInstruction {
                        guard let inputBuffer = request.sourceFrame(byTrackID: currentInstruction.trackID),
                              let videoEdits = currentInstruction.videoEdits
                        else {
                            request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                            return
                        }
                        request.finish(withComposedVideoFrame: self.renderVideoEdits(request: request, videoEdits: videoEdits, inputBuffer: inputBuffer))
                    } else if let currentInstruction = request.videoCompositionInstruction as? TransitionInstruction {
                        guard let fromBuffer = request.sourceFrame(byTrackID: currentInstruction.fromTrackID),
                              let toBuffer = request.sourceFrame(byTrackID: currentInstruction.toTrackID),
                              let outputBuffer = request.renderContext.newPixelBuffer(),
                              let fromVideoEdits = currentInstruction.fromVideoEdits,
                              let toVideoEdits = currentInstruction.toVideoEdits,
                              let transitionEdit = currentInstruction.transitionEdit,
                              let metalContext = self.metalContext
                        else {
                            request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                            return
                        }
                        if transitionEdit.transition.context == nil {
                            transitionEdit.transition.setContext(context: metalContext)
                        }
                        transitionEdit.transition.prepare()
                        let renderedFromBuffer = self.renderVideoEdits(request: request, videoEdits: fromVideoEdits, inputBuffer: fromBuffer)
                        let renderedToBuffer = self.renderVideoEdits(request: request, videoEdits: toVideoEdits, inputBuffer: toBuffer)
                        let renderedOutputBuffer = transitionEdit.transition.render(fromBuffer: renderedFromBuffer, toBuffer: renderedToBuffer, destinationBuffer: outputBuffer)
                        request.finish(withComposedVideoFrame: renderedOutputBuffer)
                    } else {
                        request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                    }
                }
            } //renderingQueue.async
        } //autoreleasepool
    } //startRequest

    func renderVideoEdits(request: AVAsynchronousVideoCompositionRequest, videoEdits: VideoEdits, inputBuffer: CVPixelBuffer) -> CVPixelBuffer {
        guard let metalContext = self.metalContext else {
            return inputBuffer
        }
        var renderedBuffer: CVPixelBuffer = inputBuffer
        for filter in videoEdits.filters {
            if filter.context == nil {
                filter.setContext(context: metalContext)
            }
            filter.prepare()
            guard let outputBuffer = request.renderContext.newPixelBuffer() else {
                return renderedBuffer
            }
            renderedBuffer = filter.render(inputBuffer: renderedBuffer, outputBuffer: outputBuffer)
        }
        return renderedBuffer
    } //renderVideoEdits

    func cancelAllPendingVideoCompositionRequests() {
        renderingQueue.sync {
            shouldCancelAllRequests = true
        }
        renderingQueue.async {
            self.shouldCancelAllRequests = false
        }
    } //cancelAllPendingVideoCompositionRequests
} //CustomVideoCompositor
I access the renderer in a SwiftUI view by doing something like this:
@State var renderer: FilterRenderer
renderer = videoComposition.instructions[currentInstruction].videoEdits.filter
Slider(value: $renderer.intensity, in: 0.0...1.0)
I used to render filters using an AVPlayerItemVideoOutput and this implementation worked just fine. It was fast and efficient. Any idea why this is happening? I needed to switch to using a custom compositor so I can source separate frames for transitions.
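One pattern that might help, sketched under the assumption that your instructions can hold a reference to a shared model object: make the filter parameters a reference type that both the SwiftUI view and the compositor observe. The slider then mutates shared state that the compositor reads on its next startRequest, instead of a value that may be copied when the instruction is built. All names here are hypothetical:

```swift
import SwiftUI

/* Shared, observable parameter store. The instruction (and its filter)
   would hold this same instance and read `intensity` at render time. */
final class FilterParameters: ObservableObject {
    @Published var intensity: Float = 0.5
}

struct IntensitySlider: View {
    /* Must be the same instance the compositor's instruction references */
    @ObservedObject var params: FilterParameters

    var body: some View {
        Slider(value: $params.intensity, in: 0.0...1.0)
    }
}
```

The other thing worth checking is playback itself: a custom compositor only re-renders frames the player requests, so a paused AVPlayer will not reflect a parameter change until a new frame is composed; forcing a re-evaluation (for example by reassigning the player item's videoComposition) is a common workaround for live preview while paused.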
Post not yet marked as solved
Following the documentation and the demo
mixing_metal_and_opengl_rendering_in_a_view,
the section "Select a Compatible Pixel Format" only shows MTLPixelFormatBGRA8Unorm, as listed below.
If I want to use MTLPixelFormatRGBA8Unorm, how can I find the CoreVideo pixel format and GL format that match MTLPixelFormatRGBA8Unorm?
Thanks in advance.
// Table of equivalent formats across CoreVideo, Metal, and OpenGL
static const AAPLTextureFormatInfo AAPLInteropFormatTable[] =
{
    // Core Video Pixel Format,               Metal Pixel Format,            GL internalformat, GL format,   GL type
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm,      GL_RGBA,           GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#if TARGET_IOS
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm_sRGB, GL_RGBA,           GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#else
    { kCVPixelFormatType_ARGB2101010LEPacked, MTLPixelFormatBGR10A2Unorm,    GL_RGB10_A2,       GL_BGRA,     GL_UNSIGNED_INT_2_10_10_10_REV },
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm_sRGB, GL_SRGB8_ALPHA8,   GL_BGRA,     GL_UNSIGNED_INT_8_8_8_8_REV },
    { kCVPixelFormatType_64RGBAHalf,          MTLPixelFormatRGBA16Float,     GL_RGBA,           GL_RGBA,     GL_HALF_FLOAT },
#endif
};
Post not yet marked as solved
Only videos uploaded from an iPhone 11 play just the audio when embedded with a video tag, though they play fine when downloaded. I have tried uploading from other iPhone models and those work fine. Why is that?