Post not yet marked as solved
Hi!
I recently bought the new iPhone 12 Pro Max.
I have noticed that when I shoot videos in the dark (with the lights on in the house), some kind of flickering is visible in the video.
Apparently, because many lights flicker very rapidly, slow-motion videos can make this flickering visible even though you cannot see it with the naked eye.
However, I have this problem with normal videos as well. I compared it with videos from my iPhone X, and it is definitely worse in my iPhone 12 videos.
I noticed that this happens while recording HD (or 4K) video at 60 fps; if you switch to 30 fps, it doesn't happen.
Anyone else who has this problem?
Problem happening on iOS 14.2.1 and iOS 14.3 Beta 2.
Thanks!
Post not yet marked as solved
I have a DLP-Link 3D projector which I'd like to make use of by means of a hand-made player.
So far in my project: a MovieView : NSView class within an NSWindow, with stub drawing code.
I know that if I place drawing code in the draw(_:) function, NSGraphicsContext.current will be set up for me to use. But I'm drawing from a high-priority thread (the CVDisplayLink), so I obviously have to set it up myself.
How should I do that in Swift?
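For what it's worth, here is a minimal sketch of wrapping a CGContext you own so that AppKit drawing calls work off the main thread. It assumes the off-screen-bitmap approach (drawing directly into an on-screen view from a CVDisplayLink thread isn't supported), and the view is assumed to be layer-backed; names like `bitmapContext` are illustrative:

```swift
import AppKit

// Sketch only: `bitmapContext` is a CGBitmapContext you created and own.
// Draw into it from the display-link thread, then hand the finished
// CGImage to the layer on the main thread.
func renderFrame(into bitmapContext: CGContext, for view: NSView) {
    let nsContext = NSGraphicsContext(cgContext: bitmapContext, flipped: false)
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = nsContext   // AppKit drawing APIs now work here
    NSColor.black.setFill()
    NSRect(x: 0, y: 0, width: bitmapContext.width, height: bitmapContext.height).fill()
    // ... per-frame drawing ...
    NSGraphicsContext.restoreGraphicsState()

    guard let image = bitmapContext.makeImage() else { return }
    DispatchQueue.main.async {
        view.layer?.contents = image        // publish only on the main thread
    }
}
```

NSGraphicsContext.current is thread-local, so setting it on the display-link thread doesn't disturb the main thread's context.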
Post not yet marked as solved
Hi, I am working on an app that can generate videos (or at least it should). I don't know which frameworks/libraries can be used to do this. I have inputs (image, position, …), and from this information I want to generate a video. The code should be compatible with SwiftUI, so that I can have something like a live preview, as in iMovie or Clips. (My current code is flexible, so it doesn't matter if I have to restructure some small things.)
Thanks in advance
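For generating a movie from per-frame images, AVFoundation's AVAssetWriter is the usual route, and since it is framework-level it coexists fine with SwiftUI. A hedged sketch, where `makePixelBuffer(for:)` is a hypothetical function that renders one frame from your (image, position, …) inputs:

```swift
import AVFoundation
import CoreVideo

/// Minimal sketch: write `frameCount` pixel buffers to an H.264 movie.
/// `makePixelBuffer(for:)` is a hypothetical frame-rendering function.
func writeVideo(to url: URL, size: CGSize, frameCount: Int, fps: Int32 = 30) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    for i in 0..<frameCount {
        while !input.isReadyForMoreMediaData { usleep(1000) } // naive backpressure
        let time = CMTime(value: CMTimeValue(i), timescale: fps)
        adaptor.append(makePixelBuffer(for: i), withPresentationTime: time)
    }
    input.markAsFinished()
    writer.finishWriting { /* handle completion */ }
}
```

In production you would use requestMediaDataWhenReady(on:using:) instead of the polling loop, but the structure is the same.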
Post not yet marked as solved
I have read that as of macOS 10.14, setting setAllowsConcurrentViewDrawing to true on an NSWindow and setCanDrawConcurrently to true on its views is no longer supported as a way to perform drawing outside the main thread.
All the documentation I find on the internet strongly advises programmers to clean up their main loops so that only drawing and user-input handling happen there. But what else is left to do when this means multiple programmer-years of work?
I used to draw a CGImage, wrapped in an NSImage, through an NSGraphicsContext on a separate looping thread, setting the NSView's needsDisplay and then calling display, to show smooth animations.
I also picked up that NSOpenGLView has been deprecated.
With all that said, what would be the best way to perform threaded drawing in an NSView?
Thanks
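For what it's worth, one pattern that still works is to keep the expensive rendering on your own thread and only hand the finished image to Core Animation on the main thread. A minimal sketch, assuming a layer-backed view (names and dimensions are illustrative; CAMetalLayer driven by a CVDisplayLink is the other common route):

```swift
import AppKit

/// Render off the main thread into your own bitmap context, then
/// publish the finished CGImage as the layer's contents on main.
final class ThreadedCanvasView: NSView {
    override var wantsUpdateLayer: Bool { true }

    func renderOneFrame() {  // call this from your background render loop
        guard let ctx = CGContext(data: nil, width: 640, height: 480,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
        else { return }
        // ... draw the current frame into ctx ...
        guard let image = ctx.makeImage() else { return }
        DispatchQueue.main.async {          // touch the layer only on main
            self.layer?.contents = image
        }
    }
}
```

The rendering cost stays off the main thread; only the cheap contents assignment crosses over.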
Post not yet marked as solved
I have a background process which is updating an IOSurface-backed CVPixelBuffer at 30 fps. I want to render a preview of that pixel buffer in my window, scaled to the size of the NSView that's displaying it. I get a callback every time the pixel buffer/IOSurface is updated.
I've tried using a custom layer-backed NSView and setting the layer's contents to the IOSurface, which works when the view is created, but it's never updated unless the window is resized or another window passes in front of it.
I've tried calling setNeedsDisplay() on both my view and my layer, I've tried changing the layerContentsRedrawPolicy to .onSetNeedsDisplay, and I've tried making sure all my content and update code runs on the UI thread, but I can't get it to update dynamically.
Is there a way to bind my layer or view to the IOSurface once and then just have it reflect the updates as they happen, or, if not, at least mark the layer as dirty each frame when it changes?
I've pored over the docs, but I don't see much about the relationship between IOSurface and CALayer.contents, or about when in the lifecycle to mark things dirty (especially when updates happen outside the view).
Here's example code:
class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
    let testCard = TestCardHelper()

    override var wantsUpdateLayer: Bool {
        return true
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.wantsLayer = true
        self.layerContentsRedrawPolicy = .onSetNeedsDisplay

        /* Scale the incoming data to the size of the view */
        self.layer?.transform = CATransform3DMakeScale(
            (self.layer?.contentsScale)! * self.frame.width / CGFloat(VideoSettings.width),
            (self.layer?.contentsScale)! * self.frame.height / CGFloat(VideoSettings.height),
            CGFloat(1))

        /* Register us with the content provider */
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    deinit {
        VideoFeedBrowser.instance.deregisterConsumer(self)
    }

    override func updateLayer() {
        /* Ideally we wouldn't need to do this */
        updateLayer(pixelBuffer: VideoFeedBrowser.instance.renderer.pixelBuffer)
    }

    /* This gets called every time our pixel buffer is updated (30 fps) */
    @objc
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        updateLayer(pixelBuffer: pixelBuffer)
    }

    func updateLayer(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixel buffer isn't IOSurface backed!")
            return
        }
        /* These don't have any effect: */
        // self.layer?.setNeedsDisplay()
        // self.setNeedsDisplay(self.visibleRect)
        self.layer?.contents = surface
    }
}
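In case a later reader hits the same wall: Core Animation appears to treat `layer.contents = surface` as a no-op when the same IOSurface object is assigned again, so in-place pixel updates are never picked up. A hedged sketch of the common double-buffering workaround (`surfaces` and `drawNextFrame` are illustrative names):

```swift
import AppKit

/// Double-buffer: draw into two IOSurfaces alternately so the layer
/// sees a *different* contents object on every frame.
func publishFrame(to layer: CALayer, surfaces: [IOSurface], frameIndex: Int) {
    let surface = surfaces[frameIndex % 2]    // alternate between two buffers
    drawNextFrame(into: surface)              // your producer fills this buffer
    DispatchQueue.main.async {
        CATransaction.begin()
        CATransaction.setDisableActions(true) // suppress implicit animations
        layer.contents = surface              // identity change triggers an update
        CATransaction.commit()
    }
}
```

This also avoids tearing, since you never write into the surface the compositor is currently reading.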
Post not yet marked as solved
I am developing a hybrid app using JavaScript and HTML; to build it in Xcode I use Capacitor. The problem is that my app includes videos, but I cannot block the native iOS player, and I want to block it.
webview.allowsInlineMediaPlayback = yes;
I found this, but the problem is that it only blocks it on iPad, not on iPhone.
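For reference, a sketch of the native side (standard WKWebView behavior; whether you can reach this hook depends on your Capacitor setup): `allowsInlineMediaPlayback` must be set on the WKWebViewConfiguration before the web view is created, and on iPhone the `<video>` element additionally needs the `playsinline` attribute, otherwise it still opens the fullscreen native player. iPad defaults to inline playback, which would explain why it only "works" there.

```swift
import WebKit

// Must be configured *before* the WKWebView is instantiated;
// setting the flag on an existing web view has no effect.
let config = WKWebViewConfiguration()
config.allowsInlineMediaPlayback = true
let webView = WKWebView(frame: .zero, configuration: config)
// In the page markup, the video element also needs:
// <video playsinline src="..."></video>
```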
Post not yet marked as solved
Hello, I hope you are well.
I am developing a hybrid application; the application itself is a web app. The problem I have is that iOS does not display videos in Safari (in Google Chrome it does display them), and when I build the application for iOS it does not display the videos either. I do not know if it is due to the same problem that happens with Safari. To hybridize the application I am using @capacitor/core.
Post not yet marked as solved
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 CoreFoundation 0x000000019d415ec8 0x19d371000 + 675528
1 CoreVideo 0x00000001a5a3d38c 0x1a5a2f000 + 58252
2 CoreVideo 0x00000001a5a3e498 0x1a5a2f000 + 62616
3 MirAIe 0x0000000103d7620c 0x102968000 + 21029388
4 MirAIe 0x0000000103beb76c 0x102968000 + 19412844
5 libdispatch.dylib 0x000000019d085a84 0x19d083000 + 10884
6 libdispatch.dylib 0x000000019d08781c 0x19d083000 + 18460
7 libdispatch.dylib 0x000000019d095c70 0x19d083000 + 76912
8 CoreFoundation 0x000000019d414340 0x19d371000 + 668480
9 CoreFoundation 0x000000019d40e218 0x19d371000 + 643608
10 CoreFoundation 0x000000019d40d308 0x19d371000 + 639752
11 GraphicsServices 0x00000001b4a90734 0x1b4a8d000 + 14132
12 UIKitCore 0x000000019fe8b75c 0x19f2c1000 + 12363612
13 UIKitCore 0x000000019fe90fcc 0x19f2c1000 + 12386252
14 MirAIe 0x00000001029818a4 0x102968000 + 104612
15 libdyld.dylib 0x000000019d0c9cf8 0x19d0c8000 + 7416
Thread 0 crashed with ARM Thread State (64-bit):
x0: 0x0000000281da30c0 x1: 0x0000000000000000 x2: 0x0000000281da30c0 x3: 0x00000001acafa188
x4: 0x00000000000062dc x5: 0x00000000fffffffe x6: 0x000000016d495f34 x7: 0x000000016d495f28
x8: 0x0000000000000000 x9: 0x0000000100000053 x10: 0x00006e0105ac30c0 x11: 0x007ffffffffffff8
x12: 0x0000000000000055 x13: 0x0000000106992330 x14: 0x00000000f781f800 x15: 0x0000000104bb5a00
x16: 0x00006e0105ac30c0 x17: 0x0000000105ac30c0 x18: 0x0000000110530abb x19: 0x0000000281da30c0
x20: 0x0000000000000000 x21: 0x0000000283614040 x22: 0x00000002839b9080 x23: 0x0000000000000114
x24: 0x0000000000000000 x25: 0x000000010572f9a0 x26: 0x000000000000000f x27: 0x0000000000000000
x28: 0x0000000002ffffff fp: 0x000000016d496970 lr: 0xbf283781a5a3d38c
sp: 0x000000016d496970 pc: 0x000000019d415ec8 cpsr: 0x20000000
esr: 0xf200c472 Address size fault
Post not yet marked as solved
I created a style transfer model using Create ML and cannot save the generated stylized image to the temporary directory. I am unsure if it has to do with the way I create the pixel buffer (below):
import Vision
import CoreML
import CoreVideo
import CoreImage

let model = style1()

// Set the input size of the model
let modelInputSize = CGSize(width: 512, height: 512)

// Create a CVPixelBuffer
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(modelInputSize.width),
                    Int(modelInputSize.height),
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &pixelBuffer)

// Render the source image into the pixel buffer
let context = CIContext()
let argPathUrl = "file:///pathhere"
let modelImageUrl = URL(string: argPathUrl)!
guard let ciImageData = CIImage(contentsOf: modelImageUrl) else { return }
context.render(ciImageData, to: pixelBuffer!)

// Predict the stylized image
let output = try? model.prediction(image: pixelBuffer!)
let predImage = CIImage(cvPixelBuffer: (output?.stylizedImage)!)

// Write the result as a PNG to the temporary directory
let context2 = CIContext()
let format = kCIFormatRGBA16
try! context2.writePNGRepresentation(of: predImage,
                                     to: FileManager.default.temporaryDirectory.appendingPathComponent("testcgi.png"),
                                     format: format,
                                     colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                                     options: [:])
let saveUrl = "testcgi.png"
return
Post not yet marked as solved
Hey guys, I tried to follow the super confusing doc on this, but no luck yet.
https://developer.apple.com/av-foundation/Incorporating-HDR-video-with-Dolby-Vision-into-your-apps.pdf
I have code that uses AVAssetReader and AVAssetReaderTrackOutput to pull frames directly from a video, but the colors are wrong for HDR Dolby Vision videos. Basically, what I want is to extract frames from an HDR Dolby Vision video as images and have those images not be the wrong color. I don't care if they are only 8 bits per color instead of 10 with all the new stuff, just the closest that old-fashioned 8-bit-per-color supports.
I added the statement marked // added for dolby hdr per the above doc (spread across several lines), but no luck, still bad colors.
Any hints on what I am missing?
NSMutableDictionary *dictionary = [[NSMutableDictionary alloc] init];
[dictionary setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
               forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];

// added for dolby hdr
dictionary[AVVideoColorPropertiesKey] = @{
    AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
    AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
    AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
};

AVAssetReaderTrackOutput *asset_reader_output =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:video_track outputSettings:dictionary];
[asset_reader addOutput:asset_reader_output];

// from here we get sample buffers like this
CMSampleBufferRef buffer2 = [asset_reader_output copyNextSampleBuffer];

// then a pixel buffer
CVPixelBufferRef inputPixelBuffer = CMSampleBufferGetImageBuffer(buffer2);

// then a CIImage
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:inputPixelBuffer]; // one video frame
Then we use standard stuff to convert that to a CGImage/UIImage.
Post not yet marked as solved
How do I use the software APIs on my iPhone to capture an instant image from an external USB web camera?
Post not yet marked as solved
I am developing an app that sends pixel buffers from a Broadcast Upload Extension to OpenTok. When I run my broadcast extension, it hits its memory limit within seconds. I have been looking for ways to reduce the size and scale of the CMSampleBuffers, and ended up first converting them to CIImage, then scaling them, and then converting them back to CVPixelBuffers to send to the OpenTok servers. Unfortunately, the extension still crashes even though I tried to shrink the pixel buffers. My code follows.
First I convert the CMSampleBuffer to a CVPixelBuffer in the processSampleBuffer function of the SampleHandler, then pass the CVPixelBuffer to my function along with its timestamp. There I convert the CVPixelBuffer to a CIImage and scale it using a CIFilter (CILanczosScaleTransform). After that, I generate a pixel buffer from the CIImage using a pixel buffer pool and a CIContext, and then send the new buffer to the OpenTok servers using videoCaptureConsumer.
func processPixelBuffer(pixelBuffer: CVPixelBuffer, timeStamp ts: CMTime) {
    guard let ciImage = self.scaleFilterImage(inputImage: pixelBuffer.cmIImage,
                                              withAspectRatio: 1.0,
                                              scale: CGFloat(kVideoFrameScaleFactor)) else { return }
    if self.pixelBufferPool == nil ||
        self.pixelBuffer?.size != pixelBuffer.size {
        self.destroyPixelBuffers()
        self.updateBufferPool(newWidth: Int(ciImage.extent.size.width),
                              newHeight: Int(ciImage.extent.size.height))
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                 self.pixelBufferPool,
                                                 &self.pixelBuffer) == kCVReturnSuccess
        else { return }
    }
    context?.render(ciImage, to: pixelBuffer)
    self.videoCaptureConsumer?.consumeImageBuffer(pixelBuffer,
                                                  orientation: .up,
                                                  timestamp: ts,
                                                  metadata: nil)
}
If the pixelBufferPool is nil, or there is a change in the size of the pixel buffer, I update the pool.
private func updateBufferPool(newWidth: Int, newHeight: Int) {
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: UInt(self.videoFormat),
        kCVPixelBufferWidthKey as String: newWidth,
        kCVPixelBufferHeightKey as String: newHeight,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    CVPixelBufferPoolCreate(nil, nil, pixelBufferAttributes as NSDictionary?, &pixelBufferPool)
}
This is the function I use to scale the CIImage:
func scaleFilterImage(inputImage: CIImage, withAspectRatio aspectRatio: CGFloat, scale: CGFloat) -> CIImage? {
    scaleFilter?.setValue(inputImage, forKey: kCIInputImageKey)
    scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
    scaleFilter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)
    return scaleFilter?.outputImage
}
My question is: why does it still keep crashing, and is there another way to reduce the CVPixelBuffer size without causing a memory-limit crash?
I would appreciate any help on this. Swift or Objective-C, I am open to all suggestions.
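Two things commonly blow the tight broadcast-extension memory budget: creating CIContext objects per frame (each one caches GPU resources) and letting per-frame Core Foundation temporaries pile up. A sketch worth trying under those assumptions (names are illustrative, not OpenTok API):

```swift
import CoreImage
import CoreVideo

/// Reuse one CIContext for the lifetime of the extension and drain an
/// autoreleasepool on every frame so temporaries die immediately.
final class FrameScaler {
    // One shared context; .cacheIntermediates false trims CI's caches.
    private let context = CIContext(options: [.cacheIntermediates: false])

    func render(_ ciImage: CIImage, into destination: CVPixelBuffer) {
        autoreleasepool {
            // Everything temporary allocated here is released per frame.
            context.render(ciImage, to: destination)
        }
    }
}
```

If a CIContext is being created anywhere inside the per-frame path in the code above, hoisting it out like this is usually the first fix to try.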
Post not yet marked as solved
Videos uploaded using an iPhone 11 play only the audio when embedded with a video tag, but play fine when downloaded. I already tried uploading from other iPhone devices, and those work well. Why is that?
Post not yet marked as solved
So my timeline is this:
Got a 16-inch MBP in March with these graphics options:
AMD Radeon Pro 5500M 4 GB
Intel UHD Graphics 630 1536 MB
Up until 10.15.5 came out, I had zero problems/crashes, and I always have the laptop closed with an external display connected via an official Apple A/V adapter using HDMI. As soon as I installed 10.15.5, the panics started happening.
Reason:					 (1 monitored services unresponsive): checkin with service: WindowServer returned not alive with context: unresponsive work processor(s): WindowServer main thread	40 seconds since last successful checkin
Literally right after the update finished, I didn't touch the laptop for some time; the external monitor went to sleep, and the laptop panicked and rebooted. I installed apps like Caffeine to prevent the external monitor from going to sleep and managed to continue working.
Some days after this, the crashes started happening even when the monitor was not going to sleep, usually when using apps that put some strain on the GPU, such as video conferencing apps. These crashes became more frequent: the display froze for about 2 minutes, the laptop got very warm but the fans would not speed up; then after 2 minutes the fans went into turbo mode for about 1 second and the laptop rebooted.
After this I reverted to 10.15.4, reset the SMC, etc., and the panics when the display goes to sleep are gone, but the crashes while I'm using the computer continue. I tried ditching the adapter and using a USB-C DisplayPort cable, but the problem remained.
As a final test, I unplugged everything from the laptop and disabled "automatic graphics switching" to force the AMD card to be used even with no external display. Sure enough, I was able to reproduce the issue. So it seems not related to an external display, but to the AMD card itself (which is always used when an external display is connected).
Sad times.
Post not yet marked as solved
Following the document and demo
mixing_metal_and_opengl_rendering_in_a_view
the section "Select a Compatible Pixel Format" only shows MTLPixelFormatBGRA8Unorm, as below.
If I want to use MTLPixelFormatRGBA8Unorm, how can I find the CoreVideo pixel format and GL format that match MTLPixelFormatRGBA8Unorm?
Thanks in advance.
// Table of equivalent formats across CoreVideo, Metal, and OpenGL
static const AAPLTextureFormatInfo AAPLInteropFormatTable[] =
{
    // Core Video Pixel Format,               Metal Pixel Format,            GL internalformat, GL format,   GL type
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm,      GL_RGBA,           GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#if TARGET_IOS
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm_sRGB, GL_RGBA,           GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#else
    { kCVPixelFormatType_ARGB2101010LEPacked, MTLPixelFormatBGR10A2Unorm,    GL_RGB10_A2,       GL_BGRA,     GL_UNSIGNED_INT_2_10_10_10_REV },
    { kCVPixelFormatType_32BGRA,              MTLPixelFormatBGRA8Unorm_sRGB, GL_SRGB8_ALPHA8,   GL_BGRA,     GL_UNSIGNED_INT_8_8_8_8_REV },
    { kCVPixelFormatType_64RGBAHalf,          MTLPixelFormatRGBA16Float,     GL_RGBA,           GL_RGBA,     GL_HALF_FLOAT },
#endif
};
I am working on a video editing app, and I recently changed my code to render frames using a custom compositor. Filters render well, but when I try to change a property of a filter, for example the intensity, the updates are laggy. I didn't have this problem before using the custom compositor. The problem, I'm assuming, is that the renderer object now lives inside the compositor, so when I bind its values to a slider outside the compositor class, it doesn't update instantly. I am using SwiftUI. Here is part of my custom compositor:
class CustomVideoCompositor: NSObject, AVVideoCompositing {
    var metalContext: RendererContext?

    override init() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let commandQueue = device.makeCommandQueue() else {
            super.init()
            return
        }
        var newTextureCache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &newTextureCache)
        guard let textureCache = newTextureCache else {
            super.init()
            return
        }
        metalContext = RendererContext(device: device, commandQueue: commandQueue, textureCache: textureCache)
        super.init()
    } //init

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        autoreleasepool {
            renderingQueue.async {
                if self.shouldCancelAllRequests {
                    request.finishCancelledRequest()
                } else {
                    if let currentInstruction = request.videoCompositionInstruction as? CustomVideoCompositionInstruction {
                        guard let inputBuffer = request.sourceFrame(byTrackID: currentInstruction.trackID),
                              let videoEdits = currentInstruction.videoEdits
                        else {
                            request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                            return
                        }
                        request.finish(withComposedVideoFrame: self.renderVideoEdits(request: request, videoEdits: videoEdits, inputBuffer: inputBuffer))
                    } else if let currentInstruction = request.videoCompositionInstruction as? TransitionInstruction {
                        guard let fromBuffer = request.sourceFrame(byTrackID: currentInstruction.fromTrackID),
                              let toBuffer = request.sourceFrame(byTrackID: currentInstruction.toTrackID),
                              let outputBuffer = request.renderContext.newPixelBuffer(),
                              let fromVideoEdits = currentInstruction.fromVideoEdits,
                              let toVideoEdits = currentInstruction.toVideoEdits,
                              let transitionEdit = currentInstruction.transitionEdit,
                              let metalContext = self.metalContext
                        else {
                            request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                            return
                        }
                        if transitionEdit.transition.context == nil {
                            transitionEdit.transition.setContext(context: metalContext)
                        }
                        transitionEdit.transition.prepare()
                        let renderedFromBuffer = self.renderVideoEdits(request: request, videoEdits: fromVideoEdits, inputBuffer: fromBuffer)
                        let renderedToBuffer = self.renderVideoEdits(request: request, videoEdits: toVideoEdits, inputBuffer: toBuffer)
                        let renderedOutputBuffer = transitionEdit.transition.render(fromBuffer: renderedFromBuffer, toBuffer: renderedToBuffer, destinationBuffer: outputBuffer)
                        request.finish(withComposedVideoFrame: renderedOutputBuffer)
                    } else {
                        request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
                    }
                }
            } //renderingQueue.async
        } //autoreleasepool
    } //startRequest

    func renderVideoEdits(request: AVAsynchronousVideoCompositionRequest, videoEdits: VideoEdits, inputBuffer: CVPixelBuffer) -> CVPixelBuffer {
        guard let metalContext = self.metalContext else {
            return inputBuffer
        }
        var renderedBuffer: CVPixelBuffer = inputBuffer
        for filter in videoEdits.filters {
            if filter.context == nil {
                filter.setContext(context: metalContext)
            }
            filter.prepare()
            guard let outputBuffer = request.renderContext.newPixelBuffer() else {
                return renderedBuffer
            }
            renderedBuffer = filter.render(inputBuffer: renderedBuffer, outputBuffer: outputBuffer)
        }
        return renderedBuffer
    } //renderVideoEdits

    func cancelAllPendingVideoCompositionRequests() {
        renderingQueue.sync {
            shouldCancelAllRequests = true
        }
        renderingQueue.async {
            self.shouldCancelAllRequests = false
        }
    } //cancelAllPendingVideoCompositionRequests
} //CustomVideoCompositor
I access the renderer in a SwiftUI view by doing something like this:
@State var renderer: FilterRenderer
renderer = videoComposition.instructions[currentInstruction].videoEdits.filter
Slider(value: $renderer.intensity, in: 0.0...1.0)
I used to render filters using an AVPlayerItemVideoOutput, and that implementation worked just fine: it was fast and efficient. Any idea why this is happening? I needed to switch to a custom compositor so that I can source separate frames for transitions.
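One likely contributor: SwiftUI only re-renders when it can observe a change, and `@State` on a reference type buried inside the compositor mutates state SwiftUI never sees. A sketch under that assumption (it presumes `FilterRenderer` is a class you control; note also that AVFoundation may compose frames ahead of time, so a parameter change can take a few frames to show up regardless of bindings):

```swift
import SwiftUI

/// Make the renderer observable so the slider drives a published
/// property the compositor reads on each frame.
final class FilterRenderer: ObservableObject {
    @Published var intensity: Double = 0.5   // read by the compositor per frame
}

struct FilterControls: View {
    @ObservedObject var renderer: FilterRenderer

    var body: some View {
        Slider(value: $renderer.intensity, in: 0.0...1.0)
    }
}
```

With `@ObservedObject` (or `@StateObject` at the owning level) the slider and any dependent UI update immediately, while the compositor keeps reading the latest value from the shared object.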
Post not yet marked as solved
Hello! I'd like to ask: is there any method to detect whether an application has started to share or record my screen? There doesn't seem to be any notification from the system, but maybe it's possible to detect it somehow, programmatically or from the CLI.
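If this is on iOS, UIKit does expose exactly this: `UIScreen.isCaptured` is true while the screen is being recorded, mirrored, or sent over AirPlay, and `capturedDidChangeNotification` fires when that state flips. A sketch assuming an iOS app (on macOS there is no equivalent public notification as far as I know):

```swift
import UIKit

/// Observe system screen-capture state changes.
final class CaptureObserver {
    private var token: NSObjectProtocol?

    func start() {
        token = NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil, queue: .main) { _ in
                print("screen capture active: \(UIScreen.main.isCaptured)")
        }
    }

    deinit {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}
```

Note this reports that capture is happening, not which application is doing it.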
Post not yet marked as solved
On a macOS 12 system, iTunes, Music, and other apps trigger the CMIOObjectAddPropertyListener(Block) callback function when they are opened, even though the camera device is not actually started.
Post not yet marked as solved
This project targets both Android and iOS, so I use a CAEAGLLayer to present live video at 60 fps. All the code works well on the iPhone 11 and older devices, but on the iPhone 12 and iPhone 13 it behaves strangely.
The layer drops some frames. I profiled with Instruments and found that some drawables are waited on for more than 1/60 second. And after I turn on the screen recorder, it works well: all drawables are waited on for less than 1/60 second and the layer presents video at 60 fps. After I turn off the screen recorder, it stops working again.
Can anyone tell me what is happening and how to work around it?
Post not yet marked as solved
I use FFmpeg to play back video with VideoToolbox (hardware decoding). How can I get an MTL::Texture from an AVFrame when I receive a hardware frame from
avcodec_receive_frame(avctx, avframe)? There are few examples with metal-cpp, and I can't find the CVPixelBufferRef type in metal-cpp; I'm really confused by this.
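For reference: with FFmpeg's VideoToolbox decoder, the decoded CVPixelBufferRef travels in `AVFrame.data[3]`. metal-cpp has no CoreVideo bindings, so the bridging is typically done in a small Obj-C/Swift shim that hands the resulting texture back to C++ as an opaque pointer. A Swift sketch of the CoreVideo side (it assumes a BGRA pixel buffer; NV12 output needs one texture per plane with `.r8Unorm`/`.rg8Unorm`):

```swift
import CoreVideo
import Metal

/// Wrap a decoded CVPixelBuffer as a Metal texture via a texture cache.
/// The cache should be created once with CVMetalTextureCacheCreate.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                              pixelBuffer, nil, .bgra8Unorm,
                                              width, height, 0, &cvTexture)
    // Keep cvTexture alive for as long as the GPU uses the returned texture.
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}
```

The important lifetime rule is that both the CVMetalTexture wrapper and the original AVFrame must stay alive until the command buffer that samples the texture has completed.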