Post not yet marked as solved
I have a DLP-Link 3D projector which I'd like to drive with a hand-made player.
So far in my project: a class MovieView : NSView inside an NSWindow, with stub drawing code.
I know that if I place drawing code in the draw(_:) method, NSGraphicsContext.current will be set up for me to use. But I'm drawing from a high-priority thread (the display link), so I obviously have to set it up myself.
How should I do that in Swift?
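For what it's worth, here is a minimal sketch of one way to do this (the offscreen-bitmap approach and all names are assumptions, not established guidance): build a CGBitmapContext on the display-link thread, wrap it in an NSGraphicsContext so AppKit drawing calls work, then hand the finished CGImage to the view's layer on the main thread.

```swift
import AppKit

// Sketch: draw a frame off the main thread into an offscreen bitmap,
// then publish it to a layer-backed view. Names are illustrative.
func renderFrame(into view: NSView, size: CGSize) {
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    guard let cg = CGContext(data: nil,
                             width: Int(size.width), height: Int(size.height),
                             bitsPerComponent: 8, bytesPerRow: 0,
                             space: colorSpace,
                             bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return }

    // Make this context "current" so AppKit drawing calls work on this thread.
    let nsContext = NSGraphicsContext(cgContext: cg, flipped: false)
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = nsContext
    NSColor.black.setFill()
    NSBezierPath(rect: CGRect(origin: .zero, size: size)).fill()
    // ... draw the movie frame here ...
    NSGraphicsContext.restoreGraphicsState()

    // Hand the finished bitmap to the view's layer on the main thread.
    if let image = cg.makeImage() {
        DispatchQueue.main.async { view.layer?.contents = image }
    }
}
```

NSGraphicsContext.current is thread-specific, so setting it on the display-link thread does not disturb the main thread's context.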
Post not yet marked as solved
Hi, I'm working on an app that can generate videos (or at least it should). I don't know which frameworks/libraries can be used to do this. From my input (image, position, …) I want to generate a video. The code should be compatible with SwiftUI, so that I can have something like a live preview, as in iMovie or Clips. (My current code is flexible, so it doesn't matter if I have to restructure some small things.)
Thanks in advance
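One common route is AVFoundation's AVAssetWriter. A sketch under stated assumptions (30 fps, H.264, 1080×1080 output, and that you have already turned each image/position into a CVPixelBuffer):

```swift
import AVFoundation
import CoreVideo

// Sketch: write an array of pixel buffers out as an .mp4.
// `frames` is assumed to be pre-rendered CVPixelBuffers.
func writeVideo(frames: [CVPixelBuffer], to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1080,
        AVVideoHeightKey: 1080
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let frameDuration = CMTime(value: 1, timescale: 30) // 30 fps
    var time = CMTime.zero
    for frame in frames {
        while !input.isReadyForMoreMediaData { usleep(1000) } // naive backpressure
        adaptor.append(frame, withPresentationTime: time)
        time = CMTimeAdd(time, frameDuration)
    }
    input.markAsFinished()
    writer.finishWriting { }
}
```

For a live preview in SwiftUI you would typically preview the frames directly (e.g. in an Image or a Metal view) rather than re-encoding the video on every change.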
Post not yet marked as solved
I have read that, as of macOS 10.14, setting setAllowsConcurrentViewDrawing: to YES on an NSWindow and setCanDrawConcurrently: to YES on its views is no longer supported as a way to perform drawing outside the main thread.
All the documentation I find on the internet strongly advises programmers to clean up their main loops so that only drawing and user-input handling happen there. What else is left to do when this means multiple programmer-years of work?
I used to draw a CGImage wrapped in an NSImage through an NSGraphicsContext on a separate looping thread, setting the NSView's needsDisplay and then calling display, to show smooth animations.
I also picked up that NSOpenGLView has been deprecated.
With all that said, what would be the best way to go to perform threaded drawing in a NSView?
Thanks
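Since NSOpenGLView is deprecated and concurrent view drawing is gone, one supported direction is a layer-hosted view backed by CAMetalLayer, whose drawables may be obtained and presented from a background thread. A rough sketch (class and method names are mine, and the render pass just clears the drawable):

```swift
import AppKit
import Metal
import QuartzCore

// Sketch: an NSView hosting a CAMetalLayer; drawFrame may be
// called from a background (e.g. display-link) thread.
final class MetalMovieView: NSView {
    let metalLayer = CAMetalLayer()
    let device = MTLCreateSystemDefaultDevice()!
    lazy var queue = device.makeCommandQueue()!

    override init(frame: NSRect) {
        super.init(frame: frame)
        metalLayer.device = device
        metalLayer.pixelFormat = .bgra8Unorm
        layer = metalLayer        // layer-hosting: set layer first,
        wantsLayer = true         // then wantsLayer
    }
    required init?(coder: NSCoder) { fatalError("not supported") }

    // Call this from your rendering thread.
    func drawFrame(clearColor: MTLClearColor) {
        guard let drawable = metalLayer.nextDrawable() else { return }
        let pass = MTLRenderPassDescriptor()
        pass.colorAttachments[0].texture = drawable.texture
        pass.colorAttachments[0].loadAction = .clear
        pass.colorAttachments[0].clearColor = clearColor
        let cmd = queue.makeCommandBuffer()!
        cmd.makeRenderCommandEncoder(descriptor: pass)!.endEncoding()
        cmd.present(drawable)
        cmd.commit()
    }
}
```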
Post not yet marked as solved
I am developing a hybrid app using JavaScript and HTML; to build it in Xcode I use Capacitor. The problem is that my app includes videos but I cannot block the native iOS player, and I want to block it.
webview.allowsInlineMediaPlayback = YES;
I found this, but the problem is that it only blocks it for iPad, not for iPhone.
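For reference, inline (non-fullscreen) playback in WKWebView has to be opted into on the WKWebViewConfiguration before the web view is created, and on iPhone the video element additionally needs the playsinline attribute. A sketch in Swift (Capacitor wires its web view up itself, so treat the names as illustrative):

```swift
import WebKit

// Sketch: the configuration is copied at init time, so this must be
// set *before* creating the WKWebView.
let configuration = WKWebViewConfiguration()
configuration.allowsInlineMediaPlayback = true
// Optionally stop media from requiring a user gesture to start:
configuration.mediaTypesRequiringUserActionForPlayback = []

let webView = WKWebView(frame: .zero, configuration: configuration)
// In the page itself, iPhone also requires:
// <video src="movie.mp4" playsinline></video>
```

Without the playsinline attribute, iPhone falls back to the fullscreen native player even when allowsInlineMediaPlayback is true, which may explain why the setting appears to work only on iPad.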
Post not yet marked as solved
Hello, I hope you are well.
I am developing a hybrid application; the application itself is a web app. The problem I have is that iOS does not display the videos in Safari (in Google Chrome it does). When I build the application for iOS it does not display the videos either; I don't know if it is due to the same problem that happens with Safari. To hybridize the application I am using @capacitor/core.
Post not yet marked as solved
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 CoreFoundation 0x000000019d415ec8 0x19d371000 + 675528
1 CoreVideo 0x00000001a5a3d38c 0x1a5a2f000 + 58252
2 CoreVideo 0x00000001a5a3e498 0x1a5a2f000 + 62616
3 MirAIe 0x0000000103d7620c 0x102968000 + 21029388
4 MirAIe 0x0000000103beb76c 0x102968000 + 19412844
5 libdispatch.dylib 0x000000019d085a84 0x19d083000 + 10884
6 libdispatch.dylib 0x000000019d08781c 0x19d083000 + 18460
7 libdispatch.dylib 0x000000019d095c70 0x19d083000 + 76912
8 CoreFoundation 0x000000019d414340 0x19d371000 + 668480
9 CoreFoundation 0x000000019d40e218 0x19d371000 + 643608
10 CoreFoundation 0x000000019d40d308 0x19d371000 + 639752
11 GraphicsServices 0x00000001b4a90734 0x1b4a8d000 + 14132
12 UIKitCore 0x000000019fe8b75c 0x19f2c1000 + 12363612
13 UIKitCore 0x000000019fe90fcc 0x19f2c1000 + 12386252
14 MirAIe 0x00000001029818a4 0x102968000 + 104612
15 libdyld.dylib 0x000000019d0c9cf8 0x19d0c8000 + 7416
Thread 0 crashed with ARM Thread State (64-bit):
x0: 0x0000000281da30c0 x1: 0x0000000000000000 x2: 0x0000000281da30c0 x3: 0x00000001acafa188
x4: 0x00000000000062dc x5: 0x00000000fffffffe x6: 0x000000016d495f34 x7: 0x000000016d495f28
x8: 0x0000000000000000 x9: 0x0000000100000053 x10: 0x00006e0105ac30c0 x11: 0x007ffffffffffff8
x12: 0x0000000000000055 x13: 0x0000000106992330 x14: 0x00000000f781f800 x15: 0x0000000104bb5a00
x16: 0x00006e0105ac30c0 x17: 0x0000000105ac30c0 x18: 0x0000000110530abb x19: 0x0000000281da30c0
x20: 0x0000000000000000 x21: 0x0000000283614040 x22: 0x00000002839b9080 x23: 0x0000000000000114
x24: 0x0000000000000000 x25: 0x000000010572f9a0 x26: 0x000000000000000f x27: 0x0000000000000000
x28: 0x0000000002ffffff fp: 0x000000016d496970 lr: 0xbf283781a5a3d38c
sp: 0x000000016d496970 pc: 0x000000019d415ec8 cpsr: 0x20000000
esr: 0xf200c472 Address size fault
Post not yet marked as solved
I created a style transfer model using CreateML and cannot save the generated styled image to the temporary directory. I'm unsure if it has to do with the way I create the pixelBuffer (below):
import Vision
import CoreML
import CoreVideo
let model = style1()

// Set the input size of the model.
let modelInputSize = CGSize(width: 512, height: 512)

// Create a CVPixelBuffer.
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(modelInputSize.width),
                    Int(modelInputSize.height),
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &pixelBuffer)

// Put bytes into the pixel buffer.
let context = CIContext()
let argPathUrl = "file:///pathhere"
let modelImageUrl = URL(string: argPathUrl)!
guard let ciImageData = CIImage(contentsOf: modelImageUrl) else { return }
context.render(ciImageData, to: pixelBuffer!)

// Predict the image.
let output = try? model.prediction(image: pixelBuffer!)
let predImage = CIImage(cvPixelBuffer: (output?.stylizedImage)!)
let context2 = CIContext()
let format = CIFormat.RGBA16
try! context2.writePNGRepresentation(of: predImage,
                                     to: FileManager.default.temporaryDirectory.appendingPathComponent("testcgi.png"),
                                     format: format,
                                     colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                                     options: [:])
return
Post not yet marked as solved
Hey guys, I tried to follow the super confusing doc on this, but no luck yet.
https://developer.apple.com/av-foundation/Incorporating-HDR-video-with-Dolby-Vision-into-your-apps.pdf
I have code that uses AVAssetReader and AVAssetReaderTrackOutput to pull frames directly from a video, but the colors are wrong for HDR Dolby Vision videos. Basically what I want is to extract frames from an HDR Dolby Vision video as images and have those images not be the wrong color. I don't care if they are only 8 bits per color instead of 10 and all the new stuff, just the closest that old-fashioned 8 bits per color supports.
I added the statements marked // added for dolby hdr per the above doc (spread across several lines); no luck, still bad colors.
Any hints of what I am missing?
NSMutableDictionary* dictionary = [[NSMutableDictionary alloc] init];
[dictionary setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
// added for dolby hdr
dictionary[AVVideoColorPropertiesKey] = @{
AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
};
AVAssetReaderTrackOutput* asset_reader_output = [[AVAssetReaderTrackOutput alloc] initWithTrack:video_track outputSettings:dictionary];
[asset_reader addOutput:asset_reader_output];
// from here we get sample buffers like this
CMSampleBufferRef buffer2 = [asset_reader_output copyNextSampleBuffer];
// then pixel buffer
CVPixelBufferRef inputPixelBuffer = CMSampleBufferGetImageBuffer(buffer2);
// then a CIImage
CIImage* ciImage = [CIImage imageWithCVPixelBuffer:inputPixelBuffer]; // one vid frame
then we use standard stuff to convert that to a CGImage/UIImage
Post not yet marked as solved
How do I use the SW API on my iPhone to get an instant image from an external USB web camera?
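Assuming "SW API" means the regular software frameworks, a sketch (iOS 17+, where external USB cameras surface as AVCaptureDevice instances of type .external; names are illustrative, and error handling is omitted):

```swift
import AVFoundation

// Sketch: find an external USB camera and set up a session to grab
// a still image from it (iOS 17+).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified)

guard let camera = discovery.devices.first else {
    fatalError("no external USB camera attached")
}

let session = AVCaptureSession()
let input = try! AVCaptureDeviceInput(device: camera)
let photoOutput = AVCapturePhotoOutput()
session.addInput(input)
session.addOutput(photoOutput)
session.startRunning()
// Then capture a frame with a delegate conforming to
// AVCapturePhotoCaptureDelegate:
// photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: delegate)
```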
Post not yet marked as solved
Videos uploaded using an iPhone 11 only play audio when embedded with a video tag, but play fine when downloaded. I have already tried uploading from other iPhone devices and it works well. Why is that?
Post not yet marked as solved
Following the document and demo
mixing_metal_and_opengl_rendering_in_a_view
the section "Select a Compatible Pixel Format" only shows MTLPixelFormatBGRA8Unorm, as follows.
If I want to use MTLPixelFormatRGBA8Unorm, how can I find the CoreVideo pixel format and GL format that match it?
Thanks in advance.
// Table of equivalent formats across CoreVideo, Metal, and OpenGL
static const AAPLTextureFormatInfo AAPLInteropFormatTable[] =
{
// Core Video Pixel Format, Metal Pixel Format, GL internalformat, GL format, GL type
{ kCVPixelFormatType_32BGRA, MTLPixelFormatBGRA8Unorm, GL_RGBA, GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#if TARGET_IOS
{ kCVPixelFormatType_32BGRA, MTLPixelFormatBGRA8Unorm_sRGB, GL_RGBA, GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV },
#else
{ kCVPixelFormatType_ARGB2101010LEPacked, MTLPixelFormatBGR10A2Unorm, GL_RGB10_A2, GL_BGRA, GL_UNSIGNED_INT_2_10_10_10_REV },
{ kCVPixelFormatType_32BGRA, MTLPixelFormatBGRA8Unorm_sRGB, GL_SRGB8_ALPHA8, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV },
{ kCVPixelFormatType_64RGBAHalf, MTLPixelFormatRGBA16Float, GL_RGBA, GL_RGBA, GL_HALF_FLOAT },
#endif
};
I am working on a video editing app and I recently changed my code to render frames using a custom compositor. Filters are rendered well, but when I try to change a property of the filter, for example the intensity, the updates are laggy. I didn't have this problem before using the custom compositor. The problem, I'm assuming, is that the renderer object now lives inside the compositor, so when I bind its values to a slider outside the compositor class, it doesn't update instantly. I am using SwiftUI. Here is part of my custom compositor:
class CustomVideoCompositor: NSObject, AVVideoCompositing {
var metalContext: RendererContext?
override init() {
guard let device = MTLCreateSystemDefaultDevice(),
let commandQueue = device.makeCommandQueue() else {
super.init()
return
}
var newTextureCache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &newTextureCache)
guard let textureCache = newTextureCache else {
super.init()
return
}
metalContext = RendererContext(device: device, commandQueue: commandQueue, textureCache: textureCache)
super.init()
}//init
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
autoreleasepool {
renderingQueue.async {
if self.shouldCancelAllRequests {
request.finishCancelledRequest()
} else {
if let currentInstruction = request.videoCompositionInstruction as? CustomVideoCompositionInstruction {
guard let inputBuffer = request.sourceFrame(byTrackID: currentInstruction.trackID),
let videoEdits = currentInstruction.videoEdits
else {
request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
return
}
request.finish(withComposedVideoFrame: self.renderVideoEdits(request: request, videoEdits: videoEdits, inputBuffer: inputBuffer))
} else if let currentInstruction = request.videoCompositionInstruction as? TransitionInstruction {
guard let fromBuffer = request.sourceFrame(byTrackID: currentInstruction.fromTrackID),
let toBuffer = request.sourceFrame(byTrackID: currentInstruction.toTrackID),
let outputBuffer = request.renderContext.newPixelBuffer(),
let fromVideoEdits = currentInstruction.fromVideoEdits,
let toVideoEdits = currentInstruction.toVideoEdits,
let transitionEdit = currentInstruction.transitionEdit,
let metalContext = self.metalContext
else {
request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
return
}
if transitionEdit.transition.context == nil {
transitionEdit.transition.setContext(context: metalContext)
}
transitionEdit.transition.prepare()
let renderedFromBuffer = self.renderVideoEdits(request: request, videoEdits: fromVideoEdits, inputBuffer: fromBuffer)
let renderedToBuffer = self.renderVideoEdits(request: request, videoEdits: toVideoEdits, inputBuffer: toBuffer)
let renderedOutputBuffer = transitionEdit.transition.render(fromBuffer: renderedFromBuffer, toBuffer: renderedToBuffer, destinationBuffer: outputBuffer)
request.finish(withComposedVideoFrame: renderedOutputBuffer)
} else {
request.finish(with: PixelBufferRequestError.newRenderedPixelBufferForRequestFailure)
}
}
}//renderingQueue.async
}//autoreleasepool
}//startRequest
func renderVideoEdits(request: AVAsynchronousVideoCompositionRequest, videoEdits: VideoEdits, inputBuffer: CVPixelBuffer) -> CVPixelBuffer {
guard let metalContext = self.metalContext else {
return inputBuffer
}
var renderedBuffer: CVPixelBuffer = inputBuffer
for filter in videoEdits.filters {
if filter.context == nil {
filter.setContext(context: metalContext)
}
filter.prepare()
guard let outputBuffer = request.renderContext.newPixelBuffer() else {
return renderedBuffer
}
renderedBuffer = filter.render(inputBuffer: renderedBuffer, outputBuffer: outputBuffer)
}
return renderedBuffer
}//renderVideoEdits
func cancelAllPendingVideoCompositionRequests() {
renderingQueue.sync {
shouldCancelAllRequests = true
}
renderingQueue.async {
self.shouldCancelAllRequests = false
}
}//cancelAllPendingVideoCompositionRequests
}//CustomVideoCompositor
I access the renderer in a SwiftUI view by doing something like this:
@State var renderer: FilterRenderer
renderer = videoComposition.instructions[currentInstruction].videoEdits.filter
Slider(value: $renderer.intensity, in: 0.0...1.0)
I used to render filters using an AVPlayerItemVideoOutput and this implementation worked just fine. It was fast and efficient. Any idea why this is happening? I needed to switch to using a custom compositor so I can source separate frames for transitions.
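Not an answer from the thread, but one pattern that may help: keep the filter parameters in a reference type that SwiftUI observes, so the slider mutates the same instance the compositor reads on its next startRequest. A sketch (FilterModel and the property names are mine, not the app's actual types):

```swift
import SwiftUI

// Sketch of one possible fix: share filter state as an ObservableObject.
// The compositor's instruction holds the same FilterModel instance, so a
// slider change is visible the next time a frame is composed.
final class FilterModel: ObservableObject {
    @Published var intensity: Double = 0.5
}

struct IntensitySlider: View {
    @ObservedObject var filter: FilterModel
    var body: some View {
        Slider(value: $filter.intensity, in: 0...1)
    }
}
```

Note also that a paused AVPlayer won't re-run the compositor until the playback time changes, which could contribute to updates appearing laggy compared with an AVPlayerItemVideoOutput-based pipeline.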
Post not yet marked as solved
On macOS 12, iTunes, Music and other apps trigger the CMIOObjectAddPropertyListener(Block) callback when they are opened, while the camera device has not actually been started.
Post not yet marked as solved
I use FFmpeg to play back video with VideoToolbox (hardware decoding). How can I get an MTL::Texture from an AVFrame?
When I receive a hardware frame from the function
avcodec_receive_frame(avctx, avframe), there are few examples with metal-cpp, and I can't find the CVPixelBufferRef type in metal-cpp. I'm really confused by this.
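With FFmpeg's VideoToolbox hwaccel, the decoded AVFrame carries the CVPixelBufferRef in frame->data[3]. Here is a sketch of the CoreVideo half in Swift (metal-cpp would wrap the same C calls); it assumes a BGRA pixel buffer, while NV12 output needs one texture per plane:

```swift
import CoreVideo
import Metal

// Sketch: turn a CVPixelBuffer (what FFmpeg hands back in frame->data[3]
// for VideoToolbox frames) into an MTLTexture via a texture cache.
// Create the cache once with CVMetalTextureCacheCreate and reuse it.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                              pixelBuffer, nil,
                                              .bgra8Unorm, width, height,
                                              0, // plane index
                                              &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}
```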
Post not yet marked as solved
This project works on Android and iOS, so I use a CAEAGLLayer to present live video at 60fps. All the code works well on iPhone 11 and older devices. But on iPhone 12 and iPhone 13 it becomes strange.
The layer drops some frames. I profiled with Instruments and found that some drawables are waited on for more than 1/60 second. After I turn on the screen recorder, it works well: all drawables are waited on for less than 1/60 second and the layer presents video at 60fps. After I turn off the screen recorder, it stops working again.
Can anyone tell me what happened and how to work around it?
Post not yet marked as solved
I’m using AVFoundation to access camera on iPad.
But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo.
Keeping privacy concerns in mind, is there any way by which I can ensure that the app is never able to access Microphone or Video Recording?
Post not yet marked as solved
I’m using AVFoundation for image capture using camera on iPad.
But I’m not using Video or Audio related functionality.
It looks like with AVFoundation, CoreMedia, CoreVideo and CoreAudio are also imported into any project.
Is there any way I can remove these libraries (CoreMedia, CoreVideo and CoreAudio) from my app?
I have used otool to list all the frameworks and libraries being used by my framework.
Post not yet marked as solved
I am trying to play videos in AVSampleBufferDisplayLayer. Everything works well except that screenshots no longer work for the AVSBDPL when taken programmatically.
I have tried a couple of approaches and the screenshot taken is always a black screen in the area of the AVSBDPL. Here are the approaches that I have tried, but none of them works:
1. Get an image from image context with [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]
- (UIImage *)_screenshot:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
No matter which view I provided to the function (the screen, the player container view, etc.), the video area is always a black image. And I have tried different setups for the image context, or flipping afterScreenUpdates; the result is always the same.
2. Get an image from image context with [view.layer renderInContext:UIGraphicsGetCurrentContext()]
- (UIImage *)_screenshot:(UIView*)view {
UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
[layer renderInContext:UIGraphicsGetCurrentContext()] is an older API from before iOS 10. It is very slow and was replaced by drawViewHierarchyInRect: after iOS 10. Same here: the screenshot just shows a black screen.
3. Use UIGraphicsImageRenderer
- (UIImage *)_screenshotNew:(UIView*)view {
UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat new];
format.opaque = view.opaque;
format.scale = 0.0;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:view.frame.size format:format];
UIImage *screenshotImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext *_Nonnull rendererContext) {
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
}];
return screenshotImage;
}
This is the latest API to take a screenshot and convert it to a UIImage, but it does not work either.
4. Use [view snapshotViewAfterScreenUpdates:YES]
UIView *snapView = [self.view snapshotViewAfterScreenUpdates:YES];
UIView has an API called snapshotViewAfterScreenUpdates:. Surprisingly, the UIView returned by this API can be rendered directly in the UI, and it shows the right screenshot (woohoo!). However, when I tried to convert the UIView to a UIImage, it became a black screen again.
Some additional configurations that I have tried
preventsCapture instance property of AVSBDPL. This is NO by default. When it is set to YES, it prevents the user from taking a screenshot of the layer by pressing the physical buttons on the phone, but it has no effect on programmatically taking a screenshot.
outputObscuredDueToInsufficientExternalProtection instance property of AVSBDPL. This property is always NO for me, so I don't think it obscures anything. Also, this is an iOS 14.5+ API, and I see the issue below 14.5 too.
There are also very few posts when I searched on Google, and all of them have run into the same issue without solving it. It would be really appreciated if anyone could help me with this!
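One workaround to consider (an assumption on my part, not something from the post): keep a reference to the last CVPixelBuffer you enqueued on the layer and render it to a UIImage yourself, then composite it under a snapshot of the rest of the hierarchy. A sketch of the pixel-buffer-to-image half:

```swift
import UIKit
import CoreImage

// Sketch: render the most recently enqueued pixel buffer to a UIImage,
// since the AVSampleBufferDisplayLayer area comes back black in snapshots.
// Reuse one CIContext; creating a new one per frame is expensive.
func image(from pixelBuffer: CVPixelBuffer,
           context: CIContext = CIContext()) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```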
Post not yet marked as solved
We see strange crashes when running our app since the macOS 12 beta (and still on macOS 12.0.1). We have not been able to fully identify the issue, but it seems to happen on resuming video playback in an AVPlayer, sometimes after backgrounding, sometimes on resuming playback directly. Xcode points to some code in libsystem_kernel.dylib (it seems different every time and is never in our own code).
The log will show:
-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion 'MTLResource 0x600002293790 (label: (null)), referenced in cmd buffer 0x7f7b2200a000 (label: (null)) is in volatile or empty purgeable state at commit'
We tried finding the object 0x600002293790 and 0x7f7b2200a000 but this gave no additional information as to why the app crashes.
We are using a custom VideoCompositor: AVVideoCompositing and initialise the CIContext for the work done here with these options:
if let mtlDevice = MTLCreateSystemDefaultDevice() {
    let options: [CIContextOption: Any] = [
        CIContextOption.useSoftwareRenderer: false,
        CIContextOption.outputPremultiplied: false,
    ]
    let context = CIContext(mtlDevice: mtlDevice, options: options)
}
We're not sure whether this is an Xcode 13 debug issue, a macOS 12.0.1 Monterey issue, or an actual bug, as we have not seen the crash when the app is not built with Xcode (which is what gives us this information). But we have also seen strange crashes on audio/video threads that we could not trace back to our code.
The crash has never occurred on Xcode 12 or on macOS Big Sur during previous testing.
Any information as to locating the source of the issue or a solution would be awesome.
Post not yet marked as solved
In WWDC 2021 session 10047, it was mentioned to look for availability of the lossless CVPixelBuffer format and fall back to the normal BGRA32 format if it is not available. But in the updated AVMultiCamPiP sample code, it first looks for the lossy format, then the lossless one. Why is that, and what exact difference would it make if we select lossy vs. lossless?
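My understanding (treat it as an assumption, not a confirmed answer): the lossy compressed variants trade a small, bounded quality loss for lower memory bandwidth and power, which is attractive in a multi-camera preview pipeline, so the sample prefers them and only then falls back to lossless and finally plain BGRA. A sketch of that fallback order, using the 32BGRA family for illustration:

```swift
import AVFoundation

// Sketch: prefer the compressed pixel-buffer variants when the output
// supports them, falling back to uncompressed 32BGRA otherwise.
func preferredPixelFormat(for output: AVCaptureVideoDataOutput) -> OSType {
    let available = output.availableVideoPixelFormatTypes
    if available.contains(kCVPixelFormatType_Lossy_32BGRA) {
        return kCVPixelFormatType_Lossy_32BGRA       // cheapest bandwidth/power
    }
    if available.contains(kCVPixelFormatType_Lossless_32BGRA) {
        return kCVPixelFormatType_Lossless_32BGRA    // compressed, bit-exact
    }
    return kCVPixelFormatType_32BGRA                 // uncompressed fallback
}
```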