Posts

Post not yet marked as solved
1 Replies
0 Views
Have you tried setting the magnificationFilter of the view's layer to nearest?
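For example (a minimal sketch — assuming the view in question is a UIImageView called imageView):

```swift
import UIKit

// Hypothetical view; substitute your own. Nearest-neighbor sampling keeps
// upscaled pixels crisp instead of blurring them with linear interpolation.
let imageView = UIImageView()
imageView.layer.magnificationFilter = .nearest
```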
Post not yet marked as solved
2 Replies
0 Views
Do built-in filters like CIGaussianBlur still work?
Post not yet marked as solved
1 Replies
0 Views
I encountered a very similar bug and I found a workaround. Can you try context.heifRepresentation(of: image.settingAlphaOne(in: image.extent), …) and see if that works?
(settingAlphaOne(in:) replaces the image's alpha channel with opaque alpha, which seems to sidestep the bug.)
Post marked as solved
1 Replies
0 Views
Found it: I was getting the attachments (metadata) of the incoming sample buffers and accidentally leaked the dictionary due to wrong bridging. So instead of this

NSDictionary* attachments = (__bridge NSDictionary* _Nullable)CMCopyDictionaryOfAttachments(NULL, sampleBuffer, kCMAttachmentModeShouldPropagate);

I should have done this

NSDictionary* attachments = (__bridge_transfer NSDictionary* _Nullable)CMCopyDictionaryOfAttachments(NULL, sampleBuffer, kCMAttachmentModeShouldPropagate);

so that ARC takes over ownership of the dictionary returned by the Copy function. Interestingly, this leak caused the capture session to stop delivering new sample buffers after 126 frames, without any warning, error, or notification.
Post not yet marked as solved
1 Replies
0 Views
A CIFilter is usually just a thin wrapper around a CIKernel, which is basically a bit of (intermediate) code that can be run on the GPU. When you apply a filter (or a filter chain), Core Image will optimize it, potentially concatenate kernels, and finally compile the code and send it to the GPU. The Core Image runtime probably performs some caching of those compiled kernels, but I'm not sure. To summarize: CIFilters should have a very small memory footprint, and you probably benefit more from caching when you re-use them instead of instantiating them on the fly.
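To illustrate the re-use idea (a minimal sketch; the class and property names are just examples, not from any particular project):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

final class BlurRenderer {
    // Created once and kept around; only the inputs change per frame.
    private let blurFilter = CIFilter.gaussianBlur()

    func blurred(_ image: CIImage, radius: Float) -> CIImage? {
        blurFilter.inputImage = image
        blurFilter.radius = radius
        return blurFilter.outputImage
    }
}
```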
Post not yet marked as solved
1 Replies
0 Views
I'm afraid the front camera doesn't support capturing in RAW.
Post not yet marked as solved
1 Replies
0 Views
Interesting, I've never seen this type before; it seems to be an old name. You should just use PKDrawing instead now.
Post not yet marked as solved
4 Replies
0 Views
Could you please post some code? How do you set up the CGContext? Maybe also consider using the (newish) UIGraphicsImageRenderer API instead of setting up the CGContext manually. It should be more reliable and deliver consistent results across different devices. (And please tag your question with [Core Graphics] instead of [Core Image].)
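For comparison, a minimal UIGraphicsImageRenderer sketch (the drawing itself is just an example):

```swift
import UIKit

// The renderer manages the CGContext for you (scale, color space, bitmap
// setup), which is what usually causes inconsistencies across devices.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let image = renderer.image { context in
    UIColor.red.setFill()
    context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}
```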
Post not yet marked as solved
3 Replies
0 Views
You are right. I decided to use the newest SDKs and SwiftUI in order to learn how to best integrate Core Image workflows with them. And yes, it should work on a Mac, but the Mac needs to run Big Sur, and I haven't tested that yet. I tested on my iPad with iOS 14 and will check macOS soon. However, all the relevant APIs (especially MTKView) have been around for a while and should work the same way in older versions and in Objective-C. The important parts are the setup of the MTKView and the draw method. If you follow this path, you should be good: AVCaptureDeviceInput → AVCaptureVideoDataOutput → AVCaptureVideoDataOutputSampleBufferDelegate → CVPixelBuffer → CIImage → applying CIFilters → CIImage → render into MTKView using a CIContext. Looking forward to your report! 🙂
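A rough sketch of the delegate part of that pipeline (CameraViewController, metalView, and the sepia filter are placeholder names, not from the actual project; error handling omitted):

```swift
import AVFoundation
import CoreImage

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // CMSampleBuffer → CVPixelBuffer → CIImage
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        var image = CIImage(cvPixelBuffer: pixelBuffer)

        // Apply whatever CIFilters you like.
        image = image.applyingFilter("CISepiaTone",
                                     parameters: [kCIInputIntensityKey: 0.8])

        // Hand the result to the MTKView, which renders it in its draw method.
        metalView.imageToDisplay = image
    }
}
```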
Post not yet marked as solved
1 Replies
0 Views
Hi Kent, the filters you can add to a CALayer apply to the layer's own content, so they can't be used to display your own content (your CIImage) on the layer. You could instead convert your CIImage into a UIImage and display that in a UIImageView, but for a video stream this is not the best approach, since UIImageView is not made for handling that many frames per second. The most performant way is to render the CIImage into an MTKView (or a CAMetalLayer) using a CIContext. I created a project on GitHub - https://github.com/frankschlegel/core-image-by-example - to show how this can be done. Hope this helps!
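The rendering step could look roughly like this (a sketch under assumptions, not the exact code from the repo — it presumes the MTKView's device, a CIContext, and a command queue are already set up):

```swift
import MetalKit
import CoreImage

// Called from the MTKView's draw method (or its delegate) for every frame.
func render(image: CIImage,
            in view: MTKView,
            context: CIContext,
            commandQueue: MTLCommandQueue) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Render the CIImage directly into the drawable's texture.
    context.render(image,
                   to: drawable.texture,
                   commandBuffer: commandBuffer,
                   bounds: CGRect(origin: .zero, size: view.drawableSize),
                   colorSpace: CGColorSpaceCreateDeviceRGB())

    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```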
Post not yet marked as solved
3 Replies
0 Views
Hi suMac, I just uploaded a sample project that covers this use case: https://github.com/frankschlegel/core-image-by-example It's still very new and I haven't tested macOS yet, but on iOS it's working so far. Any feedback is welcome! 🙂
Post not yet marked as solved
1 Replies
0 Views
A RAW photo is basically a direct dump of the camera sensor's raw data, so it will always be in the full resolution of the sensor. You need to resize it yourself when you want a smaller version, for instance while converting ("developing") the RAW into a JPEG representation using Core Image. For the best downsampling quality I recommend using the CILanczosScaleTransform filter.
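For instance, something like this (a sketch; the helper function is hypothetical, but CILanczosScaleTransform and its input keys are the standard ones):

```swift
import CoreImage

// Downscale a CIImage by the given factor using Lanczos resampling.
func downscaled(_ image: CIImage, by scale: CGFloat) -> CIImage? {
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)      // e.g. 0.25 for quarter size
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)  // keep the aspect ratio
    return filter.outputImage
}
```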
Post not yet marked as solved
1 Replies
0 Views
The process is actually not too hard:

1. Put the Metal code of your kernel function (e.g., myKernel) into a file with the naming scheme <file_name>.ci.metal (like MyFilter.ci.metal).
2. Add the two Build Rules described by David in the session. But be aware that the -I $MTL_HEADER_SEARCH_PATHS flag seems to cause trouble, so it's better to just omit it. This will compile all .ci.metal files into .ci.metallib files with the same <file_name>.
3. Load your kernel function into a CIKernel like this:

let url = Bundle(for: type(of: self)).url(forResource: "MyFilter", withExtension: "ci.metallib")!
do {
    let data = try Data(contentsOf: url)
    self.kernel = try CIKernel(functionName: "myKernel", fromMetalLibraryData: data)
} catch {
    fatalError("Failed to create kernel: \(error.localizedDescription)")
}

Maybe you can elaborate on what errors you are getting?