Post not yet marked as solved
The ROI callback that is passed to a CIKernel’s apply(…) method seems to be referenced beyond the render call and is not released properly. That also means that any captured state is retained longer than expected.
I noticed this in a camera capture scenario because the capture session stopped delivering new frames after the initial batch. The output ran out of buffers because they were not properly returned to the pool. I was capturing the filter’s input image in the ROI callback like in this simplified case:
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    let roiCallback: CIKernelROICallback = { _, _ in
        return inputImage.extent
    }
    return Self.kernel.apply(extent: inputImage.extent, roiCallback: roiCallback, arguments: [inputImage])
}
While it is avoidable in this case, it is also very unexpected that the ROI callback is retained longer than needed for rendering the output image. Even when not capturing a lot of state, this would still unnecessarily accumulate over time.
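In the meantime, the workaround I'm using is to capture only the extent value in the callback instead of the image itself. This stand-alone sketch (with a hypothetical `Resource` class standing in for the `CIImage`, and a plain closure standing in for the cached ROI callback) shows why capturing the value lets the heavyweight object go away even while the closure is retained:

```swift
// Hypothetical stand-in for a CIImage; its deinit shows when it is released.
final class Resource {
    let width: Int, height: Int
    init(width: Int, height: Int) { self.width = width; self.height = height }
    deinit { print("Resource released") }
}

// Simulates Core Image caching the ROI callback beyond the render call.
var cachedCallback: (() -> Int)?

do {
    let resource = Resource(width: 1920, height: 1080)
    let width = resource.width     // capture only the value the callback needs
    cachedCallback = { width }     // no reference to `resource` is captured
}
// `resource` is released here ("Resource released" is printed),
// even though the callback is still cached.
print(cachedCallback!())           // prints 1920
```

Applied to the filter above, that means computing `let extent = inputImage.extent` before creating the ROI callback and returning `extent` from it, so the callback never captures `inputImage`.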
Note that calling ciContext.clearCaches() does actually seem to release the captured ROI callbacks. But I don’t want to do that after every frame since there are also resources worth caching.
Is there a reason why Core Image caches the ROI callbacks beyond the rendering calls they are involved in?
The same code can generate a Live Photo on iOS 14 but can't generate one on iOS 15.1.
Does anyone know how to solve this problem? Please help me, thanks.
Hello all, I am an iOS developer.
I use:
NSData *imageData = UIImagePNGRepresentation(editedImage);
CIImage *ciImage = [CIImage imageWithData:imageData];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *feature = [detector featuresInImage:ciImage];
if (feature.count > 0) {
    CIQRCodeFeature *featureObject = [feature firstObject];
    NSString *content = featureObject.messageString;
    ......
    ......
} else {
    [self showAlertView];
}
It works well on most QR codes. However, for a pic like this one, it cannot find the QR code in the picture: the returned feature array is nil. So I wonder why this happens. The pic can be recognized by apps like WeChat. Did I use the API improperly?
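One thing I'm also considering trying (an assumption on my part, not a confirmed fix) is the Vision framework's barcode detector, which uses a different detection pipeline than CIDetector and is often reported to handle low-contrast or distorted codes better. A sketch:

// Sketch: detect QR payloads with Vision instead of CIDetector.
import UIKit
import Vision

func detectQRCode(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }
    let request = VNDetectBarcodesRequest { request, _ in
        let payloads = (request.results as? [VNBarcodeObservation])?
            .filter { $0.symbology == .qr }   // .QR on older SDKs
            .compactMap { $0.payloadStringValue } ?? []
        completion(payloads)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}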
Hi,
According to the API documentation, the activeKeys property of the CIRAWFilterOption struct is deprecated. What is its replacement functionality?
Cheers.
Michael
I am developing an app that sends pixel buffers from a Broadcast Upload Extension to OpenTok. When I run my broadcast extension it hits its memory limit within seconds. I have been looking for ways to reduce the size and scale of the CMSampleBuffers, and ended up first converting them to CIImage, then scaling them, and then converting them back to CVPixelBuffers to send to OpenTok's servers. Unfortunately, the extension still crashes even though I reduce the pixel buffers. My code follows:
First I convert the CMSampleBuffer to a CVPixelBuffer in the processSampleBuffer function of my SampleHandler, then pass the CVPixelBuffer to my function along with the timestamp. There I convert the CVPixelBuffer to a CIImage and scale it using a CIFilter (CILanczosScaleTransform). After that I generate a pixel buffer from the CIImage using a pixel buffer pool and a CIContext, and then send the new buffer to the OpenTok servers using videoCaptureConsumer.
func processPixelBuffer(pixelBuffer: CVPixelBuffer, timeStamp ts: CMTime) {
    guard let ciImage = self.scaleFilterImage(inputImage: pixelBuffer.cmIImage, withAspectRatio: 1.0, scale: CGFloat(kVideoFrameScaleFactor)) else { return }
    if self.pixelBufferPool == nil ||
        self.pixelBuffer?.size != pixelBuffer.size {
        self.destroyPixelBuffers()
        self.updateBufferPool(newWidth: Int(ciImage.extent.size.width), newHeight: Int(ciImage.extent.size.height))
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, self.pixelBufferPool, &self.pixelBuffer) == kCVReturnSuccess
        else { return }
    }
    context?.render(ciImage, to: pixelBuffer)
    self.videoCaptureConsumer?.consumeImageBuffer(pixelBuffer,
                                                  orientation: .up,
                                                  timestamp: ts,
                                                  metadata: nil)
}
If the pixelBufferPool is nil or there is a change in the size of the pixelBuffer I update the pool.
private func updateBufferPool(newWidth: Int, newHeight: Int) {
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: UInt(self.videoFormat),
        kCVPixelBufferWidthKey as String: newWidth,
        kCVPixelBufferHeightKey as String: newHeight,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    CVPixelBufferPoolCreate(nil, nil, pixelBufferAttributes as NSDictionary?, &pixelBufferPool)
}
This is the function I use to scale the CIImage:
func scaleFilterImage(inputImage: CIImage, withAspectRatio aspectRatio: CGFloat, scale: CGFloat) -> CIImage? {
    scaleFilter?.setValue(inputImage, forKey: kCIInputImageKey)
    scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
    scaleFilter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)
    return scaleFilter?.outputImage
}
My question is: why does it still keep crashing, and is there another way to reduce the CVPixelBuffer size without hitting the memory limit?
I would appreciate any help on this. Swift or Objective-C, I am open to all suggestions.
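For reference, here is a sketch of the frame path I'm considering instead (my assumptions, not verified: the names come from my code above; the key differences are rendering into a buffer freshly drawn from the pool on every frame rather than reusing one stored buffer the consumer may still hold, and using a single CIContext created once, since per-frame context creation is a common cause of extension memory spikes):

// Sketch, not a drop-in fix. Assumes `context` is created once (e.g. in init)
// and `pixelBufferPool` is configured for the *scaled* dimensions.
func processPixelBuffer(pixelBuffer: CVPixelBuffer, timeStamp ts: CMTime) {
    guard let ciImage = scaleFilterImage(inputImage: CIImage(cvPixelBuffer: pixelBuffer),
                                         withAspectRatio: 1.0,
                                         scale: CGFloat(kVideoFrameScaleFactor)),
          let pool = pixelBufferPool else { return }

    // Draw a fresh buffer each frame; the consumer may still hold the previous one.
    var outBuffer: CVPixelBuffer?
    guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &outBuffer) == kCVReturnSuccess,
          let output = outBuffer else { return }

    context.render(ciImage, to: output)   // one shared CIContext, created once
    videoCaptureConsumer?.consumeImageBuffer(output, orientation: .up, timestamp: ts, metadata: nil)
}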
Hi. I'd like to be able to do a flood fill on images, either UIImage or CGImage, and was wondering if there is a built-in way to do this in Apple's standard frameworks? That is: take a bitmap image, specify a point and a color, and fill the surrounding area with that color, no matter what shape it is.
I've seen a few examples of algorithm code to do this, but they're quite large and complicated, so I am trying to avoid them.
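For context, the algorithmic versions I've seen boil down to something like this queue-based fill (sketched here over a plain 2D array rather than real CGImage bitmap bytes), which is what I was hoping a framework would give me for free:

```swift
// Queue-based (BFS) flood fill over a 2D grid of pixel values.
func floodFill(_ grid: inout [[Int]], at start: (x: Int, y: Int), with newColor: Int) {
    let h = grid.count, w = grid[0].count
    let target = grid[start.y][start.x]
    guard target != newColor else { return }
    var queue = [start]
    grid[start.y][start.x] = newColor
    while let (x, y) = queue.popLast() {
        // Visit the four direct neighbors that still hold the target color.
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let nx = x + dx, ny = y + dy
            if nx >= 0, nx < w, ny >= 0, ny < h, grid[ny][nx] == target {
                grid[ny][nx] = newColor
                queue.append((nx, ny))
            }
        }
    }
}

var image = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
floodFill(&image, at: (x: 0, y: 0), with: 9)
// Only the region enclosed by the 1s is recolored:
print(image)  // [[9, 9, 1, 0], [9, 9, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0]]
```

The same loop would run over a CGImage's pixel buffer after locking/copying its bytes, with colors compared as 32-bit pixel values instead of Ints.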
Currently I am getting depth data from the delegate, and I even converted it to a CIImage to check its output (it is grayscale as expected), but I cannot append that pixel buffer to an AVAssetWriterInputPixelBufferAdaptor: once I try to save to the photo gallery I get the error mentioned below.
Error:
The operation couldn’t be completed. (PHPhotosErrorDomain error 3302.
Setup:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera, .builtInTrueDepthCamera, .builtInDualWideCamera],
    mediaType: .video,
    position: .back)
I tried both video pixel formats:
videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_DepthFloat16]
videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCMPixelFormat_422YpCbCr8]
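One idea I'm considering (an assumption on my part, not verified): the writer's video codecs presumably don't accept DepthFloat16 input directly, so the depth buffer may need converting into a writer-friendly format such as 32BGRA before appending. A sketch:

// Sketch: convert a DepthFloat16 buffer into a 32BGRA buffer that
// AVAssetWriterInputPixelBufferAdaptor can accept. `context` is a shared CIContext.
func convertToBGRA(_ depthBuffer: CVPixelBuffer, context: CIContext) -> CVPixelBuffer? {
    let ciImage = CIImage(cvPixelBuffer: depthBuffer)
    let attrs: [String: Any] = [kCVPixelBufferIOSurfacePropertiesKey as String: [:]]
    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(depthBuffer),
                        CVPixelBufferGetHeight(depthBuffer),
                        kCVPixelFormatType_32BGRA,
                        attrs as CFDictionary,
                        &output)
    guard let result = output else { return nil }
    context.render(ciImage, to: result)
    return result
}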
Is there a way to apply color filters to the contents of a PDF displayed in a PDFView? My desired effect is to display the PDF with colors inverted. CIFilter and compositingFilter seem relevant, but I couldn't get to a working example. Any guidance appreciated!
We are trying to create a custom CIFilter to add on top of our CALayers. However, only the default CIFilters seem to work on a CALayer.
We created a small new project and added this to ViewController.swift:
import Cocoa
import CoreImage

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Create some layers to work with! (square with gradient color)
        let mainLayer = CALayer()
        let shapeLayer = CAShapeLayer()
        let gradientLayer = CAGradientLayer()
        gradientLayer.colors = [NSColor.red.cgColor, NSColor.white.cgColor, NSColor.yellow.cgColor, NSColor.black.cgColor]
        shapeLayer.path = CGPath(rect: CGRect(x: 0, y: 0, width: 500, height: 500), transform: nil)
        shapeLayer.fillColor = CGColor.black
        gradientLayer.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
        gradientLayer.mask = shapeLayer
        gradientLayer.setAffineTransform(CGAffineTransform(translationX: 50, y: 50))
        mainLayer.addSublayer(gradientLayer)
        mainLayer.filters = []
        self.view.layer?.addSublayer(mainLayer)

        // Register the custom filter
        CustomFilterRegister.register()

        // Test with a normal image file: WORKS!
        // if let image = NSImage(named: "test"), let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) {
        //     if let filter = CIFilter(name: "CustomFilter") {
        //         filter.setValue(CIImage(cgImage: cgImage), forKey: kCIInputImageKey)
        //         let output = filter.outputImage
        //         // WORKS! Image filtered as expected!
        //     }
        // }

        // Does NOT work: no change in the color of the layer!
        if let filter = CIFilter(name: "CustomFilter") {
            filter.name = "custom"
            mainLayer.filters?.append(filter)
        }

        // This works: mainLayer and sublayers are blurred!
        // if let filter = CIFilter(name: "CIGaussianBlur") {
        //     filter.name = "blur"
        //     mainLayer.filters?.append(filter)
        // }
    }
}
We created a simple custom CIFilter to give it a first try before building our real one.
class CustomFilter: CIFilter {
    // Error in Xcode if you don't add this in!
    override class var supportsSecureCoding: Bool {
        return true
    }

    @objc dynamic var inputImage: CIImage?
    @objc dynamic var inputSaturation: CGFloat = 1
    @objc dynamic var inputBrightness: CGFloat = 0
    @objc dynamic var inputContrast: CGFloat = 1

    override func setDefaults() {
        inputSaturation = 1
        inputBrightness = 0
        inputContrast = 2
    }

    override public var outputImage: CIImage? {
        guard let image = inputImage else {
            return nil
        }
        return image.applyingFilter("CIPhotoEffectProcess")
            .applyingFilter("CIColorControls", parameters: [
                kCIInputSaturationKey: inputSaturation,
                kCIInputBrightnessKey: inputBrightness,
                kCIInputContrastKey: inputContrast
            ])
    }
}
class CustomFilterRegister: CIFilterConstructor {
    static func register() {
        CIFilter.registerName(
            "CustomFilter", constructor: CustomFilterRegister(),
            classAttributes: [
                kCIAttributeFilterCategories: [kCICategoryBlur, kCICategoryVideo, kCICategoryStillImage]
            ])
    }

    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "CustomFilter":
            return CustomFilter()
        default:
            return nil
        }
    }
}
In the ViewController we added code to test with a normal image. This DOES work so the filter seems to be ok. We also tried a default CIGaussianBlur and that does work on the CALayer.
We are lost as to what is needed to get a custom CIFilter working with CALayer, and can't seem to find any information on it.
Please note that we are NOT looking for this type of CIFilter or a different way to get the filters result. We need a custom CIFilter to work on a CALayer.
While the above three frameworks (viz. vImage, Core Image, and Metal Performance Shaders) serve different overall purposes, what are the strengths and weaknesses of each of the three frameworks in terms of image-processing performance? It seems that any of the three is highly performant, but where does each framework shine?
I would like to know if there are some best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to support for tiling.
We are using a CIImageProcessorKernel for integrating an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles the scaling of the input image to the size the model input requires.
In the roi(forInput:arguments:outputRect:) method the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling).
In the process(with:arguments:output:) method, the kernel is performing the prediction of the model on the input pixel buffer and then copies the result into the output buffer.
This works well until the filter chain is getting more and more complex and input images become larger. At this point, Core Image wants to perform tiling to stay within the memory limits. It can't tile the input image of the kernel since we defined the ROI to be the whole image.
However, it is still calling the process(…) method multiple times, each time demanding a different tile/region of the output to be rendered. But since the model doesn't support producing only a part of the output, we effectively have to process the whole input image again for each output tile that should be produced.
We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to identify that the next call still belongs to the same rendering call, but for a different tile, instead of being a different rendering entirely, potentially with a different input image.
If we'd have access to the digest that Core Image computes for an image during processing, we would be able to detect if the input changed between calls to process(…). But this is not part of the CIImageProcessorInput.
What is the best practice here to avoid needless reevaluation of the model? How does Apple handle that in their ML-based filters like CIPersonSegmentation?
Using CIFilter.qrCodeGenerator() to create a QR code, I wanted to change the colors dynamically to suit Light/Dark Mode, but I was unable to figure out how to achieve this. Is it possible, please?
struct QrCodeImage {
    let context = CIContext()

    func generateQRCode(from text: String) -> UIImage {
        var qrImage = UIImage(systemName: "xmark.circle") ?? UIImage()
        let data = Data(text.utf8)
        let filter = CIFilter.qrCodeGenerator()
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 2, y: 2)
        if let outputImage = filter.outputImage?.transformed(by: transform) {
            if let image = context.createCGImage(outputImage, from: outputImage.extent) {
                qrImage = UIImage(cgImage: image)
            }
        }
        return qrImage
    }
}
Further, I cannot see an option for the different modes and assume that any colour could be used, which would be a lot better for me.
ref: https://developer.apple.com/documentation/coreimage/ciqrcodegenerator
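One approach I've seen suggested (not an official Light/Dark-mode API; the color choices below are my assumptions) is to pipe the generator's output through CIFalseColor, which maps black and white to two arbitrary colors, and pick those colors from the current trait collection (e.g. UIColor.label / .systemBackground). A sketch:

import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: recolor the QR code with CIFalseColor.
func recoloredQRCode(from text: String, foreground: CIColor, background: CIColor) -> CIImage? {
    let generator = CIFilter.qrCodeGenerator()
    generator.setValue(Data(text.utf8), forKey: "inputMessage")
    guard let qr = generator.outputImage else { return nil }

    let falseColor = CIFilter.falseColor()
    falseColor.inputImage = qr
    falseColor.color0 = foreground  // replaces the black modules
    falseColor.color1 = background  // replaces the white background
    return falseColor.outputImage
}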
I wrote the following Metal Core Image Kernel to produce constant red color.
extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
    return float4(1.0, 0.0, 0.0, 1.0);
}
And then I have this in Swift code:
class CIMetalRedColorKernel: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent
        return Self.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}
As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a whole red image beyond the extent of the input image (the DOD). Why? I have multiple filters chained together and the overall size is 1920x1080. Isn't the red kernel supposed to run only over the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?
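In case it helps, the workaround I'm testing (my assumption about the behavior, not a confirmed explanation): since the kernel ignores the sampler and returns a constant color, Core Image may treat the output as a constant/unbounded image, so explicitly cropping the kernel's output to the DOD forces clear pixels outside it:

override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    let dod = inputImage.extent
    let result = Self.kernel.apply(extent: dod,
                                   roiCallback: { _, rect in rect },
                                   arguments: [inputImage])
    // Explicitly limit the result to the intended domain of definition.
    return result?.cropped(to: dod)
}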
I convert a UIImage to a CIImage but lose every element: position, rotation, scale, etc.
I implemented a video editor, so I added pan, rotate, and pinch gesture handling to a UIImageView, and when I save the video I convert the UIImageView's image to a CIImage. But it loses everything.
Please help me.
CIFilter *filter = [CIFilter filterWithName:@"CIAdditionCompositing"];
UIImageView *imageView = self.subviews[0];
CIImage *ciImage = [CIImage imageWithCGImage:imageView.image.CGImage];

_playerItem.videoComposition = [AVVideoComposition
    videoCompositionWithAsset:_playerItem.asset
    applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *_Nonnull request) {
        if (filter == nil) {
        } else {
            CIImage *image = request.sourceImage.imageByClampingToExtent;
            [filter setDefaults];
            [filter setValue:image forKey:@"inputBackgroundImage"];
            [filter setValue:ciImage forKey:@"inputImage"];
            CIImage *outputImage = [filter.outputImage imageByCroppingToRect:request.sourceImage.extent];
            [request finishWithImage:outputImage context:nil];
        }
    }];
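A CIImage made from the CGImage alone carries no knowledge of the gestures: the transform, position, and scale live on the view/layer, so they have to be re-applied to the CIImage by hand. A sketch of the idea in Swift (the names are from this post; the conversion between UIKit's top-left origin and Core Image's bottom-left origin is only roughed in and would need checking):

import UIKit

// Sketch: bake a UIImageView's on-screen transform into the CIImage overlay.
func overlayImage(from imageView: UIImageView, renderHeight: CGFloat) -> CIImage? {
    guard let cgImage = imageView.image?.cgImage else { return nil }
    var image = CIImage(cgImage: cgImage)

    // Scale the bitmap to the view's current bounds.
    let scaleX = imageView.bounds.width / image.extent.width
    let scaleY = imageView.bounds.height / image.extent.height
    image = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Apply the gesture-driven transform (rotation / pinch).
    image = image.transformed(by: imageView.transform)

    // Move to the view's position, flipping Y for Core Image's coordinate system.
    let origin = imageView.frame.origin
    let flippedY = renderHeight - origin.y - imageView.frame.height
    return image.transformed(by: CGAffineTransform(translationX: origin.x, y: flippedY))
}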
When I try to use custom Core Image filters in the contentFilters property of an NSView, they don't work in Big Sur (as of 11.0.1 beta). They do work in Catalina. Doesn't matter if they're written using Metal or Core Image Kernel Language. I've reported this as a bug, but I'm wondering if there is some trick or workaround.
Hi,
I'm working on building a Mac app in Swift that does batch conversions between the OpenEXR (.exr) and .png file formats in both directions. I would like to know what kind of library I could use. I found that macOS can directly convert an OpenEXR file into other formats by right-clicking it. I would also like to know whether the conversion can be done in reverse with some supported library. Thanks.
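One avenue worth checking is Image I/O, which lists OpenEXR among its readable types on macOS; whether it can also write EXR should be verified against CGImageDestinationCopyTypeIdentifiers() (an assumption here, not confirmed). A sketch of a single conversion:

import Foundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: convert one image file via Image I/O.
// EXR -> PNG: convert(from: exrURL, to: pngURL, type: .png)
// PNG -> EXR: pass UTType("com.ilm.openexr-image")! as the type, after
// confirming it appears in CGImageDestinationCopyTypeIdentifiers().
func convert(from source: URL, to destination: URL, type: UTType) -> Bool {
    guard let src = CGImageSourceCreateWithURL(source as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(src, 0, nil),
          let dst = CGImageDestinationCreateWithURL(destination as CFURL,
                                                    type.identifier as CFString, 1, nil)
    else { return false }
    CGImageDestinationAddImage(dst, image, nil)
    return CGImageDestinationFinalize(dst)
}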
This is a real puzzler. I have read a 200-dpi 1-bit raster image from disk and plan to rotate it 90 degrees. I allocate the target NSBitmapImageRep and get pointers to the source and destination image data. After the rotation (seemingly) completes, the destination image is blank (no pixels moved). In the Console, NSBitmapImageRep has output an error:
Failed to extract pixel data from NSBitmapImageRep. Error: -21778
No idea what this error code is or what it might be talking about. Anyone have a clue or know the definition of this error?
Is it possible to pass MTLTexture to Metal Core Image Kernel? How can Metal resources be shared with Core Image?
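For the first half of the question: Core Image can wrap an existing Metal texture in a CIImage, which can then be passed to a kernel like any other image argument. A sketch (the orientation flip is my assumption, since Metal textures are top-left-origin while Core Image is bottom-left):

// Sketch: feed an MTLTexture into a CIKernel by wrapping it in a CIImage.
func apply(kernel: CIKernel, to texture: MTLTexture, colorSpace: CGColorSpace) -> CIImage? {
    guard var image = CIImage(mtlTexture: texture,
                              options: [.colorSpace: colorSpace]) else { return nil }
    image = image.oriented(.downMirrored)  // Metal's origin is top-left
    return kernel.apply(extent: image.extent,
                        roiCallback: { _, rect in rect },
                        arguments: [image])
}

In the other direction, Core Image can render into a Metal texture, e.g. via CIRenderDestination or CIContext's render(_:to:commandBuffer:bounds:colorSpace:), which is how Metal resources are typically shared with a Core Image pipeline.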
If I create a CIRAWFilter object from a Raw image URL, the resulting object contains an NSDictionary titled, "_rawDictionary".
One of the entries in this dictionary is called "sushiFactor". I see this entry for multiple Raw images from various camera manufacturers.
Is this an industry-standard TIFF Tag in the Raw image community?
What is the meaning of its value?
In our app we are working with different kinds of documents.
When working with text files we can add text attachments, by drag-and-dropping images into UITextView.
Those files are saved using NSAttributedString's fileWrapper and proper document type (RTFD for attributed text with attachments)
Everything worked fine before updating to macOS Monterey 12.1.
But after the update, the fileWrapper function returns the error
"image destination must have at least one image", which results in an unknown-file icon after saving and loading the file.
The issue is caused only when dropping or pasting images (png and jpeg). When working with original RTFD files, created with TextEdit, everything is fine.
It seems to occur because iOS UIImages can't have a TIFF representation, while RTFD file packages contain TIFF images as attachments.
Has anybody faced this kind of issue? Is this an Apple bug or something else?