Core Image


Use built-in or custom filters to process still and video images using Core Image.

Core Image Documentation

Posts under Core Image tag

79 Posts
Post not yet marked as solved
0 Replies
221 Views
Is there a way to apply color filters to the contents of a PDF displayed in a PDFView? My desired effect is to display the PDF with colors inverted. CIFilter and compositingFilter seem relevant, but I couldn't get to a working example. Any guidance appreciated!
Posted by
Post not yet marked as solved
0 Replies
277 Views
I would like to know if there are best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to support for tiling.

We are using a CIImageProcessorKernel to integrate an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles scaling the input image to the size the model input requires. In the roi(forInput:arguments:outputRect:) method, the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling). In the process(with:arguments:output:) method, the kernel performs the model prediction on the input pixel buffer and then copies the result into the output buffer.

This works well until the filter chain grows more complex and input images become larger. At that point, Core Image wants to perform tiling to stay within its memory limits. It can't tile the input image of the kernel, since we defined the ROI to be the whole image, but it still calls the process(…) method multiple times, each time demanding a different tile/region of the output. Since the model can't produce only part of its output, we effectively have to process the whole input image again for each output tile.

We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to tell whether the next call belongs to the same rendering, just for a different tile, or is a different rendering entirely, potentially with a different input image. If we had access to the digest that Core Image computes for an image during processing, we could detect whether the input changed between calls to process(…), but this is not part of CIImageProcessorInput. What is the best practice here to avoid needless reevaluation of the model?
How does Apple handle this in their ML-based filters like CIPersonSegmentation?
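Absent an official digest, one workaround (purely a sketch, not Apple's documented approach) is to memoize the model's full-frame output against a digest you compute yourself — for example a hash derived from the input pixel buffer's identity or a frame counter passed down as a filter argument. The cache shape, with all names hypothetical:

```swift
import Foundation

// Hypothetical cache: memoizes an expensive full-frame computation
// against a caller-supplied digest of the input. Core Image does not
// provide this digest to a CIImageProcessorKernel; you must derive it
// yourself (e.g. from the input buffer's identity or a frame counter).
final class ModelOutputCache<Output> {
    private var cachedDigest: Int?
    private var cachedOutput: Output?
    private(set) var missCount = 0  // how often the expensive path actually ran

    // Returns the cached output when the digest matches the previous
    // call; otherwise recomputes and replaces the cache entry.
    func output(forDigest digest: Int, compute: () -> Output) -> Output {
        if digest == cachedDigest, let cached = cachedOutput {
            return cached
        }
        missCount += 1
        let fresh = compute()
        cachedDigest = digest
        cachedOutput = fresh
        return fresh
    }
}
```

In process(…) you would then run the model only on a cache miss and copy the requested output tile out of the cached full-resolution result on every call.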
Post marked as solved
2 Replies
424 Views
Using CIFilter.qrCodeGenerator() to create a QR code, I wanted to change the colours dynamically to suit Light/Dark mode, but I was unable to figure out how to achieve this. Is it possible?

struct QrCodeImage {
    let context = CIContext()

    func generateQRCode(from text: String) -> UIImage {
        var qrImage = UIImage(systemName: "xmark.circle") ?? UIImage()
        let data = Data(text.utf8)
        let filter = CIFilter.qrCodeGenerator()
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 2, y: 2)
        if let outputImage = filter.outputImage?.transformed(by: transform),
           let image = context.createCGImage(outputImage, from: outputImage.extent) {
            qrImage = UIImage(cgImage: image)
        }
        return qrImage
    }
}

Further, I cannot see an option for the different modes and assume that any colour could be used, which would be better for me. ref: https://developer.apple.com/documentation/coreimage/ciqrcodegenerator
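One approach worth trying (an untested sketch) is to chain the generator's output through a CIFalseColor filter, whose inputColor0/inputColor1 replace black and white with any two CIColors, so dynamic mode colors can be fed in. The underlying per-pixel mapping is simple enough to show in plain Swift; the RGBA type and the mid-gray threshold are illustrative choices:

```swift
import Foundation

// A QR code is effectively 1-bit: dark modules on a light background.
// Recoloring it is a per-pixel mapping — the same substitution that
// CIFalseColor performs (inputColor0 for black, inputColor1 for white).
struct RGBA: Equatable {
    var r, g, b, a: UInt8
}

// Maps each grayscale source pixel to the chosen dark/light color.
// Anything below mid-gray is treated as a dark module.
func recolorQR(pixels: [UInt8], dark: RGBA, light: RGBA) -> [RGBA] {
    pixels.map { $0 < 128 ? dark : light }
}
```

For dark mode you would simply pass an inverted pair (light foreground on a dark background); any two colours work, exactly as the post hopes.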
Post not yet marked as solved
0 Replies
400 Views
Converting UIImage to CIImage loses every transform: position, rotation, scale, etc. I implemented a video editor, so I added pan, rotate, and pinch gesture handling on a UIImageView. When I save the video, I convert the UIImageView's image to a CIImage, but it loses everything. Please help!

CIFilter *filter = [CIFilter filterWithName:@"CIAdditionCompositing"];
UIImageView *imageView = self.subviews[0];
CIImage *ciImage = [CIImage imageWithCGImage:imageView.image.CGImage];

_playerItem.videoComposition = [AVVideoComposition
    videoCompositionWithAsset:_playerItem.asset
    applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *_Nonnull request) {
        if (filter == nil) {
        } else {
            CIImage *image = request.sourceImage.imageByClampingToExtent;
            [filter setDefaults];
            [filter setValue:image forKey:@"inputBackgroundImage"];
            [filter setValue:ciImage forKey:@"inputImage"];
            CIImage *outputImage = [filter.outputImage imageByCroppingToRect:request.sourceImage.extent];
            [request finishWithImage:outputImage context:nil];
        }
    }];
Post marked as solved
1 Reply
344 Views
Hi, I'm working on building a Mac app in Swift that performs batch conversions between the .openexr and .png file formats in both directions. I would like to know which libraries I could use. I found that macOS can directly convert an .openexr file into other formats by right-clicking on it. I would also like to know whether the conversion can be done in reverse with some supported libraries. Thanks.
Post not yet marked as solved
0 Replies
160 Views
This is a real puzzler: I have read a 200 dpi 1-bit raster image from disk and plan to rotate it 90 degrees. I allocate the target NSBitmapImageRep and get pointers to the source and destination image data. After the rotation (seemingly) completes, the destination image is blank (no pixels moved), and NSBitmapImageRep has output an error in the Console: "Failed to extract pixel data from NSBitmapImageRep. Error: -21778". I have no idea what this error code is or what it refers to. Does anyone have a clue, or know the definition of this error?
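Setting the error code aside, the rotation itself is just an index remapping. A sketch on a byte-per-pixel buffer (a packed 1-bit image adds bit masking and a row stride rounded up to whole bytes, but the coordinate mapping is identical):

```swift
import Foundation

// Rotating a W×H raster 90° clockwise yields an H×W raster where the
// source pixel at (x, y) lands at column (H - 1 - y), row x.
// Sketched on a byte-per-pixel buffer for clarity.
func rotate90CW(_ src: [UInt8], width: Int, height: Int) -> [UInt8] {
    precondition(src.count == width * height)
    var dst = [UInt8](repeating: 0, count: src.count)
    for y in 0..<height {
        for x in 0..<width {
            let dx = height - 1 - y   // destination column
            let dy = x                // destination row
            // The destination row stride is `height` (the new width).
            dst[dy * height + dx] = src[y * width + x]
        }
    }
    return dst
}
```

If the destination ends up blank despite a loop like this, the pointer returned by the destination rep is usually the thing to double-check (e.g. that the rep actually owns planar data you can write to).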
Post marked as solved
3 Replies
573 Views
I wrote the following Metal Core Image kernel to produce a constant red color:

extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
    return float4(1.0, 0.0, 0.0, 1.0);
}

And then I have this in Swift code:

class CIMetalRedColorKernel: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent
        return CIMetalRedColorKernel.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}

As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a whole red image beyond the extent of the input image (the DOD). Why? I have multiple filters chained together and the overall size is 1920x1080. Isn't the red filter supposed to run only over the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?
Post not yet marked as solved
0 Replies
178 Views
If I create a CIRAWFilter object from a Raw image URL, the resulting object contains an NSDictionary titled, "_rawDictionary". One of the entries in this dictionary is called "sushiFactor". I see this entry for multiple Raw images from various camera manufacturers. Is this an industry-standard TIFF Tag in the Raw image community? What is the meaning of its value?
Post not yet marked as solved
2 Replies
370 Views
I am unable to initialize a CIRAWFilter object and access its instance variables using the new CIRAWFilter class API in iOS 15. If I compile and execute the following code, I get a run-time error stating: "-[CIRAWFilterImpl isGamutMappingEnabled]: unrecognized selector sent to instance 0x1065ec570". It appears that a CIRAWFilterImpl object is getting created, not a CIRAWFilter object. I cannot cast to that type as it is unknown (most likely a private class). My code:

if #available(iOS 15.0, *) {
    let rawFilter: CIRAWFilter = CIRAWFilter(imageURL: self.imageURL)
    let isGamutMappingEnabled = rawFilter.isGamutMappingEnabled
    print("isGamutMappingEnabled: \(isGamutMappingEnabled)")
}
Post not yet marked as solved
0 Replies
292 Views
In our app we work with different kinds of documents. When working with text files we can add text attachments by drag-and-dropping images into a UITextView. Those files are saved using NSAttributedString's fileWrapper and the proper document type (RTFD for attributed text with attachments). Everything worked fine before updating to macOS Monterey 12.1, but after the update the fileWrapper function returns the error "image destination must have at least one image" and results in an unknown-file icon after saving and loading the file. The issue occurs only when dropping or pasting images (PNG and JPEG); when working with original RTFD files created with TextEdit, everything is fine. It seems to occur because iOS UIImages can't have a TIFF representation, and RTFD file packages contain TIFF images as attachments. Has anybody faced this kind of issue? Is this an Apple bug or something else?
Post not yet marked as solved
0 Replies
468 Views
While the three frameworks (viz. vImage, Core Image, and Metal Performance Shaders) serve different overall purposes, what are the strengths and weaknesses of each of the three in terms of image-processing performance? It seems that any of the three is highly performant; but where does each framework shine?
Post not yet marked as solved
0 Replies
310 Views
I have a project where I capture live video from the camera, then pass it through a chain of CIFilters and render the result into an MTLTexture. It all works well except that each time CIContext's render(toMTLTexture:) function is called, memory usage increases by ~150 MB and never goes back down. This causes the app to be killed by the OS after about 15-20 images are processed. I have isolated the issue to the following process-image function:

// Initialise required filters
CIFilter *grayScaleFilter = [CIFilter filterWithName:@"CIColorMatrix"
                                       keysAndValues:@"inputRVector", [CIVector vectorWithX:1 / 3.0 Y:1 / 3.0 Z:1 / 3.0 W:0], nil];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIBoxBlur"
                                  keysAndValues:kCIInputRadiusKey, [NSNumber numberWithFloat:3.0], nil];

const CGFloat dxFilterValues[9] = { 1, 0, -1, 2, 0, -2, 1, 0, -1 };
CIFilter *dxFilter = [CIFilter filterWithName:@"CIConvolution3X3"
                                keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dxFilterValues count:9], nil];

const CGFloat dyFilterValues[9] = { 1, 2, 1, 0, 0, 0, -1, -2, -1 };
CIFilter *dyFilter = [CIFilter filterWithName:@"CIConvolution3X3"
                                keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dyFilterValues count:9], nil];

// Phase filter is my custom filter implemented with a Metal kernel
CIFilter *phaseFilter = [CIFilter filterWithName:@"PhaseFilter"];

// Apply filter chain to input image
[grayScaleFilter setValue:image forKey:kCIInputImageKey];
[blurFilter setValue:grayScaleFilter.outputImage forKey:kCIInputImageKey];
[dxFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
[dyFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
[phaseFilter setValue:dxFilter.outputImage forKey:@"inputX"];
[phaseFilter setValue:dyFilter.outputImage forKey:@"inputY"];

// Initialize the output MTLTexture
MTLTextureDescriptor *desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm
                                                                                width:720
                                                                               height:1280
                                                                            mipmapped:NO];
desc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> phaseTexture = [CoreImageOperations::device newTextureWithDescriptor:desc];

// Render to the MTLTexture
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Memory usage increases by ~150 MB after the following call!!!
[context render:phaseFilter.outputImage
   toMTLTexture:phaseTexture
  commandBuffer:commandBuffer
         bounds:phaseFilter.outputImage.extent
     colorSpace:colorSpace];
CFRelease(colorSpace);
return phaseTexture;

I profiled the memory usage with Instruments and found that most of the memory was being used by IOSurface objects, with Core Image listed as the responsible library and CreateCachedSurface as the responsible caller. This is very strange because I set up my CIContext not to cache intermediates:

const CIContext *context = [CIContext contextWithMTLCommandQueue:commandQueue
                                                         options:@{
    kCIContextWorkingFormat: [NSNumber numberWithInt:kCIFormatRGBAf],
    kCIContextCacheIntermediates: @NO,
    kCIContextName: @"Image Processor"
}];

Any thoughts or advice would be greatly appreciated!
Post not yet marked as solved
1 Replies
421 Views
I am trying to develop a tone-curve filter using Metal or Core Image, since I find the CIToneCurve filter limited (at most 5 points, the spline it uses is undocumented, and sometimes the output is a black image even with 4 points). Moreover, it's not straightforward to have separate R, G, and B curves independently. I decided to explore other libraries that implement tone curves, and the only one I know of is GPUImage (a few others borrow code from it). But the source code is cryptic, and I have doubts about the way it generates its lookup texture (https://stackoverflow.com/questions/70516363/gpuimage-tone-curve-rgbcomposite-filter). Can someone explain how to correctly implement R, G, B, and RGB composite curve filters like in the Mac Photos app?
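Whatever spline is chosen, a curves filter usually boils down to baking each channel's control points into a lookup table that the shader then samples. A minimal sketch using piecewise-linear interpolation (a polished implementation, like the one the Photos app presumably uses, would substitute a smooth monotone cubic spline, but the LUT plumbing is the same):

```swift
import Foundation

// Bakes curve control points into a 256-entry lookup table.
// Points are (x, y) pairs in 0...1, sorted by x. Values outside the
// curve's domain clamp to the endpoint values.
func toneCurveLUT(points: [(x: Double, y: Double)]) -> [Double] {
    precondition(points.count >= 2, "need at least two control points")
    return (0..<256).map { i in
        let x = Double(i) / 255.0
        if x <= points[0].x { return points[0].y }
        if x >= points[points.count - 1].x { return points[points.count - 1].y }
        // Find the segment containing x and interpolate linearly.
        var j = 0
        while points[j + 1].x < x { j += 1 }
        let lo = points[j], hi = points[j + 1]
        let t = (x - lo.x) / (hi.x - lo.x)
        return lo.y + t * (hi.y - lo.y)
    }
}
```

For independent R, G, B curves plus an RGB composite, you would bake four LUTs and apply them per channel in the shader: first the channel's own LUT, then the composite LUT on the result.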
Post marked as solved
1 Reply
311 Views
There is a write function documented in the Core Image Metal shader reference here: https://developer.apple.com/metal/MetalCIKLReference6.pdf But I'm not sure how to use it. I assumed one would be able to call it on the destination parameter, i.e. dest.write(...), but I get the error "no member named 'write' in 'coreimage::destination'". How do I use this function?
Post not yet marked as solved
2 Replies
404 Views
I've created a custom box-blur kernel that produces identical results to Apple's built-in box blur (CIBoxBlur), but my custom kernel is orders of magnitude slower. So naturally I am wondering what I'm doing wrong to get such poor performance. Below is my custom kernel in the Metal Shading Language. Can you spot why it's so slow? The built-in filter performs well, so I can only assume it's something I'm doing wrong.

#include <CoreImage/CoreImage.h>
#import <simd/simd.h>

extern "C" {
namespace coreimage {

float4 customBoxBlurFilterKernel(sampler src)
{
    float2 crd = src.coord();
    int edge = 100;
    int minx = crd.x - edge;
    int maxx = crd.x + edge;
    int miny = crd.y - edge;
    int maxy = crd.y + edge;
    float4 sums = float4(0, 0, 0, 0);
    float cnt = 0;
    // compute average of surrounding rgb values
    for (int row = miny; row < maxy; row++) {
        for (int col = minx; col < maxx; col++) {
            float4 samp = src.sample(float2(col, row));
            sums[0] += samp[0];
            sums[1] += samp[1];
            sums[2] += samp[2];
            cnt += 1.;
        }
    }
    return float4(sums[0] / cnt, sums[1] / cnt, sums[2] / cnt, 1);
}

}
}
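For what it's worth, an edge of 100 means roughly 40,000 samples per output pixel in that nested loop, while a box blur is separable: a horizontal pass followed by a vertical pass produces the identical result with ~400 samples per pixel (and a running sum can reduce it further to a few operations per pixel). The built-in filter almost certainly exploits this. The equivalence, sketched in plain Swift on a small float grid with clamp-to-edge sampling:

```swift
import Foundation

// One-dimensional box average over a (2*radius + 1) window,
// clamping indices at the edges.
func boxBlur1D(_ row: [Double], radius: Int) -> [Double] {
    let n = row.count
    return (0..<n).map { i in
        var sum = 0.0
        for k in (i - radius)...(i + radius) {
            sum += row[min(max(k, 0), n - 1)]  // clamp-to-edge sampling
        }
        return sum / Double(2 * radius + 1)
    }
}

// Separable 2D box blur: blur rows, transpose, blur rows again,
// transpose back. Because clamping acts per axis, this equals the
// naive 2D window average at a fraction of the cost.
func boxBlur2D(_ img: [[Double]], radius: Int) -> [[Double]] {
    let h = img.map { boxBlur1D($0, radius: radius) }                // horizontal pass
    let ht = (0..<h[0].count).map { c in h.map { row in row[c] } }   // transpose
    let v = ht.map { boxBlur1D($0, radius: radius) }                 // vertical pass
    return (0..<v[0].count).map { c in v.map { row in row[c] } }     // transpose back
}
```

In a CI kernel this would mean chaining two kernels (horizontal then vertical), each with a thin 1D ROI, instead of one kernel with a square ROI.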
Post not yet marked as solved
0 Replies
244 Views
I have a .cube file storing LUT data, such as this:

TITLE "Cool LUT"
LUT_3D_SIZE 64
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
0.0157 0.0000 0.0000
0.0353 0.0000 0.0000

My question is: how do I build the NSData that the CIColorCube filter requires? When using Metal, I convert this data into an MTLTexture using AdobeLUTParser. I'm not sure what to do in the case of Core Image.
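In case it helps, CIColorCube wants its cube data as tightly packed RGBA floats — size³ entries with red varying fastest, which matches the .cube row order once an alpha component is appended to each triple. A hedged parsing sketch (it ignores DOMAIN_MIN/DOMAIN_MAX and assumes a well-formed file):

```swift
import Foundation

// Parses the body of a .cube file into the RGBA float array that
// CIColorCube's cube data expects: size³ entries, red fastest,
// each RGB triple padded with alpha = 1.
func colorCubeData(fromCubeText text: String) -> (size: Int, rgba: [Float])? {
    var size = 0
    var rgba: [Float] = []
    for rawLine in text.split(whereSeparator: \.isNewline) {
        let line = rawLine.trimmingCharacters(in: .whitespaces)
        if line.hasPrefix("LUT_3D_SIZE") {
            size = Int(line.split(separator: " ").last ?? "") ?? 0
            continue
        }
        // Data rows start with a number; skip TITLE, comments, DOMAIN_*, etc.
        guard let first = line.first, first.isNumber || first == "-" || first == "." else { continue }
        let comps = line.split(separator: " ").compactMap { Float($0) }
        if comps.count == 3 {
            rgba.append(contentsOf: comps)
            rgba.append(1.0)  // pad to RGBA; CIColorCube expects 4 floats per entry
        }
    }
    guard size > 0, rgba.count == size * size * size * 4 else { return nil }
    return (size, rgba)
}
```

From there, wrapping the array in Data (e.g. via withUnsafeBufferPointer) and setting it together with the cube dimension on the filter should be all that's left; the parser above is the part CoreImage doesn't do for you.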
Post not yet marked as solved
0 Replies
285 Views
CVPixelBuffer.h defines:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
    /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
       baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420',
    /* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16bits, video-range (luma=[64,940] chroma=[64,960]) */

But when I set the above formats as the camera output, I find that the output pixel buffer's values exceed the stated ranges: I see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange. Is this a bug, or is something wrong with the output? If not, how can I choose the correct matrix to transfer the YUV data to RGB?
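On the conversion question: video-range only defines the nominal encoding (footroom/headroom offsets and scale factors); values outside the nominal range can legally appear in the buffer. The conversion subtracts the offsets, scales, and clamps. A BT.709 video-range 8-bit sketch, assuming the stream really is BT.709 (the buffer's YCbCr-matrix attachment says which matrix actually applies):

```swift
import Foundation

// BT.709 video-range 8-bit YCbCr → RGB in 0...1. The offsets (16, 128)
// and scales (219, 224) come from the video-range definition; values
// outside the nominal luma/chroma ranges are handled by clamping.
func rgbFromYCbCr709Video(y: UInt8, cb: UInt8, cr: UInt8) -> (r: Double, g: Double, b: Double) {
    let yv  = (Double(y)  -  16.0) / 219.0   // luma footroom 16, range 219
    let cbv = (Double(cb) - 128.0) / 224.0   // chroma centered at 128, range 224
    let crv = (Double(cr) - 128.0) / 224.0
    func clamp(_ v: Double) -> Double { min(max(v, 0.0), 1.0) }
    // Standard BT.709 inverse matrix coefficients.
    let r = clamp(yv + 1.5748 * crv)
    let g = clamp(yv - 0.1873 * cbv - 0.4681 * crv)
    let b = clamp(yv + 1.8556 * cbv)
    return (r, g, b)
}
```

The 10-bit case is the same with offsets 64/512 and ranges 876/896; for BT.601 content the matrix coefficients change but the offsets and scales do not.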
Post not yet marked as solved
0 Replies
240 Views
I compared several options for getting auxiliary images from a CIImage. These options leak an AVSemanticSegmentationMatte when inspected with the Debug Memory Graph:

CIImage.init(data: data, options: [.auxiliarySemanticSegmentationSkinMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationHairMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationTeethMatte: true])

The other options, .auxiliaryDisparity and .auxiliaryPortraitEffectsMatte, do not leak AVDepthData or AVPortraitEffectsMatte.
Post not yet marked as solved
0 Replies
318 Views
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:

extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time)
{
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent();

    float minX = innerRect.x + time * innerRect.z;
    float minY = innerRect.y + time * innerRect.w;
    float cropWidth = (1 - time) * innerRect.w;
    float cropHeight = (1 - time) * innerRect.z;

    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);

    if (coord1.x > minX && coord1.x < minX + cropWidth &&
        coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}

And it crashes on initialization:

class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}

It crashes on the try line with the following error:

Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
    return mix(s1, s2, time);
}