Post not yet marked as solved
I have a project where I capture live video from the camera, pass it through a chain of CIFilters, and render the result into an MTLTexture. It all works well, except that each time CIContext's render:toMTLTexture:commandBuffer:bounds:colorSpace: is called, memory usage increases by ~150 MB and never goes back down. This causes the app to be killed by the OS due to memory pressure after about 15-20 images are processed.
I have isolated the issue to the following process image function:
// Initialise required filters
CIFilter *grayScaleFilter = [CIFilter filterWithName:@"CIColorMatrix" keysAndValues: @"inputRVector", [CIVector vectorWithX:1 / 3.0 Y:1 / 3.0 Z:1 / 3.0 W:0], nil];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIBoxBlur" keysAndValues:kCIInputRadiusKey, [NSNumber numberWithFloat:3.0], nil];
const CGFloat dxFilterValues[9] = { 1, 0, -1, 2, 0, -2, 1, 0, -1};
CIFilter *dxFilter = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dxFilterValues count:9], nil];
const CGFloat dyFilterValues[9] = { 1, 2, 1, 0, 0, 0, -1, -2, -1};
CIFilter *dyFilter = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dyFilterValues count:9], nil];
// Phase filter is my custom filter implemented with a Metal Kernel
CIFilter *phaseFilter = [CIFilter filterWithName:@"PhaseFilter"];
// Apply filter chain to input image
[grayScaleFilter setValue:image forKey:kCIInputImageKey];
[blurFilter setValue:grayScaleFilter.outputImage forKey:kCIInputImageKey];
[dxFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
[dyFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
[phaseFilter setValue:dxFilter.outputImage forKey:@"inputX"];
[phaseFilter setValue:dyFilter.outputImage forKey:@"inputY"];
// Initialize MTLTextures
MTLTextureDescriptor* desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm width:720 height:1280 mipmapped:NO];
desc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> phaseTexture = [CoreImageOperations::device newTextureWithDescriptor:desc];
// Render to MTLTexture
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Memory usage increases by ~150Mb after the following function call!!!
[context render:phaseFilter.outputImage toMTLTexture:phaseTexture commandBuffer:commandBuffer bounds:phaseFilter.outputImage.extent colorSpace:colorSpace];
CFRelease(colorSpace);
return phaseTexture;
I profiled the memory usage using Instruments and found that most of the memory was being used by IOSurface objects, with CoreImage listed as the responsible library and CreateCachedSurface as the responsible caller. (See screenshot below.)
This is very strange because I set up my CIContext not to cache intermediates, with the following line:
CIContext *context = [CIContext contextWithMTLCommandQueue:commandQueue options:@{ kCIContextWorkingFormat: [NSNumber numberWithInt:kCIFormatRGBAf], kCIContextCacheIntermediates: @NO, kCIContextName: @"Image Processor" }];
Any thoughts or advice would be greatly appreciated!
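An editorial note, not from the original post: one mitigation worth trying is wrapping each per-frame render in an autorelease pool, since Core Image's intermediate surfaces are often autoreleased objects that accumulate until the calling thread's pool drains. A hedged sketch in Swift; `makeFilterChain` is a hypothetical helper standing in for the filter setup above, and the filters and context should be created once, not per frame.

```swift
import CoreImage
import Metal

// Sketch: reuse one CIContext, and drain autoreleased CI objects every frame.
// `makeFilterChain` is a hypothetical helper that applies the post's filters.
func processFrame(_ image: CIImage, context: CIContext, texture: MTLTexture,
                  makeFilterChain: (CIImage) -> CIImage) {
    autoreleasepool {
        let output = makeFilterChain(image)
        context.render(output,
                       to: texture,
                       commandBuffer: nil,
                       bounds: output.extent,
                       colorSpace: CGColorSpaceCreateDeviceRGB())
    } // autoreleased CIImage/IOSurface objects become collectible here
}
```

If memory still grows with the pool in place, the surfaces are being retained elsewhere (e.g. by the context's own caches), which narrows down the search.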
I would like to be able to write CIKernels that output a lookup table or other blob of data to be used by subsequent CIKernels. The data generated by such a kernel is fed to subsequent kernels whose corresponding sampler input has been tagged with the "__table" attribute to disable color matching for that input. This is a scenario that already works if one has the ability to allocate the CIImage oneself, so that a nil colorspace can be passed. But when asking a CIKernel for an output image, it is not possible to request an image without a colorspace associated with it. I'm referring to methods like this:
- applyWithExtent:roiCallback:arguments:
There are also no APIs in Core Image that would allow you to strip colorspace information from an existing image, AFAIK. (As in "hey, I know this recipe is tagged with the working/output colorspace, but in reality it contains values that do not encode RGB at all.") If I feed the output image from applyWithExtent:... to another kernel whose sampler has the __table attribute, from my observations it still appears to be colormatched.
I can see three possibilities:
1) I am clearly missing something.
2) The __table attribute no longer has the desired effect, perhaps a regression.
3) A new API is needed to cover this usage scenario.
Any help is greatly appreciated!
Best,
Gabe
I am trying to write a ToneCurveFilter using Core Image. I have the curve data in an RGB image of size 256x1, where every pixel represents a curve value. For example, for red = 200, you fetch the pixel at coordinates (200, 0) and read the red channel from it.
Image: http://i.stack.imgur.com/Njhr6.png
I examined the colour of every pixel in the curve image, and they are correct. (The values are the identity mapping, so the resulting filter shouldn't change the image colour.) I also wrote a kernel for the filter:

kernel vec4 coreImageKernel(uniform sampler src, __table sampler toneCurveData) {
    vec4 color = unpremultiply(sample(src, samplerCoord(src)));
    //return vec4(color.r, color.g, color.b, 1.0);
    vec2 redPointPosition = samplerTransform(toneCurveData, vec2(color.r * 255.0, 0.5));
    float red = (sample(toneCurveData, redPointPosition)).r;
    vec2 greenPointPosition = samplerTransform(toneCurveData, vec2(color.g * 255.0, 0.5));
    float green = (sample(toneCurveData, greenPointPosition)).g;
    vec2 bluePointPosition = samplerTransform(toneCurveData, vec2(color.b * 255.0, 0.5));
    float blue = (sample(toneCurveData, bluePointPosition)).b;
    vec4 resultColor = vec4(red, green, blue, color.a);
    return premultiply(resultColor);
}

I expect this filter not to change my source image, but the result image is darker than the original. I've tried changing color.x * 255.0 to a concrete value, for example 230.0, but the colour of the result image was (224, 223, 224), and I do not understand why. (To test my kernels I use Quartz Composer, a very useful tool.)
I've tried adding an ROI function, but this did not help:

function myROIFunction(samplerIndex, dstRect, __image info) {
    __vec4 dstRectResult = dstRect;
    if (samplerIndex == 1) {
        dstRectResult = info.extent;
    }
    return dstRectResult;
}

function __image main(__image src, __image toneCurveData, __color monohromeColor, __number power) {
    coreImageKernel.ROIHandler = myROIFunction;
    return coreImageKernel.apply(src.definition, toneCurveData, src, toneCurveData, monohromeColor, power);
}

I think the problem is in the coordinate space of the toneCurveData sampler, but I can't understand what exactly. The main question: how do I get a pixel colour from toneCurveData for a given pixel (for example, for the pixel at coordinates (255.0, 0.0) it should be (255.0, 255.0, 255.0))? I've also tried passing an NSData object with the tone-curve data as a parameter of the kernel; that didn't help either.
I guess Apple won't respond to this, but does anyone know what the timeline might be for Apple to provide support for Sony Alpha RAW files?
Or what is the typical timeframe for releasing an update that provides this support?
Can anyone from Apple provide the spline formulation that is used in the CIToneCurve filter? The documentation says that a spline is passed through the points you give it, but there are a lot of spline types in the world, and they can all yield different shapes through the same set of points.
There is a write function documented in the CoreImage Metal shader reference here: https://developer.apple.com/metal/MetalCIKLReference6.pdf
But I'm not sure how to use it. I assumed one would be able to use it on the destination parameter i.e. dest.write(...) but I get the error, "no member named 'write' in 'coreimage::destination'"
How do I use this function?
I've created a custom BoxBlur kernel that produces identical results to Apple's built-in box blur (CIBoxBlur) kernel but my custom kernel is orders of magnitude slower. So naturally I am wondering what I'm doing wrong to get such poor performance. Below is my custom kernel in the Metal shading language. Can you spot why it's so slow? The built-in filter performs well so I can only assume it's something I'm doing wrong.
#include <CoreImage/CoreImage.h>
#import <simd/simd.h>
extern "C" {
namespace coreimage {

float4 customBoxBlurFilterKernel(sampler src) {
    float2 crd = src.coord();
    int edge = 100;
    int minx = crd.x - edge;
    int maxx = crd.x + edge;
    int miny = crd.y - edge;
    int maxy = crd.y + edge;
    float4 sums = float4(0, 0, 0, 0);
    float cnt = 0;
    // compute average of surrounding rgb values
    for (int row = miny; row < maxy; row++) {
        for (int col = minx; col < maxx; col++) {
            float4 samp = src.sample(float2(col, row));
            sums[0] += samp[0];
            sums[1] += samp[1];
            sums[2] += samp[2];
            cnt += 1.;
        }
    }
    return float4(sums[0] / cnt, sums[1] / cnt, sums[2] / cnt, 1);
}

} // namespace coreimage
}
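An editorial note, not from the original post: with edge = 100 this kernel takes a 200x200 neighbourhood, i.e. 40,000 texture samples per output pixel. A box blur is mathematically separable, so the built-in filter can instead run a horizontal pass followed by a vertical pass, around 400 samples per pixel for the same radius, which alone accounts for orders of magnitude. A hedged sketch of the horizontal pass, using the same sampling convention as the kernel above (the vertical pass is analogous with the roles of x and y swapped):

```metal
// Sketch: horizontal pass of a separable box blur.
// Run a second, analogous kernel over y to complete the 2D blur.
float4 boxBlurHorizontal(sampler src) {
    float2 crd = src.coord();
    const int edge = 100;
    float4 sum = float4(0.0);
    for (int dx = -edge; dx <= edge; dx++) {
        sum += src.sample(float2(crd.x + dx, crd.y));
    }
    float n = float(2 * edge + 1);
    return float4(sum.rgb / n, 1.0);
}
```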
I have a .cube file storing LUT data, such as this:
TITLE "Cool LUT"
LUT_3D_SIZE 64
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
0.0157 0.0000 0.0000
0.0353 0.0000 0.0000
My question is: how do I build the NSData required by the CIColorCube filter? When using Metal, I convert this data into an MTLTexture using AdobeLUTParser, but I'm not sure what to do in the case of Core Image.
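An editorial note, not from the original question: CIColorCube expects inputCubeData as a flat buffer of RGBA float values (alpha appended) with inputCubeDimension entries per axis, which matches the .cube table order. A minimal sketch of one way to parse it, assuming a well-formed file where LUT_3D_SIZE precedes the table; error handling is omitted and the function names are illustrative.

```swift
import Foundation
import CoreImage

// Sketch: parse .cube text into the RGBA float data CIColorCube expects.
// Assumes LUT_3D_SIZE appears before the table values; no error handling.
func colorCubeFilter(cubeFileContents: String) -> CIFilter? {
    var size = 0
    var values: [Float] = []
    for line in cubeFileContents.split(separator: "\n") {
        let trimmed = line.trimmingCharacters(in: .whitespaces)
        if trimmed.hasPrefix("LUT_3D_SIZE") {
            size = Int(trimmed.split(separator: " ").last ?? "0") ?? 0
        } else if let first = trimmed.first, first.isNumber || first == "-" {
            let rgb = trimmed.split(separator: " ").compactMap { Float($0) }
            if rgb.count == 3 {
                values.append(contentsOf: rgb)
                values.append(1.0) // CIColorCube wants RGBA; alpha = 1
            }
        }
    }
    guard size > 0, values.count == size * size * size * 4 else { return nil }
    let data = values.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(size, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}
```

Set kCIInputImageKey on the returned filter and read outputImage as with any other CIFilter.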
CVPixelBuffer.h defines
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420', /* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16bits, video-range (luma=[64,940] chroma=[64,960]) */
But when I set the above formats for the camera output, I find that the values in the output pixel buffer exceed the stated range: I see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange.
Is this a bug, or is something wrong with the output? If not, how can I choose the correct matrix to transform the YUV data to RGB?
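An editorial note, not from the original post: the stated luma/chroma ranges are nominal encoding ranges, not hard clamps, so individual samples outside them can legitimately occur. For reference, assuming the buffer is tagged BT.709, the standard 8-bit video-range conversion looks like this (coefficients are the usual BT.709 video-range values; clamp the result to [0, 255]):

```swift
// Sketch: nominal BT.709 video-range 8-bit YCbCr -> RGB conversion.
// 1.164 = 255/219 scales video-range luma up to full range.
func bt709VideoRangeToRGB(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
    let yy = 1.164 * (y - 16.0)
    let r = yy + 1.793 * (cr - 128.0)
    let g = yy - 0.213 * (cb - 128.0) - 0.533 * (cr - 128.0)
    let b = yy + 2.112 * (cb - 128.0)
    return (r, g, b)
}
```

The matrix to use is determined by the buffer's kCVImageBufferYCbCrMatrixKey attachment rather than by the observed sample range.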
I compared several options for getting auxiliary images from a CIImage.
These options leak an AVSemanticSegmentationMatte (as seen with the debug memory graph):
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationSkinMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationHairMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationTeethMatte: true])
The other options, .auxiliaryDisparity and .auxiliaryPortraitEffectsMatte, do not leak AVDepthData or AVPortraitEffectsMatte.
Hi.
I would like to use - as I thought possible - a Core ML model to identify the main colours of an image. The idea is to detect the colours used in fashion images to derive a kind of "colour trend" across a set of images.
I found this question in the forum already, but it never got an answer (and follow-up questions were not answered by the original poster):
https://developer.apple.com/forums/thread/94324
Maybe Core ML models are not the way to do this (are they more about objects and text)? Any hints about other techniques are welcome, too.
The only approach I do not want to follow is using online services, as the images would have to be delivered to them - and usually are kept there. I want to realize an on-premise approach.
Thanks for any hints!
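An editorial note, not from the original thread: one on-device direction that needs no trained model is Core Image's built-in CIKMeans filter (iOS 13+/macOS 10.15+), which clusters an image's pixels and outputs the cluster colours as a tiny inputCount x 1 image. A hedged sketch; the parameter keys are those of the built-in filter, and the pass count is illustrative:

```swift
import CoreImage

// Sketch: dominant colours via the built-in CIKMeans filter.
// The output is a count-by-1 image whose pixels are the cluster colours.
func dominantColors(of image: CIImage, count: Int) -> CIImage? {
    guard let filter = CIFilter(name: "CIKMeans") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(count, forKey: "inputCount")
    filter.setValue(5, forKey: "inputPasses")        // more passes = tighter clusters
    filter.setValue(true, forKey: "inputPerceptual") // cluster in a perceptual space
    filter.setValue(CIVector(cgRect: image.extent), forKey: "inputExtent")
    return filter.outputImage
}
```

Rendering the small output image to a bitmap and reading the pixels back gives the palette; aggregating palettes over a set of images would approximate the "colour trend" described above, entirely on-device.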
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments but the program crashes. Here is my shader code which compiles successfully.
extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
float2 coord1 = t1.coord();
float2 coord2 = t2.coord();
float4 innerRect = t2.extent();
float minX = innerRect.x + time*innerRect.z;
float minY = innerRect.y + time*innerRect.w;
float cropWidth = (1 - time) * innerRect.z;
float cropHeight = (1 - time) * innerRect.w;
float4 s1 = t1.sample(coord1);
float4 s2 = t2.sample(coord2);
if ( coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
return s1;
} else {
return s2;
}
}
And it crashes on initialization.
class CIWipeRenderer: CIFilter {
var backgroundImage:CIImage?
var foregroundImage:CIImage?
var inputTime: Float = 0.0
static var kernel:CIColorKernel = { () -> CIColorKernel in
let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
let data = try! Data(contentsOf: url)
return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) //Crashes here!!!!
}()
override var outputImage: CIImage? {
guard let backgroundImage = backgroundImage else {
return nil
}
guard let foregroundImage = foregroundImage else {
return nil
}
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
}
}
It crashes in the try line with the following error:
Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError
If I replace the kernel code with the following, it works like a charm:
extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
return mix(s1, s2, time);
}
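An editorial note, not from the original post, on the likely cause: CIColorKernel is restricted to kernels whose inputs are plain colour values (coreimage::sample_t); a function taking coreimage::sampler arguments is a general kernel, so constructing it as a CIColorKernel fails, surfacing here as nilError. A hedged sketch of loading the same function as a plain CIKernel instead; note that apply then requires an ROI callback (identity here, assuming the wipe only reads each input at the output position):

```swift
import CoreImage

// Sketch: load a sampler-based kernel as a general CIKernel, not CIColorKernel.
let kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()

// CIKernel.apply needs an ROI callback, unlike CIColorKernel.apply:
// kernel.apply(extent: backgroundImage.extent,
//              roiCallback: { _, rect in rect },  // 1:1 sampling assumed
//              arguments: [backgroundImage, foregroundImage, inputTime])
```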
Hi, I'm using the CIAreaMinMax filter to get the brightest and darkest color information from an image.
Normally the filter should output an image with two pixels (brightest and darkest). However, when I apply this filter to an image containing two similar colors, the result is incorrect. The symptom is that the two pixels' red channels have been switched, while the G and B values have no problem.
The test image I am using is a PNG containing only two colors:
RGB(37, 62, 88) and RGB(10, 132, 255).
After being processed by the code, it outputs an image containing two pixels:
RGB(10, 62, 88) and RGB(37, 132, 255).
In below is the test code for swift playground:
import Cocoa
import CoreImage
import CoreGraphics
func saveImage(_ image: NSImage, atUrl url: URL) {
let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)!
let newRep = NSBitmapImageRep(cgImage: cgImage)
newRep.size = image.size
let pngData = newRep.representation(using: .png, properties: [:])!
try! pngData.write(to: url)
}
var sourceImage = CIImage.init(contentsOf: URL.init(fileURLWithPath: "/Users/ABC/Downloads/test.png"))!
let filter = CIFilter(name: "CIAreaMinMax")!
filter.setValue(sourceImage, forKey: kCIInputImageKey)
let civ = CIVector.init(x: sourceImage.extent.minX, y: sourceImage.extent.minY, z: sourceImage.extent.width, w: sourceImage.extent.height)
filter.setValue(civ, forKey: kCIInputExtentKey)
var filteredImage = filter.outputImage!
let context = CIContext(options: [.workingColorSpace: kCFNull!])
let filteredCGImageRef = context.createCGImage(
filteredImage,
from: filteredImage.extent)
let output = NSImage(cgImage: filteredCGImageRef!, size: NSSize.init(width: filteredImage.extent.width, height: filteredImage.extent.height))
saveImage(output, atUrl: URL.init(fileURLWithPath: "/Users/ABC/Downloads/output.png"))
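An editorial note, not from the original post: CIAreaMinMax computes the minimum and maximum per channel, not per whole pixel, so the observed output is arguably the documented behaviour rather than a bug. Taking the per-channel extremes of the two input colours reproduces the "switched" result exactly:

```swift
// Per-channel min/max of the two test colours reproduces the observed output.
let c1 = [37, 62, 88]
let c2 = [10, 132, 255]
let minPixel = zip(c1, c2).map { min($0, $1) } // [10, 62, 88]
let maxPixel = zip(c1, c2).map { max($0, $1) } // [37, 132, 255]
```

To find the single brightest/darkest colour as a unit, a reduction over luminance would be needed instead.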
Hello,
The Metal compiler crashes for me when attempting to compile a Metal source file that contains Core Image kernel implementations. This is a minimal version of a file that produces the crash:
extern "C" { namespace coreimage {

inline void swap(thread float4 &a, thread float4 &b) {
    float4 tmp = a;
    a = min(a, b);
    b = max(tmp, b);
}

typedef sample_t s;

float4 median_reduction_3(s v0, s v1, s v2) {
    swap(v1, v2); swap(v0, v2); swap(v0, v1);
    return v1;
}

}}
Some observations:
If inline is removed, the code compiles fine. I'm not sure if there's a performance impact, as the backend LLVM compiler may well decide to inline it on its own.
If the calls to swap are commented out in the median reduction function, the code compiles.
If the -fcikernel compilation flag is not used, it also compiles fine (doesn't crash). Of course, that configuration doesn't allow the use of functions inside the file as Core Image kernels.
I'm using the build settings recommended in this WWDC20 session (without indicating the location of the header files, since it's empty in my project and the new compiler interprets the argument following -I as a directory).
Hi, I am using a filter in my project. When I load an image from the gallery and apply the filter, it works fine. But when I capture an image and apply the filter, I get a strange result: the captured image is rotated left by 90 degrees. It would be helpful if someone could suggest a better solution. I am using Swift version 1.2. Thank you.
Hi, recently I received a crash report from Firebase Crashlytics. It only happens on iOS 15. I do not really know what the crash means. Can someone please explain it to me? Thank you very much!
EXC_BREAKPOINT 0x0000000181064114
Crashed: com.apple.root.utility-qos
0 CoreFoundation 0xbc114 CFDataGetBytes + 156
1 ImageIO 0x1d48 CGImageGetImageSource + 156
2 UIKitCore 0x1d0954 -[_UIImageCGImageContent dealloc] + 48
3 libobjc.A.dylib 0x755c AutoreleasePoolPage::releaseUntil(objc_object**) + 200
4 libobjc.A.dylib 0x3928 objc_autoreleasePoolPop + 208
5 libdispatch.dylib 0x463c _dispatch_last_resort_autorelease_pool_pop + 44
6 libdispatch.dylib 0x16064 _dispatch_root_queue_drain + 1056
7 libdispatch.dylib 0x165f8 _dispatch_worker_thread2 + 164
8 libsystem_pthread.dylib 0x10b8 _pthread_wqthread + 228
9 libsystem_pthread.dylib 0xe94 start_wqthread + 8
Hello. I have a problem with the built-in QR code detection of a vCard 4.0 in iOS. As described in https://tools.ietf.org/html/rfc6350, vCard version 4.0 is always encoded with UTF-8. But it seems that the built-in QR code reader of iOS isn't able to decode the special characters correctly. Or am I missing something?
Here is an example. Encode it with any QR code generator on the Internet and point your iOS Camera app at it. You will see that none of the special characters are displayed correctly.
BEGIN:VCARD
VERSION:4.0
N:T€st;Björn
ORG:ÖÜÄ
TEL;CELL:+12 (34) 567890
ADR:;;Blà St.;Blè Town;;12345;ç-Land
URL:https://www.test123.com
EMAIL;WORK;INTERNET:test@test123.com
END:VCARD
Hello.
I am an iOS app developer.
I want to get the contents of the corresponding QR from the QR code image.
As of iOS 15.0.2, the iOS API CIDetector featuresInImage: returns no data.
The same API works normally in iOS 14.6.
Please check whether there is a bug in CIDetector featuresInImage: in iOS 15.0.2.
Best regards,
Hyoung-jin, Kim
In iOS, I am creating a CIRAWFilter object by calling init() on the image file's URL and setting "CIRAWFilterOption.allowDraftMode : true" in the options dictionary passed in to the init call. If I set a scaleFactor = 0.5 and then call .outputImage() on the filter object, I get back a CIImage with an extent that is half the size of what I expect. If I set scaleFactor = 0.51 then I get back a CIImage with the expected extent.
For example, starting with an original RAW image file of size 4,032 x 2,048:
If I set the scaleFactor to 0.75 and call outputImage() on the CIRawFilter object, I get back a CIImage object with the extent 3,024 x 1,536 which has width and height at 75% of the original image's width and height.
However, if I set the scaleFactor to 0.50 and call outputImage() on the CIRawFilter object, I get back an image with the extent 1,008 x 512, which is 25% of the size of the original RAW image and not 50%. If I set "allowDraftMode : false" then I always get back the correctly sized CIImage.
Is this a bug or expected behavior?
I am working on a face-editing function where I want to make the face smooth and give it a whiter skin tone.
I am able to smooth the face in an image using the YUCIHighPassSkinSmoothing library.
But I am not able to whiten the face; I can only smooth it.
self.inputCIImage = CIImage(cgImage: self.imgPhotoForEdit.image!.cgImage!)
self.filter.inputImage = self.inputCIImage
self.filter.inputRadius = 6.0
self.filter.inputAmount = NSNumber(value: selectedSmoothingValue)
self.filter.inputSharpnessFactor = 0.0
let outputCIImage = filter.outputImage!
let outputCGImage = self.context.createCGImage(outputCIImage, from: outputCIImage.extent)
let outputUIImage = UIImage(cgImage: outputCGImage!, scale: self.imgPhotoForEdit.image!.scale, orientation: self.imgPhotoForEdit.image!.imageOrientation)
self.imgPhotoForEdit.image = outputUIImage
Here is my code, implemented using the YUCIHighPassSkinSmoothing library.
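An editorial note, not from the original post: a hedged sketch of one way to add whitening on top of the smoothing result, by chaining the built-in CIGammaAdjust and CIColorControls filters. The parameter values below are illustrative, not tuned, and in practice the effect should be masked to the face region (e.g. via a Vision face rectangle) so the whole image isn't lightened.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: lighten an already-smoothed image; parameter values are illustrative.
func whiten(_ image: CIImage, amount: Float) -> CIImage {
    let gamma = CIFilter.gammaAdjust()
    gamma.inputImage = image
    gamma.power = 1.0 - 0.3 * amount          // power < 1 brightens midtones

    let controls = CIFilter.colorControls()
    controls.inputImage = gamma.outputImage
    controls.brightness = 0.05 * amount       // small global lift
    controls.saturation = 1.0 - 0.2 * amount  // desaturate slightly toward white
    return controls.outputImage ?? image
}
```

This output can replace `outputCIImage` in the code above before it is converted to a CGImage.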