Post not yet marked as solved
I want to read the text encoded in a QR code image.
On iOS 15.0.2, CIDetector's featuresInImage: returns no features, but the same code returns results on iOS 14.6.
Can anyone explain what the reason might be?
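For context, here is a minimal sketch of the kind of CIDetector-based reading being described (the image source and function shape are assumptions), which reportedly returns results on iOS 14.6 but none on iOS 15.0.2:

```swift
import CoreImage

// Sketch: read the payload of the first QR code found in a CIImage.
func readQRCode(from image: CIImage) -> String? {
    let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: image) ?? []
    for case let feature as CIQRCodeFeature in features {
        return feature.messageString
    }
    return nil // no QR code detected (the behaviour reported on iOS 15.0.2)
}
```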
Hello,
I am trying to create an animated sequence of HEIC images, but I cannot save the per-frame duration property. It seems this is a well-known bug: https://github.com/SDWebImage/SDWebImage/issues/3120
The kCGImagePropertyHEICSDictionary is never saved.
Here's a sample project to reproduce the bug: ImageIOHEICSEncodeDecodeBug.zip
Has anybody managed to save this information in a HEIC sequence?
Thanks!
Here's how I am writing and reading the image sequence:
- (void)testHEICSBug {
    // First, load an animated image (GIF).
    // You can also use an animated PNG instead; the result is the same.
    NSData *GIFData = [NSData dataWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image1" ofType:@"gif"]];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)GIFData, nil);
    NSUInteger frameCount = CGImageSourceGetCount(source);
    NSAssert(frameCount > 1, @"GIF frame count > 1");
    // Split into frames, encode to HEICS
    NSMutableData *heicsData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)heicsData, (__bridge CFStringRef)AVFileTypeHEIC, frameCount, nil);
    for (size_t i = 0; i < frameCount; i++) {
        // First get the GIF input frame and its duration
        CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, i, nil);
        NSDictionary *inputProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, i, nil);
        NSDictionary *inputDictionary = inputProperties[(__bridge NSString *)kCGImagePropertyGIFDictionary];
        NSTimeInterval duration = [inputDictionary[(__bridge NSString *)kCGImagePropertyGIFUnclampedDelayTime] doubleValue];
        NSAssert(cgImage, @"CGImage not nil");
        NSAssert(duration > 0, @"Input duration > 0");
        // Then, encode the frame into the HEICS animated image
        NSDictionary *outputProperties = @{(__bridge NSString *)kCGImagePropertyHEICSDictionary : @{(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime : @(duration)}};
        // Note: __bridge, not __bridge_retained, to avoid leaking the dictionary
        CGImageDestinationAddImage(destination, cgImage, (__bridge CFDictionaryRef)outputProperties);
        CGImageRelease(cgImage);
    }
    // Finalize the HEICS image data
    BOOL result = CGImageDestinationFinalize(destination);
    NSAssert(result, @"Encode HEICS failed");
    CFRelease(destination);
    CFRelease(source);
    // Next, use ImageIO to decode the HEICS and check the duration
    CGImageSourceRef newSource = CGImageSourceCreateWithData((__bridge CFDataRef)heicsData, nil);
    frameCount = CGImageSourceGetCount(newSource);
    NSAssert(frameCount > 1, @"New HEICS should be an animated image");
    NSUInteger frameIndex = 1; // The 2nd frame is picked here; any frame shows the issue.
    NSDictionary *newProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(newSource, frameIndex, nil);
    NSDictionary *newDictionary = newProperties[(__bridge NSString *)kCGImagePropertyHEICSDictionary];
    NSTimeInterval newDuration = [newDictionary[(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime] doubleValue];
    CGImageRef newImage = CGImageSourceCreateImageAtIndex(newSource, frameIndex, nil);
    // Now, check the HEICS frame duration. However, it's nil :(
    // Only the image itself is kept.
    NSAssert(newImage, @"frame image is not nil");
    NSAssert(newDuration > 0, @"Decoding the HEICS (encoded from GIF) loses the frame duration");
    CGImageRelease(newImage);
    CFRelease(newSource);
}
I am not able to scan or read 1D barcodes from images in the device photo library, whereas I have achieved the same for QR codes (2D barcodes) using the code below.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
if (detector) {
    CIImage *img = [[CIImage alloc] initWithImage:image];
    NSArray *imgFeatures = [detector featuresInImage:img];
    NSString *contents;
    for (CIQRCodeFeature *imgFeature in imgFeatures) {
        DLog(@"decode %@ ", imgFeature.messageString);
        contents = imgFeature.messageString;
        if (contents) {
            DLog(@"Success");
        } else {
            DLog(@"Failure");
        }
        return;
    }
}
As per my inference, CIDetector supports only the following detector types:
CIDetectorTypeFace
CIDetectorTypeRectangle
CIDetectorTypeQRCode
CIDetectorTypeText
https://developer.apple.com/documentation/coreimage/cidetector/detector_types?language=objc
Please let me know how I can read/scan the barcode images from the device photo library.
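One hedged alternative sketch: unlike CIDetector, Vision's VNDetectBarcodesRequest handles 1D symbologies (EAN-13, Code 128, and so on) as well as QR codes, so something along these lines may work for photo-library images. The function shape is an assumption:

```swift
import Vision

// Sketch: detect any barcode (1D or 2D) in a CGImage and return its payload.
func readBarcode(from cgImage: CGImage) -> String? {
    let request = VNDetectBarcodesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
    let observations = request.results as? [VNBarcodeObservation] ?? []
    return observations.first?.payloadStringValue
}
```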
We see strange crashes when running our app since the macOS 12 beta (still present on macOS 12.0.1). We have not been able to fully identify the issue, but it seems to happen when video playback continues in an AVPlayer: sometimes after returning from the background, sometimes when resuming playback directly. Xcode points to code in libsystem_kernel.dylib (a different location every time, and never in our own code).
The log will show:
-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion 'MTLResource 0x600002293790 (label: (null)), referenced in cmd buffer 0x7f7b2200a000 (label: (null)) is in volatile or empty purgeable state at commit'
We tried finding the object 0x600002293790 and 0x7f7b2200a000 but this gave no additional information as to why the app crashes.
We are using a custom VideoCompositor: AVVideoCompositing and initialise the CIContext for the work done here with these options:
if let mtlDevice = MTLCreateSystemDefaultDevice() {
    let options: [CIContextOption: Any] = [
        CIContextOption.useSoftwareRenderer: false,
        CIContextOption.outputPremultiplied: false,
    ]
    let context = CIContext(mtlDevice: mtlDevice, options: options)
}
We are not sure whether this is an Xcode 13 debug issue, a macOS 12.0.1 Monterey issue, or an actual bug: we have not seen the crash in builds made without Xcode, which is what surfaces this information. However, we have also seen strange crashes on audio/video threads that we could not trace back to our code.
The crash has never occurred on Xcode 12 or on macOS Big Sur during previous testing.
Any information as to locating the source of the issue or a solution would be awesome.
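Since the assertion comes from MTLDebugCommandBuffer, it is presumably the Metal API validation layer (a scheme diagnostic) that is tripping, which would also explain why builds run outside Xcode don't crash. One low-effort way to narrow it down is to label every Metal object you create so the assertion stops printing "(label: (null))". The setup below is a sketch; the names are assumptions:

```swift
import Metal
import CoreImage

// Sketch: label the Metal objects used by a custom AVVideoCompositing session
// so validation failures name the culprit instead of "(label: (null))".
func makeCompositorContext() -> (CIContext, MTLCommandQueue)? {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue() else { return nil }
    queue.label = "VideoCompositor.commandQueue"
    let context = CIContext(mtlDevice: device, options: [
        .useSoftwareRenderer: false,
        .outputPremultiplied: false,
        .name: "VideoCompositor.ciContext" // appears in Core Image debug output
    ])
    return (context, queue)
}
```

Per-frame command buffers and any textures you create can be labeled the same way (`commandBuffer.label`, `texture.label`) right after creation.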
I have CVPixelBuffers in kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange, which is 10-bit HDR. I need to convert these to RGB and display them in an MTKView. I need to know the correct pixel format to use, the BT.2020 conversion matrix, and how to display the 10-bit RGB pixel buffer in the MTKView.
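For the matrix part, here is a sketch of the BT.2020 video-range YCbCr-to-RGB conversion for 10-bit pixels, written as plain Swift so the constants are easy to check; in practice the same math would live in a Metal shader. The coefficients follow from the ITU-R BT.2020 luma constants (Kr = 0.2627, Kb = 0.0593); the function shape is an assumption:

```swift
// Sketch: convert one 10-bit video-range YCbCr pixel to (linear-ish) RGB.
func bt2020VideoRangeToRGB(y10: Int, cb10: Int, cr10: Int) -> (r: Float, g: Float, b: Float) {
    // Undo the 10-bit video-range encoding: Y in [64, 940], Cb/Cr in [64, 960]
    let y  = Float(y10 - 64) / 876.0
    let cb = Float(cb10 - 512) / 896.0
    let cr = Float(cr10 - 512) / 896.0
    // BT.2020: R = Y + 2(1-Kr)·Cr, B = Y + 2(1-Kb)·Cb, G balances the luma sum
    let r = y + 1.4746 * cr
    let g = y - 0.1646 * cb - 0.5714 * cr
    let b = y + 1.8814 * cb
    return (r, g, b)
}
```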
I am wondering whether it is possible to detect a document or an envelope with an aspect ratio (width / height) equal to or greater than 2.0 on iOS 15 using a CIDetector object.
I found that starting from iOS 15, my application stopped detecting envelopes with the aforementioned aspect ratios.
I have tried to use the CIDetectorAspectRatio, CIDetectorFocalLength, and CIDetectorMinFeatureSize options with the desired values to fine-tune the detection, but that didn't solve the problem.
The following is the method I'm using to get the detected rectangles. It returns a CIRectangleFeature array with one element when running on an iPhone with an iOS version earlier than iOS 15, but an empty array when running on iOS 15 or later.
static func rectangles(inImage image: CIImage) -> [CIRectangleFeature]? {
    let rectangleDetector = CIDetector(ofType: CIDetectorTypeRectangle,
                                       context: CIContext(options: nil),
                                       options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let rectangleFeatures = rectangleDetector?.features(in: image) as? [CIRectangleFeature] else {
        return nil
    }
    return rectangleFeatures
}
Thank you in advance.
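A possible alternative, offered as a hedged sketch rather than a confirmed fix: Vision's VNDetectRectanglesRequest exposes explicit aspect-ratio limits, which CIDetector does not. The concrete ratio values below are assumptions for "width/height ≥ 2" shapes (as far as I can tell, Vision's aspect ratio is the shorter side divided by the longer side):

```swift
import Vision

// Sketch: detect very elongated rectangles (e.g. envelopes) with Vision.
func detectWideRectangles(in cgImage: CGImage,
                          completion: @escaping ([VNRectangleObservation]) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        completion(request.results as? [VNRectangleObservation] ?? [])
    }
    request.minimumAspectRatio = 0.1 // allow very elongated rectangles
    request.maximumAspectRatio = 0.5 // i.e. width/height of 2.0 or more
    request.maximumObservations = 1
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```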
Hi,
Apologies, but I am completely new to Apple development, struggling to find the right information that I need, and would really appreciate some pointers from experienced developers as to the best approach for a project I am starting.
The use case I have relates to using properties of colour to predict the density of a fluid from a photograph.
Each photograph will simply be a single colour. The properties of the photograph (colour intensity, brightness, saturation) will vary between photographs as the density of the fluid changes, and I am looking to use these (or possibly other similar properties) to determine a value for the fluid density.
What I would like to ask is:
1- Do you think Core ML is the best approach for predicting the density based upon the colour properties of the photograph, or should I start somewhere else?
2- Can you point me to any helpful related documentation that will help me get started?
I hope someone can help.
Many thanks in advance
Steve
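One possible starting point before reaching for Core ML, offered as a sketch: Core Image's built-in CIAreaAverage filter reduces a photo to a single average colour, whose components could then be fed into a simple regression model (or even a hand-fitted curve). Error handling is omitted and the function shape is an assumption:

```swift
import CoreImage

// Sketch: reduce an image to its average RGB colour via CIAreaAverage.
func averageColor(of image: CIImage, context: CIContext) -> (r: Float, g: Float, b: Float)? {
    guard let filter = CIFilter(name: "CIAreaAverage") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgRect: image.extent), forKey: "inputExtent")
    guard let output = filter.outputImage else { return nil }
    // Render the 1x1 average-colour result into a 4-byte RGBA buffer
    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(output, toBitmap: &pixel, rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
    return (Float(pixel[0]) / 255, Float(pixel[1]) / 255, Float(pixel[2]) / 255)
}
```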
I have tried everything, but it looks to be impossible to get MTKView to display the full range of colors of an HDR CIImage made from a CVPixelBuffer (in 10-bit YUV format). Only built-in layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, and AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying the full BT.2020 HLG color range? Why does MTKView clip colors even if I set colorPixelFormat to bgra10_xr or bgra10_xr_srgb?
convenience init(frame: CGRect, contentScale: CGFloat) {
    self.init(frame: frame)
    contentScaleFactor = contentScale
}

convenience init(frame: CGRect) {
    let device = MetalCamera.metalDevice
    self.init(frame: frame, device: device)
    colorPixelFormat = .bgra10_xr
    self.preferredFramesPerSecond = 30
}

override init(frame frameRect: CGRect, device: MTLDevice?) {
    guard let device = device else {
        fatalError("Can't use Metal")
    }
    guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
        fatalError("Can't make Command Queue")
    }
    commandQueue = cmdQueue
    context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])
    super.init(frame: frameRect, device: device)
    self.framebufferOnly = false
    self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
And then the rendering code:
override func draw(_ rect: CGRect) {
    guard let image = self.image else {
        return
    }
    let dRect = self.bounds
    let drawImage: CIImage
    let targetSize = dRect.size
    let imageSize = image.extent.size
    // Aspect-fit the image into the view bounds
    let scalingFactor = min(targetSize.width / imageSize.width, targetSize.height / imageSize.height)
    let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
    let translation = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor) / 2,
                              y: (targetSize.height - imageSize.height * scalingFactor) / 2)
    let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
    let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
    drawImage = image.transformed(by: scalingTranslationTransform)
    let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()
    guard let drawable = self.currentDrawable else {
        return
    }
    let texture = drawable.texture
    var colorSpace: CGColorSpace
    if #available(iOS 14.0, *) {
        colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
    } else {
        // Fallback on earlier versions
        colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    }
    NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")
    context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)
    commandBuffer?.present(drawable, afterMinimumDuration: 1.0 / Double(self.preferredFramesPerSecond))
    commandBuffer?.commit()
}
Hi
I am getting frames of video data from a third-party SDK. The object contains the data buffer, data length, y buffer, u buffer, v buffer, and a few more fields related to the SDK. The data is I420 (judging from the name of the object). I am using the following code to try to make an NSImage from the data.
var pseudoVideoData = Data(bytes: buffer!, count: Int(bufSize))
let cgImg = pseudoVideoData.withUnsafeMutableBytes { (ptr) -> CGImage in
    let ctx = CGContext(
        data: ptr.baseAddress,
        width: Int(Double(windowWidth) * 0.562), // Why?
        height: Int(Double(windowHeight) * 0.519), // Why? again???
        bitsPerComponent: 8,
        bytesPerRow: Int(4 * streamWidth),
        space: CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: bmInfo
    )!
    return ctx.makeImage()!
}
let imgSize = NSSize(width: CGFloat(windowWidth), height: CGFloat(windowHeight))
let img = NSImage(cgImage: cgImg, size: imgSize)
self.pseudoVideoView.image = img
I can blast the image into an NSImageView.image, but the image is missing colour. I can get the y buffer, u buffer, and v buffer, but I don't know how to combine all the data into a nice coloured image.
Can someone point me to a URL or some sample code that I can look at to get over this problem?
Thanks and Best Regards
John
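One possible route, offered as a sketch: copy the Y, U, and V planes into a planar CVPixelBuffer and let Core Image do the YUV-to-RGB conversion. The plane pointers, sizes, and tightly-packed source strides are assumptions standing in for the SDK's actual buffers:

```swift
import CoreImage
import CoreVideo

// Sketch: wrap raw I420 planes in a CVPixelBuffer and make a CIImage from it.
func makeCIImage(y: UnsafePointer<UInt8>, u: UnsafePointer<UInt8>, v: UnsafePointer<UInt8>,
                 width: Int, height: Int) -> CIImage? {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_420YpCbCr8PlanarFullRange, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }
    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    // Copy each plane row by row, respecting the destination stride.
    // I420 chroma planes are half the width and height of the luma plane.
    let planes: [(UnsafePointer<UInt8>, Int, Int)] = [
        (y, width, height), (u, width / 2, height / 2), (v, width / 2, height / 2)
    ]
    for (index, (src, planeWidth, planeHeight)) in planes.enumerated() {
        guard let dst = CVPixelBufferGetBaseAddressOfPlane(buffer, index) else { return nil }
        let dstStride = CVPixelBufferGetBytesPerRowOfPlane(buffer, index)
        for row in 0..<planeHeight {
            memcpy(dst + row * dstStride, src + row * planeWidth, planeWidth)
        }
    }
    return CIImage(cvPixelBuffer: buffer)
}
```

The resulting CIImage can then be rendered to a CGImage with a CIContext and wrapped in an NSImage, with the colour conversion handled for you.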
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image; the bottom-left corner of an image is supposed to be (0,0). But my experiments show something else.
I created a prototype that rotates a CIImage by pi/10 radians on each button click. Here is the code I wrote.
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}

private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi / 10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}
But in the logs, I see that the extent of the images has negative origin.x and origin.y. What does that mean? Relative to what is it negative, and where exactly is (0,0)? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
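For what it's worth, a short sketch of what the negative extent means: Core Image has no fixed canvas. An image's extent is simply the rectangle its pixels occupy in the infinite working coordinate space, and rotating around the image centre leaves part of that rectangle below (0, 0). If a result anchored at the origin is needed, the image can be translated back afterwards:

```swift
import CoreImage

// Sketch: translate an image so its extent starts at the origin again.
func normalizedToOrigin(_ image: CIImage) -> CIImage {
    let extent = image.extent
    return image.transformed(by: CGAffineTransform(translationX: -extent.origin.x,
                                                   y: -extent.origin.y))
}
```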
In the "Explore Core Image kernel improvements" session, David mentioned that it is now possible to compile [[stitchable]] CI kernels at runtime. However, I fail to get it working.
The kernel requires the #import of <CoreImage/CoreImage.h> and linking against the CoreImage Metal library. But I don't know how to link against the library when compiling my kernel at runtime. Also, according to the Metal Best Practices Guide, "the #include directive is not supported at runtime for user files."
Any guidance on how the runtime compilation works is much appreciated! 🙂
From the image-capture-core-rs crate here:
https://github.com/brandonhamilton/image-capture-core-rs/issues/7
Only device:didOpenSessionWithError: fires when connecting a PTP (Picture Transfer Protocol) device, with None for the error value and an NSArray with a count of 0.
decl.add_method(
    sel!(device:didOpenSessionWithError:),
    device_did_open_session_with_error as extern "C" fn(&Object, Sel, id, id),
);
println!(" 📸 add_method didCloseSessionWithError");
decl.add_method(
    sel!(device:didCloseSessionWithError:),
    device_did_close_session_with_error as extern "C" fn(&Object, Sel, id, id),
);
println!(" 📸 add_method didRemoveDevice");
decl.add_method(
    sel!(didRemoveDevice:),
    device_did_remove_device as extern "C" fn(&Object, Sel, id),
);
println!(" 📸 add_method withCompleteContentCatalog");
decl.add_method(
    sel!(withCompleteContentCatalog:),
    device_did_become_ready as extern "C" fn(&Object, Sel, id),
);
Do I need to use the fancier cameraDevice.requestOpenSession() with the callback function from here?
https://developer.apple.com/documentation/imagecapturecore/icdevice/3142916-requestopensession
As seen on StackOverflow:
https://stackoverflow.com/questions/68978185/apple-ptp-withcompletecontentcatalog-not-firing-rust-obj-c
In the past we have tested our face anti-spoofing on iOS 12 and iOS 13 with the iPhone 6, 6s, and 10, and it was working. However, with iOS 14 we have found that the input from the camera no longer works with the anti-spoofing: images taken from the camera produce poor scores on whether the face in the image is a real person. The machine-learning model works by reading the pixels and checking many things, including the depth of the face, the background behind the head, and whether there appears to be image manipulation at the pixel level. We are very confident we have not changed our app in any way, so we are asking whether any change to the iOS 14 camera could have affected the image delivered to public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection). Currently, the model works great on Android phones.
This is a weird Xcode 13 beta bug (including beta 5): Metal Core Image kernels fail to load from the library, giving this error:
2021-08-26 12:05:23.806226+0400 MetalFilter[23183:1751438] [api] +[CIKernel kernelWithFunctionName:fromMetalLibraryData:options:error:] Cannot initialize kernel with given library data.
[ERROR] Failed to create CIColorKernel: Error Domain=CIKernel Code=6 "(null)" UserInfo={CINonLocalizedDescriptionKey=Cannot initialize kernel with given library data.}
But there is no such error with Xcode 12.5; the kernel loads fine. The error only occurs with the Xcode 13 beta.
As of 08/19/21, when calling the new API writeHEIF10RepresentationOfImage with the same arguments as writeHEIFRepresentationOfImage (minus the format argument), I get the following exception...
*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0] (NSInvalidArgumentException)
I am assuming that the new API isn't working, but I'm posting here for visibility.
I used Xcode 13.0 Beta 5 and I tested it on an iPhone 12 running iOS 15.0 Beta 6
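For reference, a minimal sketch of the call in question as it looks from Swift, assuming an output URL and a CIImage named `image`; unlike writeHEIFRepresentation(of:to:format:colorSpace:options:), the HEIF10 variant takes no `format:` parameter:

```swift
import CoreImage

// Sketch: write a 10-bit HEIF file via the iOS 15 API that raises the exception.
func writeHEIF10(_ image: CIImage, to url: URL) throws {
    let context = CIContext()
    try context.writeHEIF10Representation(of: image,
                                          to: url,
                                          colorSpace: CGColorSpaceCreateDeviceRGB(),
                                          options: [:])
}
```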
Is anyone using the new CIRAWFilter API successfully? This is the one introduced in iOS 15, not the OG one. I'm able to instantiate the class and get an output image, but as soon as I try to use the same instance a second time (in any way), I get a crash.
I'm storing the CIRAWFilter in an instance variable to tweak its parameters (such as exposure) later, but while the CIRAWFilter object is retained, it seems like it's an empty shell. It either crashes with EXC_BAD_ACCESS or responds that it was sent an unrecognized selector.
Makes it tough to adopt this new API!
Here's a reproducible code sample for the curious (make sure to add a RAW file called example.dng to the bundle):
import SwiftUI
import Photos
import CoreImage
import CoreImage.CIFilterBuiltins

final class FilterHolder: ObservableObject {
    var rawFilter: CIRAWFilter? = nil
}

struct ContentView: View {
    enum RawError: String, Error {
        case noURL, noFilter, noImage, noCGImage, noAsset, noFullSizeUrl, unhandled
    }

    @State var image: Image?
    @State var rawError: RawError? = nil
    @State var ev: Float = 0.0
    @StateObject var filterHolder = FilterHolder()

    var body: some View {
        VStack {
            if let image = image {
                VStack {
                    image.resizable()
                        .aspectRatio(contentMode: .fit)
                        .frame(maxWidth: .infinity, maxHeight: .infinity)
                    Slider(value: $ev, in: 0...3)
                }
            } else {
                if let error = rawError {
                    Text("Error: \(error.localizedDescription)")
                } else {
                    Text("Loading…")
                }
            }
        }.onChange(of: ev) { newValue in
            filterHolder.rawFilter?.exposure = newValue
            render()
        }.onAppear {
            loadLocalImage()
        }
    }

    func loadLocalImage() {
        do {
            guard let url = Bundle.main.url(forResource: "example", withExtension: "dng") else {
                throw RawError.noURL
            }
            guard let filter = CIRAWFilter(imageURL: url) else {
                throw RawError.noFilter
            }
            filter.neutralTemperature = 5600.0
            filterHolder.rawFilter = filter
            render()
        } catch {
            rawError = .unhandled
        }
    }

    func render() {
        do {
            guard let output = filterHolder.rawFilter?.outputImage else {
                throw RawError.noImage
            }
            let context = CIContext(options: nil)
            guard let cgImage = context.createCGImage(output, from: output.extent) else {
                throw RawError.noCGImage
            }
            image = Image(uiImage: UIImage(cgImage: cgImage))
        } catch {
            rawError = .unhandled
        }
    }
}
The image should load correctly once, but when the slider is adjusted, you'll see the crash.
Filed as FB9524345
We are implementing a CIImageProcessorKernel that uses an MTLRenderCommandEncoder to perform some mesh-based rendering into the output's metalTexture. This works on iOS but crashes on macOS, because the usage of the output texture does not always include renderTarget. Sometimes the output texture can be used as a render target, sometimes not; it seems Core Image's internal texture cache contains both kinds of textures, and which one we get depends on the order in which filters are executed.
So far we only observed this on macOS (on different Macs, even on M1 and macOS 12 Beta) but not on iOS (also not on an M1 iPad).
We would expect to always be able to use the output’s texture as render target so we can use it as a color attachment for the render pass.
Is there some way to configure a CIImageProcessorKernel to always get renderTarget output textures? Or do we really need to render into a temporary texture and blit the result into the output texture? This would be a huge waste of memory and time…
I have followed the instructions from the video, but I get this error:
air-lld: framework not found CoreImage
air-lld command failed with exit code 1 (use -v to see invocation)
I'm using Xcode 13 Beta 4.
Please add a sample project. Thank you!
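In case it helps anyone comparing setups: as far as I can tell, the session's configuration for runtime-linkable kernels boils down to a Metal linker flag, whose xcconfig form would be roughly the fragment below. Whether this resolves the beta-specific "framework not found" failure is an open question, and the setting name mapping is an assumption:

```
// Hypothetical xcconfig fragment for "Other Metal Linker Flags"
MTLLINKER_FLAGS = -framework CoreImage
```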
Hello everyone! In my project, I am not able to link a precompiled ci.metallib, but I have the shader source for it, which looks like this:
#define TRACKING_SEVERITY 0.025
#define TRACKING_SPEED 0.2
#define SHIMMER_SPEED 30.0
#define RGB_MASK_SIZE 2.0
// The following two are used below but were missing from the original
// snippet; the values here are assumptions so the code compiles.
#define INTERLACING_SEVERITY 0.02
#define TRACKING_HEIGHT 0.2

#include <metal_stdlib>
#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" { namespace coreimage {

float mod(float x, float y) {
    return float(x - y * floor(x / y));
}

float4 mainImage(sampler_h src, float time, float amount) {
    // const float magnitude = sin(time) * 0.1 * amount;
    float2 greenCoord = src.coord();
    greenCoord.x -= sin(greenCoord.y * 500.0 + time) * INTERLACING_SEVERITY * amount;
    float scan = mod(greenCoord.y, 3.0);
    float yOffset = floor(sin(time * SHIMMER_SPEED));
    float pix = (greenCoord.y + yOffset) * src.size().x + greenCoord.x;
    pix = floor(pix);
    float4 colMask = float4(mod(pix, RGB_MASK_SIZE), mod(pix + 1.0, RGB_MASK_SIZE), mod(pix + 2.0, RGB_MASK_SIZE), 1.0);
    colMask = colMask / (RGB_MASK_SIZE - 1.0) + 0.5;
    // Tracking
    float t = -time * TRACKING_SPEED;
    float fractionalTime = (t - floor(t)) * 1.3 - TRACKING_HEIGHT;
    if (fractionalTime + TRACKING_HEIGHT >= greenCoord.y && fractionalTime <= greenCoord.y) {
        greenCoord.x -= fractionalTime * TRACKING_SEVERITY;
    }
    return src.sample(greenCoord).b * colMask * scan;
}

}}
How can this code be translated into the following form?
let kernel = CIKernel(source: """
kernel vec4 mainImage(sampler image, float time, float amount) {
    float mod(float x, float y) {
        return float(x - y * floor(x/y));
    }?
    vec2 greenCoord = destCoord();?
    .......????
}
""")
How exactly do the types change?
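For anyone attempting the same translation, here is a hedged sketch of the kernel rewritten in the older CIKernel Language (the GLSL-based dialect that can be compiled from a source string at runtime). Types map roughly as float2 → vec2 and float4 → vec4; src.coord() becomes destCoord(), and src.sample(p) becomes sample(image, samplerTransform(image, p)). mod() is built in, so the helper function is unnecessary. Constants are inlined, and TRACKING_HEIGHT and the interlacing severity are given assumed values because they were missing from the original snippet:

```swift
import CoreImage

// Sketch: the Metal CI kernel translated to the runtime-compilable
// CIKernel Language. Untested; treat the coordinate handling as approximate.
let kernel = CIKernel(source: """
kernel vec4 mainImage(sampler image, float time, float amount) {
    vec2 greenCoord = destCoord();
    greenCoord.x -= sin(greenCoord.y * 500.0 + time) * 0.02 * amount; // severity assumed
    float scan = mod(greenCoord.y, 3.0);
    float yOffset = floor(sin(time * 30.0));             // SHIMMER_SPEED
    float pix = floor((greenCoord.y + yOffset) * samplerSize(image).x + greenCoord.x);
    vec4 colMask = vec4(mod(pix, 2.0), mod(pix + 1.0, 2.0), mod(pix + 2.0, 2.0), 1.0);
    colMask = colMask / (2.0 - 1.0) + 0.5;               // RGB_MASK_SIZE == 2.0
    float t = -time * 0.2;                               // TRACKING_SPEED
    float fractionalTime = (t - floor(t)) * 1.3 - 0.2;   // TRACKING_HEIGHT assumed
    if (fractionalTime + 0.2 >= greenCoord.y && fractionalTime <= greenCoord.y) {
        greenCoord.x -= fractionalTime * 0.025;          // TRACKING_SEVERITY
    }
    return sample(image, samplerTransform(image, greenCoord)).b * colMask * scan;
}
""")
```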